
Expression and Recognition of Emotions

Ana P. Pinheiro
Faculty of Psychology, University of Lisbon
Expression and Recognition of Emotions

The special case of the voice
Expression and Recognition of Emotions

Types of emotional auditory stimuli:

Non-verbal vocalizations
• Part of innate behaviours that communicate emotional states (e.g., laughter, screams).
• The most direct phylogenetic equivalent of animal emotional vocalizations.
• The auditory equivalent of facial expressions (Belin et al., 2004).

Emotional prosody
• The vocal expression of emotion in speech.
• However, some emotions are rarely expressed naturally through prosody: e.g., moans, groans or bursts of laughter (Scherer, 1981).
Expression and Recognition of Emotions

1. How accurately do we recognize emotions expressed vocally?

https://vimeo.com/97449717
Expression and Recognition of Emotions

What do we know?

• Support for the universality of the recognition of basic emotions:
  – face (Ekman, 1992; Izard, 1994; Russell, 1994);
  – prosody (Bryant & Clark Barrett, 2008; Scherer, Banse, & Wallbott, 2001; van Bezooijen, Otto, & Heenan, 1983);
  – non-verbal vocalizations (Sauter, Eisner, Ekman, & Scott, 2010).

(Basic) emotions appear to be recognized accurately across different cultures, regardless of their sensory modality (a link to evolutionary theories).
Expression and Recognition of Emotions

Vocal emotion recognition

(Schirmer & Adolphs, 2017)

Event-Related Potential (ERP) correlates of facial and vocal emotion processing:
• Left: the facial emotion effect occurs for a negative deflection termed the N170.
• Right: the vocal emotion effect occurs for a positive deflection termed the P200.
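The ERP effects summarized above come from averaging many stimulus-locked EEG epochs and then measuring amplitude in a component window. A minimal sketch of that logic with simulated data (the electrode, window, and sampling rate are illustrative assumptions, not taken from Schirmer & Adolphs, 2017):

```python
import numpy as np

# Hypothetical single-electrode EEG epochs: 200 trials x 700 samples
# (-100 to 599 ms at 1000 Hz), in microvolts; simulated noise standing
# in for data time-locked to voice onset.
rng = np.random.default_rng(0)
times = np.arange(-100, 600)                      # ms relative to stimulus onset
epochs = rng.normal(0.0, 5.0, (200, times.size))

# Averaging across trials cancels activity that is not phase-locked
# to stimulus onset, leaving the event-related potential (ERP).
erp = epochs.mean(axis=0)

# Mean amplitude in a P200 window (a common choice: ~150-250 ms).
p200_mask = (times >= 150) & (times <= 250)
p200_amplitude = erp[p200_mask].mean()
print(f"P200 mean amplitude: {p200_amplitude:.2f} uV")
```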
Expression and Recognition of Emotions

Vocal emotion recognition

• Accuracy in recognizing vocal emotional expressions is, in general, lower than in recognizing facial expressions:
  – 75-80% for faces (Ekman, 1994) vs. 55-65% for voices (Scherer, 2003).

• These modality-specific differences may be related to biological functions and evolutionary pressures (e.g., Banse & Scherer, 1996).

• Voice vs. face:
  – The voice communicates emotional states across greater distances and to a larger number of people (Liebenthal et al., 2016).
Expression and Recognition of Emotions

Vocal emotion recognition

• Some basic emotions can be identified from vocal stimuli with accuracy well above chance (~60% on average – Scherer, 1981).

• Happiness appears to be recognized almost perfectly when conveyed by the face (96% accuracy in Western cultures – Russell, 1994), but it is difficult to recognize this emotion unambiguously when conveyed by the voice (Johnstone & Scherer, 2000).

• When conveyed by the voice, sadness and anger are generally associated with the highest hit rates (Scherer, 2003; Zupan, 2015), followed by fear.

• Vocal expressions of disgust (prosody) are recognized with accuracy only slightly above chance (Adolphs et al., 2002; Scherer, 2003).
Recognition accuracy (%) by emotion (Elfenbein & Ambady, 2002; meta-analysis):

Emotion     Face   Voice
Anger       65.2   63.7
Contempt    46.9   –
Disgust     62.1   –
Fear        58.3   50.8
Happiness   87.6   28.9
Sadness     68.4   62.8
Surprise    69.1   –
Expression and Recognition of Emotions

2. But does the accuracy with which we recognize vocally expressed emotions vary as a function of the authenticity of the sound?

https://vimeo.com/97449717
Expression and Recognition of Emotions

Sauter, D. A., & Fischer, A. H. (2017). Can perceivers recognise emotions from spontaneous expressions? Cognition and Emotion. https://doi.org/10.1080/02699931.2017.1320978

Abstract: Posed stimuli dominate the study of nonverbal communication of emotion, but concerns have been raised that the use of posed stimuli may inflate recognition accuracy relative to spontaneous expressions. Here, we compare recognition of emotions from spontaneous expressions with that of matched posed stimuli. Participants made forced-choice judgments about the expressed emotion and whether the expression was spontaneous, and rated expressions on intensity (Experiments 1 and 2) and prototypicality (Experiment 2). Listeners were able to accurately infer emotions from both posed and spontaneous expressions, from auditory, visual, and audiovisual cues. Furthermore, perceived intensity and prototypicality were found to play a role in the accurate recognition of emotion, particularly from spontaneous expressions. Our findings demonstrate that perceivers can reliably recognise emotions from spontaneous expressions, and that depending on the comparison set, recognition levels can even be equivalent to that of posed stimulus sets.

Key result: recognition exceeded chance for both stimulus types (AudioVisual: posed: t(39) = 9.53, p < .001, Cohen's d = 1.51, 95% CI [0.19, 0.30]; spontaneous: t(39) = 9.57, p < .001, Cohen's d = 1.51, 95% CI [0.20, 0.31]). Thus, naïve observers reliably recognised emotional states from both posed and spontaneous emotional expressions.

[Figure 2. Emotion recognition in Experiment 2 (arcsine Hu scores) for posed (dark boxes) and spontaneous (light boxes) emotional expressions. Lines through the boxes are the medians, box edges are the 25th and 75th percentiles, and the whiskers extend to the most extreme data points excluding outliers. The dashed line represents chance (calculated as 1/4 correct, as there were four options of each valence).]
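The chance comparisons above can be reproduced in outline: per-participant Hu scores (hit rates corrected for response bias) are arcsine-transformed and tested against the transformed chance level with a one-sample t-test. A minimal sketch under those assumptions, with simulated scores rather than the paper's data:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Simulated per-participant raw Hu scores (proportions in [0, 1]);
# n = 40 mirrors the degrees of freedom t(39) reported above.
hu_raw = rng.beta(4, 6, size=40)

# Arcsine(square-root) transform: the variance-stabilizing step
# applied to the Hu scores before the statistical analyses.
hu_t = np.arcsin(np.sqrt(hu_raw))
chance_t = np.arcsin(np.sqrt(0.25))   # chance = 1/4: four options per valence

t, p = stats.ttest_1samp(hu_t, chance_t)
d = (hu_t.mean() - chance_t) / hu_t.std(ddof=1)   # Cohen's d, one-sample test
print(f"t({hu_t.size - 1}) = {t:.2f}, p = {p:.3g}, d = {d:.2f}")
```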
Expression and Recognition of Emotions

(Sauter & Fischer, 2017, cont.) Mean recognition differences between posed and spontaneous expressions were small and non-significant (mean difference 0.026, p > .2; AudioVisual: mean difference 0.09, p > .6), failing to support the proposal that recognition is more accurate for posed expressions.

Table 3. Mean recognition rates (raw Hu scores) in Experiment 2 in each modality, separately for spontaneous (top) and posed (bottom) expressions. Means as arcsine transformed Hu scores (used in the statistical analyses) can be found in the Supplementary Materials.

Spontaneous   Audio (n = 42)   Visual (n = 40)   AudioVisual (n = 42)
Triumph       .16 (.17)        .12 (.13)         .24 (.20)
Amusement     .54 (.16)        .35 (.16)         .47 (.20)
Anger         .47 (.27)        .21 (.15)         .40 (.23)
Disgust       .34 (.25)        .45 (.23)         .58 (.27)
Fear          .25 (.13)        .25 (.21)         .42 (.20)
Relief        .26 (.20)        .14 (.13)         .37 (.22)
Sadness       .59 (.28)        .63 (.18)         .89 (.19)
Pleasure      .58 (.20)        .63 (.32)         .72 (.25)
Surprise      .09 (.11)        .26 (.15)         .21 (.15)
Total         .36 (.20)        .34 (.18)         .48 (.21)

Posed         Audio (n = 42)   Visual (n = 40)   AudioVisual (n = 42)
Triumph       .21 (.21)        .10 (.16)         .18 (.18)
Amusement     .55 (.12)        .36 (.13)         .52 (.20)
Anger         .74 (.19)        .76 (.25)         .84 (.22)
Disgust       .28 (.21)        .22 (.16)         .39 (.14)
Fear          .51 (.20)        .67 (.19)         .71 (.25)
Relief        .45 (.22)        .30 (.20)         .44 (.21)
Sadness       .07 (.11)        .37 (.17)         .22 (.21)
Pleasure      .39 (.19)        .12 (.13)         .46 (.31)
Surprise      .35 (.18)        .35 (.17)         .55 (.23)
Total         .40 (.18)        .36 (.18)         .48 (.22)
Expression and Recognition of Emotions

3. But are certain emotion categories recognized more quickly than others?

https://vimeo.com/97449717
Expression and Recognition of Emotions

Vocal emotion recognition

• Evaluating the emotional meaning of vocal expressions is a temporally dynamic cognitive process (Pihan, 2006).
  – It integrates different dimensional aspects of vocal intonation into an impression that is continuously updated.
Expression and Recognition of Emotions

Vocal emotion recognition

• Is the time course of vocal emotion recognition similar for all basic emotions? (Pell & Kotz, 2011)
  – "Recognition of vocal emotion attributes in speech builds incrementally over the course of an utterance." Pell & Kotz (2011) probed this with an auditory gating paradigm, in which listeners hear successively longer fragments of each utterance (see the sketch below).
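In a gating paradigm, the emotion identification point is typically the shortest fragment from which listeners recognize the intended emotion without later reversals. A hypothetical scoring sketch (not the authors' procedure; the gate durations and responses are made up):

```python
# Hypothetical gate-wise responses for one utterance in a gating task:
# listeners hear successively longer fragments ("gates"), and we score
# the point from which the emotion is identified and stays identified.
gates_ms = [200, 400, 600, 800, 1000, 1200]       # gate durations
correct = [False, False, True, True, True, True]  # response at each gate

def identification_point(gates, responses):
    """Return the earliest gate after which every response stays correct."""
    for i, _ in enumerate(responses):
        if all(responses[i:]):
            return gates[i]
    return None  # the emotion was never reliably identified

print(identification_point(gates_ms, correct))  # -> 600
```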
Expression and Recognition of Emotions

Vocal emotion recognition

[Figure from Pell & Kotz (2011): recognition time course for each emotion category — anger, disgust, fear, happiness, sadness, and neutral.]
Expression and Recognition of Emotions

4. Is the ability to recognize vocal emotions affected in psychiatric disorders?

https://vimeo.com/97449717
Expression and Recognition of Emotions

Vocal emotion recognition and psychopathology

Schlipf, S., Batra, A., Walter, G., Zeep, C., Wildgruber, D., Fallgatter, A., & Ethofer, T. (2013). Judgment of emotional information expressed by prosody and semantics in patients with unipolar depression. Frontiers in Psychology, 4, 461. https://doi.org/10.3389/fpsyg.2013.00461

Abstract: It was the aim of this study to investigate the impact of major depressive disorder (MDD) on judgment of emotions expressed at the verbal (semantic content) and non-verbal (prosody) level and to assess whether evaluation of verbal content correlates with self-ratings of depression-related symptoms as assessed by the Beck Depression Inventory (BDI). We presented positive, neutral, and negative words spoken in happy, neutral, and angry prosody to 23 MDD patients and 22 healthy controls (HC) matched for age, sex, and education. Participants rated the valence of semantic content or prosody on a 9-point scale. MDD patients attributed significantly less intense ratings to positive words and happy prosody than HC. For judgment of words, this difference correlated significantly with BDI scores. No such correlation was found for prosody perception. MDD patients exhibited attenuated processing of positive information which generalized across verbal and non-verbal channels. These findings indicate that MDD is characterized by impairments of positive rather than negative emotional processing, a finding which could influence future psychotherapeutic strategies as well as provide straightforward hypotheses for neuroimaging studies investigating the neurobiological correlates of impaired emotional perception in MDD.
Expression and Recognition of Emotions

Vocal emotion recognition and psychopathology

Pinheiro, A. P., & Niznikiewicz, M. (2018). Altered attentional processing of happy prosody in schizophrenia. Schizophrenia Research (in press). https://doi.org/10.1016/j.schres.2018.11.024
• A P3b oddball study with happy and angry prosodic targets, presented with intelligible (semantic content condition – SCC) or unintelligible semantic content (prosody-only condition – POC), in 15 chronic schizophrenia patients and 15 healthy controls. P3b amplitude was specifically reduced for happy prosodic stimuli in schizophrenia, irrespective of semantic status; groups did not differ for neutral standards or angry targets. This suggests that top-down attentional resources were less strongly engaged by positive relative to negative prosody, reflecting alterations in the evaluation of the emotional salience of the voice.

Pinheiro, A. P., del Re, E., Mezin, J., Nestor, P. G., Rauber, A., McCarley, R. W., Gonçalves, Ó. F., & Niznikiewicz, M. A. (2013). Sensory-based and higher-order operations contribute to abnormal emotional prosody processing in schizophrenia: an electrophysiological investigation. Psychological Medicine, 43(3), 603-618. https://doi.org/10.1017/S003329171200133X

Pinheiro, A. P., Rezaii, N., Rauber, A., Liu, T., Nestor, P. G., McCarley, R. W., Gonçalves, Ó. F., & Niznikiewicz, M. A. (2014). Abnormalities in the processing of emotional prosody from single words in schizophrenia. Schizophrenia Research, 152(1), 235-241. https://doi.org/10.1016/j.schres.2013.10.042
• ERPs were recorded in 16 patients with chronic schizophrenia and 16 healthy controls for prosodic single words with intelligible (SCC) or unintelligible (pure prosody condition – PPC) semantic content. Patients showed reduced P50 for happy PPC words, reduced N100 for neutral and emotional SCC words and for neutral PPC stimuli, and increased P200 for happy prosody in SCC; behaviourally, they made more errors for angry prosody in SCC and for happy prosody in PPC. These data demonstrate interactions between abnormal sensory and higher-order processes in emotional prosody dysfunction, which depends on stimulus complexity.
Expression and Recognition of Emotions

The importance of sensory modality

Schirmer, A. (2018). Is the voice an auditory face? An ALE meta-analysis comparing vocal and facial emotion processing. Social Cognitive and Affective Neuroscience, 13(1), 1-13. https://doi.org/10.1093/scan/nsx142

Abstract: This meta-analysis compares the brain structures and mechanisms involved in facial and vocal emotion recognition. Neuroimaging studies contrasting emotional with neutral (face: N = 76, voice: N = 34) and explicit with implicit emotion processing (face: N = 27, voice: N = 20) were collected to shed light on stimulus and goal-driven mechanisms, respectively. Activation likelihood estimations were conducted on the full data sets for the separate modalities and on reduced, modality-matched data sets for modality comparison. Stimulus-driven emotion processing engaged large networks with significant modality differences in the superior temporal (voice-specific) and the medial temporal (face-specific) cortex. Goal-driven processing was associated with only a small cluster in the dorsomedial prefrontal cortex for voices but not faces. Neither stimulus- nor goal-driven processing showed significant modality overlap. Together, these findings suggest that stimulus-driven processes shape activity in the social brain more powerfully than goal-driven processes in both the visual and the auditory domains. Yet, whereas faces emphasize subcortical emotional and mnemonic mechanisms, voices emphasize cortical mechanisms associated with perception and effortful stimulus evaluation (e.g. via subvocalization). These differences may be due to sensory stimulus properties and highlight the need for a modality-specific perspective when modeling emotion processing in the brain.

[Figure 2. Summary of brain regions highlighted in the meta-analysis. Lateral and medial areas are marked in nontransparent and transparent color, respectively. Early modality-specific processing is indicated for faces in red and for voices in green; later, potential modality convergence is indicated in violet. Arrows illustrate hypothesized up- and downstream modulations (not tested in the study). Modality effects with strong evidence are marked by solid lines; those with weak evidence, or with evidence from previous studies, by dashed lines. Amy, amygdala; dmPFC, dorso-medial prefrontal cortex; PHG, parahippocampal gyrus; IFG, inferior frontal gyrus; STG, superior temporal gyrus.]
Expression and Recognition of Emotions

The importance of sensory modality

• Difficulties in emotion recognition after brain injury depend on the sensory modality of the stimulus (e.g., deficits in facial but not vocal emotion recognition).

• But… there is also evidence for shared neural resources during emotional processing in both modalities.

• Perceptual features may be more salient in one sensory modality than in another.
Expression and Recognition of Emotions

The importance of sensory modality

Schirmer, A., & Adolphs, R. (2017). Emotion perception from face, voice, and touch: comparisons and convergence. Trends in Cognitive Sciences.

Trends:
• Facial expression and perception have long been the primary emphasis in research; however, there is a growing interest in other channels like the voice and touch.
• Facial, vocal, and tactile emotion processing have been explored with a range of techniques including behavioral judgments, electroencephalography/event-related potentials, fMRI contrast studies, and multivoxel pattern analyses.
• Results point to similarities (e.g., increased responses to social and emotional signals) as well as differences (e.g., differentiation of individual emotions versus encoding of affect) between communication channels.
• Channel similarities and differences enable holistic emotion recognition, a process that depends on multisensory integration during early, perceptual and later, conceptual stages.

[Figure 2. Key brain regions involved in nonverbal emotion processing. Regions typically more active for emotional than for neutral stimuli are marked in green, blue, and red for voice, face, and touch, respectively; regions typically more active for emotional multimodal than for unimodal stimulation are marked in beige.]
How effectively do we recognize emotions conveyed by the face?
Expression and Recognition of Emotions

Accuracy in Facial Emotion Recognition

Palermo, R., & Coltheart, M. (2004). Photographs of facial expression: accuracy, response times, and ratings of intensity. Behavior Research Methods, Instruments, & Computers, 36(4), 634-638.

Abstract: Equal numbers of male and female participants judged which of seven facial expressions (anger, disgust, fear, happiness, neutrality, sadness, and surprise) were displayed by a set of 336 faces, and we measured both accuracy and response times. In addition, the participants rated how well the expression was displayed (i.e., the intensity of the expression). These three measures are reported for each face. Sex of the rater did not interact with any of the three measures. However, analyses revealed that some expressions were recognized more accurately in female than in male faces. The full set of these norms may be downloaded from www.psychonomic.org/archive/.

[Figure 1. Percentage of expressions displayed by female and male models correctly judged to be the intended expression (angry, disgusted, fearful, happy, neutral, sad, surprised). Standard error bars are shown.]
Expression and Recognition of Emotions

Accuracy in Facial Emotion Recognition

(Palermo & Coltheart, 2004, cont.) There were approximately 15 (surprised expressions) and 18 (all other expressions) valid RTs per cell of the design. There were two missing cells, since one participant failed to accurately recognize any fearful faces. As with the accuracy data, a three-way ANOVA was conducted with one between-subjects factor (participant sex: female or male) and two repeated measures factors: expression (angry, disgusted, fearful, happy, neutral, sad, surprised) and face sex (female or male).

[Figure 2. Percentage of responses for each facial expression type, indicating the types of errors made by the participants.]
volving rating were not significant (see Table 1). There happiness was recognized at ceiling levels. Similarly,
was also a significant negative correlation between RT Hess, Blairy, and Kleck (1997) found that recognition ac-
and accuracy (r ! #.74, p " .0001), with those faces Expressão
curacy for disgusted, e Reconhecimento de Emoções
angry, and sad faces increased as the
with higher rater agreement taking less time to identify. intensity of the expression increased, whereas this was not
the case for the recognition of happiness, with even low-
Accuracy in Facial Emotion Recognition
DISCUSSION intensity happy expressions being accurately recognized.
Overall, expressions were recognized more accurately
Behavior Research Methods, Instruments, & Computers
Palermo, R., & Coltheart, M. (2004). Photographs of facial expression: Accuracy, response times, and ratings of intensity. Behavior Research Methods, Instruments, & Computers, 36(4), 634-638.
Macquarie University, Sydney, New South Wales, Australia

Abstract: Equal numbers of male and female participants judged which of seven facial expressions (anger, disgust, fear, happiness, neutrality, sadness, and surprise) were displayed by a set of 336 faces, and both accuracy and response times were measured. In addition, the participants rated how well the expression was displayed (i.e., the intensity of the expression). These three measures are reported for each face. Sex of the rater did not interact with any of the three measures. However, analyses revealed that some expressions were recognized more accurately in female than in male faces. The full set of these norms may be downloaded from www.psychonomic.org/archive/.

For each face, three measures were collected: the percentage of raters who recognized the intended emotion, the average time taken to recognize the intended emotion, and the average intensity rating. These norms (displayed in AppendixA.xls) are useful to researchers needing to select expressive faces for their studies. In this sample, anger and sadness were recognized more often in female than in male faces; other studies have likewise suggested that expressions are judged more accurately when displayed by females, although the evidence for sex differences is not consistent across all expressions.

Figure 3. Mean response times (RTs) to correctly recognize each expression. Standard error bars are shown.
26
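In practice, selecting stimuli from norms like these amounts to filtering faces by intended expression and recognition accuracy. Below is a minimal Python sketch of that lookup; the record fields, the 0.80 cutoff, and all example values are invented for illustration and are not taken from the published norms.

from dataclasses import dataclass

@dataclass
class FaceNorm:
    face_id: str       # photograph identifier
    expression: str    # intended expression (e.g., "anger")
    accuracy: float    # proportion of raters recognizing the intended emotion
    mean_rt_ms: float  # average time (ms) taken to recognize the emotion
    intensity: float   # average intensity rating

def select_stimuli(norms, expression, min_accuracy=0.80):
    """Keep faces of one expression recognized above a chosen accuracy cutoff."""
    return [n for n in norms
            if n.expression == expression and n.accuracy >= min_accuracy]

# Invented example entries:
norms = [
    FaceNorm("F01", "anger", 0.92, 1450.0, 6.1),
    FaceNorm("M07", "anger", 0.71, 1810.0, 5.2),
]
for face in select_stimuli(norms, "anger"):
    print(face.face_id, face.accuracy)  # only F01 passes the 0.80 cutoff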
Expression and Recognition of Emotions

Fear and the amygdala - evidence from human neuropsychology

Bilateral amygdala damage:
• reduces recognition of fear-inducing stimuli
• reduces recognition of fear in others
• reduces the ability to express fear

It does NOT affect the ability to recognise faces or to know what fear is.

See patients SM, DR and SE (Adolphs et al. and Calder et al.)
A mechanism for impaired fear recognition after amygdala damage
Ralph Adolphs, Frederic Gosselin, Tony W. Buchanan, Daniel Tranel, Philippe Schyns and Antonio R. Damasio
Nature 433, 68-72 (6 January 2005)

a, When instructed to fixate on the eyes in facial expressions of fear, SM is able to do so. b, Accuracy of emotion recognition (± s.e.m.) for ten control subjects (white) and SM. Whereas SM's recognition of fear is impaired when allowed to look at the stimuli freely (SM free, black bars), her performance becomes normal relative to control subjects when instructed to fixate on the eyes (SM eyes, grey bar, red arrow). The impairment is specific to fear recognition (left panel shows mean recognition accuracy for all emotions other than fear).
"Thus, we believe that the impaired fear recognition arising from damage to SM's amygdala is not due to a basic visuoperceptual inability to process information from the eyes, but is instead a failure by the amygdala to direct her visual system to seek out, fixate, pay attention to and make use of such information to identify emotions. This interpretation entails a revision of our previous conclusions about the face processing abilities of SM: although she can generate a normal performance score on discrimination and recognition tasks for some emotions (such as happiness), her use of visual information is abnormal for all facial emotions, not only fear."

A mechanism for impaired fear recognition after amygdala damage
Ralph Adolphs, Frederic Gosselin, Tony W. Buchanan, Daniel Tranel, Philippe Schyns and Antonio R. Damasio
Nature 433, 68-72 (6 January 2005)

Table 1. Mean accuracies in emotion recognition for SM and control subjects

Emotion     Controls   SM (free)   SM (eyes)
Happiness   1.00       1.00        1.00
Surprise    0.96       1.00        1.00
Anger       0.82       0.88        0.82
Disgust     0.76       0.85        0.90
Sadness     1.00       0.96        1.00
Fear        0.84       0.46        0.83

Subjects (SM and ten control subjects) were shown six different exemplars of each of six emotions, using face stimuli identical to those used in prior studies, and were asked to identify the appropriate emotion by pushing a button. The experiment was conducted twice with controls and four times with SM: twice when she was allowed to look freely at the images (free), and twice when instructed to fixate on the eyes (eyes). The only significant difference between SM and control subjects is in her recognition of fear under the free-viewing condition (Z = -2.385, P < 0.01, one-tailed t-test).

"Our study is in line with recent findings that the amygdala participates in processing information about the eye region of faces. Such a functional specialization might account for the role of the amygdala in processing emotions related to behavioural withdrawal, fear, threat or danger. A strategy of directing one's own gaze onto the eyes of others would serve to seek out potential sources of salient social information, and it seems plausible that other impairments in social judgement resulting from bilateral amygdala damage could be attributed, at least in part, to the same mechanism. It is intriguing to consider the possibility that disorders such as autism, which also features impaired fixations to the features of faces and impaired processing of emotion from faces, might benefit from instructed viewing as we found in SM."
29
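The single-case logic behind Table 1 can be made concrete: the patient's score is expressed relative to the control sample. The following is a minimal Python sketch assuming a normally distributed control group; the control SD used below is an invented placeholder (the table reports only the control mean of 0.84), so the result only approximates, and will not exactly reproduce, the Z = -2.385 reported by Adolphs et al.

from statistics import NormalDist

def case_z(case_score, control_mean, control_sd):
    """z score of a single case relative to the control distribution."""
    return (case_score - control_mean) / control_sd

# SM's fear accuracy (free viewing) vs. controls; control_sd is assumed.
z = case_z(case_score=0.46, control_mean=0.84, control_sd=0.16)
p = NormalDist().cdf(z)  # one-tailed probability of a score this low or lower
print(f"z = {z:.2f}, one-tailed p = {p:.3f}")  # roughly z = -2.4, p = 0.009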
Body Cues, Not Facial Expressions, Discriminate Between Intense Positive and Negative Emotions
Hillel Aviezer et al.
Science 338, 1225 (2012); DOI: 10.1126/science.1224313

Abstract: The distinction between positive and negative emotions is fundamental in emotion models. Intriguingly, neurobiological work suggests shared mechanisms across positive and negative emotions. We tested whether similar overlap occurs in real-life facial expressions. During peak intensities of emotion, positive and negative situations were successfully discriminated from isolated bodies but not faces. Nevertheless, viewers perceived illusory positivity or negativity in the nondiagnostic faces when seen with bodies. To reveal the underlying mechanisms, we created compounds of intense negative faces combined with positive bodies, and vice versa. Perceived affect and mimicry of the faces shifted systematically as a function of their contextual body emotion. These findings challenge standard models of emotion expression and highlight the role of the body in expressing and perceiving emotions.

STUDY 1: 15 participants rated the affective valence and intensity of either the full image (face + body), the body alone, or the face alone.

(A) Examples of reactions to (1) winning and (2) losing a point.
(B) Examples of isolated faces (1, 4, 6 = losing point; 2, 3, 5 = winning point).
Results: Intensity ratings were higher for winners than losers when based on the face + body (P < 0.0001) or the face (P < 0.0001), but not when based on the body alone (P > 0.1) (Fig. 1D). Intriguingly, an illusion was found in the perception of the nondiagnostic faces: 53.3% of the participants who completed the perceptual […] unaware of the manipulation. As predicted, the perceived affective valence of the same faces shifted categorically depending on the body with which they appeared [repeated ANOVA: body effect, F(1, 14) = 118, P < 0.0001] (Fig. 2B). Indeed, the effect of the body was slightly stronger for the incongruent face combinations, indicating again that the face itself was nondiagnostic. When the same intense faces were paired with a positive-valence (a victorious body) or negative-valence (a person undergoing piercing) context (Fig. 3B), the influence of the bodies was stronger for some faces than for others [repeated ANOVA: interaction effect, F(5, 70) = 4.9, P < 0.001], yet the effect of the body was significant [repeated ANOVA: body effect, F(1, 14) = 96.9, P < 0.0001] and held for every pair of emotions.
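To make the reported body-effect statistic concrete: with only two body contexts, a one-way repeated-measures ANOVA reduces to a paired t-test on per-participant means, with F(1, n-1) = t². The Python sketch below illustrates that equivalence with invented ratings for 15 hypothetical participants; it is a rough stand-in, not the authors' analysis pipeline, and paired_t is an illustrative helper.

from math import sqrt
from statistics import mean, stdev

def paired_t(x, y):
    """Paired t statistic across participants for two repeated measures."""
    d = [a - b for a, b in zip(x, y)]
    return mean(d) / (stdev(d) / sqrt(len(d)))

# Hypothetical per-participant valence ratings of identical faces placed on
# positive vs. negative bodies (invented data, 15 participants as in Study 1):
on_positive_body = [6.2, 5.8, 6.9, 6.5, 7.1, 5.5, 6.4, 6.9, 5.2, 6.3, 7.4, 7.0, 5.6, 6.7, 6.2]
on_negative_body = [3.9, 3.1, 4.6, 2.9, 4.4, 3.8, 2.6, 4.3, 3.5, 2.7, 4.8, 3.3, 3.6, 4.1, 3.0]

t = paired_t(on_positive_body, on_negative_body)
print(f"t(14) = {t:.1f}; equivalent F(1, 14) = t**2 = {t*t:.1f}")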

Fig. 1. Experiment 1. (A) Examples of reactions to (1) winning and (2) losing a point. (B) Examples of isolated faces (1, 4, 6 = losing point; 2, 3, 5 = winning point). [All photos in Fig. 1 credited to a.s.a.p. Creative/Reuters] (C) Mean valence ratings for face + body, body, and face. Results are converted from the original scale, which ranged from 1 (most negative) to 9 (most positive), with 5 serving as a neutral midpoint. (D) Mean intensity ratings for face + body, body, and face. Asterisks indicate significant differences between the ratings of winners and losers. Error bars throughout all figures represent SEM. ns, not significant.
31
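The conversion described in panel (C) is a simple recentering of the 1-9 scale around its neutral midpoint. A minimal Python sketch follows; the recenter helper and the sample ratings are invented for illustration.

def recenter(rating):
    """Map a 1 (most negative) .. 9 (most positive) rating to -4..+4, 0 = neutral."""
    assert 1 <= rating <= 9
    return rating - 5

raw = [7, 8, 3, 5, 2]
print([recenter(r) for r in raw])  # -> [2, 3, -2, 0, -3]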


(Schirmer & Adolphs, 2017)

Multimodal Convergence of Expressive Signals in the Brain.


At a late conceptual stage, individuals represent emotional meaning amodally.
Higher-level representations can feed back and modulate lower-level representations.
32
