Você está na página 1de 20

See discussions, stats, and author profiles for this publication at: https://www.researchgate.

net/publication/258038054

Gender differences in emotion recognition: Impact of sensory modality and


emotional category

Article  in  Cognition and Emotion · October 2013


DOI: 10.1080/02699931.2013.837378 · Source: PubMed

CITATIONS READS

26 399

3 authors:

Lena Lambrecht Benjamin Kreifelts

2 PUBLICATIONS   74 CITATIONS   
University of Tuebingen
64 PUBLICATIONS   1,457 CITATIONS   
SEE PROFILE
SEE PROFILE

Dirk Wildgruber
University of Tuebingen
170 PUBLICATIONS   5,608 CITATIONS   

SEE PROFILE

Some of the authors of this publication are also working on these related projects:

Assessment of negative symptoms in schizophrenia View project

Neurobiology of processing vocal emotions in unipolar and bipolar depression View project

All content following this page was uploaded by Dirk Wildgruber on 10 July 2015.

The user has requested enhancement of the downloaded file.


COGNITION AND EMOTION, 2014
Vol. 28, No. 3, 452–469, http://dx.doi.org/10.1080/02699931.2013.837378

Gender differences in emotion recognition: Impact of sensory modality and emotional category

Lena Lambrecht, Benjamin Kreifelts, and Dirk Wildgruber

Department of Psychiatry and Psychotherapy, Eberhard-Karls-University of Tübingen, Tübingen, Germany

Results from studies on gender differences in emotion recognition vary, depending on the types of
emotion and the sensory modalities used for stimulus presentation. This makes comparability between
different studies problematic. This study investigated emotion recognition of healthy participants
(N = 84; 40 males; ages 20 to 70 years), using dynamic stimuli, displayed by two genders in three
different sensory modalities (auditory, visual, audio-visual) and five emotional categories. The
participants were asked to categorise the stimuli on the basis of their nonverbal emotional content
(happy, alluring, neutral, angry, and disgusted). Hit rates and category selection biases were analysed.
Women were found to be more accurate in recognition of emotional prosody. This effect was partially
mediated by hearing loss for the frequency of 8,000 Hz. Moreover, there was a gender-specific
selection bias for alluring stimuli: Men, as compared to women, chose “alluring” more often when a
stimulus was presented by a woman as compared to a man.

Keywords: Emotion recognition; Gender; Modality and emotional category; Alluring.

Nonverbal communication (e.g., facial expressions, speech melody—prosody—or gestures) forms an important basis of human social relationships, because they convey our social counterpart's intentions and emotions. Failure to correctly interpret such nonverbal signals impedes successful social interaction and may cause irritation and aggression. In the context of nonverbal communication the observer's gender has repeatedly been proposed as a factor with impact on processing emotional information. With respect to decoding nonverbal emotional information women outperformed men based on drawings, photos or films of faces, hands, arms, legs, and feet, as well as speech (Hall, 1978). However, one has to bear in mind that only "less than 4% of the variance in decoding scores is accounted for by gender" (Hall, 1978). Two further reviews (Hall, 1984; McClure, 2000) confirmed the female advantage in emotion recognition when investigating infants, children and adolescents.
Correspondence should be addressed to: Lena Lambrecht, Department of Psychiatry and Psychotherapy, University of Tübingen,
Osianderstrasse 24, D-72076 Tübingen, Germany. E-mail: lenalambrecht@hotmail.de
Supplementary material can be accessed.
This work was supported by a grant from the Fortüne-Program of the University of Tübingen (fortüne 1997-0-0).

Subsequent studies investigating gender differences in emotion recognition have mostly focused on either prosody or facial expressions. However, they differed in the type of emotions they included. Thus, results of different modalities are difficult to compare. In order to overcome this problem and to increase comparability of emotion recognition performance across sensory modalities and emotional categories, we designed a study with different sensory modalities and several emotional categories and investigated the influence of gender of the participant and gender of the model on emotion recognition.

Gender of the participant

Unimodal stimuli. Studies focusing on emotional facial expressions support the idea of a female advantage in emotion recognition (Hall & Matsumoto, 2004; Kirouac & Doré, 1985; McClure, 2000; Miura, 1993; Montagne, Kessels, Frigerio, de Haan, & Perrett, 2005; Proverbio, Matarazzo, Brignone, Del Zotto, & Zani, 2007; Rotter & Rotter, 1988; Vaskinn et al., 2007; Williams et al., 2009). Considering specific emotional categories, two studies found no evidence of a gender difference in the perception of angry facial expressions (Mandal & Palchoudhury, 1985; Rotter & Rotter, 1988). The majority of studies, however, showed a female advantage in emotion recognition for every emotional category including angry stimuli (Hall & Matsumoto, 2004).

Data on gender differences in recognition of emotional prosody, on the other hand, are scarce. Raithel and Hielscher-Fastabend (2004) found no gender difference in identifying the emotional prosody of semantically neutral sentences. However, with regard to identification rates based on the intonation of short stories, Bonebright, Thompson, and Leger (1996) found an advantage for female raters. With respect to emotion-specific effects women appear to be more sensitive in recognition of sad and happy emotional prosody but not of angry emotional prosody (Fujisawa & Shinohara, 2011).

Multimodal stimuli. Comparing auditory and visual sensory modalities as well as audio-visual stimulation, the most pronounced gender effect occurs for audio-visually presented emotions (Collignon et al., 2010; Hall, 1978). With regard to multimodal emotion recognition, a female advantage (ηp2 = .009) was observed based on judgements of black and white photographs of facial expressions and voice recordings (Scherer & Scherer, 2011). However, in the major validation study the proportion of male and female participants was not balanced, with 25% female participants. Thus, comparability between men and women in emotion recognition is limited. Another multimodal test (Rosenthal, Hall, DiMatteo, Rogers, & Archer, 1979) focused on decoding different scenes and found a female advantage too (effect size d = 0.47). Nonverbal information was conveyed via 11 different channels whereas faces or parts of a body were presented and/or content-filtered voice or spliced voice. Stimuli were portrayed by one single woman. However, an investigation of gender differences in emotion recognition should account for potential interactions between the gender of the recipient and the gender of the sender by employing stimuli with a balanced proportion of the gender of the sender.

Interaction between gender of the participant and gender of the display

The gender of the actor/actress seems to have an impact on emotion recognition. Calvo and Lundqvist (2008) found a tendency for angry faces to be better recognised in males than in females. Actresses expressing anger were more often classified as expressing disgust than was the case for male actors. Based on emotional prosody, anger and fear were better recognised from actors and happiness was better recognised from actresses (Bonebright et al., 1996). Especially in the choice of sexually relevant signals, gender of the display could play a critical role. To our knowledge cross-gender interaction during perception of alluring stimuli has been investigated only in one single study using emotional prosody (Ethofer et al., 2007). The authors reported increasing responses

at the behavioural level (arousal ratings), as well as haemodynamic responses within voice-sensitive brain areas (right middle superior temporal gyrus), to an alluring tone of voice from the opposite gender. Notably, this effect was confined to alluring stimuli and no cross-gender interaction was observed during the perception of neutral, happy, angry, or fearful prosody.

Emotion recognition performance as a function of modality and emotional category

Previous studies have indicated that the sensory modality through which emotional information is perceived, and multisensory integration of emotional information, strongly impact emotion recognition performance. Kreifelts, Ethofer, Grodd, Erb, and Wildgruber (2007) demonstrated that emotional facial expressions are recognised with higher accuracy than emotional prosody. Hawk, van Kleef, Fischer, and van der Schalk (2009) compared the accuracy of emotion recognition of non-linguistic affective vocalisations, speech-embedded vocal prosody and facial cues and found lowest scores for speech-embedded vocal prosody and equivalent scores for the other channels.

Integration of auditory and visually presented nonverbal emotional information facilitates emotion recognition, resulting in higher accuracy and faster response times for the recognition of audio-visual stimuli as compared to visual or auditory stimuli alone (Collignon et al., 2010; de Gelder & Vroomen, 2000; Ethofer et al., 2006; Kreifelts et al., 2007). Kreifelts et al. (2007) demonstrated that integration of auditory and visual emotional information was accompanied by an enhanced activation in the bilateral posterior superior temporal gyrus (pSTG) and right thalamus.

Emotion recognition performance varies across discrete emotion categories. With regard to facial expressions, happiness is the best recognised emotion among the basic emotions (Kirouac & Doré, 1985; Montagne, Kessels, De Haan, & Perrett, 2007; Ruffman, Henry, Livingstone, & Phillips, 2008; Williams et al., 2009). Neutral expressions seem to be better recognised than anger and disgust whereas anger, in turn, seems to be better recognised than disgust (Williams et al., 2009). Regarding emotional prosody, Castro and Lima (2010) observed that neutral prosody is recognised best, followed by angry, happy and disgusted prosody. Thus, differences in recognition performance of specific emotional categories seem to depend upon the sensory modality through which the emotional signals are perceived. The recently developed Multimodal Emotion Recognition Test (MERT; Bänziger, Grandjean, & Scherer, 2009) investigated the recognition of dynamic, black and white expressions of ten emotions (hot anger, cold anger, panic fear, anxiety, despair, sadness, elation, happiness, disgust, contempt), which were presented in four modalities (audio, video, audio/video, still picture). The authors found a significant interaction between modality and emotional category, where disgust and elation were notably better recognised from the face than from the voice. Furthermore, the authors described that cold as well as hot anger were recognised better from dynamic video clips compared to still pictures.

The present study

In order to investigate gender differences in recognition performance across different sensory modalities and across a range of emotions, three modal conditions (unimodal auditory, unimodal visual and bimodal audio-visual) and five emotion categories (happy, alluring, angry, disgusted and neutral) were included. Additionally, we investigated the influence of gender of the display and the interaction with the gender of the participant. We included three different performance measures. The raw hit rate was applied as a measure of the correct responses. Second, we investigated the choice of the emotional category irrespective of whether it is correct or incorrect, to evaluate selection biases. Thirdly, we used the unbiased hit rate which measures the correct responses, but accounts for false alarms.

We hypothesised that:

1. Females show higher emotion recognition rates than males, for every modal condition with greatest gender differences in the

audio-visual modality (Collignon et al., 2010; Hall, 1978).
2. Females exhibit higher emotion recognition rates than males for every emotional category, independent of the gender of the model.
3. Female and male subjects identify alluring stimuli of the opposite gender with higher accuracy as compared to stimuli of the same gender.
4. Alluring stimuli are better recognised and chosen more frequently by the opposite than by the same gender.

METHOD

Participants and design

We chose an age-stratified sample of subjects ranging from 20 to 70 years old in order to prevent an age-bias introduced through random sample selection. Eighty-four German native speakers (40 m, 44 f, mean age 44.8 years, sex distribution along the age range: 20–30 years: 8 m, 8 f; 31–40 years: 8 m, 8 f; 41–50 years: 8 m, 10 f; 51–60 years: 7 m, 10 f; 61–70 years: 9 m, 8 f) took part in the present study. All were right-handed as assessed with the Edinburgh Inventory (Oldfield, 1971) and none had any history of neurological or psychiatric illness. According to results of a figure-based eyesight test (Sehproben pocketcard Börm Bruckmeier Verlag, 2002), all participants had normal or corrected-to-normal vision (measures for the better eye: mean visual acuity = 92%, SD = 5%). Furthermore, participants had no moderate or severe hardness of hearing (mean hearing loss (better ear) = 16.3 dB, SD = 7.4 dB, range 4.6 dB–36.8 dB) as assessed with an audiometer (Interacoustics screening audiometer AS208, Meyer Audiologische Technik, Weningen) testing hearing thresholds for 11 different frequencies (125, 250, 500, 750, 1000, 1500, 2000, 3000, 4000, 6000, 8000 Hz) for each ear separately. The relative hearing loss for each frequency (decibel = dB) was calculated by comparing individual hearing thresholds with normative hearing thresholds based on meta-analytical data (Robinson & Sutton, 1979). Additionally, mean hearing loss was calculated across all tested frequencies. The average duration of education was 18.0 years (SD = 4.8 years). Verbal intelligence ranged from 93 to 145 (mean verbal intelligence 124.2, SD 13.9) as assessed by the "Mehrfach-Wortschatz-Intelligenz-Test" (Multiple Choice Word Fluency Test; MWT-B; Lehrl, 1977). Verbal intelligence exhibits a high reliability and validity and is correlated with measures of global intelligence (r = 0.72; Lehrl, 1977; Lehrl, Triebig, & Fischer, 1995; Merz, Lehrl, Galster, & Erzigkeit, 1975). Approval for the investigation was obtained from the local ethics committee and the study was performed according to the Declaration of Helsinki. Before their inclusion in the study participants gave written informed consent.

Procedure

Participants watched short video sequences with models pronouncing single words expressing different emotions through their speech melody and facial expressions. The video sequences were presented audio-visually (AV) or unimodally (auditory A, visually V). The participants' task was to identify the presented emotion based on the nonverbal information expressed within the dynamic auditory, visual or audio-visual stimuli. Subsequently, measures of hearing and vision, mood, arousal, sustained attention, and trait emotional intelligence (SREIT; Schutte et al., 1998) were collected to identify potential confounders and to test if gender-associated variance in emotion recognition might be attributed to one or several of these factors. Furthermore, based on a recent meta-analysis (Murphy & Hall, 2011) showing an association between intelligence and nonverbal cue decoding accuracy, we were interested in whether emotion recognition performance was associated with performance in the TAP as an aspect of cognitive ability.

Stimulus material

Dynamic stimuli are better recognised than static ones (Trautmann, Fehr, & Herrmann, 2009) and

brain areas differ in their activation during perception of dynamic compared to static stimuli (Kilts, Egan, Gideon, Ely, & Hoffman, 2003; Sato, Kochiyama, Yoshikawa, Naito, & Matsumura, 2004; Trautmann et al., 2009). Furthermore, brain areas activated during perception of dynamic emotional stimuli differed between the two genders (Kret, Pichon, Grèzes, & de Gelder, 2011). Therefore, short video sequences (resolution: 720 × 576 pixels, sound: 48 kHz, 16 bit, mean duration = 965 ms; SD = 402 ms) were presented in two unimodal conditions (auditory = A, visual = V) and one bimodal condition (audio-visual = AV). Stimulus material consisted of eight words spoken by professional actors (2 female, 2 male) in neutral, as well as in four different emotional intonations (happy, alluring, angry, disgusted) with a congruent facial expression. The complete stimulus set contained 120 different stimuli (8 words × 5 emotional categories × 3 modalities). Stimulus material was also balanced for the gender of the displays.

The stimuli used in the present study were taken from a set which was validated in previous experiments (Kreifelts et al., 2007) with emotion categorisation hit rates well above chance level. Considering semantic valence, words were selected and balanced according to ratings on a 9-point Self-Assessment Manikin scale (SAM; Bradley & Lang, 1994) as assessed in a previous study (Herbert, Kissler, Junghöfer, Peyk, & Rockstroh, 2006): Four words had a neutral connotation (e.g., "Möbel" [furniture], mean valence = 5.2) and four words, in equal parts, had a positive (e.g., "Freundschaft" [friendship], mean valence rating = 8.7) or negative connotation (e.g., "Eiter" [pus], mean valence rating = 1.9).

In addition to basic emotions including happiness, anger and disgust, alluring expressions were also used in the stimulus set. Alluring stimuli were generated by the actors' nonverbal expression of sexual interest in an inviting manner using a seductive tone of voice. The resulting alluring expressions were uniformly characterised by a soft and sustained intonation in the lower vocal frequency spectrum accompanied by a slowly developing smile and a slight widening of the palpebral fissure.

Experimental design and task

Visual and audio-visual stimuli were presented on a 17-inch flat screen (LG FLATRON L1953PM) with a resolution of 800 × 600 pixels. The speaker's face on the screen was approximately the same size as a real face. In the auditory and audio-visual conditions sound was conveyed through headphones (Sennheiser, HD 515), with participants adjusting the volume individually. The experiment took place in a quiet room where participants were seated in a comfortable position with their heads at a distance of ∼70 cm from the screen. Stimulus presentation occurred in a randomised order and the full set of stimuli was presented twice with a five-minute break between the sessions.

Each trial was structured as follows: first, a horizontal scale showing five emotional categories was presented for one second (1; see Figure 1). Emotional categories were numbered consecutively from left to right with the appropriate number indicated below the name of the respective category. To avoid possible laterality effects the positions of the emotional categories on the scale were permuted, resulting in a set of eight different scales. These were changed between the participants. The neutral category was always positioned centrally whereas the negative (angry, disgusted) and positive (happy, alluring) categories were placed contralaterally. With the aim of directing the participants' attention to the stimulus, a yellow fixation cross and a pure tone (302 Hz) were presented simultaneously for one second (2). Subsequently, the stimulus was presented (3). After stimulus offset the emotional categories scale was shown again (4). As soon as an emotional category was chosen, a short visual feedback (700 ms) occurred (5). The response window (10 s duration) was time-locked to the onset of the stimulus. The trial duration ranged, depending on stimulus duration and response time, from 3.2 to 12.7 seconds.
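As a minimal illustration of the factorial design described above, the following Python sketch enumerates the stimulus set; apart from the three example words given in the text, the word list and any further details are assumptions, not the original stimulus material.

```python
# Illustrative sketch (not the authors' materials): enumerating the 8 x 5 x 3 factorial
# stimulus set described above. Five of the eight German words are placeholders.
from itertools import product

words = ["Moebel", "Freundschaft", "Eiter", "word4", "word5", "word6", "word7", "word8"]
categories = ["happy", "alluring", "neutral", "angry", "disgusted"]   # 5 emotional categories
modalities = ["A", "V", "AV"]                                         # auditory, visual, audio-visual

stimulus_set = list(product(words, categories, modalities))
assert len(stimulus_set) == 120        # 8 words x 5 categories x 3 modalities

total_trials = 2 * len(stimulus_set)   # full set presented twice
assert total_trials == 240             # matches the 240 experimental trials reported
```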

Figure 1. Experimental design. Emotional stimuli were presented auditorily, visually and audio-visually and should be classified with respect to the emotional category. Presentation occurred in randomised order and stimuli were balanced for emotional category and for modality. A horizontal scale with five categories ("EROTIK" = alluring expression, "FREUDE" = happy expression, "NEUTRAL" = neutral expression, "ÄRGER" = angry expression, "EKEL" = disgusted expression) was used. The order of the categories varied resulting in eight scales which were balanced across participants. The scale was presented prior to and after stimulus presentation and participants received a visual feedback after response selection. The response window had a maximum duration of 10 s beginning with the onset of the stimulus. Depending on response time the trial duration ranged from 3.2 to 12.7 seconds. Copyright © 2012 by the American Psychological Association. Reproduced (or Adapted) with permission. The official citation that should be used in referencing this material is Lambrecht, L., Kreifelts, B., & Wildgruber, D. (2012). Age-related decrease in recognition of emotional facial and prosodic expressions. Emotion (Washington, D.C.), 12(3), 529–539. doi: 10.1037/a0026827. No further reproduction or distribution is permitted without written permission from the American Psychological Association.

Responses were given via horizontally adjoining keys on the computer keyboard (number keys 1 to 5 on the letter block). Participants were told to use only their right index finger for response selection. Furthermore, participants were instructed to classify the presented stimulus as quickly as possible and based solely on non-verbal emotional cues while ignoring potential emotional word content. Participants were instructed to appraise the emotional state of the presented person based on emotional prosody and/or emotional facial expression and to select the emotional category that fitted their impression best.

Before the main experiment participants were familiarised with the experimental setting in a short training session comprised of 15 stimuli, which were not presented in the main experiment. The experiment did not start before the participant fully understood the procedure and indicated that she/he was ready to begin. In total participants judged 15 stimuli prior to and 240 stimuli during the experiment.

Additional measures

Self-Report Emotional Intelligence Test (SREIT)

Following the experiment the participants completed the SREIT, a self-reported trait measure of emotional intelligence (Schutte et al., 1998). The SREIT corresponds to Salovey and Mayer's framework of emotional intelligence (Salovey & Mayer, 1990), implying appraisal, expression, regulation, as well as utilisation of emotions relating to oneself and to others, and it is also related to well-being and measures of social skills with high test–retest reliability (0.78; Schutte et al., 2001, 1998). The SREIT consists of 33 items with each item being rated on a 5-point Likert scale (1 = Strongly agree, 5 = Strongly disagree) resulting in scores ranging between 33 and 165. Higher scores indicate a greater degree of trait emotional intelligence. Three of the items were reverse-scored.

Mood and arousal

Prior to the experiment, participants rated their arousal (1 = Very calm, 9 = Highly aroused) and their mood (1 = Very bad, 9 = Very good) on a 9-point Self-Assessment Manikin scale (Bradley & Lang, 1994).
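The SREIT scoring rule described above (33 items on a 5-point scale, three reverse-scored items, totals between 33 and 165) can be illustrated with a small sketch; since the article does not specify which items are reverse-scored, the indices below are placeholders.

```python
# Minimal scoring sketch for a 33-item, 5-point self-report scale such as the SREIT
# (Schutte et al., 1998). The reverse-scored item indices are assumptions.
from typing import Sequence

def score_sreit(responses: Sequence[int], reverse_items: Sequence[int]) -> int:
    """Sum 33 Likert responses (1-5), reverse-scoring the given 0-based item indices."""
    if len(responses) != 33 or not all(1 <= r <= 5 for r in responses):
        raise ValueError("expected 33 responses coded 1-5")
    reverse = set(reverse_items)
    return sum((6 - r) if i in reverse else r for i, r in enumerate(responses))

# Example: uniform neutral responding lands in the middle of the 33-165 range.
print(score_sreit([3] * 33, reverse_items=[4, 27, 32]))  # -> 99 (placeholder item indices)
```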

TAP

The subtest "Daueraufmerksamkeit" of the "Testbatterie zur Aufmerksamkeitsprüfung" (TAP), version 2.1 (Vera Fimm, Psychologische Testsysteme, Herzogenrath), is a valid and reliable test to assess the participants' ability for sustained attention and working memory (Cronbach's alpha = .985, test–retest reliability 0.81 after 25 days; Zimmermann & Fimm, 2002). Response times and mistakes (number of reactions without any critical stimulus) were registered.

Please note that this data set was analysed with a focus on age as well (Lambrecht, Kreifelts, & Wildgruber, 2012).

RESULTS

Statistical analyses were performed using SPSS Statistics 17.0 (SPSS Inc., Chicago, IL, USA). The individual hit rates were averaged across the two sessions (repetitions) of the experiment. Three four-factorial analyses of variance (ANOVAs) for repeated measures with Modality (auditory, visual, audio-visual), Emotional Category (happy, alluring, neutral, angry, disgusted) and Gender of Display as within-subject factors and Gender of Participant as between-subject factor were performed. In the first ANOVA raw hit rates were analysed, in the second selection biases were evaluated, and in the third ANOVA the unbiased hit rate (Hu) was used. Partial eta-squared (ηp2) served as a measure of effect size. All resulting p-values were corrected for heterogeneous correlations (Geisser & Greenhouse, 1958).

Raw hit rates

Significant main effects were observed for all four factors of the ANOVA (Modality, Emotional Category, Gender of the Participant, Gender Display). Statistical measures of all main effects and interactions are listed in Table 1. Subsequent matched sample t-tests for male participants revealed a significantly higher raw hit rate for recognition of alluring stimuli in female as compared to male displays, t(39) = 4.6, p < .01. Female participants also showed significantly higher raw hit rates for recognition of alluring stimuli in female as compared to male displays, t(43) = 2.3, p < .05. A t-test for independent samples did not evidence a significant difference between men and women in decoding alluringness from female versus male displays, t(42) = 0.9, p = .38.
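As an illustration of how the raw hit rates entering these ANOVAs can be derived from trial-level responses, a minimal Python sketch is given below; the long-format column names and toy data are assumptions, not the authors' SPSS pipeline.

```python
# Illustrative sketch (assumed data layout): per-participant raw hit rates for each
# Modality x Emotional Category x Display-Gender cell, averaged across the two sessions.
import pandas as pd

trials = pd.DataFrame({
    "participant":    [1, 1, 1, 1],
    "session":        [1, 2, 1, 2],
    "modality":       ["A", "A", "AV", "AV"],
    "emotion":        ["alluring", "alluring", "angry", "angry"],
    "display_gender": ["f", "f", "m", "m"],
    "response":       ["alluring", "neutral", "angry", "angry"],
})

trials["hit"] = (trials["response"] == trials["emotion"]).astype(float)

# Mean across trials within each session, then across the two sessions.
raw_hit_rates = (
    trials.groupby(["participant", "modality", "emotion", "display_gender", "session"])["hit"]
          .mean()
          .groupby(["participant", "modality", "emotion", "display_gender"])
          .mean()
)
print(raw_hit_rates)
```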

Table 1. Main effects and interactions of the ANOVA for repeated measures for raw hit rates

Factor F(df, error) Partial η2


Mod F(1.6, 128.5) = 344.6** .808
Emo F(2.3, 273.5) = 26.8** .246
G (Display) F(1.0, 82.0) = 10.4** .113
G (Participant) F(1.0, 82.0) = 4.3* .050
Mod × Emo F(6.12, 501.7) = 172.6** .678
G (Display) × G (Participant) F(1.0, 82.0) = 0.066 .001
Mod × G (Participant) F(1.6, 128.5) = 7.5** .084
Emo × G (Participant) F(3.3, 273.5) = 0.12 .001
Mod × G (Display) F(1.9, 158.6) = 13.6** .143
Emo × G (Display) F(3.6, 297.6) = 33.8** .292
Mod × G (Participant) × G (Display) F(1.9, 158.6) = 0.206 .003
Emo × G (Participant) × G (Display) F(3.6, 297.6) = 1.6 .003
Mod × Emo × G (Participant) F(6.1, 501.7) = 0.70 .008
Mod × Emo × G (Display) F(6.7, 550.8) = 17.8** .179
Mod × Emo × G (Participant) × G (Display) F(6.7, 550.8) = 0.449 .005

Notes: Mod = modality; Emo = emotional category; G (Display) = Display’s gender; G (Participant) = Participant’s gender.
*p < .05; **p < .01.

The means of the raw hit rates of the separate emotional categories and of the average across all emotional categories for men and women, as well as t-tests for the comparison of the raw hit rates between the genders, are displayed in the supplemental material (Table 6).

Gender specific response biases

We conducted a second repeated-measures ANOVA with Modality, Emotional Category and Gender Display as within-subject factors and Gender of Participant as a between-subject factor on the frequencies with which emotion categories were selected. The ANOVA (Modality × Emotional Category × Gender of the Participant × Gender Display) revealed a significant effect for the Emotional Category, a significant interaction between Emotional Category and Modality, a significant interaction between Emotional Category and Gender Display, a significant interaction between Emotional Category, Gender of the Participant and Gender Display, as well as a significant interaction between Emotional Category and Modality and Gender Display. For the statistical values see Table 2. In order to further analyse the interaction between Gender of Participant, Gender Display and Emotional Category, differences between the frequencies of choosing each emotional category displayed by a male or female model were calculated. Subsequently, these differences were compared across the emotional categories. These differences, which are technically first order interaction terms, were compared between the genders of the participants. P-values were Bonferroni-corrected for the number of comparisons (10). Subsequently, we investigated if men and women differed in their emotion confusion rates. Therefore, we used t-tests for independent samples comparing the respective frequency of choosing an incorrect emotional category instead of the right one. Overall 20 confusions were compared between the two genders.

In the subsequent t-tests only a single difference remained significant after correction for multiple comparisons. The difference between female and male displays with regard to the difference between the choices of alluring and neutral as an emotional category was significant for the gender of the participants, t(82) = −3.03, p < .05. Both men and women chose alluring rather than neutral when an actress presented the stimulus. For men (mean difference ± standard error of the difference: 0.07 ± 0.01), this tendency was significantly larger than it was for women (0.02 ± 0.01).

For a complete confusion matrix of stimulus category and response selection (Table 7) as well as for Bonferroni corrected comparisons of the confusions (Table 8), see supplemental material.

Unbiased hit rates

In the third ANOVA, the unbiased hit rate (Hu) was used. This is a precise measure that accounts for false alarms and biases when using response categories by multiplying the raw hit rate with the positive predictive value (Wagner, 1993). It is defined as "the joint probability that a stimulus category is correctly identified given that it is

Table 2. Main effects and interactions of the ANOVA for repeated measures for the emotional categories

Factor F(df, error) Partial η2


Emo F(2.2, 181.4) = 33.4** .289
Emo × Mod F(4.8, 397.6) = 169.2** .674
Emo × G (Participant) F(2.2, 181.4) = 0.13 .022
Emo × G (Display) F(3.4, 280.8) = 32.9** .286
Emo × G (Participant) × G (Display) F(3.4, 280.8) = 2.7* .032
Mod × Emo × G (Participant) F(4.8, 397.6) = 1.4 .016
Mod × Emo × G (Display) F(6.0, 495.9) = 28.3** .257
Mod × Emo × G (Participant) × G (Display) F(6.0, 495.9) = 1.7 .020

Notes: Mod = modality; Emo = emotional category; G (Display) = Display’s gender; G (Participant) = Participant’s gender. *p < .05; **p < .01.

presented at all and that a response is correctly used given that it is used at all". So, with regard to our experiment, it combines the sensitivity as well as the specificity of the subjects' ability to correctly classify emotional stimuli. For the following statistical analyses arc sine transformed unbiased hit rates were used (Wagner, 1993).

The ANOVA (Modality × Emotional Category × Gender of the Participant × Gender Display) revealed a main effect of Modality, Emotional Category and Gender Display, but no main effect of Gender of the Participant. There was a significant interaction between Modality and Emotional Category, as well as a significant interaction between Modality and Gender of the Participant. Furthermore, there was a significant interaction between Modality and Gender Display and a significant interaction between Emotional Category and Gender Display, as well as between Modality and Emotional Category and Gender Display. For the statistical values see Table 3.

The main effect of Modality was due to higher hit rates for AV than for either of the unimodal conditions, t(83) ≥ 12.6, p < .01, d = 1.384, and to higher hit rates for V than A, t(83) = 15.1, p < .01, d = −1.646. Mean unbiased hit rates (± SEM) were 0.76 ± 0.01 (AV), 0.65 ± 0.01 (V) and 0.45 ± 0.01 (A). The main effect of Emotional Category was based on the following pattern: alluring expressions (0.70 ± 0.01) were better recognised than any other emotional category, t(83) ≥ 4.1, p < .01, followed by happy (0.66 ± 0.01), t(83) ≥ 5.5, p < .01, and then neutral (0.59 ± 0.01), angry (0.58 ± 0.01) and disgusted expressions (0.58 ± 0.01), with no significant differences between the last three categories, t(83) ≤ 1.4, p ≥ .16.

The source of the interaction between Modality and Emotional Category was investigated by comparing the differences of unbiased hit rates of the three modalities between the emotional categories (see Table 4 for the statistical values and Figure 2). The recognition rates were higher for V than A for every emotional category, t(83) ≥ 4.2, p < .01, with the exception of alluring expressions, which exhibited comparable recognition rates in both unimodal conditions, t(83) = 0.7, p > .05. The greatest difference was found for disgust as compared to the remaining categories, followed by happy, angry and neutral expressions. The comparison between AV and A showed a similar pattern with the greatest difference for disgusted expressions. In contrast, the difference between AV and V showed the greatest values for neutral and alluring expressions.
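A minimal sketch of the unbiased hit rate (Hu; Wagner, 1993) defined above, computed per category from a stimulus-by-response confusion matrix, is given below; the toy confusion matrix and the arcsin(√Hu) form of the arcsine transform are assumptions.

```python
# Illustrative sketch of the unbiased hit rate (Hu): raw hit rate multiplied by the
# positive predictive value, computed per emotional category from a confusion matrix.
import numpy as np

categories = ["happy", "alluring", "neutral", "angry", "disgusted"]

# Toy confusion matrix: rows = presented category, columns = chosen category.
confusion = np.array([
    [40,  2,  3,  1,  2],
    [ 3, 38,  4,  1,  2],
    [ 4,  5, 35,  2,  2],
    [ 1,  1,  3, 40,  3],
    [ 2,  1,  2,  5, 38],
], dtype=float)

hits = np.diag(confusion)
presented = confusion.sum(axis=1)            # how often each category was shown
chosen = confusion.sum(axis=0)               # how often each category was selected

raw_hit_rate = hits / presented
positive_predictive_value = hits / chosen
hu = raw_hit_rate * positive_predictive_value
hu_arcsine = np.arcsin(np.sqrt(hu))          # transform used before the ANOVAs

for cat, h, ha in zip(categories, hu, hu_arcsine):
    print(f"{cat:10s} Hu = {h:.3f}  arcsine(Hu) = {ha:.3f}")
```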

Table 3. Main effects and interactions of the ANOVA for repeated measures for unbiased hit rates

Factor F(df, error) Partial η2


Mod F(1.9, 159.2) = 343.9** .807
Emo F(3.5, 283.4) = 39.5** .310
G (Display) F(1.0, 82.0) = 26.8** .246
G (Participant) F(1.0, 82.0) = 2.9 .034
Mod × Emo F(5.8, 474.5) = 54.1** .398
G (Display) × G (Participant) F(1.0, 82.0) = 0.001 .001
Mod × G (Participant) F(1.9, 159.2) = 5.8** .066
Emo × G (Participant) F(3.5, 283.4) = 2.2 .026
Mod × G (Display) F(1.9, 157.6) = 24.7** .232
Emo × G (Display) F(2.8, 228.5) = 10.0** .109
Mod × G (Participant) × G (Display) F(1.9, 157.6) = 0.01 .001
Emo × G (Participant) × G (Display) F(2.8, 228.5) = 0.22 .003
Mod × Emo × G (Participant) F(5.8, 474.5) = 0.74 .009
Mod × Emo × G (Display) F(6.2, 507.6) = 6.5** .074
Mod × Emo × G (Participant) × G (Display) F(6.2, 507.6) = 0.553 .007

Notes: Mod = modality; Emo = emotional category; G (Display) = Display’s gender; G (Participant) = Participant’s gender. *p < .05; **p < .01.

Table 4. Differences in unbiased hit rate between the modalities: Comparisons between the emotional categories

              Happy          Alluring        Neutral         Angry
V–A           t      p       t      p        t      p        t      p
Happy         —      —
Alluring      13.2   <.01    —      —
Neutral       10.4   <.01    –3.6   <.01     —      —
Angry         6.7    <.01    –5.3   <.01     –2.9   <.01     —      —
Disgusted     –4.7   <.01    –18.3  <.01     –16.7  <.01     –16.1  <.01

              Happy          Alluring        Neutral         Angry
AV–A          t      p       t      p        t      p        t      p
Happy         —      —
Alluring      10.5   <.01    —      —
Neutral       7.4    <.01    –4.2   <.01     —      —
Angry         6.0    <.01    –4.0   <.01     –0.5   ns       —      —
Disgusted     –4.0   <.01    –13.5  <.01     –9.2   <.01     –13.0  <.01

              Happy          Alluring        Neutral         Angry
AV–V          t      p       t      p        t      p        t      p
Happy         —      —
Alluring      –2.8   <.01    —      —
Neutral       –3.3   <.01    –0.6   ns       —      —
Angry         –1.1   ns      1.1    ns       1.9    ns       —      —
Disgusted     0.3    ns      2.4    <.05     3.2    <.01     2.9    <.01

Notes: A = auditory; V = visual; AV = audio-visual modality. t = t-value; p = p-value; ns = not significant. Significant results are bold. The t- and p-values refer to the mean difference between the unbiased hit rate for each emotional category for the two modalities indicated in each panel.

Figure 2. Unbiased hit rates for separate emotional categories and modalities. Error bars represent standard errors of the means.

The source of the interaction between Modality and Gender of the Participant was investigated by comparing the differences of unbiased hit rates of the three modalities between male and female participants. Men showed greater differences in unbiased hit rates between visually and auditorily presented stimuli than women, t(82) = 2.2, p < .05 (mean difference ± standard error of the difference: men: 0.28 ± 0.02, women: 0.21 ± 0.02), as well as between audio-visually and auditorily presented stimuli, t(82) = 2.8, p < .01 (men: 0.46 ± 0.03, women: 0.36 ± 0.02). The difference between unbiased hit rates for audio-visual and visual stimuli, on the other hand, was not significantly different between men (0.18 ± 0.02) and women (0.16 ± 0.02;

t(82) = 0.9, p = .350). T-tests for independent samples indicated that women (mean ± SEM: 0.50 ± 0.02) had significantly higher unbiased hit rates than men (0.40 ± 0.02) for auditory stimuli, t(82) = 3.5, p = .001, d = 0.765, but not for visual, t(82) = 1.2, p = .236, d = 0.262 (women: 0.66 ± 0.02, men: 0.63 ± 0.02), or audio-visual stimuli, t(82) = 0.4, p = .681, d = 0.087 (women: 0.77 ± 0.02, men: 0.75 ± 0.02; Figure 3). For auditory stimuli 12% of the variance in recognition rates (adjusted R2) could be explained by gender, for visual stimuli the explained variance amounted to 2% and for audio-visual stimuli to 0.2%. Furthermore, we calculated the minimum difference between the bimodal and the maximum of both unimodal conditions, AV – max(A, V), for the unbiased hit rate as a measure of audio-visual integration. Higher values indicate stronger integration effects. In order to compare the effects of audio-visual integration between men and women, t-tests for independent samples were applied. The measure of audio-visual integration (mean ± SEM) did not differ significantly between men (0.12 ± 0.01) and women (0.10 ± 0.01; t(82) = 1.1, p = .270).

Gender differences with regard to Emotional Category were analysed using t-tests for independent samples. They showed significant differences in unbiased hit rates between women and men for alluring, t(82) = 2.1, p = .039, d = 0.46 (mean ± SEM: women: 0.73 ± 0.02, men: 0.67 ± 0.02), happy, t(82) = 2.6, p = .013, d = 0.57 (women: 0.69 ± 0.02, men: 0.62 ± 0.02), and neutral expressions, t(82) = 2.1, p = .041, d = 0.46 (women: 0.62 ± 0.02, men: 0.56 ± 0.02). No significant gender differences were found for angry, t(82) = 1.44, p = .152, d = 0.315 (women: 0.60 ± 0.02, men: 0.56 ± 0.02), and disgusted stimuli, t(82) = 0.80, p = .426, d = 0.175 (women: 0.59 ± 0.02, men: 0.58 ± 0.02; see Figure 4).

The difference in unbiased hit rates for alluring stimuli of the display's opposite and same gender was not significant, neither for male, t(39) = −1.4, p = .157, nor for female participants, t(43) = 0.5, p = .59.

Figure 3. Unbiased hit rates for separate modalities and gender groups. Error bars represent standard errors of the means. Note: *p < .05.

Figure 4. Unbiased hit rates for separate emotional categories and gender groups. Error bars represent standard errors of the means. Note: *p < .05.
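The audio-visual integration measure described above, AV – max(A, V), and its comparison between gender groups can be illustrated with a short sketch; the simulated per-participant values below are assumptions and only the procedure is of interest.

```python
# Illustrative sketch (assumed data, not the authors' code) of the audio-visual
# integration measure AV - max(A, V) per participant, compared between genders
# with an independent-samples t-test.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_men, n_women = 40, 44

# Assumed per-participant mean unbiased hit rates for each modality (toy values).
hu = {
    "men":   {"A": rng.normal(0.40, 0.08, n_men),
              "V": rng.normal(0.63, 0.08, n_men),
              "AV": rng.normal(0.75, 0.08, n_men)},
    "women": {"A": rng.normal(0.50, 0.08, n_women),
              "V": rng.normal(0.66, 0.08, n_women),
              "AV": rng.normal(0.77, 0.08, n_women)},
}

def integration(group):
    """AV minus the better of the two unimodal conditions; higher = stronger integration."""
    return group["AV"] - np.maximum(group["A"], group["V"])

t, p = stats.ttest_ind(integration(hu["men"]), integration(hu["women"]))
print(f"audio-visual integration, men vs. women: t = {t:.2f}, p = {p:.3f}")
```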

Mediation analysis

In order to identify potential mediators of gender-associated effects on emotion recognition, the two gender groups were compared with respect to age, verbal IQ, education years, SREIT score, mean hearing loss, hearing loss at the separate frequencies, vision loss, sustained attention, and mood and arousal scores, applying independent samples t-tests. For parameters differing significantly between the two gender groups, these parameters were tested to see if they were linearly associated with unbiased hit rate. To this end, we used linear regression analyses.

For mediation analyses a multiple mediation model by Preacher and Hayes (2008) was used to account for the estimations of the separate mediating variables as well as the combined effects. Population parameters related to gender, as well as unbiased hit rates, were included as potential mediators (M1, M2 … Mn), gender was defined as independent variable (X) and overall emotion recognition performance was defined as a dependent variable (Y). The total effect of X on Y was calculated (c), as well as the direct effect of X on Y (c′) and the indirect effect of X on Y through the potential mediators (a1b1, a2b2 … anbn), with the effect of X on the potential mediators (a1, a2 … an) and the effect of the potential mediators on Y (b1, b2 … bn). The indirect effects were estimated with the recommended method of bootstrapping (5,000 resamples), which does not impose the assumption of normality on the data. Results were deemed significant if the 95% confidence intervals (CI) of the effects did not include zero.

No significant differences between women and men could be observed for age, education years, verbal IQ, mean hearing and vision loss, SREIT score, arousal score, and sustained attention. For the mood score, the analysis indicated slightly higher values for men. Means and standard deviations for the listed factors are shown in Table 5 for both genders. With regard to hearing loss at the separate frequencies, men showed significantly greater hearing loss than women at 4,000 Hz, t(82) = 2.4, p < .05, 6,000 Hz, t(63) = 2.9, p < .01, and 8,000 Hz, t(71) = 2.9, p < .01.

TAP

Performance in the TAP measured by mistakes made was correlated with emotion recognition performance with regard to raw hit rates, r = −.285, p < .01, as well as with regard to unbiased hit rates, r = −.278, p < .05.

The regression of the overall emotion recognition performance on mood revealed no significant relationship, β = −0.005, t(82) = 0.48, p = .96. However, the remaining regression analyses revealed significant relationships between emotion recognition performance of auditory stimuli and hearing loss for 4,000 Hz, β = −0.467, t(82) = 4.8, p < .01, 6,000 Hz, β = −0.366, t(82) = 3.6, p < .01, and 8,000 Hz, β = −0.454, t(82) = 4.6, p < .01.

A multiple mediation analysis with hearing loss of the frequencies 4,000 to 8,000 Hz as potential mediators of the effect of gender on emotion recognition of auditory stimuli was conducted. It confirmed a total effect of gender on emotion recognition performance (c path), t = −3.55, p < .01. Furthermore, the analysis indicated a non-significant total effect of all mediators (ab path; 95% CI: −.0687, −.0055).
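For illustration, a simplified bootstrap of the indirect effects aibi in a multiple mediation model, in the spirit of the Preacher and Hayes (2008) approach described above (but not their macro), is sketched below on simulated data.

```python
# Simplified illustration (not the Preacher & Hayes macro): bootstrapping indirect
# effects a_i * b_i with gender as X, hearing loss at several frequencies as mediators,
# and emotion recognition performance as Y. Data are simulated; only the procedure matters.
import numpy as np

rng = np.random.default_rng(1)
n = 84
gender = np.repeat([0, 1], [40, 44]).astype(float)        # 0 = male, 1 = female
mediators = np.column_stack([                              # e.g. hearing loss at 4/6/8 kHz
    -3.0 * gender + rng.normal(0, 5, n) for _ in range(3)
])
y = 0.05 * gender - 0.004 * mediators.sum(axis=1) + rng.normal(0, 0.1, n)

def ols_coefs(y_vec, X):
    """Least-squares slopes, with an intercept column added and then dropped."""
    X = np.column_stack([np.ones(len(y_vec)), X])
    return np.linalg.lstsq(X, y_vec, rcond=None)[0][1:]

def indirect_effects(idx):
    a = np.array([ols_coefs(mediators[idx, j], gender[idx])[0] for j in range(mediators.shape[1])])
    b = ols_coefs(y[idx], np.column_stack([gender[idx], mediators[idx]]))[1:]
    return a * b                                           # one a_i * b_i per mediator

boot = np.array([indirect_effects(rng.integers(0, n, n)) for _ in range(5000)])
ci_low, ci_high = np.percentile(boot, [2.5, 97.5], axis=0)
for j, (lo, hi) in enumerate(zip(ci_low, ci_high)):
    print(f"mediator {j + 1}: 95% CI of a*b = [{lo:.4f}, {hi:.4f}]")
```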

Table 5. Participant characteristics: Population parameters

Variable Women Men t(82) p-value


Mean (SD) Mean (SD)

Age 44.7 (13.2) 44.9 (14.1) 0.05 .961


Education years 17.8 (4.4) 18.4 (5.2) 0.59 .556
Verbal IQ 125.5 (13.5) 122.8 (14.5) 0.90 .370
Mean hearing loss (dB) 15.3 (7.2) 17.4 (7.6) 1.33 .189
Vision loss (%) 8.4 (4.1) 7.4 (5.7) 0.96 .339
SREIT score 123.4 (9.5) 125.0 (10.8) 0.70 .483
Arousal score 3.1 (1.8) 2.7 (1.6) 0.98 .330
Mood score 6.5 (1.2) 7.0 (1.1) 2.25 .027
TAP: mistakes 3.2 (5.4) 3.7 (4.3) 0.44 .662
TAP: reaction time (ms) 653 (180) 616 (170) 0.96 .338

Notes: SD = standard deviation; Verbal IQ = verbal intelligence, measured by MWT-B; SREIT = self-report emotional intelligence test;
arousal and mood scores were rated by each participant and ranged from 1 to 10 with higher arousal scores indicating greater arousal and
higher mood scores indicating better mood; TAP = “Testbatterie zur Aufmerksamkeitsprüfung” = battery of tests investigating attention,
subtest “Daueraufmerksamkeit” = sustained attention. The p-values are two-tailed.

Analyses of the separate frequencies of hearing loss as potential mediators revealed that hearing loss for the frequency of 8,000 Hz (95% CI: −.1079, −.0030), but not of 6,000 Hz (95% CI: −.0053, −.0971) and 4,000 Hz (95% CI: −.0683, −.0028), had a significant mediating influence on emotion recognition performance of auditory stimuli. After removing the influence of hearing loss on the relationship between gender and emotion recognition, the direct effect was still significant (c′ path), t = −2.67, p < .05.

DISCUSSION

The present study confirmed that women are more accurate in the recognition of emotional prosody. In contrast to previous observations of gender differences in recognition accuracy of visually (Biehl et al., 1997; Hall & Matsumoto, 2004; McClure, 2000) and audio-visually (Collignon et al., 2010; Hall, 1978) presented emotional expressions, however, no significant gender-specific effects could be demonstrated during perception of visual and audio-visual stimuli in our study. As suggested by Hall's review (1978), which included 75 studies dating back to 1923 and featuring a variety of different stimuli, methodological differences could be responsible for diverging results. The study of Collignon et al. (2010) as well as the present study, on the other hand, differ with regard to the emotional categories used: only one emotional category (disgust) was employed in both studies. Although in our study gender differences were only significant for auditory stimuli, each experimental condition displayed a trend for women to be more accurate in recognising emotions. Presumably, a general advantage in emotion recognition for women has been masked by ceiling effects for recognition of visual and audio-visual stimuli, while auditory stimuli were the most difficult to recognise. Additionally, as sample sizes influence statistical sensitivity, it is possible that in a larger sample a significant effect of gender on recognition of visual and audio-visual stimuli would also have been observed.

Taking into account the impact of emotion recognition for evolutionary biology might be helpful in interpreting the observed female advantage in recognition of emotions in the voice. Females, being mainly responsible for child rearing in most parts of the world, generally rely on correct recognition of nonverbal emotional signals until infants learn to speak. This relation has already been described by Babchuk, Hames, and Thompson (1985) as "The primary caretaker hypothesis". As our female ancestors had to take care of their household chores while simultaneously paying attention to their infants without permanent visual contact, recognition of emotional signals from the voice might be considered to be of vital importance for their well-being. Thus, from an evolutionary point of view, children whose mothers had highly developed abilities in emotion recognition may have had a selection advantage. A gender-specific heredity of this attribute might possibly be mediated by epigenetic mechanisms affecting the expression of genes without changing the DNA per se, as recently observed in the contexts of other types of gender-specific behaviour (Champagne, 2013; Saavedra-Rodríguez & Feig, 2013). Alternatively, it can be assumed that accuracy of emotion recognition of voices is better in women than in men due to a higher level of experience caused by their more time-intensive role in child care. More research is needed, however, to further explore these considerations.

Results regarding the distinct emotional categories indicate that women are more accurate in recognising positive and neutral categories, whereas no gender differences were found for anger and disgust. As disgust scarcely appears in our daily life (Myrtek, Aschenbrenner, & Brügner, 2005), it is possible that women indeed focus more strongly on emotional signals than men, but cannot develop a recognition advantage for disgust as they are rarely confronted with this emotion. The absence of a gender difference for the recognition of anger is supported by an earlier study (Bonebright et al., 1996).

Furthermore, our results evidenced no significant interaction between gender of participant and gender of model for both raw and unbiased hit rates, confirming results from Hall (1978). However, with regard to alluring stimuli, we found higher hit rates and a selection bias for male and female participants, but only if stimuli were

presented by an actress. These findings demonstrate an emotion-specific effect rather than a general-interaction effect of the gender of the participant and gender of the display. As the inclusion of alluring stimuli is novel, comparisons to other studies cannot be drawn yet.

This bias to choose alluring was observed to a greater degree in men than women. As, based on unbiased hit rates, no difference in recognition accuracy of alluring stimuli between female and male displays occurred, one can assume that there is a general tendency in men to select this category more often for female displays but there is no evidence for a gender-specific increase in recognition performance in male subjects. Moreover, it is interesting to note that female participants also had higher raw hit rates in recognition of alluring stimuli portrayed by a female speaker. From an evolutionary point of view, for example in assuring the reproduction of offspring, different signals might have been relevant in our ancestors depending on the gender of the sender: Whereas for women alluring behaviour prevailed, men might have been more likely to be successful by signalling strength. The tendency for choosing alluring in females was greater for male than for female participants. This may be the result of an expectation bias in men, expecting alluring signals from women rather than from men. Here, ubiquitous awareness of gender stereotypes could play a role. Alternatively, the men's bias in categorising female stimuli as alluring may reflect their aversion to appearing to exhibit homosexual attraction.

Mediation analysis revealed that the female advantage for recognition of auditory emotional stimuli is indeed partially conveyed by a more pronounced hearing loss of male participants at the 8,000 Hz frequency. However, the gender effect is not fully explained by the influence of this basic factor. Thus, other reasons have to be considered for the gender difference in accuracy of emotion recognition. Several factors with a potential causal relationship with the observed gender difference could be found in differences in brain structure (Cahill, 2006), in cerebral activation during emotion perception (Kret et al., 2011; Lee et al., 2002; Rymarczyk & Grabowska, 2007) or in hormonal status between women and men, for example, sex hormone levels have an impact on facial emotion recognition (Guapo et al., 2009).

Finally, our results suggest that nonverbal emotional expressions are better recognised on the basis of bimodal rather than unimodal presentation and, second, that emotions are better recognised from facial expressions than from emotional prosody. This is in line with earlier findings (Collignon et al., 2008, 2010; Hawk et al., 2009; Kreifelts et al., 2007).

When interpreting the results on the basis of the overall recognition accuracy, one has to bear in mind that the emotions differ in their recognition pattern between modalities: Most emotional expressions are better recognised from facial than from vocal cues (disgusted, happy, angry, neutral), but "alluring" was the only category of expressions without visual recognition advantage. As the inclusion of alluring expressions in an emotion recognition paradigm is very novel, any interpretations regarding the reason for this characteristic of alluring expressions have to be treated as tentative. It seems worthwhile, though, to consider the setting and situations in which alluring cues are typically used as a potential cause of the observed effect: Most often these are situations where individuals are close to each other with optimal transmission conditions for vocal cues.

Identification rates for recognition of disgust and happiness, on the other hand, evidence a very strong visual recognition advantage, which is in line with previous studies on recognition of vocal and facial cues expressing basic emotional states (Banse & Scherer, 1996; Calvo & Lundqvist, 2008; Castro & Lima, 2010; Dyck et al., 2008; Kirouac & Doré, 1985; Leppänen & Hietanen, 2004; Leppänen, Tenhunen, & Hietanen, 2003; Williams et al., 2009). Another driving factor for the interaction effect between sensory modality and type of emotional expression was the relatively smaller difference between recognition rates for visual and audio-visual stimulation in those emotional categories with particularly high recognition rates under visual stimulation (disgust, happiness). Here, ceiling effects have to be discussed as a potential cause of this effect.

Limitations and perspectives

In the present investigation emotion recognition was based on an explicit emotion-processing task. Since, in most situations, processing of nonverbal emotional signals occurs in an automatic, implicit manner, it might prove very valuable to include implicit emotion processing tasks with regard to facial and prosodic cues in future studies. These could further extend current knowledge (Williams et al., 2009) of implicit processing of emotional cues. Furthermore, the combination of biological measures (e.g., functional magnetic resonance imaging, measures of sexual hormonal levels) and psychosocial measures (e.g., frequency of appearance of each emotion in daily life, personality traits) within a study design comprising emotion presentation in several sensory modalities could help to assess gender differences in emotion recognition in an even more differentiated manner. Finally, as the present study evidenced gender differences in emotion recognition in the auditory, but not the visual, modality, it would be worthwhile to investigate how these differences relate to general face and voice recognition abilities among men and women. Furthermore, the application of visual stimuli with a higher degree of categorisation difficulty in future studies may help to avoid confounding ceiling effects and enhance sensitivity to gender differences in emotion recognition.

Manuscript received 26 February 2012
Revised manuscript received 17 July 2013
Manuscript accepted 19 August 2013
First published online 21 October 2013

Supplementary material

Supplementary material (Tables 6–8) is available via the 'Supplementary' tab on the article's online page (http://dx.doi.org/10.1080/02699931.2013.837378).

REFERENCES

Babchuk, W., Hames, R., & Thompson, R. (1985). Sex differences in the recognition of infant facial expressions of emotion: The primary caretaker hypothesis. Ethology and Sociobiology, 6(2), 89–101. doi:10.1016/0162-3095(85)90002-0
Banse, R., & Scherer, K. R. (1996). Acoustic profiles in vocal emotion expression. Journal of Personality and Social Psychology, 70(3), 614–636. doi:10.1037/0022-3514.70.3.614
Bänziger, T., Grandjean, D., & Scherer, K. R. (2009). Emotion recognition from expressions in face, voice, and body: The Multimodal Emotion Recognition Test (MERT). Emotion (Washington, DC), 9(5), 691–704. doi:10.1037/a0017088
Biehl, M., Matsumoto, D., Ekman, P., Hearn, V., Heider, K., Kudoh, T., & Ton, V. (1997). Matsumoto and Ekman's Japanese and Caucasian Facial Expressions of Emotion, JACFEE: Reliability data and cross-national differences. Journal of Nonverbal Behavior, 21(1), 3–21. doi:10.1023/A:1024902500935
Bonebright, T. L., Thompson, J. L., & Leger, D. W. (1996). Gender stereotypes in the expression and perception of vocal affect. Sex Roles, 34(5), 429–445. doi:10.1007/BF01547811
Bradley, M. M., & Lang, P. J. (1994). Measuring emotion: The Self-Assessment Manikin and the Semantic Differential. Journal of Behavior Therapy and Experimental Psychiatry, 25(1), 49–59. doi:10.1016/0005-7916(94)90063-9
Cahill, L. (2006). Why sex matters for neuroscience. Nature Reviews Neuroscience, 7(6), 477–484. doi:10.1038/nrn1909
Calvo, M. G., & Lundqvist, D. (2008). Facial expressions of emotion (KDEF): Identification under different display-duration conditions. Behavior Research Methods, 40(1), 109–115. doi:10.3758/BRM.40.1.109
Castro, S. L., & Lima, C. F. (2010). Recognizing emotions in spoken language: A validated set of Portuguese sentences and pseudosentences for research on emotional prosody. Behavior Research Methods, 42(1), 74–81. doi:10.3758/BRM.42.1.74
Champagne, F. A. (2013). Effects of stress across generations: Why sex matters. Biological Psychiatry, 73(1), 2–4. doi:10.1016/j.biopsych.2012.10.004
Collignon, O., Girard, S., Gosselin, F., Roy, S., Saint-Amour, D., Lassonde, M., & Lepore, F. (2008). Audio-visual integration of emotion expression. Brain Research, 1242, 126–135. doi:10.1016/j.brainres.2008.04.023
Collignon, O., Girard, S., Gosselin, F., Saint-Amour, D., Lepore, F., & Lassonde, M. (2010). Women process multisensory emotion expressions more efficiently than men. Neuropsychologia, 48(1), 220–225. doi:10.1016/j.neuropsychologia.2009.09.007

de Gelder, B., & Vroomen, J. (2000). The perception of emotions by ear and by eye. Cognition and Emotion, 14(3), 289–311. doi:10.1080/026999300378824
Dyck, M., Winbeck, M., Leiberg, S., Chen, Y., Gur, R. C., Gur, R. C., & Mathiak, K. (2008). Recognition profile of emotions in natural and virtual faces. PLoS ONE, 3(11), e3628. doi:10.1371/journal.pone.0003628
Ethofer, T., Anders, S., Erb, M., Droll, C., Royen, L., Saur, R., … Wildgruber, D. (2006). Impact of voice on emotional judgment of faces: An event-related fMRI study. Human Brain Mapping, 27(9), 707–714. doi:10.1002/hbm.20212
Ethofer, T., Wiethoff, S., Anders, S., Kreifelts, B., Grodd, W., & Wildgruber, D. (2007). The voices of seduction: Cross-gender effects in processing of erotic prosody. Social Cognitive and Affective Neuroscience, 2(4), 334–337. doi:10.1093/scan/nsm028
Fujimura, T., & Suzuki, N. (2010). Effects of dynamic information in recognising facial expressions on dimensional and categorical judgments. Perception, 39(4), 543–552. doi:10.1068/p6257
Fujisawa, T. X., & Shinohara, K. (2011). Sex differences in the recognition of emotional prosody in late childhood and adolescence. The Journal of Physiological Sciences, 61(5), 429–435. doi:10.1007/s12576-011-0156-9
Geisser, S., & Greenhouse, S. W. (1958). An extension of Box’s results on the use of the F distribution in multivariate analysis. The Annals of Mathematical Statistics, 29(3), 885–891. doi:10.1214/aoms/1177706545
Guapo, V. G., Graeff, F. G., Zani, A. C. T., Labate, C. M., dos Reis, R. M., & Del-Ben, C. M. (2009). Effects of sex hormonal levels and phases of the menstrual cycle in the processing of emotional faces. Psychoneuroendocrinology, 34(7), 1087–1094. doi:10.1016/j.psyneuen.2009.02.007
Hall, J. A. (1978). Gender effects in decoding nonverbal cues. Psychological Bulletin, 85(4), 845–857. doi:10.1037/0033-2909.85.4.845
Hall, J. A. (1984). Nonverbal sex differences: Communication accuracy and expressive style. Baltimore, MD: Johns Hopkins University Press.
Hall, J. A., & Matsumoto, D. (2004). Gender differences in judgments of multiple emotions from facial expressions. Emotion (Washington, DC), 4(2), 201–206. doi:10.1037/1528-3542.4.2.201
Hawk, S. T., van Kleef, G. A., Fischer, A. H., & van der Schalk, J. (2009). “Worth a thousand words”: Absolute and relative decoding of nonlinguistic affect vocalizations. Emotion (Washington, DC), 9(3), 293–305. doi:10.1037/a0015178
Herbert, C., Kissler, J., Junghöfer, M., Peyk, P., & Rockstroh, B. (2006). Processing of emotional adjectives: Evidence from startle EMG and ERPs. Psychophysiology, 43(2), 197–206. doi:10.1111/j.1469-8986.2006.00385.x
Kilts, C. D., Egan, G., Gideon, D. A., Ely, T. D., & Hoffman, J. M. (2003). Dissociable neural pathways are involved in the recognition of emotion in static and dynamic facial expressions. NeuroImage, 18(1), 156–168. doi:10.1006/nimg.2002.1323
Kirouac, G., & Doré, F. Y. (1985). Accuracy of the judgment of facial expression of emotions as a function of sex and level of education. Journal of Nonverbal Behavior, 9(1), 3–7. doi:10.1007/BF00987555
Kreifelts, B., Ethofer, T., Grodd, W., Erb, M., & Wildgruber, D. (2007). Audiovisual integration of emotional signals in voice and face: An event-related fMRI study. NeuroImage, 37(4), 1445–1456. doi:10.1016/j.neuroimage.2007.06.020
Kret, M. E., Pichon, S., Grèzes, J., & de Gelder, B. (2011). Men fear other men most: Gender specific brain activations in perceiving threat from dynamic faces and bodies – an fMRI study. Frontiers in Psychology, 2(3), 1–11.
Lambrecht, L., Kreifelts, B., & Wildgruber, D. (2012). Age-related decrease in recognition of emotional facial and prosodic expressions. Emotion (Washington, DC), 12(3), 529–539. doi:10.1037/a0026827
Lee, T. M. C., Liu, H.-L., Hoosain, R., Liao, W.-T., Wu, C.-T., Yuen, K. S. L., … Gao, J.-H. (2002). Gender differences in neural correlates of recognition of happy and sad faces in humans assessed by functional magnetic resonance imaging. Neuroscience Letters, 333(1), 13–16. doi:10.1016/S0304-3940(02)00965-5
Lehrl, S. (Ed.). (1977). Mehrfachwahl-Wortschatz-Intelligenztest: MWT-B. Erlangen: Perimed-Verlag Straube.
Lehrl, S., Triebig, G., & Fischer, B. (1995). Multiple choice vocabulary test MWT as a valid and short test to estimate premorbid intelligence. Acta Neurologica Scandinavica, 91(5), 335–345. doi:10.1111/j.1600-0404.1995.tb07018.x
Leppänen, J. M., & Hietanen, J. K. (2004). Positive facial expressions are recognized faster than negative facial expressions, but why? Psychological Research, 69(1–2), 22–29.

Leppänen, J. M., Tenhunen, M., & Hietanen, J. K. (2003). Faster choice-reaction times to positive than to negative facial expressions: The role of cognitive and motor processes. Journal of Psychophysiology, 17(3), 113–123. doi:10.1027//0269-8803.17.3.113
Mandal, M. K., & Palchoudhury, S. (1985). Perceptual skill in decoding facial affect. Perceptual and Motor Skills, 60(1), 96–98. doi:10.2466/pms.1985.60.1.96
McClure, E. B. (2000). A meta-analytic review of sex differences in facial expression processing and their development in infants, children, and adolescents. Psychological Bulletin, 126(3), 424–453. doi:10.1037/0033-2909.126.3.424
Merz, J., Lehrl, S., Galster, V., & Erzigkeit, H. (1975). The multiple selection vocabulary test (MSVT-B) – An accelerated intelligence test. Psychiatrie, Neurologie und medizinische Psychologie (Leipzig), 27(7), 423–428.
Miura, M. (1993). Individual differences in the perception of facial expression: The relation to sex difference and cognitive mode. Shinrigaku Kenkyu: The Japanese Journal of Psychology, 63(6), 409–413. doi:10.4992/jjpsy.63.409
Montagne, B., Kessels, R. P. C., De Haan, E. H. F., & Perrett, D. I. (2007). The Emotion Recognition Task: A paradigm to measure the perception of facial emotional expressions at different intensities. Perceptual and Motor Skills, 104(2), 589–598. doi:10.2466/pms.104.2.589-598
Montagne, B., Kessels, R. P. C., Frigerio, E., de Haan, E. H. F., & Perrett, D. I. (2005). Sex differences in the perception of affective facial expressions: Do men really lack emotional sensitivity? Cognitive Processing, 6(2), 136–141. doi:10.1007/s10339-005-0050-6
Murphy, N. A., & Hall, J. A. (2011). Intelligence and interpersonal sensitivity: A meta-analysis. Intelligence, 39(1), 54–63. doi:10.1016/j.intell.2010.10.001
Myrtek, M., Aschenbrenner, E., & Brügner, G. (2005). Emotions in everyday life: An ambulatory monitoring study with female students. Biological Psychology, 68(3), 237–255. doi:10.1016/j.biopsycho.2004.06.001
Oldfield, R. C. (1971). The assessment and analysis of handedness: The Edinburgh inventory. Neuropsychologia, 9(1), 97–113. doi:10.1016/0028-3932(71)90067-4
Preacher, K. J., & Hayes, A. F. (2008). Asymptotic and resampling strategies for assessing and comparing indirect effects in multiple mediator models. Behavior Research Methods, 40(3), 879–891. doi:10.3758/BRM.40.3.879
Proverbio, A. M., Matarazzo, S., Brignone, V., Del Zotto, M., & Zani, A. (2007). Processing valence and intensity of infant expressions: The roles of expertise and gender. Scandinavian Journal of Psychology, 48(6), 477–485. doi:10.1111/j.1467-9450.2007.00616.x
Raithel, V., & Hielscher-Fastabend, M. (2004). Emotional and linguistic perception of prosody. Reception of prosody. Folia Phoniatrica et Logopaedica: Official Organ of the International Association of Logopedics and Phoniatrics (IALP), 56(1), 7–13. doi:10.1159/000075324
Robinson, D. W., & Sutton, G. J. (1979). Age effect in hearing – A comparative analysis of published threshold data. Audiology: Official Organ of the International Society of Audiology, 18(4), 320–334.
Rosenthal, R., Hall, J. A., DiMatteo, M. R., Rogers, P. L., & Archer, D. (1979). Sensitivity to nonverbal communication: The PONS test. Baltimore, MD: Johns Hopkins University Press.
Rotter, N. G., & Rotter, G. S. (1988). Sex differences in the encoding and decoding of negative facial emotions. Journal of Nonverbal Behavior, 12(2), 139–148. doi:10.1007/BF00986931
Ruffman, T., Henry, J. D., Livingstone, V., & Phillips, L. H. (2008). A meta-analytic review of emotion recognition and aging: Implications for neuropsychological models of aging. Neuroscience and Biobehavioral Reviews, 32(4), 863–881. doi:10.1016/j.neubiorev.2008.01.001
Rymarczyk, K., & Grabowska, A. (2007). Sex differences in brain control of prosody. Neuropsychologia, 45(5), 921–930. doi:10.1016/j.neuropsychologia.2006.08.021
Saavedra-Rodríguez, L., & Feig, L. A. (2013). Chronic social instability induces anxiety and defective social interactions across generations. Biological Psychiatry, 73(1), 44–53. doi:10.1016/j.biopsych.2012.06.035
Salovey, P., & Mayer, J. (1990). Emotional intelligence. Imagination, Cognition, and Personality, 9, 185–211. doi:10.2190/DUGG-P24E-52WK-6CDG
Sato, W., Kochiyama, T., Yoshikawa, S., Naito, E., & Matsumura, M. (2004). Enhanced neural activity in response to dynamic facial expressions of emotion: An fMRI study. Brain Research. Cognitive Brain Research, 20(1), 81–91. doi:10.1016/j.cogbrainres.2004.01.008
Scherer, K., & Scherer, U. (2011). Assessing the ability to recognize facial and vocal expressions of emotion: Construction and validation of the Emotion Recognition Index. Journal of Nonverbal Behavior, 35(4), 305–326. doi:10.1007/s10919-011-0115-4

Schutte, N. S., Malouff, J. M., Bobik, C., Coston, T. D., Greeson, C., Jedlicka, C., … Wendorf, G. (2001). Emotional intelligence and interpersonal relations. The Journal of Social Psychology, 141(4), 523–536. doi:10.1080/00224540109600569
Schutte, N. S., Malouff, J. M., Hall, L. E., Haggerty, D. J., Cooper, J. T., Golden, C. J., & Dornheim, L. (1998). Development and validation of a measure of emotional intelligence. Personality and Individual Differences, 25, 167–177. doi:10.1016/S0191-8869(98)00001-4
Trautmann, S. A., Fehr, T., & Herrmann, M. (2009). Emotions in motion: Dynamic compared to static facial expressions of disgust and happiness reveal more widespread emotion-specific activations. Brain Research, 1284, 100–115. doi:10.1016/j.brainres.2009.05.075
Vaskinn, A., Sundet, K., Friis, S., Simonsen, C., Birkenaes, A. B., Engh, J. A., … Andreassen, O. A. (2007). The effect of gender on emotion perception in schizophrenia and bipolar disorder. Acta Psychiatrica Scandinavica, 116(4), 263–270. doi:10.1111/j.1600-0447.2007.00991.x
Wagner, H. L. (1993). On measuring performance in category judgment studies of nonverbal behavior. Journal of Nonverbal Behavior, 17(1), 3–28. doi:10.1007/BF00987006
Williams, L. M., Mathersul, D., Palmer, D. M., Gur, R. C., Gur, R. E., & Gordon, E. (2009). Explicit identification and implicit recognition of facial emotions: I. Age effects in males and females across 10 decades. Journal of Clinical and Experimental Neuropsychology, 31(3), 257–277. doi:10.1080/13803390802255635
Zimmermann, P., & Fimm, V. (2002). Testbatterie zur Aufmerksamkeitsprüfung (TAP) (Version 2.1). Herzogenrath: Psytest.
