a Laboratory for Neurology and Imaging of Cognition, Department of Neuroscience and Clinic of Neurology,
University Medical Center, Geneva, Switzerland
b Swiss Center for Affective Sciences, University of Geneva, Switzerland
Received 15 May 2007; received in revised form 29 August 2007; accepted 26 October 2007
Available online 4 November 2007
Abstract
The perception of emotional facial expressions induces covert imitation in emotion-specific muscles of the perceiver's face. Neural processes
involved in these spontaneous facial reactions remain largely unknown. Here we concurrently recorded EEG and facial EMG in 15 participants
watching short movie clips displaying either happy or angry facial expressions. EMG activity was recorded for the zygomaticus major (ZM) that
elevates the lips during a smile, and the corrugator supercilii (CS) that knits the eyebrows during a frown. We found increased EMG activity of CS
in response to angry expressions, and enhanced EMG activity of ZM for happy expressions, replicating earlier EMG studies. More importantly,
we found that the amplitude of an early visual evoked potential (right P1) was larger when ZM activity to happy faces was high, and when CS
activity to angry faces was high, as compared to when muscle reactions were low. Conversely, the amplitude of right N170 component was smaller
when the intensity of facial imitation was high. These combined EEG-EMG results suggest that early visual processing of face expression may
determine the magnitude of subsequent facial imitation, with dissociable effects for P1 and N170. These findings are discussed against the classical
dual-route model of face recognition.
© 2007 Elsevier Ltd. All rights reserved.
Keywords: Facial mimicry; Electromyography; Electroencephalography; Emotion; Perception
1. Introduction
Emotional communication plays a key role in social interactions in humans, and crucially depends on facial expressions.
Darwin was the first to suggest that facial expressions of emotion have a major and hard-wired biological basis for social
communication (Darwin, 1872). The configuration of facial
muscles involved in facial expressions is preserved across nonhuman primates and humans (Burrows, Waller, Parr, & Bonar,
2006; Parr, Waller, & Fugate, 2005), as well as the cortical
innervation of the motor facial nuclei in brainstem (Morecraft,
Louie, Herrick, & Stilwell-Morecraft, 2001; Morecraft, Stilwell-Morecraft, & Rossing, 2004). Moreover, human newborns
2.2. Stimuli
Ten different face identities (4 females and 6 males) were selected from
Ekman and Friesen (1976). To allow good control of the onset, duration, and
intensity of emotional expressions, we synthesized dynamic expressions from a
set of pictures morphed between the neutral expression and either the happy or
angry expression of the same face identity. We thus created pictures of these two
emotional expressions for each identity using Benson and Perrett's morphing
technique (Benson & Perrett, 1993), leading to a set of 10 frames per face
with increasing emotional intensity (0%, 15%, 30%, 45%, 60%, 70%, 80%,
90%, 100% and 110% intensity) for each emotion and each identity. Pictures
of a given identity set were presented rapidly one after the other, using the E-Prime software (Psychology Software Tools, http://www.pstnet.com). The first
9 pictures in the sequence were presented for 40 ms each, and the last one
was presented for 1100 ms, creating the compelling illusion of a short movie
clip displaying a dynamic facial expression of either anger or happiness (see
Fig. 1a for an illustration of the temporal characteristics of the stimuli and
an example of the picture used). In total, 20 different movie clips were created
following this procedure. The number of repetitions in the experimental session
was the same for each movie.
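The frame timing described above (nine 40 ms morph frames of increasing intensity followed by a 1100 ms final frame) can be sketched in Python; the intensity list is taken from the text, while the function name and structure are ours:

```python
# Presentation schedule of one morphed movie clip, as described above:
# nine 40 ms frames of increasing emotional intensity, then the final
# (110%) frame shown for 1100 ms.
INTENSITIES = [0, 15, 30, 45, 60, 70, 80, 90, 100, 110]  # percent morph

def clip_schedule(frame_ms=40, last_ms=1100):
    """Return (intensity, duration_ms) pairs for one movie clip."""
    schedule = [(i, frame_ms) for i in INTENSITIES[:-1]]
    schedule.append((INTENSITIES[-1], last_ms))
    return schedule

schedule = clip_schedule()
total = sum(duration for _, duration in schedule)
print(total)  # 9 * 40 + 1100 = 1460 ms, matching the clip duration in Section 2.3
```

With 10 identities and 2 emotions, this procedure yields the 20 distinct clips mentioned in the text.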
2.3. Procedure
Subjects were tested individually. After EEG and EMG electrodes were
placed, they were seated in front of a computer screen where the dynamic facial
stimuli were displayed (16.2° × 10.9° of visual angle). Five blocks of 50 clips
(125 smiling faces and 125 angry faces) were presented to each subject. Each
Fig. 1. (a) Example of the time frame used for an angry and a happy stimulus. The presentation time of each time frame is reported in the scale above the pictures.
(b and c) Mean facial EMG responses to happy and angry facial expressions recorded in the CS (b) and in the ZM (c), plotted in 100 ms bins after the stimulus onset.
This figure shows activation averaged over the left and right muscles.
clip (1460 ms duration) was preceded by a central fixation cross for 1450 ms,
and separated by a varying intertrial interval (2800–5200 ms; mean = 4000 ms).
The order of presentation of movie clips was randomized in each block and for
each subject. Subjects passively viewed the movie clips for three of the five
blocks, and for the two remaining blocks they reported whether the expression
was positive or negative by pressing a key at the end of the movie. The subject
was asked to manually respond only after the offset of the movie clip in order to
avoid movement artifacts contaminating the EEG or EMG recording. This task
was primarily introduced to maintain sufficient attention during the last blocks,
but it was not considered in our analysis. The subjects were told that the EEG
system measured their cerebral activity while the facial electrodes were placed
to monitor ocular activities. The subjects were thus blind to the exact purpose of
the study and in particular to the fact that facial muscle activity was measured.
After the recording, subjects were asked to answer the STAI questionnaire to
measure trait anxiety (Spielberger, 1983), as well as the empathy quotient (EQ)
questionnaire (Baron-Cohen & Wheelwright, 2004) to assess empathy level. In
subsequent debriefing, all subjects reported being unaware of our recording
of their facial muscle response and of any (voluntary or involuntary) facial
mimicry.
Fig. 2. Topographical maps (upper view and back view of the head) depicting the scalp distribution of electrical activity at P1 latency on the left side (a and b) and
at N170 latency on the right side (c and d) for both happy and angry expressions, regardless of the intensity of facial mimicry.
Gratton & Coles' method (Gratton, Coles, & Donchin, 1983), and EEG activity
was re-referenced to an average reference. Finally, after baseline correction,
trials showing extreme amplitudes were rejected using a dynamic procedure
(mean threshold ±70 μV; range: 45–95 μV), so as to keep an average of 70%
of trials for each condition.
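The dynamic rejection step just described can be illustrated with the following Python sketch. This is our reconstruction, not the authors' algorithm: the idea of raising a per-recording amplitude threshold within the reported 45–95 μV range until roughly 70% of trials survive, and the synthetic data, are assumptions.

```python
import numpy as np

def dynamic_reject(epochs, keep_frac=0.70, thresholds=range(45, 100, 5)):
    """Illustrative dynamic artifact rejection (our reconstruction):
    starting from the strictest limit, raise the peak-to-peak amplitude
    threshold (in microvolts) until at least keep_frac of trials survive.
    epochs: array of shape (n_trials, n_channels, n_samples)."""
    # Worst-channel peak-to-peak amplitude for every trial
    ptp = np.ptp(epochs, axis=-1).max(axis=-1)
    for thr in thresholds:
        keep = ptp <= thr
        if keep.mean() >= keep_frac:
            return keep, thr
    return ptp <= max(thresholds), max(thresholds)

rng = np.random.default_rng(0)
epochs = rng.normal(scale=8.0, size=(200, 32, 256))  # synthetic EEG, microvolts
keep, thr = dynamic_reject(epochs)
```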
For each emotional condition (angry vs. happy faces), the EEG trials were
then split into two groups according to the EMG activity recorded for each
muscle on any given trial (high or low relative to the median of the corresponding
trial type). Median split was chosen as the most efficient and direct way to
separate trials in two opposite categories of muscle activity. Thus, we obtained
two different distributions of EEG epochs for each emotion condition, one based
on the CS activity and the other one based on the ZM activity. In total, eight
conditions were obtained: high-CS activity in response to angry faces (CAhi),
low-CS activity in response to angry faces (CAlo), high-ZM activity in response
to angry faces (ZAhi), low-ZM activity in response to angry faces (ZAlo), high-CS activity in response to happy faces (CHhi), low-CS activity in response
to happy faces (CHlo), high-ZM activity in response to happy faces (ZHhi),
low-ZM activity in response to happy faces (ZHlo). The final number of trials
kept after artifact rejection and trial selection procedures did not significantly
differ between conditions, as verified by a repeated measure ANOVA with the
factors muscle (corrugator vs. zygomatic), emotion (happy vs. angry) and
intensity of EMG mimicry (high vs. low). This analysis showed no main effect
or interaction, suggesting that a comparable number of trials was used for each
condition (on average 43 trials for CAhi and CAlo, 41.5 trials for CHhi, 42.9
trials for CHlo, 43.8 trials for ZAhi, 42.5 trials for ZAlo and ZHhi and 42.2
trials for ZHlo). For statistical comparisons of ERPs, only the emotion-relevant
muscle was considered for each expression type (i.e. corrugator activity for
movies of angry faces and zygomatic activity for movies of happy faces). Note
that there was neither a difference in EEG response to happy faces as a function
of corrugator activity, nor to angry faces as a function of zygomatic activity.
Note also that there was no difference in corrugator activity in response to
happy faces when considering trials evoking high-zygomatic activity vs. trials
evoking low-zygomatic activity. In the same way, there was no difference in
zygomatic activity in response to angry faces when considering trials evoking
high corrugator activity vs. trials evoking low corrugator activity.
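The median-split sorting of trials described above can be sketched as follows (illustrative Python; the function name and the simulated single-trial EMG values are ours):

```python
import numpy as np

def median_split(emg_amplitudes):
    """Split trials into high vs. low mimicry relative to the median EMG
    response, as in the trial-sorting procedure described above."""
    emg = np.asarray(emg_amplitudes, dtype=float)
    med = np.median(emg)
    return emg > med, emg <= med  # boolean masks: high trials, low trials

# Example: sort simulated single-trial corrugator responses to angry faces
# into the CAhi / CAlo conditions (86 fake trials; values are ours).
rng = np.random.default_rng(1)
cs_angry = rng.gamma(shape=2.0, scale=1.0, size=86)
ca_hi, ca_lo = median_split(cs_angry)
```

Applying this split per muscle and per emotion yields the eight conditions (CAhi/CAlo, ZAhi/ZAlo, CHhi/CHlo, ZHhi/ZHlo) listed above.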
To simplify the analysis, and because muscular activity is thought to be
more intense on the left side of the face (Dimberg & Petterson, 2000; Zhou
& Hu, 2004), trials were classified according to EMG activity from the left
side only. However, it is noteworthy that complementary analyses taking into
account EMG activity of the right side of the face led to the same pattern of
results (including the same lateralization effects in EEG, see below). Therefore, our results generalize to mimicry activity recorded from either side of the
face.
EEG epochs of the trials from each EMG condition were averaged to
obtain event-related potentials (ERPs) for each subject, filtered with a 30 Hz
low-pass filter (as recommended in the literature; Picton et al., 2000) and downsampled to 512 Hz. A few noisy electrodes (less than seven per recording) were
interpolated in Cartool 3.22 using a standard spherical spline transformation
(http://brainmapping.unige.ch). Early visual ERP components (P1, N170) were
measured at the electrodes where they were most prominent, as verified by
inspection of scalp topographic maps for these two components: P1 was measured at O1, O2, PO3, PO4, PO7 and PO8 (Fig. 2a and b), while N170 had a more
lateral occipito-temporal scalp distribution and was measured at PO7, PO8, P9
and P10 (Fig. 2c and d). Both amplitude and latency values were calculated.
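The filtering and downsampling steps can be illustrated with SciPy. This is a sketch under stated assumptions: the text specifies only the 30 Hz cutoff and the 512 Hz target rate, so the original sampling rate and filter order below are invented, and the Cartool electrode interpolation is omitted:

```python
import numpy as np
from scipy.signal import butter, filtfilt, resample

def postprocess_erp(erp, fs_in=1024, fs_out=512, cutoff=30.0, order=4):
    """Zero-phase 30 Hz low-pass, then downsample to fs_out.
    fs_in and the filter order are assumptions; only the 30 Hz cutoff
    and 512 Hz target rate come from the text."""
    b, a = butter(order, cutoff / (fs_in / 2), btype="low")
    filtered = filtfilt(b, a, erp)  # zero-phase filtering avoids latency shifts
    n_out = int(round(len(erp) * fs_out / fs_in))
    return resample(filtered, n_out)

erp = np.random.default_rng(2).normal(size=1024)  # 1 s of fake averaged ERP
out = postprocess_erp(erp)
```

Zero-phase (forward-backward) filtering matters here because component latencies (P1, N170) are themselves measured.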
3. Results
3.1. EMG data
We first examined EMG activity in bilateral CS and ZM muscles in response to angry and happy faces, as compared to the
preceding baseline. The time-course of muscular response to
both facial expressions is shown in Fig. 1 for both muscles.
There was a clear pattern of covert mimicry in EMG activity.
Table 1
(a) Corrugator supercilii and (b) zygomaticus major activity in response to angry and happy faces for each 100 ms time interval (100–1400 ms after stimulus onset), followed by the statistics (in italics) testing the difference between angry and happy activation for each time bin; significant p values (p < 0.05) are in bold. [Table values not recoverable from the extracted text.]
1 We calculated the proportion of trials for which a given face stimulus was associated with a high- or a low-EMG facial mimicry, for each movie clip and each condition, across all subjects. For each angry face, 46–57% of the presentations triggered a high-CS response, and none of these proportions differed significantly from 50%. Similarly, for each happy face, 47–53% of the presentations triggered a high-ZM response, and none of these proportions was significantly different from 50%. In other words, all movie clips elicited high- or low-facial mimicry on a similar number of trials across all subjects.
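The footnote's check can be reproduced in spirit with a binomial test against 50% (illustrative; the counts below are invented so that the proportion falls within the reported range, and the exact statistic used by the authors is not stated):

```python
from scipy.stats import binomtest

# Illustrative reconstruction: for one movie clip, does the proportion of
# high-mimicry presentations differ from chance (50%)?
n_presentations = 75   # assumed number of presentations of this clip
n_high = 42            # 56% high-CS responses, within the reported 46-57% range
result = binomtest(n_high, n_presentations, p=0.5)
print(result.pvalue > 0.05)  # True: not significantly different from 50%
```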
Fig. 3. Grand average ERPs for P1 on trials with stronger vs. weaker mimicry in emotion-relevant muscle, showing brain response to faces when EMG disclosed
high-ZM vs. low-ZM activity in response to happy expressions (a); high-CS vs. low-CS activity in response to angry expressions (b). Data were averaged over
electrodes where the P1 component was maximal for each hemisphere. * p < 0.05.
For trials with happy faces, the ANOVA disclosed a significant muscle × intensity of EMG mimicry × hemisphere interaction [F(1,14) = 5.982, p < 0.03]. When we considered each muscle separately in this condition, we found that the ZM showed a significant intensity of EMG mimicry × hemisphere interaction [F(1,14) = 9.585, p < 0.01], in addition to a significant main effect of electrode [F(2,14) = 14.509, ε = 0.813, p < 0.001].
As this interaction did not include an electrode effect, we
averaged the values of the three electrodes from each hemisphere. Fig. 3a shows the resulting mean P1 amplitudes for
both the right and the left hemisphere, which was modulated
by the intensity of mimicry to happy faces on the right side,
but not on the left. Post-hoc t-tests confirmed that P1 amplitudes were larger in the right-hemisphere electrodes when ZM
muscle activity was high as compared to low [t(14) = 2.663,
p < 0.01]. This result indicated that the higher the P1 amplitude
over the right hemisphere, the stronger the ZM mimicry to happy
faces. By contrast for CS, the intensity of EMG mimicry did not
reliably modulate P1 amplitude for happy faces; only the electrode main effect was significant [F(2,28) = 14.669, ε = 0.823, p < 0.001].
For trials with angry faces, the muscle × intensity of EMG mimicry interaction was not significant. However, when ERPs were averaged over the same three electrodes from each hemisphere (as for the happy condition above), the comparison
between trials with high and low-CS activity in response to
the angry faces (Fig. 3b) revealed the same modulation of
amplitude as that observed for happy faces. Moreover, note
that the significant (p < 0.03) quadruple interaction of emotion × muscle × intensity of EMG mimicry × hemisphere for
P1 amplitude found at the first level of the statistical analysis
(see above) further justified this direct decomposition. Thus, a
post-hoc paired t-test of mean P1 amplitude between these two
Fig. 4. Grand average ERPs for N170 on trials with stronger vs. weaker mimicry in emotion-relevant muscle, showing brain response to faces when EMG revealed
high-ZM vs. low-ZM activity in response to happy expressions (a); high-CS vs. low-CS activity in response to angry expressions (b). Data were averaged over
electrodes where N170 component was maximal for each hemisphere. * p < 0.05.
p > 0.07), suggesting that these two effects most likely reflect
distinct underlying mechanisms.
Finally, there were no modulations of ERPs at later latencies,
encompassing P2 and later components.
4. Individual personality factors
Because intensity of facial mimicry has previously been
related to emotionality (Hess et al., 1998; Sonnby-Borgström, 2002; Sonnby-Borgström & Jönsson, 2004; Sonnby-Borgström
et al., 2003) and empathy (de Wied et al., 2006; Hermans et
al., 2006), we tested for any correlation between the intensity of
EMG responses to facial expressions and individual personality
traits measured by questionnaires (STAI and EQ). Intensity of
mimicry in the CS was assessed for each subject by subtracting the mean EMG activity over the 1400 ms time window after
happy expressions onset from the mean EMG activity over the
1400 ms time window after angry expression onset. Conversely,
mean ZM activity in response to angry expressions was subtracted from mean ZM activity in response to happy expressions.
However, the intensity of facial mimicry in each muscle was correlated with neither levels of anxiety nor levels of empathy (all
r < 0.33, p > 0.1).
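The mimicry index and correlation analysis just described can be sketched as follows (illustrative Python; all data are simulated, and a Pearson correlation is assumed since the text does not name the statistic):

```python
import numpy as np
from scipy.stats import pearsonr

def cs_mimicry_index(cs_after_angry, cs_after_happy):
    """Per-subject CS mimicry index as described above: mean CS activity
    over the 1400 ms after angry-face onset minus the same measure for
    happy faces (larger values = stronger frown-specific mimicry)."""
    return float(np.mean(cs_after_angry) - np.mean(cs_after_happy))

# Illustrative correlation of mimicry indices with a trait score;
# all values below are simulated, not the study's data.
rng = np.random.default_rng(3)
indices = rng.normal(size=15)                    # one index per subject (n = 15)
stai = rng.normal(loc=40.0, scale=8.0, size=15)  # fake STAI trait-anxiety scores
r, p = pearsonr(indices, stai)
```

The analogous ZM index reverses the subtraction (happy minus angry), as stated above.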
5. Discussion
To our knowledge, this study is the first to combine simultaneous EEG and EMG recordings to assess the pattern of
neural activation associated with involuntary mimicry of emotional facial expressions. By tracking the time-course of both
EEG and EMG responses to dynamic emotional faces, we were
able to identify processing stages that were differentially activated as a function of mimicry intensity, on a trial-by-trial
basis, i.e. when covert facial imitation was higher as compared with when it was lower. Our study provides several new
findings.
First, our results replicate the facial imitation phenomenon
in 15 healthy subjects using synthesized dynamic facial expressions, with selective modulation of ZM muscles by the
presentation of happy faces, and of CS muscle by the presentation of angry faces. These data extend previous observations
of mimicry elicited by static pictures of faces (Dimberg &
Petterson, 2000; Dimberg & Thunberg, 1998; Dimberg et al.,
2000, 2002) by showing that similar EMG responses can reliably be measured when the emotion in the face is dynamically
expressed. This was the case even though we used artificial
movie clips of morphed (non-natural) expressions. Moreover,
it is remarkable that such reactions were still measurable after
many repetitions of the same facial expressions (125 trials each,
see Section 2), without any apparent habituation, as there were
as many trials with high levels of imitation in the second half of
the experiment as in the first half (data not shown). In contrast
to this large number of trials needed for reliable EEG recordings in our study, previous behavioral studies on facial mimicry
have used only very few trials with emotional expressions (either
four (de Wied et al., 2006; Hess & Blairy, 2001; Hess et al.,
1998), six (Dimberg & Petterson, 2000; Dimberg et al., 2002)
or eight (Dimberg & Thunberg, 1998; Vrana & Gross, 2004;
Weyers et al., 2006)). Therefore, our study indicates that facial
mimicry can be repeatedly elicited over many successive trials,
supporting a strong degree of automaticity for this phenomenon,
and establishing an opportunity for repeated measurements of
the concomitant neural activity. The automatic nature of this
phenomenon is also substantiated by the fact that none of our
subjects was actually aware of the recording of facial EMG and
mimicry.
Secondly, our EMG data revealed different temporal dynamics for CS and ZM responses. The CS activity was enhanced