
Neuropsychologia 46 (2008) 1104–1113

Simultaneous recording of EEG and facial muscle reactions during spontaneous emotional mimicry
Amal Achaibou a,b,*, Gilles Pourtois a,b, Sophie Schwartz a, Patrik Vuilleumier a,b
a Laboratory for Neurology and Imaging of Cognition, Department of Neuroscience and Clinic of Neurology, University Medical Center, Geneva, Switzerland
b Swiss Center for Affective Sciences, University of Geneva, Switzerland
Received 15 May 2007; received in revised form 29 August 2007; accepted 26 October 2007
Available online 4 November 2007

Abstract
The perception of emotional facial expressions induces covert imitation in emotion-specific muscles of the perceiver's face. The neural processes involved in these spontaneous facial reactions remain largely unknown. Here we concurrently recorded EEG and facial EMG in 15 participants watching short movie clips displaying either happy or angry facial expressions. EMG activity was recorded for the zygomaticus major (ZM), which elevates the lips during a smile, and the corrugator supercilii (CS), which knits the eyebrows during a frown. We found increased EMG activity of the CS in response to angry expressions, and enhanced EMG activity of the ZM for happy expressions, replicating earlier EMG studies. More importantly, we found that the amplitude of an early visual evoked potential (right P1) was larger when ZM activity to happy faces was high, and when CS activity to angry faces was high, as compared to when muscle reactions were low. Conversely, the amplitude of the right N170 component was smaller when the intensity of facial imitation was high. These combined EEG–EMG results suggest that early visual processing of facial expression may determine the magnitude of subsequent facial imitation, with dissociable effects for P1 and N170. These findings are discussed in relation to the classical dual-route model of face recognition.
© 2007 Elsevier Ltd. All rights reserved.
Keywords: Facial mimicry; Electromyography; Electroencephalography; Emotion; Perception

Abbreviations: EEG, electroencephalography; EMG, electromyography; CS, corrugator supercilii; ZM, zygomaticus major.
* Corresponding author at: LabNIC, Department of Neuroscience, University Medical Center (CMU), Rue Michel-Servet 1, CH-1211 Geneva, Switzerland. Tel.: +41 22 37 95 361; fax: +41 22 37 95 402. E-mail address: amal.achaibou@medecine.unige.ch (A. Achaibou).
doi:10.1016/j.neuropsychologia.2007.10.019

1. Introduction
Emotional communication plays a key role in social interactions in humans, and crucially depends on facial expressions. Darwin was the first to suggest that facial expressions of emotion have a major and hard-wired biological basis for social communication (Darwin, 1872). The configuration of facial muscles involved in facial expressions is preserved across non-human primates and humans (Burrows, Waller, Parr, & Bonar, 2006; Parr, Waller, & Fugate, 2005), as is the cortical innervation of the motor facial nuclei in the brainstem (Morecraft, Louie, Herrick, & Stilwell-Morecraft, 2001; Morecraft, Stilwell-Morecraft, & Rossing, 2004). Moreover, human newborns produce spontaneous prototypical emotional facial expressions (Field, Woodson, Greenberg, & Cohen, 1982; Soussignan,
Schaal, Marlier, & Jiang, 1997), suggesting that emotional expressions can somehow be elicited in a relatively automatic manner from an early age on. Likewise, in adults, facial reactions are also evoked by the observation of emotional expressions in other people's faces.
In particular, Dimberg et al. (Dimberg & Petterson, 2000;
Dimberg & Thunberg, 1998; Dimberg, Thunberg, & Elmehed,
2000; Dimberg, Thunberg, & Grunedal, 2002) have shown that
distinctive patterns of facial muscle activity may be elicited by
the perception of different categories of basic facial expressions.
This activity can reliably be measured by EMG in two distinct
muscles, the corrugator supercilii (CS), which knits the eyebrows when frowning, and the zygomaticus major (ZM), which
elevates the lips when smiling. A covert increase in CS activity is
typically measured in response to angry faces, while ZM activity
is increased in response to happy faces. These findings indicate
a spontaneous tendency of the subject to mimic the emotional
expression seen in another face, even if the latter is presented in static images. This reaction occurs relatively quickly, within
500 ms after the stimulus onset (Dimberg & Thunberg, 1998),
but also unconsciously, as it may persist when the emotional
expression is masked by a neutral face presented immediately
after the expressive face (Dimberg et al., 2000), arguing in
favor of the automaticity of the generated facial expressions.
Such facial mimicry is also independent of voluntary control
(Dimberg et al., 2002), as it can still be observed when the subject is asked to inhibit any facial movement or even to react with
incongruent facial movements (i.e. frown in response to happy
faces and smile in response to angry faces).
Spontaneous facial mimicry provides unique insights into the
biological determinants of emotional responses. Accordingly,
facial mimicry has been extensively studied in psychophysiology, particularly with the aim of understanding its links with
empathy and emotional contagion (Hess, Philippot, & Blairy,
1998; Sonnby-Borgstrom, 2002; Sonnby-Borgstrom & Jonsson,
2004; Sonnby-Borgstrom, Jonsson, & Svensson, 2003) or to
determine an objective marker for empathy (de Wied et al., 2006;
Hermans, Putman, & van Honk, 2006). Because facial mimicry
reflects spontaneous reactions of viewers to emotional stimuli,
identifying the neural correlates of this phenomenon can provide important insights into cerebral mechanisms of emotion
and social cognition. However, little is still known about the neural processes underlying facial mimicry, and in particular whether emotional face perception (including visual processing) and the subsequent emotional reaction (i.e. mimicry) are
somehow related or completely independent processes. Multimodal approaches combining different psychophysiological
measures simultaneously are crucial in order to understand the
link between peripheral (bodily) and central (cerebral) events
during emotional processing.
The aim of the current study was to identify the pattern of
brain activity associated with the intensity of facial mimicry.
To track neural responses possibly implicated in the generation
of mimicry, we recorded high-density EEG simultaneously with EMG activity from both left and right ZM and CS muscles, while subjects viewed dynamic faces expressing two different emotional expressions (happy or angry). Our main goal was to measure any differential brain responses arising when subjects show stronger vs. weaker facial mimicry on a trial-by-trial basis, with facial stimuli being physically identical across
these two kinds of trials. We used dynamic facial expressions
of emotion, rather than static expressions, because dynamic expressions are more ecologically valid than static displays of facial emotions, and because it has been shown that they lead to better recognition of the type of emotional expression (Wehrle, Kaiser, Schmidt, & Scherer, 2000) and to more intense mimicry (Sato & Yoshikawa, 2006; Weyers, Muhlberger, Hefele, & Pauli, 2006). To allow good temporal control of our stimuli, we used artificial expressions (see Section 2) made of a sequence of pictures of morphed expressions with increasing emotional intensity. Although these stimuli did not preserve precise information about the onset, apex, and offset of each muscle movement, the increase of expression across the picture sequence was made non-linear in order to render the stimuli more natural.


Our EEG analysis focused on classic sensory components of the visual event-related potential (ERP), including the P1
and N170 components. The P1 component is generated in the
extrastriate cortex as early as 100 ms after a visual stimulus is
presented. It has been shown that P1 amplitude may be sensitive to emotional aspects of facial expressions (Batty & Taylor,
2003; Pourtois, Dan, Grandjean, Sander, & Vuilleumier, 2005).
The N170 component is associated with structural encoding of
faces (Bentin, Allison, Puce, Perez, & McCarthy, 1996; Bentin,
Deouell, & Soroker, 1999; Eimer & McCarthy, 1999). Some
studies have shown that both N170 latency and amplitude may
be modulated by emotional expressions (Ashley, Vuilleumier,
& Swick, 2004; Batty & Taylor, 2003; Campanella, Quinet,
Bruyer, Crommelinck, & Guerit, 2002) while other studies found
that N170 was insensitive to emotional expressions (Eimer &
Holmes, 2002). Thus, it is still unresolved whether emotional
expressions have an impact on this ERP component, and what
factors may contribute to a modulation of the N170 by emotion
in some situations but not in others.
Because previous behavioral studies (Dimberg & Thunberg,
1998; Dimberg et al., 2000) have reported rapid and unconscious
mimicry in EMG response to emotional faces, we predicted that
any concomitant modulation of EEG response during stronger
vs. weaker mimicry might take place at relatively early stages
of processing after stimulus onset.
2. Materials and methods
2.1. Participants
Fifteen healthy subjects (1 left-handed, 10 females, mean age 26 years, range 22–35 years) gave their written informed consent to participate in a single session
lasting about 90 min. All participants had normal or corrected-to-normal vision,
and no previous psychiatric or neurological history.

2.2. Stimuli
Ten different face identities (4 females and 6 males) were selected from
Ekman and Friesen (1976). To allow good control of the onset, duration, and
intensity of emotional expressions, we synthesized dynamic expressions from a
set of pictures morphed between the neutral expression and either the happy or
angry expression of the same face identity. We thus created pictures of these two
emotional expressions for each identity using Benson and Perrett's morphing
technique (Benson & Perrett, 1993), leading to a set of 10 frames per face
with increasing emotional intensity (0%, 15%, 30%, 45%, 60%, 70%, 80%,
90%, 100% and 110% intensity) for each emotion and each identity. Pictures
of a given identity set were presented rapidly one after the other, using the E-prime software (Psychology Software Tools, http://www.pstnet.com). The first
9 pictures in the sequence were presented for 40 ms each, and the last one
was presented for 1100 ms, creating the compelling illusion of a short movie
clip displaying a dynamic facial expression of either anger or happiness (see
Fig. 1a for an illustration of the temporal characteristics of the stimuli and an example of the pictures used). In total, 20 different movie clips were created
following this procedure. The number of repetitions in the experimental session
was the same for each movie.
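For concreteness, the timing of one movie clip can be sketched as follows: a minimal Python illustration based on the frame durations and morph intensities given above (the variable names are ours, not part of the original materials):

    # Frame schedule for one clip: 9 morph frames of 40 ms each, then a
    # final frame of 1100 ms, i.e. a 1460 ms clip (cf. Section 2.3).
    intensities = [0, 15, 30, 45, 60, 70, 80, 90, 100, 110]  # % emotional intensity
    durations_ms = [40] * 9 + [1100]
    assert sum(durations_ms) == 1460
    onset = 0
    for pct, dur in zip(intensities, durations_ms):
        print(f"t = {onset:4d} ms: {pct:3d}% morph shown for {dur} ms")
        onset += dur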

2.3. Procedure
Subjects were tested individually. After EEG and EMG electrodes were
placed, they were seated in front of a computer screen where the dynamic facial
stimuli were displayed (16.2° × 10.9° of visual angle). Five blocks of 50 clips
(125 smiling faces and 125 angry faces) were presented to each subject. Each
clip (1460 ms duration) was preceded by a central fixation cross for 1450 ms,
and separated by a varying intertrial interval (2800–5200 ms; mean = 4000 ms).
The order of presentation of movie clips was randomized in each block and for
each subject. Subjects passively viewed the movie clips for three of the five
blocks, and for the two remaining blocks they reported whether the expression
was positive or negative by pressing a key at the end of the movie. The subject
was asked to manually respond only after the offset of the movie clip in order to
avoid movement artifacts contaminating the EEG or EMG recording. This task
was primarily introduced to maintain sufficient attention during the last blocks,
but it was not considered in our analysis. The subjects were told that the EEG
system measured their cerebral activity while the facial electrodes were placed
to monitor ocular activities. The subjects were thus blind to the exact purpose of
the study and in particular to the fact that facial muscle activity was measured.
After the recording, subjects were asked to answer the STAI questionnaire to
measure trait anxiety (Spielberger, 1983), as well as the empathy quotient (EQ)
questionnaire (Baron-Cohen & Wheelwright, 2004) to assess empathy level. In
subsequent debriefing, all subjects reported being unaware of our recording of their facial muscle response and of any (voluntary or involuntary) facial mimicry.

Fig. 1. (a) Example of the time frames used for an angry and a happy stimulus. The presentation time of each frame is reported on the scale above the pictures. (b and c) Mean facial EMG responses to happy and angry facial expressions recorded in the CS (b) and in the ZM (c), plotted in 100 ms bins after the stimulus onset. This figure shows activation averaged over the left and right muscles.

2.4. Electrophysiological recordings: EMG


EMG was recorded using a BIOSEMI Active-Two amplifier system from
eight active electrodes corresponding to four distinct bipolar montages. The
electrodes were attached on both the left and right sides of the face, two over
each ZM and each CS muscle region (Fridlund & Cacioppo, 1986). Two additional electrodes, the common mode sense [CMS] active electrode and the driven right leg [DRL] passive electrode, were used as reference and ground electrodes, respectively (http://www.biosemi.com/faq/cms&drl.htm). The EMG was continuously recorded at 2048 Hz, with a 0.1–417 Hz band-pass filter. The raw data were segmented offline into 2400 ms epochs, including a 1000 ms pre-stimulus baseline, and digitally filtered with a 20–400 Hz band-pass in Brain Vision Analyzer Version 1.04 (Brain Products GmbH). These specific parameters for facial EMG acquisition and analysis were selected to conform to published guidelines for this psychophysiological technique (DeLuca, 1997; van Boxtel, 2001).
Conventional bipolar montages were then calculated from electrode pairs for each muscle in Matlab (http://www.mathworks.com) by subtracting the activity of one electrode placed over the muscle from the activity of the other electrode nearby. The magnitude of the EMG signal (corresponding to the strength of muscular activity) was determined in Matlab by calculating the root-mean-square (RMS) over 100 ms interval bins after the onset of each stimulus. Trials in which the difference in activity between a given time-bin and the following one exceeded 3.5 standard deviations of the mean value for the whole trial were rejected. On average, seven trials per subject were rejected for angry facial expressions, and eight trials for happy facial expressions. For each accepted trial, RMS was
calculated for the 1000 ms baseline as well as for 14 successive 100 ms bins
after the stimulus onset. Finally, the RMS of the baseline was subtracted from
each time interval. For each subject, trials with angry expressions and trials with
happy expressions were averaged separately for each of the four facial muscles.
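As an illustration of the processing steps just described, the following minimal NumPy sketch computes baseline-corrected RMS in 100 ms bins and applies the 3.5 SD rejection criterion; it assumes one bipolar EMG epoch sampled at 2048 Hz, and all function and variable names are ours:

    import numpy as np

    FS = 2048                 # sampling rate (Hz)
    BIN = int(0.1 * FS)       # samples per 100 ms bin

    def rms(x):
        return np.sqrt(np.mean(x ** 2))

    def emg_bins(epoch):
        """epoch: 2400 ms bipolar EMG segment (20-400 Hz band-passed),
        with a 1000 ms baseline followed by 1400 ms post-stimulus."""
        baseline, post = epoch[:FS], epoch[FS:]
        bins = np.array([rms(post[i * BIN:(i + 1) * BIN]) for i in range(14)])
        return bins - rms(baseline)   # subtract baseline RMS from each bin

    def reject(bins, n_sd=3.5):
        """Flag trials where two successive bins differ by more than 3.5
        standard deviations of the bin values over the whole trial
        (our reading of the criterion stated above)."""
        return np.any(np.abs(np.diff(bins)) > n_sd * np.std(bins))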

2.5. Electrophysiological recordings: EEG


Scalp-EEG was amplified using the same BIOSEMI Active-Two amplifier
system and was recorded from 64 active electrodes mounted in an elastic cap
and evenly distributed over the head surface according to the extended 10–20 EEG system (Oostenveld & Praamstra, 2001). These 64 electrodes included conventional midline sites (FPz, AFz, Fz, FCz, Cz, CPz, Pz, POz, Oz, Iz); Fp1, AF3, AF7, F1, F3, F5, F7, FC1, FC3, FC5, FT7, C1, C3, C5, T7, CP1, CP3, CP5, TP7, P1, P3, P5, P7, P9, PO3, PO7, O1 in the left hemisphere; and the homologous even-numbered recording sites in the right hemisphere.
The acquisition rate and filtering parameters were the same as for EMG (see
above). Using Brain Vision Analyzer, we first digitally filtered the raw data with a 1 Hz high-pass filter when slow drifts were visible (for nine subjects), and segmented the recording into 1200 ms epochs (with a 200 ms baseline) for each
trial. All trials rejected for the EMG analysis were automatically rejected for
the EEG analysis. In addition, trials containing blinks were corrected using the
Gratton & Coles method (Gratton, Coles, & Donchin, 1983), and EEG activity
was re-referenced to an average reference. Finally, after baseline correction,
trials showing extreme amplitudes were rejected using a dynamic procedure (mean threshold 70 µV; range: 45–95 µV), so as to keep an average of 70% of trials for each condition.

Fig. 2. Topographical maps (upper view and back view of the head) depicting the scalp distribution of electrical activity at P1 latency on the left side (a and b) and at N170 latency on the right side (c and d), for both happy and angry expressions, regardless of the intensity of facial mimicry.
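For readers who want to reproduce this kind of pipeline, a roughly equivalent sequence can be written with the open-source MNE-Python package (the authors used Brain Vision Analyzer and Cartool; the file name, event extraction, and exact thresholds below are therefore only illustrative):

    import mne

    raw = mne.io.read_raw_bdf("subject01.bdf", preload=True)  # BioSemi file (hypothetical name)
    raw.filter(l_freq=1.0, h_freq=None)        # high-pass to remove slow drifts
    events = mne.find_events(raw)              # trigger codes (illustrative)
    epochs = mne.Epochs(raw, events, tmin=-0.2, tmax=1.0,
                        baseline=(-0.2, 0.0),
                        reject=dict(eeg=70e-6),  # ~70 µV rejection threshold
                        preload=True)
    epochs.set_eeg_reference("average")        # average reference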
For each emotional condition (angry vs. happy faces), the EEG trials were
then split into two groups according to the EMG activity recorded for each
muscle on any given trial (high or low relative to the median of the corresponding
trial type). Median split was chosen as the most efficient and direct way to
separate trials in two opposite categories of muscle activity. Thus, we obtained
two different distributions of EEG epochs for each emotion condition, one based
on the CS activity and the other one based on the ZM activity. In total, eight
conditions were obtained: high-CS activity in response to angry faces (CAhi),
low-CS activity in response to angry faces (CAlo), high-ZM activity in response
to angry faces (ZAhi), low-ZM activity in response to angry faces (ZAlo), high-CS activity in response to happy faces (CHhi), low-CS activity in response
to happy faces (CHlo), high-ZM activity in response to happy faces (ZHhi),
low-ZM activity in response to happy faces (ZHlo). The final number of trials
kept after artifact rejection and trial selection procedures did not significantly
differ between conditions, as verified by a repeated-measures ANOVA with the
factors muscle (corrugator vs. zygomatic), emotion (happy vs. angry) and
intensity of EMG mimicry (high vs. low). This analysis showed no main effect
or interaction, suggesting that a comparable number of trials was used for each
condition (on average 43 trials for CAhi and CAlo, 41.5 trials for CHhi, 42.9
trials for CHlo, 43.8 trials for ZAhi, 42.5 trials for ZAlo and ZHhi and 42.2
trials for ZHlo). For statistical comparisons of ERPs, only the emotion-relevant
muscle was considered for each expression type (i.e. corrugator activity for
movies of angry faces and zygomatic activity for movies of happy faces). Note
that there was neither a difference in EEG response to happy faces as a function
of corrugator activity, nor to angry faces as a function of zygomatic activity.
Note also that there was no difference in corrugator activity in response to
happy faces when considering trials evoking high-zygomatic activity vs. trials
evoking low-zygomatic activity. In the same way, there was no difference in
zygomatic activity in response to angry faces when considering trials evoking
high corrugator activity vs. trials evoking low corrugator activity.
To simplify the analysis, and because muscular activity is thought to be
more intense on the left side of the face (Dimberg & Petterson, 2000; Zhou
& Hu, 2004), trials were classified according to EMG activity from the left
side only. However, it is noteworthy that complementary analyses taking into
account EMG activity of the right side of the face led to the same pattern of
results (including the same lateralization effects in EEG, see below). Therefore, our results generalize to mimicry activity recorded from either side of the
face.
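The trial classification amounts to a within-condition median split, which could be expressed as follows (a sketch with pandas; the data frame and its columns are hypothetical stand-ins for the single-trial EMG measures):

    import numpy as np
    import pandas as pd

    rng = np.random.default_rng(0)
    trials = pd.DataFrame({
        "emotion": ["happy"] * 50 + ["angry"] * 50,
        "muscle":  ["ZM"] * 50 + ["CS"] * 50,
        "emg":     rng.normal(0, 0.3, 100),   # left-side baseline-corrected RMS
    })

    # Median split within each emotion x muscle cell: trials above their own
    # condition median count as 'high' mimicry (e.g. ZHhi), the rest as 'low'.
    trials["intensity"] = (
        trials.groupby(["emotion", "muscle"])["emg"]
              .transform(lambda x: np.where(x > x.median(), "high", "low"))
    )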

EEG epochs of the trials from each EMG condition were averaged to obtain event-related potentials (ERPs) for each subject, filtered with a 30 Hz low-pass filter (as recommended in the literature; Picton et al., 2000) and down-sampled to 512 Hz. A few noisy electrodes (fewer than seven per recording) were
interpolated in Cartool 3.22 using a standard spherical spline transformation
(http://brainmapping.unige.ch). Early visual ERP components (P1, N170) were
measured at the electrodes where they were most prominent, as verified by
inspection of scalp topographic maps for these two components: P1 was measured at O1, O2, PO3, PO4, PO7 and PO8 (Fig. 2a and b), while N170 had a more
lateral occipito-temporal scalp distribution and was measured at PO7, PO8, P9
and P10 (Fig. 2c and d). Both amplitude and latency values were calculated.
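Peak measurement of this kind reduces to finding the extreme value within a component-specific time window at the selected electrodes; a simple sketch follows (the search windows are our assumption, since the text does not specify them):

    import numpy as np

    def peak(erp, times, tmin, tmax, polarity="pos"):
        """Return (amplitude, latency) of the largest positive (P1) or
        negative (N170) deflection of a single-channel ERP within
        [tmin, tmax] seconds."""
        win = (times >= tmin) & (times <= tmax)
        seg, t = erp[win], times[win]
        i = np.argmax(seg) if polarity == "pos" else np.argmin(seg)
        return seg[i], t[i]

    # e.g. P1 at O1/O2/PO3/PO4/PO7/PO8, N170 at PO7/PO8/P9/P10:
    # p1_amp, p1_lat = peak(erp_po8, times, 0.08, 0.16, "pos")
    # n170_amp, n170_lat = peak(erp_p10, times, 0.14, 0.22, "neg")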

2.6. Statistical analysis


All statistical analyses were performed in SPSS v.14.0 (SPSS Inc.). EMG and EEG data were submitted to repeated-measures ANOVAs. Type I errors associated with inhomogeneity of variance were controlled by decreasing the degrees of freedom using the Greenhouse–Geisser epsilon. EMG data were analyzed as a function of muscle (two levels: ZM vs. CS), emotion (two levels: happy vs. angry), side of the face (two levels: right vs. left), and time-bin (14 levels: 100 ms time-bins from 0 to +1400 ms post-stimulus onset). Comparisons between values at each
time-bin for the response to angry and happy stimuli were assessed by paired
t-tests. Here, we used one-tailed t-tests as we had strong a priori predictions
regarding the direction of the effect for the EMG modulation, with larger CS
activity for angry relative to happy faces, and the symmetric pattern for ZM
(Dimberg & Thunberg, 1998; Dimberg et al., 2002). EEG data were analyzed
as a function of emotion (two levels), muscle (two levels), intensity of EMG
mimicry (two levels), as well as hemisphere (two levels) and electrode where
the peak of interest was measured (three levels for P1 and two levels for N170).
Post-hoc comparisons between conditions were performed using paired t-tests.
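The directional bin-by-bin comparisons can be reproduced with standard tools, for instance a one-tailed paired t-test per 100 ms bin (a sketch with SciPy and simulated data for 15 subjects; the actual analysis was run in SPSS):

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    cs_angry = rng.normal(0.10, 0.4, (15, 14))   # subjects x 14 bins (simulated)
    cs_happy = rng.normal(-0.40, 0.4, (15, 14))

    # Predicting CS(angry) > CS(happy), as in Table 1:
    for i in range(14):
        res = stats.ttest_rel(cs_angry[:, i], cs_happy[:, i],
                              alternative="greater")
        print(f"{(i + 1) * 100:4d} ms: t(14) = {res.statistic:5.2f}, "
              f"p = {res.pvalue:.3f}")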

3. Results
3.1. EMG data
We first examined EMG activity in bilateral CS and ZM muscles in response to angry and happy faces, as compared to the
preceding baseline. The time-course of muscular response to
both facial expressions is shown in Fig. 1 for both muscles.
There was a clear pattern of covert mimicry in EMG activity.


Table 1
(a) Corrugator supercilii and (b) zygomaticus major activity (in µV, relative to the pre-stimulus baseline; negative values denote activity below baseline) in response to angry and happy faces for each 100 ms time interval, followed by the statistics testing the difference between angry and happy activation for each time-bin (one-tailed paired t-tests; p values < 0.05 are significant)

Time interval (ms)    100     200     300     400     500     600     700     800     900    1000    1100    1200    1300    1400

(a) Corrugator supercilii
Angry (µV)          0.065   0.127   0.037   0.055   0.117   0.176   0.135   0.104   0.033   0.049   0.259   0.224   0.267   0.205
Happy (µV)         -0.116  -0.009  -0.166  -0.418  -0.647  -0.802  -0.685  -0.642  -0.618  -0.613  -0.823  -0.883  -0.838  -0.691
t(14)               1.666   2.499   2.914   3.332   2.707   2.971   2.875   2.714   2.860   3.347   2.980   3.153   2.773   2.962
p (one-tailed)      0.059   0.013   0.006   0.002   0.009   0.005   0.006   0.008   0.006   0.002   0.005   0.004   0.007   0.005

(b) Zygomaticus major
Angry (µV)         -0.107  -0.045  -0.104  -0.136  -0.111  -0.092  -0.071  -0.079  -0.007  -0.025   0.001  -0.094  -0.022   0.021
Happy (µV)         -0.123  -0.096  -0.080  -0.107   0.012   0.159   0.379   0.421   0.386   0.305   0.276   0.242   0.309   0.408
t(14)               0.766   1.542   0.731   0.633   1.896   1.896   2.091   2.014   2.036   1.680   2.140   2.225   1.860   1.825
p (one-tailed)      0.228   0.073   0.238   0.268   0.039   0.039   0.028   0.032   0.031   0.058   0.025   0.022   0.042   0.045

Because the four-way ANOVA showed no significant main or interaction effects for the side of the face (right vs. left), this factor was not further considered, and data recorded from the
factor was not further considered, and data recorded from the
left and right sides were collapsed for each muscle. More critically, there were significant interactions between muscle and
emotion factors [F(1,14) = 8.694, p < 0.02], and between muscle, emotion and time-bin factors [F(13,182) = 5.449, ε = 0.137,
p < 0.02], suggesting a differential activation of the two muscles
as a function of the displayed facial expression, as well as a different temporal dynamic for these activations. As illustrated in
Fig. 1, the ZM response was larger in response to happy facial
expressions than to angry facial expressions, whereas the inverse
pattern was seen for the CS. Post-hoc t-tests at each successive
time-bin further showed that the EMG responses to these two expressions were significantly different from 200 ms onwards for the CS muscle [t(14) = 2.499, p < 0.05], but only from 500 ms onwards for the ZM muscle [t(14) = 1.896, p < 0.05], demonstrating a differential time-course of activation for these two muscles (Table 1 provides a more detailed statistical description). Taken together, these data confirm a covert mimicry
reaction to both facial expressions, with differential activity
in specific facial muscles, as previously reported with static
faces (Dimberg & Petterson, 2000; Dimberg & Thunberg, 1998;
Dimberg et al., 2000, 2002).
Note that corrugator responses involved an apparent relaxation to happy faces, as calculated for stimulus-evoked changes
relative to baseline activity. Such a pattern is commonly observed
(Dimberg & Petterson, 2000; Dimberg & Thunberg, 1998;
Dimberg et al., 2000, 2002) and presumably results from natural
frowning activity during the baseline/fixation period preceding
face onset, as typically observed in anticipatory states or effortful task conditions (van Boxtel, Damen, & Brunia, 1996; van
Boxtel & Jessurun, 1993). However, when considering highintensity corrugator responses to angry expressions, corrugator
activity was increased significantly over the preceding baseline,
clearly indicating a positive mimicry response to angry expressions. Also, corrugator response to happy faces was not different
when comparing trials evoking high-zygomatic activity vs. trials evoking low-zygomatic activity, indicating that the observed relaxation is not dependent on the degree of mimicry.


3.2. EEG data


Early stages of visual processing were assessed by the P1
and N170 components of visual ERPs, measured over posterior
temporal-occipital regions, allowing us to compare trials with
high or low activation of each muscle (ZM and CS) in response
to each facial expression (happy and angry). Importantly, we
verified that the different face stimuli were equally distributed
across trials with high- or low-muscular activation using an item-based analysis.1 This analysis thus ensured that the intensity of
facial mimicry did not systematically relate to a distinct subset
of faces.
P1 had a mean latency of 119 ms, and N170 a mean latency
of 184 ms, consistent with earlier ERP studies (Batty & Taylor,
2003; Pourtois et al., 2005). Fig. 2 shows the topographical
maps at both latencies in response to happy and angry expressions. Statistical analyses did not reveal any significant effect of mimicry intensity on ERP latencies, but only on ERP amplitudes. Accordingly, we focus hereafter on the results for amplitudes.
For P1 amplitude, the five-way repeated-measures ANOVA showed a main effect of electrode [F(2,28) = 13.955, ε = 0.838, p < 0.001], indicating that P1 was larger for more occipital electrodes in both hemispheres (PO7, PO8, O1, O2, as compared to PO3 and PO4). Other main effects were not significant. More critically, there was a significant quadruple interaction of emotion × muscle × intensity of EMG mimicry × hemisphere [F(1,14) = 6.159, p < 0.03]. To further investigate this interaction, the effect of the intensity of facial mimicry on P1 amplitude
was studied for each emotion separately.

1 We calculated the proportion of trials for which a given face stimulus was associated with a high- or a low-EMG facial mimicry, for each movie clip and each condition, across all subjects. For each angry face, 46–57% of the presentations triggered a high-CS response, and none of these proportions differed significantly from 50%. Similarly, for each happy face, 47–53% of the presentations triggered a high-ZM response, and none of these proportions was significantly different from 50%. In other words, all movie clips elicited high- or low-facial mimicry on a similar number of trials across all subjects.
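One way to run such an item-based check is a binomial test of each clip's proportion of high-mimicry presentations against 50% (a sketch; the counts are illustrative and the paper does not state which test was used):

    from scipy.stats import binomtest

    n_high, n_total = 95, 180   # high-CS presentations of one angry clip (made up)
    res = binomtest(n_high, n_total, p=0.5)
    print(f"{100 * n_high / n_total:.0f}% high-mimicry, p = {res.pvalue:.2f} vs. 50%")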


Fig. 3. Grand average ERPs for P1 on trials with stronger vs. weaker mimicry in emotion-relevant muscle, showing brain response to faces when EMG disclosed
high-ZM vs. low-ZM activity in response to happy expressions (a); high-CS vs. low-CS activity in response to angry expressions (b). Data were averaged over
electrodes where the P1 component was maximal for each hemisphere. * p < 0.05.

For trials with happy faces, the ANOVA disclosed a significant muscle × intensity of EMG mimicry × hemisphere interaction [F(1,14) = 5.982, p < 0.03]. When we considered each muscle separately in this condition, we found that the ZM showed a significant intensity of EMG mimicry × hemisphere interaction [F(1,14) = 9.585, p < 0.01], in addition to a significant main effect of electrode [F(2,14) = 14.509, ε = 0.813, p < 0.001].
As this interaction did not include an electrode effect, we
averaged the values of the three electrodes from each hemisphere. Fig. 3a shows the resulting mean P1 amplitudes for
both the right and the left hemisphere, which was modulated
by the intensity of mimicry to happy faces on the right side,
but not on the left. Post-hoc t-tests confirmed that P1 amplitudes were larger in the right-hemisphere electrodes when ZM
muscle activity was high as compared to low [t(14) = 2.663,
p < 0.01]. This result indicated that the higher the P1 amplitude
over the right hemisphere, the stronger the ZM mimicry to happy
faces. By contrast for CS, the intensity of EMG mimicry did not
reliably modulate P1 amplitude for happy faces; only the electrode main effect was significant [F(2,28) = 14.669, ε = 0.823, p < 0.001].
For trials with angry faces, the muscle × intensity of EMG mimicry interaction was not significant. However, when ERPs
were averaged over the same three electrodes from each hemisphere (as for the happy condition above), the comparison
between trials with high and low-CS activity in response to
the angry faces (Fig. 3b) revealed the same modulation of
amplitude as that observed for happy faces. Moreover, note
that the significant (p < 0.03) quadruple interaction of emotion muscle intensity of EMG mimicry hemisphere for
P1 amplitude found at the first level of the statistical analysis
(see above) further justified this direct decomposition. Thus, a
post-hoc paired t-test of mean P1 amplitude between these two critical conditions (high-CS vs. low-CS response) demonstrated a significant enhancement over the right (but not the left) hemisphere when CS response to angry faces was high as compared
to low [t(14) = 2.224, p < 0.03], as shown in Fig. 3b. This pattern
again suggested that increased P1 amplitude was associated with
stronger facial mimicry, an effect present for the two emotional
expressions manipulated in this study and seen at the level of
the ZM for happy faces and at the level of the CS for angry
faces.
Next, we analyzed N170 amplitude using the same five-way
ANOVA as for the P1. This analysis showed a significant
main effect of electrode [F(1,14) = 17.635, p < 0.005], which
was explained by larger N170 amplitude for the more lateral
temporal electrodes (P9 and P10 as compared to adjacent but
more medial PO7, PO8). Critically, this analysis also revealed a
significant interaction of muscle × emotion × intensity of EMG
mimicry [F(1,14) = 5.925, p < 0.05]. As the electrode factor did
not interact with the other factors, data were averaged across
the two electrodes in each hemisphere (PO7 and P9 for the left
hemisphere, PO8 and P10 for the right hemisphere). As can be
seen in Fig. 4, N170 amplitude over the right hemisphere was
reliably modulated by the intensity of EMG mimicry.
Post-hoc analyses showed that right N170 amplitude
was attenuated when facial imitation was more intense,
for both types of emotion expression. N170 was significantly smaller when ZM activity in response to happy
faces was high as compared to low [t(14) = 2.093, p < 0.05], and when CS activity in response to angry faces was high as compared to low [t(14) = 2.665, p < 0.01]. This
effect was similar for both emotion types and both muscles.
There was no significant correlation across subjects between
the P1 enhancement and reduction of the N170 (all r < 0.48,


p > 0.07), suggesting that these two effects most likely reflect
distinct underlying mechanisms.
Finally, there were no modulations of ERPs at later latencies, encompassing the P2 and later components.

Fig. 4. Grand average ERPs for N170 on trials with stronger vs. weaker mimicry in the emotion-relevant muscle, showing brain responses to faces when EMG revealed high-ZM vs. low-ZM activity in response to happy expressions (a); high-CS vs. low-CS activity in response to angry expressions (b). Data were averaged over electrodes where the N170 component was maximal for each hemisphere. * p < 0.05.
4. Individual personality factors
Because intensity of facial mimicry has previously been
related to emotionality (Hess et al., 1998; Sonnby-Borgstrom,
2002; Sonnby-Borgstrom & Jonsson, 2004; Sonnby-Borgstrom
et al., 2003) and empathy (de Wied et al., 2006; Hermans et
al., 2006), we tested for any correlation between the intensity of
EMG responses to facial expressions and individual personality
traits measured by questionnaires (STAI and EQ). Intensity of
mimicry in the CS was assessed for each subject by subtracting the mean EMG activity over the 1400 ms time window after
happy expression onset from the mean EMG activity over the
1400 ms time window after angry expression onset. Conversely,
mean ZM activity in response to angry expressions was subtracted from mean ZM activity in response to happy expressions.
However, the intensity of facial mimicry in each muscle was correlated with neither levels of anxiety nor levels of empathy (all
r < 0.33, p > 0.1).
5. Discussion
To our knowledge, this study is the first to combine simultaneous EEG and EMG recordings to assess the pattern of
neural activation associated with involuntary mimicry of emotional facial expressions. By tracking the time-course of both
EEG and EMG responses to dynamic emotional faces, we were
able to identify processing stages that were differentially activated as a function of mimicry intensity, on a trial-by-trial
basis, i.e. when covert facial imitation was higher as compared with when it was lower. Our study provides several new
findings.
First, our results replicate the facial imitation phenomenon
in 15 healthy subjects using synthesized dynamic facial expressions, with selective modulation of the ZM muscle by the presentation of happy faces, and of the CS muscle by the presentation of angry faces. These data extend previous observations
of mimicry elicited by static pictures of faces (Dimberg &
Petterson, 2000; Dimberg & Thunberg, 1998; Dimberg et al.,
2000, 2002) by showing that similar EMG responses can reliably be measured when the emotion in the face is dynamically
expressed. This was the case even though we used artificial
movie clips of morphed (non-natural) expressions. Moreover,
it is remarkable that such reactions were still measurable after
many repetitions of the same facial expressions (125 trials each,
see Section 2), without any apparent habituation, as there were
as many trials with high levels of imitation in the second half of
the experiment as in the first half (data not shown). In contrast
to this large number of trials needed for reliable EEG recordings in our study, previous behavioral studies on facial mimicry
have used only very few trials with emotional expressions (either
four (de Wied et al., 2006; Hess & Blairy, 2001; Hess et al.,
1998), six (Dimberg & Petterson, 2000; Dimberg et al., 2002)
or eight (Dimberg & Thunberg, 1998; Vrana & Gross, 2004;
Weyers et al., 2006)). Therefore, our study indicates that facial
mimicry can be repeatedly elicited over many successive trials,
supporting a strong degree of automaticity for this phenomenon,
and establishing an opportunity for repeated measurements of
the concomitant neural activity. The automatic nature of this
phenomenon is also substantiated by the fact that none of our
subjects was actually aware of the recording of facial EMG and
mimicry.
Secondly, our EMG data revealed different temporal dynamics for CS and ZM responses. The CS activity was enhanced in response to angry faces as soon as 200 ms after stimulus
onset, while ZM activity was increased in response to happy
faces after 500 ms only. This contrasts with the findings of
Dimberg and Thunberg (1998), who found similar latencies
for both expressions, but this is likely due to the fact that we
used a different set of face stimuli with dynamic expressions,
and because our paradigm did not evoke an early emotion-independent increase in CS activity (blinks to sudden picture onset), which was present across all conditions in Dimberg's
studies (Dimberg & Petterson, 2000; Dimberg & Thunberg,
1998; Dimberg et al., 2000, 2002). A more rapid onset of
the angry CS response, relative to the happy ZM response,
might reflect faster processing of negative, threat-related signals (LeDoux, 2000; Schupp et al., 2004) or some intrinsic
advantage for the innervation of the upper facial musculature
(Morecraft et al., 2001, 2004). Note however that we found no
evidence in EEG recordings for any difference in the latency
or amplitude of early visual ERPs to angry vs. happy faces. It
is therefore probable that differences in EMG activation latencies result from differences in the innervation and mechanical
properties of upper and lower facial muscles. On the other hand,
unlike a few previous studies (Dimberg & Petterson, 2000; Zhou
& Hu, 2004), we did not find any asymmetry in muscular activity for the left and right side of the face, for both emotion
expressions.
Thirdly, and much more critically, the most novel aspect of
our study concerned the link between brain activity and intensity of facial mimicry. We found that two distinct stages of face
processing were differentially activated on trials where mimicry
was more intense, relative to trials with weaker mimicry, as evidenced by simultaneous EEG and EMG recordings. A first effect
arose at the level of the visual P1 response. The amplitude of
this early visual component was enhanced in the right hemisphere on trials with more intense imitation of happy faces (i.e.
with greater ZM activity) and, to a lesser degree, on trials with
more intense imitation of angry faces (i.e. greater CS activity).
This relation between P1 and muscular activity was selective
to the corresponding emotional expressions, arising for the ZM
in response to happy faces and for the CS in response to angry
faces, but not vice versa, which clearly demonstrates that it was
related to mimicry of the seen expression, not just to non-specific
muscular reactivity.
The P1 component has repeatedly been shown to be generated
in extrastriate visual cortex and associated with early processing of visual stimuli (Luck & Hillyard, 1995; Martinez et al.,
1999). It is classically modulated by selective attention (Heinze
& Mangun, 1995; Luck, Heinze, Mangun, & Hillyard, 1990;
Luck & Hillyard, 1995), as its amplitude is larger when the visual
stimulus is shown at an attended compared to unattended spatial location. Moreover, recent studies found that P1 amplitude
is also modulated by the emotional content of facial expressions, relative to neutral expressions (Batty & Taylor, 2003,
2006; Pourtois et al., 2005), possibly related to enhanced attention towards emotionally salient stimuli. Here, we did not find
a global difference in P1 amplitude in response to happy faces
as compared to angry faces, but our study was not designed to
allow a comparison between emotional vs. neutral faces as in these previous studies. Because the P1 component is associated
with early perceptual analysis, a modulation of its amplitude
might be related to the visual properties of stimuli, and hence
the enhanced response on high-mimicry trials might potentially
be due to a different distribution of the stimuli in each condition.
However, our item-based analysis showed that trials with high- or low-muscular activations included an equal proportion of the
same face stimuli. Thus, for each subject and for each of the
faces, roughly half of the repetitions were associated with high facial muscle activity (taking into account each expression and muscle separately), while the other half were associated with low facial muscle activity. Therefore, the P1 amplitude modulation cannot simply be explained by visual aspects of the stimuli.
Instead, we suggest that higher amplitude of P1 on trials with
higher mimicry might be due to increased attention to the stimulus, leading to better perceptual processing of the face and thus
to stronger motor imitation. Moreover, the lateralization of this
effect over the right hemisphere is consistent with the hypothesis that the right hemisphere might be dominant for processing
both faces and emotion information (Borod et al., 1998; Borod
& Koff, 1990; Borod, Stclair, Koff, & Alpert, 1990). Stronger
activation of the right hemisphere at early processing stages
might thus enhance emotional face processing and subsequent
mimicry.
A second effect in ERPs arose at the level of the N170
component. We found that lower intensity of facial imitation
was associated with greater amplitude of the N170. This effect
was visible in the right hemisphere only, again in accordance
with a right-hemisphere dominance in face and emotion perception (Borod et al., 1990, 1998; Borod & Koff, 1990). The
N170 component is classically associated with visual categorization of faces (Bentin et al., 1996) and of facial expressions
(Batty & Taylor, 2003). It is generally thought to reflect the
structural encoding of faces for recognition and identification.
Thus, our data suggest that when specific face recognition processes are more strongly recruited, the imitation of emotional
expression might be less effectively manifested. This pattern
of results may be consistent with the classic cognitive model
of face recognition (Bruce & Young, 1986; Haxby, Hoffman, &
Gobbini, 2000), according to which facial identity and emotional
expression are processed in two separate pathways, such that a
preferential recruitment of identity-related pathways (indexed
by N170 activity) might divert resources from the expression-related pathways and hence lead to reduced mimicry of facial
expression.
Alternatively, reduced mimicry on trials with larger N170
might result from some indirect effect of proprioceptive facial
feedback during face processing, in line with earlier proposals (Tourangeau & Ellsworth, 1979) that facial imitation can
serve to enhance the recognition of expression in other people
through somatosensory feedback. According to such a hypothesis, facial imitation would lead to motor resonance with the
emotions displayed by others and hence better recognition of
the expressed affect. Facial imitation would therefore be particularly crucial for empathy, as suggested by several studies
showing a link between empathy and intensity of mimicry (de
Wied et al., 2006; Hermans et al., 2006; McIntosh, Reichmann-Decker, Winkielman, & Wilbarger, 2006; Sonnby-Borgstrom,
2002; Sonnby-Borgstrom & Jonsson, 2004; Sonnby-Borgstrom
et al., 2003). In this framework, our data might indicate that
when higher-order visual processes are strongly recruited for
face processing (corresponding to larger N170 amplitude), less
mimicry is needed to process and recognize the emotion displayed in faces. However, in our study, we did not find any
evidence that the relation between N170 amplitude and expression mimicry was modified during blocks where subjects viewed
faces passively or blocks where subjects categorized expressions explicitly (data were collapsed across blocks because of
this lack of task effect). Another recent study (Hess & Blairy,
2001) could not confirm any relationship between mimicry and
explicit emotion recognition. Further investigations are needed
to make more detailed comparisons between covert mimicry,
subjective ratings of emotion, and concomitant brain activity, in order to better understand the functional links between
early visual ERPs and the subsequent response of facial muscles.
Moreover, unlike previous studies (Sonnby-Borgstrom,
2002; Sonnby-Borgstrom et al., 2003), here we found no reliable
correlation between the intensity of facial mimicry in EMG and
empathy level. These previous studies included a larger sample
of subjects and distinguished high vs. low empathizers based
on different measures, such as the questionnaire measure of
emotional empathy (QMEE) (Chlopan, Mccain, Carbonell, &
Hagen, 1985). However, the empathy measure used here (EQ)
is thought to provide a relatively pure measure of empathy,
unlike previous tests (Chlopan et al., 1985) that may also be
sensitive to emotional arousability to the environment in general rather than to people's emotions in particular (Mehrabian,
Young, & Sato, 1988). In any case, these differences may explain
why we did not replicate previous findings on empathy (Sonnby-Borgstrom, 2002; Sonnby-Borgstrom et al., 2003) and suggest
that individual differences in empathy per se do not seem critically implicated in the effects reported in the present study.
Similarly, we did not find any correlation between intensity of
facial mimicry and anxiety level. A few studies suggested such
a correlation but did not directly measure it (Sonnby-Borgstrom
& Jonsson, 2004; Vrana & Gross, 2004).
Finally, we note that previous EEG studies have reported
conflicting results concerning the effect of expression on the
N170, with a modulation by emotional factors found in some
cases (Ashley et al., 2004; Batty & Taylor, 2003; Campanella
et al., 2002) but not in others (Eimer & Holmes, 2002). While
some aspects of these apparent discrepancies might potentially
be explained by variations in face configuration (Ashley et
al., 2004), selective attention (Eimer & Holmes, 2002), and/or
homogeneity of stimulus set (Thierry, Martin, Downing, &
Pegna, 2007), an intriguing possibility suggested by our novel
findings is that different degrees of mimicry (due to different
processing modes or different emotional resonance) might also
partly account for a variable modulation of N170 to emotional
faces.
In summary, by combining simultaneous EEG and EMG
recordings, our study reveals for the first time that the intensity of facial mimicry to emotional expressions (either positive or negative) is associated with distinct patterns of brain activity at the earliest stages of visual face processing, involving both
the P1 and N170 components in ERPs. These early modulations agree with the fact that mimicry is automatically elicited
and may persist without conscious awareness. More generally, our work shows the feasibility and potential interest of
combining simultaneous peripheral psychophysiology and neurophysiology measures to investigate cerebral events underlying
emotional perception and reaction.
Acknowledgments
This research was supported by the Swiss National Center
of Competence in Research (NCCR) for Affective Sciences,
and grants from the Swiss National Science Foundation
(105311-108187, 3100A0-102133). We thank David Sander for
collaboration and helpful discussions.
References
Ashley, V., Vuilleumier, P., & Swick, D. (2004). Time course and specificity of event-related potentials to emotional expressions. Neuroreport, 15(1), 211–216.
Baron-Cohen, S., & Wheelwright, S. (2004). The empathy quotient: An investigation of adults with Asperger syndrome or high functioning autism, and normal sex differences. Journal of Autism and Developmental Disorders, 34(2), 163–175.
Batty, M., & Taylor, M. J. (2003). Early processing of the six basic facial emotional expressions. Brain Research: Cognitive Brain Research, 17(3), 613–620.
Batty, M., & Taylor, M. J. (2006). The development of emotional face processing during childhood. Developmental Science, 9(2), 207–220.
Benson, P. J., & Perrett, D. I. (1993). Extracting prototypical facial images from exemplars. Perception, 22(3), 257–262.
Bentin, S., Allison, T., Puce, A., Perez, E., & McCarthy, G. (1996). Electrophysiological studies of face perception in humans. Journal of Cognitive Neuroscience, 8(6), 551–565.
Bentin, S., Deouell, L. Y., & Soroker, N. (1999). Selective visual streaming in face recognition: Evidence from developmental prosopagnosia. Neuroreport, 10(4), 823–827.
Borod, J. C., Cicero, B. A., Obler, L. K., Welkowitz, J., Erhan, H. M., Santschi, C., et al. (1998). Right hemisphere emotional perception: Evidence across multiple channels. Neuropsychology, 12(3), 446–458.
Borod, J. C., & Koff, E. (1990). Lateralization for facial emotional behavior: A methodological perspective. International Journal of Psychology, 25(2), 157–177.
Borod, J. C., Stclair, J., Koff, E., & Alpert, M. (1990). Perceiver and poser asymmetries in processing facial emotion. Brain and Cognition, 13(2), 167–177.
Bruce, V., & Young, A. (1986). Understanding face recognition. British Journal of Psychology, 77(Pt 3), 305–327.
Burrows, A. M., Waller, B. M., Parr, L. A., & Bonar, C. J. (2006). Muscles of facial expression in the chimpanzee (Pan troglodytes): Descriptive, comparative and phylogenetic contexts. Journal of Anatomy, 208(2), 153–167.
Campanella, S., Quinet, P., Bruyer, R., Crommelinck, M., & Guerit, J. M. (2002). Categorical perception of happiness and fear facial expressions: An ERP study. Journal of Cognitive Neuroscience, 14(2), 210–227.
Chlopan, B. E., Mccain, M. L., Carbonell, J. L., & Hagen, R. L. (1985). Empathy: Review of available measures. Journal of Personality and Social Psychology, 48(3), 635–653.
Darwin, C. (1872). The expression of emotion in man and animals. London: Murray.
DeLuca, C. J. (1997). The use of surface electromyography in biomechanics. Journal of Applied Biomechanics, 13(2), 135–163.
de Wied, M., van Boxtel, A., Zaalberg, R., Goudena, P. P., & Matthys, W. (2006). Facial EMG responses to dynamic emotional facial expressions in boys with disruptive behavior disorders. Journal of Psychiatric Research, 40(2), 112–121.
Dimberg, U., & Petterson, M. (2000). Facial reactions to happy and angry facial expressions: Evidence for right hemisphere dominance. Psychophysiology, 37(5), 693–696.
Dimberg, U., & Thunberg, M. (1998). Rapid facial reactions to emotional facial expressions. Scandinavian Journal of Psychology, 39(1), 39–45.
Dimberg, U., Thunberg, M., & Elmehed, K. (2000). Unconscious facial reactions to emotional facial expressions. Psychological Science, 11(1), 86–89.
Dimberg, U., Thunberg, M., & Grunedal, S. (2002). Facial reactions to emotional stimuli: Automatically controlled emotional responses. Cognition & Emotion, 16(4), 449–471.
Eimer, M., & Holmes, A. (2002). An ERP study on the time course of emotional face processing. Neuroreport, 13(4), 427–431.
Eimer, M., & McCarthy, R. A. (1999). Prosopagnosia and structural encoding of faces: Evidence from event-related potentials. Neuroreport, 10(2), 255–259.
Ekman, P., & Friesen, W. V. (1976). Pictures of facial affect. Palo Alto, CA: Consulting Psychologists Press.
Field, T. M., Woodson, R., Greenberg, R., & Cohen, D. (1982). Discrimination and imitation of facial expression by neonates. Science, 218(4568), 179–181.
Fridlund, A. J., & Cacioppo, J. T. (1986). Guidelines for human electromyographic research. Psychophysiology, 23(5), 567–589.
Gratton, G., Coles, M. G. H., & Donchin, E. (1983). A new method for off-line removal of ocular artifact. Electroencephalography and Clinical Neurophysiology, 55(4), 468–484.
Haxby, J. V., Hoffman, E. A., & Gobbini, M. I. (2000). The distributed human neural system for face perception. Trends in Cognitive Science, 4(6), 223–233.
Heinze, H. J., & Mangun, G. R. (1995). Electrophysiological signs of sustained and transient attention to spatial locations. Neuropsychologia, 33(7), 889–908.
Hermans, E. J., Putman, P., & van Honk, J. (2006). Testosterone administration reduces empathetic behavior: A facial mimicry study. Psychoneuroendocrinology, 31(7), 859–866.
Hess, U., & Blairy, S. (2001). Facial mimicry and emotional contagion to dynamic emotional facial expressions and their influence on decoding accuracy. International Journal of Psychophysiology, 40(2), 129–141.
Hess, U., Philippot, P., & Blairy, S. (1998). Facial reactions to emotional facial expressions: Affect or cognition? Cognition & Emotion, 12(4), 509–531.
LeDoux, J. E. (2000). Emotion circuits in the brain. Annual Review of Neuroscience, 23, 155–184.
Luck, S. J., Heinze, H. J., Mangun, G. R., & Hillyard, S. A. (1990). Visual event-related potentials index focused attention within bilateral stimulus arrays. 2. Functional dissociation of P1 and N1 components. Electroencephalography and Clinical Neurophysiology, 75(6), 528–542.
Luck, S. J., & Hillyard, S. A. (1995). The role of attention in feature detection and conjunction discrimination: An electrophysiological analysis. International Journal of Neuroscience, 80(1–4), 281–297.
Martinez, A., Anllo-Vento, L., Sereno, M. I., Frank, L. R., Buxton, R. B., Dubowitz, D. J., et al. (1999). Involvement of striate and extrastriate visual cortical areas in spatial attention. Nature Neuroscience, 2(4), 364–369.
McIntosh, D. N., Reichmann-Decker, A., Winkielman, P., & Wilbarger, J. L. (2006). When the social mirror breaks: Deficits in automatic, but not voluntary, mimicry of emotional facial expressions in autism. Developmental Science, 9(3), 295–302.
Mehrabian, A., Young, A. L., & Sato, S. (1988). Emotional empathy and associated individual differences. Current Psychology: Research & Reviews, 7(3), 221–240.
Morecraft, R. J., Louie, J. L., Herrick, J. L., & Stilwell-Morecraft, K. S. (2001). Cortical innervation of the facial nucleus in the non-human primate: A new interpretation of the effects of stroke and related subtotal brain trauma on the muscles of facial expression. Brain, 124, 176–208.
Morecraft, R. J., Stilwell-Morecraft, K. S., & Rossing, W. R. (2004). The motor cortex and facial expression: New insights from neuroscience. Neurologist, 10(5), 235–249.
Oostenveld, R., & Praamstra, P. (2001). The five percent electrode system for high-resolution EEG and ERP measurements. Clinical Neurophysiology, 112(4), 713–719.
Parr, L. A., Waller, B. M., & Fugate, J. (2005). Emotional communication in primates: Implications for neurobiology. Current Opinion in Neurobiology, 15(6), 716–720.
Picton, T. W., Bentin, S., Berg, P., Donchin, E., Hillyard, S. A., Johnson, R., Jr., et al. (2000). Guidelines for using human event-related potentials to study cognition: Recording standards and publication criteria. Psychophysiology, 37(2), 127–152.
Pourtois, G., Dan, E. S., Grandjean, D., Sander, D., & Vuilleumier, P. (2005). Enhanced extrastriate visual response to bandpass spatial frequency filtered fearful faces: Time course and topographic evoked-potentials mapping. Human Brain Mapping, 26(1), 65–79.
Sato, W., & Yoshikawa, S. (2006). Spontaneous facial mimicry in response to dynamic facial expressions. Cognition, 104(1), 1–18.
Schupp, H. T., Ohman, A., Junghofer, M., Weike, A. I., Stockburger, J., & Hamm, A. O. (2004). The facilitated processing of threatening faces: An ERP analysis. Emotion, 4(2), 189–200.
Sonnby-Borgstrom, M. (2002). Automatic mimicry reactions as related to differences in emotional empathy. Scandinavian Journal of Psychology, 43(5), 433–443.
Sonnby-Borgstrom, M., & Jonsson, P. (2004). Dismissing-avoidant pattern of attachment and mimicry reactions at different levels of information processing. Scandinavian Journal of Psychology, 45(2), 103–113.
Sonnby-Borgstrom, M., Jonsson, P., & Svensson, O. (2003). Emotional empathy as related to mimicry reactions at different levels of information processing. Journal of Nonverbal Behavior, 27(1), 3–23.
Soussignan, R., Schaal, B., Marlier, L., & Jiang, T. (1997). Facial and autonomic responses to biological and artificial olfactory stimuli in human neonates: Re-examining early hedonic discrimination of odors. Physiology & Behavior, 62(4), 745–758.
Spielberger, C. D. (1983). Manual for the state-trait anxiety inventory: Self-evaluation questionnaire. Palo Alto, CA: Consulting Psychologist Press Inc.
Thierry, G., Martin, C. D., Downing, P., & Pegna, A. J. (2007). Controlling for interstimulus perceptual variance abolishes N170 face selectivity. Nature Neuroscience, 10(4), 505–511.
Tourangeau, R., & Ellsworth, P. C. (1979). The role of facial response in the experience of emotion. Journal of Personality and Social Psychology, 37(9), 1519–1531.
van Boxtel, A. (2001). Optimal signal bandwidth for the recording of surface EMG activity of facial, jaw, oral, and neck muscles. Psychophysiology, 38(1), 22–34.
van Boxtel, A., Damen, E. J., & Brunia, C. H. (1996). Anticipatory EMG responses of pericranial muscles in relation to heart rate during a warned simple reaction time task. Psychophysiology, 33(5), 576–583.
van Boxtel, A., & Jessurun, M. (1993). Amplitude and bilateral coherency of facial and jaw-elevator EMG activity as an index of effort during a two-choice serial reaction task. Psychophysiology, 30(6), 589–604.
Vrana, S. R., & Gross, D. (2004). Reactions to facial expressions: Effects of social context and speech anxiety on responses to neutral, anger, and joy expressions. Biological Psychology, 66(1), 63–78.
Wehrle, T., Kaiser, S., Schmidt, S., & Scherer, K. R. (2000). Studying the dynamics of emotional expression using synthesized facial muscle movements. Journal of Personality and Social Psychology, 78(1), 105–119.
Weyers, P., Muhlberger, A., Hefele, C., & Pauli, P. (2006). Electromyographic responses to static and dynamic avatar emotional facial expressions. Psychophysiology, 43(5), 450–453.
Zhou, R., & Hu, S. (2004). Effects of viewing pleasant and unpleasant photographs on facial EMG asymmetry. Perceptual and Motor Skills, 99(3 Pt 2), 1157–1167.
