
ETHICS & BEHAVIOR, 19(5), 432–447

Copyright 2009 Taylor & Francis Group, LLC


ISSN: 1050-8422 print / 1532-7019 online
DOI: 10.1080/10508420903035455

A Second Look at Debriefing Practices:


Madness in Our Method?

Donald Sharpe
University of Regina

Cathy Faye
York University

This article is a reconsideration of Tesch's (1977) ethical, educational, and methodological functions for debriefing through a literature review and an Internet survey of authors of articles published in the Journal of Personality and Social Psychology and the Journal of Traumatic Stress. We advocate for a larger ethical role for debriefing in nondeception research. The educational function of debriefing is examined in light of the continued popularity of undergraduate participant pools. A case is made for the methodological function of debriefing to clarify aspects of research participation. Recommendations are made to improve the conducting and reporting of debriefings.

Keywords: debriefing, deception, informed consent, participant pools

Thirty years ago, an article with the provocative title "Debriefing Research Participants: Though This Be Method There Is Madness to It" was published in the Journal of Personality and Social Psychology. When Frederick E. Tesch's (1977) article appeared, social psychology was in the midst of a crisis of confidence (Elms, 1975). In part as a reaction to Milgram's (1963) controversial obedience studies and his use of deception, codes of ethics for the conducting of psychological research had been written and Institutional Review Boards (IRBs) established to enforce these codes. The central element of these codes of ethics was informed consent. Potential research participants were to be given the opportunity to agree to participate after a complete and truthful disclosure of the purpose and nature of the experiment. Another important element of these codes of ethics was debriefing. Research participants were to be given an explanation of the purpose and nature of the experiment at its conclusion to remove any lingering negative effects from research involvement. Debriefing was the focus of Tesch's article. In that article, Tesch identified three functions for debriefing: ethical, educational, and methodological. Tesch concluded that efforts to meet these functions indeed had resulted in a debriefing strategy; however, he also regarded this strategy to be more madness than method.

Correspondence should be addressed to Donald Sharpe, Department of Psychology, University of Regina, Regina, SK, Canada S4S 0A2. E-mail: sharped@uregina.ca

Tesch's (1977) article and his concerns regarding debriefing had little discernible impact on the research literature. A PsycINFO search revealed a mere 9 citations to the Tesch article; a Web
of Science search found 13 citations. In the same vein, debriefing has received comparatively little attention from social scientists in the intervening 3 decades since Tesch. The history of debriefing was presented in a chapter by Harris (1988); a review of debriefing practices comprised one chapter of Greenberg and Folger (1988). In contrast to the little recent attention focused on debriefing, there are more than 500 articles in PsycINFO with informed consent in the title. Informed consent is a central principle of all ethical codes in the behavioral and medical sciences and has been described as "the cornerstone of ethical research" (Miller, 2003, p. 130). Like its predecessors, the most recent version of the American Psychological Association (2002) Ethical Principles and Code of Conduct devotes much greater space to informed consent than to debriefing. Most IRBs have policies regarding the structure and content of informed consent forms; debriefing is acknowledged but not elaborated upon. Even the term debriefing has been appropriated by a distinct literature pertaining to psychological interventions after critical incidents such as school shootings.
In this article, we revisit Tesch (1977) as the impetus for an evaluation of the current state of debriefing practices. Much has changed over 3 decades, but we argue that the ethical, educational, and methodological functions of debriefing identified by Tesch (1977) are as relevant now as ever. Our emphasis is on the literature pertaining to debriefing of adult research participants published since the Greenberg and Folger (1988) review (for a review of debriefing of child participants, see Fisher, 2005). We focus specifically on debriefing in social psychology research and trauma studies. In that regard, we report on a survey of authors from a social psychology journal and a trauma journal to tap their current thinking on debriefing practices. The Journal of Personality and Social Psychology (JPSP) is the flagship American Psychological Association journal for social psychology, and articles from JPSP have been the focus of previous reviews of debriefing practices and deception. The Journal of Traumatic Stress (JTS) is the official publication of the International Society for Traumatic Stress Studies. We chose to survey authors of JTS in part because trauma research is a fertile ground for the study of ethical issues but more importantly because we wanted to expand the role of debriefing beyond the confines of deception in social psychology research.
For our survey of current debriefing practices, we examined the Method sections of all JPSP articles from the second half of 2006 and the first half of 2007 (12 issues) and all 12 issues of JTS published in 2006 and 2007. The first experiment of multiple-experiment articles was evaluated. Review articles, commentaries, and archival studies were excluded. The Method sections of these JPSP and JTS studies were coded for four variables: whether debriefing was mentioned, whether informed consent was mentioned, whether remuneration of participants was mentioned, and whether the sample was composed primarily of university students. We rated the studies independently; the small number of disagreements was settled by discussion.
E-mails were then sent to corresponding JPSP and JTS authors describing a Web-based survey of debriefing practices. Two hundred seventy-three articles were evaluated (JPSP n = 139; JTS n = 134). The authors of 30 studies could not be surveyed because of multiple authorships (1 article was selected randomly) or because the e-mail was returned unread. Of the 121 JPSP authors contacted by e-mail, 47 answered the survey, for a response rate of 38.8%. Of the 122 JTS authors contacted by e-mail, 50 answered, for a response rate of 41.0%. Our response rates are somewhat lower than the 60.6% and 49% reported, respectively, by Sigmon (1995) and by Sigmon, Boulard, and Whitcomb-Smith (2005) for mail surveys of clinical authors about their ethical practices but higher than the 32% response rate reported by Newman, McCoy, and Rhodes (2002) for a mail
survey of medical and psychiatric researchers. In addition, our response rates are within the range
of response rates found in a meta-analysis of e-mail and Web-based surveys in psychological re-
search (C. Cook, Heath, & Thompson, 2000).

ETHICS AND DEBRIEFING

Debriefing and Deception Research


Tesch (1977) regarded the ethical function of debriefing as a means of "undoing deceptions" (p. 217). Debriefing was tied to deception in two influential articles from the mid-1970s by Holmes (1976a, 1976b). Holmes argued that debriefing serves two purposes with regard to deception: dehoaxing, or revealing to participants the nature of the deception, and desensitizing, or eliminating harmful effects of research participation stemming from deception. The frequency of reports of debriefing grew with the popularity of deception. By the end of the 1970s, more than half of the empirical studies in JPSP employed deception (Adair, Dushenko, & Lindsay, 1985). In the same vein, reports of debriefing in JPSP rose from 29% of studies published in 1971 to 45% of studies published in 1979.
Tesch (1977) did not question the relationship between deception and debriefing; what he did question was the evidence for the effectiveness of debriefing after deception. Tesch commented that "the paucity of studies of debriefing effectiveness is surprising" (p. 218). Such evidence remains elusive. Greenberg and Folger (1988) summarized the results from five studies that produced inconclusive results as to the success of debriefing in dehoaxing research participants of false information. Greenberg and Folger suggested, however, that debriefing was more successful at desensitizing participants, based on the results of four studies (including the Milgram study) that found positive reactions and no negative aftereffects from being deceived.
More recent research has compared standard versus more elaborate debriefings for dehoaxing and desensitizing effectiveness. Turning first to dehoaxing, Misra (1992) provided participants with false performance feedback (Study 1) or false information about a company or service (Studies 2 and 3) followed by a conventional debriefing that described the feedback as false or an explicit debriefing that explained how false beliefs can persist after debriefing. Across the three studies, the explicit debriefing but not the conventional debriefing was found to reduce the impact of the false feedback; for example, participants' explicit ratings of their self-assessed ability to perform a similar experimental task did not differ from control group participants' ratings (Study 1). Further to dehoaxing, McFarland, Cheam, and Buehler (2007) gave participants false success or failure feedback on their ability to distinguish genuine and fake suicide notes, supposedly as a measure of social perception. Participants then received one of four types of feedback/debriefing: (a) no debriefing, (b) a standard outcome debriefing that the test feedback was false, (c) a revised outcome debriefing that the test feedback was false and the suicide note test was bogus, or (d) a process debriefing that the test feedback was false and false beliefs can persist after debriefing. Across two studies, the revised and process debriefings but not the standard debriefing were effective at eliminating the influence of false feedback on perceptions of task performance and ratings of ability. Desensitizing research participants to negative effects from deception by debriefing was examined in a study by Toy, Olsen, and Wright (1989). Participants received a minimal debriefing that revealed some aspects of the experiment were not as they appeared or a more thorough debriefing that described aspects of a mild deception involving the cost and availability of ballpoint pens. Regardless of whether the debriefing was minimal or more thorough, participants' reactions to research participation and deception were positive. That is it: aside from these few studies, there is no other current, direct evidence addressing the effectiveness of debriefing after deception.
Changing research interests, new methodologies, and the constraints imposed by the ethics codes themselves led some to predict that deception of research participants would become less frequent (Adair, 2001; Nicks, Korn, & Manieri, 1997) and the need for debriefing would decline as a consequence. Yet deception has hardly disappeared from the research landscape. Recent reviews have found from one third to one half of current JPSP studies employ deception (Epley & Huff, 1998; Nicks et al., 1997; Sieber, Iannuzzo, & Rodriguez, 1995). Kimmel (2001) found that researchers in 32.6% of JPSP studies published in 1996 provided false information to participants and that a further 30.1% failed to convey some element of the research to participants. Thus, deception has maintained a significant presence in social psychological research. Indeed, there has been a recent replication of the Milgram obedience study (Burger, 2007) that received IRB approval.
In spite of the persisting popularity of deception, our examination of the Method sections of JPSP articles found slightly less than one third (n = 45, 32.4%) included reports of debriefing. This figure is comparable to the number of reports of informed consent (n = 33, 23.7%), but it is substantially smaller than the number of reports of participant remuneration (n = 108, 77.7%). More telling, typical descriptions of debriefing in our JPSP studies were very brief: "Participants in this and all following experiments were debriefed prior to dismissal"; "After completing this task, participants completed a brief demographics questionnaire and were debriefed"; and "Participants were also debriefed and thanked for their assistance."
A failure to report debriefing in published articles clearly does not prove participants were not debriefed. We therefore queried JPSP authors regarding their debriefing practices. Twenty-seven of the 47 respondents did not report debriefing participants in the Method sections of their published articles. However, 22 of those 27 indicated when surveyed that they had indeed debriefed their participants. All 5 JPSP authors who responded to our survey and had not debriefed their participants commented that there was no deception in their studies. One caveat is that our survey results may overestimate the frequency of debriefing; those authors who had mentioned debriefing in their published article were more likely to respond to our survey. Of the 45 authors who mentioned debriefing in their articles, nearly half (n = 20, 44.4%) responded to our survey. For those JPSP authors who responded to our survey, debriefing went hand-in-hand with deceptive research practices. More than half (n = 27, 57.4%) admitted they had purposefully withheld information about the study from their participants, and all but 1 of these 27 authors reported they had debriefed their participants. Similarly, approximately half (n = 23, 48.9%) indicated they had purposefully misled their participants about some aspect of the study, and all 23 of these authors reported debriefing their participants.

Debriefing and Nondeception Research

As is clear from both our literature review and our survey findings, debriefing is considered first and foremost to be a corrective for deception. What is the status of debriefing in nondeception research? There is a widely held view that questionnaire and interview studies are intrinsically safe (Evans et al., 2002). A report written for the American Association of University Professors (Committee A on Academic Freedom and Tenure, 2006) called for their exclusion from the requirement of review by IRBs. Authors of a letter published in the British Medical Journal called for a separate ethics submission procedure for medical surveys. In that letter, Kaur and Taylor (1996) stated confidently, "Epidemiological surveys based on questionnaires are safe. They are non-invasive, do not involve a hazardous procedure or a controlled trial, and do not breach any medical confidentiality" (p. 54).
This assumption has been called into question by recent investigations examining reactions of participants with a history of trauma exposure to research participation. Newman and Kaloupek (2004) summarized 12 trauma studies, and Jorm, Kelly, and Morgan (2007) evaluated 46 trauma and psychiatric research studies. Two consistent findings emerged from these reviews, irrespective of the source of the trauma, the nature of the psychiatric disorder, the research methodology, or the specific questions asked. The good news is that the majority of participants experience no ill effects from their research involvement. The bad news is that a small but substantial number of participants report distress from their research involvement: less than 1% to as high as 9% in the Newman and Kaloupek review, and less than 10% from psychiatric research but higher from trauma-related research in the Jorm et al. review.
One response to the highlighting of negative effects is to emphasize potential positive benefits of research participation. In the context of trauma research, Newman and Kaloupek (2004) drew attention to "the emotional relief that many participants identify as a benefit of the experience" (p. 383) and "insights or improvement in well-being that result from reflecting on traumatic life events in a safe context" (p. 385). Pennebaker and colleagues have found emotional disclosure after traumatic events to improve psychological functioning and physical health (for reviews, see Pennebaker, 2003; Smyth, 1998). However, there are important differences between emotional expression as part of therapy and as a consequence of research involvement. First, the typical research participant rarely has an opportunity to communicate extensively with experimenters verbally or in writing. Second, emotional expression after trauma is a therapeutic process conducted by trained therapists, not researchers. Third, emotional expression studies typically last only 15 to 30 min a day for 1 to 4 days (Pennebaker, 1997).
Accepting that there can be benefits to participation does not permit us to ignore, nor does it serve to negate, the substantial number of participants' self-reports of distress. Newman and Kaloupek (2004) saw this distress as reflecting involvement in the research and not necessarily harm. Clearly not all distress is harmful; powerful reactions by participants to research have been seen even when the overall evaluation of the research is uniformly positive (e.g., A. S. Cook & Bosley, 1995). This distress is admittedly anecdotal, self-reported, and rarely verified independently, as is true for most benefits attributed to research involvement. Most individuals tolerate research participation well, and researchers are under no ethical obligation to remedy participants' preexperimental psychological problems. Nevertheless, it would be prudent to heed Newman, Willard, Sinclair, and Kaloupek (2001) that all studies have effects, potentially negative and positive, and that all research samples can include individuals at risk, and to heed Newman and Kaloupek's directive that investigators need to devise protocols "to minimize all potential risks" (pp. 383–384). As part of such a protocol, debriefing could serve to ensure that any distress resulting from research participation is identified, addressed, and minimized.
The results from our survey of JTS authors provide a glimpse into current trends in debriefing practices within trauma research. Not surprisingly, deception was rare. Only three JTS authors (6.0%) indicated that information regarding the nature of the study was purposefully withheld from participants, and only one JTS author (2.0%) indicated that participants were purposefully misled about some aspect of the study. Given the scarceness of deception among trauma researchers, it is not surprising that debriefing is far less commonly reported in JTS studies compared to JPSP studies. Fewer than 1 in 10 JTS studies (n = 9, 6.7%) mentioned debriefing research participants in their Method sections. Informed consent was far more frequently stated than was debriefing (n = 90, 67.2%). Comparable to JPSP studies, the descriptions of debriefing in JTS studies were very cursory, often not more than a sentence. In a study of thought suppression as a mediator between mood and posttraumatic stress in female victims of sexual assault, for example, "participants were debriefed following participation."
As was the case with our survey of JPSP authors, JTS authors reported debriefing research participants more frequently than is implied by what is written in Method sections. Slightly more than half of the 50 JTS authors who responded to our survey (n = 27, 54%) indicated having debriefed their research participants. This frequency of debriefing by trauma researchers is comparable to that from two surveys of authors from clinical psychology journals: 73% of authors in Sigmon (1995) and 50% of authors in Sigmon et al. (2005). Comments from JTS authors in response to why their participants were or were not debriefed were many and varied. Unlike JPSP authors, only 1 JTS author made mention of deception. Informed consent was mentioned by 5 authors. Participants' awareness of what was going to be asked of them through informed consent was highlighted, suggesting such awareness substitutes for debriefing. In this vein, 1 author described the choice of informed consent in lieu of debriefing as "a brief rather than debrief." Thus, JTS authors seemed to link debriefing practices to informed consent rather than to deception as was found with JPSP authors, but the difference is in some ways trivial. When the nature and purpose of the study is clear, both JPSP and JTS authors regard debriefing as unnecessary.

EDUCATION AND DEBRIEFING

Tesch (1977) also spoke of an educational function for debriefing. Brody, Gluck, and Aragon (2000) explained, "The debriefing session is the formal vehicle that allows researchers to explain the purpose and significance of their research to participants, correct misperceptions, and relate the research experience to the course work" (p. 15).

Debriefing and Participant Pools

Tesch (1977) observed that many research participants in social psychology are undergraduates for whom participation is a course requirement. In our examination of JPSP studies, more than three fourths of the samples were composed of undergraduate students (n = 108, 77.7%). According to Tesch, the justification for research participation as a course requirement is that students "will better understand and appreciate psychology as a behavioral science if they themselves experience the science at work" (p. 220). On the other hand, psychology undergraduate participant pools have been criticized for being coercive and of questionable educational value. According to Adair (2001), the requirement to participate in an experiment for marks is "a strong rival to deception as a target for ethical criticism and challenge" (p. 33). Such coercion may be justifiable if research participation is indeed educational. Sieber and Saks (1989) scrutinized the handouts given by psychology departments to researchers seeking to access undergraduate students. Some handouts asked researchers to provide educational feedback during debriefing, but it was not clear to Sieber and Saks if this was done, how it was done, and if it was done in a timely manner.
Surveys of undergraduates provide another means for evaluating the educational value of research participation and debriefing. Introductory psychology students surveyed by Sharpe, Adair, and Roese (1992) gave the debriefings they received a midrange rating of 4.7 on an 8-point scale, with higher ratings indicating more positive perceptions. Brody et al. (2000) reported on the debriefing experiences of 65 undergraduate students. Forty-one percent thought the debriefings they had received had been performed well, but 29% responded the debriefings were unclear or left questions unanswered, and 12% described the debriefings as brief. Positive reactions to debriefings were associated with researchers providing thorough explanations of the research procedures, highlighting the relevance of the research objectives, and expressing positive attitudes toward student participants. Debriefings that failed to provide specific results or that merely repeated what was stated in the informed consent were not viewed positively. Although few of Coulter's (1986) undergraduate students objected to participation on ethical grounds, many questioned its value. Her students viewed experiments and debriefings as "boring, irrelevant, and a waste of time" (p. 317). Coulter herself criticized debriefing statements that are "tersely written and vaguely worded paragraphs either read aloud or passed out as handouts to the students when they left the laboratory" (p. 317). Furthermore, none of Coulter's students could report on the purpose or design of the experiments in which they had participated. Admittedly, some of the reluctance by researchers to provide comprehensive educational debriefings is a function of student disinterest. Some undergraduate students admitted to Brody et al. (2000) that their sole motivation for research service was to obtain course credit and they did not care much about the debriefing or learning from research participation. A comment expressed by one author we surveyed highlights this disinterest: "A debriefing should always be provided. Whether participants want to read (or listen) to the debriefings is up to them."
Despite this shared apathy on the part of participants and researchers, debriefings have been shown to be effective in educating participants. Allen, D'Alessio, Emmers, and Gebhardt (1996) conducted a meta-analysis of 10 studies that looked at the role of educational debriefings for research participants exposed to sexually explicit material. The fictional nature of the sexually explicit material was emphasized in debriefings. Six of these 10 studies found a positive impact such that rape myths were believed less after debriefing.

Debriefings and Nonstudent Samples

The educational value of debriefings is not limited to university students and undergraduate partici-
pant pools. Debriefings can serve to educate nonstudents about the nature, value, and outcome of
their research participation. This issue has not been debated in the social psychology or trauma liter-
atures, but the provision of feedback after participation has been discussed by medical researchers.
For example, Roberts, Warner, Anderson, Smithpeter, and Rogers (2004) interviewed people with
schizophrenia who had been participants in research studies. According to Roberts et al.,

perhaps the most powerful findings of this small study were the strong preferences expressed about
debriefing. The schizophrenia research participants we interviewed were very interested in receiving
information about the scientific findings from the protocols to which they had contributed. (p. 289)

Despite studies that demonstrate being informed of research results is important to participants, just under half of the medical researchers surveyed by Di Blasi, Kaptchuk, Weinman, and Kleijnen (2002) informed participants of their assignment to treatment or placebo conditions. Reasons given for not informing participants included not having considered doing so, not wanting to bias the results of the study, not wanting to perform the extra work required to inform participants, and not believing participants had a right or need to know their assignment to conditions. Other authors have pointed to the financial and logistical costs of disseminating research results to a large number of widely dispersed participants, the effort required to prepare information in a manner that can be understood by participants, and the distress that might be induced in those who did not benefit from the intervention, who were harmed by it, or who simply did not want to know the results (Eriksson & Helgesson, 2005; Fernandez, Skedgel, & Weijer, 2004; Partridge & Winer, 2002). Arguments for disclosure included confirming the importance of the participants' involvement in the research, reducing any perceptions of exploitation of the participants by the researcher, providing information that could improve the quality of life of research participants, offering more accurate findings to research participants than available in the media, and improving the public's perception of research (Fernandez, Kodish, Taweel, Shurin, & Weijer, 2003).

METHODOLOGY AND DEBRIEFING

Tesch (1977) pointed to a third function of debriefing: the methodological function. According to Tesch, postexperimental interviews conducted as part of a debriefing serve a variety of purposes. These purposes include

to judge the degree to which the manipulations and measures were appropriate and effective, to determine the extent and accuracy of participants' suspicions, and to verify participants construed the situation as intended (e.g., accepted the cover story) and were involved with it (i.e., experimental realism). (p. 219)

Historically, the methodological function of debriefing has been isolated from its ethical and edu-
cational functions. Greenberg and Folger (1988) summarized research pertaining to the ethical
and educational functions in a debriefing chapter, but the methodological function was discussed
in a chapter on postexperimental inquiries. This may be because the ethical and educational func-
tions are of benefit primarily for participants; the methodological function is of benefit primarily
to researchers.
Tesch (1977) believed that support for the effectiveness of the methodological function was not strong because participants may not view the experimental and debriefing phases of a research study as distinct. Experimenters hope the participant will act as "an open, honest and collaborative debriefer" (p. 220), but the participant may maintain a pact of ignorance (Orne, 1962) and not reveal what he or she has discovered during participation in the experiment. Nonetheless, there is value to such an inquiry. Blanck, Bellack, Rosnow, Rotheram-Borus, and Schooler (1992) provided two examples. The first is a research participant revealing awareness of being assigned to the placebo control condition. Thus, one benefit to methodological debriefing is as a manipulation check. A second example is adolescent participants in an AIDS prevention program reporting sexual encounters during debriefing not admitted to otherwise. Thus, a second benefit to methodological debriefing is to identify problems with the present study and directions for future research.
D'Angiulli and Smith LeBeau (2002) highlighted other valuable information that might be obtained from questioning participants during debriefing. Was the participant bored? Was the experiment frustrating, or did it cause any discomfort? Was the experimental task seen by participants as different from what was described in the informed consent form or the sign-up sheet? Asking questions of participants at the end of a study may yield unanticipated responses. After being puzzled by performance in a modified prisoner's dilemma game, Frohlich and Oppenheimer (1999) were led to ask questions of their participants.

To make a long story short, the answers to those questions explained a lot.
Participants' disbelief in the experiment explained a good deal of their behavior. Much of what appeared to be purely selfish behavior was just a result of disbelief in the experimental design or interpretation of it as a game. When we were able to convince them, by changes in the design, that others were really there and would get the money, behavior changed radically. (p. 497)

This particular example points to the methodological benefits of an in-depth debriefing process that seeks to improve a study by means of participant input. As stated by Sieber (1992), "the interpretation and application of findings are strengthened by thoughtful discussion with participants. Many a perceptive researcher has learned more from the debriefing process than the data alone could ever reveal" (p. 40).
Why the hesitation on the part of researchers to ask participants questions about their research involvement during debriefing? Some researchers may fear debriefed participants will divulge details of the research to other potential participants, will become suspicious of what transpired, or will not reveal their true feelings or thoughts. More likely, researchers may believe participants' statements will provide nothing of value. Furthermore, some participants may report negative emotions arising from participation. Reports of distress or boredom by participants are no doubt disturbing to researchers, but these reports should be "viewed as evidence of good research practice rather than to assign fault" (Kassam-Adams & Newman, 2002, p. 340).
When Newman et al. (2002) surveyed medical and psychiatric researchers, the methodological function for debriefing rated far lower than educating participants, checking on participants, and expressing gratitude. We asked the authors of JTS and JPSP studies who had debriefed their participants to rate three statements from Tesch (1977) reflecting the three functions of debriefing: "I employed debriefing in my study for ethical reasons such as to make certain participants did not leave my experiment feeling less positive about themselves than when they entered the experiment," "I employed debriefing in my study for educational reasons such as to provide participants with information about the study and the nature of their participation," and "I employed debriefing in my study for methodological reasons such as to judge the degree to which the manipulations and measures were appropriate and effective." Ratings were on a 5-point Likert scale from strongly disagree to strongly agree, with higher values indicating greater agreement. There were no differences in ratings between journals, but authors endorsed the ethical (M = 4.11) and educational statements (M = 4.46) for conducting a debriefing more so than the methodological statement (M = 2.43).

DISCUSSION

Tesch (1977) described the method of debriefing as having three functions. The ethical function has been linked almost exclusively to deceptive research practices. There is little evidence that debriefing removes any negative effects incurred as a result of deception or that debriefing helps deceived participants understand why they were deceived. At best, a small body of studies suggests that a more extensive debriefing may provide greater relief than a standard debriefing.
What is clear is that deceptive research practices persist in social psychology. What is also clear is that debriefing has been inexorably linked, by history, by research practices, and by codes of ethics, to deception. When deception is practiced, there is no other method at our disposal to address the failure to provide research participants with true informed consent. Deception reflects a utilitarian tradition that, according to Principle C of the code of ethics of the American Psychological Association (2002), "maximizes benefits and minimizes costs" (p. 1062). Deception and the utilitarian perspective are reflected in the sections of the code that speak to deception (Standard 8.07) and debriefing (Standard 8.08). Standard 8.07 addresses deception but directs readers to Standard 8.08, Debriefing. In Standard 8.08(a), debriefing functions as a response to deception, "to correct any misconceptions that participants may have of which the psychologists are aware." The utilitarian perspective emerges in Standard 8.08(c): when psychologists "become aware that research procedures have harmed a participant, they take reasonable steps to minimize the harm." Thus, an effective debriefing is important from a utilitarian perspective as a remedy to any harm from deception.
Deceiving research participants is not permitted from a deontological perspective that focuses on respecting the rights of research participants to free choice (Principle E of the American Psychological Association code). Informed consent rather than deception reflects this deontological tradition (see Pittenger, 2002), insofar as "the voluntary consent of the human subject is absolutely essential" (Nuremberg Military Tribunal, 1949). Adopted by psychologists from medicine, the emphasis placed on informed consent in psychology is so great that Adair (1988) once commented that "there is an erroneous view that the problem of informed consent is so fundamental that once it is solved, other ethical problems will fall into place" (p. 825). We saw this emphasis on informed consent in the Method sections we reviewed and in the comments of the authors we surveyed.
If one regards debriefing as no more than the reflexive response to deception, then debriefing would not be relevant from a deontological perspective that forbids deception (Kimmel & Smith, 2001). However, a case for debriefing can be made from a deontological perspective as an essential means for researchers to show respect for the rights of research participants in research studies that have no elements of deception. This argument was reflected in a comment by one of the JPSP authors we surveyed. According to this author, debriefing was a matter of courtesy: "If participants are willing to make the effort to come in, I think it's just professional courtesy to tell them what we are doing and how they are contributing to science. I think it's a message of appreciation." The educational function of debriefing presents yet another opportunity to show respect to research participants by providing university undergraduates with a learning experience, and by offering all participants (university students and nonstudents) feedback about their own research involvement. In doing so, it is crucial to assess the feedback desired by research participants and to ascertain how that feedback might best be transmitted in an educational, ethically appropriate, and engaging manner.

The methodological contributions that debriefing can offer are perhaps the least well understood. The present review of the literature suggests that open discussions with participants during the debriefing phase may contribute substantially to the quality of the research. In-depth debriefings also have the potential to open up new and fruitful avenues for future research. Despite this potential, our survey results imply that researchers do not value debriefing for its methodological contributions.

IMPROVING DEBRIEFING PRACTICES

Despite the lack of attention to the ethical, educational, and methodological functions of debriefing, many of our survey respondents expressed a belief in the importance of debriefing. We asked them if debriefing is (a) necessary in all research studies, (b) necessary in some research studies, or (c) not necessary in most research studies. Only a small number of JPSP authors (n = 5, 10.6%) and JTS authors (n = 8, 16.0%) indicated debriefing was not necessary in most research studies. Responses to this survey question and others show researchers to be thoughtful and interested when it comes to their debriefing practices. We therefore offer the following suggestions to improve debriefings.
First and foremost, we need more data pertaining to debriefing methods. How effective is debriefing for participants? What types of debriefing practices best ameliorate distress? How can we ensure that participants walk away with a better understanding of the research? In turn, how can we structure debriefings in a way that improves the quality of our research? These are questions that must be answered before we can begin to reach an informed consensus regarding debriefing practices. Empirical data demonstrating the effectiveness of debriefing practices would also strengthen the case for researchers seeking approval from IRBs for ethically sensitive research.
Data are also needed to address the frequently expressed fear that the debriefing itself can have negative effects on research participants. Debriefing has been regarded as problematic or inappropriate when the participant's behavior is socially undesirable (Sieber, 1992), when the participant does poorly on an experimental task (Eyde, 2000), or when the participant may be embarrassed by falling for a deception (Fisher & Fyrberg, 1994). From our perspective, these concerns highlight the need to construct and test effective and safe debriefings. Tesch (1977) acknowledged that his views of debriefing were based on "speculation, unsystematic impressions and observations, and little hard data" (p. 221). Little has changed in that regard over the last 30 years.
Second, debriefing procedures need to be identified and discussed more fully in the literature. A detailed debriefing protocol was provided by Mills (1976), but it is specific to debriefing after deception and is now dated. Another comprehensive but dated source is a booklet published by the American Psychological Association (1982) that addressed a number of debriefing-related situations, such as the need to delay debriefing, multiple-session studies, disbelief of the debriefing, and the withholding of harmful information from the debriefing. More recent sources include Bussell, Matsey, Reiss, and Hetherington (1995), who advised including information about psychological and social services as part of the debriefing, and Boothroyd (2000), who recommended having strategies in place for helping individuals who feel upset immediately after participation or whose concerns develop later.
Improvements to current debriefing practices also need to be encouraged and reported in the literature. Some of these improvements focus on the ethical function of debriefing. In the context of a test of suggestibility in which participants were given misleading memory cues after reading a story of a fictitious robbery, Oczak and Niedzwienska (2007) compared a standard debriefing to an extended debriefing. The extended debriefing included not only an explanation to participants regarding how they had been misled, but also an opportunity to demonstrate their understanding of the debriefing by not falling for misleading memory cues after reading a second related story. Not only did Oczak and Niedzwienska find less suggestibility after the extended debriefing, but they also found those participants to be in a better mood and to hold more positive views of research. Oczak and Niedzwienska's results have implications for both the ethical and educational functions of debriefing. Landrum and Chastain (1995) presented a debriefing questionnaire that could serve to check whether research participation is seen as educational by students, and Tangney (2000) offered suggestions as to how debriefings can be made more educational by identifying what constitutes good educational practices. Methodologically, Vohra (2001) offered guidance for improving postexperimental questioning by forewarning participants and by priming cooperative experimental sets, and Stewart (1992) gave guidelines for postexperiential debriefing that seeks to provide insights into the meaning of participation. These potential improvements and useful guidelines are seldom acknowledged in the literature.
Third, current debriefing practices need to be reported more frequently and comprehensively in the Method sections of journal articles. We asked the authors in our survey where they obtained guidance in constructing their debriefings. IRBs for JPSP authors and colleagues for JTS authors were the most popular sources of information; the published literature was not ranked highly. Providing information about compensation and informed consent but not debriefing sends a message that this ethical practice is unimportant, and its absence denies other researchers the opportunity to be educated as to the effectiveness of debriefing practices (Kimmel, 2001; Pittenger, 2002). Adair (2001) suggested a brief paragraph in Method sections about ethical issues encountered and how those issues were addressed. With the advent of Web pages for supplementary materials, debriefing protocols could be archived there, but brief descriptions of debriefings should still be presented in Method sections.
Fourth, consideration should be given to formalizing debriefing practices. Brody et al. (2000) reported considerable variation in how debriefing sessions are structured, how much time is allocated to debriefing, how much information is provided, and how participant questions are addressed. IRBs scrutinize informed consent forms; typically, written debriefings are not given the same attention. IRBs could say more about debriefing and could draw attention to the importance of debriefing in nondeception research. The IRB application instructions at the first author's institution say only this about debriefing: "Describe any debriefing procedures that will be used. (Note that if deception is used, debriefing is necessary)." Once again, the contrast between how informed consent and debriefing are treated is striking. Almost three fourths of JPSP authors (n = 34, 72.3%) indicated that debriefings were required by their IRBs, but only 28.0% (n = 14) of JTS authors made the same affirmation. In contrast, when asked if their IRBs required informed consent, an overwhelming 87.2% (n = 41) of JPSP authors and 96.0% (n = 48) of JTS authors answered in the affirmative. Authors from both journals commented in great detail about the very specific criteria required for informed consent forms. A few copied their IRBs' requirements for the content of informed consent forms verbatim. Clearly, IRB policies around debriefing need to be flexible and to be updated as methodologies change. For example, online experiments present new ethical challenges (see Azar, 2000; Keller & Lee, 2003), but the Internet also provides an opportunity to make debriefing more interactive (Fraley, 2007).

Fifth, the educational value of debriefings should be raised. Both Davis and Fernald (1975) and Tesch (1977) recommended that undergraduate students write reports on their research participation. Richardson, Pegalis, and Britton (1992) asked their students to write such reports, specifically commenting on the topic of the research, how data were collected, the hypotheses of the study, the practical applications for the research, and how the study related to what was covered in class. Not surprisingly, students who were required to write these reports were more interested in receiving debriefings, but they also evaluated their research participation more positively than other students. Course credit awarded for research participation should also be given for writing such reports and for attending debriefing sessions (Perry & Abramson, 1980).
Sixth, the experimenter–participant relationship needs reworking. D'Angiulli and Smith LeBeau (2002) argued that participants' passive receipt of information from researchers led them to construe debriefing specifically, and experimental participation generally, to be a waste of time. Where appropriate, there should be a bidirectional exchange between participant and researcher (Sieber, 1992). Researchers would learn how participants view the experimental task, what makes sense and what does not, and what the participants think it was all about. Participants would learn about doing research: the joys and the frustrations, and the excitement of discovery.
Finally, systematic data should be gathered about the research experience from the perspective of the participants. This might include asking participants to evaluate the researcher, the clarity of informed consent forms, their satisfaction with the debriefing, and the overall research participation experience (see Labott & Johnson, 2004; Sieber & Saks, 1989). The Reactions to Research Participation Questionnaire (Newman et al., 2001) asks participants to indicate agreement with 23 statements such as "I gained something positive from participating" and "I found the questions too personal." The results of such investigations could provide guidance to ethics review committees, who are often left to speculate on the impact of research procedures on participants in the absence of such data (see Pollick, 2008).

CONCLUSION

Thirty years ago, Tesch (1977) observed there was a method to debriefing, but the lack of evidence for the effectiveness of debriefing, coupled with an unfounded faith in the ability of debriefing to erase ethical problems, led Tesch to conclude this method was madness. Are our current debriefing practices best characterized as method or madness? The present review suggests that Tesch's observations are as relevant today as they were 30 years ago. Debriefing practices occupy a marginal place in the ethical practices of psychological researchers, and the discipline continues to pay little attention to best practices when it comes to debriefing protocols. Furthermore, few attempts have been made to incorporate novel debriefing practices, and there is little recognition of the potential educational and methodological gains that might be achieved from a carefully crafted debriefing. Such improvements are further hindered by the comparatively few articles that have been devoted to this important topic, and researchers have few resources when it comes to expanding this part of their work.
Tesch's (1977) call for more attention to debriefing practices remains unanswered. However, we go further than Tesch in that we do not see debriefing as merely a corrective for deception. Tesch situated debriefing in the context of deception in social psychology, understandable given his audience of social psychologists, the salience of deception, and the temporal proximity of the Milgram studies. In this review, our focus has been on social psychology research and trauma research, but much greater consideration of debriefing is demanded in other research domains. Instead, we have come to rely almost exclusively on informed consent, a valued approach that addresses legal concerns and that provides fair warning to participants, but an approach that does not address the consequences of our actions as researchers. Progress will be made when researchers recognize the importance of debriefing or when some unfortunate circumstance forces such recognition. Therefore, 30 years after Tesch, there continues to be much madness in our method with regard to debriefing.

ACKNOWLEDGMENTS

We thank Gitte Jensen for her assistance in conducting the survey and two anonymous reviewers
for their comments.

REFERENCES

Adair, J. G. (1988). Research on research ethics. American Psychologist, 43, 825–826.
Adair, J. G. (2001). Ethics of psychological research: New policies; continuing issues; new concerns. Canadian Psychology, 42, 25–37.
Adair, J. G., Dushenko, T. W., & Lindsay, R. C. L. (1985). Ethical regulations and their impact on research practice. American Psychologist, 40, 59–72.
Allen, M., D'Alessio, D., Emmers, T. M., & Gebhardt, L. (1996). The role of educational briefings in mitigating effects of experimental exposure to violent sexually explicit material: A meta-analysis. Journal of Sex Research, 33, 135–141.
American Psychological Association. (1982). Ethical principles in the conduct of research with human participants. Washington, DC: Author.
American Psychological Association. (2002). Ethical principles of psychologists and code of conduct. American Psychologist, 57, 1060–1073.
Azar, B. (2000). Online experiments: Ethically fair or foul? Monitor on Psychology, 31. Retrieved March 8, 2007, from http://www.apa.org/monitor/apr00/fairorfoul.html
Blanck, P. D., Bellack, A. S., Rosnow, R. L., Rotheram-Borus, M. J., & Schooler, N. R. (1992). Scientific rewards and conflicts of ethical choices in human subjects research. American Psychologist, 47, 959–965.
Boothroyd, R. A. (2000). The impact of research participation on adults with severe mental illness. Mental Health Services Research, 2, 213–221.
Brody, J. L., Gluck, J. P., & Aragon, A. S. (2000). Participants' understanding of the process of psychological research: Debriefing. Ethics and Behavior, 10, 13–25.
Burger, J. (2007). Replicating Milgram. American Psychological Science Observer, 20(11).
Bussell, D. A., Matsey, K. C., Reiss, D., & Hetherington, M. (1995). Debriefing the family: Is research an intervention? Family Process, 34, 145–160.
Committee A on Academic Freedom and Tenure. (2006). Research on human subjects: Academic freedom and the institutional review board. Washington, DC: American Association of University Professors. Retrieved November 9, 2006, from http://www.aaup.org/AAUP/comm/rep/A/humansubs.htm
Cook, A. S., & Bosley, G. (1995). The experience of participating in bereavement research: Stressful or therapeutic? Death Studies, 19, 157–170.
Cook, C., Heath, F., & Thompson, R. L. (2000). A meta-analysis of response rates in web- or internet-based surveys. Educational and Psychological Measurement, 60, 821–836.
Coulter, X. (1986). Academic value of research participation by undergraduates. American Psychologist, 41, 317.
D'Angiulli, A., & Smith LeBeau, L. (2002). On boredom and experimentation in humans. Ethics and Behavior, 12, 167–176.
Davis, J. R., & Fernald, P. S. (1975). Psychology in action: Laboratory experience versus subject pool. American Psychologist, 30, 523–524.
Di Blasi, Z., Kaptchuk, T. J., Weinman, J., & Kleijnen, J. (2002). Informing participants of allocation to placebo at trial closure: Postal survey. British Medical Journal, 325. Retrieved March 8, 2007, from http://www.bmj.com/cgi/reprint/325/7376/1329.pdf
Elms, A. C. (1975). The crisis of confidence in social psychology. American Psychologist, 30, 967–976.
Epley, N., & Huff, C. (1998). Suspicion, affective response, and educational benefit as a result of deception in psychology research. Personality and Social Psychology Bulletin, 24, 759–768.
Eriksson, S., & Helgesson, G. (2005). Keep people informed or leave them alone? A suggested tool for identifying research participants who rightly want only limited information. Journal of Medical Ethics, 31, 674–678.
Evans, M., Robling, M., Maggs Rapport, F., Houston, H., Kinnersley, P., & Wilkinson, C. (2002). It doesn't cost anything just to ask, does it? The ethics of questionnaire-based research. Journal of Medical Ethics, 28, 41–44.
Eyde, L. D. (2000). Other responsibilities to participants. In B. D. Sales & S. Folkman (Eds.), Ethics in research with human participants (pp. 61–73). Washington, DC: American Psychological Association.
Fernandez, C. V., Kodish, E., Taweel, S., Shurin, S., & Weijer, C. (2003). Disclosure of the right of research participants to receive research results: An analysis of consent forms in the Children's Oncology Group. Cancer, 97, 2904–2909.
Fernandez, C. V., Skedgel, C., & Weijer, C. (2004). Considerations and costs of disclosing study findings to research participants. Canadian Medical Association Journal, 170, 1417–1419.
Fisher, C. B. (2005). Deception research involving children: Ethical practices and paradoxes. Ethics and Behavior, 15, 271–287.
Fisher, C. B., & Fyrberg, D. (1994). Participant partners: College students weigh the costs and benefits of deceptive research. American Psychologist, 49, 417–427.
Fraley, R. C. (2007). Using the internet for personality research: What can be done, how to do it, and some concerns. In R. W. Robins, R. C. Fraley, & R. F. Krueger (Eds.), Handbook of research methods in personality psychology (pp. 130–147). New York: Guilford.
Frohlich, N., & Oppenheimer, J. (1999). What we learned when we stopped and listened. Simulation and Gaming, 30, 494–497.
Greenberg, J., & Folger, R. (1988). Controversial issues in social research methods. New York: Springer-Verlag.
Harris, B. (1988). Key words: A history of debriefing in social psychology. In J. G. Morawski (Ed.), The rise of experimentation in American psychology (pp. 188–212). New Haven, CT: Yale University Press.
Holmes, D. S. (1976a). Debriefing after psychological experiments: I. Effectiveness of postdeception dehoaxing. American Psychologist, 31, 858–867.
Holmes, D. S. (1976b). Debriefing after psychological experiments: II. Effectiveness of postexperimental desensitizing. American Psychologist, 31, 868–875.
Jorm, A. F., Kelly, C. M., & Morgan, A. J. (2007). Participant distress in psychiatric research: A systematic review. Psychological Medicine, 37, 917–926.
Kassam-Adams, N., & Newman, E. (2002). The reactions to research participation questionnaires for children and for parents (RRPQ-C and RRPQ-P). General Hospital Psychiatry, 24, 336–342.
Kaur, B., & Taylor, J. (1996). Separate criteria should be drawn up for questionnaire based epidemiological studies. British Medical Journal, 312, 54.
Keller, H. E., & Lee, S. (2003). Ethical issues surrounding human participants research using the internet. Ethics and Behavior, 13, 211–219.
Kimmel, A. J. (2001). Ethical trends in marketing and psychological research. Ethics and Behavior, 11, 131–149.
Kimmel, A. J., & Smith, N. C. (2001). Deception in marketing research: Ethical, methodological, and disciplinary implications. Psychology and Marketing, 18, 663–689.
Labott, S. M., & Johnson, T. P. (2004). Psychological and social risks of behavioural research. IRB: Ethics and Human Research, 26(3), 11–15.
Landrum, R. E., & Chastain, G. (1995). Experiment spot-checks: A method for assessing the educational value of undergraduate participation in research. IRB: Ethics and Human Research, 17(4), 4–6.
McFarland, C., Cheam, A., & Buehler, R. (2007). The perseverance effect in the debriefing paradigm: Replication and extension. Journal of Experimental Social Psychology, 43, 233–240.
Milgram, S. (1963). Behavioral study of obedience. Journal of Abnormal and Social Psychology, 67, 371–378.
Miller, R. L. (2003). Ethical issues in psychological research with human participants. In S. F. Davis (Ed.), Handbook of research methods in experimental psychology (pp. 129–147). Malden, MA: Blackwell.
Mills, J. (1976). A procedure for explaining experiments involving deception. Personality and Social Psychology Bulletin, 2, 3–13.
Misra, S. (1992). Is conventional debriefing adequate? An ethical issue in consumer research. Journal of the Academy of Marketing Science, 20, 269–273.
Newman, E., & Kaloupek, D. G. (2004). The risks and benefits of participating in trauma-focused research studies. Journal of Traumatic Stress, 17, 383–394.
Newman, E., McCoy, V., & Rhodes, A. (2002). Ethical research practice with human participants: Problems, pitfalls and beliefs of funded researchers. In N. H. Steneck & M. D. Scheetz (Eds.), Investigating research integrity: Proceedings of the First ORI Research Conference on Research Integrity (pp. 105–111). Rockville, MD: Office of Research Integrity. Retrieved March 8, 2007, from http://ori.dhhs.gov/documents/proceedings_rri.pdf
Newman, E., Willard, T., Sinclair, R., & Kaloupek, D. (2001). Empirically supported ethical research practice: The costs and benefits of research from the participants' view. Accountability in Research, 8, 309–329.
Nicks, S. D., Korn, J. H., & Mainieri, T. (1997). The rise and fall of deception in social psychology and personality research, 1921 to 1994. Ethics and Behavior, 7, 69–77.
Nuremberg Military Tribunal. (1949). Trials of war criminals before the Nuremberg military tribunals under Control Council Law No. 10 (Vol. 2). Washington, DC: U.S. Government Printing Office.
Oczak, M., & Niedzwienska, A. (2007). Debriefing in deceptive research: A proposed new procedure. Journal of Empirical Research on Human Research Ethics, 2(3), 49–59.
Orne, M. T. (1962). On the social psychology of the psychological experiment: With particular reference to demand characteristics and their implications. American Psychologist, 17, 776–783.
Partridge, A. H., & Winer, E. P. (2002). Informing clinical trial participants about study results. Journal of the American Medical Association, 288, 363–365.
Pennebaker, J. W. (1997). Writing about emotional experiences as a therapeutic process. Psychological Science, 8, 162–166.
Pennebaker, J. W. (2003). The social, linguistic, and health consequences of emotional disclosure. In J. Suls & K. A. Wallston (Eds.), Social psychological foundations of health and illness (pp. 288–313). Malden, MA: Blackwell.
Perry, L. B., & Abramson, P. R. (1980). Debriefing: A gratuitous procedure? American Psychologist, 35, 298–299.
Pittenger, D. J. (2002). Deception in research: Distinctions and solutions from the perspective of utilitarianism. Ethics and Behavior, 12, 117–142.
Pollick, A. (2008). Talking with your IRBs about risk: Show them the data. American Psychological Science Observer, 21(2). Retrieved October 25, 2008, from http://www.psychologicalscience.org/observer/getArticle.cfm?ID=2297
Richardson, D. R., Pegalis, L., & Britton, B. (1992). A technique for enhancing the value of research participation. Contemporary Social Psychology, 16, 11–13.
Roberts, L. W., Warner, T. D., Anderson, C. T., Smithpeter, M. V., & Rogers, M. K. (2004). Schizophrenia research participants' responses to protocol safeguards: Recruitment, consent, and debriefing. Schizophrenia Research, 67, 283–291.
Sharpe, D., Adair, J. G., & Roese, N. J. (1992). Twenty years of deception research: A decline in subjects' trust. Personality and Social Psychology Bulletin, 18, 585–590.
Sieber, J. E. (1992). Planning ethically responsible research: A guide for students and internal review boards. Newbury Park, CA: Sage.
Sieber, J. E., Iannuzzo, R., & Rodriguez, B. (1995). Deception methods in psychology: Have they changed in 23 years? Ethics and Behavior, 5, 67–85.
Sieber, J. E., & Saks, M. J. (1989). A census of subject pool characteristics and policies. American Psychologist, 44, 1053–1061.
Sigmon, S. T. (1995). Ethical practices and beliefs of psychopathology researchers. Ethics and Behavior, 5, 295–309.
Sigmon, S. T., Boulard, N. E., & Whitcomb-Smith, S. (2002). Reporting ethical practices in journal articles. Ethics and Behavior, 12, 261–275.
Smyth, J. M. (1998). Written emotional expression: Effect sizes, outcome types, and moderating variables. Journal of Consulting and Clinical Psychology, 66, 174–184.
Stewart, L. P. (1992). Ethical issues in postexperimental and postexperiential debriefing. Simulation and Gaming, 23, 196–211.
Tangney, J. (2000). Training. In B. D. Sales & S. Folkman (Eds.), Ethics in research with human participants (pp. 97–105). Washington, DC: American Psychological Association.
Tesch, F. E. (1977). Debriefing research participants: Though this be method there is madness to it. Journal of Personality and Social Psychology, 35, 217–224.
Toy, D., Olsen, L., & Wright, L. (1989). Effects of debriefing in marketing research involving mild deceptions. Psychology and Marketing, 6, 69–85.
Vohra, N. (2001). Improving the veridicality of post experimental questionnaires. Psychological Studies, 46, 28–33.
