
The American Journal of Surgery 183 (2002) 251–255

Association for Surgical Education

Identification of teaching excellence in operating room and clinic settings
Sherralyn S. Cox, Ph.D.*, Melvin S. Swanson, Ph.D.
Department of Surgery, East Carolina University, 301-A PCMH-TA, Greenville, NC 27858-4354, USA
Manuscript received July 31, 2001; revised manuscript October 23, 2001
* Corresponding author. Tel.: +1-252-816-5353; fax: +1-252-816-3156. E-mail address: coxs@mail.ecu.edu.

Abstract
Background: A system for obtaining learner feedback on surgical faculty teaching is a program-specific resource for recognizing faculty accomplishments as well as a requirement of the Accreditation Council for Graduate Medical Education (ACGME). This investigation uses 5 years of feedback from residents to identify surgical teaching behaviors that define teaching excellence.

Methods: Between 1995 and 1999, full-time surgeons in a division of general surgery were evaluated biannually by every resident on their services, using two 10-item Likert scales to assess the frequency of selected teaching behaviors. Response categories ranged from 0 (does not demonstrate) to 4 (demonstrates the behavior to a very high degree). Mean scores ≥3.7 (1 SD above the mean) were categorized as evidence of superior teaching, whereas mean scores ≤2.4 (1 SD below the mean) were categorized as mediocre. Residents also wrote statements identifying teaching strengths.

Results: There were 753 individual resident assessments of 16 faculty. The overall mean rating for operating room and clinic teaching was 3.1, with 24% of the ratings ≥3.7 and 14% of the ratings ≤2.4. For the operating room, discriminant behaviors were: demonstrates sensitivity to resident learning needs (3.85 versus 1.62, P <0.01) and provides direct feedback (3.60 versus 1.27, P <0.01). Residents' statements yielded themes tied to superior teaching: demonstrates technical expertise, allows resident participation, and maintains a learning climate of respect.

Conclusions: A resident-based teaching assessment system can offer a reasonable and valid form of feedback to academic surgeons. The use of mixed methods to identify teaching behaviors that characterize excellence informs faculty of how they are perceived as educators and provides examples of specific behaviors that merit commendation. © 2002 Excerpta Medica, Inc. All rights reserved.
Keywords: Teaching; Assessment; Evaluation; Surgical teaching; Resident learners

It can be challenging to implement an effective teaching agenda under any circumstances. When one combines the normal demands of teaching with the complex settings that comprise the general surgery residency, one is presented with a veritable minefield of challenges. Most surgical faculty have little specific indication of their effectiveness as instructors. Even the most talented teachers can wonder, "How am I doing?" The academic surgeon's first concern is meeting his or her patients' needs in the operating room, clinic, hospital ward, and personal office; but he or she has also taken on the role of educator or professor. One's performance in that role can be difficult to gauge [1–7].
Critique of the teaching program and of individual teacher performance is but one aspect of being an educator, of instituting and maintaining an education program, and of meeting the Accreditation Council for Graduate Medical Education (ACGME) accreditation requirements for evaluation. For the purpose of this study, we assumed several working definitions. Assessment refers to value-free measurement that allows one to determine the degree to which an individual possesses a certain attribute; in the current study, a numerical index is given to a surgeon's measured teaching performance that equates with status determination. Evaluation is the determination of worth: the formal attempt to use assessment data to reach a judgment regarding teaching merit or quality [8]. Feedback refers to the anonymous numerical assessments and evaluative written comments provided by a learner to his or her instructors throughout the years of surgical residency.
Several years ago, our department's surgical education committee reviewed the ways in which we evaluated surgeons' teaching in our surgical residency program. We asked many questions to guide our review, among them: How can we obtain valid feedback on teaching
from our residents while ensuring their anonymity? Which teaching behaviors will be our focus? How can feedback to faculty be helpful without being punitive? Our program's previous teaching evaluations of surgeons had always been completed by a low percentage of residents. Therefore, we designed a process that could provide valid indexes of faculty teaching performance through assessment of teaching behaviors by the resident learners, who would participate because their anonymity was assured. Over several months in our program, the residents would have worked closely under the tutelage of the faculty surgeons whom they were to assess; the residents would know these surgeons' teaching skills well.

Evaluation of teaching by learners has been studied for decades, especially since the 1960s call for accountability and value-added higher education. Although the research is divided, learners have come to be regarded as a valid source of feedback on teaching [9–19]. Resident learners are in close proximity to their teachers over extended periods of time, and they are present on good as well as bad teaching days. Residents can provide an internal comparison between surgeons who are of the same faculty status.

We understood the significance of creating and maintaining a process that would be acceptable to our faculty, our academic administrators, and our residents. To be acceptable, the process had to be regarded as capable of measuring what it was intended to measure: teaching performance. It had to document teaching behaviors reliably. It had to be seen as fair to every surgeon. It had to maintain resident anonymity. Finally, it had to document teaching in a way that could be useful for such purposes as chairman performance conferences with faculty, for optional placement in promotion and tenure dossiers, and for reference in post-tenure review documentation files.

Design of the instrument began in 1994. It was developed following procedures that included reviewing the literature, collecting the perceptions of professionals in the fields of surgery and education, soliciting the opinions of residents, consolidation by a panel of judges, and refinement after statistical analysis. Effective teaching behaviors were identified and verified for site appropriateness by groups of individuals serving as experts and as stakeholders. Ten behaviors were developed for each teaching setting; Table 1 and Table 2 display these behaviors. These 20 teaching behaviors formed the basis of the teaching evaluation instrument that was field tested in our residency program and modified in October 1994. The purpose of this study was to analyze 5 years of feedback from residents and to identify surgical faculty teaching behaviors characteristic of teaching excellence in operating room and clinic settings.

Table 1
Operating room teaching behaviors in a resident-based assessment system

OR-1. Describes upcoming surgical procedure, including operative approach, rationale, and alternatives.
OR-2. Discusses expected patient outcomes and possible complications.
OR-3. Clarifies resident roles and responsibilities.
OR-4. Demonstrates technical skills with confidence and expertise.
OR-5. Permits resident participation in procedures according to ability.
OR-6. Demonstrates awareness and sensitivity to resident learning needs.
OR-7. Answers questions clearly and precisely.
OR-8. Stimulates residents to think critically and problem solve.
OR-9. Provides direct and ongoing feedback regarding resident progress.
OR-10. Maintains climate of mutual respect for all members of health care team.

Table 2
Clinic teaching behaviors in a resident-based assessment system

C-1. Orients residents to practice setting and role expectations.
C-2. Outlines objectives and expected outcomes for procedures.
C-3. Develops and sustains a positive learning atmosphere.
C-4. Permits resident participation in procedures according to ability.
C-5. Shares up-to-date knowledge of developments in the field.
C-6. Provides ample opportunity for residents to teach.
C-7. Encourages resident questions and active participation.
C-8. Gives residents positive reinforcement.
C-9. Provides direct and ongoing feedback regarding resident progress.
C-10. Maintains climate of mutual respect for all members of health care team.
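As an illustration of how data gathered with this instrument might be handled programmatically, the sketch below represents one hypothetical completed assessment. The item codes follow Tables 1 and 2, and the 0 to 4 response categories and the "insufficient observation to judge" option follow the Methods section below; every field name and sample value is ours for illustration and is not part of the authors' system.

```python
# Sketch of how one completed assessment might be represented; the structure,
# field names, and sample values are hypothetical, not the authors' system.
OR_ITEMS = [f"OR-{i}" for i in range(1, 11)]      # codes from Table 1
CLINIC_ITEMS = [f"C-{i}" for i in range(1, 11)]   # codes from Table 2

sample_assessment = {
    "setting": "operating room",
    # 0-4 response per behavior, or None for "insufficient observation to judge"
    "responses": {**{code: 4 for code in OR_ITEMS}, "OR-8": 3, "OR-3": None},
    # the open-ended section asked residents to list two teaching strengths
    "strengths": [
        "Explains the operative plan before every case",
        "Lets residents do as much of the case as they are ready for",
    ],
}

# Mean scale score over the items actually rated (theoretical range 0-4.0)
rated = [v for v in sample_assessment["responses"].values() if v is not None]
print(round(sum(rated) / len(rated), 2))   # 3.89 for this sample
```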

Methods
Between 1995 and 1999 all full-time academic surgeons
in a division of general surgery were evaluated biannually
and anonymously by every resident on their services. The
study participants included 20 faculty surgeons and 49 surgical residents from all levels of training. Resident feedback
was obtained using an assessment instrument consisting of
the 20 teaching behaviors, described in Tables 1 and 2,
along with an open-ended section where each resident was
asked to list two teaching strengths in each of the teaching
settings in which the surgeon had been observed. Each
teaching behavior had an associated Likert scale consisting
of five response categories ranging from 0 (does not demonstrate the behavior) to 4 (demonstrates the behavior to a
very high degree). An additional response option was provided to allow an "insufficient observation to judge" choice.
For each scale, a mean scale score was derived, with a theoretical score range of 0 to 4.0. The independent t test was used to compare mean scores on the teaching behaviors between teaching performances categorized as superior and those categorized as mediocre. This instrument was used continuously during the study period. After each administration, the instrument's internal reliability was assessed with Cronbach's alpha; these reliability coefficients were never lower than 0.90.
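A minimal sketch of the scoring and comparison steps described above follows, assuming an invented ratings matrix and using the published cutoffs of 3.7 and 2.4; nothing here reproduces the authors' actual analysis code.

```python
# Illustrative sketch only (not the authors' code): hypothetical ratings,
# mean scale scores, Cronbach's alpha, and a superior-vs-mediocre t test.
import numpy as np
from scipy import stats

# Rows = individual resident assessments; columns = the 10 operating room
# behaviors (OR-1 ... OR-10), scored 0-4. Data are invented for illustration;
# "insufficient observation to judge" responses are assumed already excluded.
ratings = np.array([
    [4, 4, 4, 4, 4, 4, 4, 4, 4, 4],
    [4, 4, 3, 4, 4, 4, 4, 3, 4, 4],
    [4, 3, 4, 4, 4, 3, 4, 4, 4, 4],
    [3, 3, 3, 3, 3, 3, 3, 3, 3, 3],
    [3, 2, 3, 3, 3, 3, 2, 3, 3, 3],
    [2, 1, 2, 2, 1, 2, 2, 1, 1, 2],
    [1, 2, 1, 1, 2, 1, 1, 2, 1, 1],
    [2, 2, 1, 1, 1, 2, 1, 1, 1, 2],
])

mean_scale_scores = ratings.mean(axis=1)  # one score per assessment, range 0-4.0

def cronbach_alpha(items: np.ndarray) -> float:
    """Internal-consistency estimate for an items-in-columns matrix."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances / total_variance)

# The paper's cutoffs were roughly 1 SD above/below the overall mean (3.7 / 2.4).
superior = mean_scale_scores >= 3.7
mediocre = mean_scale_scores <= 2.4

# Independent t test on a single behavior (column index 5, i.e., OR-6).
t_stat, p_value = stats.ttest_ind(ratings[superior, 5], ratings[mediocre, 5])

print(f"alpha={cronbach_alpha(ratings):.2f}  t={t_stat:.2f}  p={p_value:.3f}")
```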
In addition to providing quantitative measures, the instrument provided space for residents to write statements identifying the teaching strengths of each surgeon. Separate responses were recorded and coded so that common themes could be identified. A matrix of written theme codes was constructed from the transcript of resident comments about each faculty member's teaching, and theme frequencies were recorded for comparison with the quantitative findings.
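The theme-coding and frequency-counting step could look something like the sketch below; the theme keywords, comments, and faculty identifiers are invented for illustration and do not reproduce the authors' coding scheme.

```python
# Illustrative sketch of theme coding and frequency counting; the comment
# texts, theme keywords, and faculty identifiers below are hypothetical.
from collections import Counter, defaultdict

THEME_KEYWORDS = {
    "technical expertise": ("technique", "skill", "expertise", "knowledge"),
    "resident participation": ("participate", "hands-on", "lets me operate"),
    "climate of respect": ("respect", "supportive", "patient with questions"),
}

def code_comment(comment: str) -> list[str]:
    """Assign zero or more theme codes to one written comment."""
    text = comment.lower()
    return [theme for theme, words in THEME_KEYWORDS.items()
            if any(w in text for w in words)]

# (faculty_id, resident comment) pairs as they might appear in the transcript
comments = [
    ("A", "Outstanding technique, lets me operate when I am ready"),
    ("A", "Always supportive and patient with questions"),
    ("B", "Shares up-to-date knowledge in clinic"),
]

# Matrix of theme frequencies per faculty member
theme_matrix: dict[str, Counter] = defaultdict(Counter)
for faculty_id, comment in comments:
    theme_matrix[faculty_id].update(code_comment(comment))

for faculty_id, counts in theme_matrix.items():
    print(faculty_id, dict(counts))
```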

Results

Over the 5-year period, there were 753 individual resident operating room and clinic teaching assessments of 20 different faculty surgeons. Because some of these faculty had fewer than five assessments, only faculty with five or more assessments were included; the resulting analysis file consisted of 16 faculty members. We were also concerned about residents not differentiating between the teaching behaviors by selecting the same response category for each item in the instrument. We found that 17% of the operating room teaching assessments and 19% of the clinic assessments had identical scale scores for every item. With these assessments removed, the overall operating room and clinic mean score was 3.1. Using the criterion of 1 standard deviation above and below the mean, we categorized scores ≥3.7 as superior and scores ≤2.4 as mediocre.

To identify the teaching behaviors that differentiated between superior and mediocre teaching, the mean difference between superior and mediocre teaching scores was computed for each teaching behavior. The two operating room teaching behaviors with the largest mean difference were OR-6, "Demonstrates awareness and sensitivity to resident learning needs" (3.85 versus 1.62, P <0.01), and OR-9, "Provides direct and ongoing feedback regarding resident progress" (3.60 versus 1.27, P <0.01). For clinic teaching, the two best discriminating items were C-8, "Gives residents positive reinforcement" (3.87 versus 1.53, P <0.01), and C-9, "Provides direct and ongoing feedback regarding resident progress" (3.73 versus 1.29, P <0.01).

We also investigated whether the assessments of faculty improved over time. The first 2.5 years of the study were compared with the latter 2.5 years. The overall operating room and clinic teaching means remained unchanged from the first half of the evaluation period to the last half. This pattern held for faculty with consistently superior scores and for faculty with consistently mediocre scores, from the first through the second half of the study period.

The qualitative analysis of residents' written evaluative statements revealed three common positive themes tied to resident-identified superior teaching performance: (1) demonstrates surgical technical expertise and up-to-date knowledge, (2) allows and encourages resident participation in patient procedures, and (3) maintains a learning climate of respect and support. These themes crossed teaching settings and resident training levels; in fact, almost all written comments aligned with one of these three themes. The more senior the resident, the more likely the comment was expressed with some degree of passion and at greater length. Senior residents averaged four comments per attending and sometimes wrote short paragraphs; junior residents averaged two comments per attending and rarely developed evaluative paragraphs. Some residents were motivated to create their own assessment values in the comments section, one senior writing, "He is a 10 in the department; the runner-up is a 6."

Comments
This study focuses upon teaching excellence to inform
faculty of how they are perceived as educators. It would
have been just as possible to have focused upon mediocre
teaching. We instead chose to prepare a model of excellence
in instruction in two settings by identifying and highlighting
positive teaching behaviors that were shown to discriminate
between faculty in a high performance group (superior) and
those in a low performance group (mediocre). Previous
studies of teaching in surgical settings have most often
incorporated the use of student learners, usually surgical
clerks, as evaluators. Medical students can provide valuable
feedback to faculty about their teaching [4,7,10,17]. Our
study incorporated residents as assessors and evaluators
because we wanted surgeons to be notified specifically
about how they are perceived as instructors by more mature
learners whose input into the education program is a requirement of the ACGME.
We established the goal of feedback specificity in all teaching settings. However, faculty teaching behavior frequencies did not differ significantly between operating room and clinic settings. This indicates either that superior-performing faculty are good teachers regardless of setting, or that the teaching behaviors listed in the instrument are so closely correlated that they all tend to capture the same kinds of activities. We could have used a global rating or streamlined the 20 behaviors under consideration. Instead, we retained the two separate teaching scales, operating room and clinic, with the goal of frequently reminding both faculty and residents of the listed behaviors that are closely aligned with effectiveness, success, or excellence in teaching [2–4,9,12,13]. Our residents defined excellence by participating in this survey process, providing quantitative (assessments) and qualitative (written evaluative comments) input to faculty. The resident-selected behaviors were handwritten in the comments section. The behaviors they identified could have been different from those provided in the instrument lists, but we found significant similarity between the two sources. This could mean that residents simply wrote about behaviors that already were listed in the instrument, or that there are only a few major characteristics of superior teaching that discriminate between teaching performances.
Faculty change in exhibiting the different teaching behaviors was never significant. That is to say, those faculty assessed to be in the superior group were never joined by their colleagues from the mediocre group. Faculty response to evaluation by learners, or the apparent lack of a positive response, has previously been reported for surgeons in academic settings. Cohen et al [10] found no significant change in overall mean scores for student ratings of surgeons over 9 years. On closer examination, the Cohen study showed a link between student assessments and promotion and tenure, in which "good" and "average" surgeons maintained their ratings while most in the "poor" group improved to "average." After promotion, most surgeons showed a decrease in their evaluation scores. The current study emphasizes our department's process to provide feedback and identify excellence. Individual faculty members choose whether or not to include summaries from resident evaluations in their own promotion and tenure documentation. Several aspects of the lack-of-change finding make interpretation difficult. Our residency may well need to institute a more formal program of instructional consultation, mentoring, and/or participation in faculty development activities. There are several reasonable medical models for faculty development in teaching [20–26]. These development programs often begin with the premise that surgeons are not formally prepared as educators. Even if they were so prepared, it is logical to recognize the need for all instructors periodically to reflect upon their own teaching skills. Another interpretation of the lack of faculty change is to note the consistency of resident assessments over 5 years: those faculty perceived as superior were consistently perceived as superior, across resident levels and years. Because our process maintains anonymity, we cannot sort by resident to determine individual rater variability or note how surgical maturation affects assessments over the years. We can see that assessors' agreement regarding who exhibits superior teaching behaviors is reflected in usual assessment ranges of 3 to 4 points, whereas mediocre score ranges always span 0 to 4 points. There is stronger agreement about what constitutes excellence.
Our study occurred during a time of great financial challenge for academic medicine and surgery. Bland and Holloway [27] have asked, "Is teaching compatible with competitive managed care in the future of health care?" We believe the answer is yes, and that surgeons' teaching can be demonstrated to be of high quality and value, based upon critical teaching behaviors that have been shown to distinguish between excellence and mediocrity. A resident-based teaching assessment system can offer a reasonable, valid, and reliable form of feedback to academic surgeons. The use of mixed methods to identify teaching behaviors that characterize and focus upon excellence informs faculty of how they are perceived as educators and provides examples of specific behaviors that merit recognition by departmental and institutional administrators and other colleagues. Teaching excellence represents vigorous scholarly effort and hard intellectual work that goes virtually unrecognized and often unrewarded. The process described here can provide the basis for a system that will recognize and reward superior teaching.

References
[1] Dill D, Aluise J. Managing the role of the academic physician. In: McGaghie WC, Frey JJ, editors. Handbook for the academic physician. New York: Springer-Verlag, 1986, p 11–21.
[2] Stritter FT, Baker RM, Shahady EJ. Clinical instruction. In: McGaghie WC, Frey JJ, editors. Handbook for the academic physician. New York: Springer-Verlag, 1986, p 98–124.
[3] Strodel WE, Zelenock GB. Teaching and learning in the operating room. In: Bartlett RH, Zelenock GB, Strodel WE, et al, editors. Medical education: a surgical perspective. Chelsea, MI: Lewis Publishers, 1986, p 47–54.
[4] Dayton MT. The operating theater as a teaching laboratory: challenges and rewards. Focus Surg Educ 1996;13(3):15–16.
[5] Woolliscroft JO. Who will teach? A fundamental challenge to medical education. Acad Med 1995;70:279.
[6] Skeff KM, Stratos GA, Berman J, Bergen MR. Improving clinical teaching: evaluation of a national dissemination program. Arch Intern Med 1992;152:1152–61.
[7] Blue AV, Griffith CH, Wilson J, et al. Surgical teaching quality makes a difference. Am J Surg 1999;177:86–9.
[8] Popham WJ. Educational evaluation. Englewood Cliffs, NJ: Prentice-Hall, 1975, p 115.
[9] Dunnington G, DaRosa D, Kolm P. Development of a model for evaluating teaching in the operating room. Curr Surg 1993;50:523–7.
[10] Cohen R, MacRae H, Jamieson C. Teaching effectiveness of surgeons. Am J Surg 1996;171:612–4.
[11] Paukert JL. When residents talk and teachers listen: a communication analysis. Acad Med 2000;75:S65–7.
[12] Gilbert S, Davidson JS. Using the world-wide-web to obtain feedback on the quality of surgical residency training. Am J Surg 2000;179:74–5.
[13] Baumgartner WA, Greene PS. Developing the academic thoracic surgeon: teaching surgery. J Thorac Cardiovasc Surg 2000;119:S22–5.
[14] Shores JH, Clearfield M, Alexander J. An index of students' satisfaction with instruction. Acad Med 2000;75:S106–8.
[15] Mullan P, Sullivan D, Dielman T. What are raters rating? Predicting medical student, pediatric resident, and faculty ratings of clinical teachers. Teach Learn Med 1993;5:221–6.
[16] Kendrick SB, Simmons JM, Richards BF, Roberge LP. Residents' perceptions of their teachers: facilitative behavior and the learning value of rotations. Med Educ 1993;27:55–61.
[17] Sloan DA, Donnelly MB, Schwartz RW. The surgical clerkship: characteristics of the effective teacher. Med Educ 1996;30:18–23.
[18] Bragg D, Treat R, Simpson DE. Have clinical teaching effectiveness ratings changed with the Medical College of Wisconsin's entry into the health care marketplace? Acad Med 2000;75(10):S59–61.

[19] Copeland HL, Hewson MG. Developing and testing an instrument to measure the effectiveness of clinical teaching in an academic medical center. Acad Med 2000;75:161–6.
[20] Irby DM. Faculty development and academic vitality. Acad Med 1993;68:760–3.
[21] Wilkerson L, Irby DM. Strategies for improving teaching practices: a comprehensive approach to faculty development. Acad Med 1998;73:387–96.
[22] Banta TW. Using assessment to improve instruction. In: Menges RJ, Weimer M, editors. Teaching on solid ground. San Francisco: Jossey-Bass, 1996, p 363–84.
[23] Curry L, Wergin JF. Setting priorities for change in professional education. In: Curry L, Wergin JF, editors. Educating professionals. San Francisco: Jossey-Bass, 1995, p 316–27.
[24] Green ME, Ellis CL, Fremont P, et al. Faculty evaluation in departments of family medicine: do our universities measure up? Med Educ 1998;32:597–606.
[25] Hewson MG. A theory-based faculty development program for clinician-educators. Acad Med 2000;75:498–501.
[26] Richards BF, Wilking AP, Kirkland RT. A four-month faculty development curriculum on teaching and learning. Acad Med 1999;74:614–5.
[27] Bland CJ, Holloway RL. A crisis of mission: faculty roles and rewards in an era of health care reform. Change 1995;27:30–5.
