Measuring the
Competence of
Healthcare Providers
by Neeraj Kak, Bart Burkhalter, and Merri-Ann Cooper
Executive Summary
Competence encompasses knowledge, skills, abilities, and
traits. It is gained in the healthcare professions through
pre-service education, in-service training, and work
experience. Competence is a major determinant of provider
performance as represented by conformance with various
clinical, non-clinical, and interpersonal standards. Measuring
competence is essential for determining the ability and
readiness of health workers to provide quality services.
Although competence is a precursor to doing the job right,
measuring performance periodically is also crucial to
determine whether providers are using their competence on
the job. A provider can have the knowledge and skill, but use
it poorly because of individual factors (abilities, traits, goals,
values, inertia, etc.) or external factors (unavailability of
drugs, equipment, organizational support, etc.).
This paper provides a framework for understanding the key
factors that affect provider competence. Different methods for
measuring competence are discussed, as are criteria for
selecting measurement methods. Also, evidence from various
research studies on measuring the effectiveness of different
assessment techniques is presented.
Introduction
July 2001 ■ Volume No. 2 ■ Issue 1
The Quality Assurance Project endeavors to improve healthcare provider performance. The QA Project developed this paper on measuring competence to guide healthcare systems in improving their performance through better hiring, job restructuring, re-organization, and the like. The paper focuses on competence and reviews several studies that have contributed to the understanding of competency in medical education and healthcare settings. Little research exists—and more is needed—on measuring and improving competency in developing country healthcare settings.

Limits of this paper

The conclusions about competence measurement are largely drawn from studies conducted in the developed world with healthcare students, nurses, physicians, and other healthcare workers. Very few studies have been designed and conducted in developing countries on measuring competence and the relationship between competence and provider behavior. However, the measurement issues involved in the assessment of competence, including

Recommended citation
Kak, N., B. Burkhalter, and M. Cooper. 2001. Measuring the competence of healthcare providers. Operations Research Issue Paper 2(1). Bethesda, MD: Published for the U.S. Agency for International Development (USAID) by the Quality Assurance (QA) Project.

The QA Project
The Quality Assurance Project is funded by the U.S. Agency for International Development, under Contract Number HRN-C-00-96-90013. The QA Project serves countries eligible for USAID assistance, USAID Missions and Bureaus, and other agencies and nongovernmental organizations that cooperate with USAID. The QA Project team, which consists of prime contractor Center for Human Services, Joint Commission Resources, Inc., and Johns Hopkins University (JHU), provides comprehensive, leading-edge technical expertise in the research, design, management, and implementation of quality assurance programs in developing countries. Center for Human Services, the nonprofit affiliate of University Research Co., LLC, provides technical assistance in the design, management, improvement, and monitoring of health systems and service delivery in over 30 countries.

Operations Research Issue Papers
The Operations Research Issue Papers present important background information about key subjects relevant to the QA Project’s technical assistance. The series provides a review of the state of the art in research (both published and nonpublished, theoretical and operational) on a subject, along with recommendations for research questions and productive lines of inquiry for the project’s technical staff and external researchers and health professionals.

Acknowledgements
This paper was researched and written by Neeraj Kak (Senior QA Advisor, QA Project) and Merri-Ann Cooper (Consultant, QA Project). It was reviewed by Joanne Ashton, Thada Bornstein, Sue Brechen, Wendy Edson, Lynne Miller Franco, Kama Garrison, Anthony Mullings, Jollee Reinke, and Rick Sullivan, and edited by Beth Goodrich.

CONTENTS
Introduction
  Limits of this paper
What is competence?
Measuring competence
  Why measure competence?
  Restrictions on competency assessments
  Which competencies should be measured?
Conceptual framework
  How are competencies acquired?
  Other factors affecting provider performance
Approaches to competence measurement in healthcare
  Which assessment method should be used?
Criteria for selecting measurement methods
  Validity
  Reliability
  Feasibility
Research and implementation needs
Conclusion
Appendix
References
The diagnostic and monitoring function
■ Detecting and documenting significant changes in a patient’s condition
■ Providing an early warning signal: anticipating breakdown and deterioration prior to explicit, confirming diagnostic signs
■ Anticipating problems
■ Understanding the particular demands and experiences of an illness: anticipating patient care needs
■ Assessing the patient’s potential for wellness and for responding to various treatment strategies

Administering and monitoring therapeutic interventions and regimens
■ Starting and maintaining intravenous therapy with minimal risks and complications
■ Administering medications accurately and safely: monitoring untoward effects, reactions, therapeutic responses, toxicity, incompatibilities
■ Creating a wound management strategy that fosters healing, comfort, and appropriate drainage
[Figure: factors affecting provider performance. Social factors, provider motivation, organizational factors, and provider competencies shape provider behavior (performance according to standards), which produces the result.]

Social Factors
■ Community expectations
■ Peer pressure
■ Patient expectations

Provider Motivation
■ Social values
■ Expectations
■ Self-efficacy
■ Individual goals/values
■ Readiness to change

Organizational Factors
■ Working conditions
■ Monitoring system
■ Clarity of responsibilities and organizational goals
■ Organization of services/work processes
■ Task complexity
■ Incentives/rewards
■ Resource availability
■ Standards availability
■ Training
■ Supervision
■ Self-assessment
■ Communication mechanisms
■ Performance feedback

Provider Competencies
■ Knowledge
■ Skills
■ Abilities
■ Traits

Provider Behavior: Performance according to standards
■ Complete assessment
■ Correct diagnosis
■ Appropriate referrals, counseling, and treatment

Result: Improvements in
■ Health outcomes
■ Client satisfaction
challenging objectives work harder (Spencer et al. 1994). For a thorough discussion of health worker motivation, see Franco et al. (2000).

Factors external to the individual and related to organizational and social conditions also influence provider behavior and performance. There is evidence that higher performance is associated with sufficient resources to perform the job, clear role expectations and standards of performance (Pritchard 1990), feedback on performance (Bartlem and Locke 1981; Locke and Latham 1990), and rewards that are contingent on good performance and suited to the nature of the reward (Locke and Latham 1990). In general, the expectations of the organization, profession, and community may influence the behavior and performance of providers for better or worse. For example, lack of supervision may result in some health workers’ cutting corners inappropriately. Of course, unavailability of basic resources (such as equipment, supplies, and medicines) can result in poor performance in spite of high competence and motivation.

Provider performance varies and is partly determined by the social standing, awareness, and expectations of the client. Poorer, less educated, and less demanding clients often receive less attention (Schuler et al. 1985). Peer pressure plays a role, and some providers fail to comply with standards even if they have the requisite knowledge and skills.

The socio-cultural environment in which people were raised, live, and work affects job performance. “Any meaningful analysis of work motivation in developing societies has to be juxtaposed with an analysis of the physical and socio-cultural environment as well as the stable attributes of the individual who is a product of such an environment” (Misra and Kanungo 1994, p. 33). Therefore, organizations need to adopt management practices that are consistent with local conditions (Kanungo and Jaeger 1990). Mendonca and Kanungo (1994) propose that managers set goals within the employee’s current competencies and increase those goals as the employee experiences success and feels more capable of achieving more difficult goals.
[Table fragment: assessment methods compared are job sample, job simulation, anatomic model, computerized clinical simulation testing, records, written test, and performance appraisal.]
■ Advantages: approximates the real situation (some methods); assesses single or multiple competencies (all methods)
■ Disadvantages: must wait for the situation to arise (job sample); requires extensive resources (some methods)
Table 2 summarizes the advantages and disadvantages of various competence measurement methods. Table 5 (see Appendix 1) provides the results of various studies that tested approaches for measuring provider competence. The following summarizes the results of healthcare research in each method.

Written tests
The patient vignette is a type of written competency test in which short case histories are presented, and test-takers are asked pertinent questions about what actions should be taken if the portrayal were real. Patient vignettes measure competency in applying knowledge. Advantages include standardization of questions, objectivity in scoring, and minimal costs. One disadvantage is that competencies involving physical skills, traits, and abilities cannot be measured. In addition, performance on tests is inconsistently predictive of performance with patients (Jansen et al. 1995; Newble and Swanson 1988; Van der Vleuten and Swanson 1990).

Computerized tests
One of the disadvantages of measuring competence with patients or models is that a trained assessor is needed. An alternative being used in the nursing profession, which can be used to assess clinical decision-making skills, is computerized clinical simulation testing (CST). Although many computer tests essentially replicate a paper test, recent technological developments enable the creation of tests that more closely approximate actual job conditions. “In CST, examinees are not cued to patient problems or possible courses of action by the presentation of questions with decision options. Instead, a brief introduction is presented and the desired nursing actions are then specified by the examinee through ‘free text’ entry using the computer keyboard” (Bersky and Yocom 1994). The first screen presents the case, and then the test-taker requests patient data using free text entry. The program responds by searching a database of 15,000 nursing activity terms, such as vital signs, progress notes, or lab results. Realism is enhanced by having the patient’s condition change in response to the implementation of nursing interventions, medical orders, and the natural course of the underlying health problem. The actions of the test-taker are compared to standards for a minimally competent, beginning-level nurse. Major advantages of CST (in addition to those in Table 2) are consistency of the cases, objectivity in scoring, and low cost once the program is developed. Drawbacks include the inability to evaluate competencies involving physical actions and interpersonal interactions, high development costs, and lack of computers in many developing countries.
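The CST interaction loop described above can be sketched in a few lines. This is a toy illustration, not the actual CST software: the activity terms, patient state, and responses are invented, and a real system would match free-text requests against a database of some 15,000 nursing activity terms.

```python
# Toy sketch of a CST-style interaction: the examinee types a free-text
# request; it is matched against known nursing activity terms; matched
# requests return patient data, unmatched ones return nothing (no cueing).
# All terms and data here are hypothetical.

ACTIVITY_TERMS = {"vital signs", "progress notes", "lab results"}

patient_state = {"condition": "stable", "bp": "130/85"}

def request_data(free_text: str):
    """Return patient data if the request matches a known activity term."""
    term = free_text.strip().lower()
    if term not in ACTIVITY_TERMS:
        return None  # unrecognized request: no options or hints are offered
    if term == "vital signs":
        return {"bp": patient_state["bp"]}
    return f"<{term} for current patient>"

print(request_data("Vital Signs"))   # {'bp': '130/85'}
print(request_data("order pizza"))   # None
```

A fuller sketch would also update `patient_state` after each intervention, mirroring how CST lets the simulated patient's condition evolve in response to the examinee's actions.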
[Table fragment: types of assessor are self, trainer, peer, supervisor, untrained patients, trained patients, and expert observer.]
■ Advantage (all assessor types): can observe competence or performance
The three methods most often used to attach a score to the level of competence are checklists, rating scales, and overall assessments. Checklists provide a pre-defined list of behaviors (the standards) that are judged to be at or below standard for a particular provider. The final score is often simply the number (or percentage) of behaviors performed at standard. Rating scales provide a range of possible responses for each behavior on the checklist; the scales reflect the level of competence or performance attained with respect to that behavior. For example, the level could range from 1 (worst) to 5 (best). The precision of the definition of the different levels varies widely. The individual behavior scores are summed to obtain an overall score. Overall assessments rely on the general evaluation of the rater and exclude explicit evaluations of individual behaviors. Table 3 summarizes the advantages and disadvantages of these scoring methods.

Checklists are the least subjective of the three scoring methods, and overall assessments the most. Rating scales tend to be more subjective than checklists because the levels for each behavior are rarely defined as precisely as the behaviors in the checklist. The historical trend in scoring methods is towards objectivity, to use methods that reduce judgment and increase agreement, which means in the direction of detailed checklists and away from overall evaluations. Research findings show greatest agreement among independent raters who use checklists rather than the other two methods (Van der Vleuten et al. 1991).

Detailed checklists are particularly appropriate for use by less expert and less well-trained personnel, such as standardized patients (Norman et al. 1991). Checklists are also useful for training and self-evaluations, since they clearly define the steps involved in performing a task. However, they may be less appropriate for the assessment of complex competencies (such as interpersonal skills or a complex clinical/surgical procedure) that can be difficult to describe in terms of specific, discrete behaviors. Van der Vleuten et al. (1991) reported that checklists in an OSCE ranged from 30 to 120 items long. Observing, recording, and checking through long lists of behaviors is an arduous task and may exceed the willingness of observers.

Like checklists, rating scales focus the observer’s attention on those aspects of competence that are important and provide a convenient way to record judgments (Fitzpatrick et al. 1994). One problem with rating scales is that they yield inconsistent measurements if they lack precise definitions for each rating level, allow subjective interpretations about the performance required for each level, or are unclear about the meaning of terms that define the levels.

Overall assessments require greater expertise than either rating scales or checklists since the assessor needs to know what to observe and evaluate, as well as the standards of competence. A significant disadvantage of overall assessments is that the assessor does not necessarily list all of his or her observations. Hebers et al. (1989) had physicians observe a videotape of a resident’s performance and score the performance using overall evaluations, a rating scale,

1. The halo effect is the generalization from the perception of one outstanding personality trait to an overly favorable evaluation of the whole personality.
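The checklist and rating-scale scoring just described reduce to simple arithmetic. A minimal sketch, with hypothetical behavior names and scores:

```python
# Toy scoring of one observed provider, following the two methods in the
# text: a checklist (each behavior at standard or not) scored as the
# percentage performed at standard, and a 1-5 rating scale scored as the
# sum of per-behavior ratings. Behavior names are invented for illustration.

checklist = {  # True = behavior performed at standard
    "washed hands": True,
    "took history": True,
    "checked vital signs": False,
    "explained treatment": True,
}

ratings = {  # 1 (worst) to 5 (best) for the same behaviors
    "washed hands": 5,
    "took history": 4,
    "checked vital signs": 2,
    "explained treatment": 4,
}

def checklist_score(items):
    """Percentage of behaviors performed at standard."""
    return 100.0 * sum(items.values()) / len(items)

def rating_score(items):
    """Overall score: sum of the individual behavior ratings."""
    return sum(items.values())

print(checklist_score(checklist))  # 75.0
print(rating_score(ratings))       # 15
```

The contrast is visible even in this toy: the checklist collapses each behavior to at/below standard, while the rating scale preserves graded judgments at the cost of more rater subjectivity.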
Lecture programs and conferences disseminate information about new innovations among health workers and other staff. These programs cover information on the current scope of practice or changes in the art and science, based upon scientific information learned from current medical research.

Continuing education (CE) courses are an effective way of keeping health workers abreast of new innovations in their field of specialization. In some countries, health workers must earn a minimum number of CE units every year as part of re-certification.

Refresher programs enable health workers to review the original training program in a condensed number of hours. However, refresher programs do not help in expanding the cognitive or psychomotor ability above the entry level.

Self-education is another way health providers can improve knowledge in specific areas. This method can be useful for acquiring new knowledge or skills with immediate application to a task. Self-education may involve manuals or computer-based training (CBT: Knebel 2000).

reports. Case reviews provide an opportunity for discussion, usually led by a mentor or preceptor, and appear to be more effective than other didactic methods in improving knowledge or skills.

Grand rounds at health facilities with a diverse patient population provide a unique learning opportunity for novices. Through grand rounds, novices are exposed to the wider continuum of patient care, and they can interact with other members of the healthcare team and experience a higher level of cognition about patient conditions and recovery than would normally occur outside the hospital setting.

Sentinel-event review, involving a review of an unexpected occurrence (mortality or morbidity), provides an opportunity to conduct a root cause analysis. Results should form the basis of a plan to reduce risk. The plan must be monitored to evaluate its effectiveness relative to the root cause.

Mentoring/precepting, provided by more experienced health professionals, is a good strategy for improving skills of novices and new-entry health professionals.
competence, performance, and these other factors, and to analyze the interaction between competence and these factors on performance. In addition, research should be undertaken to provide guidance to organizations that are planning to evaluate competence to determine feasible strategies to effectively assess competencies.

How can improvements in competencies achieved by training be sustained? Most training programs demonstrate improvements in immediate post-training competence levels. However, various studies show these improvements decay over time, sometimes rapidly. Little research exists on the most cost-effective ways to maintain the improvements gained during training. This should be remedied (Kim et al. 2000).

Which competency measures (detailed checklists, rating scales, overall assessments, simulations, work samples) are most relevant for assessing the competence of various levels of healthcare workers (community-based workers, paramedics, nurses, doctors) in developing countries? Limited research has been conducted on identifying cost-effective approaches for measuring provider competence in developing countries. Although many checklists and questionnaires have been developed and used for assessing provider knowledge, limited models for assessing provider skills have been designed (but see Kelley et al. 2000).

Who is best suited to conduct competency assessments in developing countries? What types of measures would they be willing to use?

What measures would healthcare workers understand, accept, and use in developing countries?

Research is needed to identify differences in assessment results when using different assessors (supervisors, peers, external reviewers, etc.).

Should competency measurement be made part of licensure, certification, and accreditation requirements?
Quizzes and questionnaires are excellent tools for assessing provider knowledge. Results can be used to develop remedial strategies for improving provider knowledge.

Practical performance examinations or observations with patients (real or simulated) are cost-effective ways to assess providers’ psychomotor and interpersonal skills. These methods can be administered in a minimal amount of time and cover a wide domain of practice. Exam results can be quickly tabulated for making timely decisions about remedial actions.

analysis of a sentinel event can shed light on a provider’s rationale in following a specific treatment regimen.

Supervised patient interactions also measure competence. A health provider can be observed while performing skills and procedures on a diverse patient population in a relatively short period.

Hospital clinical performance evaluations can be used to identify performance problems and their root causes. Based on this analysis, competence of providers in providing quality care can be assessed.
References

Aguinis, H., and K. Kraiger. 1997. Practicing what we preach: Competency-based assessment of industrial/organizational psychology graduate students. The Industrial-Organizational Psychologist 34:34–40.

American Society for Training and Development. 1996. Linking training to performance goals. Info-Line (Issue 9606).

Anastasi, A. 1976. Psychological Testing. (4th ed.) New York: Macmillan.

Bandura, A. 1982. Self-efficacy mechanism in human agency. American Psychologist 37:122–47.

Bandura, A. 1986. Social Foundations of Thought and Action: A Social Cognitive Theory. Englewood Cliffs, NJ: Prentice Hall, Inc.

Bandura, A., and D. Cervone. 1986. Differential engagement of self-reactive influences in cognitive motivation. Organizational Behavior and Human Decision Processes 38:92–113.

Bartlem, C.S., and E.A. Locke. 1981. The Coch and French study: A critique and reinterpretation. Human Relations 34:555–66.

Bashook, P.G., and J. Parboosingh. 1998. Continuing medical education: Recertification and the maintenance of competence. British Medical Journal 316:545–48.

Benner, P. 1982. Issues in competency-based testing. Nursing Outlook 303–09.

Benner, P. 1984. From Novice to Expert: Excellence and Power in Clinical Nursing Practice. California: Addison-Wesley.

Beracochea, E., R. Dickson, P. Freeman, and J. Thomason. 1995. Case management quality assessment in rural areas of Papua New Guinea. Tropical Doctor 25:69–74.

Berden, H.J.J.M., J.M.A. Hendrick, J.P.E.J. Van Dooren, F.F. Wilems, N.H.J. Pijls, and J.T.A. Knape. 1993. A comparison of resuscitation skills of qualified general nurses and ambulance nurses in the Netherlands. Heart & Lung 22:509–15.

Bersky, A.K., and C.J. Yocom. 1994. Computerized clinical simulation testing: Its use for competence assessment in nursing. Nursing & Healthcare 15:120–27.

Borman, W.C. 1974. The rating of individuals in organizations: An alternative approach. Organizational Behavior and Human Performance 12:105–24.

Bose, S., E. Oliveras, and W.N. Edson. Forthcoming. How can self-assessment improve the quality of healthcare? Operations Research Issue Paper. Bethesda, MD: To be published for the U.S. Agency for International Development (USAID) by the Quality Assurance Project.

Boyatzis, R.E. 1982. The Competent Manager: A Model for Effective Performance. New York: Wiley.

Brown, L.P., L.M. Franco, N. Rafeh, and T. Hatzell. 1992. Quality assurance of healthcare in developing countries. Bethesda, MD: Report published for the U.S. Agency for International Development (USAID) by the Quality Assurance Project.

Bryce, J., A. Voigt, A. Adegorovye, B. Zakari, D. Oyebolu, A. Rodman, and S. Saba. n.d. Skills assessment in primary healthcare training. (Report for the Africa Regional Project, 698-0421). Washington, DC: United States Agency for International Development.

Butler, F.C. 1978. The concept of competence: An operational definition. Educational Technology January, 7–18.

Cabana, M.D., C.S. Rand, N.R. Powe, A.W. Wu, M.H. Wilson, P. Abbond, and H.R. Rubin. 1999. Why don’t physicians follow clinical practice guidelines? A framework for improvement. Journal of the American Medical Association 282(15):1458–65.

Campbell, J.P., R.A. McCloy, S.H. Oppler, and C.E. Sager. 1993. A theory of performance. In Personnel Selection in Organizations, eds. N. Schmitt, W.C. Borman, and Associates. San Francisco: Jossey-Bass.

Carper, B.A. 1992. Fundamental patterns of knowing in nursing. In Perspectives on Nursing Theory, ed. L.H. Nicoll. Philadelphia: J.B. Lippincott Company.
Friedman, C.P., A.S. Elstein, F.M. Fredric, G.C. Murphy, T.M. Franz, P.S. Heckerling, P.L. Fine, T.M. Miller, and V. Abraham. 1999. Enhancement of clinicians’ diagnostic reasoning by computer-based consultation. Journal of the American Medical Association 282:1851–56.

Gordon, J.J., N.A. Saunders, D. Hennrikus, and R.W. Sanson-Fisher. 1992. Interns’ performance with simulated patients at the beginning and the end of the intern year. Journal of General Internal Medicine 7:57–62.

Green, L., M. Kreuter, S. Deeds, and K. Partridge. 1980. Health Education Planning: A Diagnosis Approach. Palo Alto, CA: Mayfield Press.

Hamric, A.B. 1989. History and overview of the CNS role. In The Clinical Nurse Specialist in Theory and Practice, eds. A.B. Hamric and J.A. Spross. Philadelphia: W.B. Saunders Co.

Harden, R.M., and F.A. Gleeson. 1979. Assessment of clinical competence using an objective structured clinical examination (OSCE). Medical Education 13:41–54.

Hebers, J.E., G.L. Noel, G.S. Cooper, J. Harvey, L.N. Pangaro, and M.J. Weaver. 1989. How accurate are faculty evaluations of clinical competence? Journal of General Internal Medicine 4:202–08.

Hermida, J., D. Nicholas, and S. Blumenfeld. 1996. Comparative Validity of Three Methods for Assessment of Quality of Primary Healthcare: Guatemala Field Study. Quality Assurance Methodology Refinement Series. Bethesda, MD: Published for the U.S. Agency for International Development (USAID) by the Quality Assurance Project.

Hodges, B., G. Regehr, N. McNaughton, R. Tiberius, and M. Hansom. 1999. OSCE checklists do not capture increasing levels of expertise. Academic Medicine 74:1129–34.

Irvine, D. 1997. The performance of doctors. II: Maintaining good practice, protecting patients from poor performance. British Medical Journal 314:1613.

Jaeger, A.M. 1990. The applicability of western management techniques in developing countries: A cultural perspective. In Management in Developing Countries, eds. A.M. Jaeger and R.N. Kanungo. London: Routledge.

Jansen, J.J.M., L.H.C. Tan, C.P.M. Van der Vleuten, S.J. Van Luijk, J.J. Rethans, and R.P.T.M. Grol. 1995. Assessment of competence in technical clinical skills of general practitioners. Medical Education 29:247–53.

JCAHO (Joint Commission on Accreditation of Healthcare Organizations). 1996. Comprehensive accreditation manual for hospitals: The official handbook. Oakbrook, IL.

Kanungo, R.N., and A.M. Jaeger. 1990. Introduction: The need for indigenous management in developing countries. In Management in Developing Countries, eds. A.M. Jaeger and R.N. Kanungo. London: Routledge.

Kapil, U., A.K. Sood, D. Nayar, D.R. Gaur, D. Paul, S. Chaturvedi, and M. Srivasta. 1992. Assessment of knowledge and skills about growth monitoring among medical officers and multipurpose workers. Indian Pediatrics 31:43–46.

Kelley, E., C. Geslin, S. Djibrina, and M. Boucar. 2000. The impact of QA methods on compliance with the Integrated Management of Childhood Illness algorithm in Niger. Operations Research Results 1(2). Bethesda, MD: Published for the U.S. Agency for International Development (USAID) by the Quality Assurance Project.

Kelly, M.H., L.M. Campbell, and T.S. Murray. 1999. Clinical skills assessment. British Journal of General Practice 49:447–50.
Klimoski, R.J., and M. London. 1974. Role of the rater in performance appraisal. Journal of Applied Psychology 59:445–51.

Knebel, E. 2000. The use and effect of computer-based training: What do we know? Operations Research Issue Papers 1(2). Bethesda, MD: Published for the U.S. Agency for International Development (USAID) by the Quality Assurance Project.

Kramer, D. 1996. Keeping competency up when patient numbers are down. Kansas Nurse 71:4–5.

Landy, F.J. 1985. Psychology of Work Behavior. (3rd ed.) Homewood, IL: Dorsey Press.

Lane, D.S., and V.S. Ross. 1998. Defining competencies and performance indicators for physicians in medical management. American Journal of Preventive Medicine 14:229–36.

Latham, G.P. 1986. Job performance and appraisal. In International Review of Industrial and Organizational Psychology, eds. C.L. Cooper and I. Robertson. New York: Wiley.

Lenburg, C.B. 1999. Redesigning expectations for initial and continuing medical competence for contemporary nursing practice. Online Journal of Issues in Nursing.

Locke, E.A., and G.P. Latham. 1990. Work motivation: The high performance cycle. In Work Motivation, eds. U. Kleinbeck, H. Quast, H. Thierry, and H. Hacker. Hillsdale, NJ: Erlbaum.

Loevinsohn, B.O., E.T. Gierrero, and S.P. Gregorio. 1995. Improving primary healthcare through systematic supervision: A controlled field trial. Health Policy and Planning 10:144–53.

MacDonald, P. 1995. The peer review program of the Indonesian midwives association (Final Report of Phase Two of the Pilot Project). Bethesda, MD: University Research Corporation.

Marquez, Lani. Forthcoming. Helping Healthcare Providers Perform According to Standards. Operations Research Issue Paper. Bethesda, MD: To be published for the U.S. Agency for International Development (USAID) by the Quality Assurance Project.

McCaskey, L., and M. LaRocco. 1995. Competency testing in clinical microbiology. Laboratory Medicine 26:343–49.

McFaul, P.B., and P.W. Howie. 1993. The assessment of clinical competence in obstetrics and gynecology in two medical schools by an objective structured clinical examination. British Journal of Obstetrics and Gynecology 100:842–46.

Mendonca, M., and R.N. Kanungo. 1994. Motivation through effective reward management in developing countries. In Work Motivation: Models for Developing Countries, eds. R.N. Kanungo and M. Mendonca. New Delhi: Sage Publications.

Miller, G.E. 1993. Conference summary. Academic Medicine 68:471–76.

Misra, S., and R.N. Kanungo. 1994. Bases of work motivation in developing societies: A framework for performance management. In Work Motivation: Models for Developing Countries, eds. R.N. Kanungo and M. Mendonca. New Delhi: Sage Publications.

Mittman, B.S., X. Tonesk, and A. Jacobson. 1995. Implementing clinical practice guidelines: Social influence strategies and practitioner behavior change. Quality Review Bulletin 18:413–22.
Steiner, B.D., R.L. Cook, A.C. Smith, and P. Curtis. 1998. Does training location influence the clinical skills of medical students? Academic Medicine 73:423–26.

Stillman, P.L., D. Swanson, S. Smee, A.E. Stillman, T.H. Ebert, V.S. Emmel, J. Caslowitz, H.L. Greene, M. Hamolsky, C. Hatem, D.J. Levenson, R. Levin, G. Levinson, B. Ley, G.J. Morgan, T. Parrino, S. Robinson, and J. Willms. 1986. Assessing clinical skills of residents with standardized patients. Annals of Internal Medicine 105:762–71.

Stillman, P., D. Swanson, M.B. Regan, M.M. Philbin, V. Nelson, T. Ebert, B. Ley, T. Parrino, J. Shorey, A. Stillman, E. Alpert, J. Caslowitz, D. Clive, J. Florek, M. Hamolsky, C. Hatem, J. Kizirian, R. Kopelman, D. Levenson, G. Levinson, J. McCue, H. Pohl, F. Schiffman, J. Schwartz, M. Thane, and M. Wolf. 1991. Assessment of clinical skills of residents utilizing standardized patients. Annals of Internal Medicine 114:393–401.

Stillman, P.L. 1993. Technical issues: Logistics. Academic Medicine 68:464–68.

Stoy, W.A. n.d. EMT-paramedic and EMT-intermediate continuing education: National guidelines. National Highway Traffic Safety Administration.

Sullivan, C.A. 1994. Competency assessment and performance improvement for healthcare providers. Journal of Health Quarterly 16:14–19.

Van der Vleuten, C.P.M., and D.B. Swanson. 1990. Assessment of clinical skills with standardized patients: State of the art. Teaching and Learning in Medicine 2:58–76.

Van der Vleuten, C.P.M., G.R. Norman, and E. De Graaff. 1991. Pitfalls in the pursuit of objectivity: Issues of reliability. Medical Education 25:110–18.

Velicer, W.F., J.S. Rossi, C.C. DiClemente, and J.O. Prochaska. 1996. A criterion measurement model for health behavior change. Addictive Behavior 5:555–84.

Vu, N.V., H.S. Barrows, M.L. March, S.J. Verhulst, J.A. Colliver, and T. Travis. 1992. Six years of comprehensive, clinical, performance-based assessment using standardized patients at the Southern Illinois University School of Medicine. Academic Medicine 67:42–49.

While, A.E. 1994. Competence versus performance: Which is more important? Journal of Advanced Nursing 29:525–31.

Wood, E.P., and E. O’Donnell. 2000. Assessment of competence and performance at interview. British Medical Journal 320:S2–7231.

Wood, R., and C. Power. 1987. Aspects of the competence-performance distinction: Educational, psychological and measurement issues. Journal of Curriculum Studies 19:409–24.