
ISSUE PAPER

Measuring the
Competence of
Healthcare Providers
by Neeraj Kak, Bart Burkhalter, and Merri-Ann Cooper

Executive Summary
Competence encompasses knowledge, skills, abilities, and
traits. It is gained in the healthcare professions through
pre-service education, in-service training, and work
experience. Competence is a major determinant of provider
performance as represented by conformance with various
clinical, non-clinical, and interpersonal standards. Measuring
competence is essential for determining the ability and
readiness of health workers to provide quality services.
Although competence is a precursor to doing the job right,
measuring performance periodically is also crucial to
determine whether providers are using their competence on
the job. A provider can have the knowledge and skill, but use
it poorly because of individual factors (abilities, traits, goals,
values, inertia, etc.) or external factors (unavailability of
drugs, equipment, organizational support, etc.).
This paper provides a framework for understanding the key
factors that affect provider competence. Different methods for
measuring competence are discussed, as are criteria for
selecting measurement methods. Also, evidence from various
research studies on measuring the effectiveness of different
assessment techniques is presented.

Introduction

Understanding the causes of poor performance of healthcare providers in both developed and developing countries is crucial to high quality healthcare. To the extent poor performance is caused by low competence, improving competency would improve performance. But how are performance and competence linked, and how well can we measure competence?

July 2001 ■ Volume No. 2 ■ Issue 1
The Quality Assurance Project endeavors to improve healthcare provider performance. The QA Project developed this paper on measuring competence to guide healthcare systems in improving their performance through better hiring, job restructuring, re-organization, and the like. The paper focuses on competence and reviews several studies that have contributed to the understanding of competency in medical education and healthcare settings. Little research exists—and more is needed—on measuring and improving competency in developing country healthcare settings.

Recommended citation

Kak, N., B. Burkhalter, and M. Cooper. 2001. Measuring the competence of healthcare providers. Operations Research Issue Paper 2(1). Bethesda, MD: Published for the U.S. Agency for International Development (USAID) by the Quality Assurance (QA) Project.

The QA Project

The Quality Assurance Project is funded by the U.S. Agency for International Development, under Contract Number HRN-C-00-96-90013. The QA Project serves countries eligible for USAID assistance, USAID Missions and Bureaus, and other agencies and nongovernmental organizations that cooperate with USAID. The QA Project team, which consists of prime contractor Center for Human Services, Joint Commission Resources, Inc., and Johns Hopkins University (JHU), provides comprehensive, leading-edge technical expertise in the research, design, management, and implementation of quality assurance programs in developing countries. Center for Human Services, the nonprofit affiliate of University Research Co., LLC, provides technical assistance in the design, management, improvement, and monitoring of health systems and service delivery in over 30 countries.

Operations Research Issue Paper

The Operations Research Issue Papers present important background information about key subjects relevant to the QA Project's technical assistance. The series provides a review of the state of the art in research (both published and nonpublished, theoretical and operational) on a subject, along with recommendations for research questions and productive lines of inquiry for the project's technical staff and external researchers and health professionals.

Acknowledgements

This paper was researched and written by Neeraj Kak (Senior QA Advisor, QA Project) and Merri-Ann Cooper (Consultant, QA Project). It was reviewed by Joanne Ashton, Thada Bornstein, Sue Brechen, Wendy Edson, Lynne Miller Franco, Kama Garrison, Anthony Mullings, Jollee Reinke, and Rick Sullivan, and edited by Beth Goodrich.

CONTENTS

Introduction
   Limits of this paper
What is competence?
Measuring competence
   Why measure competence?
   Restrictions on competency assessments
   Which competencies should be measured?
Conceptual framework
   How are competencies acquired?
   Other factors affecting provider performance
Approaches to competence measurement in healthcare
   Which assessment method should be used?
Criteria for selecting measurement methods
   Validity
   Reliability
   Feasibility
Research and implementation needs
Conclusion
Appendix
References

Limits of this paper

The conclusions about competence measurement are largely drawn from studies conducted in the developed world with healthcare students, nurses, physicians, and other healthcare workers. Very few studies have been designed and conducted in developing countries on measuring competence and the relationship between competence and provider behavior. However, the measurement issues involved in the assessment of competence, including



validity and reliability, are relevant within any cultural context. The findings relevant to these issues (e.g., what types of measures are most reliable) likely apply in any country. However, the applicability of the other findings and recommendations should be evaluated for relevance to the specific situation. For example, there is extensive evidence from developed countries that detailed and immediate feedback on performance improves both learning and later performance. Will such feedback procedures be equally effective in all cultures? The reader is cautioned against assuming that all of the conclusions in this paper, coming from one cultural context, will necessarily apply to other contexts.

Second, the literature reviewed in this paper concerns the clinical competence of individuals. Research studies on the evaluation of team performance exist, but were not reviewed for this paper.

Last, self-assessment, an emerging method for measuring competence, is addressed in a forthcoming Issue Paper: "How Can Self-Assessment Improve the Quality of Healthcare?"

What is competence?

Competence refers to a person's underlying characteristics that are causally related to job performance (Boyatzis 1982). Competence is defined in the context of particular knowledge, traits, skills, and abilities. Knowledge involves understanding facts and procedures. Traits are personality characteristics (e.g., self-control, self-confidence) that predispose a person to behave or respond in a certain way. Skill is the capacity to perform specific actions: a person's skill is a function of both knowledge and the particular strategies used to apply knowledge. Abilities are the attributes that a person has inherited or acquired through previous experience and brings to a new task (Landy 1985): they are more fundamental and stable than knowledge and skills (Fleishman and Bartlett 1969).

Competence can be defined as the ability to perform a specific task in a manner that yields desirable outcomes. This definition implies the ability to apply knowledge, skills, and abilities successfully to new situations as well as to familiar tasks for which prescribed standards exist (Lane and Ross 1998). Health workers acquire competence over time (Benner 1984). Typically, pre-service education or an initial training opportunity creates a novice who, after additional training and hands-on experience, reaches a level that can be certified as competent. Although competence is considered to be a major milestone in professional development, it is not the final point. That comes with proficiency, and the ultimate status of expert comes after many years of experience and professional growth (Benner 1984).

Competence is one of many determinants of performance. The relationship between competence (can do) and performance (does do) is complex: the first does not always predict the second (Southgate and Dauphinee 1998). Obviously, less competent providers are less likely to provide quality services, and healthcare providers must have the competencies necessary to perform their jobs according to standards in order to provide quality services.

Attempts are sometimes made to measure competence in terms of performance. However, competence should not be inferred from performance (While 1994). While competence is defined in terms of someone's capacity to perform, performance is the resulting behavior. "Performance is something that people actually do and can be observed. By definition, it includes only those actions or behaviors that are relevant to the organization's goals and that can be scaled (measured) in terms of each person's proficiency (that is, level of contribution). Performance is what the organization hires one to do, and do well" (Campbell et al. 1993).

Abbreviations

CE      Continuing education
CEF     Clinical evaluation form
CHR     Center for Health Research
CPR     Cardio-pulmonary resuscitation
CST     Clinical simulation testing
DSS     Decision support systems
IUD     Intrauterine device
IV      Intravenous needle
JCAHO   Joint Commission on the Accreditation of Healthcare Organizations
KTS     Knowledge test of skills
MSE     Multiple-station examination
OACA    Objective Assessment of Competence Achievement
OSCE    Objective structured clinical examination
PBL     Problem-based learning
PBT     Performance-based test
PVA     Practice video assessment
SAQ     Self-assessment questionnaire
UAE     United Arab Emirates
UK      United Kingdom
USAID   U.S. Agency for International Development



Measuring competence

Why measure competence?

There are many good reasons for measuring competence. Ministries of health, professional organizations, and healthcare organizations must ensure that appropriate expectations for competence are set and that their staff perform to standard. Healthcare organizations must meet certain criteria to provide services. These organizations—through certification, licensure, and/or accreditation—are able to exert control on health providers and, as a result, to influence the quality of care. Although most health providers must demonstrate minimum competence during training to move up to the next level or graduate from a course, not all healthcare organizations assess job- or skill-specific competencies before offering employment. Reasons why healthcare organizations should measure competence include:

Healthcare reform: The increasing complexities of healthcare delivery and changing market conditions have forced health policy-makers to promote the assessment of initial competence of students and new graduates and the continuing competence of experienced and certified practitioners (Lenburg 1999). In the United States, this has led to various reforms affecting teaching and the methods used to assess students' actual competence. The Joint Commission on the Accreditation of Healthcare Organizations (JCAHO) has required specific validation of the competence of healthcare providers for institutional accreditation (JCAHO 1996). With rapid technological and scientific innovations, U.S. employers are spending substantial resources to assure competencies of new and existing health staff. These health institutions are using specific standardized tests to document the readiness of employees for full practice responsibilities (Lenburg 1999).

Organizational performance: Are providers effectively treating their clients? And if not, why? Is the problem a lack of competence? Healthcare organizations need to assess individual and organizational performance periodically to determine the efficacy of their services. The results help healthcare organizations determine whether they need to design training and/or continuing education interventions for improving provider performance. Comparing assessments of competence and job performance may indicate the extent to which the organization provides the support needed for quality care. High competency and low performance may signal that an organization is not providing the needed resources, has not clarified standards of care, or is not rewarding effective performance or correcting poor performance.

Liability and ethics: Healthcare organizations are responsible for the quality of care their staff provide and consequently must ensure that their staffs are competent and can meet standards for the provision of care. Assessing providers' competence periodically enables healthcare organizations to meet this crucial responsibility.

Risk management: Competency assessments can be used to monitor organization-wide knowledge of policies and procedures related to high-risk areas. Feedback from these assessments can be used for training and continuing education of providers and to improve overall organizational performance.

Certification and recertification of providers: Competency assessment is an integral part of the certification and recertification processes of service providers. For example, recertification programs in the U.S. use examinations and performance assessments as "snapshots" of competence every seven to ten years (Bashook and Parboosingh 1998).

Documenting competence is becoming essential—not optional—and is likely to become mandatory in the near future for initial and continuing licensure and certification, and perhaps even for employment (Lenburg 1999).

Planning for new services: Competency assessment can help managers identify providers who are competent to provide a new clinical service, providers who need improvements in specific knowledge or skill areas when a new service is offered, and providers who are ready to act as mentors of newly trained providers.

Measuring training outcomes: Competency assessment can determine the efficacy of training interventions in closing knowledge and skill gaps, and can be used to assess and improve training. Low scores on competence assessments after training may indicate that the training was ineffective, poorly designed, poorly presented, or inappropriate. Trainers can use this information to improve training content or delivery. If the assessments aim to improve specific components of training, the trainer may be able to determine where more information is needed, which exercises require clarification, or if more time is required to cover a topic (Smith and Merchant 1990).



Selection of new staff: Competency assessment is useful when recruiting new staff to ensure they can do the job they are hired to do or could do it with reasonable orientation/training.

Individual performance improvement: Competency assessment can play an important role in an organization's performance improvement initiatives. Assessment results can identify gaps in knowledge and skills, and guide managers in setting appropriate training or other remedial interventions targeting individual providers or groups of providers.

Supervision: Competency assessments can guide healthcare managers in providing performance improvement feedback to healthcare providers.

Restrictions on competency assessments

For budgetary or other reasons, many health organizations may not routinely measure competence across the breadth of workplace tasks until a performance problem becomes apparent. Also, some competencies may be difficult to describe precisely or evaluate accurately. In such cases, it is essential that health programs link competencies with specific performance indicators as a proxy for measuring competence (Lane and Ross 1998).

Which competencies should be measured?

Health workers need a large number of competencies for providing quality services. Benner (1984) and Fenton (1985) have proposed several domains with specific competencies that are critical for nursing care (see Table 1). Although these domains have been defined from a nursing practice perspective, they apply equally to other types of health workers. Some of these competencies affect the quality of care directly and others indirectly. While health workers can gain knowledge about various competencies during pre-service education, skills related to these competencies are further advanced during practicum or on the job. In addition, mentors and preceptors can further assist in improving health worker competency.

Model ■ Novice to Expert

Patricia Benner (1984) provides a framework describing knowledge embedded in nursing practice that accrues over time. The model differentiates theoretical knowledge from practical knowledge acquired from clinical practice. The five levels of skill acquisition are:

Novice: The novice has no background or experience in his or her area. Rules and objective attributes are applied without an understanding or knowledge of the context of the situation. Nursing students are novices in nursing; one becomes a novice whenever he or she is placed in an unfamiliar area of practice.

Advanced Beginner: The advanced beginner demonstrates marginally acceptable performance based on experience acquired under the mentoring of a more experienced nurse or a teacher. The practice of nursing is rules-based and oriented toward completion of tasks. The larger context of the situation is difficult to grasp at this stage. There is a concern for good management of skills and time, but the need for guidance and assistance remains.

Competent: Competent nurses are able to differentiate between the aspects of the current situation and those of the future and can select those aspects that are important. The focus on good time management skills remains, but the sense of responsibility is higher. However, they may have an unrealistic concept of what they can actually handle.

Proficient: Proficient nurses are able to see the whole situation in context and can apply knowledge to clinical practice, identifying the most salient aspects and differentiating them from those that are less important. Actions are intuitive and skilled. They have confidence in their own knowledge and abilities and focus less on rules and time management.

Expert: The expert nurse is able to focus intuitively on solutions to situations without having to explore alternatives. This ability is based on a rich experiential background. Focus is on meeting patient needs and concerns, to the point of being an advocate for the patient and care. The focus on self and one's own performance is diminished.

Conceptual framework

Competency is defined as the ability of a health worker to perform according to predefined standards. Competency is developed through pre-service education, in-service training, hands-on experience, and the assistance of mentors and preceptors.

Competency measurement is critical to ensuring that all employees are competent to perform their assigned duties and responsibilities and to meet performance standards. Competence of all staff should be assessed prior to employment, during the orientation period, and at least annually thereafter. A well-defined job description and use of appropriate competency assessment tools go a long way in ensuring that competent providers are recruited to begin with and that, through periodic assessments, appropriate remedial actions are taken to close any competence gaps.

Periodic competence assessments should be considered for those areas that are considered low-volume, high-risk, or critical (Centra Health 1999). Low-volume competencies are those that occur so infrequently that they need to be assessed at least annually to ensure that providers are still able to perform these duties. High-risk competencies are



Table 1 ■ Domains of Nursing Care and Key Competencies

The helping role
■ The healing relationship
■ Providing comfort measures and preserving personhood in the face of pain and/or extreme breakdown
■ Maximizing the patient's participation and control in his or her own recovery
■ Providing informational and emotional support to the patient's family
■ Guiding the patient through emotional and developmental change; providing new options, closing old ones

The diagnostic and monitoring function
■ Detecting and documenting significant changes in the patient's condition
■ Providing an early warning signal: anticipating breakdown and deterioration prior to explicit, confirming diagnostic signs
■ Anticipating problems
■ Understanding the particular demands and experiences of an illness: anticipating patient care needs
■ Assessing the patient's potential for wellness and for responding to various treatment strategies

Administering and monitoring therapeutic interventions and regimens
■ Starting and maintaining intravenous therapy with minimal risks and complications
■ Administering medications accurately and safely: monitoring untoward effects, reactions, therapeutic responses, toxicity, incompatibilities
■ Creating a wound management strategy that fosters healing, comfort, and appropriate drainage

Effective management of rapidly changing situations
■ Skilled performance in extreme life-threatening emergencies: rapid grasp of a problem
■ Contingency management: rapid matching of demands and resources in emergency situations
■ Identifying and managing patient crisis until physician assistance is available

The teaching-coaching function
■ Timing: capturing a patient's readiness to learn
■ Assisting patients in integrating the implications of illness and recovery into their lifestyles
■ Eliciting and understanding the patient's interpretation of his or her illness
■ Providing an interpretation of the patient's condition and giving a rationale for procedures
■ The coaching function: making culturally avoided aspects of an illness approachable and understandable

Monitoring and ensuring the quality of healthcare practices
■ Providing a back-up system to ensure safe medical and nursing care
■ Assessing what can be safely omitted from or added to medical orders
■ Recognition of a generic recurring event or problem that requires a policy change
■ Seeking appropriate and timely responses from physicians

Organizational and work-role competencies
■ Coordinating, prioritizing, and meeting multiple patient needs and requests
■ Building and maintaining a therapeutic team to provide optimum therapy; providing emotional and situational support to nursing staff
■ Coping with staff and organizational resistance to change; showing acceptance of staff persons' resistance to system change; using formal research findings to initiate and facilitate system change; using mandated change to facilitate other changes
■ Making the bureaucracy respond to the patient's and family's needs

The consulting role
■ Providing patient care consultation to the nursing staff through direct patient intervention and follow-up
■ Interpreting the role of nursing in specific clinical patient care situations to nursing and other professional staff
■ Providing patient advocacy by sensitizing staff to the dilemmas faced by patients and families seeking healthcare

Adapted from Spross and Baggerly (1989)



those that put the patient and/or organization at risk if not performed to standard. Critical competencies are ones that are critical for effective performance. Competencies required in the following key areas should be assessed to ensure that healthcare workers are able to perform infrequent, high-risk, and critical healthcare activities: performing duties, procedures, treatments, etc.; using equipment; emergency response and lifesaving interventions; managing patient/customer relations; patient assessment; and communicating with patient and/or family. Competency-related data may be derived from specialized tests, interviews, performance evaluations, quality improvement findings, patient satisfaction surveys, employee surveys, and other needs assessments. Based on the results of competency assessments, appropriate educational or remedial programs should be developed to meet identified needs or gaps and to improve overall performance.

How are competencies acquired?

As noted above, competence is defined in the context of knowledge, skills, abilities, and traits. These components of competence are acquired in different ways.

A provider obtains knowledge in several ways, including pre-service education and in-service training. Knowledge is further enhanced through on-the-job experience—including feedback from supervisors and peers—and continuing education. Field experience shows that providers may not use their knowledge correctly or consistently all the time for a variety of reasons (CHR 1999). Factors at the individual, organizational, and environmental levels all affect the correct use of knowledge.

Skills refers to "actions (and reactions) that an individual performs in a competent way in order to achieve a goal" (Ericsson 1996). Skills are gained through hands-on training using anatomic models or real patients, or through role-plays. One may have no skill, some skill, or complete mastery; therefore, in teaching or testing a skill, the level of acceptable mastery must be defined based on the training level.

Abilities refers to the power or capacity to do something or act physically, mentally, legally, morally, etc. Abilities are gained or developed over time and, as a result, are more stable than knowledge and skills. Traits influence abilities (discussed below).

Traits refers to distinguishing characteristics or qualities, especially of a personal nature. These include attitudes (personal and social values), self-control, and self-confidence. Traits influence abilities. For example, self-efficacy is the belief that one can do a task as required; it influences whether a behavior will be initiated and sustained (Bandura 1986). Self-efficacy is determined by the confidence and/or training of a health worker. Low self-efficacy can lead to poor compliance with clinical guidelines and other standards of care. Many traits are slow to change or even permanent.

A prerequisite to provider behavior change is to understand the underpinnings of current practices. A health provider may not be able to overcome the persistence of previous practice or may not have the motivation to change (Cabana 1999). According to the "readiness for change" model developed by Prochaska and DiClemente (1986), behavior change consists of a continuum of steps that include pre-contemplation, contemplation, preparation, action, and maintenance. For example, when this model was applied to physician attitudes towards cancer screening guidelines, the results suggested that nearly half of the physicians surveyed were in a pre-contemplation stage and not ready to change behavior (Main et al. 1995).

Other factors affecting provider performance

Extensive research shows that the more a person's competencies match the requirements of a job, the more effectively the person will perform (Hunter 1983; Spencer et al. 1994). However, competency does not always lead to effective performance. There is a difference between what an individual should be able to do at an expected level of achievement and what he or she actually does in a real-life setting (While 1994, p. 526). A number of other factors, including personal motivation and adequate support from hospital authorities, colleagues, and even non-professional health workers, can affect worker performance (Salazar-Lindo et al. 1991). According to Campbell (1993), motivation is reflected in the completeness, the intensity, and the persistence of effort. For example, a healthcare worker may be competent to perform a medical procedure, but may not be willing to expend the effort to perform all the required behaviors. Figure 1 illustrates the relationship of these factors and their effect on provider performance.

Motivation is strengthened and providers work with more completeness, intensity, or persistence when: they are committed to a clear and challenging goal (Locke and Latham 1990), their job offers an opportunity to demonstrate mastery rather than an occasion to be evaluated (Nicholls 1984; Nicholls and Miller 1984), they believe that a particular procedure or standard will be effective (Cabana 1999), and they have high expectations for success ("outcome expectancy") (Bandura 1982, 1986; Bandura and Cervone 1986; Erez 1990). Individual traits can also determine motivation. For example, people with a disposition for accomplishing



Figure 1 ■ Determinants of Healthcare Provider Performance According to Standards

[Figure: Social factors (community expectations, peer pressure, patient expectations, social values), provider motivation (expectations, self-efficacy, individual goals/values, readiness to change), organizational factors (working conditions, monitoring system, clarity of responsibilities and organizational goals, organization of services/work processes, task complexity, incentives/rewards, resource availability, standards availability, training, supervision, self-assessment, communication mechanisms, performance feedback), and provider competencies (knowledge, skills, abilities, traits) jointly determine provider behavior, that is, performance according to standards (complete assessment; correct diagnosis; appropriate referrals, counseling, and treatment), which in turn produces results: improvements in health outcomes and client satisfaction.]

challenging objectives work harder (Spencer et al. 1994). For a thorough discussion of health worker motivation, see Franco et al. (2000).

Factors external to the individual and related to organizational and social conditions also influence provider behavior and performance. There is evidence that higher performance is associated with sufficient resources to perform the job, clear role expectations and standards of performance (Pritchard 1990), feedback on performance (Bartlem and Locke 1981; Locke and Latham 1990), and rewards that are contingent on good performance (Locke and Latham 1990), with the nature of the reward also mattering. In general, the expectations of the organization, profession, and community may influence the behavior and performance of providers for better or worse. For example, lack of supervision may result in some health workers' cutting corners inappropriately. Of course, unavailability of basic resources (such as equipment, supplies, and medicines) can result in poor performance in spite of high competence and motivation.

Provider performance varies and is partly determined by the social standing, awareness, and expectations of the client. Poorer, less educated, and less demanding clients often receive less attention (Schuler et al. 1985). Peer pressure plays a role, and some providers fail to comply with standards even if they have the requisite knowledge and skills.

The socio-cultural environment in which people were raised, live, and work affects job performance. "Any meaningful analysis of work motivation in developing societies has to be juxtaposed with an analysis of the physical and socio-cultural environment as well as the stable attributes of the individual who is a product of such an environment" (Mira and Kanungo 1994, p. 33). Therefore, organizations need to adopt management practices that are consistent with local conditions (Kanungo and Jaeger 1990). Mendonca and Kanungo (1994) propose that managers set goals within the employee's current competencies and increase those goals as the employee experiences success and feels more capable of achieving more difficult goals.



Figure 2 ■ Assessment Methods and Job Performance

[Figure: a continuum of congruence with actual job performance, from least to most: written test, computer test, performance records, physical models, job simulation, job sample.]

Approaches to competence measurement in healthcare

The term gold standard is used in healthcare (and other arenas) to describe practices that are accepted as the best and most effective for a particular problem, disease, or intervention. Although there are best practices that a healthcare provider should use, there is no gold standard for measuring the provider's competence. The most effective and feasible approach depends on the situation. This section presents different approaches for measuring competence.

Which assessment method should be used?

Selecting a measure of competence involves making three decisions: (a) What assessment methods are available? (b) How should scores be derived based on the method? and (c) Who should observe/evaluate competence?

a. Assessment methods

Competence can be measured using a variety of methods: written tests, computerized tests, records of performance, simulations with anatomic models, other simulations, job samples, and supervisory performance appraisals. These methods differ in a number of ways. Of particular relevance to this paper is the degree to which a method assesses a single competency (as compared to the assessment of multiple competencies) and the extent to which what is being observed or evaluated approximates on-the-job performance. The availability of resources is a key factor in selecting a particular method or methods and setting assessment frequency.

The advantage of assessing a single competency is that it could then be targeted for improvement. Competency measurements that predict job performance may increase the chances that a remedial action will be identified and will increase job performance of trainees who score low on end-of-training competency tests. Competency measurement methods that poorly predict job performance are less likely to be effective in this regard.

Assessment methods can be presented on a continuum reflecting the extent to which they approximate actual job performance, from least to most similar, as shown in Figure 2. Spencer et al. (1994) reported that job samples and job simulations are among the best predictors of job performance. Written tests are probably furthest from—and the weakest predictor of—actual job performance.

The greater the congruence between the method used to measure competency and the actual job, the more likely the competency measure will predict job performance, although this does not imply that good predictors of job performance are the best estimators of competency. In fact, competency cannot always be inferred from job performance. Poor job performance may have many causes, not just lack of competency, but good job performance usually does imply the presence of competencies needed for the job, especially for tasks that are reasonably complex. Simple tasks, however, can be performed adequately for a while with inadequate knowledge by emulating someone else's behavior. In a pre-post, control group research design, increased competency can be inferred when experimental group performance improves significantly more than control group performance following an intervention aimed at increasing competency (a worked sketch of this inference appears at the end of this subsection).

Different assessment methods have different strengths and weaknesses. Both computerized and written tests can assess abilities, traits, and knowledge, but they cannot assess skills, which require the physical performance of some actions. Records of performance, unlike all the other methods for assessing competence, can be evaluated without the provider's awareness, but often omit important information about performance. Anatomic models limit both the types of competencies that can be assessed and the realism of the patient-provider interaction. Job samples and job simulations limit the circumstances and illnesses (or needs) and require both observation and evaluation. Fear about being evaluated can cause poorer performance, and awareness of being evaluated can contribute to greater care and attention to a patient, which may only appear to be better performance.

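To make the pre-post, control group inference described above concrete, the sketch below computes each provider's gain score (post-test minus pre-test) and compares mean gains across the two groups. It is a minimal illustration only: the scores are invented, and the two-sample t-test on gain scores (via SciPy) is just one common way such a comparison might be tested, not a procedure prescribed by this paper.

    # Minimal sketch of the pre-post, control group inference; scores invented.
    from scipy import stats

    # Pre- and post-intervention competency scores (e.g., percent of checklist
    # behaviors performed to standard) for providers in each group
    experimental_pre = [52, 60, 48, 55, 63, 58]
    experimental_post = [71, 78, 66, 70, 80, 74]
    control_pre = [54, 59, 50, 57, 61, 56]
    control_post = [58, 62, 52, 59, 66, 57]

    # Gain score: post minus pre, for each provider
    exp_gains = [post - pre for pre, post in zip(experimental_pre, experimental_post)]
    ctl_gains = [post - pre for pre, post in zip(control_pre, control_post)]

    # If experimental gains significantly exceed control gains, increased
    # competency can be attributed to the intervention.
    t_stat, p_value = stats.ttest_ind(exp_gains, ctl_gains)
    print(f"Mean gain: experimental {sum(exp_gains) / len(exp_gains):.1f}, "
          f"control {sum(ctl_gains) / len(ctl_gains):.1f} (p = {p_value:.3f})")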


Table 2 ■ Advantages and Disadvantages of Different Assessment Methods

[Table: a matrix marking which of eight assessment methods (job sample, job simulation, anatomic model, clinical simulation testing, records, computerized test, written test, performance appraisal) carry each attribute. Advantages: approximates real situation; assesses single or multiple competencies; patient can report on care; replicable; evaluates full range of competencies. Disadvantages: must wait for situation; requires extensive resources; requires trained assessor; potential bias.]

Table 2 summarizes the advantages and disadvantages of various competence measurement methods. Table 5 (see Appendix 1) provides the results of various studies that tested approaches for measuring provider competence. The following summarizes the results of healthcare research on each method.

Written tests

The patient vignette is a type of written competency test in which short case histories are presented, and test-takers are asked pertinent questions about what actions should be taken if the portrayal were real. Patient vignettes measure competency in applying knowledge. Advantages include standardization of questions, objectivity in scoring, and minimal costs. One disadvantage is that competencies involving physical skills, traits, and abilities cannot be measured. In addition, performance on tests is inconsistently predictive of performance with patients (Jansen et al. 1995; Newble and Swanson 1988; Van der Vleuten and Swanson 1990).

Computerized tests

One of the disadvantages of measuring competence with patients or models is that a trained assessor is needed. An alternative, used in the nursing profession to assess clinical decision-making skills, is computerized clinical simulation testing (CST). Although many computer tests essentially replicate a paper test, recent technological developments enable the creation of tests that more closely approximate actual job conditions. "In CST, examinees are not cued to patient problems or possible courses of action by the presentation of questions with decision options. Instead, a brief introduction is presented and the desired nursing actions are then specified by the examinee through 'free text' entry using the computer keyboard" (Bersky and Yocom 1994). The first screen presents the case, and then the test-taker requests patient data using free text entry. The program responds by searching a database of 15,000 nursing activity terms, such as vital signs, progress notes, or lab results. Realism is enhanced by having the patient's condition change in response to the implementation of nursing interventions, medical orders, and the natural course of the underlying health problem. The actions of the test-taker are compared to standards for a minimally competent, beginning-level nurse. Major advantages of CST (in addition to those in Table 2) are consistency of the cases, objectivity in scoring, and low cost once the program is developed. Drawbacks include the inability to evaluate competencies involving physical actions and interpersonal interactions, high development costs, and lack of computers in many developing countries.
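The toy sketch below illustrates the CST mechanics just described: free-text requests are matched against a term database, simulated patient data are returned, and the examinee's actions are scored against a standard. Every term, value, and scoring rule here is invented for illustration; the actual CST term database and its standards for a beginning-level nurse are far more extensive.

    # Toy sketch of CST-style free-text matching and scoring; all data invented.

    TERM_DATABASE = {
        "vital signs": "BP 90/60, pulse 118, temp 38.9 C",
        "progress notes": "Client restless; skin warm and dry",
        "lab results": "WBC 14,000; blood cultures pending",
    }

    # Actions expected of a minimally competent, beginning-level nurse
    EXPECTED_ACTIONS = {"vital signs", "lab results"}

    def run_station(entries: list[str]) -> float:
        """Match free-text entries to known terms; score against the standard."""
        performed = set()
        for entry in entries:
            key = entry.strip().lower()
            if key in TERM_DATABASE:
                print(f"{entry} -> {TERM_DATABASE[key]}")  # simulated patient data
                performed.add(key)
        return 100.0 * len(performed & EXPECTED_ACTIONS) / len(EXPECTED_ACTIONS)

    score = run_station(["Vital signs", "Progress notes"])
    print(f"Station score: {score:.0f}% of expected actions taken")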



Review of medical records

Medical records have some advantages and many disadvantages as a data source for assessing competency. On the positive side, data can be obtained retroactively and at a relatively low cost compared to other methods. Providers are not aware of or influenced by the data collection, eliminating one source of bias. On the negative side, patient records are often incomplete. Using standardized patients, Norman et al. (1985) analyzed the completeness of patient records and found that many omitted critical actions. For instance, counseling was rarely recorded. Overall, one-third to one-half of the procedures performed were not recorded. Furthermore, record audits proved unlikely to detect missing diagnoses or misdiagnoses. Missing and poor quality records are prevalent in developing countries, especially in primary care facilities. Another problem is that competency cannot be reliably inferred from performance.

Anatomic models

Anatomic models are often used in healthcare for training and competency measurement. They are especially appropriate for assessing competency (as opposed to performance) in certain physical skills, such as inserting an intravenous needle (IV) or an intrauterine device (IUD), or performing cardio-pulmonary resuscitation (CPR). Other advantages include low cost, standardized testing, and repeated use without burdening patients. The disadvantages include their inability to simulate: (a) provider-client interactions (including client feedback), and (b) the complications that occur in real patients, such as multiple or inconsistent symptoms.

Job simulation

In actual practice, providers work with real clients in real job settings. In a job simulation, the clients, the setting, or both are not real. In the last 20 years, two job simulation techniques—standardized clients (either announced or unannounced) and the objective structured clinical examination—have emerged as simulation methods for assessing provider competence.

Standardized clients can be either real clients or healthy individuals who have been trained to provide a reproducible and unbiased presentation of an actual patient case (Tamblyn et al. 1991). Further, standardized clients can be either announced or unannounced. With announced standardized clients, the provider is made aware that the client is standardized and is a healthy individual pretending to have a medical concern. With unannounced standardized clients (sometimes referred to as "mystery clients"), the provider is not informed that the client has been trained to perform as a patient. Studies show that experienced physicians cannot differentiate real patients from unannounced standardized patients and that history taking, physical examination, findings, and diagnoses are quite similar for announced and unannounced standardized patients.

One advantage of standardized clients is that they can be trained to accurately and consistently evaluate and report provider performance. In one study (Colliver and Williams 1993), standardized patients consistently agreed with 83 percent of the evaluations of clinical skills made by three faculty physician observers. In another, standardized patients were 95 percent accurate in portraying the details of an illness (Colliver and Williams 1993). Standardized clients also provide a replicable case for multiple healthcare providers, thus enabling direct comparison of their performances (Stillman et al. 1986). In addition, unlike an actual client, a standardized client can portray the disease or problem in a way that is most relevant to the particular competency being measured.

Use of standardized clients also has disadvantages. It can be difficult to separate competence from performance. Studies show that clinical performance by a single provider is not consistent across patients and specifics of a disease (Stillman et al. 1991); therefore, one standardized patient does not provide a reliable estimate of provider performance or competence. Furthermore, unlike real clients, standardized clients must be paid and trained (Stillman 1993). Tamblyn et al. (1991) reported that it took three one-hour training sessions to train standardized patients and that more experienced standardized patients performed more accurately than less experienced ones. If the competence of several healthcare providers is being assessed, standardized clients must be willing to present their stories several times. Certain symptoms cannot be simulated in healthy individuals and require the use of real clients as standardized clients (Tamblyn et al. 1991). This method also poses challenges if providers' surgical competencies (e.g., minilaparotomy under local anesthesia) are to be assessed.

The objective structured clinical examination (OSCE) measures clinical skills using a uniform, structured format of rotating stations. A healthcare provider performs a clinical task at each station. OSCE assesses a variety of competencies and has been most often used in medical schools and residency programs. It has the same advantages and disadvantages as standardized clients, except that the OSCE makes greater demands on time and resources, such as examining rooms, trained examiners, preparation of materials, training of and payments to standardized patients, and equipment and materials (Cusimano et al. 1994). Research indicates that three to four hours (per examinee) are necessary for the consistent measurement of performance. Time spent per station does not appear to be an important factor in the quality of the measurement (Elnicki et al. 1993). Medical students who were evaluated using this method thought that the OSCE was fair and clinically relevant (McFaul and Howie 1993).



Table 3 ■ Advantages and Disadvantages of Scoring Methods

[Table: a matrix marking which of three scoring methods (checklists, rating scales, overall assessments) carry each attribute. Advantages: can be used with limited training; useful for self-evaluation. Disadvantages: often time-consuming; differences in interpretation; requires training to use; requires expertise to use; may leave out important information.]

Job sample

Competence is sometimes inferred from measurements of provider performance with a sample of real patients in an actual job setting. There are, however, difficulties in measuring job performance, including the unpredictability of the environment, waiting for a clinical situation to arise to accommodate testing (Ready 1994), and differences in the levels of difficulty among different cases. In addition, a single observation of performance does not provide a reliable estimate of the provider's competence. Studies of clinicians' performances indicate that the quality of service varies widely from one patient to another (Stillman et al. 1986). In a specific area of practice, the provider interacts with patients who differ in terms of illness or need, works under a variety of circumstances, performs activities requiring a range of competencies, and proceeds without formal observation or evaluation. Multiple observations of cases with somewhat similar problems would provide better information about a provider's competencies.

Performance appraisals

Periodic appraisals by supervisors or peers and self-assessments can also be used to infer competence. Supervisor and peer appraisals can use data from multiple sources, including observations of provider-patient interactions, record reviews, patient interviews, self-appraisals, and sentinel event data to assess competence. Using multiple data sources helps to reduce assessor bias. Self-assessments can use pre-structured checklists to reduce bias when identifying areas of poor performance (Bose et al. forthcoming). Performance appraisals suffer from the difficulties of inferring competence from performance.

b. How should scores be derived?

Regardless of the method used for measuring competence, a scoring technique is needed. In order to develop a measure, data (test scores, observations, records) need to be analyzed to derive scores, which indicate the provider's level or extent of competence. The analysis involves comparing the data to a standard of competence.

A level of competence is defined in relation to a standard. Competence can be above or below the standard, or in some cases it can be stated as a proportion of the standard, such as 70 percent of the standard. Competency standards answer the question, "At what level of performance is someone considered competent, so that we can trust him or her with the healthcare of patients?" Extensive research shows that raters evaluate performance differently, so a lack of standards is likely to yield inconsistent evaluations of competence (Fitzpatrick et al. 1994; Harden and Gleeson 1979; Stillman et al. 1991; Van der Vleuten et al. 1991).

How are standards determined? The most objective standards are usually based on scientific evidence and/or expert consensus (Marquez forthcoming). It is difficult to argue against a standard of performance with known links to positive patient outcomes (Benner 1982). International and national health organizations such as the World Health Organization, medical societies, the Agency for Healthcare Research and Quality, and national health ministries establish standards for patient care based on evidence from the scientific literature as interpreted by expert panels. Groups of professors often identify standards of performance for evaluating the competence of medical or nursing students. Standards of competence can also be based on the performance of individuals selected as excellent clinicians (Sloan et al. 1993). Alternatively, behaviors that differentiate between experienced and inexperienced providers can be used as the standards of competent performance (Benner 1982).

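As a small illustration of stating competence relative to a standard, the sketch below labels a score as above or below the standard and expresses it as a proportion of the standard. The 70 percent figure echoes the example in the text; the provider names and scores are invented.

    # Toy sketch: relating derived scores to a competency standard.

    STANDARD = 70.0  # e.g., percent of checklist behaviors performed to standard

    def rate_against_standard(score: float) -> str:
        """Report a score as above/below the standard and as a proportion of it."""
        status = "at or above standard" if score >= STANDARD else "below standard"
        return f"{score:.0f} points: {status} ({100 * score / STANDARD:.0f}% of standard)"

    for provider, score in {"Provider A": 84.0, "Provider B": 56.0}.items():
        print(provider, "-", rate_against_standard(score))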


Table 4 ■ Advantages and Disadvantages of Different Types of Assessors of Competence

[Table: a matrix marking which of seven assessor types (self, trainer, peer, supervisor, untrained patients, trained patients, expert observer) carry each attribute. Advantages: can observe competence or performance (all types); can evaluate competence or performance; has many opportunities to observe performance. Disadvantages: bias due to halo effect; provokes anxiety.]

The three methods most often used to attach a score to the level of competence are checklists, rating scales, and overall assessments. Checklists provide a pre-defined list of behaviors (the standards) that are judged to be at or below standard for a particular provider. The final score is often simply the number (or percentage) of behaviors performed at standard. Rating scales provide a range of possible responses for each behavior on the checklist; the scales reflect the level of competence or performance attained with respect to that behavior. For example, the level could range from 1 (worst) to 5 (best). The precision of the definition of the different levels varies widely. The individual behavior scores are summed to obtain an overall score. Overall assessments rely on the general evaluation of the rater and exclude explicit evaluations of individual behaviors. Table 3 summarizes the advantages and disadvantages of these scoring methods.

Checklists are the least subjective of the three scoring methods, and overall assessments the most. Rating scales tend to be more subjective than checklists because the levels for each behavior are rarely defined as precisely as the behaviors in the checklist. The historical trend in scoring methods is towards objectivity, that is, towards methods that reduce judgment and increase agreement, which means movement in the direction of detailed checklists and away from overall evaluations. Research findings show the greatest agreement among independent raters who use checklists rather than the other two methods (Van der Vleuten et al. 1991).

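The arithmetic of the first two scoring methods is simple enough to show directly. In the sketch below, the checklist score is the percentage of behaviors performed at standard, and the rating-scale score is the sum of per-behavior levels. The item names and values are hypothetical, not drawn from any instrument discussed in this paper.

    # Sketch of checklist and rating-scale scoring; items and values hypothetical.

    def checklist_score(observations: dict[str, bool]) -> float:
        """Checklist: percentage of listed behaviors performed at standard."""
        return 100.0 * sum(observations.values()) / len(observations)

    def rating_scale_score(ratings: dict[str, int]) -> int:
        """Rating scale: sum of per-behavior levels, e.g., 1 (worst) to 5 (best)."""
        return sum(ratings.values())

    iud_insertion = {
        "explains procedure to client": True,
        "washes hands before procedure": True,
        "performs bimanual examination": False,
        "uses no-touch insertion technique": True,
    }
    print(f"Checklist score: {checklist_score(iud_insertion):.0f}% of behaviors at standard")

    counseling = {"greets client": 5, "asks open-ended questions": 3, "checks understanding": 4}
    print(f"Rating-scale score: {rating_scale_score(counseling)} of {5 * len(counseling)} possible")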


observers. However, it may be difficult for someone who
knows the healthcare provider to make an unbiased assess-
ment of specific competencies, independent from their
overall assessment of the person (sometimes referred to as
and a checklist. The overall the halo effect1 ).
evaluations were not specific
and included few comments Not only do different types of assessors rate differently; the
on good performance. Com- people being assessed respond differently. Use of supervi-
parison of the ratings and the sors to assess competence increases the perception that the
detailed checklists with the overall evaluations indicated testing environment is threatening. This anxiety can con-
that although the assessors identified specific examples of tribute to lower performance during testing (Ready 1994).
good and poor performance, they did not include many of Mohrman et al. (1989) reported that the act of measurement
these examples on the general comment sheet. may influence performance or competence under test
c. Who should assess competence?

Information about a healthcare worker's competence can be provided by different types of people, including colleagues (e.g., supervisors, peers, trainers, professors), patients, independent observers, and the provider him- or herself. Table 4 summarizes the advantages and disadvantages of these different types of raters. Research on performance and competency appraisal does not provide evidence about the relative superiority of one type over another (Borman 1974; Klimoski and London 1974). The use of multi-source performance appraisals (e.g., 360-degree feedback) suggests that evaluative information from many sources provides a more complete picture of performance than evaluations from only one perspective (Franco et al. 2000).
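The multi-source idea can be illustrated with a short sketch. The source types, competencies, and scores below are hypothetical, not taken from any study cited here; the point is only that aggregating ratings per competency, and noting the spread between sources, surfaces the perspective differences discussed in this section:

```python
from statistics import mean

# Hypothetical 360-degree ratings of one provider on a 1-5 scale.
ratings = {
    "supervisor": {"clinical skills": 4, "interpersonal skills": 3},
    "peer":       {"clinical skills": 4, "interpersonal skills": 4},
    "patient":    {"clinical skills": 5, "interpersonal skills": 4},
    "self":       {"clinical skills": 3, "interpersonal skills": 4},
}

competencies = sorted({c for scores in ratings.values() for c in scores})
for comp in competencies:
    values = [scores[comp] for scores in ratings.values()]
    # A large range flags competencies on which sources disagree.
    print(f"{comp}: mean={mean(values):.2f}, range={max(values) - min(values)}")
```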
Assessors are not interchangeable. Research on the evaluation of job performance indicates that assessors who occupy different positions in an organization notice different aspects of performance and, thus, evaluate performance differently (Mohrman et al. 1989). For example, peer appraisals focus on the way employees relate to each other (Latham 1986). When peers evaluate a colleague, they tend to compare the colleague's performance with that of co-workers or with their own performance (Mumford 1983). In some environments, peers may be unwilling to appraise each other because they view evaluation as a management job and assume that their role is to protect (i.e., provide no negative information about) their colleagues.

Supervisors and peers have many opportunities to observe providers and generally know them well. As a result, they may be able to give a more accurate and thorough evaluation of competence than either patients or independent observers. However, it may be difficult for someone who knows the healthcare provider to make an unbiased assessment of specific competencies, independent from their overall assessment of the person (sometimes referred to as the halo effect¹).

¹ The halo effect is the generalization from the perception of one outstanding personality trait to an overly favorable evaluation of the whole personality.

Not only do different types of assessors rate differently; the people being assessed respond differently. Use of supervisors to assess competence increases the perception that the testing environment is threatening. This anxiety can contribute to lower performance during testing (Ready 1994). Mohrman et al. (1989) reported that the act of measurement may influence performance or competence under test conditions, either degrading it (by increasing anxiety) or improving it (because the person under assessment is making his or her best effort).

Regardless of who is selected to rate performance, it is essential that the assessor understand the standards for effective performance. The assessor can either be an expert on the topic or can be trained to observe and evaluate specific competencies. Research indicates that a trained, but non-expert, assessor can provide accurate assessments of competence. Standardized patients, given training by medical school faculty, reliably evaluated the same clinical skills as those evaluated by faculty physicians (Colliver and Williams 1993). MacDonald (1995) reported that trained midwives were as accurate in their evaluation of peers as the gold standard observers. Even untrained people, such as mothers who observe interactions between health workers and their children, can accurately report what the workers do (Hermida et al. 1996). In this latter situation, someone who understands competence can use these behavioral reports to prepare an evaluation of competence.

Criteria for selecting measurement methods

Validity

Validity concerns the degree to which a particular measurement actually measures what it purports to measure. Validity provides a direct check on how well the measure fulfills its function (Anastasi 1976). Does it truly measure the particular competency it intends to? Is it capturing a different competency? Or, more frequently, is it measuring performance rather than competency? Does it identify gaps in knowledge, skills, abilities, or traits that are needed for the job and that can be corrected through training, experience, or better methods for matching individuals to jobs?
Below are some findings from the literature about the validity of various competency measures:

■ Assessments of medical records are not good indicators of healthcare provider competence, largely because many medical procedures are not recorded (Franco et al. 1997; Hermida et al. 1996; Norman et al. 1985)

■ Performance on tests is inconsistently correlated with performance with patients (Jansen et al. 1995; Sloan et al. 1993). Written, oral, and computerized tests are primarily measures of knowledge: patient care requires several skills and abilities in addition to knowledge

■ There is substantial evidence that the OSCE method can be used to effectively assess a wide variety of competencies (Colliver and Williams 1993; Elnicki et al. 1993; Stillman et al. 1986)

■ Norman et al. (1985) found only moderate correlation between the evaluations of performance by the standardized-patient method and by patient records. Lyons (1974) reported similar low correlations between medical records and medical care; the sketch below illustrates this kind of method-to-method comparison
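Claims like "moderate correlation between methods" are usually based on a simple product-moment correlation across the same set of providers. A minimal sketch follows; the scores are fabricated for illustration and do not come from the studies cited above:

```python
from math import sqrt

# Hypothetical scores for ten providers under two methods, e.g., a
# standardized-patient assessment versus an audit of medical records.
sp_scores    = [72, 65, 80, 58, 90, 77, 61, 83, 70, 68]
chart_scores = [60, 62, 75, 50, 70, 80, 55, 78, 72, 58]

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(x)
    mean_x, mean_y = sum(x) / n, sum(y) / n
    cov = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
    sd_x = sqrt(sum((a - mean_x) ** 2 for a in x))
    sd_y = sqrt(sum((b - mean_y) ** 2 for b in y))
    return cov / (sd_x * sd_y)

# A value well below 1.0 means the two methods rank providers
# differently and cannot simply substitute for one another.
print(f"r = {pearson_r(sp_scores, chart_scores):.2f}")
```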
Reliability

Reliability refers to the consistency of scores for a particular person with respect to a particular competency when evaluated by different methods, by different raters, or for more than one patient. Below are some of the findings about the reliability of competency measures.

■ Raters use different criteria in evaluating the competence of healthcare providers (Norman et al. 1985). The consistency in ratings provided by different raters is low to moderate (Stillman 1993)

■ Performance with one patient does not represent performance on other cases (Cohen et al. 1996; Franco et al. 1996; Norman et al. 1985; Reznick et al. 1992). In order to obtain adequate reliability, multiple patients are required (Colliver and Williams 1993; Sloan et al. 1993); the sketch after this list shows why

■ Clarifying the checklist improves the reliability of the ratings (Colliver and Williams 1993)

■ Trained peers can accurately assess healthcare provider performance (MacDonald 1995)

■ Healthcare provider self-assessments are consistent with their observed performance with patients (Jansen et al. 1995; MacDonald 1995; Bose et al. forthcoming)

■ Both patients and caretakers of patients accurately report healthcare provider performance (Colliver and Williams 1993; Franco et al. 1996; Hermida et al. 1996). While these reports can be used as inputs into an evaluation of competence, they are not measures of competence
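Two standard reliability calculations lie behind findings like those above: chance-corrected agreement between raters (Cohen's kappa) and the gain in reliability from scoring several patient encounters rather than one (the Spearman-Brown formula). The sketch below is illustrative only; the ratings and the single-case reliability figure are made up:

```python
from collections import Counter

def cohen_kappa(rater_a, rater_b):
    """Chance-corrected agreement between two raters over the same cases."""
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / n ** 2
    return (observed - expected) / (1 - expected)

def spearman_brown(r_single, k):
    """Reliability of a score averaged over k cases, given single-case reliability."""
    return k * r_single / (1 + (k - 1) * r_single)

rater_a = ["pass", "pass", "fail", "pass", "fail", "pass", "fail", "pass"]
rater_b = ["pass", "fail", "fail", "pass", "fail", "pass", "pass", "pass"]
print(f"kappa = {cohen_kappa(rater_a, rater_b):.2f}")  # 0.47: modest agreement

# If one encounter yields reliability 0.35, averaging eight encounters
# raises the reliability of the composite score to about 0.81.
print(f"8-case reliability = {spearman_brown(0.35, 8):.2f}")
```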
Feasibility

Resources are required to design and implement an assessment of competence. Decisions to determine which measure to use should reflect the following issues:

■ The number of individuals to be assessed

■ The time available for the assessment

■ The willingness of the assessor to use the assessment instrument

■ The willingness of the healthcare provider to accept the assessment

■ The extent of training available to those who will participate in the assessment

■ The resources (funding, assessors, equipment, space, etc.) available for the assessment. For example, OSCE procedures require extensive resources and may be cost-effective only when evaluating a number of individuals on a variety of competencies (see the arithmetic sketched after this list)

■ The time, staff, and funding available for development and pretesting of instruments. Computerized simulations are expensive but offer promise for evaluating knowledge and decision-making skill

■ The competency to be assessed. For example, competency in removing Norplant implants is much easier to measure than competency in managing complications in a delivery
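The economics behind the OSCE point are simple amortization: a large fixed cost spread over more candidates. The figures below are entirely hypothetical and serve only to show the shape of the calculation:

```python
def cost_per_candidate(fixed_cost, variable_cost, n_candidates):
    """Fixed setup cost amortized over candidates, plus per-candidate cost."""
    return fixed_cost / n_candidates + variable_cost

# Hypothetical figures: OSCEs carry heavy setup costs (case development,
# standardized-patient training, space); written tests carry light ones.
osce = (20000.0, 40.0)
written = (1000.0, 5.0)

for n in (10, 100, 500):
    osce_cost = cost_per_candidate(*osce, n)
    written_cost = cost_per_candidate(*written, n)
    print(f"n={n:>3}: OSCE ${osce_cost:,.2f} vs written test ${written_cost:,.2f}")
```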
Research and implementation needs

According to discussions with health program managers and international health experts, healthcare provider competencies should be measured periodically. Limited information is available on competency measurement in developing country health programs. This section raises other needs.

What is the relationship between competence and performance? There is extensive evidence that: (a) although competency affects performance, the relationship is not direct, and (b) other factors (the work setting, time, and motivation) play a major role in determining performance. More research is needed to study the relationship between competence, performance, and these other factors, and to analyze the interaction between competence and these factors on performance. In addition, research should be undertaken to provide guidance to organizations that are planning to evaluate competence, to determine feasible strategies to effectively assess competencies.
More research is needed to study the relationship between



Strategies for Improving Provider Competence

Lecture programs and conferences disseminate information about new innovations among health workers and other staff. These programs cover information on the current scope of practice or changes in the art and science, based upon scientific information learned from current medical research.

Continuing education (CE) courses are an effective way of keeping health workers abreast of new innovations in their field of specialization. In some countries, health workers must earn a minimum number of CE units every year as part of re-certification.

Refresher programs enable health workers to review the original training program in a condensed number of hours. However, refresher programs do not help in expanding the cognitive or psychomotor ability above the entry level.

Self-education is another way health providers can improve knowledge in specific areas. This method can be useful for acquiring new knowledge or skills with immediate application to a task. Self-education may involve manuals or computer-based training (CBT; Knebel 2000).

Case reviews rely on patient-care reports, audio/video tapes of services, and laboratory and other diagnostic reports. Case reviews provide an opportunity for discussion, usually led by a mentor or preceptor, and appear to be more effective than other didactic methods in improving knowledge or skills.

Grand rounds at health facilities with a diverse patient population provide a unique learning opportunity for novices. Through grand rounds, novices are exposed to the wider continuum of patient care, and they can interact with other members of the healthcare team and experience a higher level of cognition about patient conditions and recovery than would normally occur outside the hospital setting.

Sentinel-event review, involving a review of an unexpected occurrence (mortality or morbidity), provides an opportunity to conduct a root cause analysis. Results should form the basis of a plan to reduce risk. The plan must be monitored to evaluate its effectiveness relative to the root cause.

Mentoring/precepting, provided by more experienced health professionals, is a good strategy for improving skills of novices and new-entry health professionals.

How can improvements in competencies achieved by training be sustained? Most training programs demonstrate improvements in immediate post-training competence levels. However, various studies show these improvements decay over time, sometimes rapidly. Little research exists on the most cost-effective ways to maintain the improvements gained during training. This should be remedied (Kim et al. 2000).

Which competency measures (detailed checklists, rating scales, overall assessments, simulations, work samples) are most relevant for assessing the competence of various levels of healthcare workers (community-based workers, paramedics, nurses, doctors) in developing countries? Limited research has been conducted on identifying cost-effective approaches for measuring provider competence in developing countries. Although many checklists and questionnaires have been developed and used for assessing provider knowledge, few models for assessing provider skills have been designed (but see Kelley et al. 2000).

Who is best suited to conduct competency assessments in developing countries? What types of measures would they be willing to use?

What measures would healthcare workers understand, accept, and use in developing countries?

Research is needed to identify differences in assessment results when using different assessors (supervisors, peers, external reviewers, etc.).

Should competency measurement be made part of licensure, certification, and accreditation requirements?


Tools for Assuring Knowledge and Skill Proficiency

Quizzes and questionnaires are excellent tools for assessing provider knowledge. Results can be used to develop remedial strategies for improving provider knowledge.

Practical performance examinations or observations with patients (real or simulated) are cost-effective ways to assess providers' psychomotor and interpersonal skills. These methods can be administered in a minimal amount of time and cover a wide domain of practice. Exam results can be quickly tabulated for making timely decisions about remedial actions.

Review of records and/or sentinel events is also cost-effective and measures competency. A root cause analysis of a sentinel event can shed light on a provider's rationale in following a specific treatment regimen.

Supervised patient interactions also measure competence. A health provider can be observed while performing skills and procedures on a diverse patient population in a relatively short period.

Hospital clinical performance evaluations can be used to identify performance problems and their root causes. Based on this analysis, competence of providers in providing quality care can be assessed.

Developing country health programs should consider making periodic competency assessments part of the licensure and accreditation requirements. However, there is a concern that giving too much control to government institutions for licensure and accreditation will complicate issues and create bottlenecks. Research is needed to identify appropriate public-private mechanisms for licensure and re-certification, where individual and organizational competencies are taken into account.

How can feedback from competency assessments be used to improve compliance by health providers? Feedback about competence and performance can motivate changes in provider attitudes and behaviors in developing countries. Research is needed to identify cost-effective approaches to provider feedback (one-to-one, one-to-group, newsletters, etc.).

Does an understanding of competency requirements for a specific job improve the selection and training of staff? Health staff are often recruited without conducting an in-depth analysis of competency requirements. There is a growing belief that it is simply impossible to improve a person's performance until the specific competencies required for satisfactory or superior performance are identified. It is only after these competencies are identified that staff can be selected. Some new recruits may require skill or knowledge enhancement to be able to perform optimally. Research is needed to identify the impact of clearly defined competency requirements on staff performance, if any.

Conclusion

The literature suggests several conclusions, summarized below, concerning the measurement of competency:

Competency can be assessed using tests or inferred from performance that has been assessed using simulations or work samples. The major advantage of tests is that single competencies can be distinguished and targeted for improvement. The major advantage of simulated patients and job samples is that they are more predictive of job performance.

Competency is not performance. Although competency can predict performance, a competent healthcare provider may not necessarily use effective procedures on the job. Both internal factors (motivation, agreement with a standard, self-efficacy, inertia, etc.) and external factors (supervision, feedback, availability of resources, community, peer expectations, and incentives) affect whether a healthcare provider will apply his or her competency.

Detailed and immediate feedback to the healthcare provider about his or her competence is useful for both learning and improving performance.

Standards of competency must be defined carefully, even for expert assessors. Clear statements about both the indicators and the levels of competency improve the reliability and validity of the assessment.



Competency can be measured using a variety of methods, including tests presented in a written format or on a computer, in an interview, or through a simulation or work sample. Each assessment method has its strengths and weaknesses. While written tests or interviews can assess knowledge, assessments using models or job samples can closely assess skills.

All assessors must be trained to give accurate reports and evaluations of competency. The length and content of the training depend on the expertise of the assessor, the competency to be assessed, the assessment instrument used, and the conditions for evaluation.

Different types of assessors, including supervisors, peers, patients, and observers, can accurately report and assess healthcare provider competency.

Checklists are particularly useful for giving feedback to the healthcare provider and for use by non-expert raters. Detailed checklists are also useful for self-evaluation as they provide a teaching tool and a job aid. However, some raters may be unwilling to use checklists, which can be long and time-consuming. It is necessary to determine a length that provides useful feedback and that assessors will use.

Written or computerized tests are an effective way to measure knowledge but not to assess skills that are required for some tasks.

The effective ways to measure competency include evaluations of performance by experts or trained observers, reports from patients (especially trained or standardized patients), and objective structured clinical examinations.

The literature review also highlights the need to promote competency measures in developing country health programs. Most developing country health programs do not use measures to identify competency gaps. The introduction of periodic competency assessments will assist in improving the quality of care.



Appendix

Table 5 ■ Selected Studies on Competence Measurement of Healthcare Providers

Das et al. (1998), UAE
Target group: Medical students
Focus: Comparison of self- and tutor evaluation results
Intervention: First-year medical students (64) undergoing problem-based learning (PBL) underwent self-assessments as well as tutor evaluations.
Findings: Self-evaluation results and tutor assessment scores were similar, but male students' self-evaluation scores were higher than female students' on overall scores.
Other findings: The sharing of assessment reports between students and tutors was perceived to be useful for the students' development of the skills of analysis, differentiation, and critical appraisal.

Devitt and Palmer (1998), USA
Target group: Third-year medical students
Focus: Use of computers in assessing competence
Intervention: History-taking and physical examination skills of 136 students were assessed in a series of structured and observed clinical stations and compared to similar computer-based problems.
Findings: Students scored equally on the computer-based tasks and in the observed stations, but the weaker students who passed on a clinic station were likely to fail on the computer task.
Other findings: The study shows that computer-based clinical simulations can be constructed to supplement conventional assessment processes in clinical medicine and may have a role in increasing their reliability.

Elnicki et al. (1993), USA
Target group: Internal medicine junior clerkship
Focus: Reliability and content validity of the objective structured clinical examination (OSCE) in assessing clinical competence
Intervention: Third-year medical students (68), 6 residents, and 9 students finishing their medical rotation completed the 15 OSCE stations; each station took 15 minutes for patient examination.
Findings: Results showed that OSCE is a robust way of finding the competence of students, and the results were comparable to other methods of evaluation.
Other findings: OSCE is not meant to evaluate cognitive achievement or interpersonal skills, so it is essential to use other methods of evaluation in conjunction with OSCE.

Friedman et al. (1999), USA
Target group: Physicians
Focus: Enhancement of clinicians' diagnostic reasoning by computer-based consultation
Intervention: Two computer-based decision support systems (DSS) were used by 216 participants for diagnostic evaluation of 36 cases based on actual patients. After training, each subject evaluated 9 of the 36 cases, first without and then with a DSS, and suggested an ordered list of diagnostic hypotheses after each evaluation.
Findings: Correct diagnoses appeared in subjects' hypothesis lists for 39.5 percent of cases without DSS and 45.4 percent of cases after DSS.
Other findings: The study supports the idea that "hands-on" use of diagnostic DSS can influence the diagnostic reasoning of clinicians.

Hodges et al. (1999)
Target group: Medical staff
Focus: Effectiveness of binary content checklists in measuring increasing levels of clinical competence
Intervention: Forty-two clinical clerks, family practice residents, and family physicians participated in two 15-minute standardized-patient interviews. An examiner rated each participant's performance using a binary content checklist and a global processing rating. The participants provided a diagnosis two minutes into and at the end of the interview.
Findings: Binary checklists proved to be less valid for measuring increasing clinical competence. On global scales, the experienced clinicians scored significantly better than did the residents and clerks, but on checklists, the experienced clinicians scored significantly worse than did the residents and clerks.
Other findings: Diagnostic accuracy increased for all groups between the two-minute and 15-minute marks without significant differences between the groups.


Hull et al. (1995), USA
Target group: Internal medicine clerks
Focus: Validity of different methods of assessing clinical competence
Intervention: A multitrait-multimethod approach was used to assess student performance. Methods were a clinical evaluation form (CEF), OSCE, and National Board of Medical Examiners examinations.
Findings: The results suggest that there is a statistically significant but lower than expected convergence in the measures of clinical skills and knowledge across the three assessment methods.

Jansen et al. (1995)
Target group: General practitioners (GPs) and trainees in general practice
Focus: Use of a performance-based test (PBT), a written knowledge test of skills (KTS), and a self-assessment questionnaire (SAQ) to assess provider competence
Intervention: The three different assessment tools were administered to 49 GPs and 47 trainees in general practice.
Findings: The mean scores on the PBT and KTS showed no substantial differences between GPs and trainees, while GPs scored higher on the SAQ.
Other findings: Performance-based testing is a better way to assess proficiency in hands-on skills.

McFaul and Howie (1993), UK
Target group: Medical students
Focus: Compare and assess clinical competence among final-year medical students of two medical schools
Intervention: OSCE with stations designed to assess student competencies in history-taking, physical examination, interpretation of data or results, interpersonal skills, practical procedures, and factual knowledge.
Findings: For a majority of stations, there was no statistically significant difference between the mean scores received by the students of the two medical schools.
Other findings: OSCE, well accepted by students and teachers, can be introduced easily in a medical school.

Peabody et al. (2000), USA
Target group: Staff physicians
Focus: Comparison of vignettes, standardized patients, and chart abstraction methods for measuring the quality of care
Intervention: Staff physicians were randomly selected at 2 facilities. Performance was measured on 4 common outpatient conditions: low back pain, diabetes mellitus, chronic obstructive pulmonary disease, and coronary artery disease.
Findings: Using vignettes consistently produced scores closer to the gold standard of standardized patients than using chart abstractions. This pattern was found to be robust when the scores were disaggregated by case complexity, by site, and by level of physician training.
Other findings: The study also found that low competence may be significantly determined by physician characteristics and not merely structural effects.

Ram et al. (1999), USA
Target group: Family physicians
Focus: Comparative study of measurement characteristics of a multiple-station examination (MSE) using standardized patients and a practice video assessment (PVA) of regular consultations in daily practice
Intervention: Consultations of 90 family physicians were videotaped both in an MSE and in their daily practices. Peer observers used a validated instrument (MAAS-Global) to assess the physicians' communication with patients and their medical performances.
Findings: PVA was better able to assess performance of physicians' practices than MSE using standardized patients. Content validity of the PVA was superior to that of the MSE, since the domain of general family practice was better covered.
Other findings: Observed participants judged the videotaped practice consultations to be "natural," whereas most physicians did not see their usual practice behavior while reviewing videotaped consultations of the MSE.

Ready (1994), USA
Target group: Nurses
Focus: Periodic clinical competency testing of emergency nurses as part of the credentialing process
Intervention: All clinical nurses providing patient care were evaluated using cognitive and psychomotor tests. While cognitive skills were evaluated by written examinations, practical skills were assessed by following a sequential set of psychomotor skills.
Findings: The competency-testing program helped emergency nurses apply established standards of care more consistently.


Reznick et al. (1992), USA
Target group: First- and second-year residents at 4 sites
Focus: Validity of OSCE for assessing clinical competence of providers for the issuance of a license to practice medicine
Intervention: The 240 residents were randomly assigned to take the 20-station OSCE test for national licensure.
Findings: The test results showed that OSCE is the state-of-the-art method for testing clinical skills as it provided a major advantage of observing actual performance. Also, the use of standardized patients enabled this examination to simulate real-world conditions with a high degree of fidelity.
Other findings: The study also showed that the production of this type of examination could not be rushed. Adequate time is needed for case development, standard setting, translation, and technical formatting. In addition, an examination of this scale requires that all aspects of development, production, and site supervision be centralized.

Sloan et al. (1993), USA
Target group: Surgical interns
Focus: Efficacy of OSCE in evaluating resident competence
Intervention: A comprehensive 35-station OSCE was administered to 23 incoming interns and 7 outgoing interns.
Findings: The outgoing interns performed significantly better than the incoming ones.

Snyder and Smit (1998), Australia
Target group: Evaluators of Emergency Medical Technician (EMT) practical examinations
Focus: Level of rater reliability for evaluators scoring EMT practical examinations using the Michigan practical examination instrument
Intervention: Licensed instructor-coordinators (104) were assessed on their scoring of two practical examinations on videotape, one passing and one failing performance.
Findings: Variations were found in scores given by evaluators for a single observed student. Also, evaluators often did not agree with a student's skill performance using the Michigan practical examination.

Steiner et al. (1998), USA
Target group: Medical students
Focus: Effect of clinical training setting on skills
Intervention: Medical students, who had encountered patients with low-back pain in primary care settings, tertiary care settings, both, or neither, were tested using standardized patients for their skills in the areas of history-taking, physical examination, and the selection of a diagnostic strategy.
Findings: Overall, students, irrespective of the training setting, performed poorly, suggesting that the curriculum inadequately teaches clinical skills needed to assess and manage common problems.
Other findings: Training outcomes can be improved by setting clearer expectations of competencies and by setting mechanisms that assure that preceptors in ambulatory settings will help students meet those expectations.

Swartz et al. (1999), USA
Target group: Medical students
Focus: Comparison of validity of global ratings of checklists and observed clinical performance with standardized patients
Intervention: Five faculty physicians independently observed and rated videotaped performances of 44 medical students on seven standardized patient (SP) cases. A year later, the same panel of raters reviewed and rated checklists for the same 44 students on 5 of the same SP cases.
Findings: The mean global ratings of clinical competence were higher with videotapes than checklists, whereas the mean global ratings of interpersonal and communication skills were lower with videotapes. The results raise serious questions about the viability of global ratings of checklists as an alternative to ratings of observed clinical performance as a criterion for SP assessment.


Wood and O'Donnell (2000)
Target group: Medical professionals
Focus: Measuring competence at job interviews: a traditional interview gives interviewers a chance to assess the intellect, enthusiasm, and "sparkle" of candidates, which cannot be faithfully conveyed in a curriculum vitae, but assesses competence and achievement poorly
Intervention: "Objective Assessment of Competence Achievement" (OACA) was designed to assess the competence of applicants. The system follows the OSCE model, whereby the interview panel splits into several rooms, each assessing a different aspect of performance, e.g., consultation skills, achievements, and interpersonal relationships.
Findings: The OACA approach was better able to identify candidates with low competence. Most of the "poorly" competent candidates would have sailed through interviews.
Other findings: More stations are needed for a comprehensive assessment. These stations could have reviewed prior assessments of skill or competence done at the workplace, such as videos of operations or a consultation.


References

Aguinis, H., and K. Kraiger. 1997. Practicing what we preach: Competency-based assessment of industrial/organizational psychology graduate students. The Industrial-Organizational Psychologist 34:34–40.

American Society for Training and Development. 1996. Linking training to performance goals. Info-Line (Issue 9606).

Anastasi, A. 1976. Psychological Testing. (4th ed.) New York: Macmillan.

Bandura, A. 1982. Self-efficacy mechanism in human agency. American Psychologist 37:122–47.

Bandura, A. 1986. Social Foundations of Thought and Action: A Social Cognitive Theory. Englewood Cliffs, NJ: Prentice Hall, Inc.

Bandura, A., and D. Cervone. 1986. Differential engagement of self-reactive influences in cognitive motivation. Organizational Behavior and Human Decision Processes 38:92–113.

Bartlem, C.S., and E.A. Locke. 1981. The Coch and French study: A critique and reinterpretation. Human Relations 34:555–66.

Bashook, P.G., and J. Parboosingh. 1998. Continuing medical education: Recertification and the maintenance of competence. British Medical Journal 316:545–48.

Benner, P. 1982. Issues in competency-based testing. Nursing Outlook 303–09.

Benner, P. 1984. From Novice to Expert: Excellence and Power in Clinical Nursing Practice. California: Addison-Wesley.

Beracochea, E., R. Dickson, P. Freeman, and J. Thomason. 1995. Case management quality assessment in rural areas of Papua, New Guinea. Tropical Doctor 25:69–74.

Berden, H.J.J.M., J.M.A. Hendrick, J.P.E.J. Van Dooren, F.F. Wilems, N.H.J. Pijls, and J.T.A. Knape. 1993. A comparison of resuscitation skills of qualified general nurses and ambulance nurses in the Netherlands. Heart & Lung 22:509–15.

Bersky, A.K., and C.J. Yocom. 1994. Computerized clinical simulation testing: Its use for competence assessment in nursing. Nursing & Healthcare 15:120–27.

Borman, W.C. 1974. The rating of individuals in organizations: An alternative approach. Organizational Behavior and Human Performance 12:105–24.

Bose, S., E. Oliveras, and W.N. Edson. Forthcoming. How can self-assessment improve the quality of healthcare? Operations Research Issue Paper. Bethesda, MD: To be published for the U.S. Agency for International Development (USAID) by the Quality Assurance Project.

Boyatzis, R.E. 1982. The Competent Manager: A Model for Effective Performance. New York: Wiley.

Brown, L.P., L.M. Franco, N. Rafeh, and T. Hatzell. 1992. Quality assurance of healthcare in developing countries. Bethesda, MD: Report published for the U.S. Agency for International Development (USAID) by the Quality Assurance Project.

Bryce, J., A. Voigt, A. Adegorovye, B. Zakari, D. Oyebolu, A. Rodman, and S. Saba. n.d. Skills assessment in primary healthcare training. (Report for the Africa Regional Project, 698-0421). Washington, DC: United States Agency for International Development.

Butler, F.C. 1978. The concept of competence: An operational definition. Educational Technology January, 7–18.

Cabana, M.D., C.S. Rand, N.R. Powe, A.W. Wu, M.H. Wilson, P. Abbond, and H.R. Rubin. 1999. Why don't physicians follow clinical practice guidelines? A framework for improvement. Journal of the American Medical Association 282(15):1458–65.

Campbell, J.P., R.A. McCloy, S.H. Oppler, and C.E. Sager. 1993. A theory of performance. In Personnel Selection in Organizations, eds. N. Schmitt, W.C. Borman, and Associates. San Francisco: Jossey-Bass.

Carper, B.A. 1992. Fundamental patterns of knowing in nursing. In Perspectives on Nursing Theory, ed. L.H. Nicoll. Philadelphia: J.B. Lippincott Company.


Centra Health. 1999. Competency Assessment Model. Lynchburg, Virginia.

CHR (Center for Health Research). 1999. Quality assessment study: Service delivery expansion support project, Indonesia. Jakarta: University of Indonesia.

Coates, V.E., and M. Chambers. 1992. Evaluation of tools to assess clinical competence. Nurse Education Today 12:122–29.

Cockerill, R., and J. Barnsley. 1997. Innovation theory and its application to our understanding of the diffusion of new management practices in healthcare organizations. Healthcare Management Forum 10:35–8.

Cohen, S.J., H.W. Halvorson, and C.A. Gosselink. 1994. Changing physician behavior to improve disease prevention. Preventive Medicine 23:284–91.

Cohen, D.S., J.A. Colliver, M.S. Marcey, E.D. Fried, and M.H. Swartz. 1996. Psychometric properties of a standardized-patient checklist and rating-scale form used to assess interpersonal and communication skills. Academic Medicine Supplement 71:S87–S89.

Colliver, J.A., N.V. Vu, and H.S. Barrows. 1992. Screening test length for sequential testing with a standardized-patient examination: A receiver operating characteristic analysis. Academic Medicine 67:592–95.

Colliver, J.A., and R.G. Williams. 1993. Technical issues: Test application. Academic Medicine 68:454–59.

Connor, M.A., and A.F. Cook. 1995. Laser operator training and competency assessment in the community hospital setting. Journal of Laser Applications 7:177–81.

Cusimano, M.D., R. Cohen, W. Tucker, J. Murnaghan, R. Kodama, and R. Reznick. 1994. A comparative analysis of the costs of administration of an OSCE. Academic Medicine 69:571–76.

Das, M., D. Mpofu, E. Dunn, and J.H. Lanphear. 1998. Self and tutor evaluations in problem-based learning tutorials: Is there a relationship? Medical Education 32:411–18.

Davis, D.A., M.A. Thomas, A.D. Oxman, and R.B. Haynes. 1992. Evidence for the effectiveness of CME: A review of 50 randomized controlled trials. Journal of the American Medical Association 268(9):1111–17.

———. 1995. Changing physician performance: A systematic review of the effect of continuing medical education strategies. Journal of the American Medical Association 274(9):700–04.

Devitt, P., and E. Palmer. 1998. Computers in medical education 3: A possible tool for the assessment of clinical competence? Australian and New Zealand Journal of Surgery 68:602–04.

Duller, S.L.F. 1995. Determining competence of experienced critical care nurses. Nursing Management 26:48F–48H.

Elnicki, D.M., W.T. Shockcor, D.K. Morris, and K.A. Halbritter. 1993. Creating an objective structured clinical examination for the internal medicine clerkship: Pitfalls and benefits. The American Journal of the Medical Sciences 306:94–97.

Ericsson, K.A., ed. 1996. The Road to Excellence. Mahwah, NJ: Lawrence Erlbaum Associates.

Erez, M. 1990. Performance quality and work motivation. In Work Motivation, eds. U. Kleinbeck, H. Quast, H. Thierry, and H. Hacker. Hillsdale, NJ: Erlbaum.

Fenton, M.V. 1985. Identifying competencies of clinical nurse specialists. Journal of Nursing Administration 15:32–37.

Fincher, R.E., K.A. Lewusm, and T.T. Kuske. 1993. Relationships of interns' performance to their self-assessments of their preparedness for internship and to their academic performance in medical school. Academic Medicine Supplement 68:S47–S50.

Fitzpatrick, J.M., A.E. While, and J.D. Roberts. 1994. The measurement of nurse performance and its differentiation by course of preparation. Journal of Advanced Nursing 20:761–68.

Fleishman, E.A., and C.J. Bartlett. 1969. Human abilities. Annual Review of Psychology 20:349–80.

Franco, L.M., C. Franco, N. Kumwenda, and W. Nkloma. 1996. Comparison of methods for assessing quality of health worker performance related to management of ill children. Quality Assurance Methodology Refinement Series. Bethesda, MD: Published for the U.S. Agency for International Development (USAID) by the Quality Assurance Project.


Franco, L.M., C.C. Daly, D. Chilongozi, and G. Dallabetta. 1997. Quality of case management of sexually transmitted diseases: Comparison of the methods for assessing the performance of providers. Bulletin of the World Health Organization 75(6):523–32.

Franco, L.M., R. Kanfer, L. Milburn, R. Qarrain, and P. Stubblebine. 2000. An in-depth analysis of individual determinants and outcomes of health worker motivation in two Jordanian hospitals. Bethesda, MD: Partnership for Health Reform Project-Abt Associates.

Friedman, C.P., A.S. Elstein, F.M. Fredric, G.C. Murphy, T.M. Franz, P.S. Heckerling, P.L. Fine, T.M. Miller, and V. Abraham. 1999. Enhancement of clinicians' diagnostic reasoning by computer-based consultation. Journal of the American Medical Association 282:1851–56.

Gordon, J.J., N.A. Saunders, D. Hennrikus, and R.W. Sanson-Fisher. 1992. Interns' performance with simulated patients at the beginning and the end of the intern year. Journal of General Internal Medicine 7:57–62.

Green, L., M. Kreuter, S. Deeds, and K. Partridge. 1980. Health Education Planning: A Diagnosis Approach. Palo Alto, CA: Mayfield Press.

Hamric, A.B. 1989. History and overview of the CNS role. In The Clinical Nurse Specialist in Theory and Practice, eds. A.B. Hamric and J.A. Spross. Philadelphia: W.B. Saunders Co.

Harden, R.M., and F.A. Gleeson. 1979. Assessment of clinical competence using an objective structured clinical examination (OSCE). Medical Education 13:41–54.

Hebers, J.E., G.L. Noel, G.S. Cooper, J. Harvey, L.N. Pangaro, and M.J. Weaver. 1989. How accurate are faculty evaluations of clinical competence? Journal of General Internal Medicine 4:202–08.

Hermida, J., D. Nicholas, and S. Blumenfeld. 1996. Comparative Validity of Three Methods for Assessment of Quality of Primary Healthcare: Guatemala Field Study. Quality Assurance Methodology Refinement Series. Bethesda, MD: Published for the U.S. Agency for International Development (USAID) by the Quality Assurance Project.

Hodges, B., G. Regehr, N. McNaughton, R. Tiberius, and M. Hansom. 1999. OSCE checklists do not capture increasing levels of expertise. Academic Medicine 74:1129–34.

Hofstede, G. 1980. Culture's Consequences: International Differences in Work-Related Values. Beverly Hills, CA: Sage.

Hull, A.L., S. Hodder, B. Berger, D. Ginsberg, N. Lindheim, J. Quan, and M.E. Kleinhenz. 1995. Validity of three clinical performance assessments of internal medicine clerks. Academic Medicine 70:517–22.

Irvine, D. 1997. The performance of doctors. I: Professionalism and self regulation in a changing world. British Medical Journal 314:1540–42.

Irvine, D. 1997. The performance of doctors. II: Maintaining good practice, protecting patients from poor performance. British Medical Journal 314:1613.

Jaeger, A.M. 1990. The applicability of western management techniques in developing countries: A cultural perspective. In Management in Developing Countries, eds. A.M. Jaeger and R.N. Kanungo. London: Routledge.

Jansen, J.J.M., L.H.C. Tan, C.P.M. Van der Vleuten, S.J. Van Luijk, J.J. Rethans, and R.P.T.M. Grol. 1995. Assessment of competence in technical clinical skills of general practitioners. Medical Education 29:247–53.

JCAHO (Joint Commission on Accreditation of Healthcare Organizations). 1996. Comprehensive accreditation manual for hospitals: The official handbook. Oakbrook, IL.

Kanungo, R.N., and A.M. Jaeger. 1990. Introduction: The need for indigenous management in developing countries. In Management in Developing Countries, eds. A.M. Jaeger and R.N. Kanungo. London: Routledge.

Kapil, U., A.K. Sood, D. Nayar, D.R. Gaur, D. Paul, S. Chaturvedi, and M. Srivasta. 1992. Assessment of knowledge and skills about growth monitoring among medical officers and multipurpose workers. Indian Pediatrics 31:43–46.

Kelley, E., C. Geslin, S. Djibrina, and M. Boucar. 2000. The impact of QA methods on compliance with the Integrated Management of Childhood Illness algorithm in Niger. Operations Research Results 1(2). Bethesda, MD: Published for the U.S. Agency for International Development (USAID) by the Quality Assurance Project.

Kelly, M.H., L.M. Campbell, and T.S. Murray. 1999. Clinical skills assessment. British Journal of General Practice 49:447–50.


Kim, Y.M., F. Putjuk, A. Kols, and E. Basuki. 2000. Improving provider-client communication: Reinforcing IPC/C training in Indonesia with self-assessment and peer review. Operations Research Results 1(6). Bethesda, MD: Published for the U.S. Agency for International Development (USAID) by the Quality Assurance Project.

Kinnersley, P., and R. Pill. 1993. Potential of using simulated patients to study the performance of general practitioners. British Journal of General Practice 43:297–300.

Klein, R. 1998. Competence, professional self regulation and the public interest. British Medical Journal 316:1740–42.

Klimoski, R.J., and M. London. 1974. Role of the rater in performance appraisal. Journal of Applied Psychology 59:445–51.

Knebel, E. 2000. The use and effect of computer-based training: What do we know? Operations Research Issue Paper 1(2). Bethesda, MD: Published for the U.S. Agency for International Development (USAID) by the Quality Assurance Project.

Kramer, D. 1996. Keeping competency up when patient numbers are down. Kansas Nurse 71:4–5.

Landy, F.J. 1985. Psychology of Work Behavior. (3rd ed.) Homewood, IL: Dorsey Press.

Lane, D.S., and V.S. Ross. 1998. Defining competencies and performance indicators for physicians in medical management. American Journal of Preventive Medicine 14:229–36.

Latham, G.P. 1986. Job performance and appraisal. In International Review of Industrial and Organizational Psychology, eds. C.L. Cooper and I. Robertson. New York: Wiley.

Lenburg, C.B. 1999. Redesigning expectations for initial and continuing medical competence for contemporary nursing practice. Online Journal of Issues in Nursing.

Locke, E.A., and G.P. Latham. 1990. Work motivation: The high performance cycle. In Work Motivation, eds. U. Kleinbeck, H. Quast, H. Thierry, and H. Hacker. Hillsdale, NJ: Erlbaum.

Loevinsohn, B.O., E.T. Gierrero, and S.P. Gregorio. 1995. Improving primary healthcare through systematic supervision: A controlled field trial. Health Policy and Planning 10:144–53.

Long, H.B. 1989. Selected principles developing self-direction in learning. Paper presented at the Annual Meeting of the American Association for Adults and Continuing Education. Atlantic City, NJ (October).

Lyons, T.F. 1974. The relationship of physicians' medical recording performance to their medical care performance. Medical Care 12:463–69.

MacDonald, P. 1995. The peer review program of the Indonesian midwives association (Final Report of Phase Two of the Pilot Project). Bethesda, MD: University Research Corporation.

Main, D.S., S.J. Cohen, and C.C. DiClemente. 1995. Measuring physician readiness to change cancer screening: Preliminary results. American Journal of Preventive Medicine 11:54–58.

Marquez, L. Forthcoming. Helping Healthcare Providers Perform According to Standards. Operations Research Issue Paper. Bethesda, MD: To be published for the U.S. Agency for International Development (USAID) by the Quality Assurance Project.

McCaskey, L., and M. LaRocco. 1995. Competency testing in clinical microbiology. Laboratory Medicine 26:343–49.

McFaul, P.B., and P.W. Howie. 1993. The assessment of clinical competence in obstetrics and gynecology in two medical schools by an objective structured clinical examination. British Journal of Obstetrics and Gynecology 100:842–46.

Mendonca, M., and R.N. Kanungo. 1994. Motivation through effective reward management in developing countries. In Work Motivation: Models for Developing Countries, eds. R.N. Kanungo and M. Mendonca. New Delhi: Sage Publications.

Miller, G.E. 1993. Conference summary. Academic Medicine 68:471–76.

Misra, S., and R.N. Kanungo. 1994. Bases of work motivation in developing societies: A framework for performance management. In Work Motivation: Models for Developing Countries, eds. R.N. Kanungo and M. Mendonca. New Delhi: Sage Publications.

Mittman, B.S., X. Tonesk, and A. Jacobson. 1995. Implementing clinical practice guidelines: Social influence strategies and practitioner behavior change. Quality Review Bulletin 18:413–22.


Mohrman, A., S.M. Resnick-West, and E.E. Lawler. 1989. Designing Performance Appraisal Systems. San Francisco: Jossey-Bass.

Mumford, M.D. 1983. Social comparison theory and the evaluation of peer evaluations: A review and some applied implications. Personnel Psychology 36:867–81.

Newble, D.I., and D.B. Swanson. 1988. Psychometric characteristics of the objective structured clinical examination. Medical Education 22:335–41.

Nicholls, J.G. 1984. Achievement motivation: Conceptions of ability, subjective experience, task choice, and performance. Psychological Review 91:328–46.

Nicholls, J.G., and A.T. Miller. 1984. Development and its discontents: The differentiation of the concept of ability. In Advances in Motivation and Achievement: The Development of Achievement Motivation (Vol. 3), ed. J.G. Nicholls. Greenwich, CN: JAI Press.

Norman, G.R., V.R. Neufeld, A. Walsh, C.A. Woodward, and G.A. McConvey. 1985. Measuring physicians' performances using simulated patients. Journal of Medical Education 60:925–34.

Norman, G.R., C.P.M. Van der Vleuten, and E. DeGraaff. 1991. Pitfalls in the pursuit of objectivity: Issues of validity, efficiency and acceptability. Medical Education 25:119–26.

Ofori-Adjei, D., and D.K. Arhuinful. 1996. Effect of training on the clinical management of malaria by medical assistants in Ghana. Social Science Medicine 42:1169–76.

Peabody, J.W., J. Luck, P. Glassman, T.R. Dresselhaus, and M. Lee. 2000. Comparison of vignettes, standardized patients, and chart abstraction: A prospective validation study of 3 methods for measuring quality. Journal of the American Medical Association 283:1715–22.

Pietroni, M. 1995. The assessment of competence in surgical trainees. Surgical Training 200–02.

Pritchard, R.D. 1990. Enhancing work motivation through productivity measurement and feedback. In Work Motivation, eds. U. Kleinbeck, H. Quast, H. Thierry, and H. Hacker. Hillsdale, NJ: Erlbaum.

Prochaska, J.O., and C.C. DiClemente. 1986. Toward a comprehensive model of change. In Treating Addictive Behaviors, eds. W.R. Miller and N. Heather. New York: Plenum.

Ram, P., C. Van der Vleuten, J.J. Rethans, J. Grol, and K. Aretz. 1999. Assessment of practicing family physicians: Comparison of observation in a multiple-station examination using standardized patients with observation of consultations in daily practice. Academic Medicine 74:62–69.

Ready, R.W. 1994. Clinical competency testing for emergency nurses. Journal of Emergency Nursing 20:24–32.

Reznick, R., S. Smee, A. Rothman, A. Chalmers, D. Swanson, L. Dufresne, G. Lacombe, J. Baumber, P. Poldre, L. Levasseur, R. Cohen, J. Mendez, P. Patey, D. Boudreau, and M. Berard. 1992. An objective structured clinical examination for the licentiate: Report of pilot project of the Medical Council of Canada. Academic Medicine 67:487–94.

Rogers, E.M. 1962. Diffusion of Innovations. New York, NY: The Free Press.

Rohwer, J., and B. Wandberg. 1993. An Innovative School Health Education Model Designed for Student Achievement.

Salazar-Lindo, E., E. Chea-Woo, J. Kohatsu, and P.R. Miranda. 1991. Evaluation of clinical management training programme for diarrhoea. Journal of Diarrhoeal Disease Research 9:227–34.

Sandvik, H. 1995. Criterion validity of responses to patient vignettes: An analysis based on management of female urinary incontinence. Family Medicine 6:388–92.

Schuler, S.R., E.N. McIntosh, M.C. Goldstein, and B.R. Pandi. 1985. Barriers to effective family planning in Nepal. Studies in Family Planning 16:260–70.

Sloan, D.A., M.B. Donnelly, S.B. Johnson, R.W. Schwartz, and W.E. Strodel. 1993. Use of an objective structured clinical examination (OSCE) to measure improvement in clinical competence during the surgical internship. Surgery 114:343–51.

Smith, J.E., and S. Merchant. 1990. Using competency exams for evaluating training. Training and Development Journal 65–71.

Snyder, W., and S. Smit. 1998. Evaluating the evaluators: Interrater reliability on EMT licensing examinations. Prehospital Emergency Care 2:37–47.


Southgate, L., and D. Dauphinee. 1998. Maintaining standards in British and Canadian medicine: The developing role of the regulatory body. British Medical Journal 316:697–700.

Spencer, L.M., D.C. McClelland, and S.M. Spencer. 1994. Competency Assessment Methods. Boston: Hay/McBer Research Press.

Spross, J.A., and J. Baggerly. 1989. Models of advanced nursing practice. In The Clinical Nurse Specialist in Theory and Practice, eds. A.B. Hamric and J.A. Spross. Philadelphia: W.B. Saunders Company.

Steiner, B.D., R.L. Cook, A.C. Smith, and P. Curtis. 1998. Does training location influence the clinical skills of medical students? Academic Medicine 73:423–26.

Stillman, P.L., D. Swanson, S. Smee, A.E. Stillman, T.H. Ebert, V.S. Emmel, J. Caslowitz, H.L. Greene, M. Hamolsky, C. Hatem, D.J. Levenson, R. Levin, G. Levinson, B. Ley, G.J. Morgan, T. Parrino, S. Robinson, and J. Willms. 1986. Assessing clinical skills of residents with standardized patients. Annals of Internal Medicine 105:762–71.

Stillman, P., D. Swanson, M.B. Regan, M.M. Philbin, V. Nelson, T. Ebert, B. Ley, T. Parrino, J. Shorey, A. Stillman, E. Alpert, J. Caslowitz, D. Clive, J. Florek, M. Hamolsky, C. Hatem, J. Kizirian, R. Kopelman, D. Levenson, G. Levinson, J. McCue, H. Pohl, F. Schiffman, J. Schwartz, M. Thane, and M. Wolf. 1991. Assessment of clinical skills of residents utilizing standardized patients. Annals of Internal Medicine 114:393–401.

Stillman, P.L. 1993. Technical issues: Logistics. Academic Medicine 68:464–68.

Stoy, W.A. n.d. EMT-paramedic and EMT-intermediate continuing education: National guidelines. National Highway Traffic Safety Administration.

Sullivan, C.A. 1994. Competency assessment and performance improvement for healthcare providers. Journal of Health Quarterly 16:14–19.

Sullivan, R.L. 1996. Transferring performance skills: A clinician's case study. Technical & Skills Training 14–16.

Swartz, M.H., J.A. Colliver, C.L. Bardes, R. Charon, E.D. Fried, and S. Moroff. 1999. Global ratings of videotaped performance versus global ratings of actions recorded on checklists: A criterion for performance assessment with standard patients. Academic Medicine 74(9):1028–32.

Tamblyn, R.M., D.J. Klass, G.K. Schnabl, and M.L. Kopelow. 1991. The accuracy of standardized patient presentation. Medical Education 25:100–09.

Upmeyer, A. 1971. Social perception and signal detectability theory: Group influence on discrimination and usage of scale. Psychologische Forschung 34:283–94.

Van der Vleuten, C.P.M., and D.B. Swanson. 1990. Assessment of clinical skills with standardized patients: State of the art. Teaching and Learning in Medicine 2:58–76.

Van der Vleuten, C.P.M., G.R. Norman, and E. De Graaff. 1991. Pitfalls in the pursuit of objectivity: Issues of reliability. Medical Education 25:110–18.

Velicer, W.F., J.S. Rossi, C.C. DiClemente, and J.O. Prochaska. 1996. A criterion measurement model for health behavior change. Addictive Behavior 5:555–84.

Vu, N.V., H.S. Barrows, M.L. March, S.J. Verhulst, J.A. Colliver, and T. Travis. 1992. Six years of comprehensive, clinical, performance-based assessment using standardized patients at the Southern Illinois University School of Medicine. Academic Medicine 67:42–49.

While, A.E. 1994. Competence versus performance: Which is more important? Journal of Advanced Nursing 29:525–31.

Wood, E.P., and E. O'Donnell. 2000. Assessment of competence and performance at interview. British Medical Journal 320:S2–7231.

Wood, R., and C. Power. 1987. Aspects of the competence-performance distinction: Educational, psychological and measurement issues. Journal of Curriculum Studies 19:409–24.
