
Guest Essays

FMEA vs. FRACAS vs. RCA

By Jan Krouwer

FME(C)A – Failure mode effects (and criticality) analysis
FRACAS – Failure reporting and corrective action system
RCA – Root cause analysis

Many people who work in hospitals have never heard of the reliability tools that are
commonplace in manufacturing environments. Those who have, often representatives on
patient safety committees, are more likely to know of FMEA and RCA than of FRACAS. This
essay explains the differences among these techniques, with emphasis on the perspective
of the hospital worker.

Background

The Joint Commission on Accreditation of Healthcare Organizations (JCAHO) requires:

• FMEA to be performed once a year
• RCA to be performed for sentinel events and near misses

Hospital use of RCA has been critiqued by Berwick (1), who suggests that RCA seeks to
find a single cause. I have responded (2) that, in my experience, RCA is not limited to
seeking single causes. A more important limitation of RCA as practiced by hospitals is
that it often restricts attention, as implied by JCAHO policy, to sentinel events and
near misses. This focus leaves the many less severe events out of the picture.

FRACAS (3-4) is essentially the same as RCA, although FRACAS is often combined
with other tools such as reliability growth management, which is based on learning curve
theory. Moreover, in FRACAS, all observed error events are analyzed, and in this way
FRACAS is similar to FMEA.
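
To give a feel for the learning-curve idea, the sketch below uses hypothetical data and a simple Duane-style log-log fit; it is my illustration rather than the method of reference (4). It shows how a FRACAS error log can be used to estimate whether, and how fast, the cumulative error rate is falling as corrective actions accumulate.

```python
# A minimal sketch (hypothetical data, illustration only) of the learning-curve
# idea behind reliability growth: plot the cumulative error rate from a FRACAS
# log against cumulative experience on a log-log scale and fit a line
# (a Duane-style model). A negative slope means the process is improving.
import numpy as np

# Hypothetical FRACAS data: cumulative tests run and cumulative errors observed
cum_tests = np.array([100, 300, 1_000, 3_000, 10_000], dtype=float)
cum_errors = np.array([12, 25, 55, 110, 230], dtype=float)

cum_rate = cum_errors / cum_tests  # cumulative error rate

# Fit log(rate) = intercept + slope * log(cum_tests)
slope, intercept = np.polyfit(np.log(cum_tests), np.log(cum_rate), 1)

print(f"growth slope = {slope:.2f}")  # the more negative, the faster the improvement
print(f"projected rate at 30,000 tests ≈ {np.exp(intercept) * 30_000 ** slope:.4f}")
```

Used this way, a FRACAS log does more than drive individual corrective actions: it yields a measured error rate (the "Measured" entry in Table 1) and a way to project it forward.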


The term FRACAS will be used here instead of RCA. Table 1 shows a comparison of
some attributes of FMEA and FRACAS.

Table 1 Attributes of FMEA and FRACAS for a process

Attribute                         | FMEA                                                        | FRACAS
----------------------------------|-------------------------------------------------------------|-------------------------------------------------------
General                           | “Proactive”                                                 | “Reactive”
Purpose                           | Affect the design before launch                             | Correct problems after launch*
Errors                            | May occur – the potential errors must be enumerated         | Have occurred – observed errors are simply counted
Error rate                        | Assumed                                                     | Measured
Issues with technique             | Is it complete? Models can be wrong.                        | All errors counted? Culture inhibits reporting errors.
Can be combined with              | Fault trees                                                 | Fault trees
Evaluate quality of the technique | Difficult – completeness, reasonableness of mitigations is qualitative | Simple – measure error rate

*For a product FRACAS, problems can be observed after design but before launch.

FMEA and FRACAS Can Inform Fault Trees

Both FRACAS and FMEA can be combined with fault trees, which offer a “top-down”
structured way of representing causes for an undesirable event. The graphical structure
imposed by a fault tree makes a FMEA more likely to be complete, since a FMEA on its own
is basically an unordered list in a table.

Fault trees allow multiple causes for an event and use “and” and “or” gates to distinguish
between error types. Because fault trees can contain both potential and observed errors,
they are ideal for containing the knowledge expressed in both a FMEA and FRACAS. That
is, when a process is designed, the ways it might fail are captured in a fault tree (and
FMEA). After the process is launched, the ways in which the process has failed are
captured through FRACAS, and this knowledge is used to update the fault tree. In both
the FMEA and FRACAS, the fault tree is also updated when a mitigation is implemented,
since this represents a design change to the process (see Figure 1).
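
To make the combination concrete, the sketch below (my illustration, not from the essay; the event names and structure are hypothetical) represents a small fault tree in which basic events are tagged as potential (enumerated in a FMEA) or observed (reported through FRACAS), and gates combine causes with AND/OR logic. Updating the tree after a FRACAS report, or after a mitigation changes the design, is then simply an edit to this structure.

```python
# A minimal sketch of a fault tree whose basic events are tagged by origin:
# "potential" (enumerated in a FMEA) or "observed" (reported through FRACAS).
# Gates combine causes with AND/OR logic. Illustration only; the event names
# are hypothetical.
from dataclasses import dataclass, field
from typing import List, Union

@dataclass
class Event:
    name: str
    origin: str                 # "potential" (FMEA) or "observed" (FRACAS)
    has_occurred: bool = False  # flipped to True when FRACAS reports the event

@dataclass
class Gate:
    kind: str                   # "AND" or "OR"
    children: List[Union["Gate", Event]] = field(default_factory=list)

    def evaluate(self) -> bool:
        results = [c.evaluate() if isinstance(c, Gate) else c.has_occurred
                   for c in self.children]
        return all(results) if self.kind == "AND" else any(results)

# Hypothetical top event for a transplant service process
top = Gate("OR", [
    Event("Patient infection after surgery", origin="observed", has_occurred=True),
    Gate("AND", [
        Event("Organ blood type mislabeled", origin="potential"),
        Event("Verification step skipped", origin="potential"),
    ]),
])

print("Top event has occurred:", top.evaluate())  # True, driven by the observed error
```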


Figure 1 Use of FMEA, FRACAS, and Fault trees to prevent errors in processes


Don’t Neglect FRACAS

Both FMEA and FRACAS are useful. Yet, the JCAHO requirement focuses on FMEA. In
a sense, this is logical because FMEA is more encompassing than FRACAS. FMEA
addresses potential errors but can also accommodate observed errors, whereas
FRACAS is intended only for observed errors. The problem is that, with an estimated
98,000 deaths due to medical errors each year, the huge number of observed errors makes
it easy to pay insufficient attention to potential errors if one performs only FMEA.

Consider two error events for a hypothetical FMEA for a transplant service:

1. Patient infection after surgery – an observed error
2. Organ selected with incorrect blood type – a potential error

If one takes into account the entire service, the number of observed error events will likely
cause a ranking problem. Ranking is important because the service will have limited funds
to apply to mitigations. So even though selection of an organ with the wrong blood type
may never have occurred, the selection process could be flawed and could benefit from
mitigations. Yet that mitigation may never happen if attention remains focused on observed
errors. Hence, one should perform both FMEA and FRACAS, as indicated in Figure 1. The
combination of the two tools reduces the likelihood of ranking problems, since the FMEA
will focus on potential problems and the FRACAS will focus on observed problems.
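
One simple way to keep potential errors from being buried under the volume of observed events, sketched below with hypothetical numbers (my illustration; the essay does not prescribe a particular scheme), is to rank the two lists separately: observed FRACAS events by their measured counts, and potential FMEA items by a conventional risk priority number (severity × occurrence × detection), and then fund mitigations from the top of each list.

```python
# Hedged sketch with hypothetical data: rank observed (FRACAS) and potential
# (FMEA) items separately so that a rare but severe potential error, such as
# selecting an organ with the wrong blood type, is not buried under the
# volume of observed events.
observed = [  # (error, observed count per year)
    ("Specimen labeling error", 310),
    ("Patient infection after surgery", 42),
]

potential = [  # (error, severity, occurrence, detection), each on a 1-10 scale
    ("Organ selected with incorrect blood type", 10, 2, 4),
    ("Consent form misfiled", 4, 3, 2),
]

observed_ranked = sorted(observed, key=lambda x: x[1], reverse=True)
potential_ranked = sorted(potential, key=lambda x: x[1] * x[2] * x[3], reverse=True)

print("Top observed problem:", observed_ranked[0][0])
print("Top potential problem:", potential_ranked[0][0])
```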

A Challenge with FMEAs

As indicated in Table 1 under “purpose,” use of FMEA is intended to affect the design of a
process. In the medical diagnostics industry, however, an FDA-required hazard analysis
for instrument systems (a fault tree/FMEA for hazards) was at times merely documentation
of an existing design. The same issue exists for FMEAs in hospitals, since many FMEAs
will be performed for existing processes.


References

1. D. M. Berwick, “Errors Today and Errors Tomorrow,” New England Journal of Medicine 348 (2003): 2570-2572.
2. J. S. Krouwer, “There Is Nothing Wrong with the Concept of a Root Cause,” International Journal for Quality in Health Care 16 (2004): 263.
3. Department of Defense Handbook: Failure Reporting, Analysis and Corrective Action Taken, Mil-Std-2155 (Washington, DC: Department of Defense, 1995).
4. J. S. Krouwer, “Using a Learning Curve Approach to Reduce Laboratory Errors,” Accreditation and Quality Assurance Journal 7 (2002): 461-467, available at http://krouwerconsulting.com/KrouwerLearningCurve.pdf.

About the Author

Jan Krouwer, Ph.D., has more than 20 years of experience in the medical diagnostics
industry. For most of that time, Dr. Krouwer worked for Bayer Diagnostics and the
companies it previously acquired: Chiron Diagnostics, Ciba Corning, and Corning Medical.
He also was the executive director of the Evaluations and Reliability Department, an
internal consulting group. He earned his Ph.D. in synthetic organic chemistry from M.I.T.
and completed postdoctoral work in enzymology at New England Medical Center. Dr.
Krouwer is the principal owner of Krouwer Consulting (http://krouwerconsulting.com/).

Copyright © 2007, Jan Krouwer. Used with permission.

“Guest Essays” is a feature of ASQ’s Healthcare Update newsletter. If you would like to
submit an essay for consideration, e-mail HealthcareUpdateNewsletter@asq.org.

To subscribe to the newsletter, visit http://www.asq.org/healthcare/update_info.html.
