
Management of Healthcare
Processes Based on
Measurement and Evaluation:
Changing the Policy in an
Italian Teaching Hospital

Ulrich Wienand, Head of Research Innovation Quality and Accreditation Office, Ferrara
University Hospital, Ferrara, Italy
Gabriele Rinaldi, Director-General, Ferrara University Hospital, Ferrara, Italy
Gloria Gianesini, Orthopaedics Unit, Ferrara University Hospital, Ferrara, Italy
Anna Ferrozzi, Management Engineer, Research Innovation Quality and Accreditation Office,
Ferrara University Hospital, Ferrara, Italy
Luca Poretti, Founder and CEO, XPSoft.it, Ferrara, Italy
Giorgia Valpiani, Statistician, Research Innovation Quality and Accreditation Office, Ferrara
University Hospital, Ferrara, Italy
Adriano Verzola, Head of Performance Analysis and Programming Office, Ferrara University
Hospital, Ferrara, Italy

ABSTRACT
Clinical management and care outcome measures, which are now becoming mandatory in more and more
countries, can influence the quality of care if they are relevant, evidence-based, carefully crafted and subjected to periodical quality review. Over a period of 13 years, a large Italian teaching hospital has used this
framework to develop a performance measurement system, comprising a total of 768 internal and 67 external
measures, with a view to improving service provision and accountability. The web-based performance measurement system does have a cost in terms of staffing and technological requirements, but the integration of
the data it provides into the decision-making process can have a considerable impact on performance, and
therefore quality of care.
Keywords: Accreditation, Clinical Indicators, Healthcare Management, Italian Teaching Hospital, Measures

DOI: 10.4018/ijrqeh.2014040102

INTRODUCTION
Many healthcare facilities have devised or adopted means of assessing "performance", grouping financial, organizational and clinical indicators under this umbrella term (Kazandjian, 2004, p.16). However, financial
auditing often takes precedence over other
forms of self-assessment, in direct opposition
to the spirit of the strategy, namely to use
indicators suitable for assessing the processes
and outcomes of healthcare functions and to
adopt appropriate methods for their measurement. Wherever possible these must be derived
from available knowledge on the efficacy of
healthcare interventions and be shared by all
stakeholders, as declared by the Director of
our Regional Healthcare Agency, Roberto
Grilli (2001).
As regards clinical performance indicators,
on the other hand, defined as measures of clinical management and/or care outcomes (ACHS,
2013), there is considerable evidence that these
have a positive impact on the healthcare system
(Collopy, 2000), as long as they comply with
high-quality standards and are constructed
in a careful and transparent manner. Indicators must be relevant to the important aspects
of quality of care. There should be adequate
research evidence that the recommendations
from which they are derived are related to
clinical effectiveness, safety and efficiency
(Wollersheim et al., 2007, p.15).
As far back as 1998, Sheldon pointed out that performance indicators are not simply technical entities but have "programmatic or normative elements which relate to the ideas and concepts which shape the mission of practice" (1998, p. S46). That being said, clinicians in particular periodically express their concerns regarding the use of performance indicators (Werner & Asch, 2007). Criticism aside, measuring and evaluating performance is now becoming a way of life, particularly as many healthcare facilities have to, or choose to, conform to external accreditation systems, whether externally validated or devised in-house (Miller, 2005; Kazandjian, 2003). Although in-house indicators may be more appealing, those already tried and tested in multi-centric, international schemes, such as IQIP (Pschibilla & Matthes, 2005; Press Ganey Associates, 2010), PATH (Vallejo et al., 2006; Veillard et al., 2005), the Australian Council on Healthcare Standards ACHS (2013), the Danish National Indicator Project (Mainz & Bartels, 2006) and the German SQG project, offer greater guarantees of their effectiveness in measuring quality. The purpose of this paper is to recount our hospital's journey through this process, highlighting the benefits and pitfalls.

BACKGROUND
The Ferrara University Hospital Trust is a public healthcare provider based at the Sant'Anna Hospital, situated in the Emilia-Romagna region of northern Italy. It employs 2,628 members of staff, comprising 476 physicians, and trains graduate and post-graduate students from the affiliated University of Ferrara School of Medicine. The hospital itself houses 626 beds for inpatients and 85 for those receiving day-hospital care; in 2013 there were 19,406 admissions (excluding healthy newborns) and 7,029 day-hospital patients.
In the spring of 2001, the hospital performed a thorough self-audit, prompted by the
sudden availability of theories and tools for
self-assessment (EFQM, 1999a, 1999b). This
process had its cultural background in 1997,
when the Emilia-Romagna Regional Administration began putting together an accreditation
system for healthcare structures and funding
local Continuous Quality Improvement projects.
It was also fuelled by the creation of new national
databases to collect clinical data in various
medical fields, namely the Joint Replacement
Register, the ICU national database, the Heart
Surgery Register (REAL), among others.
The main finding of the self-assessment survey conducted was the critical lack of suitable performance evaluation tools (section 9 in the EFQM model), given that the prime directive of a hospital is not measured in terms of financial performance, although balancing the books is obligatory, but in the standard of care it provides. Nevertheless, the hospital management team was able to identify a key quality in its staff, namely that they should all undertake to provide efficacious, up-to-date, evidence-based healthcare, in part by regularly updating their respective skill-sets and striving for continual improvement (Mission Statement). This directive was then translated into a series of shared clinical performance indicators for incorporation into the global management and auditing strategy. A bottom-up approach was decided upon, the aim being to create a tool for monitoring the performance of the hospital as a whole based upon that of its separate departments and their operating units (OUs).
Since this initial commitment to using clinical indicators for assessment and improvement of service provision (in 2001), local and national legislation have prompted the introduction of other goals, namely to ensure that the hospital conformed to external evaluation criteria (2004), to compare its internal performance to that of other facilities (2005), to increase accountability towards external stakeholders (2009), and, finally, to use cyclical performance review to determine hospital policy.
In Italy, and in this context, the English term "accountability" is taken to mean "the duty to document and report what has been done for the benefit of those who charged us with the task and/or who provided us with the resources" (Morosini & Perraro, 2001, p. 5).

USING CLINICAL
INDICATORS FOR INTERNAL
IMPROVEMENT (2001)
In the spring of 2001, all the OUs in the hospital were charged with defining their own clinical indicators. Meetings were held the same autumn to illustrate to directorate staff the main international systems used to evaluate clinical performance and the theories behind them, and to stress the importance of choosing suitable evaluation criteria (in terms of quantification and scientific relevance). Each directorate head was provided with a form to help them identify potentially appropriate indicator(s) through metadata, and within 18 months 100% of internal OUs had defined at least one clinical performance indicator, after convening to research the databanks and relevant international literature. The newly chosen and approved indicators were collated and distributed to all staff at a facility-wide conference to ensure maximum transparency and information sharing.
The following year the OUs were set the
objective of harvesting the collected data and
sending it to the central Quality Department
after fixed monitoring periods (every 3, 6 or
12 months, depending on the indicator(s) in
question). During this period certain issues
with the chosen indicator(s) were raised by
some Units, who found objective difficulties in
data collection, and these Units were given the
opportunity to redefine indicators that would
be easier to monitor and/or more relevant. By
February 2004, 80.5% of indicators had been
defined and were being monitored with the
prescribed regularity.
In 2003, an initial bulletin containing the
first set of figures was published and distributed
to all the staff. The bulletin also laid down the
mission for the following year, namely to determine the acceptable performance standard,
or threshold value, of each clinical indicator.
Unfortunately, this target also proved problematic, in that the reference literature of the time
did not always contain an accepted threshold,
which therefore had to be defined on the basis
of those reported by other healthcare facilities
or calculated with reference to internal historical
data. Both managerial and medical staff continue
to be actively involved in this ongoing process,
aided by the Quality Department, which helps
identify the relevant literature upon which the
new threshold values are based. To this end a standard form has been drawn up that must be completed in full. As part of this process, and with a view to the abovementioned commitment to continual improvement, the hospital also introduces various new clinical performance indicators (and retires those that are no longer relevant) as deemed necessary (See Table 1).

GOING ONLINE
When the paperwork began to accumulate to
unmanageable levels, the data collection system
was computerized, and the clinical indicators
used to measure each OU within each directorate, together with the pertinent metadata,
were made accessible online. This immediately
proved to be a winning formula, as all users
can access the information without the need to
update their software. Users not connected to
the hospital network can also access the site to
consult the performance indicators by means
of a dedicated public IP address connected to
the server.
The choice of technology used to realize the project was mainly influenced by the existing software, predominantly Microsoft-oriented (both client and server), which led to ASP.net and the IIS server being adopted as the natural media for website construction. This meant that neither the existing MS-SQL server database nor the other infrastructure and affiliated support services (back-up, security and data monitoring) would need to be changed.
The website was set up with XHTML 1.0 Transitional, the state-of-the-art standard at that time, in addition to several third-party components used to increase the dynamism of the interface. A restricted group of administrators can use it to enter and verify performance data, which they can also process as necessary, as well as generating pertinent reports and graphs. System administrators are able to define the indicators, assign user roles, define the cost-centre hierarchies, and perform all other housekeeping tasks necessary to ensure that the system functions correctly. As well as these administrators, over 350 members of internal staff (medical, nursing and clerical) have restricted access to the site for the purposes of entering and checking performance data before the prescribed monitoring deadlines.

Table 1. Tool for constructing an indicator

- Name of indicator
- Operating Unit
- Cost centre
- Product/Process measured by indicator
- Rationale of indicator/quality measure
- Numerator (how the numerator is defined)
- Denominator (how the denominator is defined)
- Type of indicator (numerator alone; numerator/denominator; numerator/denominator expressed as a percentage)
- Standard reference value
- Source for standard reference value
- Length of monitoring period (three-monthly, six-monthly or annually)
- Year of first data collection
- Person responsible for processing & issuing data
- Person responsible for analysing & approving data
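The metadata form in Table 1 maps naturally onto a small record type. Below is a minimal Python sketch of such a record, together with the completed-in-full check the text insists on; the field names and types are our illustrative assumptions, not the hospital's actual schema.

```python
from dataclasses import dataclass, fields

@dataclass
class IndicatorDefinition:
    """Metadata for one clinical indicator, mirroring the fields of Table 1."""
    name: str
    operating_unit: str
    cost_centre: str
    process_measured: str
    rationale: str
    numerator_definition: str
    denominator_definition: str    # may stay empty for "numerator alone" indicators
    indicator_type: str            # "numerator alone" | "ratio" | "percentage"
    standard_reference_value: str
    standard_source: str
    monitoring_period_months: int  # 3, 6 or 12
    first_collection_year: int
    data_issuer: str               # person responsible for processing & issuing data
    data_approver: str             # person responsible for analysing & approving data

    def missing_fields(self):
        """The form must be completed in full: list any string field left empty."""
        return [f.name for f in fields(self)
                if f.name != "denominator_definition"
                and isinstance(getattr(self, f.name), str)
                and not getattr(self, f.name).strip()]
```

A definition would only be accepted into the database once missing_fields() returns an empty list.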


Indeed, each indicator has a fixed monitoring period, and at the end of a pre-defined grace period a reminder notice is automatically generated and sent by e-mail to the OU in question and any other users involved. The reminder is sent every few days until the required fields have been completed and checked.
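The grace-period-and-reminder mechanism just described can be sketched as a small scheduled job. Everything below (field names, a 14-day grace period, a 3-day resend interval, the send_mail helper) is an illustrative assumption rather than the system's real code.

```python
from datetime import date, timedelta

GRACE_DAYS = 14    # assumed length of the pre-defined grace period
RESEND_EVERY = 3   # assumed "every few days" resend interval

def due_reminders(indicators, today):
    """Yield the indicators whose data is still missing or unchecked
    once the grace period after the monitoring deadline has expired."""
    for ind in indicators:
        deadline = ind["period_end"] + timedelta(days=GRACE_DAYS)
        overdue_days = (today - deadline).days
        if not (ind["entered"] and ind["checked"]) \
                and overdue_days >= 0 and overdue_days % RESEND_EVERY == 0:
            yield ind

# Run once a day from a scheduler, e.g.:
# for ind in due_reminders(load_open_indicators(), date.today()):
#     send_mail(to=ind["ou_email"], subject=f"Reminder: data due for {ind['name']}")
```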
The operations of entry and validation of
each indicator are performed by two different
individuals, so that the data entered into the
system is checked twice before being stored on
the database. Log tables store the history of each
operation performed by each user on each piece
of data. Once it has been entered and checked,
performance data is then converted into graphs
that can be accessed by every registered user (the
data pertaining to each OU is freely available to
the registered staff of other OUs within the hospital network). Non-conformity is highlighted,
and breakdown tables and other information are
available for consultation.
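The double-entry rule (one person enters, a different person validates) and the per-operation log tables can be sketched as follows; again, this is an illustrative reconstruction under assumed names, not the production code.

```python
from datetime import datetime

audit_log = []   # stands in for the log tables described above

def log(user, action, indicator_id, value):
    audit_log.append({"when": datetime.now(), "user": user,
                      "action": action, "indicator": indicator_id, "value": value})

def enter_value(store, user, indicator_id, value):
    """First of the two operations: data entry."""
    store[indicator_id] = {"value": value, "entered_by": user, "checked_by": None}
    log(user, "enter", indicator_id, value)

def validate_value(store, user, indicator_id):
    """Second operation: validation by a different individual."""
    record = store[indicator_id]
    if user == record["entered_by"]:
        raise PermissionError("entry and validation require two different individuals")
    record["checked_by"] = user
    log(user, "validate", indicator_id, record["value"])
```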

CONFORMING TO EXTERNAL
EVALUATION CRITERIA (2004)
External evaluation and accreditation is an indispensable part of the Italian national healthcare
system, as non-conformity will affect funding.
National laws passed in 1992 and 1997 gave
the 21 Italian Regional authorities the task of
supervising healthcare authorization and accreditation, and in the Emilia-Romagna Region
this came into effect in two stages in 1998 and
2004 (Regione Emilia Romagna, 1998, 2004;
Presidente del Governo della Repubblica Italiana, 1997; Governo della Repubblica Italiana,
1992). Now, in brief, the Regional Healthcare
Agency (RHA) checks the relevant documentation and ensures that the healthcare facility
in question possesses the legal requisites for
funding, issuing a certificate to this effect.
After a trial phase, in 2005 the Emilia-Romagna RHA began to make periodic inspections of the healthcare facilities within its remit, and therefore also Ferrara University Hospital. The regional accreditation system involves monitoring two different types of indicators: general (applicable to all Departments), and specific for each OU. The former are similar to the recommendations of the Joint Commission International's QPS2-4 (JCI, 2014), but for many disciplines the specific performance indicators are chosen or defined by regional think-tanks. In order to comply with the new laws, integrating the obligatory indicators into our performance monitoring system and linking each indicator to a specific product or process within each directorate or OU, our database had to be upgraded. The upgrade was completed in 2008, and 591 specific indicators were added, either at that time or since, as the OUs prepared for entry into the accreditation systems.

PEER COMPARISON OF
PERFORMANCE: IQIP (2005)
As the performance monitoring system was designed from the bottom up, a surfeit of piecemeal indicators was generated, none of which, however, reflected the performance of the hospital as a whole. Furthermore, only a small minority enabled direct comparison with other hospitals in Italy and abroad. These
considerations prompted us to adopt an internationally recognized system of performance
markers, namely the International Quality
Indicator Project (IQIP). This was piloted in
Maryland, USA, in 1985 at the behest of several
hospital managers who wanted to integrate clinical performance indicators into their financial
monitoring system. The project later expanded
to cover healthcare facilities throughout the
United States and overseas, and roughly 600
hospitals, 150 of which are outside the US
(United Kingdom, Austria, Germany, Switzerland, Portugal, Japan, Taiwan and Singapore),
have adopted this system. The IQIP database
therefore houses the largest set of international
data on clinical performance and outcome indicators, making it an ideal point of reference
for peer-to-peer comparison.
The majority of indicators in the IQIP database are hospital-wide, enabling comparison between hospitals but not peer OUs. Hospitals are ranked with respect to the mean data provided by the 600 participating facilities, and within sub-groups comprising those of similar characteristics (peer groups). Participating facilities can enter their data via a restricted-access web interface every three months, and the entries are processed and validated by the system. Each healthcare facility is given a unique identifier code in order to protect their anonymity, and performance reports are issued within 20 days of data entry. This rapid feedback is a key feature of the project, as members can compare their performance with national, European and international standards and divulge this information to their staff without a significant time lapse (Wienand, 2008).
Although hospital-wide, the IQIP indicators cover various aspects of healthcare, and, once enrolled, a facility can choose which of the indicators to measure. The idea behind the IQIP project is that standard measures can be used worldwide, but that comparison is only meaningful if geographical, economic and other distinguishing factors are taken into account. However, it goes without saying that any significant variation in measured values with respect to the reference values (such as the mean rate for European facilities) needs to be thoroughly investigated and addressed without delay.
Ferrara University Hospital signed up for the IQIP project on a trial basis in 2005, taking advantage of a grant provided by the Regional Council, and was the first healthcare facility to do so in Italy. As such, we were charged with evaluating the strengths and weaknesses of the system as regards its methods and organization. This successful experiment led to Ferrara University Hospital being appointed national coordinator for other Italian hospitals signing up to the IQIP project; by 2008 there were 8. Table 2 shows the IQIP indicators that Ferrara University Hospital is currently using to measure itself against other hospitals worldwide.

Table 2. IQIP indicators

| Indicator | Number of Measures (N=67) | Time Period |
| --- | --- | --- |
| Inpatient Mortality | 15 | 2005-2013 |
| Neonatal Mortality |  | 2005-2013 |
| Perioperative Mortality |  | 2008-2013 |
| Management of Labor |  | 2005-2013 |
| Documented Falls |  | 2005-2013 |
| Pressure Ulcers in Acute Inpatient Care |  | 2010-2013 |
| Deep Vein Thrombosis and Pulmonary Thromboembolism Following Surgery |  | 2008-2013 |
| Length of Stay in the Emergency Department | 24 | 2005-2013 |
| Patients Leaving the Emergency Department Before Completion of Treatment |  | 2007-2013 |
| Cancellation of Scheduled Ambulatory Procedures |  | 2005-2013 |

INCREASING ACCOUNTABILITY TO EXTERNAL STAKEHOLDERS (2009)

As regards public accountability, Italian law (Governo della Repubblica Italiana, 2009) states that all public bodies must perform an annual organizational and professional review and publish their findings on their website, making them accessible to the general public. Among the areas to be evaluated is the quality of service provision, and a think-tank appointed by the RHA identified a set of 7 hospital-wide indicators to be applied to all the hospitals in the Region (See Table 3).

Table 3. Indicators in the Ferrara University Hospital's social balance sheet

Hospital-wide indicators:
- Proportion of patients receiving an operation for fracture neck of femur within 2 days of admission
- Primary C-sections
- Proportion of surgery patients undergoing laparoscopic cholecystectomy
- Patients admitted with STEMI: proportion of angioplasties (PTCA) performed within 1 day
- Unscheduled re-admission within 15 days of discharge
- Mortality within 30 days of admission for Acute Myocardial Infarction
- Mortality within 30 days of admission for Stroke

| Directorate | Efficacy | Safety | Appropriateness |
| --- | --- | --- | --- |
| General Medicine | Complete diagnosis | Pressure ulcer prevalence | Number of day hospital (>4 hospital admissions) |
| Specialist Medicine | Proportion of dialysis patients stabilized on Hb >11 grams% | Infection rate in patients with long-term CVC | Number of failed US-guided fine-needle aspiration (FNA) thyroid biopsies |
| General Surgery | Peri-operative mortality | Major stroke after carotid endarterectomy | Number of failed US-guided core biopsies |
| Specialist Surgery | Prevalence of periodontal disease patients, and those at high risk of developing the condition, displaying bleeding on probing <30% within 6 months of non-invasive periodontal treatment | Number of post-adenotonsillectomy haemorrhages requiring corrective surgery | Number of admissions for acute endophthalmitis after cataract surgery |
| Reproduction and Growth | % pregnancies miscarried due to invasive testing (amnio, CVT, PUBS) | Mortality in VLBW newborns (birth weight <1500 grams) | % of level 3 care live births deemed high-risk during pregnancy or at birth |
| Accident and Emergency | Mortality in patients admitted to Coronary Unit with primary or secondary diagnosis of AMI | Sepsis in ICU patients fitted with CVC | Accuracy of fibrobronchoscopy |
| Neuroscience / Rehabilitation | 3-month mortality rate in stroke patients treated with rT-PA within 3 hours of onset | Number of post-surgical infections | Number of Neuro-ICU patients admitted to long-term care facilities |
| Laboratory Medicine and Diagnostic Imaging | Intraoperative histological diagnosis quality | Number of adverse reactions to transfusion | Evaluation of appropriateness and completeness of laboratory exams |

To this list Ferrara University Hospital added a further 24 indicators, 3 for each of the 8 directorates, which it selected from those already being monitored internally. This project was devised and implemented between the beginning of 2010 and July 2011. The staff of each directorate were involved in the identification of the 3 clinical performance indicators aimed at measuring the appropriateness, efficacy and safety of care, respectively. To make these indicators comprehensible to the general public, a work group comprising staff from the Quality, Public Relations, and Strategic Control departments was set up. The relevant data collected were published in the hospital's annually published Social Balance Sheet, in which a key section describes the rationale behind the choice of each measure for the benefit of a lay public.

INTEGRATING MEASUREMENT
FINDINGS INTO HEALTHCARE
POLICY (2010)
After 8 years of performance monitoring, Ferrara University Hospital decided to assess how performance data could best be integrated into the decision-making process. Reasoning that the true activity of evaluation has to incorporate "the bestowing of a value upon the observed statistical performance profiles" (Kazandjian & Wienand, 2008, p. 578), the hospital conducted a survey of high-level managers from the two healthcare trusts in Ferrara (University Hospital Trust and Local Healthcare Trust). These Healthcare Managers, Hospital Directors, Risk Managers, Quality Managers, and Heads of Directorates were interviewed to gather their opinions on the importance and practicalities of the indicators, in particular those on safety (as a test case). Interviews were conducted by a specially trained psychologist and administered by means of a semi-structured questionnaire (See Table 4).
Feedback from this questionnaire indicated that:

- 10 out of 22 managers (45.4%) said that they received no figures concerning patient safety;
- 8 out of 22 (36.4%) did not discuss the issue with other management staff;
- Overall, 14 out of 22 (63.6%) did not discuss the data with other managers, despite the actual availability of figures;
- Oddly enough, 6 managers claimed to receive no data but said that they did, in fact, discuss the figures with other high-level staff.

For this reason, in autumn 2010, the new CEO and newly appointed Medical Director redefined the hospital's strategic planning policy so that clinical processes would henceforth be based on measurable parameters. In this way the mass of data collected would be exploited in an analytical approach to decision-making, actively involving all tiers of Ferrara University Hospital management staff.
Table 4. Survey on the use of indicators by the top management

| Item | Ferrara University Hospital (FUH)* | Ferrara Local Healthcare Trust (FLHT)** | Total (N=22) |
| --- | --- | --- | --- |
| Are you regularly provided with clinical-risk and patient-safety data? |  |  | Yes: 12 / No: 10 |
| Have you ever discussed these figures with other hospital managers? |  |  | Yes: 8 / No: 14 |

* For FUH, the following data were available: documented falls, pressure ulcers, adverse drug reactions (some OUs), medical errors, surgical site infections, incident reports, complaints, litigations, inpatient mortality, neonatal mortality, perioperative mortality, management of labor, deep vein thrombosis and pulmonary thromboembolism following surgery, length of stay in the emergency department, patients leaving the emergency department before completion of treatment, cancellation of scheduled ambulatory procedures. Other available data were safety indicators from the trust's database.
** For FLHT, the following data were available: falls, pressure ulcers, lung infections, surgical site infections, deep venous thrombosis, incident reporting, RCA reports, complaints, litigations, Clostridium/Acinetobacter/Legionella infections.


In conformity to the Regional Accreditation scheme, it was decided to follow a bottom-up, rather than a top-down, model when defining hospital- and department-wide targets. Hence the current strategic policy of the Ferrara University Hospital is not only derived from regional performance targets, but is also based on analysis of local efficiency, quality and safety data.
Analysis and planning are now seen as parts of a single, cyclical process that feeds on the data gathered from periodical hospital performance evaluation as input for strategic decision-making. This new system of policy-making comprises four major steps (See Figure 1) with a view to the integration of clinical measures and accountability in every field of hospital management (financial, organizational and clinical performance indicators).

Step 1: Management Review


Analysis of the data and discussion by the
top-level hospital management represents

the keystone of the new policy-making strategy. Performance data are put together by the
technical staff and considered by the management, together with other issues raised from
above (regional government healthcare policy
guidelines and triennial healthcare plan) and
below (directorates and OUs). In particular,
the Strategic Control department is charged
with providing data on financial performance
(efficiency and cost-effectiveness), the appropriateness of services and prescriptions
provided, the complexity of the cases treated,
and a review of the medical records. In this
context appropriateness is taken to mean
that a particular intervention is efficacious
and indicated for the person who receives it
(Morosini & Perraro 2001, p.19). The Quality
Department provides feedback on the clinical
performance indicators, the results of the clinical
audits, and customer satisfaction questionnaires.
The Public Relations Department details any
complaints or suggestions received, the Risk
Manager any incident reports, nosocomial
infections or sentinel events, and the Nursing
Direction news of any falls or pressure sores.

Figure 1. New policy for evaluation and planning


Step 2: Setting Targets


The findings from the Management Review and
the regional healthcare regulations are incorporated into the individual directorate annual
goals, setting out their indicators, targets and
means of measurement. The monitoring staff
are identified by name, and information regarding the monitoring procedure, time-frame,
and publication of findings (intranet, indicator
database) is provided.
Goals are set for each of the 5 areas: financial performance, appropriateness, clinical performance, customer satisfaction, and safety. Each is weighted and linked to performance-related rewards, upon agreement between the
unions and the general management. Before
publication, the general management presents
the proposed targets to the heads of directorates,
who put them before their directorate committees, who may in turn suggest modifications.
Once any counter-motions have been collected,
final negotiations take place between the heads
of directorates and the general management.
Once the annual targets have been established
and approved, the goals become the reference
point for future measurement, evaluation and
improvement.

Step 3: Monitoring, Analysis and Corrective Action
Once the planning phase is over, the subsequent months are dedicated to the collection
and analysis of performance data, using this
as a basis for continual improvement. As part
of clinical performance data measurement and
monitoring, systematic and reliable means of
collecting it are set out, and the procedure to be
followed once non-conformity is detected is laid
down. Staff are also informed what they should
do if they encounter any problems during the
course of their work, with a view to using this
information to improve the service. Customer
satisfaction questionnaires, complaints, praise,
and suggestions are all fed back into the system.
Those responsible for collecting and collating all this information are, in the main, charged with this task by the general direction. Within each department, this function is performed by delegates of the Heads of Directorates, e.g., internal quality control and/or data monitoring staff.

Step 4: Feedback
At least one review meeting per year is held
within each Directorate (in conformity to the
Regional Accreditation scheme). These meetings are open to all external stakeholders, and
have the function of reporting on the balance
sheet and action plan to the principal stakeholders. The hospital management reports to
local politicians, University representatives,
and other associations, by means of the Social
Balance. In this way Ferrara University Hospital
can be held fully accountable, and integrate
stakeholder feedback into hospital policy along
with the measures of clinical, financial and
organizational performance monitoring.

COUNTING THE COST


Thirteen years from the inception of the clinical performance monitoring system, a total of 768 indicators are monitored on the Trust's database (509 process, 13 volume and 246 outcome) and 67 through IQIP. Setting up and maintaining the system has had a certain cost in terms of human resources, IT expenditure, training courses and conferences. In total, hundreds of seminars and 5 training workshops were held to familiarize the relevant staff with the internal auditing system, while the IQIP system involved organizing 5 training courses and 9 meetings for Italian users. Furthermore, Ferrara University Hospital held two international conferences on clinical performance indicators.
In terms of paperwork, 2 booklets and 2 protocols were printed for staff, as well as the annual Mission Statement, which is published every spring when the clinical performance data for the preceding year becomes available. To handle all this work and keep the system functional and up-to-date, we currently employ one manager (5 hours per week), one engineer (5 hours per week), and one statistician (5 hours per week). Software maintenance is contracted out.

REAPING THE REWARDS

Naturally, since we started monitoring clinical performance 13 years ago, there have been incidences of non-conformity in several of the 768 indicators. On these occasions it is accepted practice for the OU heads to seek the advice of the Quality Department, and to determine, using basic statistical tools (confidence intervals and run charts), whether these represent a problem and therefore need to be dealt with. On occasion more sophisticated analysis is required, and clinical auditing tools (Benjamin, 2008; Dixon, 2009) are therefore employed. Once a problem area has been identified, corrective action can be taken immediately and spontaneously by the department, or assigned as a performance target for the following year. For exemplification purposes we report here two recent occasions on which clinical indicators failed to meet the required targets, together with the action taken to remedy each situation (a sketch of the first-pass statistical screening follows the two examples):

1. The indicator "Correctness of Yellow Code Attribution in Triage", monitored by Emergency Room staff, was defined as "number of yellow codes attributed in triage and confirmed at discharge / number of yellow codes attributed in triage". Once the variation was identified, during 2010 and 2011 the OU implemented a thorough clinical audit to determine whether the National Triage Training Group (NTTG) 2010 guidelines were being adhered to. To improve the service in this respect, triage staff were given special training pursuant to the NTTG 2012/2013 guidelines, and, due to the specific nature of the non-conformity reported, a further investigation into the handling of chest pain in triage was undertaken. The beneficial effects of these interventions are evident from the data trend reported in Figure 2;
Figure 2. Correctness of yellow code attribution in triage


2. The indicator "Major Complications in Caesarean Section", monitored by the Obstetrics and Gynaecology OU, displayed a considerable rise with respect to the preceding years, an anomaly detected during their annual review (2011). In response, the OU Manager conducted a thorough analysis of each major complication case, discussing them with the medical staff involved. The decision was then taken to institute special training for the junior surgical staff, and Figure 3 shows how this action resulted in the values returning to normal.

Figure 3. Major complications in caesarean section
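As mentioned before the two examples, the first-pass screening combines a confidence interval against the threshold value with a run chart. Below is a minimal sketch of both checks for a proportion indicator such as the triage one; the formulas are standard, while the names, data and threshold in the usage comment are purely illustrative.

```python
import math
from statistics import median

def proportion_ci(num, den, z=1.96):
    """95% CI for a proportion indicator (normal approximation)."""
    p = num / den
    half = z * math.sqrt(p * (1 - p) / den)
    return p - half, p + half

def credibly_below(num, den, threshold):
    """Flag non-conformity only when the whole CI sits below the threshold."""
    return proportion_ci(num, den)[1] < threshold

def run_chart_shift(series, run_length=8):
    """Run-chart rule: run_length consecutive points on one side of the median."""
    m = median(series)
    longest = run = 0
    prev_side = None
    for x in series:
        side = None if x == m else x > m   # points on the median break the run
        run = run + 1 if side is not None and side == prev_side else int(side is not None)
        prev_side = side
        longest = max(longest, run)
    return longest >= run_length

# e.g., yellow codes confirmed at discharge / yellow codes attributed in triage:
# credibly_below(412, 520, threshold=0.85)   # illustrative numbers only
```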

REVIEWING INDICATOR QUALITY

Not all the performance indicators monitored by the Trust have been created in a rational sequence. Some date back to the pilot scheme of 2001, and may not be entirely up-to-date; some were created ad hoc, while others were taken from specialist registers, and an even greater number from regional think-tanks with a view to accreditation. Due to the fundamental role the indicators now play in the hospital's strategic policy-making, it was therefore deemed appropriate to critically review them, choosing a sample of 99 as a test case. To determine the quality of each indicator, the following criteria were analysed:



- Scientific criteria behind the choice of indicator and its target;
- Policy criteria linked to the indicator;
- Data collection methodology criteria;
- Statistical criteria used to determine the discriminatory value of the indicator.

These evaluation criteria were inspired by the 22 proposed by Lakhani et al. of the National Centre for Health Outcomes Development (NCHOD) on the basis of their meta-analysis of 16 studies aimed at assessing the quality of clinical indicators (Lakhani, Olearnik & Eayres, 2006). We also took inspiration from Rhew et al. (2001), who emphasized the importance of basing clinical indicators on scientific evidence, and from Morosini (2004), who stressed that denominators should be numerically consistent to prevent confidence intervals being too wide. With these 3 sources in mind, we drew up an indicator quality evaluation form (See Table 5), which we subsequently used to review a total of 99 clinical indicators, of which 50 were designed to measure processes and 49 outcomes.
As regards the scientific criteria, 33.3% of the indicators analysed were based on high-quality, up-to-date literature, whereas 41.4% were either not derived from a specified source, were not referable to any scientific data, or could not be evaluated due to paucity or poor clarity of information, and a further 15.2% cited the OU work group as the source. Indicators are generally monitored once annually, and measurement protocols are clearly defined in 78.9% of indicators. 67.7% of the indicators used suitable terminology in their description, and in 97% of cases the potential for change is in the hands of the professional staff (See Table 6).
Evaluation of statistical criteria included
calculation of the Confidence Interval (CI) from
indicator numerator and denominator data. This
analysis shows that for 60.6% of indicators, the
CI was narrow, and the estimated values are
therefore likely to be accurate. There were at
least four periodic measurements provided in
46% of cases, which allowed their trend with
respect to the standard to be measured over time.
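The paper does not say which CI formula was used, so the sketch below rests on two assumptions of ours: the Wilson score interval for a proportion, and a width cut-off of 6 percentage points separating "narrow" from "wide" intervals (the <=6 / >6 split visible in Table 6).

```python
import math

def wilson_ci(num, den, z=1.96):
    """Wilson score 95% CI for a proportion indicator."""
    p, z2 = num / den, z * z
    centre = (p + z2 / (2 * den)) / (1 + z2 / den)
    half = (z * math.sqrt(p * (1 - p) / den + z2 / (4 * den * den))
            / (1 + z2 / den))
    return centre - half, centre + half

def classify_ci(num, den, max_width=0.06):
    """Label an indicator's last measurement as narrow, wide or incalculable."""
    if den == 0 or num == 0:   # numerator of 0: assumed to match "incalculable"
        return "incalculable"
    lo, hi = wilson_ci(num, den)
    return "narrow" if hi - lo <= max_width else "wide"
```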
Overall, the analysis of the 99 indicators yielded useful information that enabled systematic evaluation of the quality of the indicators. This highlighted the need for action to bolster the scientific strength of the majority of indicators, i.e., to base them on evidence reported in the literature. As this criterion is fundamental to their quality, it is suggested that the poor-quality indicators and their rationales be reviewed to include easily consultable references from the literature. Indeed, although 68% of indicators were described in a clear and comprehensible fashion, 32% were not and therefore require clarification. Another issue that needs to be addressed is the number of indicators (18%) that have a numerator equal to 0, perhaps grounds for suspending their monitoring or increasing the monitoring period.

THE NATIONAL CONTEXT

As reported in a previous article (Wienand, 2010), there are three types of clinical indicators currently in use in Italy, namely:

- Locally devised (for improving internal accountability);
- Imposed by accreditation models (especially those pertaining to specific disciplines);
- Proposed by international quality control systems.

The strengths and weaknesses of these three indicator types are summarized in Table 7. Unfortunately, Italy seems to be expending the majority of its efforts trying to re-invent the wheel. Despite the great number of sound, reliable, well-documented clinical indicators available, tried, tested and validated by experts all over the world, these are rejected in favour of a home-made self-audit strategy deemed more suitable for addressing specific local and professional concerns. Complicating an already fragmented picture, the Italian government, like the rest of Europe, has begun a drive to develop its own mandatory indicator system to be applied free of charge.
Spearheading this drive in Italy is the
National Agency for Local Health Services
(Agenzia Nazionale per i servizi sanitari
regionali - AGENAS), which liaises directly
with the Italian Ministry of Health in matters
of public health strategy, and has been charged
with developing and coordinating a nationwide
system of clinical indicators. According to
AGENAS, their mission is to use the data collected in an annual national survey (Programma
Nazionale Esiti - PNE) to improve the quality of
healthcare provision by setting up clinical and
organisational auditing processes that should
become normal practice.


Table 5. Internal criteria for reviewing indicators

Name of Indicator:
OU/Dept./Direction:

Scientific Criteria
1. Is there a reference that demonstrates a correlation between quality of care and the process or outcome indicator in question? (State first author, title, journal, year of publication, first page number.)
   1a. What level is the evidence? (A = RCT, meta-analysis or systematic review of RCTs; B = intermediate; C = no evidence; D = contradictory evidence)
2. Are the specification terms used precise (e.g., units of measurement, clinical terminology)? (Yes/No)

Policy Criteria
3. What clinical or financial relevance does the indicator have? (High volume / High cost / High risk / Other)
   3a. What aspect of care does it investigate? (Appropriateness / Efficacy / Safety / Other)
   3b. Is there potential for change? (Yes/No)
   3c. Could the indicator provide perverse incentives? (Yes/No)

Methodological Criteria
4. Is the indicator measurable with the resources available? (Yes/No)
   4a. How long is the monitoring period? (Number of months)
   4b. Is the monitoring protocol clear? (Yes/No)

Statistical Criteria
5. What is the confidence interval of the last value measured? (Numerator / Denominator / Confidence interval)
   5a. Are control charts etc. used to check variability? (Yes/No)
   5b. If the indicator is an outcome measure, is risk stratification or adjustment performed? (Not an outcome measure / Risk stratification / Risk adjustment)
   5c. If the indicator is based on sampling, is the sample representative? (Not a sample measure / Sampled measure: total population, sample size)

Criteria groups according to Lakhani et al., 2006. Criteria 1, 1a, 2, 3 and 4 from Rhew et al., 2001; criteria 3b, 3c, 4a, 4b, 5a, 5b and 5c from Lakhani et al., 2006; criterion 5 from Morosini, 2004.


Table 6. Overview of in-house indicator quality (sample of 99 out of 768)

| Criterion | N | % |
| --- | --- | --- |
| Scientific evidence used in creation of indicators |  |  |
| Up-to-date, good quality evidence | 33 | 33.3 |
| Material from scientific associations | 5 | 5.1 |
| Benchmark from other facilities | 5 | 5.1 |
| Internally devised | 15 | 15.2 |
| Source not cited or of poor quality | 41 | 41.4 |
| Clarity of terms used |  |  |
| Clear and precise | 67 | 67.7 |
| Some unclear or imprecise terms | 32 | 32.3 |
| Potential for change |  |  |
| Present | 96 | 97.0 |
| Absent | 3 | 3.0 |
| Monitoring period |  |  |
| Four-monthly | 4 | 4.0 |
| Six-monthly | 26 | 26.3 |
| Annually | 69 | 69.7 |
| Clarity of measurement protocol |  |  |
| Yes | 78 | 78.9 |
| No | 21 | 21.1 |
| Confidence interval width for last recorded measurement |  |  |
| >6 | 21 | 21.2 |
| <=6 | 60 | 60.6 |
| Incalculable | 18 | 18.2 |

Currently the PNE system involves measuring 97 indicators (18 of them mortality indicators) using administrative data from all the public and private facilities that comprise the Italian National Health Service. The PNE assesses the following variables: short-term mortality, surgical procedures, waiting times, short-term readmission, complications after certain surgical procedures, and hospitalisation for various conditions. For more meaningful comparison, data are adjusted to take into account factors such as age, gender and chronic comorbidities, and their statistical significance is determined. PNE has focussed almost exclusively on hospital care, and publishes rankings on an annual basis.
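The kind of case-mix adjustment PNE performs can be illustrated with simple indirect standardization: each discharge receives an expected risk from a reference model stratified by age, gender and comorbidities, and a facility is then judged on observed versus expected events. This is a deliberately minimal sketch; PNE's actual statistical models are more elaborate.

```python
def standardized_ratio(discharges, reference_risk):
    """Indirect standardization of an outcome rate.

    discharges: list of (stratum, event) pairs, where stratum is e.g.
    (age_band, gender, comorbidity_class) and event is 0 or 1.
    reference_risk: expected event probability for each stratum,
    estimated on the whole reference population.
    Returns observed/expected; values above 1 mean more events
    than the facility's case mix would predict.
    """
    observed = sum(event for _, event in discharges)
    expected = sum(reference_risk[stratum] for stratum, _ in discharges)
    return observed / expected
```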

One drawback of this system is that the findings of the PNE are released after 15 months. Furthermore, like many other performance measurement systems, the PNE collates administrative data, which detail the patients' conditions and the treatment administered by the healthcare provider, classifying diseases according to the ICD-9-CM coding system and allocating them to diagnosis-related groups (DRG). However, the DRG system was set up with financial concerns in mind, and continues to evolve along these lines. Using such a system to evaluate clinical performance is therefore misguided, however appealing it may seem. Indeed, although the data is copious and freely available, seemingly lending itself to in-depth analysis like PNE, the literature is full of warnings against this type of approach (Iezzoni, 1997).

Table 7. Characteristics of clinical indicator types used in Italy

| Home-Made Indicators | Indicators for Accreditation | International Indicator Systems |
| --- | --- | --- |
| Bottom-up, shared w/ professionals | Developed by specialists/clinicians, shared w/ professional societies | Developed by clinicians and methodology experts |
| Context specific | Specific for accreditation model | Generic |
| Important for local policy, strategy | Important for regional policy | Single indicators may be locally important |
| Actionability | Mandatory, possible perverse incentives | Possible improvement actions |
| Rarely based on scientific evidence | Sometimes based on scientific evidence | Based on scientific evidence |
| Frequent problems about data sources, quality of data | Some problems about quality of data | Data collection may be expensive; reliability, validity granted |
| No comparison w/ others | Possible comparison w/ other facilities in the region | Comparison w/ many other facilities around the world |

Note: First presented at the international workshop "Indicators for improving healthcare quality and safety", Ferrara (Italy), May 2008.

Moreover, as aptly stated by Lilford and Pronovost (2010), ranking hospitals on the basis of mortality rates is a bad, though stubbornly persistent, idea, for the following three reasons:

1. It is by no means a given that a difference in clinical process quality translates into a statistically significant result;
2. As shown in the Harvard Malpractice Study, avoidable deaths account for 0.25% of hospital discharges, i.e., if the hospital death rate is 5%, only 1 in 20 of these deaths can be considered avoidable, and there is no rational reason for seeking out these cases and using them to compare hospitals;
3. Greater differences in performance are generally seen within a hospital (between departments and OUs) than between hospitals. Unlike large commercial enterprises, hospitals do not fail as a whole, but on occasion have poor performance in specific sectors.

Hence, it is no surprise that the correlation between hospital mortality rates and their quality measures is very low (Pitches, Mohammed & Lilford, 2007).

CONCLUSION

Looking to the Future

The next steps in the consolidation of the Ferrara University Hospital measurement and evaluation system are as follows.

Software Overhaul
Although the software in use has been modified and tweaked over the years, technological choices made at the inception of the project limit its potential for further development. Indeed, the system was set up when smartphones and tablets did not exist (at least not as we know them today), and long before HTML5 was created and standardized by the W3C, so the integration of such advances into our outmoded software remains problematic, if not impossible, restricting the usefulness of our web interface. With this in mind we are looking into the best way to update the existing platform in order to make use of the latest technology and improve the usability of the system. Some of the decisions already taken are:

- To adopt HTML5 and Responsive Design to update the website, so that it can adapt to the specifics of the viewing device through a purpose-designed framework such as Bootstrap (created by Twitter);
- To adopt ASP.Net MVC/Javascript/JQuery technology at both client and server level to streamline the new web pages, making them more compatible and more dynamic;
- To adopt HTML for e-mail communications from the system to increase their visual impact (they are currently text only), using a purpose-designed framework like Ink to ensure their compatibility with all devices.

Systematic Indicator Review

It is our intention to extend the review of the 99 indicators to cover the total of 768 that make up the system, working closely with the staff of the individual OUs and departments. The aim of this system-wide review is to replace any suboptimal indicators with ones that meet the quality criteria described above, something that should be possible thanks to the enormous growth in scientific evidence and data collection since we embarked on the project in 2001. Indeed, it is expected that reliable, tried-and-tested indicators that have been purposely devised by experts will be available for all healthcare disciplines.

As part of this process, we aim to strike the right balance between indicators of process, which are of greater interest to professional staff, and those of outcome, which are more relevant to the public and their representatives. We also aim to ensure that outcome measures are calculated on the basis of a sufficiently large case sample, applying risk-adjustment and stratification techniques to make the data more meaningful. Furthermore, wherever possible, we intend to use statistical techniques to analyse processes (control charts), thereby enabling the staff to check performance more rapidly than is currently possible.
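For the planned process analysis, a p-chart is the standard control chart for proportion indicators: a pooled centre line with 3-sigma limits that vary with each period's denominator, flagging special-cause variation as soon as a point falls outside. The sketch below rests on the usual binomial assumptions and is ours, not part of the hospital's system.

```python
import math

def p_chart(counts, sizes):
    """Centre line and per-period 3-sigma limits for a proportion (p-chart)."""
    p_bar = sum(counts) / sum(sizes)   # centre line pooled over all periods
    limits = []
    for n in sizes:
        sigma = math.sqrt(p_bar * (1 - p_bar) / n)
        limits.append((max(0.0, p_bar - 3 * sigma), min(1.0, p_bar + 3 * sigma)))
    return p_bar, limits

def out_of_control(counts, sizes):
    """Indices of the periods whose proportion falls outside its control limits."""
    _, limits = p_chart(counts, sizes)
    return [i for i, (c, n) in enumerate(zip(counts, sizes))
            if not (limits[i][0] <= c / n <= limits[i][1])]
```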
In order that our indicators are as meaningful as possible, it will be necessary to carefully
identify and disseminate the values we adopt
as standard, whether they originate from the
scientific literature or are borrowed from
other healthcare facilities recognized for their
excellence. Last but not least, we must also look
closely at the way communication is handled,
particularly if the intended recipient has little or
no professional knowledge. After all, the driving
force behind the project is accountability to our
patients and the public, so it is vital that they
clearly understand not only the importance of
clinical indicators, but also the meaning of any
non-conformity. In the Emilia-Romagna region,
where our hospital is located, outcomes are not
linked to payment, and there are as yet no plans
to do so. Nevertheless, with a view to providing
the best quality of service for our patients, we
are committed to proceeding as if they were.

ACKNOWLEDGMENT

The authors gratefully acknowledge financial support from the Emilia-Romagna Regional Healthcare Agency (Programmi per l'incentivazione alla modernizzazione, years 2004 and 2008).
We would also like to thank Press Ganey Associates, Inc. for their permission to publish the titles of the indicators cited in Table 2. All other tables in the paper are entirely the product of the authors, who therefore grant their permission for the publication of their contents.

REFERENCES
ACHS - The Australian Council on Healthcare Standards. (2013). Clinical indicator program. Retrieved
from http://www.achs.org.au/publications-resources/
clinical-indicator-program/
Benjamin, A. (2008). Audit: How to do it in practice. BMJ (Clinical Research Ed.), 336(7655),
12411245. doi:10.1136/bmj.39527.628322.AD
PMID:18511799

Copyright 2014, IGI Global. Copying or distributing in print or electronic forms without written permission of IGI Global is prohibited.

32 International Journal of Reliable and Quality E-Healthcare, 3(2), 15-35, April-June 2014

Collopy, B. T. (2000). Clinical indicators in accreditation: An effective stimulus to improve patient


care. International Journal for Quality in Health
Care, 12(3), 211216. doi:10.1093/intqhc/12.3.211
PMID:10894192
Dixon, N. (2009). Getting clinical audit right to
benefit patients (Fourth printing ed.). Romsey, UK:
Healthcare Quality Quest.
EFQM - European Foundation for Quality Management. (1999a). Determining excellence - Taking the
first steps: A questionnaire approach. (V 1.1 En ed.).
European Foundation for Quality Management.
EFQM - European Foundation for Quality Management. (1999b). EFQM excellence model - Public
and voluntary sector version (1st ed.). European
Foundation for Quality Management.
Governo della Repubblica Italiana. (1992). Decreto
legislativo. Riordino della disciplina in materia
sanitaria, a norma dellarticolo 1 della legge 23
ottobre 1992, n. 421. n. 502.
Governo della Repubblica Italiana. Decreto legislativo. Attuazione della legge 4 marzo. (2009). n. 15,
in materia di ottimizzazione della produttivit del
lavoro pubblico e di efficienza e trasparenza delle
pubbliche amministrazioni. n.150. (27-10-2009).
Grilli, R. (2001). Lo specchio dellassistenza. Il Sole
24 Ore Sanit management.
Iezzoni, L. I. (1997). Assessing quality using
administrative data. Annals of Internal Medicine,
127(8_Part_2), 666674. doi:10.7326/0003-4819127-8_Part_2-199710151-00048 PMID:9382378
JCI - Joint Commission International. (2014). Joint
commission international accreditation standards
for hospitals (5th ed.). Joint Commission Resources.
Kazandjian, V. A. (2004). Being safe is not different
from being right. KK Hospital Review Singapore,
7, 1517.
Kazandjian, V. A., Matthes, N., & Wicker, K. G.
(2003). Are performance indicators generic? The
international experience of the quality indicator project. Journal of Evaluation in Clinical Practice, 9(2),
265276. doi:10.1046/j.1365-2753.2003.00374.x
PMID:12787190
Kazandjian, V. A., & Wienand, U. (2008). Gli indicatori sono utili al miglioramento della qualit e della
sicurezza? Tendenze Nuove, nuova serie, 575-590.

Lakhani, A., Olearnik, H., & Eayres, D. (2006).


Evaluating the quality of clinical and health indicators. In Compendium of Clinical and Health Indicators Data Definitions and User Guide for Computer
Files (pp. 469475). London, UK: National Centre
for Health Outcomes Development.
Lilford, R. J., & Pronovost, P. J. (2010). Using hospital mortality rates to judge hospital performance: A bad idea that just won't go away. BMJ (Clinical Research Ed.), 340, 955-957. doi:10.1136/bmj.c2016 PMID:20406861

Mainz, J., & Bartels, P. (2006). Nationwide quality improvement - how are we doing and what can we do? International Journal for Quality in Health Care, 18(2), 79-80. doi:10.1093/intqhc/mzi099 PMID:16434508

Miller, M. R. (2005). Relationship between performance measurement and accreditation: Implications for quality of care and patient safety. American Journal of Medical Quality, 20(5), 239-252. doi:10.1177/1062860605277076 PMID:16221832

Morosini, P. (2004). Indicatori in valutazione e miglioramento della qualità professionale. Roma, Italy: Istituto Superiore di Sanità.

Morosini, P., & Perraro, F. (2001). Enciclopedia della Gestione di Qualità in Sanità. Torino: Centro Scientifico Editore.

Pitches, D. W., Mohammed, A. M., & Lilford, R. J. (2007). What is the empirical evidence that hospitals with higher risk-adjusted mortality rates provide poorer quality care? A systematic review of the literature. BMC Health Services Research, 7(1), 91. doi:10.1186/1472-6963-7-91 PMID:17584919

Presidente del Governo della Repubblica Italiana. (1997). Decreto del presidente della repubblica. Approvazione dell'atto di indirizzo e coordinamento alle regioni e alle province autonome di Trento e di Bolzano, in materia di requisiti strutturali, tecnologici ed organizzativi minimi per l'esercizio delle attività sanitarie da parte delle strutture pubbliche e private.

Press Ganey Associates. (2010). International quality indicator project (IQIP). Retrieved from http://www.internationalqip.com/Index.aspx
Pschibilla, C., & Matthes, N. (2005). Quality indicator project: Grundlage für ein wissenschaftlich fundiertes nationales und internationales Benchmarking. Hospital, 4, 2-3.


Regione Emilia Romagna. (1998). Legge Regionale Emilia Romagna. Norme in materia di autorizzazione e accreditamento delle strutture sanitarie pubbliche e private in attuazione del DPR 14 gennaio 1997, nonché di funzionamento di strutture pubbliche e private che svolgono attività socio-sanitaria e socio-assistenziale. n. 34. (12-10-1998).

Regione Emilia Romagna. (2004). Delibera Giunta Regionale Emilia Romagna. Applicazione della Legge regionale 34/98 in materia di autorizzazione e di accreditamento istituzionale delle strutture sanitarie e dei professionisti. Revoca dei precedenti provvedimenti. n. 327. (23-2-2004).

Rhew, D., Bidwell Goetz, M., & Shekelle, P. G. (2001). Evaluating quality indicators for patients with community-acquired pneumonia. Journal on Quality Improvement, 27, 575-590. PMID:11708038

Sheldon, T. (1998). Promoting health care quality: What role for performance indicators? Quality in Health Care, 7(Suppl.), S45-50.

Vallejo, P., Saura, R. M., Sunol, R., Kazandjian, V. A., Urena, V., & Mauri, J. (2006). A proposed adaptation of the EFQM fundamental concepts of excellence to health care based on the PATH framework. International Journal for Quality in Health Care, 18(5), 327-335. doi:10.1093/intqhc/mzl037 PMID:16984895

Veillard, J., Champagne, F., Klazinga, N., Kazandjian, V. A., Arah, O. A., & Guisset, A. L. (2005). A performance assessment framework for hospitals: The WHO regional office for Europe PATH project. International Journal for Quality in Health Care, 17(6), 487-496. doi:10.1093/intqhc/mzi072 PMID:16155049

Werner, R. M., & Asch, D. A. (2007). Clinical concerns about clinical performance measurement. Annals of Family Medicine, 5(2), 159-163. doi:10.1370/afm.645 PMID:17389541

Wienand, U. (2008). Gli indicatori sono utili al miglioramento della qualità e della sicurezza? Paper presented at the International Workshop "Indicators for improving healthcare quality and safety", Ferrara, Italy.

Wienand, U. (2010). La sostenibilità degli indicatori di performance clinica. QA, 20, 161-165.

Wienand, U., Adamo, C., Blancato, C., Favero, L., Taglioni, M., Mon, E., et al. (2008). IQIP: l'International Quality Indicator Project negli ospedali italiani. L'Ospedale, 14-19.

Wollersheim, H., Hermens, R., Hulscher, M., Braspenning, J., Ouwens, M., Schouten, J., et al. (2007). Clinical indicators: Development and applications. The Netherlands Journal of Medicine, 65, 15-22. PMID:17293635

Ulrich Wienand, Head of the Research, Innovation, Quality and Accreditation Office at the Ferrara University Hospital since 2001, Lecturer in Healthcare Quality Assessment on the University of Ferrara Master's degree course in Nursing and Obstetric Science since 2006, serving member of the Ferrara Ethics Committee since 2006, and Vice President of the same since 2014. Degree in Medicine from the University of Ferrara, and degree and PhD in Psychology from the Berlin Freie Universität. Italian National Coordinator of the International Quality Indicator Project from 2005 to 2014. Scientific Director of the project "The role of auditing in identifying research priorities", an advanced training course for clinical-audit and assessment-research facilitators, financed by the Emilia-Romagna Regional Council. Author of 108 publications, many of which focus on clinical performance assessment.


Gabriele Rinaldi, Director-General of Ferrara University Hospital since 2010, serving member of the Regional Research Programme Steering Committee and of the Emilia-Romagna Regional Accreditation Standards Evaluation Group. Degree in Medicine from the University of Modena, postgraduate specializations in Haematology and in Biochemistry and Clinical Chemistry at the University of Modena, and postgraduate qualification in Executive Management and Directorship of Healthcare Facilities. Director of the Pesaro Hospital Analysis Laboratory from 1999 to 2006, Head of the Directorate from 2001 to 2006, Medical Director at Siena and Pesaro, then Director-General of Pesaro University Hospital from 2007 to 2010. Lecturer at LUISS Management School, Rome, on Healthcare Management specialization courses from 1996 to 2001, and in Healthcare Management and Organization for various Italian healthcare facilities and the Sicily Regional Council. Lecturer on training courses for Accreditation Surveyors for the Emilia-Romagna and Veneto regions. Author of over 80 publications and 30 presentations at congresses and conventions.

Gloria Gianesini, Clinical Nurse at Ferrara University Hospital since 1996, charged by the Director of Nursing with Good Practice Implementation since 2011. Lecturer in Research Methodology on the University of Ferrara Master's degree course in Nursing and Obstetric Science since 2011, and serving member of the Ferrara Ethics Committee since 2014. Bachelor's degree in Nursing and Master's degree in Nursing and Obstetric Science from the University of Ferrara, and postgraduate qualification in Evidence-Based Practice and Clinical Healthcare Research Methodology from the University of Bologna. Awarded the title of Clinical Audit Facilitator in 2011, and currently studying for the Level II postgraduate qualification in Clinical Research and Epidemiology (focus on monitoring, quality and statistics) at the University of Ferrara Institute for Higher Studies.
Anna Ferrozzi, Collaborator at the Research, Innovation, Quality and Accreditation Office of Ferrara University Hospital since 2009, responsible in particular for the management of the Clinical Performance Indicator Database. Degree in Management Engineering and license to practice the engineering profession from the University of Bologna Engineering Faculty. Quality Management Systems Auditor (ISO 9001). Consultant in quality systems management and development, lending support to hospital departments and operating units in internal document management; company planning and assessment consultant, helping to define specific quality targets.

Luca Poretti, Founder and CEO of XPSoft.it since 2001, and founder and CTO of Qualitando since 2012. Degree in Computer Science from Bologna University. Experience in team management and software development, with real-world programming experience in different languages and methodologies (Agile, Extreme Programming, TDD, Scrum), from low level (C on embedded boards) to high level (C#). Co-author of the technical report "Scheduling Real-Time Tasks: A Performance Study" (UBLCS 93-10).


Giorgia Valpiani, Statistician and collaborator at the Research, Innovation, Quality and Accreditation Office of the Ferrara University Hospital since 2011, handling healthcare quality improvement, statistical process control, clinical auditing and clinical pathways analysis tools. Bachelor's degree in Statistics from Bologna University, Master's degree in Biostatistics from the Universities of Bologna and Florence, and PhD in General Medical and Services Science from Bologna University. Lecturer in Applied Statistics and Evidence-Based Practice on the University of Ferrara Therapist degree course. Worked on drug use and epidemiology for several years at a contract research organisation. Author or co-author of more than 20 articles in peer-reviewed international journals, as well as several book chapters.

Adriano Verzola, Head of the Performance Analysis and Programming Office of Ferrara University Hospital since 2010, coordinating management programming and control functions. Degree in Medicine, postgraduate specializations in Nephrology and in Preventative Medicine and Hygiene, and postgraduate qualification in Organization Research in Healthcare Facilities from the University of Ferrara. Since 2003 has worked for the Programming, Evaluation and Strategic Control Service at the Ferrara University Hospital. Lecturer, researcher and thesis supervisor for Bachelor's, Master's and Level II postgraduate qualification courses at the University of Ferrara. Author or co-author of 51 national and international publications.
