
Benchmarking in Dutch healthcare
Towards an excellent organisation

Robbert-Jan Poerstamper
Anneke van Mourik - van Herk
Aafke Veltman

© PricewaterhouseCoopers, Amsterdam 2007


Preface

What is the added value of benchmarking? What makes benchmarking a success, and what does the future hold? These are just a few of the many questions that the HEAD Association (the Dutch Association of Finance Managers in Healthcare) raised when we discussed a publication on benchmarking as part of our sponsorship agreement.
Many HEAD Association members are experienced benchmarkers, and their
ideas for improvement and critical comments have helped us fine-tune our
benchmarking model over the years. We highly value the important lessons we
draw from their feedback, lessons which remind us that it is always possible to
make benchmarking better: more inspiring, more instructive and less taxing
for its participants.
We would like to thank everyone who provided us with material and who took
the trouble to read and comment on draft versions of this report. We're
particularly grateful for the invaluable feedback from our colleagues Jane
Duncan and Maura Kelly at PricewaterhouseCoopers in Ireland. A special word
of thanks is also due to the members of the steering committee and the
sounding-board group who helped to create this publication and whose names
you will find in our first appendix. Their input has helped to make this a
co-production of many interested parties.
This publication also provides an excellent opportunity to thank the very many people who have enabled us to engage in the benchmarking process: first of all our
customers, of course - initially the Ministry of Health, Welfare and Sport and
later also industry associations ActiZ and VGN and the care providers
themselves. And let's not forget all those people who have helped us improve
the quality of our benchmarking by participating in sounding-board groups or
by giving us their assessments. But most of all we pay tribute to the hundreds of
care organisations and many hundreds of thousands of employees and clients
who have provided data and sent in questionnaires. Thanks to them,
benchmarking in Dutch healthcare has become what it is today: a valuable
management tool used by increasing numbers of healthcare providers.
We'd also like to thank the team of translators and editors - Anita Graafland,
Willemien Kneppelhout and Tom Scott - who have worked so hard to ensure
that this English language version is not only accurate but also, we hope, a
pleasure to read.
The authors
Contents

Preface
Introduction
1 Why benchmark?
1.1 Rationale for benchmarking
1.2 Positioning
1.3 Learning and improving
1.4 Relationships between performances
1.5 Transparency and profile
1.6 Information for industry associations
1.7 Benchmarking and accountability
1.8 Reasons for not benchmarking
1.9 In conclusion
2 Benchmarking: comparing and improving
2.1 Our definition of benchmarking
2.2 Other definitions of benchmarking
2.3 Benchmarking and Total Quality Management
2.4 Definitions: differences and similarities
2.5 History of benchmarking
2.6 Benchmarking: increasingly embedded
2.7 Benchmarking as necessity
3 How to make benchmarking a success
3.1 When is benchmarking an appropriate tool?
3.2 Key success factor 1: Optimise learning
3.3 Key success factor 2: The benchmark model should be broadly based
3.4 Key success factor 3: A multidimensional approach
3.5 Key success factor 4: High-quality tools
3.6 Key success factor 5: Do not leave everything to external consultants
3.7 Key success factor 6: Aligning benchmark to regular records
3.8 Key success factor 7: Sensitive data handling
3.9 Key success factor 8: No compulsory benchmarking
3.10 Key success factor 9: Strength through repetition
4 Different types of benchmarking
4.1 Classification criteria
4.2 Classification by benchmarking objective
4.3 Classification by what is being measured
4.4 Classification by reference group: internal or external benchmarking
4.5 Classification by level of organisation
4.6 Classification by use of normative standards
4.7 Classification by research process
4.8 Profile of a healthcare benchmark model
5 Benchmarking model for healthcare benchmarks
5.1 Input and strategic themes
5.2 Building blocks of benchmark surveys
5.3 The financial building block
5.5 Quality of care
5.6 Quality of the job
5.7 Social responsibility
5.8 Relationship between building blocks
5.9 Best practices
5.10 Explaining performance
5.11 Innovation
5.12 Reporting results
5.13 Benchmark strategic management information
6 The step-by-step benchmarking process
6.1 Benchmarking phases in the public sector
6.2 Keehley's step-by-step plan
6.3 A phased approach to healthcare benchmarking
7 Healthcare benchmark: notable features
7.1 Nursing, care and home care benchmark
7.2 Child healthcare benchmark
7.3 Healthcare administration agency benchmark
7.4 Benchmarking care for the disabled
7.5 Partial benchmarks in mental healthcare
7.6 Benchmarking the healthcare chain
7.7 Benchmarking Dutch hospitals
8 Innovations in benchmarking
8.1 Towards performance excellence
8.2 Excellence and innovation
8.3 Benchmarking outside the box
8.4 More research into cost-to-reward ratios
8.5 More benchmark partner involvement
8.6 More dynamic reporting
8.7 Continuous benchmarking
8.8 Simplified data supply
8.9 Introduction of XBRL

Appendices
A Steering committee and sounding-board group
B Bibliography
C Benchmark studies
D Dutch healthcare abbreviations and acronyms
E Endnotes
Introduction

'Write a report on benchmarking,' the HEAD Association instructed PricewaterhouseCoopers. 'Benchmarking is a clear trend in Dutch healthcare and there is a demand for more background information.' Under our sponsorship agreement, PricewaterhouseCoopers publishes a report on a specific topic every year. In 2005 the spotlight was on social responsibility; for 2006 the focus is on benchmarking, selected for its immediate relevance to many healthcare providers in the Netherlands.

Our target readership includes – aside from HEAD Association members, of course – anyone who for policy-making or operational reasons wishes to engage in benchmarking, e.g. directors of healthcare organisations and quality managers. As our key target readership is in the healthcare sector, we have assumed that readers will be familiar with a number of key healthcare terms, although we have striven to explain typically Dutch phenomena, acronyms and abbreviations to our English readers. That said, we believe this report might also be of interest to people in other sectors.

This report aims to enable its readers to make informed decisions about benchmarking: Is it worth our while to benchmark? How do we achieve the maximum possible return?

Benchmarking is the process of systematically comparing performance as a starting point for improvement, and involves collecting and reporting on data from different organisations and organisational units. All participants can compare their outcomes with those of others, and in particular with those of their best-in-class peers. This helps to identify areas for improvement and appropriate action.

The report focuses on the benchmarks that PricewaterhouseCoopers has conducted with other consultancies and agencies in the Dutch healthcare and related sectors over the past decade. It is not, and emphatically does not purport to be, an academic study. That said, real-world experience is put in the context of the literature on the subject. For the purposes of this report, then, we have carried out extensive research into the literature of benchmarking. The report's focus on benchmarks in which we have been involved ourselves reflects the fact that we're familiar with all their ins and outs, and are thus able to describe what went well, what did not and what we ended up changing.

Our benchmarks – which we refer to in this report as healthcare benchmarks – comprise a series of comprehensive projects in nursing, care and home care, healthcare administration agencies and mental healthcare, while we also include a few benchmarks on sub-areas such as treasury and invoicing. Our large-scale benchmark model has also been applied in the Dutch vocational education and training and housing corporation sectors.

Appendix C presents a comprehensive review of the benchmark studies and those who commissioned them, detailing the number of participants and the names of consultants and agencies we have worked with. PricewaterhouseCoopers has been responsible for the financial building block and for overall programme management in all these benchmarks. The Ministry of Health, Welfare and Sport and the industry associations initially acted as co-sponsors of the healthcare benchmarks, but since 2003 the industry associations have been their sole sponsors.
1 Why benchmark?

This section describes the benefits of benchmarking for both benchmark participants and any other parties involved, drawing on research literature and our own and others' experience with healthcare benchmarks.

The following sections will review varying definitions of benchmarking, but to aid understanding we present our working definition here:

Benchmarking is the process of systematically comparing performance as a starting point for improvement.

A benchmark allows organisations to gauge their own position, creates a learning curve to help improve performance and helps boost transparency, profile and image.

1.1 Rationale for benchmarking

Positioning:
'We want to see how we're doing compared with others.'
'We want confirmation that we're doing better/worse than others.'
'We don't want to fall below the average.'
'We want to put our performance in perspective.'
'We want to set target standards.'
'We want to identify areas of particular expertise on which to focus.'

Learning and improving:
'We want feedback on and insight into areas in need of improvement.'
'We want to use the benchmark to set priorities.'
'We want to learn from best practice.'
'We want to broaden our view.'
'We want to share key success factors.'
'We want to improve the management of our organisation.'

Transparency:
'We want to communicate our performance to our clients.'
'We want to present a clear profile to the outside world.'
'We want to be externally accountable.'
'We want to boost the industry's image.'
'We want to supply management information to our industry association.'

1.2 Positioning
Benchmarking allows you to compare your organisation's performance with that of other benchmark participants and provides insight into where you stand relative to other organisations – e.g. do you rank among the leaders or the laggards? Benchmarking helps to broaden your perspective and to make your organisation less inward-looking.

Figure 1.1 gives an example of the kind of information that might arise from a
benchmarking exercise. It is taken from a report submitted by a participant in a
home care benchmark and in this instance relates to client assessment of the
care provider’s accessibility. The blue triangles indicate the scores of the
participating organisations, with the emanating lines demarcating the
confidence intervals. The figure’s horizontal line captures the average score. As
the diamond-shaped symbol shows, the relevant healthcare provider clearly
lags the average.

[Figure omitted: plot of client-assessment scores (scale 6.5 to 9.5) per participating healthcare provider, with confidence intervals; average score Z-org 2004: 8.2; your score 2004: 7.5]

Figure 1.1 Example of benchmark information: accessibility of the healthcare provider
Source: 2004 home care benchmark survey Benchmark thuiszorg

Benchmark participants will be presented with their scores on all issues in a similar way, with a number of aggregate scores also provided. Participants can thus identify precisely where their clients are more or less satisfied than those of other care providers, how big the gap is and how they rank in competitive league tables.
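The reading logic behind a chart like Figure 1.1 can be sketched in a few lines of code. This is purely an illustration of the idea, not the methodology actually used in the benchmark; the 95 per cent interval and the classification rule are our own assumptions:

```python
import math

def confidence_interval(scores, z=1.96):
    """Mean and an approximate 95% confidence interval for one
    provider's client-assessment scores."""
    n = len(scores)
    mean = sum(scores) / n
    # Sample variance, then the half-width of the interval around the mean.
    var = sum((s - mean) ** 2 for s in scores) / (n - 1)
    half_width = z * math.sqrt(var / n)
    return mean, mean - half_width, mean + half_width

def position(provider_scores, benchmark_average):
    """Classify a provider against the benchmark average, in the spirit
    of Figure 1.1: it 'lags' only if its whole interval sits below the
    average, and 'leads' only if the whole interval sits above it."""
    _, low, high = confidence_interval(provider_scores)
    if high < benchmark_average:
        return "lags the average"
    if low > benchmark_average:
        return "leads the average"
    return "around the average"
```

Fed a provider whose scores hover around 7.5 against the 8.2 average from Figure 1.1, this sketch reports that the provider lags the average, matching the reading given in the text.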

Our second example was taken from the benchmark study on nursing and care
homes, the Benchmark verpleeg- en verzorgingshuizen. Table 1.1 provides a review of
care provided per client in care homes, allowing care providers to compare their
performance with both the average and the best-performing organisations.

Table 1.1 Example of benchmark information: time spent on care in care homes in minutes per client per day

Time                     Your organisation    Average    Best practice
Direct client-facing     77.36                 68.64     108.00
Indirect client-facing   10.45                  9.48      17.72
Non-client-facing        23.74                 21.90      28.37
Total                    111.55               100.02     154.09

Source: 2004/2005 nursing and care home benchmark study Benchmark verpleeg- en verzorgingshuizen
" Benchmarking in Dutch healthcare

1.3 Learning and improving

A benchmark typically shows up any areas for learning. Providing comprehensive insight into your performance, it allows you to identify any areas in need of improvement and actions that need to be taken. Note that benchmarking is always a means and never an end in itself, and that benchmark outcomes should always be tested against your own vision and policies. If you have opted for a specific make-up of your workforce, for instance, benchmark outcomes in this area are bound to deviate from those recorded for other participants and need not be a reason for change.

Figure 1.2 provides an example of possible areas for improvement according to clients in the Dutch home care industry.

Top 10 areas for improvement according to your clients

Organisation:
• Annual evaluation meeting 25.2%
• Fewer personnel changes 24.8%
• Improved accessibility of manager by telephone 15.8%
• Improved backup for carers in case of illness etc. 15.4%
• Greater account taken of client wishes 11.8%
• Reduced waiting period for start to home care 11.3%
• More convenience services + supplementary care 11.3%
• Greater telephone availability 10.9%
• More oral information 10.3%

Provision of care:
• Greater focus on quality of life 7.8%

Figure 1.2 Example of benchmark information: areas for improvement in home care, as cited by clients (in percentages of clients citing relevant areas)
Source: 2004 home care benchmark Benchmark thuiszorg

If previous benchmarking has been carried out, a benchmark will also produce
a comparison over time. Figure 1.3, for instance, shows workforce assessments
of their working conditions. A ‘traffic-light’ system highlights performances
and immediately shows up areas where improvements have – or have not –
been made.

[Figure omitted: traffic-light overview comparing 2002 and 2004 workforce scores on overall score, response, energy boosters, work stressors and wellbeing]

Figure 1.3 Example of benchmark information: comparison of workforce assessments 2002 and 2004
Source: 2004 home care benchmark Benchmark thuiszorg

1.4 Relationships between performances

A benchmark also provides insight into relationships between performances. The 2004 home care benchmark, for example, showed a connection between better client assessments and higher spending on management, but also that this relationship disappeared when management costs exceeded around 6 per cent of the overall spend. The same benchmark revealed positive work ratings from staff in organisations where managers applied effective management and had their business operations in order. The 2004/2005 home care study suggested that the smaller the span of middle management control, the more positive the clients were. Findings like these help participants weigh benchmark outcomes against their implications for other scores. A lesson to be drawn from the first example could be not to cut down on management too rigorously, as this might jeopardise quality.
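The kind of threshold effect described above (a relationship that holds up to around 6 per cent of spend and then disappears) can be illustrated by computing a correlation on either side of the threshold. The figures below are invented purely for illustration; they are not benchmark data:

```python
def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Invented data: management spend (% of total) against client assessment.
spend  = [2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0, 9.0]
scores = [7.0, 7.3, 7.6, 7.9, 8.1, 8.1, 8.0, 8.1]

# Correlation below and above the 6 per cent threshold.
r_below = pearson(spend[:5], scores[:5])
r_above = pearson(spend[5:], scores[5:])
```

On these invented numbers the correlation is strong below the threshold and vanishes above it, mirroring the pattern the 2004 benchmark found.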

1.5 Transparency and profile

Care providers use benchmark outcomes in communicating with stakeholders to improve their profiles and promote transparency. They do so because they want to, but stakeholders also insist that they do. Benchmark outcomes feature on the agendas of internal stakeholders such as client councils, works councils and supervisory boards. External stakeholders such as regional client organisations and financial backers are equally interested.

1.6 Information for industry associations

Aside from outcomes for individual care providers, a benchmark will always produce outcomes that capture the state of play in the industry, or at the very least among the benchmark participants. These serve as input for industry associations when promoting the interests of their members. Figure 1.4 has an example of industry information.

[Figure omitted: chart of total costs per child per year (scale 0 to 350 euros); average total costs per child: € 267]

Figure 1.4 Example of industry information: breakdown of costs per child per year, child healthcare (JGZ), 0-4 years
Source: 2005 JGZ financial benchmark

Industry associations have been known to use benchmark-derived financial data when negotiating pricing and additional resources. Sometimes this backfires: home care providers in the Netherlands at one point faced retrenchments because their benchmark had shown up opportunities for efficiency improvement. But industry associations also use their benchmarks

to underline their willingness to be transparent and to show that they make no secret of their strengths and weaknesses. By demonstrating in this way that an industry's members are prepared to self-reflect and work on their performance, benchmarking contributes to that industry's positive image.

Sometimes, benchmarking serves to prevent the government or other national bodies from launching an investigation. For example, when the Netherlands Association of Vocational Education and Training Colleges, or MBO Raad, revealed that it had completed its first benchmarking study, it was not just demonstrating its commitment to benchmarking, it was also telling the government there was no need to press ahead with the information-seeking exercise the latter had proposed.

Healthcare benchmarks may thus help to instil and increase trust in the health sector, allowing governments, regulators and financial backers to adopt a less interventionist approach. The Raad voor de Volksgezondheid en Zorg (RVZ) [1], the Dutch Council for Public Health and Care, has found that benchmarking changes the nature and intensity of the relationship between government, market and private enterprise.

1.7 Benchmarking and accountability

Clients are looking for information to help them pick the right care provider. Insurers want to see proof that the organisation meets specific conditions – often related to quality of care and financial health – before agreeing contracts. The first bank asking about benchmark outcomes before extending a loan has been spotted; anecdotal evidence has supervisory boards using benchmarks to set targets for management; and benchmark outcomes are already being used in official accountability statements to governments, regulators and independent agencies.

Obviously, the line between communication and accountability is fuzzy. And yet there is a fundamental difference between a benchmark used for accountability purposes and a benchmark used to help improve performance and encourage mutual learning. If used for the latter purpose, a benchmark requires a safe environment guaranteeing anonymous results or exclusive outcome-sharing with a self-selected group of peers. Only when participants are not judged on the basis of their results will they be prepared to openly discuss less brilliant outcomes.

If benchmark results are used to hold them to account, organisations could shy
away from proper use of benchmarks, some argue. Organisations would hold
back information or make things look better than they really are. They would
display strategic behaviour and thus totally disrupt any learning curve.

Nonsense, others say. Any self-respecting organisation will compare and report without reservations. Any organisation that does not do so is not really prepared to learn and will go on the defensive if it does not like the scores. In its report Presteren door excelleren [2] ('Performing by excelling'), the Dutch government explicitly linked performance improvement and transparency/accountability, arguing that benchmarking should serve both purposes.

Yet others feel a distinction should be made between competitive industries and sectors where competition is less of a feature. Competitive issues are argued to be less suitable for benchmarking. Drawing on an example of two companies that lived by the adage that competitors do not tell each other anything, and that proved singularly unsuccessful as a result, Watson [3] questions whether, at the end of the day, a competitive stance is the best way to go. In fact, he finds that competing companies are increasingly tackling shared problems together and openly debating them, improving their market environment in the process. Watson even feels that benchmarking represents a fundamental shift in thinking about competition. In the long term he sees little gain in acting as competitors only.

The authors of Benchmarking in de publieke sector [4] ('Benchmarking in the public sector') would seem to consider learning and being held accountable a difference of degree: 'A choice can be made for a broader or a narrower perspective. A broader perspective implies measuring and improving, with the learning curve a vital ingredient for benchmarking organisations. For the rather more limited purpose of accountability, many public organisations can stick to benchmarking in its narrowest sense: comparison with a benchmark as a means of determining relative performance.'

Healthcare benchmarks span the whole range of these views. To an extent, one
key factor will be whether the benchmark in the relevant sector is still
developing. If it is, organisations may wonder if its outcomes are sufficiently
valid and reliable to be used for accountability purposes.

We take the view that learning and accountability are fundamentally different
goals. That said, at least some of the information disclosed is exactly the same.
As accountability is required anyway, whether they benchmark or not,
healthcare providers had best make sure that the required data are
streamlined as much as is feasible and that definitions are harmonised. If they
do not, benchmark participants are in danger of having to disclose a
completely different set of data for accountability purposes – not exactly an
encouraging scenario, and one that would put a double burden on healthcare
providers.

The best way to go is obviously to collect the same data for both benchmarking
and accountability purposes wherever possible. Aggregation levels will
typically differ, with disclosures to regulators aggregated at high levels – e.g. at
the level of the organisation – while benchmarking requires lower levels of
aggregation as its outcomes are intended to feed into actions for improvement.
A basic set of data thus emerges for use towards both learning and
accountability, supplemented where applicable with data used for one of these
purposes only.
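The idea of one basic data set serving both purposes can be sketched as follows. The records and field names are entirely made up for illustration; the point is only that the benchmark view keeps the low aggregation level while the accountability view rolls everything up to organisation level:

```python
# Invented unit-level records; in practice these would come from the
# provider's regular administration.
records = [
    {"unit": "home care", "minutes_per_client": 112, "cost": 95.0},
    {"unit": "nursing",   "minutes_per_client": 154, "cost": 140.0},
    {"unit": "care home", "minutes_per_client": 100, "cost": 88.0},
]

def benchmark_view(records):
    """Low aggregation level: per-unit figures that can feed
    concrete actions for improvement."""
    return {r["unit"]: r["minutes_per_client"] for r in records}

def accountability_view(records):
    """High aggregation level: a single organisation-wide figure
    suitable for disclosure to a regulator."""
    return {"total_cost": sum(r["cost"] for r in records)}
```

Both views draw on the same underlying records, so participants are not burdened with supplying a second, separately defined set of data.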

Whatever the approach, both benchmarking and accountability data should basically be the same as the data needed for healthcare providers' internal strategic management. This makes for maximum alignment and keeps the burden on the organisation to a minimum.

By making a change to its annual healthcare report (accountability), the Vereniging Gehandicaptenzorg Nederland (VGN, Association for Care of the Disabled in the Netherlands) has ensured that a key quality indicator in the benchmark now also features in its annual disclosures. ActiZ (the Dutch association for nursing, care and home care) and the Inspectie voor de gezondheidszorg (IGZ, the Dutch Healthcare Inspectorate) apply standards of responsible healthcare for both benchmarking and accountability purposes.

1.8 Reasons for not benchmarking

This section has so far only discussed the potential benefits of benchmarking. But what reasons could organisations have not to participate in benchmark studies?

Doubts as to its use:
'We use different methods that produce comparable results.'
'We reckon we're doing quite well.'
'We already know our weaknesses.'
'Our organisation does not compare with others.'
'We don't like washing our dirty linen in public.'

Practical impediments:
'We feel it's too much work.'
'We think the costs of benchmark participation are too high.'
'We've just come out of a restructuring process.'
'We've just come out of a merger.'

Some organisations have reservations about the use of benchmarking, while others see practical obstacles. Their doubts and concerns may be quite legitimate. Organisations aware of the areas in need of improvement have little more to expect from a benchmark. And not every juncture is the right time for benchmarking: the situation after a restructuring or merger is often not the most representative time at which to gauge how an organisation is doing – albeit that some will want to benchmark precisely at this juncture to determine the baseline situation. It is also true that benchmarking requires investment: not just in the cost of participation but also in tackling the areas in need of improvement. And, on very rare occasions, an organisation will indeed be so unique that no others could offer any points for learning. But sometimes there is something else lurking in the shadows: an organisation's unwillingness to change.

1.9 In conclusion
It is our experience that benchmarking can produce many rewards for
organisations, with gauging their position and identifying areas for
improvement being the most basic. Transparency, image improvement and
input towards policy-making are other benefits. Accountability and regulatory
disclosures are fundamentally different from learning and improving, but
aligning data sets is vital if organisations are not to face a double burden.
2 Benchmarking: comparing and improving

This second section sets out the various definitions of benchmarking found in
the literature. ‘Comparing’, ‘learning’ and ‘improving’ typically crop up in
almost all of them. The section also touches on the relationship between
benchmarking and Total Quality Management and describes how
benchmarking has developed into a management tool that is widely used
across the world and has now also made inroads into the public and healthcare
sectors.

2.1 Our definition of benchmarking

The working definition we presented in the introductory section – benchmarking as the process of systematically comparing performance as a starting point for improvement – in fact derives from a rather more extended definition that we have arrived at in the course of our benchmarking research and surveys. This reads as follows:

Benchmarking is a continuous and systematic process for generating strategic management information by equally measuring and comparing both the efficiency and quality of performance, with the express purpose of identifying starting points for the improvement of an organisation's own performance by adopting best practices.

In addition to comparing and improving, our broader definition includes several other elements that are key: continuous, strategic management information, a balance between efficiency and quality and, above all, best practice. All these elements will feature in this section, but here is just a taste of what is in store. Continuous, for one, implies that benchmarking is not a one-off exercise: organisations taking the trouble to make improvements will want to see the effects of their efforts over time. In our view, continuous also refers to a system that allows organisations to start their benchmarking process at any given time and retrieve their comparative data from a database.

Strategic management information implies that benchmark data should provide clear information that allows management to do what it is there to do: manage the organisation. And striking a balance between efficiency and quality in our book means that a benchmark should always cover multiple dimensions. After all, an extremely high-quality healthcare provider might not present such a good picture if its quality came at such a steep price that the organisation's continuity was at risk. Best practice of course implies organisations that serve as examples to others. Implicitly, this also means that benchmarking is more than a simple comparison with the average. Benchmarking also means aiming higher.

2.2 Other definitions of benchmarking

How do others define benchmarking?

Spendolini [5] defines benchmarking as a 'continuous, systematic process for evaluating products, services, and work processes of organisations that are recognized as representing best practices for the purpose of organisational improvement'. Benchmarking involves continuous measuring of trends and developments on the basis of a series of activities, with learning not just the product of measuring (quantitative) but also involving investigating (qualitative). Benchmarking is not restricted to specific types of activities or organisations, and preliminary research should help narrow down the list of suitable benchmark partners, i.e. those that excel in the product or process to be studied. Learning should be action-oriented, that is to say, it should lead somewhere.

Camp [6] defines benchmarking as systematically investigating the performance and underlying processes and practices of one or more leading reference organisations in a particular field, and comparing one's own performances with these best practices, with the aim of identifying one's own position and improving one's own performance.

Several authors warn of the dangers of indiscriminate imitation of best practices. Edwards Deming [7], for one, reckons it is dangerous to copy and argues that people should understand the background to what they want to do. Watson [8] feels the same. Their advice? 'Adapt, don't adopt.'

Westinghouse [9] calls benchmarking a 'continuous search for and application of significantly better practices that leads to superior competitive performance'. Note that the Westinghouse definition is rather more ambitious than that of most others. We will return to the use of benchmarking to achieve superior performance in Section 8.

In Benchmarking in de publieke sector [10] the authors describe benchmarking as creating insight into the relative performance of organisations within a group through comparison with a benchmark organisation. However, they also observe that organisations typically aim for more, with benchmarking also expected to contribute to improving the way institutions or companies function. Performance should not just be measured and compared, but where possible also improved. Benchmarking in the public sector is primarily seen as a tool to measure and enhance effectiveness – i.e. are the right things being done? – and efficiency – are things being done well and affordably? In other words: the learning curve is key.

For Van Gangelen [11] the learning aspect is so important that he includes it in his definition: systematically investigating the performance and underlying processes and practices of one or more leading reference organisations in a particular field, and comparing one's own performance with these best practices, resulting in action-oriented learning.

The European Commission [12] also includes learning in its definition, which is
the briefest we have found: ‘benchmarking is improving by learning through
comparison’.

2.3 Benchmarking and Total Quality Management


Benchmarking and Total Quality Management are related concepts. TQM can
be described as a way of managing an organisation that results in the
continuous improvement of all processes and thus meets or, better still,
surpasses the expectations of customers and principals. [13]

TQM builds on three key principles: meeting customer expectations, managing
processes and continuous improvement. Benchmarking and TQM are
comparable approaches. Benchmarking could be argued to be a sophisticated
quality management tool [14] and successful benchmarks often feature in TQM
strategies. [15] Daft in fact sees benchmarking as one of the TQM techniques, on a
par with outsourcing and continuous improvement. [16] He feels that
benchmarking is often a very useful boost to energy and direction in a TQM
programme. [17]

Benchmarking and TQM differ in that benchmarking focuses on key issues and
best-in-class comparisons, while TQM covers all aspects of an organisation and
may also be totally internally focused. [18]

Bendell [19] sees the current interest in benchmarking as ‘a natural evolution
from total quality management’ that in fact takes TQM one step further. TQM
focuses on a set of minor inefficiencies in need of improvement, but small
incremental improvements are not enough in this day and age, Bendell
reckons. Global competition requires quantum changes that are only
achievable through benchmarking.

2.4 Definitions: differences and similarities


Definitions of benchmarking would seem to agree on a number of points.
Measuring, comparing, understanding, learning and improving keep cropping
up. And so do terms like systematic and continuous, clearly putting
benchmarking in a different bracket from one-off corporate comparisons.
Many definitions include terms like gauge, reference or best practice:
comparing one’s own performance with a better one. In fact, the term
benchmark itself literally refers to such a gauge or reference: a benchmark is a
feature in the landscape used as a point of reference by a land surveyor. The
term is also used to indicate sea levels, as Figure 2.1 shows.

Figure 2.1 A benchmark

The differences in benchmarking definitions typically involve the areas being
benchmarked: performance or process, efficiency and/or quality. Like Watson,
we tend to see these differences as different types or generations of
benchmarking rather than as fundamentally different definitions. [20]

Our definition of healthcare benchmarks differs from most other definitions
presented in this section on one notable point: its multidimensional nature,
combining both efficiency and quality. [21] We see the multidimensional or
integrated nature of healthcare benchmarks as absolutely crucial, as this
avoids a one-sided approach. Using a one-dimensional benchmark increases
the risk of launching actions for improvement that do indeed enhance the
benchmarked performance but only at the expense of other areas – leaving the
organisation no better and perhaps even worse off. We therefore consider the
multidimensional element an integral part of our benchmarking definition.

2.5 History of benchmarking


Benchmarking first emerged in the 1950s and 1960s, before really taking off in
subsequent decades. Watson [22] attributes the rise of benchmarking to Frederick
Taylor, a proponent of corporate comparisons as early as the late 19th century.
Bullivant and Watson [23] break down the development of benchmarking into five
phases:
• Phase 1: reverse engineering (1950-1975). Dissecting and analysing
competitors’ products to identify technical advances and then copy them.
• Phase 2: competitive benchmarking (1976-1986). Learning from both the
processes and products of the competition.
• Phase 3: process benchmarking (1982-1988). Roughly coinciding with Phase
2, this phase is primarily about a difference in emphasis: learning from
best-in-class performers. In other words: the search is on for the best
product or process, regardless of whose it is.
• Phase 4: strategic benchmarking (1988 - today). The focus of learning has
now shifted from processes to fundamentally changing performance –
which makes this type of benchmarking strategic.
• Phase 5: global benchmarking (1993 - today). Virtually simultaneous with
Phase 4, this phase again reflects a minor difference in emphasis. In global
benchmarking, organisations seek to learn from players doing essentially
the same thing but in a totally different external setting, e.g. in another
part of the world and/or culture.

Benchmarking may have started in the private sector but it has long since made
the transition to the public sector, at local, provincial, national and European
level.

As for the healthcare benchmarks at the heart of this report, the following
quotations reveal much about their development. In 1998 a very tentative
research question read: ‘Is it possible to develop a single, integrated benchmark
model for nursing and care homes and, if so, under what conditions?’ (Request
for a feasibility study on a benchmark for nursing and care, September 1998).

In 2006, a mere eight years later, the request was to set up a sector-wide and
forward-looking benchmark investigation: ‘Develop a continuous benchmark
for the totality of nursing, care and home care, drawing on state-of-the-art ICT
and complying with other information trajectories.’

The change in who is giving the assignment is also interesting. Things started
out like this: ‘The Government has tasked the Ministry of Health, Welfare and
Sport to launch an investigation in 1998 into the possibilities for
benchmarking in all sectors governed by the Algemene Wet Bijzondere
Ziektekosten (AWBZ, Exceptional Medical Expenses Act).’ [24] The ministry may
have taken the initiative, but industry associations quickly developed into
co-sponsors, to end up as these benchmark studies’ sole sponsors. The
benchmark subsequently changed from a subsidised project into an activity
paid for by healthcare providers themselves. In some instances the government
is still acting as the driving force behind benchmarks in sectors that have had
no sector-wide benchmarking. To date, the Dutch child healthcare system (JGZ,
covering 0-19 years of age) has no comprehensive industry-wide benchmark,
and we are seeing the government provide the first push by way of a project
structure and subsidies. But it is doing so within a broader framework, as part
of its Beter Voorkomen (Prevention is Better) programme that also
encompasses financial accountability.

2.6 Benchmarking: increasingly embedded


Benchmarking is evolving into an ever more firmly embedded tool with an
increasingly broad horizon in terms of reference groups. Benchmarking has
become a popular tool, with Bendell [25] even referring to its use as ‘booming’.
Bain & Company [26] periodically investigates the use of management tools across
the world, with benchmarking featuring very high in its rankings. [27] In 2004 it
came third among the 21 most used tools, beaten only by strategic planning
and Customer Relationship Management (CRM). [28] In 2002 benchmarking even
ranked second. The tool is also highly regarded: in 2004 its satisfaction rating
was significantly above the average for other management tools.

Figure 2.2 Most used management tools in 2004
Source: Bain & Company, 2005 management tool survey
– Usage: strategic planning 79%; CRM 75%; benchmarking 73%; outsourcing 73%; customer segmentation 72%
– Satisfaction (five-point scale): strategic planning 4.14; benchmarking 3.98; customer segmentation 3.97; CRM 3.91; outsourcing 3.89

Bain & Company [29] also finds benchmarking all over the world, with the one
exception of Asia, where the tool is clearly less popular. Ignoring Asia, the use
of benchmarking would move up a slot in Figure 2.2, making it second only to
strategic planning. In Europe, a hefty 88 per cent of respondents used
benchmarking as a management tool.

Benchmarking has featured high on the list of tools surveyed by Bain & Company [30]
for over a decade, leading the consulting firm to conclude that it is not a fad but
a consistently used instrument. Accenture’s 2006 global survey into the use of
benchmarking within public administration [31] finds that governments and
government bodies are increasingly reporting the use of benchmarking as a
tool.

A sure sign of its growing popularity in the public as well as the private sector
was the creation of the International Benchmarking Network under the
auspices of the Organisation for Economic Cooperation and Development
(OECD). An informal experts group, the network’s objective is to monitor
benchmarking developments in public sector organisations and to gather and
disseminate such information. The network has a particular focus on types of
international benchmarking. [32] The group first met in Paris on 21 November
1997, and its planned activities include maintaining a database of web links on
benchmarking in the public sector.

The authors of Benchmarking in de publieke sector [33] believe that the United
Kingdom and the United States have a clear edge in benchmarking, especially
in the public sector. In the Netherlands, by contrast, benchmarking is still seen
as something out of the ordinary.

2.7 Benchmarking as necessity


Various studies suggest that benchmarking is becoming a necessity in both the
private and the not-for-profit sectors. In the face of fierce competition or the
battle for funding, organisations are increasingly having to improve their
efficiency and/or quality – or die. Benchmarking serves as a means to help
achieve such improvements and at the same time demonstrate that the
organisation is indeed seeking to improve.

In its survey of benchmarking in the public sector, Accenture [34] observes that
the objective of the benchmarking exercise (performance improvements
and/or cost-cutting) often derives from heightened outside pressure. Bendell [35]
lists three developments driving benchmark studies: global competition,
prizes/publicity and the need for breakthrough projects. To survive, companies
will have to match or exceed best practice at their competitors all across the
world. And winning awards brings kudos, too: think of the Malcolm Baldrige
National Quality Award in the United States, for instance, or the European
Quality Award for Business Excellence. Holland’s equivalent is the Nederlandse
Kwaliteitsprijs en -onderscheiding (the Dutch Quality Award). [36]

Lastly, it has become imperative that organisations make improvements that
bring real breakthroughs in production processes, service processes, products
or services. Unchanged or shrinking public sector budgets and rising
healthcare demand are combining to necessitate major improvement. Higher
expectations in the community are also playing a part. People want value for
money, not just in the private sector in return for disposable purchasing
power, but also in the public sector in return for tax revenues.
3 How to make benchmarking a success

Benchmarking has a lot to offer, and we would argue that it has its place in
Dutch healthcare. Like any other management tool, benchmarking of course
also comes with its preconditions and pitfalls. This section lists them and
suggests solutions to potential challenges, reviewing such issues as:
• preconditions for benchmarking
• optimising learning
• the need for a broad-based benchmark model
• the use and purpose of external consultants
• the added value of a multidimensional approach
• tool quality requirements
• aligning data gathering with general administrative duties
• the ethics of benchmarking
• the importance of voluntary participation
• the importance of repeat benchmarking

3.1 When is benchmarking an appropriate tool?


Is benchmarking a panacea for improvement? Needless to say, the question in
itself supplies the answer: No, it isn’t. If benchmarking is to be appropriate, a
range of preconditions will have to be met. Its limitations are reviewed at
length in Benchmarking in de publieke sector [37] but are pointed out just as often in
studies about private-sector benchmarking.
• Benchmarking is not suitable for all issues. If relatively simple matters are
at stake that can easily be solved internally, it will not be necessary to
engage in a time and energy-consuming, cost-intensive benchmarking
exercise.
• Benchmarking is no ‘quick fix with instant payback’. [38]
• Benchmarking always requires follow-up and is not a method for
implementing improvement. Benchmarking alone will not do the trick.
• Benchmarking makes demands on an organisation. [39] Key preconditions are
senior management commitment [40] and a culture and structure that are
conducive to benchmarking.

Benchmarking, then, is the appropriate course of action in dealing with
complex issues with no obvious solutions. And benchmarking will really only
come into its own if an organisation’s culture and structure meet certain
requirements. De Vries and Van der Togt [41] identify the following elements in an
organisation’s culture as conducive to benchmarking:
• a focus on external measures (customer requirements or the performance
of best-in-class organisations) instead of internal priorities
• aiming for the very best
• a willingness to change
• a willingness to learn or unlearn [42]

According to De Vries and Van der Togt [43], a structure conducive to
benchmarking typically displays the following features:
• a focus on processes and operations and not on people, jobs or parts of the
organisation
• a pre-established TQM system
• a framework encouraging information-sharing
• a team-driven approach, training facilities (benchmarking needs to be
taught) and pre-established monitoring mechanisms

Paraphrasing the words of a PrimaVera Working Paper [44], benchmarking
requires a learning organisation. This paper’s authors also identify a number of
preconditions if benchmarking is to be successful. First is that the structure of
the organisation allows differences. Second, that its culture encourages
learning and experimenting with organisational change. And lastly, the
organisation needs to have a vision that puts changes in perspective, i.e. where
are we taking the organisation?

Senior management commitment is absolutely crucial. Its support should go
further than merely allocating time and resources: senior management needs
to be involved closely in the entire process and communicate the importance of
benchmarking.

In the absence of any willingness to change, benchmarking will be no more
than a one-off investigation without any teeth. Watson [45] even argues that an
organisation resisting change is not yet ripe for quality. In his view,
organisations ready for quality see change as an exciting challenge. Quality
readiness is no force of nature, he feels: it can be managed.

And what if the organisational culture is not favourable to benchmarking? Is
that an excuse? Van Gangelen’s case studies [46] indeed suggest that it is
sometimes used that way. But in a recent study into the success of mergers,
KPMG argues that managers use culture as an easy excuse for not doing their
jobs. [47] Grotenhuis’s dissertation [48] arrives at a similar conclusion: ‘The effect of
culture on the success [of a merger] can be managed.’

3.2 Key success factor 1: Optimise learning


When all is said and done, the success of a benchmarking exercise hinges on
the degree to which the organisation actually manages to implement change.
Surprisingly, benchmarking does not always produce learning.

Accenture [49], for one, finds that nearly all companies and public organisations
come out of a benchmarking study knowing what issues they score less well on,
but that only four per cent of them have any inkling as to how to adjust their
procedures and systems subsequently. [50] Van Gangelen [51] says more or less the
same: ‘Benchmarking turns out to make a crucial contribution to obtaining
insight but does not appear to inspire action-oriented learning.’ He considers
that there is no evidence that benchmarking leads to the implementation of
new knowledge in the organisation, citing potential reasons such as limited
capacity for change in the organisation, other strategic priorities and cultural
aspects such as disposition to change. Lack of transparency hides a fear of being
held accountable, he argues.

Reviewing twelve benchmarking initiatives, Kishor Vaidya et al. [52] also find no
trace of any measures for change but conclude, somewhat to their surprise it
would seem, that this does not seem to affect the tool’s popularity.

In our healthcare benchmarks we have also heard it said that not much is being
done with benchmark outcomes. In one employee survey we asked staff that
had taken part in the previous survey whether it had brought about any
change. Half of respondents felt little or nothing had been done with their
views.

So what prevents optimum learning?

The literature on benchmarking has little to say about the absence of learning.
But why would an organisation invest so much in a benchmark to then do
nothing about it?

We can only put forward a number of hypotheses. One would be that learning
and improving imply change. And change is not something people do easily,
even when they understand that it is necessary. Watson may argue that
organisations unwilling to change are not ripe for quality, but that is by no
means to say that change is easy.

Senge [53] is convinced that less is being learned than should be possible, not
because of lack of will or good intentions, but because of our internal map of
reality. He explains that new insights are not applied because they do not fit in
with our views of the world and reality. These views – or what he calls our
‘mental models’ – prevent us from thinking and acting in any other ways than
those in which we are used to thinking and acting. The learning organisation
would do well to devote a great deal of attention to these mental models and
discuss them at length, he recommends.

Argyris [54] has also done a lot of research into organisations’ capacity for
learning, more specifically into the capacity for learning of management
teams – or rather the absence thereof. He even goes as far as to call this the
‘skilled incompetence’ of teams of people who are experts at not learning.
Managerial and professional behaviour creates defence mechanisms and then
clothes these with various types of argument. A management team that really
wants to learn is not just interested in the reality of the organisation but also in
the actual nature of the management team itself.

His findings gel with what Kets de Vries [55] identifies as the need to ‘fight with
the demon’ in his observations about organisations. He reckons a key condition
for achieving improvement is being able to handle the irrationality in
organisations and managers, seeing a ‘world of difference’ between identifying
and analysing symptoms, and really getting to the bottom of the problem. The
real task, he argues, is to ‘shatter illusions’.

If benchmark outcomes are not what the organisation expected them to be, it
will be tempted to blame the benchmark survey. And sometimes rightly so, as

we know from experience. But occasionally the Not Invented Here syndrome
kicks into action: ‘It wasn’t us who invented the indicator or method, so we
doubt its validity.’

Fundamental change principles

When looking into the question of how to encourage learning and improving,
we soon hit on a number of fundamental change principles. Change experts
call these the need for change, the willingness to change and the capacity for
change – the three key tenets of any change programme.

The first condition for any change is that the need for change is recognised, and
– in this instance – that the benchmark is agreed to be a tool that could
potentially help in learning and improving. Recognising the need for change is
a largely rational mental process. Benchmark outcomes are also rational:
scores that reveal whether one performs better or worse than others.

But whether the need for change actually leads to change then depends on a
willingness to change – a willingness to think outside the box and take a risk.
Now this is a much less rational process. Don’t forget, we are now talking about
the willingness to change of people who will actually have to implement the
change. And they are not necessarily always the same people who have decided
to participate in the benchmark in the first place.

The third and final precondition of change is the capacity for change. Are the
people who need to change capable of changing? Do they have the knowledge
and expertise to apply fresh insights? Focusing on the benchmark: do they
know how to interpret benchmark outcomes? And how to translate these into
action? Again, we are talking rational aspects here, although the challenge of
change always brings to light qualities that are rather less rational.

Answers to these questions provide pointers to ways of encouraging learning
and improving. In terms of the rational aspects:
• Ensure that participants understand early in the process how the
benchmark works and are sufficiently aware of its potential for learning
and improving.
• Ensure that participants take time early in the process to think about the
consequences of participating in the benchmark. Prepare them for
potential outcomes and their implications or consequences.

• Ensure that participants are adequately advised of how the outcomes may
be interpreted and translated into improvement measures.

As we have seen, optimum use of benchmarking’s potential for learning makes
demands on an organisation, demands that translate into questions to
potential participants:
• Think first about what you expect from the benchmark. What kind of
information are you most eager to find? What are you especially curious
about? Which organisations would you like to be benchmarked against?
Which organisation or organisations would you say display best practice
today?
• Encourage your organisation to set benchmark objectives: how good do we
want to be?
• What are we hoping to achieve with the benchmark? Draw up a plan stating
what you intend to do with the benchmark outcomes.
• Think about how you will handle feedback. The golden rules of
communication apply to benchmarks, too. For ‘receiving feedback’ these
golden rules are:
– Do not perceive feedback as a personal attack.
– Do not immediately go on the defensive.
– Find out what the message means and what the exact significance of the
feedback is.
– Continue to treat the messenger with respect.

Show senior management’s commitment by freeing up time and allocating
resources to the benchmark. Let’s not forget that management sets the
standard, not just in participating in a benchmark but even more so in
implementing its findings.

Embedding

The crunch is how a benchmark’s findings and conclusions are embedded in
the organisation. The benchmark will yield its biggest rewards if the
organisation translates its insights into concrete actions, carried out by
employees in their day-to-day work processes. Actions, that is, that feature in
regular planning and control cycles and in Deming’s famous
Plan-Do-Check-Act Cycle. This, of course, is the ultimate test for any
benchmark.

Figure 3.1 Deming Cycle (Plan – Do – Check – Act)
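The cycle in Figure 3.1 can be sketched as a simple loop, with a benchmark-derived improvement action as its subject. This is purely illustrative: the action, target and scores below are hypothetical, not taken from any actual benchmark.

```python
# Minimal sketch (hypothetical action, target and scores) of running a
# benchmark-derived improvement action through Deming's Plan-Do-Check-Act cycle.

def pdca(action, target, measure, improve, rounds=2):
    """Run up to `rounds` PDCA iterations for a single improvement action."""
    plan = f"plan: {action} (target {target})"     # Plan
    for _ in range(rounds):
        improve()                                  # Do
        result = measure()                         # Check
        if result >= target:                       # Act: standardise the gain
            return f"{plan} -> achieved ({result})"
    return f"{plan} -> not yet achieved, re-plan"  # Act: adjust and re-plan

score = [6.8]                                      # current benchmark score
def measure(): return score[0]
def improve(): score[0] = round(score[0] + 0.3, 1) # each cycle lifts the score a little

print(pdca("raise client satisfaction score", 7.2, measure, improve))
# -> plan: raise client satisfaction score (target 7.2) -> achieved (7.4)
```

The point of the sketch is the loop structure: a benchmark outcome only pays off once an action is planned, executed, re-measured and either standardised or re-planned.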

3.3 Key success factor 2: The benchmark model should be broadly based

Willingness to change is key if any learning is to happen, but a broadly based
benchmark is also crucially important. It is our experience that a benchmark
model needs to be developed in close consultation with the industry
organisation and a working group of organisations acting as a sounding board,
and that the model so developed needs to be appropriately communicated to
all participants. It is through this approach that we ensure that the model
reflects the real world. The building blocks that make up our healthcare
benchmarks, for instance, derive from the INK model that many healthcare
providers in the Netherlands are familiar with. The authors of Benchmarking in
de publieke sector [56] also list a broadly based model as a key success factor, advising
that the approach to and implementation of a benchmark should be planned
in consultation with all organisations involved. After all, it is these
organisations that will have to provide the data. Moreover, they often have a
keen insight into who is performing well or less well – they know the story
behind the numbers.

3.4 Key success factor 3: A multidimensional approach


A key feature of our healthcare benchmarks is their integrated
multidimensional approach: investigation into efficiency should always
include quality, and vice versa. We have said it before: working cheaply may be
simple, but if cheap means low quality the organisation will not be a good
reference point for other organisations, or not to our minds at least. The proof
would seem to be that best-in-class organisations typically score highly on
individual building blocks but do not always command the highest scores: too
much emphasis on one element invariably detracts from others.
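The risk of a one-dimensional benchmark can be made concrete with a small sketch. The providers and scores below are hypothetical; the point is only that a reference candidate should score well on every dimension, not just one.

```python
# Minimal sketch (hypothetical data): why a one-dimensional benchmark can
# mislead. An organisation is only treated as a potential reference point
# if it scores well on BOTH efficiency and quality.

providers = {
    # name: (efficiency score, quality score), both on a 1-10 scale
    "A": (9.1, 5.8),   # cheap, but at the expense of quality
    "B": (7.8, 8.2),
    "C": (6.5, 8.9),
}

def reference_candidates(scores, threshold=7.0):
    """Return providers scoring at or above the threshold on every dimension."""
    return [name for name, dims in scores.items()
            if all(d >= threshold for d in dims)]

# One-dimensional view: provider A looks best on efficiency alone.
print(max(providers, key=lambda n: providers[n][0]))  # A

# Multidimensional view: only B qualifies as a reference point.
print(reference_candidates(providers))                # ['B']
```

Provider A tops the efficiency ranking yet would be a poor reference point, which is exactly the one-sidedness the integrated approach guards against.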

Benchmarking in de publieke sector [57] sees an integrated approach as a key success
factor for benchmarks: ‘Always opt for an integrated approach comparing both
financial and non-financial indicators and explicitly taking into account the
resources – financial and otherwise – available to the individual organisations.’

We have repeatedly and successfully drawn national and international
attention to our benchmarking approach. In a publication entitled Improving
the performance of health care systems [58], the OECD reviewed eight projects in four
different countries. The title it chose for its discussion of the Dutch home care
benchmark was ‘Benchmarking for home care: yes, it is possible.’ Touching on
its multidimensional aspects, the review calls the relevant benchmark an
encouraging example. Section 4 captures the strengths of the
multidimensional approach in greater depth, discussing building blocks and
their interrelationships.

3.5 Key success factor 4: High-quality tools


It may seem superfluous even to mention, but the quality of the research tools is of
course crucially important for both the usability of the outcomes and for the
backing enjoyed by the benchmarking process. Logistics and content are
equally important here: organisations should be able to devote their energies
to the results of the benchmark and spend as little time and energy as possible
on technical hiccups or distractions in terms of content.

More specific success factors here, we reckon, are a fact-based approach and
optimum use of ICT. As the term implies, a fact-based approach primarily
means that we stick to the facts. Facts and figures can be validated and
objectively considered. Which is also why we use a scoring system. We calculate
scores per question if at all possible, but at the very least per theme and
individual section of the benchmark. This allows us to clearly pinpoint an
organisation’s position among its fellow participants and to identify best
practices.

In the early years of healthcare benchmarking there was certainly some debate
as to whether client views could be captured in a quantitative gauge. Some
pressed for a more qualitative approach, but in practice this is hardly possible
and not really necessary either. It is now generally accepted that client views
can indeed be adequately captured in a score, e.g. by having clients agree or
disagree with specific statements or list how often they have had specific
experiences (the latter is now generally agreed to be the most exact line of
questioning). This type of survey can always also include a question that allows
the client to pick the most urgent from a list of improvements or, if benchmark
participants so desire, an open text box allowing clients to raise their own
issues. [59] Of course, the same also applies to employee surveys.

‘Doesn’t this fact-based approach provide a false sense of security?’ we are
sometimes asked. ‘Is there really much to choose between a care provider
scoring 7.34 and one scoring 7.36?’ Fair questions. This is why we tend to use
more general rankings, a typical one being three categories with A containing
the top quartile of best-scoring organisations, category B the middle 50 per cent
and C the bottom quartile of least good scores. It is an arbitrary breakdown, of
course, but so is any other. To avoid any false sense of security we will never
identify a single best practice. There are always several, and performance is
compared to average scores on these best practices.
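The A/B/C breakdown described above (top quartile, middle 50 per cent, bottom quartile) can be sketched as a few lines of code. The organisation names and scores are hypothetical; this is not the actual benchmark tooling.

```python
# Sketch of the A/B/C ranking described in the text: category A holds the
# top quartile of scores, B the middle 50 per cent, C the bottom quartile.
# Scores and organisation names are hypothetical.

def abc_categories(scores):
    """Map each organisation to A (top 25%), B (middle 50%) or C (bottom 25%)."""
    ranked = sorted(scores, key=scores.get, reverse=True)
    n = len(ranked)
    cut_a = max(1, n // 4)       # size of the top quartile
    cut_c = n - max(1, n // 4)   # index where the bottom quartile starts
    return {org: ("A" if i < cut_a else "C" if i >= cut_c else "B")
            for i, org in enumerate(ranked)}

scores = {"Org1": 7.36, "Org2": 7.34, "Org3": 6.90, "Org4": 8.01}
print(abc_categories(scores))
# -> {'Org4': 'A', 'Org1': 'B', 'Org2': 'B', 'Org3': 'C'}
```

Note how the near-identical scores of 7.36 and 7.34 land in the same category B, which is precisely what the coarser ranking is meant to achieve.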

Research tool quality is also reflected in ICT usage. A database is set up for every
benchmark, storing all data that feed into the analyses. ICT also comes into
play in feedback reports, which are automatically generated and include the
relevant organisation’s data. In addition, our more recent benchmarks feature
a web-based tool for the sign-up procedure.
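The automatically generated feedback report can be illustrated with a minimal sketch: each participant's stored scores are compared with the group average. The data layout, field names and report wording below are hypothetical, not the actual report format.

```python
# Sketch (hypothetical data and layout) of an automatically generated
# feedback report: a participant's stored scores per theme are compared
# with the average of all participants in the benchmark database.

from statistics import mean

benchmark_db = {  # theme -> {organisation -> score}
    "client satisfaction": {"Org1": 7.8, "Org2": 7.1, "Org3": 8.0},
    "efficiency":          {"Org1": 6.9, "Org2": 7.5, "Org3": 7.2},
}

def feedback_report(org):
    """Build a plain-text feedback report for one organisation."""
    lines = [f"Feedback report for {org}"]
    for theme, scores in benchmark_db.items():
        avg = mean(scores.values())
        own = scores[org]
        verdict = "above" if own > avg else "at or below"
        lines.append(f"- {theme}: {own:.1f} ({verdict} the group average of {avg:.2f})")
    return "\n".join(lines)

print(feedback_report("Org1"))
```

Generating every participant's report from one shared database is what keeps the feedback consistent and the participants' own effort low.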

3.6 Key success factor 5: Do not leave everything to external consultants

Should you benchmark yourself or should you have external consultants come
in and do it for you? Views on this issue vary widely, and you would not expect
an entirely objective perspective in a report written by one such consultant. So
let us stick to what others have to say on the subject: ‘Consultants can be a great
help. But they should not do it for you. Benchmarking should be done by your
organization, for your organization and to improve your organization,’
Bendell [60] argues. Keehley [61] takes much the same view.

In the Dutch healthcare benchmarks, research consultants relieve
organisations of much of the work: the questionnaires are provided, the
answers are analysed and the organisations receive feedback reports. But that
does not detract from Bendell’s exhortation that it is the organisation that
needs to do something with the benchmark and not the consultants. Getting
the organisation ready for benchmarking, thinking about one’s own
performance and processes and, most importantly, devising and
implementing improvements – these are and will remain tasks for the
organisation itself.

3.7 Key success factor 6: Aligning the benchmark with regular records

The degree to which organisations see benchmark participation as a burden –
i.e. an investment of time – matches the degree to which the data have to be
supplied specifically for the benchmark. If the data can be retrieved directly
from the organisation’s records, participation becomes much less of a burden.

The authors of Benchmarking in de publieke sector [62] recommend: ‘Embed periodic
benchmarking in regular quality improvement activity within the
organisation. Make the necessary internal and external data gathering part
and parcel of day-to-day operations. Align your regular administration of
financial and non-financial data with your benchmark partners.’ We concur.
Aligning data flows requires full attention. But aligning should never come at
the expense of quality. It will not do simply to benchmark with data that
happen to be there. This would sharply reduce the chances of unearthing
interesting interrelationships or finding useful clues for change.

3.8 Key success factor 7: Sensitive data handling


Benchmarking requires openness. Organisations have to be willing to share
information. How can organisations be made to trust one another? And how can we
prevent benchmarking from resembling corporate espionage, with companies
making off with each other’s secrets?

US literature on the subject often calls this the ethics of benchmarking. ‘What
you do not wish for yourself, do not do to others’ is Bendell’s key ethical
principle of benchmarking. Ethical guidelines to benchmarking have been set
down in the Benchmarking Code of Conduct as developed by the American Productivity & Quality Center63 and in the European Benchmarking Code of
Conduct, developed by companies and institutions working together in the
Performance Improvement Group.

In our work with healthcare benchmarks we also sometimes come across concerns about secrets getting out. Organisations refuse to share certain
information, such as product selling prices. Needless to say, this primarily
happens in benchmarks covering large groups of participants and not
involving selection of benchmark partners by the organisations themselves.
We have found that there is a much greater willingness to share if a benchmark
involves a small group of organisations working closely together in workshops.

Benchmarking in de publieke sector64 observes that it is easy for organisations in
benchmarks to make things look better than they are. We are often asked about
this, too. ‘Don’t organisations abuse the benchmark? How do you know
whether you are getting genuine information?’ Our rejoinder is that the
benchmark was designed by and for the participating organisations, and that
the responsibility for supplying true and accurate information lies squarely
with them. Years of experience have taught us that strategic behaviour only
happens once in a while – experience that derives from consistency checks
included in the benchmark, among other things. Of course, these checks are
not infallible, but they do give a very good indication. Besides, it would not be
very logical for organisations to supply misleading information: they would be
investing a lot of time, money and energy in a benchmark whose outcomes
they could not trust if they supplied incorrect data or thought that others did.
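The consistency checks mentioned above can be pictured with a small sketch. Everything below – the field names, the tolerances and the figures – is hypothetical and serves only to illustrate the kind of internal cross-check a benchmark can run on submitted data:

```python
# Illustrative consistency check on a benchmark submission.
# Field names and tolerances are hypothetical, not taken from the
# actual healthcare benchmark questionnaires.

def check_submission(data: dict) -> list[str]:
    """Return a list of warnings for internally inconsistent answers."""
    warnings = []

    # Reported cost components should add up to reported total costs
    # (here within a 1 per cent tolerance).
    component_sum = data["staff_costs"] + data["material_costs"] + data["other_costs"]
    if abs(component_sum - data["total_costs"]) > 0.01 * data["total_costs"]:
        warnings.append("cost components do not add up to total costs")

    # Staff numbers and hours should imply a plausible working year.
    hours_per_fte = data["annual_hours"] / data["fte"]
    if not 1200 <= hours_per_fte <= 2000:
        warnings.append(f"implausible annual hours per FTE: {hours_per_fte:.0f}")

    return warnings

submission = {"staff_costs": 6_000_000, "material_costs": 1_500_000,
              "other_costs": 500_000, "total_costs": 9_000_000,
              "annual_hours": 540_000, "fte": 400}
print(check_submission(submission))
# → ['cost components do not add up to total costs']
```

Checks of this kind cannot prove that the data are true, but – as noted above – they give a good indication of where answers do not hang together.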

Transparency is not the same as public disclosure. Sharing information with other organisations is different from disclosing information to an insurer, for instance, and finding the benchmark’s data in the media is something else
again. Organisations do voice serious concerns about public disclosure,
arguing that this will encourage strategic reporting and lead to loss of control.
Numbers can take on a life of their own, are prone to misinterpretation and
might create an unjustifiably negative picture. Moreover, public disclosure
could mean competitors making off with secrets after all.

In the Dutch healthcare benchmarks it is standard practice for the organisations themselves to decide to whom they wish to disclose individual benchmark outcomes – the researchers never disclose them on their own initiative.
But sometimes reality catches up with views on public disclosure: the Dutch
Healthcare Inspectorate on its website discloses individual quality scores by
organisation – the same quality scores that feature in the benchmarks. In the
literature on benchmarking, the dangers of public disclosure and the potential
loss of support would seem to be a particular concern of authors working on
benchmarking in the public sector.

3.9 Key success factor 8: No compulsory benchmarking
The authors of Benchmarking in de publieke sector65 also discuss whether
benchmarking should be compulsory in some sectors. ‘In the Netherlands it
seems to be rather too soon to start thinking along these lines. Another
communication tool might make for an apter contribution to quality control
than benchmarking; perhaps a special Public Sector Quality Award, based on
the Business Excellence Model.’

We share their reservations. An obligation to benchmark every two years, say, might run into so much resistance that any prospect of benchmark learning
would be lost and the benchmark miss its objective altogether. A better way to
go about this would be to turn a benchmark into such a success that
organisations feel they are missing out if they are not participating.

Of course, there is no denying that social pressure on organisations to benchmark is increasing, with government, financial backers and clients
expressly asking for benchmark information. Not joining when most others in
the industry are doing so may seem almost suspect, and is sometimes
emphatically publicised. In 2006, for example, the Consumentenbond (the Dutch consumers’ association) published a series of surveys into the quality of healthcare in the Netherlands, with its magazine devoting a great deal of space to the fact that a number of hospitals had not participated. In fact, hospitals that had not participated in any survey were the subject of a separate text box, where they were ‘outed’ by name.

3.10 Key success factor 9: Strength through repetition


Benchmarking is not a one-off event. The literature bears this out:
benchmarking’s biggest impetus often comes from repetition that helps to show up its effects.66 Section 8 has more on the ultimate in benchmark
repetition, the continuous benchmark.
4 Different types of benchmarking

Benchmarking comes in different shapes and sizes, ranging from one-dimensional to multidimensional and from small internal to industry-wide benchmarks. Not every type is suitable for every purpose. Choosing the most effective type of benchmark requires insight into how each can be used. This section addresses that question.

4.1 Classification criteria


The literature shows that benchmarks are classified on the basis of the
following criteria:

• the benchmarking objective
• the nature of what is being measured
• the internal or external orientation of the benchmark (reference group)
• the level of organisation to which the benchmark applies
• use of normative standards

We will add a sixth criterion to the above:

• the research process

Organisations planning to participate in benchmarking should decide for each of the above criteria what they consider to be important in a benchmark. In
doing so, they will create an organisation profile with which they can go in
search of a benchmark that matches it.

4.2 Classification by benchmarking objective


Benchmarking can be classified by the objective of the benchmarking exercise.
In this respect, De Vries and Van der Togt make a distinction between
benchmarks that focus on internal efficiency and those aimed at external
effectiveness. Based on our own experience, we can add two further
distinctions: benchmarks aimed primarily at determining one’s position (in
which case general outcomes suffice) and benchmarks that seek to improve
business practices (in which case more specific pointers are needed).

4.3 Classification by what is being measured


Benchmarks can also be classified by what is being measured. Benchmarking
can, in principle, measure a whole host of things: simple or highly complex;
factors that can be measured quantitatively or those that can only be described
in qualitative terms. One example of classifying by what is being measured – more finely graded than a classification based on principle – is to classify by scope. Whereas benchmarking may seek to compare the total performance of
an organisation with that of other organisations, in which case allocating
weightings to the various components is a tricky exercise, it may also simply
seek to compare individual aspects of organisations, such as their complaints
procedures or telephone availability. And whereas relatively simple indicators
tend to suffice in the latter case, complex analytical techniques, such as
econometric methodologies, are needed in the former.

When classifying by what is being measured, the most important distinction is that between performance benchmarking and process benchmarking. Cowper and Samuels67 define performance benchmarking as: ‘Comparing the
performance of a number of organisations providing a similar service.’ They
take process benchmarking to mean: ‘Undertaking a detailed examination
within a group of organisations of the processes which produce a particular
output, with a view to understanding the reasons for variations in
performance and incorporating best practice.’

The distinction between process and performance is made by various authorities. In this context Grayson68 distinguishes two different approaches:
competitive benchmarking and process benchmarking. Competitive
benchmarking compares the performance of businesses that compete with one
another in the same market. Process benchmarking focuses on the
functionality of comparable processes, enabling comparisons to be made
between dissimilar companies in different industries.

Our experience with healthcare benchmarks is that comparing on the basis of process alone does not adequately meet the needs of the benchmarking
partners. They want confirmation that the process in question has actually
resulted in strong performance, and therefore wish to know how the
organisation has performed. That is why benchmarks in healthcare focus on
performance, and use processes only to explain how the performance was
achieved. Watson69 also implicitly applies this approach: processes serve as an explanation for performance. He argues that the explanation given may be completely different from what the partners in benchmarking expected, which
makes the benchmark all the more interesting. An example of a surprising
outcome given by Watson is a survey by McKinsey into the commercial profits
generated by new technological products. Until that time (1983) the world
believed that the success of a new product was determined first and foremost
by the degree to which development costs had been kept under control. Yet the
McKinsey study yielded a different outcome, namely that products whose
development costs had overrun the budget (in some cases substantially) but
that had been launched according to schedule appeared to be far more
profitable than products that had remained within budget but were launched
later than planned. So in this industry a timely product launch was found to
contribute more to profits than cost control.

During the first years after the introduction of healthcare benchmarks there
was considerable debate about the term ‘performance’. Many healthcare
providers felt that it did not apply to them and was a term coined in the
corporate world that was not appropriate to the healthcare sector. This debate
has since subsided. The question now is what the performance, or outcome, of
healthcare providers actually is. Healthcare benchmarking to date has focused
only on output, which is to say on the amount of care. The industry – in
particular non-curative healthcare – still has a long way to go when it comes to
unambiguously measuring the outcome of care. How can you uniformly
measure an improvement in well-being on a national scale, for example, and
how can you prove that this improvement is the result of the care provided?
That is why, in existing healthcare benchmarks, the focus on the quality of care serves as a substitute for outcomes that cannot yet be measured.

4.4 Classification by reference group: internal or external benchmarking
Harrington & Harrington make a distinction between internal and external
benchmarks and identify five types of benchmarking:

• Internal benchmarking. A comparison between locations, districts, business lines or university departments within a single organisation. In a variation on this type, internal benchmarks compare partners within a single collaborative venture.
• External competitive benchmarking. An organisation examines the
products or services of a competitor, without this other party being
involved. An example of such benchmarking would be a car manufacturer
that buys a car made by a direct competitor and subsequently takes it apart
to examine how it has been made.
• External industry benchmarking. An organisation compares itself with
other organisations in the same industry that are not direct competitors.
Odenthal and Van Vijfeijken70 refer to a similar kind of benchmarking as
‘functional benchmarking’: comparing one’s performance or processes
with those of organisations in similar circumstances, such as schools with
comparable numbers of students, with a comparable denomination or of a
similar type. The main question in such comparisons is: who performs
better under the same circumstances?
• External generic benchmarking (cross-industry). An organisation compares
itself with organisations in the same industry and in other industries.
Odenthal and Van Vijfeijken71 state that this type of benchmarking is
particularly suitable for comparing indicators that are not sector-specific,
such as the overhead percentage, or matters relating to facility
management or personnel policy.
• Combined internal and external benchmarking.

Healthcare benchmarks are in the last group, where an external industry benchmark, i.e. a benchmark against other organisations in the healthcare
sector, is combined (if the organisation in question so wishes) with a
benchmark against departments or districts within their own organisation.

A further analysis of the literature shows that many researchers are in favour of
including organisations from other industries in the benchmark. Odenthal and Van Vijfeijken72 observe that looking beyond existing boundaries can yield new perspectives and solutions. Education could, for example, learn from the
way in which the health sector operates, for instance in the areas of knowledge
management, rostering and planning. Another example is a hospital
benchmarking exercise in the United States where the patient admission
process was compared with the way in which airlines and hotels run their
check-in counters.73

4.5 Classification by level of organisation


This classification is based on the level at which data are gathered and
analysed. Does the benchmark relate to a department, a division or the
organisation as a whole? In our experience, the organisation as a whole tends
to be too general a level for the identification of specific areas for
improvement. In this respect the level of a specific location, division or
department appears to be more suitable. That said, the organisation as a whole
is important, too, particularly if policy is made at that level.

4.6 Classification by use of normative standards


Benchmarking methodologies can also be classified by the standard used (the
benchmark). Klages74 makes a distinction between benchmarking that uses
predefined normative standards (e.g. criteria for quality awards such as the
European Quality Award and the Dutch Quality Award) and benchmarking
that uses non-normative empirical data.

This distinction has proved to be essential in healthcare benchmarks. Eleven years ago, our initial idea was to benchmark on the basis of definable
normative standards; we believed that this would lead to greater objectivity. In
practice, however, we could not find any standards (indicators) that were both
generally accepted and easily measurable. We also found that existing
indicators had been developed from theory and could not always be used in
practice, and – more importantly – were not always recognised. Theoretical
indicators could lead to spurious best practices which may be impossible for
‘flesh and blood’ organisations.

After all, if indicators that are not generally accepted are used in a benchmark,
the organisations involved could negate the outcomes by referring to an
invalid standard: the Not Invented Here (NIH) syndrome.

In view of the above, we have so far opted for comparisons based on non-normative empirical data in the application of healthcare benchmarks. In
other words, the organisations compare their operations with those of the best
organisations included in the benchmarks.

This also has its drawbacks, of course. One could point out that there may still
be room for improvement even in the best-performing organisations in a
particular industry. So is this the best possible solution? We see it as a process.
For the time being, there is still much to be gained by taking the
best-performing organisations in an industry as the point of reference. And the
best performers themselves usually need to put in considerable effort to
remain the industry leaders.75 That said, we are aware that this approach has its
limits. Organisations aiming for true performance excellence tend to look
beyond the performance of their own industry. They wish to excel, either by
formulating ambitious standards or by emulating organisations abroad or
organisations in other industries.

In a report about benchmarking in the public sector in the United Kingdom, Cowper and Samuels76 draw a conclusion that was formulated as follows by
Professor Helmut Klages: ‘The strategy to “let the figures speak for themselves”
produces the desired learning effect only if the comparison discloses levels
well below the norm. One can assume that the learning effect decreases to zero
at an average level of performance, and speculate further that it may become
negative above the norm.’77 That is why we believe that the benchmark of the
future (see Section 8) should include a normative, cross-industry and/or
international benchmark. With respect to the identification of standards, the
following warning should be heeded: select standards that enjoy widespread
support, that are easily measurable and that can be incorporated in an
integrated approach.

In some cases the adoption of standards may be closer than one might think.
Standards for responsible healthcare set specifically for the healthcare sector
are expected to apply soon. Care providers, clients, health inspectors and other
parties are jointly developing quality standards that need to be met by all
organisations and to be tested from time to time. The standards combine scores
taken from client surveys78 with scores for indicators used by the Dutch
Healthcare Inspectorate. Experience to date has shown that these standards are
highly suitable for use in benchmarking.

4.7 Classification by research process


In addition to the five classification criteria distinguished so far, we would add
a sixth criterion, which we have called the research process. The main focus
here is on the role played by the benchmark participants in designing the
research process. In one type of benchmark, which we have called the
fact-based benchmark, the participants use existing benchmarking tools; they
compare themselves with objective facts and there is little contact between the
participants and the research designers or researchers. In another type of
benchmark, which we have called the interactive benchmark, the participants
themselves develop their own benchmark and are in close contact with
external experts, if any.

Healthcare benchmarks come under the heading of fact-based benchmarks. They use a fixed set of tools (analytical model, questionnaires, score
calculations and reports) and fixed benchmark items. Questionnaires and
reports are drawn up in cooperation with the client and with a group of
organisations that act as a sounding board, but as a rule the benchmarking
partners are not directly involved in designing the questionnaires and reports
and there tends to be little direct contact between participants and
researchers.

We will illustrate this by describing the benchmarking process in the healthcare sector. Organisations considering taking part in a benchmark can
download pertinent information or attend an information session. If an
organisation decides to take part, the next step is the sign-up procedure. The
organisation will then provide details using a web application. Whilst it can
make a few choices, it will generally follow a fixed format. There will be no
contact with a researcher, unless the organisation gets in touch with the
helpdesk, which will always be available. The same applies to the stages during
which data are gathered and analysed. Contact with the researchers takes place
exclusively through the helpdesk, with the exception of the client survey,
where researchers will actually visit the organisation if the client is unable to
respond in writing. The organisations will provide benchmark data, which will
be validated and analysed by the researchers. In the final stage, the benchmark
reports will be sent to the participating organisations, or they will be advised
that they can download the reports. After the results have been published, the
organisations concerned are invited to take part in a workshop in which several
organisations discuss the results. The only times when the organisations
benchmarked are in contact with the external research team are during the
information sessions at the start of the process, via the helpdesk and during the
workshop at the end.

Interactive benchmarks are based on a totally different research process. Here, the benchmarking partners are in close contact from the very start and the
external experts – if any – act as process managers and supervise the sessions.
The benchmark teams jointly establish the benchmark, design the questions,
determine which data should be gathered, carry out the analyses and discuss
the results.

It goes without saying that these two types of benchmarks place different
requirements on organisations. The first requires that the organisations
benchmarked largely give shape to the learning process themselves. An
internal team will be required to initiate an awareness-raising process during
the benchmark survey, embed the benchmark results in the organisation and
see to the implementation of quality-improvement measures. The second type
of benchmark, the interactive model, requires participants to be more closely
involved in developing the benchmark from start to finish.

In the fact-based benchmark, performance is measured against objective, validated and representative data. In the interactive benchmark, it may also be
measured against non-representative data or qualitative descriptions.

In the first type of benchmark, performance can be measured against a previous benchmark; this is rarely the case in the second type.

Whereas the first type of benchmark is better suited to large-scale surveys with
several dozen or even hundreds of partners (in which case jointly developing
the benchmark is practically impossible), the second type is better suited to
small-scale surveys.

And whereas the fact-based benchmark may disregard the learning needs of
individual participants, the interactive benchmark could lead to repeatedly
‘reinventing the wheel’ and possibly less-than-optimum solutions, such as
insufficiently ambitious benchmarks, which would diminish the learning
effect.

There are also practical reasons for choosing a particular type of benchmark.
Whereas large-scale surveys may include several dozen or even several hundred
partners, small-scale surveys tend to be limited to about ten. So if industry associations strive for broad participation, this only leaves the option of the
fact-based benchmark in view of practical and cost considerations.

4.8 Profile of a healthcare benchmark model


This section has discussed six different ways of classifying benchmarks. By
using these classification criteria, we can create a profile for each benchmark.
PricewaterhouseCoopers’ healthcare benchmark model, which we will discuss
in more detail in the next section, has the following profile.

Objective: To improve efficiency and effectiveness.

What is being measured: First and foremost, performance. Processes only to the extent that they explain performance.

Reference group: External industry benchmark, i.e. comparing with other organisations in the same industry, in combination with an internal benchmark.

Level of organisation: The organisation (company/foundation) as a whole, but also departments and products.

Use of standards: To begin with, no standards – performance is compared with that of the best performers. If standards are used, they must be generally accepted by the industry.

Research method: Fact-based benchmark. Participants tend not to be directly involved in benchmark tools and content.

Table 4.1 Profile of PricewaterhouseCoopers benchmark model
Source: PricewaterhouseCoopers
5 Benchmarking model for healthcare benchmarks

In this section we take a closer look at the benchmarking model used over the years in the benchmark surveys we have carried out together with our clients and other partners in the health sector. This section is therefore based on practical experience.

A crucial characteristic of the model is that it is built up of several building blocks, or dimensions. We shall refer to this principle as the multidimensional
approach, and we consider it to be a basic premise.

The key features of the model are financial performance and quality. With this
model the survey results yield strategic management information, which the
organisations can use to adjust their strategy or use of people/resources in order
to improve performance. As far as we are concerned, the model will continue to
form the basis of good benchmarking, which does not mean that there is no
room for improvement. For further information on this, see Section 8.

[Figure: from left to right, input (environment, resources, historical context) feeds the strategic themes; these shape the building blocks measured – financial performance, quality of care and quality of the job; strategic analyses (identification of best practices, explaining performance) turn the data measured into results at industry, organisation and department/unit level, yielding benchmark strategic management information.]

Figure 5.1 Benchmark analysis model
Source: PricewaterhouseCoopers; various benchmark studies.
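To make the right-hand side of the model concrete, here is a minimal sketch of how strategic analyses might flag best practices from multidimensional building-block scores. All organisation names, block names and scores below are invented for illustration; they are not benchmark data:

```python
# Hypothetical scores per building block (scale 1-10); names invented.
from statistics import median

scores = {
    "Org A": {"financial": 7.2, "quality_of_care": 8.1, "quality_of_job": 6.9},
    "Org B": {"financial": 8.4, "quality_of_care": 7.0, "quality_of_job": 7.5},
    "Org C": {"financial": 6.1, "quality_of_care": 8.6, "quality_of_job": 8.0},
    "Org D": {"financial": 7.9, "quality_of_care": 8.2, "quality_of_job": 7.7},
}
blocks = ["financial", "quality_of_care", "quality_of_job"]

# Per building block: which organisations score in the top half of the field?
top_half = {}
for block in blocks:
    cutoff = median(s[block] for s in scores.values())
    top_half[block] = [org for org, s in scores.items() if s[block] >= cutoff]

# The multidimensional premise: a candidate best practice performs well on
# every dimension, not just on one.
candidates = [org for org in scores
              if all(org in top_half[block] for block in blocks)]
print(top_half)
print(candidates)   # → ['Org D']
```

The point of the sketch is the last step: an organisation that tops a single building block is interesting, but only an organisation that does well across all dimensions is a candidate best practice in the sense of the model.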

We will discuss each component of the model in turn in the following subsections.

5.1 Input and strategic themes


The model should be read from left to right. The left shows the organisation’s
input, or starting point. The organisation will have to take account of its
environment (or, more precisely, the environment ultimately determines the
organisation’s goal) and its historical context. And the organisation employs
resources: capital, equipment and of course, being in the health sector, people.
The organisation will then identify its strategic themes on the basis of this
starting point, which so far has little to do with benchmarking. Which
product-market combinations do we wish to deliver? How do we deal with a
tight labour market? How can we remain financially sound? How can we
become the very best? The answers to these questions will largely determine
the way in which people and other resources are employed. In other words: the
strategic themes determine an organisation’s structure and work processes.

The benchmark can provide useful information for the organisations’ strategic
themes. Consequently, we often start a benchmark survey with a session in
which representatives of the partners are asked to name a number of themes
about which information could be gathered. In the 2004 home care benchmark
the tight labour market was named as one of the strategic themes. That is why
this benchmark paid particular attention to identifying the factors that
influence the motivation of existing employees and the appeal of the
organisation as an employer.

Strategic position

Given the importance of strategic themes in influencing the choices organisations make, we proposed including their strategic position in the
benchmark survey. Our hypothesis was that an organisation’s performance
was related to the strategic position it had opted for and, this being so, it would
not be correct to treat the performance of different organisations in the same
way.

We were able to test this hypothesis in the 2004 home care benchmark. We
opted for a classification into three possible strategic positions, based on a
theory developed by Treacy and Wiersema.79 These researchers make a
distinction between the position of client leader, product leader and cost
leader. A client leader focuses on optimum customer satisfaction, with service
and long-term customer loyalty being key concepts. A product leader offers
sophisticated and innovative products (e.g. ICT in healthcare) while a cost
leader focuses on offering the lowest possible price. Treacy and Wiersema hold
that successful companies choose to pursue one specific position.

In the benchmark we opted for a construction in which the position could vary
per product. We could, for example, imagine that home care organisations
would choose the position of client leader for the product ‘personal care’, but
the position of cost leader in the case of ‘housekeeping services’.

We assumed that the client leaders in the benchmark would score highest on
the building block ‘client assessment’, that cost leaders would score highest on
the financial building block, and that product leaders would have a bigger
investment budget for product development.

The organisations that took part in the benchmark completed a short questionnaire to determine their position. In our analyses, we related the
benchmark performance to these positions. The questionnaire was completed
88 times, by 73 organisations (some organisations completed questionnaires
for more than one product).

The findings of the study did not confirm our hypothesis. First of all, no more
than 20 per cent of all home care providers actually opted for one of the three
positions; of those that did, all chose the position of client leader. A large
majority of the organisations did not make a clear choice, or chose two or three
positions.

Secondly, no correlation was found between the position opted for and the
performance delivered. No more than five of the 17 organisations that made a
clear choice in favour of client leadership scored above average in the client
survey.

These findings can be interpreted in different ways. First of all, it could be that
the three positions distinguished and/or the questionnaire are not yet
sufficiently geared to the healthcare sector. Treacy and Wiersema80 did not
develop their theory specifically for the healthcare sector, and not even for the
public sector. Secondly, it may be that most organisations have so far chosen
their strategic position implicitly and have not related it to the structure of
their organisation or their work processes. Several benchmark partners
advised us that this instrument had helped them in the awareness-raising
process. A third interpretation is that whilst they may have chosen a position
explicitly, its implementation in the organisation was still in its infancy.

That said, there was also evidence indicating that the strategic position of
organisations did affect their performance. Three of the five best performers in
the benchmark survey had chosen a clear position, which is much higher than
the average of 20 per cent referred to above. We may not, of course, draw any
conclusions from such small numbers, but this does give us grounds to suggest
that including the strategic position of organisations in the benchmark can
add value.

The hypothesis that the strategic position opted for by organisations influences
their performance has not been rejected. The time may not yet be ripe to
include this aspect in the benchmark, but that time will no doubt come.

5.2 Building blocks of benchmark surveys

[Figure: the benchmark analytical model. Input (environment, resources,
historical context) feeds the building blocks (financial performance, quality
of care, quality of the job); through the strategic analyses (strategic themes,
identification of best practices) the data measured are turned into explained
performance and benchmark strategic management information, reported at
industry, organisation and department/unit level.]

Let us return to the analytical model. The central part of the figure depicts the
building blocks. These building blocks constitute the key elements of the
benchmark survey, namely the performance aspects measured. In defining the
various areas measured, we decided to follow as closely as possible the INK
management model designed by the Dutch Quality Institute. As this is a
well-known model in healthcare, its outcomes are familiar to participants and
easily applicable to their own operations.

The building blocks examined differ per benchmark. In all cases, however, a
benchmark should include several building blocks, and therefore several
dimensions.

5.3 The financial building block

[Figure: the benchmark analytical model, repeated from Section 5.2.]

The financial building block has so far formed part of all healthcare
benchmarks. An organisation’s financial performance is based on cost, equity
and financial position. Financial considerations always play a role in
healthcare and the literature shows that the cost side of things is usually
included in healthcare benchmarking. This dimension is excluded only in a
few small-scale benchmarks in the public sector. Our first benchmarking
studies equated financial performance with efficiency, which we took to mean
costs in relation to production. In the healthcare sector this means the amount
of care delivered per euro. And whereas this does not cover all aspects of
healthcare delivery – after all, healthcare providers also deliver meals, laundry
services and financial overviews – the definition was considered to suffice at
the time by the benchmarking partners, as the delivery of care is the core
competency of healthcare providers.

The definition of financial performance has been extended since the 2004
home care benchmark survey, as efficiency is not the only aspect of financial
performance that providers must deliver on. Even an organisation that delivers
much more care per euro than other providers, for example, has a problem if
costs exceed income in a given year. A similar problem would arise in the long
term if an organisation’s financial position were not sufficiently sound. That is
why the financial building block was extended to include the components
‘financial position’ and ‘net profit/loss’. The components are weighted in order
to calculate a total score. The financial position, for example, accounts for 10
per cent of the total score, net profit/loss makes up 20 per cent of the score and
efficiency 70 per cent. Other weightings are also possible, but the degree to
which a care provider itself is able to influence performance should also be
borne in mind. If, for example, the government or another financial backer
permits only limited capital accumulation, or chips in immediately in the
event of losses, an organisation’s results are not necessarily a good indicator of
performance.

Total score financial performance = 70% efficiency (DEA) + 20% net profit/loss
+ 10% financial position (budget ratio)

Figure 5.2 Weighting of financial building block components (example)

Source: PricewaterhouseCoopers; various benchmark studies
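By way of illustration, the weighting above can be expressed as a simple calculation. The sketch below uses the example weights from the text; the component scores are invented:

```python
# Weighted total score for the financial building block, following the
# example weighting in the text: efficiency 70%, net profit/loss 20%,
# financial position 10%. Component scores are invented for illustration.
WEIGHTS = {"efficiency": 0.70, "net_profit_loss": 0.20, "financial_position": 0.10}

def total_financial_score(component_scores):
    """Combine component scores (each on a 0-100 scale) into one total score."""
    assert abs(sum(WEIGHTS.values()) - 1.0) < 1e-9  # weights must sum to 100%
    return sum(WEIGHTS[name] * component_scores[name] for name in WEIGHTS)

# A provider that is strong on efficiency but has a weak financial position:
example = {"efficiency": 80.0, "net_profit_loss": 60.0, "financial_position": 40.0}
print(round(total_financial_score(example), 1))  # 0.7*80 + 0.2*60 + 0.1*40 = 72.0
```

Other weightings, as the text notes, simply mean other values in the weights table; the calculation itself does not change.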

How do we calculate the amount of care delivered per euro?

Let us return to efficiency. How do we calculate the amount of care delivered
per euro? The answer to this question could provide organisations with
valuable insights into their financial management and possible cost savings. In
order to give a complete answer, we first need to answer three sub-questions:
how much care is delivered, what are the costs involved, and how can the
amount of care be responsibly matched to costs?

In answering the first sub-question, we came across a practical problem. If we
really want to know how much care is provided, we need to measure it. Time
registration, a method used in outpatient care, works well in that context, but
a different kind of time-keeping is needed in inpatient care. During the early
years of healthcare benchmarks, time was measured by registering the care
and treatment provided, a method developed by Customers Choice and known
in the Netherlands by the acronym ZBR. All carers/nursing staff were issued
with a palmtop computer on which they had to indicate which activity they
were carrying out for which client(s). They had to register this information
every twenty minutes using pre-coded answers. The activities entered were
automatically clustered into activity groups and AWBZ functions (long-term
care functions as specified under the Exceptional Medical Expenses Act) with
the measurements spanning a period of a week. The large number of
observations made during the course of a week enabled the calculation of
reliable averages. Despite anecdotal evidence that staff found this an
interesting experience, the benchmarking partners indicated that they felt it
to be a heavy burden. There was a clear wish to gain insight into the amount of
care delivered using a simpler method.

It is for this reason that the most recent benchmarks for care of the disabled
and for nursing, care and home care (VVT) no longer require that time is
registered (although care providers may do so on a voluntary basis). The
amount of care delivered is now measured through production targets or
registered production, which may be related to staffing levels. Today (2007) this
is much easier than it was some time ago. We are better able to keep tabs on the
amount of care delivered than we were a few years ago, now that we work with
so-called ZZPs, or care complexity modules, rather than nursing days. Another
development is that the amount of care provided is limited given the
requirement that the care delivered should be matched to need. Some
providers are even experimenting with electronic patient files in which the
care provided is registered daily.

In terms of research quality, however, we would like to query the suitability of
this solution. In residential care, production targets and registered production
are expressed in terms of ZZPs, but the amount of care actually provided can
differ substantially within a ZZP. Time measurements conducted in nursing
and care homes in the past have shown that some organisations were able to
deliver many more minutes per client per day than others, even after adjusting
for the complexity of care. It is unlikely that these differences have completely
disappeared.

Benchmarking organisations could learn a great deal from comparing their
own production with that of others. In the past, time measurements enabled
comparison of time spent on each activity, showing up clear areas for
improvement. Another advantage of time-keeping was that it provided a
measure of total productivity, as non-client-facing activities were also recorded
in the ZBRs. This enabled organisations to monitor such issues as whether an
above-average amount of time was lost waiting for a lift to arrive, or on
transporting equipment. These pointers for improvement disappear when
measurements are based on care complexity modules.

We would therefore like to urge benchmark participants to examine whether
the benefits of time measurement make up for the burden of keeping time. Do
participants know exactly how much time is spent on measurements of this
kind, and do they know in advance whether their employees would actually
find it a burden?

Taking stock of costs

In each benchmarking exercise, taking stock of costs is tailored to the specific
industry being measured. Even so, the following basic procedure applies.

The first item to be measured is the personnel costs per contract hour per group
of employees or level of expertise. In other words, the salary costs, special
allowances etc. are related to the length of the working week (or ‘contract
hours’) as stated in the employee’s contract of employment. Time
measurements are then used to calculate the percentage of contract hours
spent on client care. After adjusting for group-based care, this gives the
personnel costs per hour of client care. The costs are calculated for each level of
expertise. These costs are multiplied by an extra amount for overhead and
materials, broken down by level of expertise. A calculation of this kind shows
up the actual costs for each client-facing hour, which in turn can be used to
calculate the costs per hour per AWBZ function by determining, with the aid of
the ZBR, the share of each level of expertise in each function. The costs per care
complexity module could also be calculated in this manner.
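The procedure above can be summarised in a short calculation. The sketch below is a simplification with invented figures; the parameter names are ours, not those of the benchmark’s data model:

```python
# Sketch of the cost-per-client-facing-hour calculation described above.
# All figures and parameter names are illustrative assumptions, not the
# benchmark's actual data model.

def cost_per_client_facing_hour(
    annual_personnel_costs: float,  # gross salary, allowances etc. per year
    annual_contract_hours: float,   # contracted hours per year
    client_facing_share: float,     # share of contract hours spent on client care
    overhead_markup: float,         # factor for overhead and materials, e.g. 1.30
) -> float:
    # Step 1: personnel costs per contract hour for this level of expertise.
    cost_per_contract_hour = annual_personnel_costs / annual_contract_hours
    # Step 2: only the client-facing share of contract hours delivers care.
    cost_per_care_hour = cost_per_contract_hour / client_facing_share
    # Step 3: add overhead and materials to get the actual cost per hour of care.
    return cost_per_care_hour * overhead_markup

# Illustrative level of expertise: EUR 40,000 a year, 1,500 contract hours,
# 65% of time spent on client care, 30% overhead/materials markup.
print(round(cost_per_client_facing_hour(40_000, 1_500, 0.65, 1.30), 2))  # 53.33
```

Repeating the calculation per level of expertise and per AWBZ function yields the cost prices that are compared in the benchmark.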

Amount of care in relation to costs

Calculating efficiency gives the cost per AWBZ function (or ZZP) per time unit.
The price level can then be explained by each of the underlying factors and any
relationships between these factors. In this way the benchmark can be used to
provide pointers for improvement in an organisation’s financial performance.
The cost price model can also be used independently of the benchmark, for
instance to calculate the effects of a new budget or of certain cost measures.
Figure 5.3 presents the model for nursing and care.

Figure 5.3 Cost price model for nursing and care


Source: PricewaterhouseCoopers; nursing and care benchmark


[Figure: cost price model linking client care needs (regional assessment of
need (RIO), AWBZ functions: housekeeping (HVZ), personal care (PV), nursing
(VP), support activities (OB), activation (AB), treatment (B), accommodation
(V)) to the organisation’s total costs (capital expenses, material and other
costs, personnel costs), and from there to personnel costs per contract hour
per level of expertise, productivity per client-facing hour, assignable costs
and overhead, and ultimately the cost price per function per time unit.]

Efficiency measurement entails more than cost price calculations per function
per hour. It should also include a comparison of different organisations,
bearing in mind the differences in care complexity among clients. One hour of
care given to clients requiring highly complex care may be more expensive
than an hour given to clients with less complex needs, for instance if more
highly skilled staff are used. This means that cost prices should not be
indiscriminately compared with one another.

We solved this problem of comparison in the benchmarks for nursing and care
homes by grouping the organisations into clusters based on the complexity of
care. Cluster classification was used mainly in measuring efficiency, but the
results of the client survey were also reported per cluster.

The same problem was found in the home care benchmark study. Cost prices in
organisations that focus on delivering housekeeping services care rather than
nursing cannot be simply compared with organisations in which the situation
is the reverse. Here, too, we were able to use cluster classification, although its
value has diminished over the years. These days, many home care providers
offer comprehensive care packages that are very similar. Things have changed
substantially since the days when family care providers offered a package that
differed markedly from that offered by the home nursing organisations.

We calculated the cluster classifications used in the benchmarks with the aid
of the DEA method. DEA stands for Data Envelopment Analysis and is a method
used to automatically calculate which clusters of care providers offer a similar
product mix, and subsequently to calculate which organisations offer most
care per euro. These organisations are found on the efficiency frontier, as
shown in Figure 5.4. Care providers to the left of this line are less efficient.

[Figure: home care providers plotted by number of hours of housekeeping
services per NLG 100 against number of hours of care per NLG 100; the
providers delivering the most care per guilder lie on the efficiency frontier.]

Figure 5.4 Data Envelopment Analysis

Source: PricewaterhouseCoopers; various benchmark studies
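DEA itself solves a linear programme for every organisation, which goes beyond a short example. The frontier idea behind Figure 5.4 can nevertheless be illustrated with a simple dominance check on two outputs per euro; the provider names and figures below are invented:

```python
# Rough illustration of the efficiency-frontier idea behind DEA.
# Real DEA solves a linear programme per organisation; this sketch only
# flags providers that no other provider dominates on both outputs.
# Provider names and figures are invented.

providers = {
    "A": (120.0, 40.0),  # (care hours per EUR 100, housekeeping hours per EUR 100)
    "B": (90.0, 90.0),
    "C": (60.0, 130.0),
    "D": (80.0, 80.0),   # dominated by B
    "E": (50.0, 50.0),   # dominated by B and D
}

def dominates(p, q):
    """p dominates q: at least as much of both outputs, and p is not q itself."""
    return p[0] >= q[0] and p[1] >= q[1] and p != q

frontier = sorted(
    name
    for name, outputs in providers.items()
    if not any(dominates(other, outputs) for other in providers.values())
)
print(frontier)  # ['A', 'B', 'C'] lie on the frontier; D and E are less efficient
```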

The literature also recommends grouping similar organisations into clusters.
Kaczmakre and Metcalfe suggest adjusting the outcomes for product mix
differences as an alternative to calculating the outcomes per cluster. In our
view, a drawback would be that the influence of the product mix becomes less
visible and that the benchmarking organisation may take inappropriate
measures as a result.

5.5 Quality of care

[Figure: the benchmark analytical model, repeated from Section 5.2.]

Healthcare benchmarks invariably consist of one or more quality building
blocks in addition to the financial building block. This is also the case in other
performance benchmarks.

The quality building blocks in healthcare benchmarking always represent the
quality of care. In other sectors this element of benchmarking may be referred
to as the quality of service or the quality of the products, but the principle is the
same.

From the very beginning we have evaluated the quality of care using client
assessments. We had to defend this choice at first, not so much to the home
care providers themselves or to industry associations, but to other
stakeholders. What do clients know about the quality of care, they would
argue. He or she (and in particular she) may notice whether the window sills
have been thoroughly dusted, but who are they to judge whether the
intravenous drip was properly sterilised? All parties meanwhile agree that
client assessments form an essential part of quality assessments and that
quality involves more than just medical or technical nursing skills. Measuring
client assessments is indispensable in an industry in which clients can
sometimes even switch to another care provider if they are not satisfied with
the quality offered. That is why client assessments have been included as a
fixed parameter in the standards for responsible care referred to earlier.

What to do if clients are unable to judge the quality?

Can we speak of a client’s judgement if the client himself or herself is unable to
assess the situation, as in the case of clients suffering from dementia or the
mentally disabled? May we then replace the client assessment by someone
else’s assessment, such as that of a relative or carer? And who is to determine
whether a client is competent to judge?

If at all possible, we want to know what the client’s own assessment is, even if
this means that individual interviews need to be conducted or that
questionnaires need to be especially adapted. We sometimes address
additional questions to a client’s family, but only if the client is truly incapable
of answering any questions. In the existing nursing, care and home care
benchmark this is done with the aid of the client surveys used to measure
responsible care. Client surveys used to assess care of the disabled are
developed as part of the benchmarking exercise. The benchmark participants
indicate whether clients are competent to judge. Whilst we are aware of the
drawbacks of this approach – do organisations assess the situation in a
comparable fashion? – we see no alternative as things stand now. And whereas
there are methods to measure the quality of care by observing clients, these
methods also make use of third parties (observers), and they are
time-consuming and expensive. That said, we should keep close tabs on
developments in order to further improve this aspect of healthcare
benchmarking.

Interviews or questionnaires?

Effective measurement of client assessments is a tricky business, even if the
clients are fully capable of stating their views. Each method seems to have its
pros and cons. Written surveys are easy to conduct, but the drawback is that the
questionnaire could be completed by someone other than the client. Whilst
this drawback does not exist when clients are interviewed, a danger of
face-to-face interviews is that the client may be influenced by the interviewer.
On top of that, interviewing is extremely time-consuming and costly. Costs can
be reduced by interviewing clients in groups, but the danger of this is that
clients influence each other. Telephone interviews are not as time-consuming
(no travelling time for the interviewer, no time needed to introduce the
interviewer), but here we find that clients are not always keen to be
interviewed by telephone. We have so far not used Internet surveys in
benchmarking care of the disabled or nursing, care and home care, as we
assume that many clients in these sectors would have problems taking part.

Where possible, we have therefore opted for written questionnaires in our
healthcare benchmarks (home care clients, family), and for individual
interviews in situations where response to a written questionnaire would
probably be too low (residents of nursing and care homes and the mentally
disabled). To ensure that the interview results are as objective as possible and
comparable, the quality of the interviewers is continuously monitored.

We only use an Internet tool for relatives of clients, and then only on a limited
scale and in addition to the written questionnaires.

The Internet

The possible uses of web surveys need to be explored further. Clients and their
families alike are becoming increasingly comfortable with the Internet, and
developing user-friendly survey tools should no longer be a problem. Providing
group instructions on location in nursing and care homes could enable
residents to subsequently complete the questionnaires by themselves, and
software is currently being developed for the disabled to go online. It may be
somewhat crude to say that there is a bright future for digital client surveys, as
personal contact will always be the preferred choice, but we should at least
ensure that the costs of written questionnaires or face-to-face interviews do not
deter us from taking stock of clients’ opinions at regular intervals.

In the housing corporation benchmark, for example, the entire client survey
was digitised (see Figure 5.5). The pilot for this benchmark was recently
completed.

Figure 5.5 Log-on screen client survey housing corporation benchmark


Source: Pilot housing corporation benchmark

Content of client survey

Whereas the contents of client surveys are tailored to the specific industry
being benchmarked, some issues feature in all surveys. Surveys can usually be
divided into questions relating directly to the care provided and questions
relating to how the care is organised. The first type of questions addresses the
way in which care providers treat clients (listening to clients, showing respect)
and the actual care provided (such as the quality of personal care, meals or
daily activities). In short, they reflect how clients experience the care they
receive. The second type includes questions such as whether the number of
staff suffices and whether the healthcare providers are easily accessible.

Examples of statements used in the client survey of the nursing and care
home benchmark:
• The personal grooming I receive here is very good.
• Residents receive adequate assistance when they need to go to the
toilet.
• I’m happy with the way they use the hoist.
• My care plan was drawn up in consultation with me.
• Staff always comply with arrangements made.
• If I ask my carers to do something, they sometimes give me the feeling
that I am a burden.

Despite the importance of client assessments to quality measurement, some
benchmarks also use other methods to measure the quality of care. In some
cases, carers themselves are also asked to give their opinions on the quality of
care. The nursing and care home benchmarks, for example, use care-specific
quality indicators based on the Resident Assessment Instrument (RAI). The
indicators used included medication and fall incidents. In the most recent
nursing, care and home care benchmark the quality of care is derived from how
benchmark participants score on the standards for responsible care. These
scores are based on multiple measurements of the quality of care: a client
survey and an evaluation by the Dutch Healthcare Inspectorate.

We expected to find a clear correlation between the client’s assessment and the
benchmarking organisation’s score on care-specific quality indicators, as one
would think that aspects such as bedsores or being strapped down would
influence a client’s opinion. But our analyses consistently found either no
correlation or only a weak one. The number of organisations with a high client
assessment score and a low score on care-specific indicators, or vice versa, was
far too large for the two to be correlated. This suggests that these two indicators
measure quite different aspects of quality. It is useful to take note of the fact
that the care-specific indicators are not what clients themselves find
important. They look at the way in which they are treated by their carers and
take their professional expertise for granted. Watson calls this a basic
requirement: clients only consider it to be a dimension of quality if care
providers fail to deliver. Clients see a serious medical error as a severe lack of
quality, but when choosing a care provider other aspects appear to play an
important role.
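A check of this kind boils down to calculating a correlation coefficient. The sketch below shows a minimal Pearson correlation on invented scores (deliberately constructed to be uncorrelated); the benchmark’s own analyses were of course more elaborate:

```python
# Minimal Pearson correlation check between two quality measures.
# The scores are invented and deliberately constructed to be uncorrelated;
# the benchmark's own analyses were more elaborate than this.
from math import sqrt

def pearson(xs, ys):
    """Pearson correlation coefficient of two equally long score lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var_x = sum((x - mx) ** 2 for x in xs)
    var_y = sum((y - my) ** 2 for y in ys)
    return cov / sqrt(var_x * var_y)

client_assessment = [6.0, 7.0, 8.0, 9.0]  # overall client scores (invented)
care_indicators = [8.0, 6.0, 9.0, 7.0]    # care-specific indicator scores (invented)

print(pearson(client_assessment, care_indicators))  # 0.0: no linear relationship
```

A coefficient near zero, as found in our analyses, indicates that the two measures capture quite different aspects of quality.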

Shortly after we began benchmarking, we saw the application of a quality
control system as a measure of the quality of care. In other words: do
organisations that have quality procedures in place offer better quality than
those that don’t? Analysis has shown that there was no correlation here either,
whether general or specific. The only correlation found was between the use of
quality control systems and care-specific indicators in the 2003 nursing and
care home benchmark study.

We asked ourselves whether this could be explained by the way in which
quality control was surveyed, namely with a questionnaire directed solely at
management. Might management paint too rosy a picture of the degree to
which paper measures are applied in practice? We never found out. The debate
about the role of quality control has been superseded by more recent insights
into the factors that make an organisation successful. We will come back to
this in more detail in Section 8.

5.6 Quality of the job

[Figure: the benchmark analytical model, repeated from Section 5.2.]

The second quality-related building block in most benchmarks is ‘quality of the
job’. As employees are the most important production factor in the healthcare
industry, it is crucial that the organisation for which they work is a good
employer.

In the early days of benchmarking, we used existing employee surveys to
measure the quality of the job. As these surveys were not developed specifically
for benchmarking purposes, they were not always fully aligned with the other
benchmarking dimensions. This could create problems, for instance in surveys
in which total scores could not be calculated, or where the scores for different
themes could not be compared with one another. Another drawback was that
some of these surveys included questions about aspects of quality on which
individual organisations had no influence, such as wage levels. In other words,
questions of this kind did not measure an organisation’s performance, which is
in fact the essence of benchmarking.

Despite these drawbacks, employee surveys have always proved to be very
useful in benchmarking. The survey findings have consistently yielded
practical recommendations and were in many respects correlated with other
building blocks. Participants in the most recent healthcare benchmarks have
decided to include employee surveys as an integral part of the benchmarking
exercise from the very start.

Employee surveys make use of questionnaires, usually written ones. Now, in
2007, the possibility of conducting surveys via the Internet has, of course, been
addressed, yet so far benchmarking participants have indicated that not all
their employees are able to complete an online survey, either at work or at
home. By contrast, the pilot housing corporation benchmark has opted for an
home. By contrast, the pilot housing corporation benchmark has opted for an
Internet version, additionally offering employees the possibility to request a
paper version. In a standard introductory letter, employees are provided with a
log-on code giving them direct access to a questionnaire. This easy-to-follow
approach encourages them to take part in the – much cheaper – Internet
survey. The most recent nursing, care and home care benchmark also offers
participants the option of completing either a paper version or an online
version.

5.7 Social responsibility


Some benchmarks have included a third dimension of quality (not included in
the figure): the quality of the care provider as a player amid its stakeholders. In
other words: quality in the eyes of society, which we will refer to here as ‘social
responsibility’. This building block addresses questions such as: ‘How do we
work in conjunction with other organisations?’, ‘How do we relate to our
financial backers?’ and ‘How do we contribute to the employment situation in
our region?’

The social dimension of quality is grounded in the belief that organisations in
general and publicly funded organisations in particular should act like socially
responsible businesses and should, by the same token, have an added value for
society.

The social responsibility building block has been used twice in healthcare
benchmarks: once in the 2002 home care benchmark study and once in the
pilot benchmark for healthcare administration agencies.

Examples of statements and questions in the social responsibility building
block are:
• The home care provider always does what has been agreed (five answer
categories).
• The healthcare administration agency is well-informed about specific
regional circumstances.
• What is the average number of days between the client’s actual
admission and the payment of the client’s own contribution? (Facts)
• What percentage of applications for the awarding of government
grants was submitted on time? (Facts)

This building block appeared to be difficult to measure in the home care
benchmark study. Our initial idea was to ask stakeholders to complete a
written questionnaire comprising questions that would reflect the care
provider’s social responsibility. The questionnaire made a distinction between
different types of stakeholders, such as financial backers, other healthcare
providers, client organisations and municipal authorities. If, for instance, a
financial backer is somewhat negative about an organisation’s performance in
this respect, it might mean that the organisation is a tough negotiator when it
comes to production targets. And if a client representative body is somewhat
negative about the care provider, it tends to indicate that something else is the
matter. While the outcomes yield more valuable strategic management
information if a distinction is made between different types of stakeholders,
breaking them down into categories may result in a very small number of
stakeholders per category – in some cases a mere handful, or even no more than
one. As this obviously compromises the anonymity of the answers, some
stakeholders decided not to respond at all and others informed us that they
would prefer to have a bilateral meeting with the care provider rather than
completing a questionnaire.

We were therefore unable to calculate a social responsibility score for
individual organisations. However, a national analysis (1,312 useable
questionnaires) indicated that respondents had substantially more positive
views about some issues than about others. For example, questions on efforts to
deliver good ‘chain care’ produced much higher scores than those on the
degree to which organisations informed stakeholders about their policies.

The pilot benchmark for healthcare administration agencies did yield information about individual agencies, but brought to light another issue. It showed quite clearly that healthcare administration agencies scored less well
on a number of indicators. However, as these items were the same for all
healthcare administration agencies in the pilot, the question that arose was
what added value there would be in including these same indicators when
launching the benchmark country-wide. The healthcare administration
agencies themselves felt there was a big chance that this would yield the same
findings. In other words: the area for learning had already been identified.

And so it was decided in both cases not to include social responsibility in future
benchmarks. That said, the notion that society’s views matter has not been
discarded. The housing corporation benchmark has included social
responsibility as a building block, and the vocational education sector has also
decided to include this building block in their next benchmarking exercise.

5.8 Relationship between building blocks

[Figure: the benchmark analysis model – input, building blocks, strategic analyses and results, feeding into benchmark strategic management information]

Once the results of the individual building blocks in a benchmark study are
known, the relationships between the various building blocks can be analysed.
Are employees with a higher average salary more positive about their jobs than
employees with a lower salary? If clients rate the way in which they are treated
by staff poorly, could that be explained by the fact that the organisation
employs many low-skilled workers? Analysis of the relationships between the
building blocks has yielded a wealth of possible relationships. Whereas some of
the correlations found are weak and merely indicative, others are significant
and have been confirmed by regression analysis, and are consistent over time
and across industries.

Clients were found to be more positive about the quality of care than were
healthcare staff (and more positive than the clients’ contact at the care
provider). As a rule, however, one can say that organisations that score
well on quality of care also score well on quality of the job. We found a
relationship between the organisation’s financial performance and several
aspects of quality of the job. This would suggest that people feel better if
they work for organisations that perform well financially. Conversely,
healthy staff (the quality indicators measured related mainly to matters of
health) may be expected to contribute to financially sound business
operations.

Some benchmarks showed a relationship between good financial performance and good quality of care, but this was, admittedly, not always
the case. Efficiency and quality are a good match, but need not necessarily
go hand in hand.

5.9 Best practices

[Figure: the benchmark analysis model – input, building blocks, strategic analyses and results, feeding into benchmark strategic management information]

The main question that arises after the analysis has been performed is who the
benchmarked organisations should try to emulate. If a healthcare provider
wishes to improve the quality of care, it would seem logical for it to emulate the
organisations that perform best in the client survey. Similarly, if it’s an
organisation’s efficiency that needs to be improved, the logical choice would
seem to be to compare oneself with the organisations that score best on this
item.

Yet however logical these conclusions may seem, they do not appear to hold
true, as organisations are not one-dimensional. Could it be that a very high
score on one building block takes its toll on other building blocks? We found
that this was indeed the case. Whereas almost all best performers score well on individual building blocks, they tend not to rank among the very best.
exceptionally high level of efficiency tends to go hand in hand with
middle-of-the-road quality, and vice versa. True best practices are found in
organisations that have succeeded in striking a balance between the building
blocks, providing them with a good overall score. One could compare it to
all-round speedskating championships: the all-round winner is usually found
among the top five for most of the individual events, but need not win over
every single distance in order to become the all-round champion.

In short, best practices are determined by the total score. In our reports, we
usually visualise this in a matrix presenting the different scores (see Figure 5.6).
The matrix has three dimensions, as it uses colours in addition to showing the
position. The organisations in the top right corner of the figure score well on
the financial building block and in the client survey, but only the care
providers given in blue also performed well in the employee survey. The
blue organisations in the top right corner (BP) are best-practice organisations.
The care providers given in light blue and black are not, because they score less
well in the employee survey.

[Figure 5.6 Matrix of total scores home care benchmark: client assessment (y-axis, 7.8–8.6) plotted against total score financial survey (x-axis, 4.5–9.5)]
Source: 2004 home care benchmark

We consider the above method of identifying best practices to be a very effective one. In the early years of benchmarking, there was considerable
debate about the question of where to draw the line. This line is arbitrary by
definition, as benchmarks are not based on absolute standards. In some cases
there is a natural divide, for instance where a group of organisations scored
high and the next best group scored substantially lower. More often, however,
the line is drawn for pragmatic reasons, for example because the group of best
practices should be neither too big (as participants would no longer be able to
set themselves apart), nor too small (in which case the make-up of the group
would likely be too one-sided). We have, however, always set certain limiting
conditions. An organisation may never, for example, be identified as a best
practice if it scores below average on a particular building block.
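The selection rule described here – rank on total score, but never admit an organisation that falls below the group average on any single building block – can be sketched as follows. The organisations, block names and scores are hypothetical.

```python
# Hypothetical scores per organisation on three building blocks (scale 1-10).
scores = {
    "A": {"financial": 9.5, "clients": 7.0, "employees": 6.0},  # one-sided top scorer
    "B": {"financial": 8.0, "clients": 8.2, "employees": 7.8},  # balanced
    "C": {"financial": 6.0, "clients": 6.5, "employees": 6.2},
    "D": {"financial": 7.5, "clients": 7.9, "employees": 7.6},
}

def identify_best_practices(scores, top_n=2):
    """Rank organisations by total score, but exclude any organisation that
    scores below the group average on even one building block."""
    blocks = next(iter(scores.values())).keys()
    avg = {b: sum(org[b] for org in scores.values()) / len(scores) for b in blocks}
    eligible = [name for name, org in scores.items()
                if all(org[b] >= avg[b] for b in blocks)]
    eligible.sort(key=lambda n: sum(scores[n].values()), reverse=True)
    return eligible[:top_n]

# Organisation A is excluded despite its top financial score, because its
# client score is below average - the all-round skater wins, not the sprinter.
print(identify_best_practices(scores))
```

Note how the balance requirement, not the single highest score, determines who qualifies; this mirrors the all-round speedskating analogy used above.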

Identifying best practices using the above method also has a drawback: it is
only effective with a sufficiently large group of participants. And the greater
the number of building blocks included in the exercise, the more participants
are needed for there to be enough organisations that score well on all building
blocks. Given this drawback, the identification of best practices has not always
been given the highest priority.

It is also for this reason that it has been suggested to make do with ‘good
practices’ (a broader group of good performers) or to work with the best-scoring
organisations per building block after all.

From a research perspective, we would advise participants always to carefully consider whether it would be wise to discard best practices. Ambitious
organisations that compare their performance with ‘good practices’, for
example, run the risk of not achieving their full potential. And the risk of
making comparisons with high-scoring organisations per building block is that
the approach becomes too one-sided. We expect that the theory of performance
excellence (see Section 8) will offer a solution to this in the future. Once we
know which aspects of an organisation lead to performance excellence, it is
easier to determine whom to emulate in order to achieve the desired level.

5.10 Explaining performance

[Figure: the benchmark analysis model – input, building blocks, strategic analyses and results, feeding into benchmark strategic management information]

‘Explaining performance’ (or business operations or process management, or whatever you want to call it) features at the bottom of the model discussed in
this section. In short, this part of the model addresses the question of how the
organisation has achieved its financial and quality performance.

Whereas taking stock of business operations has been a part of healthcare benchmarking for years, actual performance was not usually measured. Our
rationale for doing so was as follows: if an organisation measures its
performance against that of others and concludes that they are performing
better, it will want to know how these others have succeeded in doing so. In
other words, it will want to know how their performance can be explained.
Performance can be explained in part by aspects of performance or facts that
feature in the same building block. An example would be the finding that
clients are not very positive because they feel they are not sufficiently
consulted in the development of their care plan. This information may be
enough for an organisation to take appropriate measures.

Conversely, an organisation may need to gain insight into the business operations of other organisations in order to know what actions for
improvement are needed. Are clients’ opinions about the quality of meals
related to the cooking method used? Do care providers that perform well
financially have access to more extensive management information than
others? Is it true that best-practice organisations are less hierarchical than
others? A related issue, although not strictly an aspect of business operations,
is the question of whether cultural differences could explain differences in
performance.

The home care benchmark study, the benchmark studies in nursing and care,
the pilot benchmark addressing care of the disabled and the pilot benchmark
for healthcare administration agencies have all made efforts to gain better
insight into business operations and culture. The questionnaires used for this
purpose range from a small set of questions about aspects considered to be
promising to hundreds of questions addressing such issues as healthcare
processes, facility management, financial policy, organisational set-up, etc.

Blood, sweat and tears have been put into the questionnaires, not only to
design them but also, no doubt, to complete them. For weren’t we all
determined to find an explanation for good performance? New questions were
invented and reinvented, drawing from the most recent literature or yet
another round of interviews with well-performing organisations. The expertise
of specialist agencies was called in, indicators were developed and tested,
hypotheses were formulated. And so it was rather frustrating to have to
conclude that the results of these efforts were meagre. Or, to put it more
accurately, that the results did not yield a clear picture. The findings showed
that good performers did not all have the same pattern of business operations.
The expected relationships between performance and processes failed to
materialise, correlations found in one benchmarking study did not reappear in
the next, or certain processes were found to be linked not only to performance
excellence but also to exceptionally bad performance.

In one of the benchmark studies, for example, a significant positive correlation was found between use of a relatively limited amount of management
information and good performance. Did this warrant the conclusion that a
focus on the big picture and management that rises above the detail lead to
better performance? We thought so, for a while, until we found out that the
correlation between a limited set of indicators and poor performance was just
as strong. It was only the organisations halfway down the league table that
used more extensive information.

Still, we did manage to bring to light a number of relationships over the years
(see box). But this gave rise to yet another issue: if we know which factors
explain good performance, what reason would there be to continue to
investigate this?

Examples of relationships between performance and business operations
• Organisations with a limited number of management layers score better on quality of the job.88
• The bigger the healthcare teams, the less likely it is that clients will be
consulted in drawing up a care plan.89
• The bigger the healthcare teams, the less positive the client assessment.90
• The more insight one has into the cost price, the less cost-cutting takes place in care and services.91
• The more volunteers, the more negative clients are.92

A finding in all benchmarks was that strong guidance and commitment by middle management is related to positive client assessment, positive staff
assessment and often also to good financial performance. The presence of
shared values was also found to be crucial.

We found the best example of correlations in the 2004 home care benchmark in the area of planning, by which we mean planning the
arrangements made by healthcare staff and clients. Whereas the employee
survey included questions about the employee’s opinions on planning and
rostering, the client survey inquired whether the arrangements made
were complied with and how clients felt about this. The process
management building block inquired into the planning system – whether
it was manual or automated, centralised or decentralised.
We found the following:
• If staff are negative about the planning system, clients are negative
about compliance with arrangements made.
• Compliance with arrangements made is an important factor in clients’
overall assessment of the quality of care.
• If staff are negative about the planning system, they report sick more
often (which in turn has a negative effect on the organisation’s
financial performance).
• A decentralised planning system is correlated with higher staff
productivity. This might be explained by the fact that it is easier for
staff to respond to unforeseen circumstances. Whereas this increases
the pressures at work, it does not appear to negatively affect their
work enjoyment and health.

As far as we are concerned, the debate about business operations has


changed course as a result of new insights into performance excellence. Not
only do these new insights shed a fresh light on the factors that affect good
performance, they also offer the possibility to identify the degree to which
these factors are at play. This enables us to determine how far advanced an
organisation is on the road to performance excellence. We shall discuss
these insights in more detail in Section 8.

5.11 Innovation
In developing the new nursing, care and home care benchmark, ActiZ (the
Dutch association for nursing, care and home care) decided to introduce the
building block ‘innovation’. ActiZ sees the capacity to innovate as a crucial
factor for success and survival. As discussed in Section 8, this notion is
corroborated in the contemporary literature. We are currently co-designing an
instrument to measure the innovative power of organisations, enabling
benchmark partners to learn from each other in this area too.

The influence of external factors

One of the questions that should be addressed in a benchmarking exercise is what influence external factors have on the performance of benchmark
participants, the reason being that it is felt to be ‘unfair’ not to take account of
circumstantial factors, particularly if the organisations benchmarked have no
influence on these factors. There would be no point in emulating an
organisation that succeeded in performing well merely because its external
circumstances were more favourable. An organisation located in an area
known to be unsafe, for example, will have to spend more money on security
than one located in a relatively safe area. Or, in a city known for its critical and
articulate residents, it will be more difficult to win a positive client assessment
than in a rural village.

In our healthcare benchmarks we have taken account of external, fact-based factors from the very start. We did this by making various cross-sections (if
there were a large enough number of participants). This enabled us to chart
client assessments by region and to examine whether there were any
differences between urban and rural areas. Further, in calculating the scores
for the quality of care we adjusted for a number of client characteristics, such as age, sex and whether or not the client was a resident of a care home or nursing home.93
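One simple way to adjust for such client characteristics is to compare each rating with the national mean for clients of the same category, so that an organisation’s score reflects how it performs given its client mix. The data and field names below are purely illustrative; the adjustments in the actual benchmarks were more elaborate.

```python
from statistics import mean

# Hypothetical raw client assessments with one background characteristic
# (care-home vs nursing-home resident); organisations and scores are invented.
ratings = [
    {"org": "X", "setting": "care_home",    "score": 8.2},
    {"org": "X", "setting": "nursing_home", "score": 7.1},
    {"org": "Y", "setting": "care_home",    "score": 8.0},
    {"org": "Y", "setting": "nursing_home", "score": 7.5},
]

# National mean per client category: the expected score given the client's
# circumstances, independent of which organisation delivers the care.
national = {}
for s in {r["setting"] for r in ratings}:
    national[s] = mean(r["score"] for r in ratings if r["setting"] == s)

def adjusted_score(org):
    """Mean deviation of an organisation's ratings from the national mean
    for each client's category - positive means better than expected."""
    return mean(r["score"] - national[r["setting"]]
                for r in ratings if r["org"] == org)
```

Here organisation Y ends up with the higher adjusted score even though X has the highest single raw rating, because Y does better relative to what each client category normally scores.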

The analyses of the circumstantial factors yielded a number of interesting findings:
• In most benchmarks we found a relationship between the size of the
organisation and its performance: as a rule, we can say that small
organisations (stand-alone or part of a group) performed better, both in
financial terms and in terms of the quality delivered. However, this
relationship became more diffuse, or even disappeared altogether, in later
benchmarks.
• Organisations in urban areas score almost consistently lower on all
building blocks than organisations in rural areas.
• Organisations of a particular religious persuasion perform better than
others.
• Clients of care homes rate the quality of care more positively than clients of
nursing homes.

In summary, the ‘external’ factors found to be related to performance were the level of urbanisation and the size, persuasion and type of organisation. It
therefore makes sense to take account of these factors when comparing
organisations – at least, when comparing aspects of performance on which the
organisation has no influence. This does not mean, however, that nothing can
be learned from unlike organisations. If small organisations are found to
perform better, for example, large organisations might do well to organise
their operations on a smaller, more human scale. And the good scores found
among organisations with a distinct religious persuasion remind us of the
discussion about strategic positioning: clear strategic choices that are also
implemented in practice seem to be conducive to good performance. This could
be a valuable finding for all organisations, even if the number of organisations
with a specific persuasion is on the decline.

5.12 Reporting results

[Figure: the benchmark analysis model – input, building blocks, strategic analyses and results, feeding into benchmark strategic management information]

The right side of the benchmark model shows that the benchmark outcomes,
the data measured and analysed, are reported at various levels. The central
level is that of the benchmarked organisation itself. In practice, defining this
level may not be as simple as it might seem. Does it refer to a group comprising
several locations or to an individual location? And can a large group of
organisations be compared with a foundation or trust that runs a single, small
organisation? These are questions that need to be answered prior to the
benchmark survey to avoid disappointment afterwards.

Comparing information at the aggregate level is not always useful, especially not for large organisations. One might, for example, query the value of the
finding that the average score in the employee survey is 7 if two of the
individual organisations score no higher than 5. In this case, the organisation
would probably want to have information per location, or maybe at unit level.
Healthcare benchmarks therefore usually offer the possibility of reporting at a
lower level, at least for the quality tools.

The discussion tends to focus on financial data: participants sometimes indicate that it is difficult or even impossible to break down data by location or
department. In practice, however, the allocation of resources to locations
and/or departments does influence the performance of the organisational
entity concerned, and it is therefore important that this information is
available.

A special application has been developed for the most recent benchmarks for care of the disabled and for nursing, care and home care (VVT), enabling participants to specify their organisational set-up and to indicate the desired level of reporting (which in turn determines the level at which they need to
provide the data). ActiZ is developing an application to benchmark individual
products in the nursing, care and home care benchmark.

Individual benchmarking participants and their sub-entities are usually not the only parties to receive a report on the benchmarking results. From the very
start, we made a habit of reporting aggregate results to the healthcare industry
as a whole, or at least to all participants. The data presented in these generic
reports cannot be attributed to individual healthcare providers, nor should
they be. The purpose of generic reporting is to present the state of play in the
industry as a whole, preferably comparing the outcomes with those found in
previous benchmarking exercises, of course.

The industry level is also the level at which relationships between the building
blocks, or relationships between performance and business operations, can be
identified. This is not always clear to all parties concerned. Individual
benchmarking participants tend to expect that industry-wide reporting will
yield an ‘integral analysis’ of their own organisation, but this is usually not
possible. The finding that a positive assessment by clients of their daily
activities is related to their carers’ positive assessment of their work, for
instance, is only statistically valid if the number of observations is sufficiently
large. Conclusions about individual care providers can only be drawn for very
large organisations that have access to data specified per organisational entity.

Generic industry reports contribute to the transparency of the industry concerned and are often widely distributed.

5.13 Benchmark strategic management information

[Figure: the benchmark analysis model – input, building blocks, strategic analyses and results, feeding into benchmark strategic management information]

The last dimension of the benchmark model – presented at the bottom of the
figure – shows that participants can use the benchmark results as strategic
management information for their own organisation. This information will
enable participants to consider possible actions for improvement. If
improvements are needed, the care provider may decide to make adjustments
to its strategy or to existing solutions addressing strategic issues, or it may
decide to employ its people and resources in a different way. Benchmarking is
therefore a cyclical process.

6 The step-by-step benchmarking process

This section looks into the different phases of benchmarking. A review of the
literature reveals general agreement that benchmarking should at least include preparation, data collection and analysis, and follow-up.94 While some
researchers present more extensive step-by-step plans than others, they all
distinguish:

• a preparation phase to identify what to benchmark, find benchmarking partners and develop models and instruments
• an implementation phase during which data are gathered, validated and
analysed
• a phase during which findings are reported and discussed

Most researchers see benchmarking as a cyclical process in which benchmark participants use their findings to implement improvements and subsequently
to measure the effects via a new benchmarking exercise. Some authors cite the
Deming Cycle: plan, do (research), check and act (improve). De Vries and Van
der Togt are in favour of a cyclical model comprising four main steps –
preparation, research, planning and implementation of actions, and aftercare – which in turn are subdivided into a total of nine substeps.95 Harrington and Harrington96 divide the benchmarking process into five phases: planning, internal data collection and analysis, external data collection and analysis, proposal for improvement and implementation of improvements. These phases are made up of a total of twenty activities.97 Bullivant’s approach consists of twelve steps divided into three phases: planning, analysis and action.98 And Keehley has designed an eleven-step plan.

6.1 Benchmarking phases in the public sector


We shall first address the phases described in Benchmarking in de publieke sector.
The authors begin by describing a design phase in which the partners discuss
the purpose of the benchmarking exercise, the indicators to be compared, the
analytical models and the parameters (funding, engaging third parties,
publications, etc.).

The design phase consists of three activities. The first is to identify goals and
parameters. When identifying the goals, the following elements should be
addressed: learning versus accountability, an integral approach versus a partial
approach, and a general description of the indicators to be compared. The
parameters include such things as the research costs and how these costs are to
be divided, whether or not to engage third parties during the implementation
phase, the role of other stakeholders (financial backers, regulators,
government bodies) and the manner in which the findings should be disclosed.99

The second activity during the design phase is the identification of the
indicators against which benchmarking takes place. A common framework has
proven to be a good starting point for a discussion of the indicators to be
selected. For example, the much-used INK management model referred to
earlier, designed by the Dutch Quality Institute, offers a framework that
encompasses the organisation’s performance, client assessments, employee
assessments and the views of society at large, as well as organisational
parameters concerning leadership, organisation of the primary process, HRM, strategy and policy, and resource organisation.100

The third activity during the design phase entails designing a comparative and explanatory model to determine the depth of analysis.101 The preparation phase
is followed by an implementation phase consisting of data gathering, analysis
and reporting. The last phase in the benchmark process is follow-up, when
participants are supported in their efforts to work towards improvement and
the benchmark is evaluated.

6.2 Keehley’s step-by-step plan


The most extensive benchmarking method is Keehley’s step-by-step plan.102 His
benchmarking process consists of eleven steps and starts only after an
organisation has selected a process, product, service or function to be
benchmarked. We shall look into his step-by-step plan and give
PricewaterhouseCoopers’ views on each phase.

Step 0: Select the processes to be benchmarked

Keehley sees the selection of the processes to be benchmarked as the very first
step, which should not be taken lightly. In doing this, the organisation should
take account of the degree to which it is ‘ready’ for benchmarking. It should
address questions such as ‘How experienced are we in benchmarking?’, ‘Do we
have a learning organisation culture, or do we suffer from We Are Different or
Not Invented Here syndromes?’ Other factors are the strategic importance of
the activities to be benchmarked and possible external pressure from clients,
competitors or politicians. It is judicious to prioritise the processes to be benchmarked using explicit criteria.103

PricewaterhouseCoopers’ perspective
We would argue in favour of opting for an integrated benchmark rather than
selecting individual processes to be benchmarked. Selecting one or a few issues
is only useful if these issues exist more or less in isolation and are barely related
to other issues. An example would be a benchmark of an organisation’s
treasury activities (for further details, see Section 7). We would also make a case
for addressing an organisation’s performance in the broadest sense. Of course, this would also entail a selection, as it is not possible to include all aspects of an
organisation’s performance in a benchmarking exercise. A selection that is
made as part of an integral approach is different, however, in the sense that it
entails a selection of an organisation’s core activities, or the (temporary)
exclusion of issues for which information is lacking.

Step 1: Determine the purpose and scope

After having selected what is to be benchmarked, Keehley’s first step in the actual benchmarking process is to determine the purpose and scope of the
project. The organisation concerned should explain why they want to
benchmark and what they hope to achieve. It is crucial that all parties involved
discuss and document the reasons for benchmarking and the expected results.
They should also meticulously define the scope of the project in terms of
breadth, depth, time and resources, so that participants do not lose sight of the
original brief during the course of the exercise. If one fails to do so, there is a
danger of taking on board other processes that are also worth examining, and
before you know it the project becomes unmanageable and loses its focus. The
purpose and scope of a project can be recorded in a document that will serve as
a contract between the organisation benchmarked and the benchmarking
team.

PricewaterhouseCoopers’ perspective
In our view, two aspects should be distinguished: involving all relevant actors
in the organisation, and staying focused. The first element is addressed in more
detail elsewhere in this report. As for remaining focused, we share Keehley’s
view that this should feature prominently during the preparations. It is vital
that the scope of the project is spelled out in a plan of action, for instance, as
well as in agreements with individual participants. Only then is it possible to
keep control of the process and avoid disappointment afterwards. This does not
mean, of course, that we should stick rigidly to what has been agreed at the
start of the benchmarking exercise. Each benchmarking study should be a
voyage of discovery for the participants and for ourselves. The need to
incorporate innovations in a benchmark or to respond to changing legislation
means that there will always be things that have to be reinvented. In such cases
parties would do well to give each other the space to discard categorisation that
has no basis in reality, say, or to re-examine that ‘one and only’ promising
explanatory factor one more time.

Step 2: Know yourself, or: understand your own processes

Keehley underlines the need to make adequate preparations and take stock of
the existing situation. When this step has been completed, the benchmark
team will have a detailed flowchart of its own processes and some idea of
potential performance indicators, bottlenecks and solutions. This
introspection will yield a project plan for the remaining benchmark project.
The plan details what needs to be done when, and by whom.

PricewaterhouseCoopers’ perspective
As the healthcare industry tends to use ready-made instruments, some of the
work in this step does not always need to be done. Even so, it is important in any
organisation that a dedicated team focus on the benchmarking exercise so that
the organisation will not be taken by surprise when it is asked to provide
information or, at a later stage, by the benchmark findings themselves.

Step 3: Search for potential benchmarking partners

This step begins with the organisation drawing up an extensive list of potential
partners. Important sources of information are research literature, personal
contacts, benchmark databases (such as the Benchmarking Exchange of the
American Society for Quality Control) and the services of consultants
specialised in benchmarking. The initial list will then be trimmed using
carefully chosen selection criteria, leaving only the ‘perfect few’. One would,
for example, need to check whether a potential partner actually has a best
practice and whether the best practice is not only proven to be successful, but is
also transferable and repeatable in the sense that it is not linked to unique
circumstances. Other criteria should also be borne in mind, such as the
degree to which the potential partner is similar to, or differs from, one’s own
organisation. Organisations initiating a benchmarking study for the first time
would do well to restrict the exercise to internal or similar partners. As a rule,
one could say that the more experienced an organisation is in benchmarking,
the more capable it will be of benchmarking against unlike partners. Note that
real breakthroughs are most likely to happen when unlike partners are
compared.
Benchmarking in Dutch healthcare

PricewaterhouseCoopers’ perspective
In healthcare benchmarking the search for partners is usually initiated by the
industry associations: they announce to their members when the next
benchmark study is due to take place, provide information and recruit. By
contrast, the benchmarking exercise for housing corporations opted for the
establishment of an independent benchmarking body charged with recruiting
the benchmarking partners.

Step 4: Select indicators

This step comprises selecting the performance indicators to be used to compare one’s own benchmarked processes with the performance of the benchmarking
partners. As was the case with the selection of potential partners, a list of all
possible indicators is drawn up and subsequently pared down to manageable
proportions. The indicators should at least cover the most important steps in
the benchmarked process, as well as the key influencing variables. In selecting
the performance indicators, the benchmarking team should align the
organisation’s own information needs with practical considerations such as
cost, time and the willingness of the partners to gather information – for they
are not endlessly willing to do so.

PricewaterhouseCoopers’ perspective
In healthcare benchmarking, selecting indicators is a process that necessarily
involves only a representative selection of the participants (brought together in
a sounding-board group). Whereas we agree that the list of indicators should
stay manageable, it should not become too short. Organisations truly striving
to launch actions for improvement should not expect to be able to identify such
actions on the basis of rough outcomes. Too small a number of indicators could
lead to disappointment (‘We already know this’) or to distorted findings (‘We
didn’t take into account the outcomes that were not AWBZ-related because
that would have been too cumbersome’).

Step 5: Collect internal information

An organisation needs to review its own performance before asking other organisations to provide information about how they are performing. In this
step, the performance indicators selected in the previous step are therefore
applied to the process, function or service benchmarked.

PricewaterhouseCoopers’ perspective
In healthcare benchmarking, information relating to all participating
organisations is requested simultaneously to allow for comparison. Keehley
suggests that, in future, each organisation should be able to decide for itself
when it wishes to embark on a benchmarking exercise, assuming that it has
access to a database of recent comparative data. We fully support Keehley’s
idea, in the sense that it is important for organisations to closely examine their
own processes.

Step 6: Collect information about partner organisations

Information on the performance indicators is gathered for every organisation included in the list of selected potential partners. The
performance of one’s own organisation is then compared with the information
about the potential partner organisations. Whereas the comparisons do not
need to be exhaustive in every respect, the more comprehensive the
information provided, the better the comparison will be. Such comparisons
will show up which partners perform significantly better than others. If the
best-performing organisations also meet the other selection criteria, they will
become the benchmarking partners – provided, of course, that they are willing
to participate and to assist the benchmarking organisation in the more
laborious data collection phase that follows. Additional information can be
gathered with the aid of questionnaires or telephone interviews, or – better
still – by visiting the partner organisations. Such site visits, provided they are
well-prepared, deepen knowledge and understanding of the findings. Going on
excursions ‘just to kick the tires’ adds very little value.

PricewaterhouseCoopers’ perspective
As things stand now, data collection and data analysis in healthcare
benchmarking are coordinated and carried out by research agencies in
consultation with the principal and the sounding-board groups. Participants in
healthcare benchmarks typically come into contact with each other later on in
the process – usually after the results have been released.

Step 7: Analyse the gap

In the previous steps, the benchmarking participant gathered information about its own organisation as well as provisional data about potential partners and additional, in-depth information about the actual partners. Step 7 addresses the latter category of information. These data allow the benchmarking team to identify performance gaps and best practices. The
analyses have a quantitative and a qualitative component. After a performance
gap has been quantitatively identified, a qualitative analysis is performed to
answer key questions such as: what does the benchmark partner do in the same
way as we do, and what does it do differently?
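To make the quantitative side of this step concrete, here is a minimal sketch of a gap computation. The indicator names and scores are hypothetical, invented purely for illustration; they are not taken from any actual benchmark.

```python
# Hypothetical indicator scores: own organisation versus a benchmarking partner.
own = {"cost_per_client": 118.0, "sick_leave_pct": 6.8, "client_satisfaction": 7.1}
partner = {"cost_per_client": 102.0, "sick_leave_pct": 4.9, "client_satisfaction": 7.8}

# For cost and sick leave, lower is better; for satisfaction, higher is better.
lower_is_better = {"cost_per_client", "sick_leave_pct"}

def performance_gaps(own, partner):
    """Return per-indicator gaps, positive where the partner performs better."""
    gaps = {}
    for indicator, own_score in own.items():
        diff = own_score - partner[indicator]
        gaps[indicator] = diff if indicator in lower_is_better else -diff
    return gaps

gaps = performance_gaps(own, partner)
# A positive gap flags the indicators where the qualitative follow-up question
# applies: what does the partner do differently?
```

In this invented example all three gaps come out positive, so each indicator would prompt a qualitative analysis of the partner’s practice.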

Keehley gives the following tips:
• Avoid paralysis by overanalysing. There is a danger of getting bogged down
in peripheral details.
• Keep an open mind for the unexpected. If you focus on the differences and
exceptions, you will be more likely to detect opportunities for fundamental
improvement of your own processes.
• Take a systematic approach to selecting recommendations. Select various
excellent practices on the basis of explicit criteria.

PricewaterhouseCoopers’ perspective
The participants in healthcare benchmarking are given feedback on these
analyses. But the real pointers for improvement only come to the fore in
consultations with other organisations. That is why healthcare benchmarking
exercises are often concluded with a series of workshops involving a limited
number of participants, or at least with an invitation to take part in such
workshops.

Step 8: Implement measures to close the gap

Once best practices have been identified and selected and management has
adopted the recommendations of the benchmarking team, the main focus of
the project is to incorporate opportunities for improvement. The start of this
phase is the development of an action or implementation plan that provides an
answer to the key questions: who does what and when? Resistance to change
can be reduced by ensuring the active participation of all parties concerned in
the implementation process.

One should not assume, however, that simply adopting a best practice will
automatically lead to the desired results. Keehley argues that a copycat
approach tends to result from superficial internal research and anecdotal
external research, also referred to as stop-and-shop benchmarking. Cloning
the practices and processes of model organisations can have dramatic
repercussions that widen the performance gap rather than narrowing it.

PricewaterhouseCoopers’ perspective
Strictly speaking, implementing actions for improvement may not be a part of
the benchmarking process. That said, it may well be the most important phase
for the organisations concerned. Appropriate timing of actions for
improvement is just as important as a sound approach. Actions for
improvement that fit logically into the planning and monitoring cycle are
more likely to become firmly embedded in the organisation than actions taken
independently of the cycle. It is more efficient, of course, to have access to the
benchmarking findings when the budget and policy plans are being drawn up.

Step 9: Monitor the results

Improvement measures need special care and attention shortly after they have
been introduced. Organisations are always at risk of slipping back into old,
familiar practices. It is therefore important that, during the implementation
phase and the initial stages of change, the participants monitor whether the
measure is being introduced as planned and whether the desired results are
being achieved in practice. Do the performance indicators reflect
improvements?

PricewaterhouseCoopers’ perspective
See step 11.

Step 10: Adapt based on the results

The benchmarking process needs to be adapted from time to time: the introduction of new practices often changes the organisation, and the environment in which it operates is not static. Re-evaluating the process
essentially means carrying out a mini-benchmarking exercise to measure the
effects of the changes implemented on everyday practice and on progress
towards the goals set.

PricewaterhouseCoopers’ perspective
See step 11.

Step 11: Restart search for best practices

Benchmarking is an ongoing process. As soon as a change has been implemented, there is scope for further improvement. Perfect organisations do not exist, and as organisations gain more benchmarking experience, the search for best practices can be extended to include lesser-known organisations and more complex processes.

PricewaterhouseCoopers’ perspective
We have noticed that the call for continuous benchmarking is becoming ever
louder. Participants want a better understanding of the impact of the measures
taken on their performance and position. Whilst most benchmarking partners
are in favour of a frequency of once every two years, there are also partners who
would like to repeat the benchmarking exercise, or at least one or several
aspects of it, more frequently. This is one of the reasons why we are now
designing a system in which the organisations themselves can determine the
frequency and starting dates. We have been commissioned to do so by ActiZ.

6.3 A phased approach to healthcare benchmarking


We will now discuss the phases we generally apply in healthcare benchmarking. These coincide with those described in Benchmarking in de publieke sector and with Keehley’s step-by-step plan, adapted for our purposes.
We have summarised the phases in Figure 6.1.

[Figure: five phases – further development of the benchmark model; gathering, structuring and validating data; analysing data; reporting the findings; discussing the findings]

Figure 6.1 Benchmarking phases
Source: PricewaterhouseCoopers; various benchmark studies

The five phases in the figure can be clustered into three main phases:
preparations (phase 1), the research itself (phases 2, 3 and 4) and actions to be
taken in response to feedback on findings (phase 5). The latter phase feeds into
the actual implementation of improvements. As the actions for improvement
no longer strictly form part of the benchmarking process, they have not been
included in the figure. Needless to say, bringing about organisational
improvement is the very reason for benchmarking in the first place.

Phase 1: Further development of the benchmark model

We tend to speak of further development as we are no longer starting from scratch, and are able to build on experience gained in earlier benchmark studies. That said, every exercise has new elements that need to be incorporated in the model. The sounding-board groups in which participants from the organisations benchmarked are represented play a central role in this phase. They contribute to the development of innovations, evaluate new models and questions, and if necessary test new instruments.

The sign-up procedure, during which partners register for participation in the
benchmark, concludes the first phase.

Phase 2: Gathering, structuring and validating data

Once the benchmark model and its constituent building blocks have been
developed, participants will begin to gather information for the benchmark.
This can be done in a variety of ways, such as:
• conducting interviews using questionnaires (in particular in the client
survey)
• asking client and employee contacts to complete questionnaires
• asking the organisation benchmarked to provide information
• requesting data about the organisation from other information sources

All data are saved and structured in a dedicated benchmark database. Using
sophisticated ICT and well-structured data can simplify analysis considerably.
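As a minimal illustration of why uniformly structured records simplify analysis, the sketch below reduces a per-building-block average across participants to a few lines. The organisations, building blocks and scores are invented for the example.

```python
from collections import defaultdict
from statistics import mean

# Invented example records, one per response, all with the same structure.
records = [
    {"organisation": "A", "building_block": "clients", "score": 7.4},
    {"organisation": "A", "building_block": "staff", "score": 6.9},
    {"organisation": "B", "building_block": "clients", "score": 8.1},
    {"organisation": "B", "building_block": "staff", "score": 7.2},
]

# Because every record has the same shape, aggregating across all
# participants per building block is a short, mechanical operation.
by_block = defaultdict(list)
for r in records:
    by_block[r["building_block"]].append(r["score"])
averages = {block: mean(scores) for block, scores in by_block.items()}
```

With inconsistently structured submissions, each of these steps would instead require case-by-case reconciliation, which is where the early benchmarks lost much of their time.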

In the early days of healthcare benchmarking, much time was spent on defining the information needed in terms that organisations were actually able to provide. Reliable and comparable data can only be supplied
on the basis of clear-cut definitions, but even if clear and accepted definitions
are available, this may not always be possible. If the participants are unable to
retrieve information directly from their own records, gathering the requested
data can be a very time-consuming affair and in some cases they may even be
unable to provide the information at all. The question that arises in such cases
is what to do in the next benchmarking exercise. Stop requesting this
information altogether? The answer usually depends on how important the
organisations rate the information for their own processes. In our experience,
they tend to say they will put in an effort to provide the information in future
benchmarking studies, particularly data relating to the financial building
block – probably because they feel they could not afford to do without this
information.

Sanford Berg (University of Florida) does not mince his words: ‘If managers do not have the data required for such comparisons, then one must question what they are actually managing.’

Phase 3: Analysing data

With the aid of software analysis tools and the assistance of research agencies,
the client and the sounding-board group, we first carry out analyses for each
individual building block and subsequently analyses that go beyond the level of
individual building blocks and address the relationship between the building
blocks and the identification of best practices. The analytical phases are what
make research agencies tick. We are always interested in knowing what the
outcomes are and hope to discover unexpected relationships and further
insights.

Phase 4: Reporting findings

In healthcare benchmarking we make a distinction between individual and generic reporting. Industry-wide, generic reporting tends to consist of a
‘straightforward’ report in Word or sometimes in PowerPoint, which is
distributed digitally or in printed format.

Reporting to individual benchmarking partners is in a state of flux. We find that benchmark participants are asking for general outcomes and the big
picture so that they can set priorities. They do not want to get bogged down in
detail. At the same time, however, they want strategic management
information in order to implement concrete actions for improvement, which
requires greater detail. If the level of aggregation in reporting is too high, the
outcomes lose all meaning (‘normal is simply the average of deviations from
the average’). The solution lies in adjusting the level of detail to the specific
target group: whereas a company’s board of directors is interested in highly
aggregated information, the division or company managers want more detail,
and the managers of individual teams or offices want even more detailed
information tailored to their own units (provided, of course, that the
information is clearly and accessibly presented).

In a separate development, individual reporting is evolving from straightforward Word documents with text, tables and figures to web-based
reports in which recipients can click on any subject they would like to
investigate further, or even conduct their own analyses. Web-based reporting is
the only solution in the case of continuous benchmarking, where participants
are able to decide for themselves when they wish to carry out particular
elements of the benchmarking exercise.

The reports are automatically generated and subsequently filled with information from a benchmark database using a fixed format. The
client-specific information as well as data about other participants is presented
in tables and figures. Any additional notes are the same for all participants, but
specific outcomes may cause standard texts to pop up. Use of standard texts is
unavoidable, but is not always what the benchmarking participants are
looking for. They sometimes expect a report tailored to their specific
organisation. Although this is possible in theory, it would be an extremely
costly exercise. We make an effort to let figures and tables speak for
themselves, for instance by using colours: red represents a below-average score,
green an above-average score. An individual organisation’s relative position is
illustrated by marking it in a different colour (see Figure 6.2). This reduces the
need for individually tailored reporting.
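The red/green colouring described above boils down to a simple rule relative to the average. A minimal sketch, with organisation names and scores invented for illustration:

```python
from statistics import mean

# Invented final scores for participating organisations.
scores = {"org_1": 6.2, "org_2": 7.9, "org_3": 7.1, "org_4": 5.8}
avg = mean(scores.values())

def colour(score, average):
    """Red for a below-average score, green otherwise (ties are not
    specified in the text and are treated as green here)."""
    return "red" if score < average else "green"

colours = {org: colour(s, avg) for org, s in scores.items()}
```

Letting the colours, rather than bespoke prose, signal each organisation’s relative position is precisely what reduces the need for individually tailored reports.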

[Figure: final scores (y-axis, scale 4–10) plotted against the ranking of final scores on the financial performance (FP) building block (x-axis, 65 organisations)]

Figure 6.2 Example representation of an organisation’s position
Source: 2004 home care benchmark study

The most future-proof method now available is reporting of information that can be accessed by a benchmarking partner at any point in time and
subsequently updated with the most recent organisation-specific outcomes as
well as the most up-to-date national figures. Participants are presented with
general information, using tools such as dashboard or cockpit reporting, and
have on-screen access to more detailed information. This system allows the
incorporation of as many levels as desired. Users can opt either to retrieve
ready-made analyses or to carry out further analyses themselves. The tool
should offer the possibility of drawing up, saving and printing one’s own
reports.

In healthcare benchmarking, a tool of this kind is still being developed. A precursor to this tool is an Excel application known as the click model, used to
report on the financial building block of the 2004 home care benchmark study.
As the data in this tool were not linked to a database, they could be used only
once.

PricewaterhouseCoopers Italy recently presented a more fleshed-out example of a dashboard report for a one-dimensional benchmark. The tool – Esculapio –
presents historic trends and forecasts alongside current data.

Phase 5: Discussing the findings

The benchmark findings are discussed with the participants in nationwide meetings as well as in small-scale workshops. If they wish, the benchmarking
partners can then initiate discussions or workshops themselves. These
discussions should always include an evaluation of the value and
user-friendliness of the benchmark. The findings typically help further
improve subsequent benchmarking exercises.
7 Healthcare benchmark: notable features

All benchmarks have a number of distinguishing features – features that might be a source of inspiration to other benchmarks. This section discusses key
aspects of benchmarking surveys in the Dutch healthcare sectors.
PricewaterhouseCoopers is or was involved in many of these surveys, with the
exception of the benchmarking exercises focusing on the hospital sector, the
mental healthcare association GGZ and the healthcare chain.

7.1 Nursing, care and home care benchmark


Figure 7.1 shows the benchmark model for the new nursing, care and home
care benchmark, known in the Dutch healthcare sector by its acronym VVT. The
new building block, ‘innovation’, is a feature of this benchmark, along with
new ideas on performance excellence.

[Figure: benchmark model with the elements financial performance, clients, staff, HPOs and innovative strength]

Figure 7.1 Benchmark model: continuous nursing, care and home care benchmark
Source: VVT continuous benchmark; PricewaterhouseCoopers

In 2006 a new benchmarking model was set up for the nursing, care and home
care sectors, which jointly participate in the benchmark. It has several
modules, with the basic package offering data-gathering and reporting at overall organisation level plus one level below – all other levels are optional.
Time-keeping is a separate option.

Nearly all building blocks include innovations. ‘Quality of care’, for instance,
will build on the Responsible Care Standards Project as soon as this is
feasible. This project aims to arrive at a broad-based description of the quality
of care and, as we have noted, draws on a survey of clients and/or their families
as well as the views of the Dutch Healthcare Inspectorate.

‘Quality of work’ has been simplified and alignment with other building blocks
improved. The staff monitor includes an employee survey and data gathered
via ‘financial performance’ such as FTE breakdown and sick leave. While the
‘financial performance’ building block itself has been simplified – e.g. no
compulsory time-keeping – it has, at the same time, been extended to include
treasury data – e.g. debt servicing charges and gap analyses. These gap analyses
compute the difference between the current budget and the budget based on
care complexity, and between expected costs and expected budget.
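As an illustration only, with invented figures, the two gaps described above amount to simple subtractions:

```python
# Illustrative figures in euros; not drawn from any actual benchmark.
current_budget = 12_500_000
budget_by_care_complexity = 11_800_000   # budget recalculated from care complexity
expected_costs = 12_100_000
expected_budget = 11_900_000

# Gap 1: current budget versus the budget based on care complexity.
funding_gap = current_budget - budget_by_care_complexity

# Gap 2: expected costs versus expected budget.
operating_gap = expected_costs - expected_budget
```

A positive gap in either case signals revenue or budget at risk under the new funding basis, which is what makes the treasury extension of the building block useful to participants.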

The ‘innovation’ building block is being developed. It has been agreed to focus
not so much on innovation projects, as every organisation would interpret this
differently, but rather on the ability to innovate and the parameters for
innovation. Innovation has been defined as the organisation’s ability to keep
reinventing itself and so adapt to changing circumstances. This ability is not
restricted to finding totally new solutions, but also involves the ability to adopt
solutions devised by others and/or adapt these to the organisation’s needs.

A start has also been made on benchmarking performance excellence, the subject of a lengthier discussion in Section 8. Lastly, the benchmark is based on
products as much as is practicable: for inpatient care these are care
complexity-derived products and for outpatient care the focus is on care as
provided under the Social Support Act and the Exceptional Medical Expenses
Act.

7.2 Child healthcare benchmark


In the Netherlands, child healthcare is the remit of home care providers and the GGD community health services. Home care organisations are responsible for healthcare to children between 0 and 4 years of age, which they provide through the baby health clinics (‘consultatiebureaus’) well-known to all parents in the Netherlands. The GGDs provide healthcare to children and
young people aged between 4 and 19 (school healthcare, counselling, etc.).
A number of organisations do both.

To encourage alignment between the two types of child healthcare and improve overall child healthcare in the Netherlands, ActiZ, GGD Nederland
(the association of all community health services) and the Association of
Netherlands Municipalities (VNG) are running a project called Beter Voorkomen
(Prevention is Better) until 2008, sponsored by the Ministry of Health, Welfare
and Sport and directed by ZonMw, the Netherlands organisation for health
research and development. One of its key features is a benchmark that is
designed to contribute to a joint frame of reference.

Working with Van Naem & Partners, we drew up a plan of action for this
benchmark in 2006. One key consideration underlying this benchmark is that
these organisations’ funding is more diverse than that of many other
healthcare providers, with resource allocation criteria deciding what goes
where. At the same time, municipalities are free to buy additional, customised
care. Moreover, the community health service is embedded in the municipal
set-up, with each municipality deciding its own financial performance and
care targets. The differences between the Dutch home care providers and the
GGDs also have a part to play here.

For the 2004 home care benchmark we conducted a financial benchmark of the
child healthcare supplied by the country’s home care providers. This
benchmark showed that organisations found it very difficult to attribute
revenues and costs to individual products, as records still reflected the system
of input funding that had been in force until 2003. At the time, some
organisations were already indicating that this would not happen again: they
wanted to allocate costs to products for the sake of their own business
operations as much as for any subsequent benchmark.

7.3 Healthcare administration agency benchmark


The benchmark for healthcare administration agencies was commissioned by
their regulator, the Health Care Insurance Board (CVZ), with Zorgverzekeraars
Nederland, the industry organisation representing providers of care insurance
in the Netherlands, later getting on board as co-sponsor.

This benchmark started off as a broadly-based instrument with a financial component, quality building blocks and research into operations. A successful
pilot was completed and the aim was to launch the benchmark countrywide.
However, priorities shifted and the national benchmark ended up as a single
client survey and financial performance investigation. The client survey
revealed that many clients have no real concept of the role of a healthcare
administration agency, even when they have recently had dealings with one. As
a result, many found it impossible to adequately rate the performance of
healthcare administration agencies. The financial performance survey showed
up the difficulty such agencies had in deciding who to compare themselves
with. Costs varied, but this was found to reflect policy choices, at least in part.
And with the client survey inadequate as a proper quality gauge, it proved
impossible to identify the choices that contributed most to quality. Healthcare
administration agencies also found it a challenge to define new products in
uniform and measurable terms, e.g. waiting-list mediation. All that said, the
2005 benchmark produced many instructive outcomes, one of them being
greater insight into FTE breakdown. The benchmark also contained elements
that might prove very useful in benchmarking healthcare insurers.

7.4 Benchmarking care for the disabled


Having run a pilot in 2004, the Vereniging Gehandicaptenzorg Nederland
(VGN, Association for Care of the Disabled in the Netherlands) commissioned a
full-fledged, sector-wide benchmark in 2006. Comprising such building blocks
as ‘quality of care’, ‘quality of the job’ and ‘financial performance’, the
benchmark is currently in full swing, with 108 organisations participating.
Customised benchmarking is also possible: organisations are able to receive
more detailed, on-demand reports, additional comparisons with self-selected
reference groups or rather more customised notes to individual reports.

Figure 7.2 captures the planning process for the benchmark, with CS standing for client survey, ES for employee survey and FP for financial performance.

[Figure: benchmarking timeline with stages including integration and development of building blocks, preparing the benchmark survey, integrated analyses, organisation reports and countrywide presentation, workshops with peer groups, and a data set for the annual review]

Figure 7.2 Care for the disabled: a review of the benchmarking timeline
Source: 2004 care for the disabled benchmark

The timeline coincides with the one set out in the annual healthcare review. It
also includes workshops to discuss benchmark outcomes. All building blocks
making up this benchmark take account of a breakdown into client categories:
physically disabled; mentally disabled; sensory disabled; slightly mentally
disabled; and heavily behaviourally disturbed, slightly mentally disabled. If
relevant, data are sorted by client group, with the same applying to the
different services provided: outpatient day programmes; outpatient day care;
residential living; treatment; and inpatient day programmes. A
product/market matrix thus emerges.

[Figure: product/market matrix of service groups against target client groups (VG, LG, ZG, LVG and SGLVG, with children and adults distinguished). Non-residential care comprises outpatient care at home, outpatient day programmes and outpatient treatment; residential care comprises residential living, day programmes and treatment.]
Figure 7.3 Care for the disabled: product/market combinations


Source: 2004 care for the disabled benchmark

Starting the benchmark is projected to require a time investment on the part of the organisation of 23 working days on average:
• 8 days for general project coordination
• 6 to 12 days for the client survey (information, sample survey, setting up
appointments)
• 2 days for the employee survey (information, encouraging participation)
• 5 days for the provision of financial data

We have found a careful sign-up procedure to be key – and time-consuming. As the aim is to embed the benchmark’s results throughout the organisation, we
agreed it was a good thing to discuss the decision to participate at various
levels. However, this took time. Also, organisations were free to decide which
units to enrol in the benchmark – and that also required time. In selecting their
participating units, an organisation would sometimes find that it did not have
the requisite data at the level of the relevant unit. And then the process would
start all over again.

At the end of the day, we extended the sign-up term by a number of weeks, as
we had failed to factor in that these aspects would cause time pressures.
However, the final outcome – sector-wide recognition of the benchmark –
proved worthwhile.

This benchmark, like many others before it, was testimony to the importance
of frequent communication with participants to secure ongoing commitment.
After all, participants are unable to see all the hard work going on behind the
scenes, and wonder when the next step will come. Keeping its member organisations regularly updated through its network, by letter and email, VGN
publishes a newsletter as well as issuing brochures on different building
blocks. Posters are sent out to draw attention to the client and employee
surveys, while contacts receive sample letters they can send to clients/parents
and employees. And to top it all off, the organisations’ contacts are invited to
attend a series of informative meetings. Quite a package, then, and all centred
on the idea that participants will stay on board only if they know what is going
on. VGN has also plumped for its own benchmark logo, making all
communication immediately recognisable as benchmark-related.

An unusual feature of this particular benchmark is that client surveys are
primarily conducted through one-on-one interviews. Numbers recorded at
sign-up suggest that a total of 18,000 interviews will be required. Admittedly,
this is a time-consuming and costly affair, but organisations end up hearing
how clients feel and not just other people’s opinions of what clients are likely
to feel – however well-meaning those others may be.

Adapted questionnaires have been developed specifically for parents or
relatives. Organisations can opt either to write to all parents/relatives or to
approach a sample of half that number. Klanq and Perspectief, the companies
conducting the survey, have now carried out 10,000 interviews.

The benchmark’s financial building block includes what is called a gap
analysis, a tool for calculating the gap between revenues and costs based on
current prices. The tool was included to help show up the difference between
an organisation’s current revenues and the revenues it might look forward to
after the introduction of care complexity packages. Contrary to expectations,
the compensation per care complexity package is not yet known, so a number of
scenarios have been included instead. Organisations may find their results in
their individual reports, but can also use the tool, a web-based click-to-calculate
service, to work out the effects of their own additional scenarios – and, when
the time comes, also to identify the ‘real’ gap, of course.
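By way of illustration only, the core of such a gap analysis might look as
follows. The scenario names, package rates and client numbers below are
invented and are not taken from the actual web-based tool.

```python
# Illustrative sketch of a revenue gap analysis: compare current revenues
# with projected revenues under hypothetical care complexity package rates.
# All figures and scenario names are invented for illustration.

def projected_revenue(client_mix, package_rates):
    """Revenue if every client were compensated per care complexity package."""
    return sum(clients * package_rates[package]
               for package, clients in client_mix.items())

def gap_analysis(current_revenue, client_mix, scenarios):
    """Gap (projected minus current revenue) for each compensation scenario."""
    return {name: projected_revenue(client_mix, rates) - current_revenue
            for name, rates in scenarios.items()}

# A unit with 40 low-complexity and 25 high-complexity clients,
# currently earning 1.9 million euros a year.
client_mix = {"low": 40, "high": 25}
scenarios = {
    "conservative": {"low": 20_000, "high": 45_000},
    "generous":     {"low": 24_000, "high": 52_000},
}
gaps = gap_analysis(1_900_000, client_mix, scenarios)
for name, gap in sorted(gaps.items()):
    print(f"{name}: {gap:+,.0f} euros")
```

Running additional scenarios then amounts to no more than adding another entry
to the scenario dictionary, which is essentially what the click-to-calculate
service lets organisations do.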

The only benchmark tool that has seen sector-wide application before is the
employee survey. As in 2004, the survey uses the ‘quality of the job’
questionnaire, which has undergone a few minor changes since then. In
January 2007, organisations participating in the survey received boxes of
questionnaires for each internal unit enrolled during sign-up. This time,
employees were also given the option to complete the questionnaire online.
Results have since come in: a response in excess of 50 per cent, a solid
percentage.

Another noteworthy feature of this benchmark is its alignment with regulatory
accountabilities under the banner ‘one-time delivery only’, which is why we
try to provide feedback on client and employee surveys as fast as we possibly
can, allowing organisations to use these data for their social reports.
Granted, this involves only a small batch of data at present, but it’s a start.

Following the end of the benchmarking exercise in September 2007, all
organisations received their individual reports connecting the various
building blocks and comparing the employee survey outcomes with those for
2004 – provided, of course, that the organisation had participated in the
previous survey.

Lastly, benchmark survey results informed the sector-wide benchmark report
presented on 13 September 2007, which also included all other benchmark
building blocks.

7.5 Partial benchmarks in mental healthcare


To date, PricewaterhouseCoopers has not been involved in the benchmarking
efforts by the mental healthcare association GGZ. For this subsection, then,
we will draw on a GGZ Nederland brochure.[110]

Observing that benchmarking is now firmly rooted in the sector and no longer
needs the association to act as a key driving force, the brochure provides a
guide encouraging organisations to grasp the nettle. GGZ Nederland sees
benchmarking primarily as a tool for learning – and not therefore as an
accountability-driven instrument. It also feels that benchmarking has a part to
play in innovation, as a quality tool that can help generate and spread new
knowledge. Noting ten – not sector-wide – projects over the past ten years, the
brochure lists benchmarks on:
• the health information system ZORGIS’s key figures
• key facility figures, with an annual cycle covering a specific theme and
identifying relevant performance indicators
• clinical psychotherapy (care provision outcomes)
• inpatient rehabilitation centres (addiction care, care provision outcomes)
• addiction care, lifestyle training results
• overhead charges

Project structures in GGZ’s benchmarks range from independent project
organisations to a platform for discussion to (in some instances) external
consultants.

As the brochure reveals, the mental healthcare sector has not yet opted for
multidimensional benchmarks, but it does run benchmarks that explicitly
investigate care outcomes. Incidentally, this also applies to an example of
mental healthcare benchmarking outside Dutch borders: in Towards National
Benchmarks for Australian Mental Health Services, its authors present a
comprehensive benchmark model covering both costs and effectiveness,
including quality. Their model comprises a set of performance indicators and
outcome indicators, with outcome defined as both the difference before and
after the intervention or treatment, and the difference with and without
intervention. The discussion paper also introduces an index of case complexity
as a performance indicator.
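The two outcome definitions, and the notion of a complexity index, can be made
concrete with a minimal sketch. The scores, groups and index value below are
invented for illustration and do not come from the Australian paper.

```python
# Minimal sketch of the two outcome definitions described above: the
# difference before and after treatment, and the difference with and
# without intervention. All scores and the complexity index are invented.

def pre_post_change(before, after):
    """Outcome as the difference between pre- and post-treatment scores."""
    return after - before

def relative_outcome(treated_change, comparison_change):
    """Outcome as the difference with and without intervention."""
    return treated_change - comparison_change

def complexity_adjusted(change, complexity_index):
    """Crude adjustment by an index of case complexity (higher = harder)."""
    return change / complexity_index

# Treated clients improve by 12 points on a symptom scale; an untreated
# comparison group improves by 3 points on its own.
treated = pre_post_change(before=40, after=52)
comparison = pre_post_change(before=41, after=44)
net = relative_outcome(treated, comparison)
print(net, complexity_adjusted(net, complexity_index=1.5))
```

The point of the second definition is visible in the numbers: without the
comparison group, the treated group’s raw improvement would overstate the
effect of the intervention.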

In addition to the GGZ Nederland brochure, we have found a sector-wide set of
performance indicators for the quality of care provided. This set of indicators
was agreed by care providers, client organisations, insurers, professional
associations, the Dutch Healthcare Inspectorate and the Ministry of Health,
Welfare and Sport. Featuring in the annual healthcare review, this set is very
suitable for benchmarking indeed.

7.6 Benchmarking the healthcare chain


Measuring ‘chain care’ adds a whole new dimension to benchmarking. We are
no longer talking individual performances by individual organisations here,
but the joint performance by all the healthcare organisations that make up the
chain. The first benchmark tool for chain care across sectors is now in place for
the care given after a cerebrovascular accident or CVA, i.e. the care that stroke
patients receive. The benchmark measures cooperation between hospitals,
nursing homes, rehabilitation centres, home care and general practitioners.
The Erasmus Medical Centre’s Institute of Health Policy and Management
(iBMG) and Prismant were commissioned to develop the benchmark by ZonMw,
the Netherlands organisation for health research and development. On 10 May
2005, European Stroke Day, they released their research report Stroke services
gespiegeld (‘Comparing stroke services’),[111] which discusses the creation and
operation of the chain benchmarking tool in great depth. The feasibility
study’s key conclusions were:

• Benchmarking care chains and best practice selection are feasible.
Geographical regions can compare benchmarking outcomes with their
own working practices and use these to make improvements.
• All care chains are unique, but benchmarking helps to achieve better
results and reduce regional differences.
• Both leaders and laggards have room for improvement: chains have much
to learn from sharing.
• Benchmarking a care chain requires a great deal of commitment from every
single link in the chain. It simply will not do for only a section of the chain
to benchmark. The fact that chains were willing to invest in the benchmark
was a major achievement in itself. The report also suggests that
government and regulators such as the Dutch Healthcare Inspectorate
should make the benchmark compulsory but, even if they do not, that they
should nurture and promote it.
• Benchmarking would appear a distinct possibility for other care chains as
well. The report argued the case for standardisation of general repeat
elements such as employee motivation surveys.

7.7 Benchmarking Dutch hospitals


The hospital sector is benchmarked by a wide range of partial benchmarks – or
comparative studies, at least. That said, a generally accepted, sector-wide,
integrated benchmark is not available just yet. Partial surveys cover the usual
comparisons of waiting periods and the effects of hospital care that typically
make it into the country’s magazines and newspapers. Insurers’ websites also
disclose comparative studies.[112] For instance, Zorgverzekeraars Nederland
reports that healthcare insurer Agis’s website provides links to recent surveys
by newspaper Algemeen Dagblad, weekly magazine Elsevier and Roland Berger
Strategy Consultants, and supports comparisons between survey outcomes.

Dutch comparative studies of the hospital sector

A – non-exhaustive – list of comparative studies into the Dutch hospital sector
yields the following:

• the NVZ database (NVZ being the Dutch Hospitals Association)
• the Dutch Healthcare Inspectorate’s performance indicators
• annual performance comparisons by Algemeen Dagblad and Elsevier
• a survey into the best care per condition by Consumentenbond, the Dutch
consumers’ association
• Ernst & Young financial key figures
• treasury and interest comparisons by PricewaterhouseCoopers
• comparisons of operations between leading clinical hospitals, e.g. facility
management, radiology and laboratories
• Prismant management features
• www.snellerbeter.nl
• Zorgverzekeraars Nederland’s purchasing guide
• the costing model for Diagnosis Treatment Combinations (DBCs in the
Dutch acronym)

NVZ database

The Dutch Hospitals Association NVZ runs a database storing a plethora of data
on hospitals that allow for comparative analyses.

Dutch Healthcare Inspectorate performance indicators

The Dutch Healthcare Inspectorate has developed a set of performance
indicators that all Dutch hospitals report on and that are publicly disclosed,
e.g. on hospital websites. Measuring both quantitative and qualitative data,
these indicators allow for comparisons between hospitals by stakeholders and
hospitals alike. The same indicators inform hospitals’ annual quality
reviews/annual reviews.

Algemeen Dagblad’s and Elsevier’s annual performance comparisons

Both the newspaper Algemeen Dagblad and the weekly Elsevier publish annual
comparisons of Dutch hospitals. Although controversial, these reviews have
become more comprehensive over time. The hospitals themselves do not
initiate these comparisons.

Consumentenbond questionnaires

In 2005 and 2006 the Dutch consumers’ association sent out questionnaires to
hospitals and independent treatment centres a total of eight times. All
questionnaires focused on a different condition and investigated the quality of
care. The association wrapped up its review with a league table revealing which
hospitals typically scored best and which lagged behind. It also gave hospitals
brownie points for each time they agreed to participate. Industry associations
initially came out against these surveys, as they felt that fragmented
investigations such as these put too great a burden on the organisations.

Ernst & Young’s financial key figures

Ernst & Young periodically releases a review of financial key figures per
hospital, drawing on the hospitals’ annual accounts.

PricewaterhouseCoopers’ treasury and interest comparisons

PricewaterhouseCoopers carries out annual treasury and interest comparisons
for hospitals, discussed at greater length below.

Operations at leading clinical hospitals compared

The country’s leading 19 clinical hospitals have agreed on participation in
comparisons and benchmarking, with participating hospitals having access to
benchmark data. Specific departments of the hospitals are compared, e.g.
radiology, laboratories and parts of facility management. Areas benchmarked
typically include care provided, staff deployment, capacity and finance.

Prismant management features

Prismant, the healthcare management consultants, provide the means for
hospitals to run their own business comparisons in different areas – finance,
capacity and care provided – thus allowing them to select which group of
hospitals they would like to be benchmarked against.

www.snellerbeter.nl

The Sneller Beter programme sees Dutch hospitals share best practices in the
field of business operations.

Zorgverzekeraars Nederland’s purchasing guide

The sector organisation representing the providers of care insurance in the
Netherlands has released a guide to hospital quality indicators, which informs
purchasing agreements between care insurers and care providers. Indicators
cover eight Diagnosis Treatment Combinations (widely known by the Dutch
acronym DBC) in the so-called B segment, where insurer and organisation are
free to negotiate price and quality, and three A-segment DBCs. Drawing on the
indicators, the insurers measure the care provided and take this into account
in their negotiations. Comparing hospital scores is part of this: if a hospital has
come in at a below-average score, the insurer may wish to agree quality
improvement. To all intents and purposes, the purchasing guide may serve as a
benchmark, even though it was not created with benchmarking in mind. The
indicators were uniformly defined countrywide.

DBC costing model

A number of market leaders have developed a costing model in preparation for
the introduction of DBCs in hospitals. Accurately indicating how costs may be
allocated to individual DBCs, the model has been widely implemented across
the hospital sector. As it is based on uniform definitions and allocation, the
model serves as a very appropriate vehicle for comparing hospitals, while
providing comprehensive transparency on the breakdown of costs. This also
makes it very useful for analyses of factors explaining cost differences.

Comparative studies of the hospital sector outside the Netherlands

Outside the Netherlands there are examples of best practices in benchmarking,
but even here benchmarking – and particularly multidimensional
benchmarking – is not always the generally accepted methodology. A few
international examples of benchmarking or comparative studies:

International benchmarking

An unusual example of a benchmark focusing on a specific condition is the
international benchmark on cataract operations. The purpose of this
comparative study of eye hospitals in Europe is to further improve the quality
of cataract treatment.[113] Putters et al. found that professional debate and
knowledge exchange had given a sharp boost to the quality of treatment. The
authors feel benchmarking has an increasingly important part to play in
gaining insight into quality of care improvement. The Rotterdam Eye Hospital
is a benchmark partner.

United States

In the United States hospitals are benchmarked on the basis of a set of
parameters imposed by the government. Having started years ago, the process has
given rise to extensive databases and independent agencies that analyse these
data per hospital and in many cases even per doctor. A comparison between
American hospitals will throw up a number of best practices, with focuses
including customer satisfaction, outreach activities and the use of
sophisticated ICT to prevent medication errors.

Kaiser Permanente, an American health maintenance organisation (HMO),
does a lot of internal benchmarking, comparing doctors and sharing best
practices. Kaiser Permanente also benchmarks itself against nationwide
results, with areas covered including treatment results, protocols and
operations. As it owns a large number of hospitals, Kaiser Permanente is able
to use internal benchmarking as a strategic management instrument.

United Kingdom

The National Health Service compares hospitals on a standardised set of
parameters known as the NHS performance rating, with hospitals awarded
stars for their performance. The rating is somewhat similar to the Dutch
Healthcare Inspectorate performance indicators, albeit that the NHS’s is a
centralised rating system applicable to all NHS hospitals.

Germany

PricewaterhouseCoopers in Germany runs a hospital benchmark covering data
on finance, care provided and FTE breakdown. Launched in 2001, it is repeated
every year and has well over 120 hospitals participating, broken down into
teaching hospitals and general hospitals in three different size categories. The
benchmark draws on the hospitals’ annual accounts, which means that the
hospitals need not make any extra effort other than granting permission for
the use of their data. Participants do have the option of having further analysis
done, for which additional data need to be submitted. All data are broken down
by speciality and the software incorporates a large number of automatic checks
for consistency, completeness and plausibility.
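What such automatic checks might look like can be sketched as follows; the
field names, rules and thresholds are invented and do not describe the actual
German software.

```python
# Sketch of the three kinds of automatic check mentioned above, applied to
# one hospital's submission. Field names, rules and bounds are invented.

def check_completeness(record, required):
    """Every required field must be present and non-empty."""
    return [f for f in required if record.get(f) in (None, "")]

def check_consistency(record):
    """Cross-field rules, e.g. bed-days cannot exceed beds * 365."""
    issues = []
    if record["bed_days"] > record["beds"] * 365:
        issues.append("bed_days exceeds available capacity")
    return issues

def check_plausibility(record, bounds):
    """Values should fall inside plausible ranges for the subgroup."""
    return [f"{field} out of range" for field, (lo, hi) in bounds.items()
            if not lo <= record[field] <= hi]

record = {"beds": 300, "bed_days": 82_000, "avg_stay": 6.4}
required = ["beds", "bed_days", "avg_stay"]
bounds = {"avg_stay": (1.0, 30.0)}

problems = (check_completeness(record, required)
            + check_consistency(record)
            + check_plausibility(record, bounds))
print(problems or "submission accepted")
```

Checks like these explain why participation can stay cheap: because the data
come from annual accounts and are validated automatically, no manual review
round is needed before a hospital enters its subgroup.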

Every year, the hospitals receive a report comparing their data with those of
other participants in their subgroup, without being told who those others are.
The drawback to this low threshold, of course, is that organisations are unable
to sit down and exchange experiences.

Figure 7.5 Sample report from German benchmark (clinical treatment: average
length of stay, benchmark in baseline year 2005 and development of the median)
Source: PwC Deutsche Revision

The most notable feature of this benchmark is the brevity of its timeline: as
soon as a subgroup has enough members to enable the creation of reliable
comparisons, participants can obtain their reports with a single mouse click.
The reports take two to four hours to produce. The German benchmark’s
motto: Wer heute den Kopf in den Sand steckt, knirscht morgen mit den Zähnen (‘Head
in the sand today, gnash your teeth tomorrow.’)

Greater interest in hospital benchmarks

We see benchmarking garnering more interest on the part of Dutch hospitals
in the near future, as they come under greater pressure from government, care
insurers, patients and the media to present transparent results and as they
themselves cast around for opportunities to learn and improve their
operations. Initially, such learning is likely to focus on single areas but as time
goes on the call for multidimensional benchmarking will increase. In complex
organisations like hospitals, multidimensional benchmarking is necessary to
show up the interrelationships between results: overall results and/or the way
results are connected may not become visible from a single-dimension
benchmarking exercise, making improvements difficult to implement.

With chains of hospitals being formed, the need for benchmarking as a
strategic management tool will also increase, as the holding company will
want to raise performance by comparing the hospitals that make up the chain.
Market forces should come increasingly into play and drive the need for
improved performance – albeit that hospitals’ willingness to share
information might decline as a result. And lastly, hospitals are large enough to
be able to use benchmarking internally to compare performances by
department and doctor, and to implement improvements within the hospital
itself.

We feel that the multidimensional benchmark model discussed at great length
in Section 5 would be appropriate for hospitals, particularly if it takes on board
the additions currently being developed in the nursing, care and home care
benchmark (innovation, performance excellence). Of course, the hospital
sector has its own dynamics and simply adopting a model and tools will not do.
For one thing, hospitals will need to be appropriately categorised for
benchmarking purposes; for another, benchmarking might need more than
the cooperation of hospital boards: depending on the precise nature of the
benchmark, the partnerships of medical specialists so typical of the Dutch
hospital landscape might need to be brought on board.

Treasury benchmarks

The treasury benchmarks run by PricewaterhouseCoopers focus on treasury
aspects only, and do so in great depth. The aim of these comparisons is to
improve the treasury function and to optimise net interest income. The
benchmark involves small groups that really allow their fellow participants to
look behind the scenes. Following data collection and processing, workshops
are held where benchmark partners discuss their own results, best practice and
how to help other organisations achieve these standards. Participants leave
with a customised assignment and return at a later date to discuss progress.
Treasury benchmarks are conducted across the healthcare spectrum, but the
groups only include organisations from the same sector.

Benchmarking the DBC process

We will end this section with an example of a process benchmark, born out of
the treasury benchmark we briefly touched on above. In mid 2006 it transpired
that hospitals were losing interest income as they were invoicing later than
had been assumed in the standard interest rate payment. A number of
hospitals then decided to compare their invoicing processes.

Figure 7.6 DBC process benchmarked: key features of multiple processes. The
stages run from opening, closing and validating a DBC (work in progress) to
invoicing and collecting it (accounts receivable).
Source: PricewaterhouseCoopers (DBCs)

Assisted by PricewaterhouseCoopers, the benchmarking hospitals discuss the
process from opening to collecting, agreeing and measuring key figures for
every stage of the process. Major areas include market share versus share of
accounts receivable, collection periods, past-due status of accounts receivable,
and of course the length of the entire process – all of which are broken down by
insurer and DBC. Major differences have emerged. By comparing the outcomes
and exchanging tips, participants are creating their own opportunities for
improvement.
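By way of illustration, key figures of this kind can be computed from a
handful of milestone dates per DBC. A minimal sketch follows, with invented
dates, amounts and interest rate; none of these come from the actual benchmark.

```python
# Sketch of the kind of key figures agreed per process stage: elapsed days
# per stage, total process length, and the interest income forgone while
# a DBC sits uninvoiced. Dates, amounts and the rate are invented.

from datetime import date

def stage_durations(milestones):
    """Days elapsed between consecutive milestones (opened ... collected)."""
    names = list(milestones)
    return {f"{a}->{b}": (milestones[b] - milestones[a]).days
            for a, b in zip(names, names[1:])}

def interest_forgone(amount, days_to_invoice, annual_rate):
    """Interest lost while the amount is work in progress, not yet invoiced."""
    return amount * annual_rate * days_to_invoice / 365

dbc = {"opened": date(2006, 1, 10), "closed": date(2006, 3, 1),
       "validated": date(2006, 3, 8), "invoiced": date(2006, 4, 12),
       "collected": date(2006, 5, 30)}

durations = stage_durations(dbc)
total = (dbc["collected"] - dbc["opened"]).days
loss = interest_forgone(2_500, (dbc["invoiced"] - dbc["opened"]).days, 0.04)
print(durations, total, round(loss, 2))
```

Aggregating figures like these by insurer and by DBC is then a matter of
grouping, which is exactly the breakdown the participating hospitals compare.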
8 Innovations in benchmarking

This final section reviews innovations in benchmarking, in terms of both
content and tools. After all, benchmarking should be benchmarked as much as
anything else, with the literature and real-world benchmarking exercises
scoured for examples of less taxing methods or more successful reporting.
There are no reasons to assume that benchmarking is perfect as it is. Granted,
Accenture’s international study[114] puts it among oft-used instruments, but no
less than 78 per cent of respondents – government bodies in the main – feel
they are not putting the tool to optimum use.

8.1 Towards performance excellence


We have briefly touched upon the challenge of creating a benchmark that does
not just underpin performance improvement but actually provides the
impetus for excellence. Benchmarking should be more than a good way to
bring the laggards up to speed: it could also help the ambitious achieve
superior excellence.

Reviewing and analysing over 90 studies with a view to identifying the
characteristics of excellently performing organisations, De Waal found a
number of recurrent themes and organised these into clusters demonstrably
linked to performance excellence. Organisations boasting superior
performance show better long-term results than others in terms of healthy
financial operations and satisfied customers, he argues. Table 8.1 lists key
features of excellent performers on the basis of published results.[115]

Cluster – key features (examples):

Organisational design – The organisation is straightforward and flat and has
no barriers between units. Sharing best practices within the organisation is
actively encouraged.

Strategy – The strategy is absolutely clear to all members of the
organisation. Robust plans are in place for achieving objectives.

Processes – The reward system is seen as fair. Organisational assets are put
to highly effective use.

Technology – Flexible ICT systems have been implemented throughout the
organisation. Systems are exceptionally user-friendly.

Leadership – Management has an effective, focused and strong leadership
style. Management challenges itself and others to always achieve better
results.

Organisational members – The organisation sees and treats its members as its
main instrument to achieve its objectives. The organisation only hires
exceptional people of entrepreneurial spirit – provided they are a good
cultural fit – and deals swiftly and effectively with non-performers.

Culture – The organisation has strong and meaningful core values.
Transparency, openness and trust are key to the organisational culture.

Outward-looking – The organisation is committed to adding value for its
clients. The organisation has unreservedly opted to benchmark against
best-in-class performers in the industry.

Table 8.1 Hallmarks of excellence
Source: A.A. de Waal, Characteristics of a High Performance Organisation, 2006

Excellence is a relative concept. As soon as other organisations achieve the
same level of excellence, an organisation will have to excel even more if it
is to stand out from the crowd. ‘If you stop getting better, you stop being
good.’[116]

To illustrate how the drive for performance excellence can spark far-reaching
changes in company cultures, here is what management guru Tom Peters[117] had
to say on several visits to the Netherlands:[118]

• ‘Reward brilliant failures, punish mediocre successes.’
• ‘Find people who don’t live by the rules. Excellence can only be obtained if
you risk more than others think is safe and if you dream more than others
think is practical. Or do you want your tombstone to say “He always made
budget”?’
• ‘Talent is all a company has got. HR should sit at the head of the table. A real
leader doesn’t do finance or sales, a real leader does people.’
• ‘I hate mission statements.’

8.2 Excellence and innovation


De Waal, like many others, sees a direct link between excellent performance
and innovation. In fact, innovation is considered an essential ingredient of
performance excellence. Watson’s Strategic Benchmarking has many examples of
the success of innovative companies and of how less innovative rivals have
fallen behind. He links innovation to creativity and to exceeding customer
expectations. Imaginative understanding of customer requirements creates
wildly enthusiastic customers who will do whatever they can to lay their hands
on the product. Innovation requires being able to think out of the box and even
sometimes forgoing the backing of others. Citing Compaq as a successful
example, Watson describes how the computer manufacturer started focusing
on design and making computers smaller and lighter, while all its rivals were
still engaged in improving stability and reliability. Though technically no
better than the competition, Compaq’s products were undoubtedly more
innovative – and a massive hit.[119] Watson also points out that innovation is an
inflationary phenomenon: every new development in the outside world
requires fresh innovation.

In its 2005 Management Tools & Trends study,[120] Bain & Company identifies
innovation as the next big organisational challenge reported by its
respondents.

Our own review of excellently performing organisations reveals that many of
their key characteristics coincide with the best practices that emerge from
our benchmarks – albeit that the benchmarks paint a fragmented picture, as we
have never investigated all characteristics at the same time. But best
practices such as flat organisation structures, a focus on employee
development, shared values and robust, dynamic leadership confirm De Waal’s
findings.

In its VVT benchmark, industry association ActiZ decided to start measuring a
number of characteristics of excellence. To a large extent, the benchmark’s
current building blocks already capture these characteristics. Changes to be
introduced will involve a different clustering, often transcending building
blocks, but will limit the need for new data collection as the answers to
questions in the current building blocks are simply rearranged.

The idea is that performance excellence scores will provide greater insight into
ways of improving operations and more information about any gaps between
an organisation’s own operations and those of excellent performers. Needless
to say, we are very eager to see the first results emerge in the autumn of 2007.

8.3 Benchmarking outside the box


The literature on benchmarking often argues the case for studying
organisations outside one’s own industry or in other countries (Watson[121]), a
point frequently raised in the healthcare benchmarks as well. Home care
organisations, in particular, many of which have participated in
benchmarking exercises three times, are keen to take this next step, and we
definitely plan to investigate the possibilities in our current benchmarks.

8.4 More research into cost-to-reward ratios


Up to this point, we have primarily focused on the potential rewards of
benchmarking and not on its costs, which, of course, there are – and not just
the direct costs of participating, but also of time invested. Organisations
provide data, employees and clients spend time completing questionnaires. In
this report we have refrained from quantifying these costs, as there are too
many differences between the benchmarks reviewed. Obviously, a benchmark
accessing client views through questionnaires via the Internet or on paper will
be much less costly than a benchmark involving face-to-face client interviews.
Besides, in many cases the relevant data are not intended exclusively for the
benchmark. The VVT benchmark, for one, no longer uses separate client
surveys, but draws on surveys carried out as part of its responsible care
programme and accountability to the Dutch Healthcare Inspectorate.

Most benchmarking exercises in the past few years have asked participants for
their post-benchmarking views on cost-to-reward ratios among other matters.
On the whole, respondents were positive – if they had not been, we would not
have been able to continue the benchmarks – but that is not to say there were
no critical comments. Invariably, some organisations felt a benchmark was too
general and not sufficiently instructive, while others cited too much focus on
detail.

These differences perhaps reflect the purpose of the benchmarking exercise. A
more generalised benchmark would be better suited to gauging one’s own
position relative to others, while a rather more detailed benchmark would also
be appropriate for generating concrete strategic management information.

Designing a modular benchmark might be the answer here, combining a
generalised benchmark as its backbone and more detailed modules at the
customer’s choice. The latest benchmarks are testing this approach to some
extent, but it would be rash to say that the optimum balance has been found.
This will require more research.

Whatever balance is struck, benchmarking remains a matter of one thing
affecting another. The desire of some – a concise benchmark providing copious
amounts of concrete, strategic management information – is likely always to
remain a utopian ideal.

8.5 More benchmark partner involvement


Section 3 highlighted the disappointing phenomenon that organisations
spend a great deal of effort participating in benchmarks but that learning and
improving based on benchmark results is decidedly lagging. Our observation
has been no different.

To date, we have typically responded by attempting to enhance our benchmark
reports, making them more accessible by capturing key results in scorecards,
developing applications that allow participants to click on the information
they wish to see, applying colour codes to indicate specific areas for
improvement, listing points of action – but all to no avail.

Without dismissing the importance of benchmark reporting out of hand –
listing points of action, we reckon, is an encouraging road to pursue further –
the solution would appear to lie in another direction entirely: getting
participants more involved in conducting the benchmarking survey
themselves.

Customising questionnaires for each individual organisation is one idea, if a
hotly debated one. After all, benchmarking requires shared sets of data to really
classify as benchmarking. Moreover, it is virtually impossible – and in any case
very, very expensive – to completely tailor a questionnaire to a single
organisation and thus individually process and report on the response.
Nonetheless, organisations have been pretty clear in their desire to see
flexibility and, when all is said and done, we are talking additional questions
here, not completely different questionnaires.

Perhaps the following suggestion might prove useful: keep the backbone of the
benchmark unchanged with identical questions for all participants, but add
the option of additional questions that individual participants can select. The
requirements of the individual organisations would underpin the creation of a
library of such additional questions, which would be clearly defined so that all
users know exactly what is meant. Each question would indicate its building
block category and, if applicable, the relevant subsection of that building
block. Participants electing to include additional questions would
receive individual reports on these and, if other participants were found to
have answered a specific question in this or any previous rounds, would also
receive comparative data. This approach would ensure the continued quality of
the information, retain the nature of comparative data and still meet
individual requirements.
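
By way of illustration, the bookkeeping behind such a question library could look something like the sketch below. All question codes, categories and organisation names are invented for the example and do not reflect the actual benchmark design.

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Question:
    """One precisely defined question in the shared library."""
    code: str            # unique identifier
    text: str            # identical wording for every participant
    building_block: str  # e.g. "Finance" -- hypothetical category
    subsection: str = ""

@dataclass
class Participant:
    name: str
    selected: set = field(default_factory=set)  # codes of chosen extra questions

def comparable_with(participant, others, library):
    """For each optional question a participant selected, list the other
    participants who also answered it and can thus serve as comparison."""
    return {
        code: [o.name for o in others if code in o.selected]
        for code in participant.selected if code in library
    }

library = {q.code: q for q in [
    Question("F01", "Overhead cost per client", "Finance"),
    Question("H03", "Sickness absence rate", "HR", "Absence"),
]}
org_a = Participant("Org A", {"F01", "H03"})
org_b = Participant("Org B", {"F01"})
result = comparable_with(org_a, [org_b], library)
print(sorted(result.items()))  # [('F01', ['Org B']), ('H03', [])]
```

Org A would thus receive comparative data on F01 (Org B answered it too) but only an individual report on H03, which is exactly the behaviour described above.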

Real-world experience shows that flexible questionnaires work:
PricewaterhouseCoopers in Germany created a benchmark for hospitals using
precisely such questionnaires. Nor did the exercise involve a small group: their
number can easily top 100 participants.

In addition to setting up a library of customised questions, interactive and
communicative methods might help enhance participants’ involvement and
commitment. One idea mooted is to design a simple tool that would enable
benchmarking teams in an organisation to indicate what they think their
scores will be and what scores they are aiming to achieve. These figures could
then be compared with the actual scores later in the process.
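
A minimal sketch of such a prediction tool, with invented indicators and figures:

```python
def score_gaps(predicted, target, actual):
    """Per indicator: how far off was the team's own prediction, and how far
    is the actual benchmark score still from the targeted score."""
    return {
        k: {"prediction_error": round(actual[k] - predicted[k], 2),
            "distance_to_target": round(target[k] - actual[k], 2)}
        for k in actual
    }

# Invented figures for two indicators
predicted = {"client_satisfaction": 7.5, "overhead_pct": 14.0}
target    = {"client_satisfaction": 8.0, "overhead_pct": 12.0}
actual    = {"client_satisfaction": 7.1, "overhead_pct": 15.5}

for k, v in score_gaps(predicted, target, actual).items():
    print(k, v)
```

Confronting a team with both its prediction error and its distance to target is what would drive the discussion, rather than the raw scores alone.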

Lastly, we work together with industry associations and sounding-board
groups on the best approach to the workshops that discuss benchmark
outcomes. What would be the best group size? Should the organisations
themselves put the groups together? What role, if any, should the researchers
and the industry association play?

8.6 More dynamic reporting


Benchmark reporting to organisations can and should be more dynamic. This
will make it easier to identify actions for improvement, but also to get a handle
on the effects of any such action. What button should we push to reduce costs
by x per cent? Which other aspects would that affect? How do we see the impact
of our efforts after the previous benchmarking exercise? How much could our
client score go up or down if we raised overheads by x per cent? Dynamic,
web-based reporting should enable organisations to carry out these types of
analyses with the aid of their own computer models.
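
As a purely illustrative sketch of such a what-if analysis, the example below propagates a hypothetical change through an assumed linear relationship. The indicator names and the coefficient are invented; a real model would have to be estimated from the benchmark database.

```python
def what_if(baseline, changes, elasticities):
    """Propagate hypothetical percentage changes through assumed linear
    relationships between indicators."""
    result = dict(baseline)
    for driver, pct in changes.items():
        result[driver] *= 1 + pct / 100
        for outcome, coef in elasticities.get(driver, {}).items():
            # assumption: a 1% change in the driver shifts the outcome by `coef` points
            result[outcome] += coef * pct
    return result

baseline = {"overhead_pct": 14.0, "client_score": 7.4}
# invented coefficient: each extra % of overhead costs 0.02 client-score points
elasticities = {"overhead_pct": {"client_score": -0.02}}
scenario = what_if(baseline, {"overhead_pct": 10}, elasticities)
print({k: round(v, 2) for k, v in scenario.items()})
# {'overhead_pct': 15.4, 'client_score': 7.2}
```

The value of such a tool stands or falls with the quality of the estimated coefficients, which is precisely the caveat raised in the next paragraph.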

All this is, of course, predicated on the assumption that the data for all
participants show sufficiently strong interrelationships for the computer
models to actually work. A word of warning here: causal or statistically
significant relationships have by no means been found in all cases. And even if
such relationships are uncovered at the aggregate level, this does not mean
that these apply to all organisations. We are occasionally asked to produce
integrated analyses at the level of the organisation – but that is simply not
possible, as any correlation at organisation level might be coincidental. What
we can do is offer tools to show what would happen if the same
interrelationships applied to the organisation as to the aggregate.

Benchmarking will never be an entirely mechanical process. At most, we may
be able to say what the chances are that something or other happens if the
organisation pushes this button or that. But there is definitely room for
improvement in the reporting tool. In addition to identifying the relationships
between outcomes, an organisation should also be able to get the five best
scores for each building block at the click of a mouse or make its own analysis
of differences between organisational units. It should also be able to access
suggestions for improvement if it scores low, or to select dummy tables to help
print outcomes that include columns to indicate the organisation’s proposed
actions for improvement and who will be implementing them.

8.7 Continuous benchmarking


ActiZ’s request for a new VVT benchmark was not just about the integration of
the various benchmarks for nursing homes, care homes and home care but also
about the creation of a continuous benchmark. Continuous benchmarking
implies a system allowing organisations to decide when to start the
benchmarking exercise themselves, using a web-based application to complete
and return questionnaires without any intervention from consultants. Data
submitted are automatically compared with averages and best practices in the
database, and organisations have immediate and automatic access to these
analyses.
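
The automatic comparison step at the heart of such a system could be sketched as follows. Indicator names and scores are invented, and ‘best practice’ is simplified to the highest value (for a cost indicator it would of course be the lowest).

```python
import statistics

def compare_to_database(submission, database):
    """Place a newly submitted score against the running database: the mean
    of earlier submissions and the best practice (here: highest value)."""
    report = {}
    for indicator, own in submission.items():
        history = database.get(indicator, [])
        report[indicator] = {
            "own": own,
            "average": round(statistics.mean(history), 2) if history else None,
            "best_practice": max(history) if history else None,
        }
        database.setdefault(indicator, []).append(own)  # fresh data enters the pool
    return report

db = {"client_score": [7.0, 7.6, 8.1]}
print(compare_to_database({"client_score": 7.4}, db))
# {'client_score': {'own': 7.4, 'average': 7.57, 'best_practice': 8.1}}
```

Note that each submission immediately enriches the database, which is what makes the benchmark continuous rather than round-based.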

There are many advantages to such a system, perhaps the most important
being that the process will be shortened considerably. One-time
benchmarking, by contrast, always needs a certain amount of time for sign-up
and data collection, allowing for a pre-agreed timeline. Researchers
subsequently need time for analysis, even if they are using ICT. Moreover, a
continuous benchmark allows organisations to benchmark when they see fit,
making for optimum embedding in their own management cycles.

A third benefit is that there is no longer any need for organisations to wait for
the next benchmarking exercise to roll around. If an organisation wants to
know whether its improvement measures have had any effect, nothing need
keep it from investigating this by starting a new benchmarking exercise – even
when other organisations benchmark less frequently. In the case of repeat
benchmarks, organisations may choose not to participate in all building blocks
but to use their most recent benchmark figures as a reference point. The system
should also reduce the costs of benchmark participation – an advantage not to
be underestimated.

The aim is to create a database enabling automatic processing of data, either
immediately after input or periodically. In addition, the system should
calculate best practices and countrywide trends at regular intervals. What it
will look like in detail is as yet unknown. A number of issues still need
resolving, such as data validation – in-built automatic testing? – and the vexed
matter of constantly moving averages: even if it records the exact same
performance, an organisation might score above average in one benchmarking
exercise and below average in the next. This issue might perhaps be addressed
by adding another comparison based on fixed values alongside that based on
performances by other organisations.
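
The suggested dual comparison could work along these lines; a sketch with an invented norm value:

```python
def dual_comparison(own, peer_scores, fixed_norm):
    """Judge a score both against the (moving) peer average and against a
    fixed norm, so identical performance cannot flip from 'above' to 'below'
    merely because the peer group shifted."""
    peer_avg = sum(peer_scores) / len(peer_scores)
    def verdict(reference):
        if own > reference:
            return "above"
        if own < reference:
            return "below"
        return "equal"
    return {"vs_peers": verdict(peer_avg), "vs_fixed_norm": verdict(fixed_norm)}

# The same performance (7.5) in two rounds with different peer groups:
print(dual_comparison(7.5, [7.0, 7.2, 7.4], fixed_norm=7.5))
print(dual_comparison(7.5, [7.6, 7.8, 8.0], fixed_norm=7.5))
# vs_peers flips from 'above' to 'below'; vs_fixed_norm stays 'equal'
```

The fixed-norm verdict is stable across rounds, which is exactly the reassurance the moving-average problem calls for.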

A primary requisite of a continuous benchmark, however, is that enough data
be available to set up a reliable database. Previous benchmarks are of only
limited use here, as numerous changes have since been made to both
organisational funding and benchmarking tools. This means that at least one
group of organisations will have to benchmark simultaneously for the
database to be filled with their data. Once up and running, the database can
add fresh data from repeat participants and/or data from new participants,
with older data periodically removed.

8.8 Simplified data supply


To simplify data supply for participating organisations, the continuous
benchmark could – with the permission of the organisations themselves, of
course – draw on other sources, e.g. the annual healthcare review database.

At the time of writing this report, healthcare providers were expected to be
obliged to publish an annual care review with effect from the 2007 financial
year. But even before this, VGN and ActiZ were scouring the annual healthcare
report database for data that might be useful in a continuous benchmark.
Expectations should not run too high, though: annual healthcare reporting is
not yet compulsory and some data, even if featuring in the annual care review,
will not be available in the short run. Another problem we see here is that the
annual care review has data at the aggregate level of the organisation, whereas
benchmarking requires lower-level data.

All that said, further streamlining is a good thing. Moreover, aggregate data
are often collated from lower-level data, and as long as definitions are the same
it should be less of a problem for organisations to supply these data than any
other information that might be requested.

8.9 Introduction of XBRL


Because an organisation’s records should be as closely aligned to national data
requests as possible, a number of organisations have been arguing the merits of
introducing the XBRL open standard, initially only for financial data but
eventually also for other quantitative data.

The Ministry of Health, Welfare and Sport has now joined the Dutch Taxonomy
Project, a collaborative venture between the Dutch justice and finance
ministries that is looking to standardise and simplify the financial information
that organisations are expected to supply to the government. Working
together with Dutch trade and industry, intermediary bodies such as
accountancy firms, trust offices, tax advisers and software suppliers, the
ministries are developing a Dutch XBRL taxonomy (XBRL stands for eXtensible
Business Reporting Language). This taxonomy is a data dictionary that is built
into financial software. Marking the relevant data in an organisation’s records,
XBRL allows for rapid and efficient collation of data for accountability
purposes and simplifies electronic data exchange with the government
through re-use of information. Data become easy to collect, exchange
electronically, analyse and, if need be, process further. Partners in the project,
including PricewaterhouseCoopers, have signed a covenant committing to the
creation of an XBRL taxonomy – and the reduction in accountancy fees that this
makes possible.

The project aims to ease the administrative burden: the government creates a
basic structure, while intermediary bodies adjust their own organisations and
infrastructures to the taxonomy in good time.

XBRL breaks down into three distinct elements:

• Taxonomy: A taxonomy is a bit like a dictionary or an index. It describes all
the potential financial data or elements as well as the relationships –
statistical or otherwise – between them. A taxonomy describes which data
feature in the document and where.
• Instance document: The XBRL-generated document or ‘instance document’
contains the actual financial data as described in the taxonomy. Data users
can put together their own reports by reading instance documents in their
own, XBRL-compliant applications.
• Style sheet: An instance document is not readily readable and needs to be
presented correctly. Style sheets may be used to present information in a
specific way, e.g. annual accounts in more than one language. Style sheets
also allow for conversion into other formats such as PDF or HTML
webpages.
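
The split between taxonomy and instance document can be illustrated with a toy example. The following is emphatically not valid XBRL – real instances reference a published taxonomy schema and carry contexts and units, and the element names here are invented – but it shows the principle of a data dictionary constraining a generated document:

```python
import xml.etree.ElementTree as ET

# Toy data dictionary standing in for a taxonomy -- element names invented,
# and this is NOT valid XBRL (no schema, contexts or units).
taxonomy = {"Revenue": "monetary", "StaffCosts": "monetary"}

def build_instance(facts):
    """Generate a minimal 'instance document': only elements defined in the
    taxonomy may appear, mirroring how XBRL constrains reported data."""
    root = ET.Element("instance")
    for name, value in facts.items():
        if name not in taxonomy:
            raise ValueError(f"{name} is not defined in the taxonomy")
        ET.SubElement(root, name).text = str(value)
    return ET.tostring(root, encoding="unicode")

xml_doc = build_instance({"Revenue": 1200000, "StaffCosts": 800000})
print(xml_doc)
```

A style sheet (e.g. an XSLT transformation) would then render such a document as HTML or PDF for human readers.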

The annual healthcare review and benchmarking also require other data, such
as client and employee numbers. It is possible to develop software drawing on
XML (eXtensible Markup Language, the language underpinning XBRL) to allow
relatively easy integration of all the requisite information from a care
provider’s systems.
Appendices
A Steering committee and
sounding-board group

Members of the steering committee


Representing HEAD
G. van Berlo Alysis Zorggroep Arnhem
H. Bonté Trivium Zorggroep Hengelo
P. van der Wijk Delfzicht Ziekenhuis Delfzijl

Representing PwC

R.J. Poerstamper PwC Utrecht


G.J. Postma PwC Utrecht

Members of the sounding-board group


Representing HEAD
A. Members
Sector: Hospitals and rehabilitation centres
J.A. Naaktgeboren Ziekenhuis St. Jansdal Harderwijk
Sector: Nursing and care
A. Nijholt VZC LindeStede Wolvega
Sector: Care for the disabled
G.A. Born ASVZ Groep Leerdam
Sector: Mental healthcare
G.J. Meijerhof De Geestgronden Bennebroek

B. Quality managers
Cure
G. Gerritsen Alysis Zorggroep Arnhem
Care
M. Roelink Amerpoort ASVZ Baarn
C. Board
Care
B. van den Dungen Viataal Sint Michielsgestel
Cure
L. van Eijck Alysis Zorggroep Arnhem
D. Industry associations
Care
M. Straks ActiZ Utrecht
M. Dopper VGN Utrecht
E. Personnel manager
L. De Braal GGZ Regio Breda Breda

Representing PwC

G.J. Postma PwC Utrecht


A. Veltman PwC Utrecht
B Bibliography

Literature/reports
• Accenture study entitled Assessment of Benchmarking Within Government,
quoted in Accenture press release dated 31 July 2006.
• Argyris, C. and Schon, D. (1978) Organizational Learning: A Theory of Action
Perspective. Addison-Wesley.
• Arcares (April, 2002) Eerste test benchmarkinstrumentarium. Algemeen rapport.
Utrecht. (First pilot of benchmarking tools. General report, in Dutch only)
• Arcares (November, 2002) Tweede test benchmarkinstrumentarium.
Algemeen rapport. Utrecht. (Second pilot of benchmarking tools. General
report, in Dutch only)
• Arcares (February, 2003) Tweede test benchmarkinstrumentarium, onderzoek
toepasbaarheid benchmarkinstrumentarium in extramurale zorg. Algemeen
rapport. Utrecht. (Second pilot of benchmarking tools, review of
applicability benchmarking tools in outpatient care. General report, in
Dutch only)
• Arcares (February, 2004) Benchmark verpleeg- en verzorgingshuizen 2003.
Prestaties van zorgaanbieders gemeten. Utrecht. (2003 nursing and care homes
benchmark: Measuring performance of care providers, in Dutch only)
• Arcares (November, 2005) Benchmark verpleeg- en verzorgingshuizen 2004/2005.
Prestaties van zorgaanbieders gemeten. Utrecht. (2004/2005 nursing and care
homes benchmark: Measuring performance of care providers, in Dutch
only)
• Arcares (May, 2006) Rapportage vooronderzoek continue benchmark V&V.
Utrecht. (Report on preliminary investigations into continuous benchmark
of nursing and care, in Dutch only)
• Berg, S. et al. (March, 2006) Water Benchmarking Support System: Survey of
Benchmarking Methodologies (abstract), Public Utility Research Center,
University of Florida.
• Bendell, T. et al. (1998) Benchmarking for competitive advantage, London.
• Bentlage F., Boelens J.B. and Kip J.A.M., De excellente overheidsorganisatie,
Kluwer, 1998. (The excellent government organisation, in Dutch only)
• Bullivant, J. (1994) Benchmarking for Continuous Improvement in the Public Sector,
Longman, Harlow.
• Camp, R., (1989) Benchmarking: The Search for Industry Best Practices that Lead to
Superior Performance. ASQ Quality Press.

• Consumentenbond, Consumentengids, December 2006. (Dutch consumers’
association, December 2006 issue of its consumer guide magazine, in Dutch
only)
• Cowper, J. and Samuels, M. (1996) ‘Performance benchmarking in the public
sector: the United Kingdom experience’. In: Trosa, S. (ed.) Benchmarking,
Evaluation and Strategic Management in the Public Sector, OECD, Oxford.
• Customers Choice and PricewaterhouseCoopers (December, 2004): Europese
benchmark in zorginstellingen voor ouderen (test in drie instellingen), Nijmegen.
(European benchmark in elderly care homes (benchmarking three
organisations), in Dutch only)
• Daft, R., Understanding Management, Fort Worth, 1995.
• Eagar, K. et al. (August, 2000). ‘Towards National Benchmarks for Australian
Mental Health Services’. Information Strategy Committee Discussion Paper
No. 4.
• Eenennaam, F. van and van der Zwart R.A. (1996) ‘Benchmarking’. In Dury C.
(ed.) Management Accounting Handbook. Oxford, Butterworth-Heinemann.
• Ernst & Young (December, 2003) Health Sciences Digest. Nr. 4, Amsterdam.
• Gangelen, J. van (May, 2005) Benchmarken in de Openbare Sector. De bijdrage van
benchmarken aan organisatieleren. Erasmus Universiteit, Rotterdam.
(Benchmarking in the public sector. How benchmarking contributes to
organisational learning. In Dutch only)
• Groot, H. de, Goudriaan, R., Hoogwout, M., Alexander - de Jong, A. and
Poerstamper, R.J. (2004) Benchmarking in de publieke sector, vergelijken van
prestaties als management tool. Public Controlling Reeks, Sdu Uitgevers.
(Benchmarking in the public sector: using performance comparison as
management tool, in Dutch only)
• Grayson, C. Jackson Jr (1994), ‘Back to the basics of benchmarking’, Quality,
Vol. 33 No.5, pp. 20-2.
• Grotenhuis, F. (2001), Patterns of acculturation in technology acquisitions,
dissertation, Rijksuniversiteit Groningen.
• Harrington, H. and Harrington J. (1996) High Performance Benchmarking,
McGraw Hill, New York.
• Helgason, S., International benchmarking experiences from OECD countries.
Conference on international benchmarking, Copenhagen, 20-21 February
1997.
• HEAD Association and PricewaterhouseCoopers (2006) Laat zien wat uw zorg
heeft betekend. (Showing the value of your care, in Dutch only)
• Healthcare Commission (2006) 2006 Community Mental Health Survey. South
London & Maudsley NHS Trust.

• Kaczmarek, D.S. and Metcalfe, G.R. (October, 2005) ‘Benchmarking supply
expenses: the devil’s in the definition. Why hasn’t the healthcare industry
been able to fix the disconnect between the supply chain and the revenue
cycle? Maybe a generally accepted definition of “supply expense” would be
a start.’ Healthcare Financial Management.
• Klages, H., (1997) ‘Benchmarking of public services in the United Kingdom
and Sweden – Commentary’. In: OECD/OCDE: Benchmarking, Evaluation and
Strategic Management in the Public Sector.
• Keehley, P. et al. (1996) Benchmarking for best practices in the public sector:
Achieving performance breakthroughs in federal, state, and local agencies.
San Francisco/London.
• Kets de Vries, M.F.R. (1993) Organizations on the Couch: Perspectives on
Organizational Behaviour and Change. Jossey-Bass.
• Ministerie van Binnenlandse Zaken en Koninkrijksrelaties (Dutch Ministry
of the Interior and Kingdom Relations) (May, 2004) Handreiking Prestatieverge-
lijking binnen de Openbare Sector. The Hague. (Guide to performance
comparison in the public sector, in Dutch only)
• Mulder, E. and De Loor, M. (June, 2005) Leren door benchmarken. Een
handreiking voor de ggz. GGZ Nederland, Amersfoort. (Learning through
benchmarking. A guide for the mental healthcare sector, in Dutch only)
• Nieboer A. et al (2005) Benchmark CVA-ketens. Eindrapportage. Erasmus
MC/Prismant, Utrecht. (Benchmarking the CVA chain. Final report, in
Dutch only)
• Nieboer, A. et al. (April, 2005) Stroke services gespiegeld. Hoofdrapport haalbaar-
heidsstudie benchmark CVA-ketens. Eindrapportage. Erasmus MC/Prismant,
Utrecht. (Comparing stroke services. Main report on feasibility study into
CVA chain benchmarking. Final report, in Dutch only)
• Odenthal, L. and Van Vijfeijken, H. (January, 2006) Verschillen vergelijken.
Handreikingen voor benchmarking en het gebruik van benchmarks. CPS,
Amersfoort. (Comparing differences. Guide to benchmarking and the use of
benchmarks, in Dutch only)
• Organisation for Economic Co-operation and Development (January, 2002)
‘Improving the Performance of Health Care Systems: From Measures to
Action (A Review of Experiences in Four OECD Countries)’, Labour Market and
Social Policy Occasional Papers no. 57. Paris.
• PricewaterhouseCoopers and Berenschot (Autumn, 1998) Toepassing
benchmarkanalysemodel voor sector verpleging en verzorging gefaseerd en onder
randvoorwaarden haalbaar. Utrecht. (Application of benchmarking model in
nursing and care feasible – phased and within set parameters, in Dutch
only)

• PricewaterhouseCoopers and Berenschot (March, 1999) Benchmarkonderzoek
thuiszorg biedt aanknopingspunten voor instellingen en overheid. Utrecht.
(Benchmark study of home care sector offers opportunities for government
and organisations, in Dutch only)
• PricewaterhouseCoopers (September, 2002) Benchmarkonderzoek zorgkantoren
fase II omvat ontwikkeling analysemodel, kengetallen en instrumentarium. Almere.
(Benchmark study of healthcare administration agencies, Phase II,
consisting of developing model, key figures and tools, in Dutch only)
• PricewaterhouseCoopers (June, 2003) Gedragen kostprijsmodel gehandicap-
tenzorg vormt basis voor bruikbare en integrale kostprijsberekening. Utrecht. (Cost
price model in care for disabled sector serves as basis for useful and
integrated costing, in Dutch only)
• PricewaterhouseCoopers (June, 2003) Analysemodel, instrumentarium en
pilotbenchmark bieden solide basis voor integrale benchmark zorgkantoren, Generiek
eindrapport fase III. Met appendix. Utrecht. (Model, tools and pilot benchmark
provide solid basis for integrated benchmark for healthcare administration
agencies. Generic final report Phase III. Including Appendix. In Dutch only)
• PricewaterhouseCoopers (October, 2003) Kostprijsmodel verpleeg- en ver-
zorgingshuizen en benchmarkdata 2002 leiden tot eerste inzicht in parameters en
kostprijzen. Utrecht. (Cost price model for nursing and care homes and 2002
benchmark data provide early insight in parameters and costs, in Dutch
only)
• PricewaterhouseCoopers (May, 2004) Toepassing kostprijsmodel gehandicapten-
zorg in testbenchmark biedt inzicht in kostprijzen en relevante spiegelinformatie.
Utrecht. (Application of cost price model for disabled care in test
benchmark provides insight in costs and relevant comparative data, in
Dutch only)
• PricewaterhouseCoopers (January, 2005) Testbenchmark gehandicaptenzorg
2004. Brancherapportage. Utrecht. (2004 test benchmark in care for the
disabled sector. Industry report, in Dutch only)
• PricewaterhouseCoopers and IWS (April, 2005) Medewerkers positiever over
werkomstandigheden. Lidinstellingen LVT scoren bij medewerkerraadpleging 2004
hoger dan bij survey 2002. Amstelveen/Utrecht. (Employees more positive
about working conditions. LVT member organisation score higher in 2004
employee motivation survey than in 2002 EMS, in Dutch only)
• PricewaterhouseCoopers (May, 2005) Continue benchmarkverpleeg- en verzor-
gingshuizen. Instellingsspecifieke rapportage. Utrecht. (Continuous
benchmarking nursing and care homes. Organisation-specific report, in
Dutch only)

• PricewaterhouseCoopers (June, 2005) Generiek rapport eerste benchmark
zorgkantoren. Kostenmeting 2003 en cliëntensurvey 2004. Utrecht. (Generic report
first benchmark healthcare administration agencies. 2003 cost check and
2004 client survey, in Dutch only)
• PricewaterhouseCoopers (June, 2005) Individueel rapport eerste benchmark
zorgkantoren. Kostenmeting 2003 en cliëntensurvey 2004. Utrecht. (Individual
report first benchmark healthcare administration agencies. 2003 cost
check and 2004 client survey, in Dutch only)
• PricewaterhouseCoopers (September, 2005) Brancherapport Z-org benchmar-
konderzoek thuiszorg 2004. Utrecht. (Z-org industry report on 2004 benchmark
study into home care, in Dutch only)
• PricewaterhouseCoopers (November, 2005) Benchmark Beroepsonderwijs.
Stuurinformatie voor strategische thema’s, optimaal perspectief als groeimodel.
Utrecht. (Benchmarking vocational education: management information
for strategic themes, optimum perspective as model for growth, in Dutch
only)
• PricewaterhouseCoopers (January, 2006) Brancherapport Z-org benchmarkon-
derzoek jeugdgezondheidszorg 0-4-jarigen. Utrecht. (Z-org industry report on
benchmark study into healthcare to children between 0 and 4 years of age,
in Dutch only)
• PricewaterhouseCoopers (December, 2006) Eerste fase benchmark middelbaar
beroepsonderwijs afgerond. Utrecht. (Benchmarking vocational education:
completion first phase, in Dutch only)
• Putters, K., Frissen, P.H.A., and Foekema, H. (2006) Zorg om vernieuwing. TNS
NIPO/Tilburg School for Politics and Public Administration, University of
Tilburg (Care for innovation, in Dutch only)
• PwC Consulting (March, 2002) Benchmarkonderzoek 2000 verscherpt inzicht in
prestaties en bedrijfsvoering thuiszorginstellingen. Resultaten benchmarkonderzoek
op sectorniveau. Almere/Utrecht. (2000 benchmark study hones insight into
performance and operations at home care organisations. Outcomes
benchmark study at sector level, in Dutch only)
• PwC Deutsche Revision (2005) Benchmarking Ablauf und Übersicht. Frankfurt.
(Benchmark execution and findings, in German only)
• PwC Deutsche Revision (September, 2006) Erste Ergebnisse Benchmark 2005.
Frankfurt. (Initial benchmark results 2005, in German only)
• Rigby, D. and Bilodeau, B. (2005) Management Tools and Trends 2005. Bain &
Company.
• Pollitt, C., Cave, M. and Joss, R. (1994) ‘International benchmarking as a tool
to improve public sector performance - A critical overview’, in: OECD,
Performance measurement in government - Issues and illustrations. Paris.
• PricewaterhouseCoopers, EMEA HC, Benchmarking Hospitals, presentation
May, 2006.
• RVZ (1998) Maatschappelijk ondernemen in de zorg. Background paper. (Social
enterprise in Dutch healthcare, in Dutch only)
• Senge, P.M. (1990) The Fifth Discipline: The Art & Practice of the Learning
Organization. Doubleday.
• Steehouder, M. et al. (1984) Leren Communiceren. Wolters-Noordhoff,
Groningen. (Learning to communicate, in Dutch only)
• Swieringa, J. and Wierdsma, A.F.M. (1992) Becoming a Learning Organisation.
Addison Wesley.
• Swinkels, G.J.P. De Vrind, H.J.A. and Boerkamp, M.A. (1998) ‘Benchmarking:
Stimulation or Control?’ Primavera Working Paper 1998-16, Universiteit van
Amsterdam Business School.
• Treacy, M. and Wiersema, F. (1996) The Discipline of Market Leaders: Choose Your
Customers, Narrow Your Focus, Dominate Your Market, Perseus Books, New York.
• Vaidya, K. et al. (2004) ‘Towards a Model for Measuring the Performance of
e-Procurement Initiatives in the Australian Public Sector: A Balanced
Scorecard Approach’. Proceedings of the Australian Electronic Governance
Conference, April 14-15. Melbourne.
• Vereniging Gehandicaptenzorg Nederland (VGN, Association for Care of the
Disabled in the Netherlands) (2007). Various benchmarking brochures and
information packs.
• Vriend, G.K. de and Timmerman, A. (1997) Benchmarking, een strategie om con-
currentievoordeel te behalen. Kluwer Bedrijfsinformatie. (Benchmarking for a
competitive edge, in Dutch only)
• Vries de, J. and Van der Togt, J. (1995) Benchmarking in 9 stappen, Deventer.
(Benchmarking in nine steps, in Dutch only)
• Waal, A.A. de (2005) ‘Het tijdperk van de externe concurrentie is
aangebroken’. Kluwer Management, November/December. (The time of
external competition has come, in Dutch only)
• Waal, A.A. de and Ardon, A.J. (2002) ‘Hoe prestatiegedreven is uw
zorginstelling?’ Zorginstellingen, May 2002. (How performance-driven is your
healthcare organisation? In Dutch only)
• Waal, A.A. de (2003) On the road to Nirvana. Hyperion Solutions Nederland BV,
Utrecht.
• Waal, A.A. de (2006) The Characteristics of a High Performance Organisation.
• Watson, G.H. (1993) Strategic Benchmarking: How to Rate Your Company’s
Performance against the World’s Best. Wiley.

Internal PricewaterhouseCoopers presentations


• Alexander-De Jong, A. (March, 2006) Benchmarking in the Dutch Healthcare
Sector.
• PricewaterhouseCoopers (various years) Territory benchmarking examples.

Internal materials
• Internal materials of the industry associations and
PricewaterhouseCoopers.

Websites
• http://www.12manage.com/methods_valuedisciplines_nl.html
• http://hcro.enigma.co.nz/website/print_issue.cfm?issueid=61
• http://kb.webebi.com/article.aspx?id=10003&cNode=5K3B4O
• http://nation.ittefaq.com/artman/publish/printer_29837.shtml
• http://www.allbusiness.com/management/benchmarking/491524-1.html
• http://www.firstyear.org/fyi/detrickandpica.html
• http://www.fortherecordmag.com/archives/ftr_04172006p22.shtml
• http://www.inova.org/inovapublic.srt/news/pressreleases/benchmarkIFH.
html
• http://www.management-development.com/organizational_performance
• http://www.managementstart.nl
• http://www.totalbenchmarksolution.com
• http://www.uow.edu.au/arts/sts/bmartin/dissent/documents/health/bench
mark.html
C Benchmark studies

Table C-1: Healthcare benchmarks with involvement of PricewaterhouseCoopers


Sector | Report | Working with | Nature of the study | Number of participants * | Commissioned by
Nursing and care homes | 1998 | Berenschot | Feasibility study | NA | VWS, NVVZ, Wzf
Home care | 1999 | Berenschot, Stichting Kwaliteit In EigeN huis (KIEN) | First benchmark | 122 | VWS, LVT, BTN
Home care | 2002 | NIVEL | Second benchmark | 106 | LVT, BTN, VWS
Healthcare administration agencies | 2002 | – | Developing model and tools | NA | CvZ
Nursing and care homes | 2002 | ATOS Beleidsadvies en -onderzoek B.V., Cliënt & Kwaliteit, Customers Choice, Economic Programs B.V., Van Loveren & Partners B.V., Prismant | First test | 90 | VWS, Arcares
Nursing and care homes | 2003 | ATOS Beleidsadvies en -onderzoek B.V., Cliënt & Kwaliteit, Customers Choice, Economic Programs B.V., Van Loveren & Partners B.V., Prismant | Second test | 119 | VWS, Arcares
Nursing and care homes | 2003 | Customers Choice, Van Loveren & Partners B.V. | Applicability outpatient care | 21 | VWS, Arcares
Healthcare administration agencies | 2003 | NIVEL | Pilot | 7 | CvZ
Nursing and care homes | 2004 | See second test | First countrywide rollout | 100 | Arcares
Care for the disabled | 2004/2005 | Customers Choice, Economic Programs and Prismant | Test | 28 | VGN
Nursing and care homes | 2004 | Customers Choice | Exploratory data gathering in the Netherlands, Belgium and Germany | 3 | NA
Healthcare administration agencies | 2005 | NIVEL | Countrywide rollout | 32 | CvZ
Home care | 2005 | Desan, IWS, NIVEL and TNO | Third benchmark | 82 | Z-org
Nursing and care homes | 2004/2005 | See second test | Second countrywide rollout | 75 | Arcares
Child healthcare 0-4 years | 2006 | – | Financial only | 26 | LVT
Child healthcare 0-19 years | 2006 | Van Naem & Partners | Plan of action | NA | LVT, GGD Nederland
Care for the disabled | 2007 | Research voor Beleid, NIVEL and NIZW | First countrywide rollout | 108 | VGN
Nursing, care and home care | Continuous | CBO, Desan | Integrated countrywide VVT benchmark | As yet unknown | ActiZ
Crossover healthcare (by sector) | Periodically | – | Treasury only | 30 | Organisations
Vocational education | 2005/2006 | Kenniscentrum Beroepsonderwijs en Arbeidsmarkt | First benchmark | 67 | MBO Raad
Vocational education | 2007 | Kenniscentrum Beroepsonderwijs en Arbeidsmarkt | Follow-up benchmark | 67 | MBO Raad
Housing corporations | 2007 | – | Pilot | 15 | Corporations

* A number of benchmarks allowed for participation in a limited number of building blocks.

Consultants and agencies we have worked with or currently work with
are typically involved at the general set-up and creation stage of the
benchmark, while at the same time taking on responsibility for a specific
building block or task:
• General: Berenschot, Van Naem & Partners
• Client surveys: Cliënt & Kwaliteit, KIEN, NIVEL, NIZW
• Employee surveys: ATOS, IWS, Prismant, Research voor Beleid
• Operations/process management, strategic positioning: Prismant,
TNO
• Innovation: CBO
• Measuring care complexity elderly care: Van Loveren & Partners
• Care and treatment registration (time-keeping): Customers Choice
• ICT and analysis: Economic Programs, Desan
• Financial: Van Naem & Partners (child healthcare 4-19 years)

About the authors

Robbert-Jan Poerstamper is the partner ultimately responsible for all
healthcare benchmarks discussed in this report. He has been involved in
the development and implementation of benchmarks since 1996.

Anneke van Mourik - van Herk has also been involved in healthcare
benchmarks since 1996, with analyses and reporting as her specific field
of expertise in addition to benchmark development.

Aafke Veltman has been involved in benchmark studies since 2006.

All three authors work at PricewaterhouseCoopers.


"& Benchmarking in Dutch healthcare

*connectedthinking

PricewaterhouseCoopers (www.pwc.com) provides industry-focused assurance,
tax and advisory services to build public trust and enhance value for its clients
and their stakeholders. More than 146,000 people in 150 countries work
collaboratively using Connected Thinking to develop fresh perspectives and
practical advice. ‘PricewaterhouseCoopers’ refers to the network of member
firms of PricewaterhouseCoopers International Limited, each of which is a
separate and independent legal entity.

At PricewaterhouseCoopers in the Netherlands, over 4,400 professionals draw
on Connected Thinking to provide sector-specific services and innovative
solutions for large, medium-sized and smaller national and international
companies, governments and not-for-profit organisations. Our worldwide
network has access to a vast amount of knowledge and experience that we
share with each other, with our customers and with their stakeholders. We
seek fresh approaches, make surprising links, are committed and involved,
and collaborate from a position of strength.

PricewaterhouseCoopers and healthcare

PricewaterhouseCoopers has active policies for what it considers pivotal
industries, and healthcare is one of them. The healthcare sector group is the
largest specialist group within PricewaterhouseCoopers, putting our expertise
to work in healthcare at both national and international level.

We actively identify key trends in healthcare. We monitor local and global
healthcare to help our customers understand the issues, take decisions and
achieve objectives. Among the results of our efforts is HealthCast 2020:
Creating a Sustainable Future, a report setting out our views on future
healthcare developments and trends.
D Dutch healthcare abbreviations
and acronyms

ActiZ the Dutch association for nursing, care and home care
AWBZ Exceptional Medical Expenses Act
CVZ the Health Care Insurance Board
GGD community health services
GGZ the mental healthcare association
HEAD Dutch Association of Finance Managers in Healthcare
IGZ the Dutch Healthcare Inspectorate
JGZ child healthcare
NVZ Dutch Hospitals Association
RVZ the Dutch Council for Public Health and Care
VGN Association for Care of the Disabled in the Netherlands
VVT nursing, care and home care
ZBR care and treatment provided
ZonMw the Netherlands organisation for health research and development
ZZP care complexity module
E Endnotes

1. RVZ (1998) Maatschappelijk ondernemen in de zorg. Background paper.
(Social enterprise in Dutch healthcare, in Dutch only)
2. Government Papers II 2003/2004 29 200 VII nr. 38.
3. Watson, G.H. (1993) Strategic Benchmarking: How to Rate Your
Company's Performance against the World's Best. Wiley. (Read in its
official 1998 Dutch translation)
4. Groot, H. de, Goudriaan, R., Hoogwout, M., Alexander - de Jong, A. and
Poerstamper, R.J. (2004) Benchmarking in de publieke sector, vergelijken
van prestaties als management tool. Public Controlling Reeks, Sdu
Uitgevers. (Benchmarking in the public sector: using performance
comparison as a management tool, in Dutch only).
5. Spendolini, M. (1992) The Benchmarking Book. AMACOM, New York.
6. Camp, R., (1989) Benchmarking: The Search for Industry Best Practices
that Lead to Superior Performance. ASQ Quality Press.
7. Deming, W. Edwards (1988) Out of the Crisis. Cambridge University Press.
8. Watson.
9. Watson.
10. De Groot et al.
11. Gangelen, J. van (May, 2005) Benchmarken in de Openbare Sector. De
bijdrage van benchmarken aan organisatieleren. Erasmus Universiteit,
Rotterdam. (Benchmarking in the public sector. How benchmarking
contributes to organisational learning. In Dutch only).
12. In the 'Promoting Benchmarking within LED' project.
13. Bentlage F., Boelens J.B. and Kip J.A.M., (1998) De excellente
overheidsorganisatie. Kluwer. (The excellent government organisation,
in Dutch only)
14. Keehley, P. et al. (1996) Benchmarking for best practices in the public
sector: Achieving performance breakthroughs in federal, state, and local
agencies. San Francisco/London.
15. Harrington, H. and Harrington J. (1996) High Performance
Benchmarking. McGraw Hill, New York.
16. Daft, R. (1995) Understanding Management. Fort Worth.
17. Bentlage et al.
18. Bullivant, J. (1994) Benchmarking for Continuous Improvement in the
Public Sector. Longman, Harlow.
19. Bendell, T. et al. (1998) Benchmarking for competitive advantage. London.
20. Watson.
21. One notable point is that writers on benchmarking typically refer to the
quality of products or processes and barely touch upon the financial
dimension, quality of the job or the quality of corporate social
responsibility.
22. Watson.
23. Watson.
24. Quoted in our first benchmark proposal.
25. Bendell et al.
26. Rigby, D. and Bilodeau, B. (2005) Management Tools and Trends 2005.
Bain & Company.
27. Rigby and Bilodeau.
28. Defined as comparing processes and performance with internal and
external benchmarks. Companies incorporate identified best practices to
meet improvement targets.
29. Results based on statements by 960 managers in the corporate and
not-for-profit sectors.
30. Rigby and Bilodeau.
31. Defined as comparing efficiency and effectiveness of a process or
processes in one organisation to those in other organisations.
32. Pollitt, C., Cave, M. and Joss, R. (1994) 'International benchmarking as a
tool to improve public sector performance - A critical overview', in: OECD,
Performance measurement in government - Issues and illustrations,
Paris; Helgason, S., International benchmarking experiences from OECD
countries, Conference on international benchmarking, Copenhagen,
20-21 February 1997.
33. De Groot et al.
34. Accenture study entitled Assessment of Benchmarking Within
Government, quoted in Accenture press release dated 31 July 2006.
35. Bendell et al.
36. Hardjono, T. and Hes F. (1994) De Nederlandse kwaliteitsprijs en
onderscheiding. Deventer. (The Dutch Quality Award, in Dutch only)
37. De Groot et al.
38. Bendell et al.
39. Keehley, P. et al. (1996) Benchmarking for best practices in the public
sector: Achieving performance breakthroughs in federal, state, and local
agencies. San Francisco/London.
40. De Vries, J. and Van der Togt, J. (1995) Benchmarking in 9 stappen.
Deventer. (Benchmarking in nine steps, in Dutch only)
41. De Vries and Van der Togt.
42. In fact, this list identifies the key features of excellent performance - see
Section 8.
43. De Vries and Van der Togt.
44. Swinkels, G.J.P., De Vrind, H.J.A. and Boerkamp, M.A. (1998)
'Benchmarking: Stimulation or Control?' Primavera Working Paper
1998-16, Universiteit van Amsterdam Business School.
45. Watson.
46. Van Gangelen.
47. Het Financieele Dagblad, 5 January 2007.
48. Grotenhuis, F. (2001) Patterns of acculturation in technology
acquisitions, dissertation, Rijksuniversiteit Groningen.
49. 2006 Accenture study.
50. 2006 Accenture study.
51. De Groot et al.
52. Vaidya, K. et al. (2004) 'Towards a Model for Measuring the Performance of
e-Procurement Initiatives in the Australian Public Sector: A Balanced
Scorecard Approach'. Proceedings of the Australian Electronic
Governance Conference. April 14-15. Melbourne.
53. Senge, P.M. (1990) The Fifth Discipline: The Art & Practice of the Learning
Organization. Doubleday.
54. Argyris, C. and Schon, D. (1978) Organizational Learning: A Theory of
Action Perspective. Addison-Wesley.
55. Kets de Vries, M.F.R. (1993) Organizations on the Couch: Perspectives on
Organizational Behaviour and Change. Jossey-Bass.
56. De Groot et al.
57. De Groot et al.
58. Organisation for Economic Co-operation and Development (January,
2002) 'Improving the Performance of Health Care Systems: From
Measures to Action (A Review of Experiences in Four OECD Countries)'.
Labour Market and Social Policy Occasional Papers no. 57. Paris.
59. Open questions do not lend themselves to scoring and comparison, but
can be of tremendous help to learning organisations. Our healthcare
benchmarks typically scan such responses in full, with the results then
sent to the organisations as feedback.
60. Bendell et al.
61. Keehley et al.
62. De Groot et al.
63. Bendell et al.
64. De Groot et al.
65. De Groot et al.
66. De Vries and Van der Togt.
67. Cowper, J. and Samuels, M. (1996) 'Performance benchmarking in the
public sector: the United Kingdom experience'. In: Trosa, S. (ed.)
Benchmarking, Evaluation and Strategic Management in the Public
Sector. OECD, Oxford.
68. Grayson, C. Jackson Jr (1994), 'Back to the basics of benchmarking'.
Quality, Vol. 33 No.5, pp. 20-2.
69. Watson.
70. Odenthal, L. and Van Vijfeijken, H. (January, 2006) Verschillen
vergelijken. Handreikingen voor benchmarking en het gebruik van
benchmarks. CPS, Amersfoort. (Comparing differences. Guide to
benchmarking and the use of benchmarks, in Dutch only)
71. Odenthal and Van Vijfeijken.
72. Odenthal and Van Vijfeijken.
73. Watson.
74. Klages, H., (1997) 'Benchmarking of public services in the United Kingdom
and Sweden - Commentary'. In: OECD/OCDE: Benchmarking, Evaluation
and Strategic Management in the Public Sector.
75. The only best practice to come out of the 2000 home care benchmark that
also constituted best practice in 2004.
76. Cowper and Samuels.
77. Klages.
78. According to the Consumer Quality Index, a standard system used in the
Netherlands that takes on board both the client's assessment of a
particular item and the importance they assign to it.
79. Treacy, M. and Wiersema, F. (1996) The Discipline of Market Leaders:
Choose Your Customers, Narrow Your Focus, Dominate Your Market.
Perseus Books, New York.
80. Treacy and Wiersema.
81. Until recently this was based on Zorgbehoeftemeting OuderenZorg (ZOZ)
as developed by Van Loveren & Partners - i.e. a gauge for elderly care needs
- but now part of the care complexity module system.
82. Kaczmarek, D.S. and Metcalfe, G.R. (October, 2005) 'Benchmarking supply
expenses: the devil's in the definition: why hasn't the healthcare industry
been able to fix the disconnect between the supply chain and the revenue
cycle? Maybe a generally accepted definition of "supply expense" would
be a start.' Healthcare Financial Management.
83. E.g. Dementia Care Mapping (DCM) as developed by Tom Kitwood at the
University of Bradford.
84. RAI = Resident Assessment Instrument. Zimmerman, Morris, Hirdes et al.
have developed indicators that Prismant has fleshed out into a
benchmark instrument.
85. Home care benchmark.
86. Pilot benchmark for healthcare administration agencies.
87. Pilot benchmark for healthcare administration agencies.
88. Consistent finding in home care, nursing and care home benchmarks
and in pilot benchmark for healthcare administration agencies.
89. Preliminary finding in care for the disabled pilot benchmark.
90. 2003 nursing and care home benchmark study.
91. 2003 nursing and care home benchmark study.
92. 2003 nursing and care home benchmark study.
93. Some client characteristics that organisations cannot influence in any
way are known to affect scores. In addition, organisations will also get to
see their unweighted scores, to give them a better handle on areas for
improvement.
94. For an overview of benchmarking phases, see Keehley et al.
95. De Vries and Van der Togt.
96. Harrington and Harrington.
97. Bullivant.
98. Keehley et al.
99. De Groot et al.
100. De Groot et al.
101. De Groot et al.
102. Keehley et al.
103. Keehley et al.
104. Keehley et al.
105. Keehley et al.
106. Keehley et al.
107. Keehley et al.
108. Berg, S. et al. (March, 2006) Water Benchmarking Support System: Survey
of Benchmarking Methodologies (abstract). Public Utility Research
Center, University of Florida.
109. E.g. a different text for above-average scores than for below-average
scores.
110. Mulder, E. and De Loor, M. (June, 2005) Leren door benchmarken. Een
handreiking voor de ggz. GGZ Nederland, Amersfoort. (Learning through
benchmarking. A guide for the mental healthcare sector, in Dutch only)
111. Nieboer A. et al. (2005) Benchmark CVA-ketens. Eindrapportage. Erasmus
MC/Prismant, Utrecht. (Benchmarking the CVA chain. Final report, in
Dutch only)
112. ZN Journaal 2006 nr. 45.
113. Putters et al. (TNS)
114. 2006 Accenture study.
115. Healthcare-specific research has since been carried out and the number
of clusters reduced to five. The findings of the follow-up study should be
reported in September 2007.
116. www.management-development.com
117. Tom Peters, author (with Robert Waterman) of In Search of Excellence,
1982.
118. Het Financieele Dagblad, 29 December 2006 and 6 January 2007.
119. Watson.
120. Rigby and Bilodeau.
121. Watson.
122. Bedrijfseconomische Statistieken. Newsletter, Issue 4, June 2006.
123. HEAD Association and PricewaterhouseCoopers (2006) Laat zien wat uw
zorg heeft betekend. (Showing the value of your care, in Dutch only)
