Benchmarking in Dutch healthcare
Towards an excellent organisation
Robbert-Jan Poerstamper
Aafke Veltman
Preface
Contents
Introduction
1 Why benchmark?
1.1 Rationale for benchmarking
1.2 Positioning
1.3 Learning and improving
1.4 Relationships between performances
1.5 Transparency and profile
1.6 Information for industry associations
1.7 Benchmarking and accountability
1.8 Reasons for not benchmarking
1.9 In conclusion
2 Benchmarking: comparing and improving
2.1 Our definition of benchmarking
2.2 Other definitions of benchmarking
2.3 Benchmarking and Total Quality Management
2.4 Definitions: differences and similarities
2.5 History of benchmarking
2.6 Benchmarking: increasingly embedded
2.7 Benchmarking as necessity
3 How to make benchmarking a success
3.1 When is benchmarking an appropriate tool?
3.2 Key success factor 1: Optimise learning
3.3 Key success factor 2: The benchmark model should be broadly based
3.4 Key success factor 3: A multidimensional approach
3.5 Key success factor 4: High-quality tools
3.6 Key success factor 5: Do not leave everything to external consultants
3.7 Key success factor 6: Aligning benchmark to regular records
3.8 Key success factor 7: Sensitive data handling
3.9 Key success factor 8: No compulsory benchmarking
3.10 Key success factor 9: Strength through repetition
4 Different types of benchmarking
4.1 Classification criteria
4.2 Classification by benchmarking objective
4.3 Classification by what is being measured
4.4 Classification by reference group: internal or external benchmarking
4.5 Classification by level of organisation
4.6 Classification by use of normative standards
4.7 Classification by research process
4.8 Profile of a healthcare benchmark model
5 Benchmarking model for healthcare benchmarks
5.1 Input and strategic themes
5.2 Building blocks of benchmark surveys
5.3 The financial building block
5.5 Quality of care
5.6 Quality of the job
5.7 Social responsibility
5.8 Relationship between building blocks
5.9 Best practices
5.10 Explaining performance
5.11 Innovation
5.12 Reporting results
5.13 Benchmark strategic management information
6 The step-by-step benchmarking process
6.1 Benchmarking phases in the public sector
6.2 Keehley’s step-by-step plan
6.3 A phased approach to healthcare benchmarking
7 Healthcare benchmark: notable features
7.1 Nursing, care and home care benchmark
7.2 Child healthcare benchmark
7.3 Healthcare administration agency benchmark
7.4 Benchmarking care for the disabled
7.5 Partial benchmarks in mental healthcare
7.6 Benchmarking the healthcare chain
7.7 Benchmarking Dutch hospitals
8 Innovations in benchmarking
8.1 Towards performance excellence
8.2 Excellence and innovation
8.3 Benchmarking outside the box
8.4 More research into cost-to-reward ratios
8.5 More benchmark partner involvement
8.6 More dynamic reporting
8.7 Continuous benchmarking
8.8 Simplified data supply
8.9 Introduction of XBRL
Appendices
Transparency
• ‘We want to communicate our performance to our clients.’
• ‘We want to present a clear profile to the outside world.’
• ‘We want to be externally accountable.’
• ‘We want to boost the industry’s image.’
• ‘We want to supply management information to our industry association.’
1.2 Positioning
Benchmarking allows comparison of your organisation’s performance and that
of other benchmark participants and provides insight into where you stand
relative to other organisations – e.g. do you rank among the leaders or the
laggards? Benchmarking helps to broaden your perspective and to make your
organisation less inward-looking.
Figure 1.1 gives an example of the kind of information that might arise from a
benchmarking exercise. It is taken from a report submitted by a participant in a
home care benchmark and in this instance relates to client assessment of the
care provider’s accessibility. The blue triangles indicate the scores of the
participating organisations, with the emanating lines demarcating the
confidence intervals. The figure’s horizontal line captures the average score. As
the diamond-shaped symbol shows, the relevant healthcare provider clearly
lags the average.
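The confidence intervals drawn around each score in such a figure follow from standard survey statistics. As a sketch of the idea (the client scores below are invented, and we assume a simple normal-approximation interval rather than the benchmark's actual method):

```python
import math

def mean_ci(scores, z=1.96):
    """Mean client score with an approximate 95% confidence interval."""
    n = len(scores)
    mean = sum(scores) / n
    var = sum((s - mean) ** 2 for s in scores) / (n - 1)  # sample variance
    half = z * math.sqrt(var / n)                          # half-width of the interval
    return mean, mean - half, mean + half

# Invented client scores for one provider (scale 1-10)
scores = [8.0, 7.5, 9.0, 8.5, 7.0, 8.0, 9.5, 8.0]
mean, low, high = mean_ci(scores)
print(f"score {mean:.2f}, 95% CI [{low:.2f}, {high:.2f}]")
```

The fewer clients a provider surveys, the wider this interval becomes, which is why the lines around the triangles differ in length from provider to provider.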
Figure 1.1 Example of benchmark information: client assessment of the care provider’s accessibility, per healthcare provider (score scale 6.5-9.5; average score Z-org 2004: 8.2)
Our second example was taken from the benchmark study on nursing and care
homes, the Benchmark verpleeg- en verzorgingshuizen. Table 1.1 provides a review of
care provided per client in care homes, allowing care providers to compare their
performance with both the average and the best-performing organisations.
Table 1.1 Example of benchmark information: time spent on care in care homes in
minutes per client per day
Time Your organisation Average Best practice
Direct client-facing 77.36 68.64 108.00
Indirect client-facing 10.45 9.48 17.72
Non-client-facing 23.74 21.90 28.37
Total 111.55 100.02 154.09
Source: 2004/2005 nursing and care home benchmark study Benchmark verpleeg- en verzorgingshuizen
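Feedback of the kind shown in Table 1.1 reduces to straightforward arithmetic per line item: the gap to the average and the distance still separating the organisation from best practice. A minimal sketch using the table's own figures (the dictionary layout is ours, not the benchmark's):

```python
# Minutes per client per day, taken from Table 1.1
rows = {
    "direct client-facing":   {"own": 77.36, "average": 68.64, "best": 108.00},
    "indirect client-facing": {"own": 10.45, "average": 9.48,  "best": 17.72},
    "non-client-facing":      {"own": 23.74, "average": 21.90, "best": 28.37},
}

for name, r in rows.items():
    vs_avg = r["own"] - r["average"]    # positive: above the benchmark average
    vs_best = r["best"] - r["own"]      # gap still separating us from best practice
    print(f"{name}: {vs_avg:+.2f} vs average, {vs_best:.2f} short of best practice")
```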
Figure 1.2 Example of benchmark information: areas for improvement in home care, as cited by clients (in percentages of clients citing relevant areas); e.g. provision of care: greater focus on quality of life, 7.8%
Source: 2004 home care benchmark Benchmark thuiszorg
If previous benchmarking has been carried out, a benchmark will also produce
a comparison over time. Figure 1.3, for instance, shows workforce assessments
of their working conditions. A ‘traffic-light’ system highlights performances
and immediately shows up areas where improvements have – or have not –
been made.
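Such a traffic-light presentation can be produced by comparing each score with the previous round against a tolerance margin. A minimal sketch (the threshold is an illustrative assumption, not the one used in the actual benchmark):

```python
def traffic_light(current, previous, margin=0.2):
    """Classify movement in a score since the previous benchmark round."""
    if current >= previous + margin:
        return "green"   # clear improvement
    if current <= previous - margin:
        return "red"     # clear deterioration
    return "amber"       # broadly unchanged

# Invented employee scores on working conditions in two successive rounds
print(traffic_light(7.4, 6.9))  # → green
print(traffic_light(6.5, 6.9))  # → red
```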
Figure 1.3 Workforce assessments of their working conditions (wellbeing), presented in a traffic-light system
Figure 1.4 Example of industry information: breakdown of costs per child per year, child healthcare (JGZ), 0-4 years; average total costs per child € 267
Source: 2005 JGZ financial benchmark
Healthcare benchmarks may thus help to instil and increase trust in the health
sector, allowing governments, regulators and financial backers to adopt a less
interventionist approach. The Raad voor de Volksgezondheid en Zorg (RVZ),
the Dutch Council for Public Health and Care, has found that benchmarking
changes the nature and intensity of the relationship between government,
market and private enterprise.
If benchmark results are used to hold them to account, organisations could shy
away from proper use of benchmarks, some argue. Organisations would hold
back information or make things look better than they really are. They would
display strategic behaviour and thus totally disrupt any learning curve.
Nonsense, others say. Any self-respecting organisation will compare and report
without reservations. Any organisation that does not do so is not really
prepared to learn and will go on the defensive if they do not like the scores. In
its report Presteren door excelleren (‘Performing by excelling’), the Dutch
government explicitly linked performance improvement and
transparency/accountability, arguing that benchmarking should serve both
purposes.
The authors of Benchmarking in de publieke sector (‘Benchmarking in the public
sector’) would seem to consider learning and being held accountable as a
gradated difference. ‘A choice can be made for a broader or a narrower
perspective. A broader perspective implies measuring and improving, with the
learning curve a vital ingredient for benchmarking organisations. For the
rather more limited purpose of accountability, many public organisations can
stick to benchmarking in its narrowest sense: comparison with a benchmark as
a means of determining relative performance.’
Healthcare benchmarks span the whole range of these views. To an extent, one
key factor will be whether the benchmark in the relevant sector is still
developing. If it is, organisations may wonder if its outcomes are sufficiently
valid and reliable to be used for accountability purposes.
We take the view that learning and accountability are fundamentally different
goals. That said, at least some of the information disclosed is exactly the same.
As accountability is required anyway, whether they benchmark or not,
healthcare providers had best make sure that the required data are
streamlined as much as is feasible and that definitions are harmonised. If they
do not, benchmark participants are in danger of having to disclose a
completely different set of data for accountability purposes – not exactly an
encouraging scenario, and one that would put a double burden on healthcare
providers.
The best way to go is obviously to collect the same data for both benchmarking
and accountability purposes wherever possible. Aggregation levels will
typically differ, with disclosures to regulators aggregated at high levels – e.g. at
the level of the organisation – while benchmarking requires lower levels of
aggregation as its outcomes are intended to feed into actions for improvement.
A basic set of data thus emerges for use towards both learning and
accountability, supplemented where applicable with data used for one of these
purposes only.
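The difference in aggregation levels can be made concrete with a toy example: one shared set of base records is grouped per team for improvement work, and rolled up to a single organisation-level figure for the regulator. Field names and numbers are invented:

```python
from collections import defaultdict

# One shared base set of records: care minutes registered per team
records = [
    {"team": "A", "minutes": 310},
    {"team": "A", "minutes": 290},
    {"team": "B", "minutes": 150},
    {"team": "B", "minutes": 170},
]

# Benchmarking: low aggregation level, per team, to steer improvement
per_team = defaultdict(int)
for r in records:
    per_team[r["team"]] += r["minutes"]

# Accountability: the same data aggregated to organisation level
organisation_total = sum(r["minutes"] for r in records)

print(dict(per_team))      # → {'A': 600, 'B': 320}
print(organisation_total)  # → 920
```

Because both figures derive from the same base records, no second data collection is needed for the regulatory disclosure.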
1.9 In conclusion
It is our experience that benchmarking can produce many rewards for
organisations, with gauging their position and identifying areas for
improvement being the most basic. Transparency, image improvement and
input towards policy-making are other benefits. Accountability and regulatory
disclosures are fundamentally different from learning and improving, but
aligning data sets is vital if organisations are not to face a double burden.
2 Benchmarking: comparing and improving
This second section sets out the various definitions of benchmarking found in
the literature. ‘Comparing’, ‘learning’ and ‘improving’ typically crop up in
almost all of them. The section also touches on the relationship between
benchmarking and Total Quality Management and describes how
benchmarking has developed into a management tool that is widely used
across the world and has now also made inroads into the public and healthcare
sectors.
Spendolini defines benchmarking as a ‘continuous, systematic process for
evaluating products, services, and work processes of organisations that are
recognized as representing best practices for the purpose of organisational
improvement’. Benchmarking involves continuous measuring of trends and
developments on the basis of a series of activities, with learning not just the
product of measuring (quantitative) but also involving investigating
(qualitative). Benchmarking is not restricted to specific types of activities or
organisations, and preliminary research should help narrow down the list of
suitable benchmark partners, i.e. those that excel in the product or process to
be studied. Learning should be action-oriented, that is to say, it should lead
somewhere.
Camp defines benchmarking as systematically investigating the performance
and underlying processes and practices of one or more leading reference
organisations in a particular field, and comparing one’s own performances
with these best practices, with the aim of identifying one’s own position and
improving one’s own performance.
Westinghouse calls benchmarking a ‘continuous search for and application of
significantly better practices that leads to superior competitive performance’.
Note that the Westinghouse definition is rather more ambitious than that of
most others. We will return to the use of benchmarking to achieve superior
performance in Section 8.
In Benchmarking in de publieke sector the authors describe benchmarking as
creating insight into the relative performance of organisations within a group
through comparison with a benchmark organisation. However, they also
observe that organisations typically aim for more, with benchmarking also
expected to contribute to improving the way institutions or companies
function. Performance should not just be measured and compared, but where
possible also improved. Benchmarking in the public sector is primarily seen as
a tool to measure and enhance effectiveness – i.e. are the right things being
done? – and efficiency – are things being done well and affordably? In other
words: the learning curve is key.
For Van Gangelen the learning aspect is so important that he includes it in his
definition: systematically investigating the performance and underlying
processes and practices of one or more leading reference organisations in a
particular field, and comparing one’s own performance with these best
practices, resulting in action-oriented learning.
The European Commission also includes learning in its definition, which is
the briefest we have found: ‘benchmarking is improving by learning through
comparison’.
par with outsourcing and continuous improvement. He feels that benchmarking is often a very useful boost to energy and direction in a TQM programme.
Benchmarking and TQM differ in that benchmarking focuses on key issues and
best-in-class comparisons, while TQM covers all aspects of an organisation and
may also be totally internally focused.
Bendell sees the current interest in benchmarking as ‘a natural evolution from total quality management’, one that in fact takes TQM one step further. TQM
focuses on a set of minor inefficiencies in need of improvement, but small
incremental improvements are not enough in this day and age, Bendell
reckons. Global competition requires quantum changes that are only
achievable through benchmarking.
Benchmarking may have started in the private sector but it has long since made
the transition to the public sector, at local, provincial, national and European
level.
As for the healthcare benchmarks at the heart of this report, we would rate the
following quotations as revealing of their development. In 1998 a very tentative
research question read: ‘Is it possible to develop a single, integrated benchmark
model for nursing and care homes and if so under what conditions?’ (Request
for a feasibility study on a benchmark for nursing and care, September 1998).
In 2006, a mere eight years later, the request was to set up a sector-wide and
forward-looking benchmark investigation: ‘Develop a continuous benchmark
for the totality of nursing, care and home care, drawing on state-of-the-art ICT
and complying with other information trajectories.’
The change in who is giving the assignment is also interesting. Things started
out like this: ‘The Government has tasked the Ministry of Health, Welfare and
Sport to launch an investigation in 1998 into the possibilities for
benchmarking in all sectors governed by the Algemene Wet Bijzondere
Ziektekosten (AWBZ, Exceptional Medical Expenses Act).’ The ministry may
have taken the initiative, but industry associations quickly developed into
co-sponsors, to end up as these benchmark studies’ sole sponsors. The
benchmark subsequently changed from a subsidised project into an activity
paid for by healthcare providers themselves. In some instances the government
is still acting as the driving force behind benchmarks in sectors that have had
no sector-wide benchmarking. To date, the Dutch child healthcare system (JGZ,
covering 0-19 years of age) has no comprehensive industry-wide benchmark,
and we are seeing the government provide the first push by way of a project
structure and subsidies. But it is doing so within a broader framework, as part
of its Beter Voorkomen (Prevention is Better) programme that also
encompasses financial accountability.
Figure 2.2 Most used management tools in 2004 (mean usage score): Strategic planning 4.14, Benchmarking 3.98, Customer segmentation 3.97, CRM 3.91, Outsourcing 3.89
Source: Bain & Company, 2005 management tool survey
Bain & Company also finds benchmarking all over the world, with the one
exception of Asia, where the tool is clearly less popular. Ignoring Asia, the use
of benchmarking would move up a slot in Figure 2.2, making it second only to
strategic planning. In Europe, a hefty 88 per cent of respondents used
benchmarking as a management tool.
Benchmarking has featured high on the list of tools surveyed by Bain & Company
for over a decade, leading the consulting firm to conclude that it is not a fad but
a consistently used instrument. Accenture’s 2006 global survey into the use of
benchmarking within public administration finds that government and
government bodies are increasingly reporting the use of benchmarking as a
tool.
A sure sign of its growing popularity in the public as well as the private sector
was the creation of the International Benchmarking Network under the
auspices of the Organisation for Economic Cooperation and Development
(OECD). An informal experts group, the network’s objective is to monitor
benchmarking developments in public sector organisations and to gather and
disseminate such information. The network has a particular focus on types of
international benchmarking. The group first met in Paris on 21 November
1997, and its planned activities include maintaining a database of web links on
benchmarking in the public sector.
The authors of Benchmarking in de publieke sector believe that the United
Kingdom and the United States have a clear edge in benchmarking, especially
in the public sector. In the Netherlands, by contrast, benchmarking is still seen
as something out of the ordinary.
In its survey of benchmarking in the public sector, Accenture observes that
the objective of the benchmarking exercise (performance improvements
and/or cost-cutting) often derives from heightened outside pressure. Bendell
lists three developments driving benchmark studies: global competition,
prices/publicity and the need for breakthrough projects. To survive, companies
will have to match or exceed best practice at their competitors all across the
world. And winning awards brings more kudos, too: think of the Malcolm
Baldrige Award in the United States, for instance, or the European Quality
Award for Business Excellence. Holland’s equivalent is the Nederlandse
Kwaliteitsprijs en -onderscheiding (the Dutch Quality Award).
Benchmarking has a lot to offer, and we would argue that it has its place in
Dutch healthcare. Like any other management tool, benchmarking of course
also comes with its preconditions and pitfalls. This section lists them and
suggests solutions to potential challenges, reviewing such issues as:
• preconditions for benchmarking
• optimising learning
• the need for a broad-based benchmark model
• the use and purpose of external consultants
• the added value of a multidimensional approach
• tool quality requirements
• aligning data gathering with general administrative duties
• the ethics of benchmarking
• the importance of voluntary participation
• the importance of repeat benchmarking
According to De Vries and Van der Togt, a structure conducive to
benchmarking typically displays the following features:
• a focus on processes and operations and not on people, jobs or parts of the
organisation
• a pre-established TQM system
• a framework encouraging information-sharing
• a team-driven approach, training facilities (benchmarking needs to be
taught) and pre-established monitoring mechanisms
Paraphrasing the words of a PrimaVera Working Paper, benchmarking
requires a learning organisation. This paper’s authors also identify a number of
preconditions if benchmarking is to be successful. First is that the structure of
the organisation allows differences. Second, that its culture encourages
learning and experimenting with organisational change. And lastly, the
organisation needs to have a vision that puts changes in perspective, i.e. where
are we taking the organisation?
Accenture, for one, finds that nearly all companies and public organisations
come out of a benchmarking study knowing what issues they score less well on,
but that only four per cent of them have any inkling as to how to adjust their
procedures and systems subsequently. Van Gangelen says more or less the
same: ‘Benchmarking turns out to make a crucial contribution to obtaining
insight but does not appear to inspire action-oriented learning.’ He considers
that there is no evidence that benchmarking leads to the implementation of
new knowledge in the organisation, citing potential reasons such as limited
capacity for change in the organisation, other strategic priorities and cultural
aspects such as disposition to change. Lack of transparency hides a fear of being
held accountable, he argues.
Reviewing twelve benchmarking initiatives, Kishor Vaidya et al. also find no
trace of any measures for change but conclude, somewhat to their surprise it
would seem, that this does not seem to affect the tool’s popularity.
In our healthcare benchmarks we have also heard it said that not much is being
done with benchmark outcomes. In one employee survey we asked staff that
had taken part in the previous survey whether it had brought about any
change. Half of respondents felt little or nothing had been done with their
views.
The literature on benchmarking has little to say about the absence of learning.
But why would an organisation invest so much in a benchmark to then do
nothing about it?
We can only put forward a number of hypotheses. One would be that learning
and improving imply change. And change is not something people do easily,
even when they understand that it is necessary. Watson may argue that
organisations unwilling to change are not ripe for quality, but that is by no
means to say that change is easy.
Senge is convinced that less is being learned than should be possible not
because of lack of will or good intentions, but because of our internal map of
reality. He explains that new insights are not applied because they do not fit in
with our views of the world and reality. These views – or what he calls our
‘mental models’ – prevent us from thinking and acting in any other ways than
those in which we are used to thinking and acting. The learning organisation
would do well to devote a great deal of attention to these mental models and
discuss them at length, he recommends.
Argyris has also done a lot of research into organisations’ capacity for
learning, more specifically into the capacity for learning of management
teams – or rather the absence thereof. He even goes as far as to call this the
‘skilled incompetence’ of teams of people who are experts at not learning.
Managerial and professional behaviour creates defence mechanisms and then
clothes these with various types of argument. A management team that really
wants to learn is not just interested in the reality of the organisation but also in
the actual nature of the management team itself.
His findings gel with what Kets de Vries identifies as the need to ‘fight with
the demon’ in his observations about organisations. He reckons a key condition
for achieving improvement is being able to handle the irrationality in
organisations and managers, seeing a ‘world of difference’ between identifying
and analysing symptoms, and really getting to the bottom of the problem. The
real task, he argues, is to ‘shatter illusions’.
If benchmark outcomes are not what the organisation expected them to be, it
will be tempted to blame the benchmark survey. And sometimes rightly so, as
we know from experience. But occasionally the Not Invented Here syndrome
kicks into action: ‘It wasn’t us who invented the indicator or method, so we
doubt its validity.’
When looking into the question of how to encourage learning and improving,
we soon hit on a number of fundamental change principles. Change experts
call these the need for change, the willingness to change and the capacity for
change – the three key tenets of any change programme.
The first condition for any change is that the need for change is recognised, and
– in this instance – that the benchmark is agreed to be a tool that could
potentially help in learning and improving. Recognising the need for change is
a largely rational mental process. Benchmark outcomes are also rational:
scores that reveal whether one performs better or worse than others.
But whether the need for change actually leads to change then depends on a
willingness to change – a willingness to think outside the box and take a risk.
Now this is a much less rational process. Don’t forget, we are now talking about
the willingness to change of people who will actually have to implement the
change. And they are not necessarily always the same people who have decided
to participate in the benchmark in the first place.
The third and final precondition of change is the capacity for change. Are the
people who need to change capable of changing? Do they have the knowledge
and expertise to apply fresh insights? Focusing on the benchmark: do they
know how to interpret benchmark outcomes? And how to translate these into
action? Again, we are talking rational aspects here, although the challenge of
change always brings to light qualities that are rather less rational.
Ensure that participants are adequately advised of how the outcomes may
be interpreted and translated into improvement measures.
Embedding
Figure: the Plan-Do-Check-Act (PDCA) cycle
Benchmarking in de publieke sector sees an integrated approach as a key success
factor for benchmarks: ‘Always opt for an integrated approach comparing both
financial and non-financial indicators and explicitly taking into account the
resources – financial and otherwise – available to the individual organisations.’
More specific success factors here, we reckon, are a fact-based approach and
optimum use of ICT. As the term implies, a fact-based approach primarily
means that we stick to the facts. Facts and figures can be validated and
In the early years of healthcare benchmarking there was certainly some debate
as to whether client views could be captured in a quantitative gauge. Some
pressed for a more qualitative approach, but in practice this is hardly possible
and not really necessary either. It is now generally accepted that client views
can indeed be adequately captured in a score, e.g. by having clients agree or
disagree with specific statements or list how often they have had specific
experiences (the latter is now generally agreed to be the most exact line of
questioning). This type of survey can always also include a question that allows
the client to pick the most urgent from a list of improvements or, if benchmark
participants so desire, an open text box allowing clients to raise their own
issues. Of course, the same also applies to employee surveys.
Research tool quality is also reflected in ICT usage. A database is set up for every
benchmark, storing all data that feed into the analyses. ICT also comes into
play in feedback reports, which are automatically generated and include the
relevant organisation’s data. In addition, our more recent benchmarks feature
a web-based tool for the sign-up procedure.
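An automatically generated feedback report of this kind essentially merges one participant's data with group statistics from the benchmark database. A much-simplified sketch (the data model and names are invented for illustration):

```python
# Scores per participating organisation, as stored in the benchmark database
database = {"org_a": 7.1, "org_b": 8.3, "org_c": 6.8, "org_d": 7.6}

def feedback_report(org_id):
    """Render a one-line feedback report for a single participant."""
    own = database[org_id]
    average = sum(database.values()) / len(database)
    rank = sorted(database.values(), reverse=True).index(own) + 1
    return (f"{org_id}: score {own:.1f}, benchmark average {average:.2f}, "
            f"rank {rank} of {len(database)}")

print(feedback_report("org_b"))  # → org_b: score 8.3, benchmark average 7.45, rank 1 of 4
```

In a real benchmark the same template would be filled for every participant, which is what makes automatic generation worthwhile.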
The authors of Benchmarking in de publieke sector recommend: ‘Embed periodic
benchmarking in regular quality improvement activity within the
organisation. Make the necessary internal and external data gathering part
and parcel of day-to-day operations. Align your regular administration of
financial and non-financial data with your benchmark partners.’ We concur.
Aligning data flows requires full attention. But aligning should never come at
the expense of quality. It will not do simply to benchmark with data that
happen to be there. This would sharply reduce the chances of unearthing
interesting interrelationships or finding useful clues for change.
US literature on the subject often calls this the ethics of benchmarking. ‘What
you do not wish for yourself, do not do to others’ is Bendell’s key ethical
principle of benchmarking. Ethical guidelines to benchmarking have been set
down in the Benchmarking Code of Conduct as developed by the American
Productivity & Quality Center and in the European Benchmarking Code of
Conduct, developed by companies and institutions working together in the
Performance Improvement Group.
Benchmarking in de publieke sector observes that it is easy for organisations in
benchmarks to make things look better than they are. We are often asked about
this, too. ‘Don’t organisations abuse the benchmark? How do you know
whether you are getting genuine information?’ Our rejoinder is that the
benchmark was designed by and for the participating organisations, and that
the responsibility for supplying true and accurate information lies squarely
with them. Years of experience have taught us that strategic behaviour only
happens once in a while – experience that derives from consistency checks
included in the benchmark, among other things. Of course, these checks are
not infallible, but they do give a very good indication. Besides, it would not be
very logical for organisations to supply misleading information: they would be
investing a lot of time, money and energy in a benchmark whose outcomes
they could not trust if they supplied incorrect data or thought that others did.
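Consistency checks of the kind mentioned here can be as simple as verifying internal arithmetic and flagging totals far outside the group's distribution. A sketch of the idea (field names and thresholds are our own assumptions, not the benchmark's actual checks):

```python
def consistency_issues(submission, group_mean, group_sd, tolerance=0.05, z_limit=3.0):
    """Return a list of reasons why a data submission looks suspect."""
    issues = []
    # Internal check: component times should add up to the reported total
    parts = submission["direct"] + submission["indirect"] + submission["non_client"]
    if abs(parts - submission["total"]) > tolerance:
        issues.append("components do not sum to total")
    # Cross-check: totals far outside the group's distribution deserve a query
    z = (submission["total"] - group_mean) / group_sd
    if abs(z) > z_limit:
        issues.append(f"total is {z:.1f} standard deviations from the group mean")
    return issues

suspect = {"direct": 77.36, "indirect": 10.45, "non_client": 23.74, "total": 150.00}
print(consistency_issues(suspect, group_mean=100.0, group_sd=12.0))
```

A flagged submission would not be rejected outright but queried with the participant, which is exactly how such checks deter strategic reporting.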
instance, and finding the benchmark’s data in the media is something else
again. Organisations do voice serious concerns about public disclosure,
arguing that this will encourage strategic reporting and lead to loss of control.
Numbers can take on a life of their own, are prone to misinterpretation and
might create an unjustifiably negative picture. Moreover, public disclosure
could mean competitors making off with secrets after all.
During the first years after the introduction of healthcare benchmarks there
was considerable debate about the term ‘performance’. Many healthcare
providers felt that it did not apply to them and was a term coined in the
corporate world that was not appropriate to the healthcare sector. This debate
has since subsided. The question now is what the performance, or outcome, of
healthcare providers actually is. Healthcare benchmarking to date has focused
only on output, which is to say on the amount of care. The industry – in
particular non-curative healthcare – still has a long way to go when it comes to
unambiguously measuring the outcome of care. How can you uniformly
measure an improvement in well-being on a national scale, for example, and
how can you prove that this improvement is the result of the care provided?
That is why the focus on the quality of care in existing healthcare benchmarks serves as a substitute for outcomes that cannot yet be measured.
A further analysis of the literature shows that many researchers are in favour of
including organisations from other industries in the benchmark. Odenthal
and Van Vijfeijken observe that looking beyond existing boundaries can yield
new perspectives and solutions. Education could, for example, learn from the
way in which the health sector operates, for instance in the areas of knowledge
management, rostering and planning. Another example is a hospital
benchmarking exercise in the United States where the patient admission
process was compared with the way in which airlines and hotels run their
check-in counters.
After all, if indicators that are not generally accepted are used in a benchmark,
the organisations involved could negate the outcomes by referring to an
invalid standard: the Not Invented Here (NIH) syndrome.
This also has its drawbacks, of course. One could point out that there may still
be room for improvement even in the best-performing organisations in a
particular industry. So is this the best possible solution? We see it as a process.
For the time being, there is still much to be gained by taking the
best-performing organisations in an industry as the point of reference. And the
best performers themselves usually need to put in considerable effort to
remain the industry leaders. That said, we are aware that this approach has its
limits. Organisations aiming for true performance excellence tend to look
beyond the performance of their own industry. They wish to excel, either by
formulating ambitious standards or by emulating organisations abroad or
organisations in other industries.
In some cases the adoption of standards may be closer than one might think.
Standards for responsible healthcare set specifically for the healthcare sector
are expected to apply soon. Care providers, clients, health inspectors and other
parties are jointly developing quality standards that need to be met by all
organisations and to be tested from time to time. The standards combine scores
taken from client surveys with scores for indicators used by the Dutch
Healthcare Inspectorate. Experience to date has shown that these standards are
highly suitable for use in benchmarking.
information sessions at the start of the process, via the helpdesk and during the
workshop at the end.
It goes without saying that these two types of benchmarks place different
requirements on organisations. The first requires that the organisations
benchmarked largely give shape to the learning process themselves. An
internal team will be required to initiate an awareness-raising process during
the benchmark survey, embed the benchmark results in the organisation and
see to the implementation of quality-improvement measures. The second type
of benchmark, the interactive model, requires participants to be more closely
involved in developing the benchmark from start to finish.
Whereas the first type of benchmark is better suited to large-scale surveys with
several dozen or even hundreds of partners (in which case jointly developing
the benchmark is practically impossible), the second type is better suited to
small-scale surveys.
And whereas the fact-based benchmark may disregard the learning needs of
individual participants, the interactive benchmark could lead to repeatedly
‘reinventing the wheel’ and possibly less-than-optimum solutions, such as
insufficiently ambitious benchmarks, which would diminish the learning
effect.
There are also practical reasons for choosing a particular type of benchmark.
Whereas large-scale surveys may include several dozen or even several hundred
In this section we will address more closely the benchmarking model used in
recent decades in the benchmark surveys we have carried out together with our
clients and other partners in the health sector. This means that this section is
based on practical experience.
The key features of the model are financial performance and quality. With this
model the survey results yield strategic management information, which the
organisations can use to adjust their strategy or use of people/resources in order
to improve performance. As far as we are concerned, the model will continue to
form the basis of good benchmarking, which does not mean that there is no
room for improvement. For further information on this, see Section 8.
[Figure: strategic analyses, the historical context and the building blocks
(quality of care, quality of the job, department/unit performance, explaining
performance) feed into benchmark strategic management information]
Figure 5.1 Benchmark analysis model
Source: PricewaterhouseCoopers; various benchmark studies.
The model should be read from left to right. The left shows the organisation’s
input, or starting point. The organisation will have to take account of its
environment (or, more precisely, the environment ultimately determines the
organisation’s goal) and its historical context. And the organisation employs
resources: capital, equipment and of course, being in the health sector, people.
The organisation will then identify its strategic themes on the basis of this
starting point, which so far has little to do with benchmarking. Which
product-market combinations do we wish to deliver? How do we deal with a
tight labour market? How can we remain financially sound? How can we
become the very best? The answers to these questions will largely determine
the way in which people and other resources are employed. In other words: the
strategic themes determine an organisation’s structure and work processes.
The benchmark can provide useful information for the organisations’ strategic
themes. Consequently, we often start a benchmark survey with a session in
which representatives of the partners are asked to name a number of themes
about which information could be gathered. In the 2004 home care benchmark
the tight labour market was named as one of the strategic themes. That is why
this benchmark paid particular attention to identifying the factors that
influence the motivation of existing employees and the appeal of the
organisation as an employer.
Strategic position
We were able to test this hypothesis in the 2004 home care benchmark. We
opted for a classification into three possible strategic positions, based on a
theory developed by Treacy and Wiersema. These researchers make a
distinction between the position of client leader, product leader and cost
leader. A client leader focuses on optimum customer satisfaction, with service
and long-term customer loyalty being key concepts. A product leader offers
sophisticated and innovative products (e.g. ICT in healthcare) while a cost
leader focuses on offering the lowest possible price. Treacy and Wiersema hold
that successful companies choose to pursue one specific position.
In the benchmark we opted for a construction in which the position could vary
per product. We could, for example, imagine that home care organisations
would choose the position of client leader for the product ‘personal care’, but
the position of cost leader in the case of ‘housekeeping services’.
We assumed that the client leaders in the benchmark would score highest on
the building block ‘client assessment’, that cost leaders would score highest on
the financial building block, and that product leaders would have a bigger
investment budget for product development.
The findings of the study did not confirm our hypothesis. First of all, no more
than 20 per cent of all home care providers actually opted for one of the three
positions; of those that did, all chose the position of client leader. A large
majority of the organisations did not make a clear choice, or chose two or three
positions.
Secondly, no correlation was found between the position opted for and the
performance delivered. No more than five of the 17 organisations that made a
clear choice in favour of client leadership scored above average in the client
survey.
These findings can be interpreted in different ways. First of all, it could be that
the three positions distinguished and/or the questionnaire are not yet
sufficiently geared to the healthcare sector. Treacy and Wiersema did not
develop their theory specifically for the healthcare sector, and not even for the
public sector. Secondly, it may be that most organisations have so far chosen
their strategic position implicitly and have not related it to the structure of
their organisation or their work processes. Several benchmark partners
advised us that this instrument had helped them in the awareness-raising
process. A third interpretation is that whilst they may have chosen a position
explicitly, its implementation in the organisation was still in its infancy.
That said, there was also evidence indicating that the strategic position of
organisations did affect their performance. Three of the five best performers in
the benchmark survey had chosen a clear position, which is much higher than
the average of 20 per cent referred to above. We may not, of course, draw any
conclusions from such small numbers, but this does give us grounds to suggest
that including the strategic position of organisations in the benchmark can
add value.
The hypothesis that the strategic position opted for by organisations influences
their performance has not been rejected. The time may not yet be ripe to
include this aspect in the benchmark, but that time will no doubt come.
Let us return to the analytical model. The central part of the figure depicts the
building blocks. These building blocks constitute the key elements of the
benchmark survey, namely the performance aspects measured. In defining the
various areas measured, we decided to follow as closely as possible the INK
management model designed by the Dutch Quality Institute. As this is a
well-known model in healthcare, its outcomes are familiar to participants and
easily applicable to their own operations.
The building blocks examined differ per benchmark. In all cases, however, a
benchmark should include several building blocks, and therefore several
dimensions.
The financial building block has so far formed part of all healthcare
benchmarks. An organisation’s financial performance is based on cost, equity
The definition of financial performance has been extended since the 2004
home care benchmark survey, as efficiency is not the only aspect of financial
performance that providers must deliver on. Even an organisation that delivers
much more care per euro than other providers, for example, has a problem if
costs exceed income in a given year. A similar problem would arise in the long
term if an organisation’s financial position were not sufficiently sound. That is
why the financial building block was extended to include the components
‘financial position’ and ‘net profit/loss’. The components are weighted in order
to calculate a total score. The financial position, for example, accounts for 10
per cent of the total score, net profit/loss makes up 20 per cent of the score and
efficiency 70 per cent. Other weightings are also possible, but the degree to
which a care provider itself is able to influence performance should also be
borne in mind. If, for example, the government or another financial backer
permits only limited capital accumulation, or chips in immediately in the
event of losses, an organisation’s results are not necessarily a good indicator of
performance.
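The weighting described above amounts to a simple weighted sum. A minimal sketch, with invented component scores on a 1-to-10 scale and the 10/20/70 split mentioned in the text:

```python
# Weighted total score for the financial building block, using the
# weighting mentioned in the text: financial position 10 per cent,
# net profit/loss 20 per cent, efficiency 70 per cent.
WEIGHTS = {"financial_position": 0.10, "net_profit_loss": 0.20, "efficiency": 0.70}

def financial_score(components: dict) -> float:
    """Combine component scores (here on a 1-10 scale) into a weighted total."""
    return sum(WEIGHTS[name] * score for name, score in components.items())

# Invented example scores:
print(round(financial_score(
    {"financial_position": 8.0, "net_profit_loss": 6.0, "efficiency": 7.0}), 2))
# 0.10*8.0 + 0.20*6.0 + 0.70*7.0 = 6.9
```

Other weightings simply mean other values in `WEIGHTS`; as the text notes, the choice should reflect how much of each component the provider can actually influence.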
It is for this reason that the most recent benchmarks for care of the disabled
and for nursing, care and home care (VVT) no longer require that time is
registered (although care providers may do so on a voluntary basis). The
amount of care delivered is now measured through production targets or
registered production, which may be related to staffing levels. Today (2007) this
is much easier than it was some time ago. We are better able to keep tabs on the
amount of care delivered than we were a few years ago, now that we work with
so-called ZZPs, or care complexity modules, rather than nursing days. Another
development is that the amount of care provided is limited given the
requirement that the care delivered should be matched to need. Some
providers are even experimenting with electronic patient files in which the
care provided is registered daily.
The first item to be measured is the personnel costs per contract hour per group
of employees or level of expertise. In other words, the salary costs, special
allowances etc. are related to the length of the working week (or ‘contract
hours’) as stated in the employee’s contract of employment. Time
measurements are then used to calculate the percentage of contract hours
spent on client care. After adjusting for group-based care, this gives the
personnel costs per hour of client care. The costs are calculated for each level of
expertise and then marked up for overhead and materials, again broken down by
level of expertise. A calculation of this kind reveals the actual cost of each
client-facing hour, which in turn can be used to calculate the costs per hour per
AWBZ function by determining, with the aid of the ZBR, the share of each level of
expertise in each function. The costs per care complexity module could also be
calculated in this manner.
Calculating efficiency gives the cost per AWBZ function (or ZZP) per time unit.
The price level can then be explained by each of the underlying factors and any
relationships between these factors. In this way the benchmark can be used to
provide pointers for improvement in an organisation’s financial performance.
The cost price model can also be used independently of the benchmark, for
instance to calculate the effects of a new budget or of certain cost measures.
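As a worked illustration of the steps just described, the sketch below chains personnel costs per contract hour, the share of client-facing hours and an overhead mark-up into a cost per client-facing hour, and then weights expertise levels into a cost per function hour. All figures, the expertise levels and the size of the mark-up are invented, not benchmark data:

```python
# Hypothetical illustration of the cost-price steps described in the text.

def cost_per_client_facing_hour(costs_per_contract_hour: float,
                                share_client_facing: float,
                                overhead_factor: float) -> float:
    """Personnel costs per contract hour, divided by the share of contract
    hours actually spent on client care (from time measurements), then
    marked up for overhead and materials."""
    return costs_per_contract_hour / share_client_facing * overhead_factor

def cost_per_function_hour(mix: list) -> float:
    """Cost per hour for one AWBZ function: each expertise level contributes
    its cost per client-facing hour, weighted by its share in the function."""
    return sum(share * cost for share, cost in mix)

# e.g. EUR 25 per contract hour, 60 per cent of hours spent on client care,
# and a 30 per cent mark-up for overhead and materials:
level_a = cost_per_client_facing_hour(25.0, 0.60, 1.30)
print(round(level_a, 2))  # 54.17

# A function delivered 70 per cent by level A and 30 per cent by a more
# highly skilled (and more expensive) level B:
print(round(cost_per_function_hour([(0.7, level_a), (0.3, 68.50)]), 2))  # 58.47
```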
Figure 5.3 presents the model for nursing and care.
[Figure 5.3: cost price model for nursing and care. Gross salary, shift work
bonus, holiday allowance, overtime and net social security and pension
contributions, related to contract hours, give the personnel costs per contract
hour for each group of healthcare workers (including staff insourced at group
level and work outsourced to third parties). Adjusted for sick leave,
leave/holidays, training, meetings, travelling time and other non-client work,
this yields the productivity per client-facing hour (direct and indirect
client-facing hours). Combined with the number of clients and the client-facing
hours delivered per function – housekeeping (HVZ), personal care (PV), nursing
(VP), support activities (OB), activation (AB), treatment (B) and accommodation
(V) – the model produces the cost price per function per time unit.]
Efficiency measurement entails more than cost price calculations per function
per hour. It should also include a comparison of different organisations,
bearing in mind the differences in care complexity among clients. One hour of
care given to clients requiring highly complex care may be more expensive
than an hour given to clients with less complex needs, for instance if more
highly skilled staff are used. This means that cost prices should not be
indiscriminately compared with one another.
We solved this problem of comparison in the benchmarks for nursing and care
homes by grouping the organisations into clusters based on the complexity of
care. Cluster classification was used mainly in measuring efficiency, but the
results of the client survey were also reported per cluster.
The same problem was found in the home care benchmark study. Cost prices in
organisations that focus on delivering housekeeping services rather than nursing
care cannot simply be compared with organisations in which the situation
is the reverse. Here, too, we were able to use cluster classification, although its
value has diminished over the years. These days, many home care providers
offer comprehensive care packages that are very similar. Things have changed
substantially since the days when family care providers offered a package that
differed markedly from that offered by the home nursing organisations.
We calculated the cluster classifications used in the benchmarks with the aid
of the DEA method. DEA stands for Data Envelopment Analysis and is a method
used to automatically calculate which clusters of care providers offer a similar
product mix, and subsequently to calculate which organisations offer most
care per euro. These organisations are found on the efficiency frontier, as
shown in Figure 5.4. Care providers to the left of this line are less efficient.
[Figure 5.4: efficiency frontier. Care providers are plotted by the number of
hours of care per NLG 100 (horizontal axis) against the number of hours of
housekeeping services per NLG 100 (vertical axis); providers on the efficiency
frontier deliver the most care per guilder, while providers to the left of the
frontier are less efficient.]
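Full DEA determines each provider's efficiency score by solving a linear programme per provider; the sketch below illustrates only the frontier idea behind Figure 5.4, flagging providers whose output mix is not dominated by any other provider. Provider names and figures are invented:

```python
# Simplified efficiency-frontier sketch in the spirit of Figure 5.4.
# Each provider is described by two outputs per EUR 100 of cost
# (hours of housekeeping services, hours of care). A provider is treated
# as efficient here if no other provider delivers at least as much of
# both outputs and strictly more of one. Full DEA solves a linear
# programme per provider; this dominance check is only an illustration.

providers = {
    "A": (150.0, 40.0),   # hypothetical figures
    "B": (120.0, 90.0),
    "C": (110.0, 85.0),   # dominated by B
    "D": (60.0, 140.0),
    "E": (50.0, 70.0),    # dominated by B and D
}

def dominated(p, q):
    """True if q delivers at least as much of both outputs as p, and q != p."""
    return q[0] >= p[0] and q[1] >= p[1] and q != p

frontier = [name for name, p in providers.items()
            if not any(dominated(p, q) for q in providers.values())]
print(frontier)  # ['A', 'B', 'D']
```

Providers A, B and D lie on the frontier; C and E, like the providers to the left of the line in Figure 5.4, are less efficient.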
From the very beginning we have evaluated the quality of care using client
assessments. We had to defend this choice at first, not so much to the home
care providers themselves or to industry associations, but to other
stakeholders. What do clients know about the quality of care, they would
argue. He or she (and in particular she) may notice whether the window sills
have been thoroughly dusted, but who are they to judge whether the
intravenous drip was properly sterilised? All parties meanwhile agree that
client assessments form an essential part of quality assessments and that
quality involves more than just medical or technical nursing skills. Measuring
client assessments is indispensable in an industry in which clients can
sometimes even switch to another care provider if they are not satisfied with
the quality offered. That is why client assessments have been included as a
fixed parameter in the standards for responsible care referred to earlier.
If at all possible, we want to know what the client’s own assessment is, even if
this means that individual interviews need to be conducted or that
questionnaires need to be especially adapted. We sometimes address
additional questions to a client’s family, but only if the client is truly incapable
of answering any questions. In the existing nursing, care and home care
benchmark this is done with the aid of the client surveys used to measure
responsible care. Client surveys used to assess care of the disabled are
developed as part of the benchmarking exercise. The benchmark participants
indicate whether clients are competent to judge. Whilst we are aware of the
drawbacks of this approach – do organisations assess the situation in a
comparable fashion? – we see no alternative as things stand now. And whereas
there are methods to measure the quality of care by observing clients, these
methods also make use of third parties (observers), and they are
time-consuming and expensive. That said, we should keep close tabs on
developments in order to further improve this aspect of healthcare
benchmarking.
Interviews or questionnaires?
We only use an Internet tool for relatives of clients, and then only on a limited
scale and in addition to the written questionnaires.
The Internet
The possible uses of web surveys need to be explored further. Clients and their
families alike are becoming increasingly comfortable with the Internet, and
developing user-friendly survey tools should no longer be a problem. Providing
group instructions on location in nursing and care homes could enable
residents to subsequently complete the questionnaires by themselves, and
software is currently being developed for the disabled to go online. It may be
somewhat crude to say that there is a bright future for digital client surveys, as
personal contact will always be the preferred choice, but we should at least
ensure that the costs of written questionnaires or face-to-face interviews do not
deter us from taking stock of clients’ opinions at regular intervals.
In the housing corporation benchmark, for example, the entire client survey
was digitised (see Figure 5.5). The pilot for this benchmark was recently
completed.
Whereas the contents of client surveys are tailored to the specific industry
being benchmarked, some issues feature in all surveys. Surveys can usually be
divided into questions relating directly to the care provided and questions
relating to how the care is organised. The first type of questions addresses the
way in which care providers treat clients (listening to clients, showing respect)
and the actual care provided (such as the quality of personal care, meals or
daily activities). In short, they reflect how clients experience the care they
receive. The second type includes questions such as whether the number of
staff suffices and whether the healthcare providers are easily accessible.
Examples of statements used in the client survey of the nursing and care
home benchmark:
• The personal grooming I receive here is very good.
• Residents receive adequate assistance when they need to go to the
toilet.
• I’m happy with the way they use the hoist.
• My care plan was drawn up in consultation with me.
• Staff always comply with arrangements made.
• If I ask my carers to do something, they sometimes give me the feeling
that I am a burden.
We expected to find a clear correlation between the client’s assessment and the
benchmarking organisation’s score on care-specific quality indicators, as one
would think that aspects such as bedsores or being strapped down would
influence a client’s opinion. But our analyses consistently found either no
correlation or only a weak one. The number of organisations with a high client
assessment score and a low score on care-specific indicators, or vice versa, was
far too large for the two to be correlated. This suggests that these two indicators
measure quite different aspects of quality. It is useful to take note of the fact
that the care-specific indicators are not what clients themselves find
important. They look at the way in which they are treated by their carers and
take their professional expertise for granted. Watson calls this a basic
requirement: clients only consider it to be a dimension of quality if care
providers fail to deliver. Clients see a serious medical error as a severe lack of
quality, but when choosing a care provider other aspects appear to play an
important role.
The social responsibility building block has been used twice in healthcare
benchmarks: once in the 2002 home care benchmark study and once in the
pilot benchmark for healthcare administration agencies.
showed quite clearly that healthcare administration agencies scored less well
on a number of indicators. However, as these items were the same for all
healthcare administration agencies in the pilot, the question that arose was
what added value there would be in including these same indicators when
launching the benchmark country-wide. The healthcare administration
agencies themselves felt there was a big chance that this would yield the same
findings. In other words: the area for learning had already been identified.
And so it was decided in both cases not to include social responsibility in future
benchmarks. That said, the notion that society’s views matter has not been
discarded. The housing corporation benchmark has included social
responsibility as a building block, and the vocational education sector has also
decided to include this building block in their next benchmarking exercise.
Once the results of the individual building blocks in a benchmark study are
known, the relationships between the various building blocks can be analysed.
Are employees with a higher average salary more positive about their jobs than
employees with a lower salary? If clients rate the way in which they are treated
by staff poorly, could that be explained by the fact that the organisation
employs many low-skilled workers? Analysis of the relationships between the
building blocks has yielded a wealth of possible relationships. Whereas some of
the correlations found are weak and merely indicative, others are significant
and have been confirmed by regression analysis, and are consistent over time
and across industries.
Clients were found to be more positive about the quality of care than were
healthcare staff (and more positive than the clients’ contact at the care
provider). As a rule, however, one can say that organisations that score
well on quality of care also score well on quality of the job. We found a
relationship between the organisation’s financial performance and several
aspects of quality of the job. This would suggest that people feel better if
they work for organisations that perform well financially. Conversely,
healthy staff (the quality indicators measured related mainly to matters of
health) may be expected to contribute to financially sound business
operations.
The main question that arises after the analysis has been performed is who the
benchmarked organisations should try to emulate. If a healthcare provider
wishes to improve the quality of care, it would seem logical for it to emulate the
organisations that perform best in the client survey. Similarly, if it’s an
organisation’s efficiency that needs to be improved, the logical choice would
seem to be to compare oneself with the organisations that score best on this
item.
Yet however logical these conclusions may seem, they do not appear to hold
true, as organisations are not one-dimensional. Could it be that a very high
score on one building block takes its toll on other building blocks? We found
that this was indeed the case. Whereas almost all best performers score well on
individual building blocks, they tend not to rank among the very best. An
exceptionally high level of efficiency tends to go hand in hand with
middle-of-the-road quality, and vice versa. True best practices are found in
organisations that have succeeded in striking a balance between the building
blocks, providing them with a good overall score. One could compare it to
all-round speedskating championships: the all-round winner is usually found
among the top five for most of the individual events, but need not win over
every single distance in order to become the all-round champion.
In short, best practices are determined by the total score. In our reports, we
usually visualise this in a matrix presenting the different scores (see Figure 5.6).
The matrix has three dimensions, as it uses colours in addition to showing the
position. The organisations in the top right corner of the figure score well on
the financial building block and in the client survey, but only the care
providers given in blue also performed well in the employee survey. The
blue organisations in the top right corner (BP) are best-practice organisations.
The care providers given in light blue and black are not, because they score less
well in the employee survey.
[Figure 5.6: best-practice matrix. Client assessment (vertical axis) plotted
against the total score in the financial survey (horizontal axis); colour
indicates performance in the employee survey.]
Identifying best practices using the above method also has a drawback: it is
only effective with a sufficiently large group of participants. And the greater
the number of building blocks included in the exercise, the more participants
are needed for there to be enough organisations that score well on all building
blocks. Given this drawback, the identification of best practices has not always
been given the highest priority.
It is also for this reason that it has been suggested to make do with ‘good
practices’ (a broader group of good performers) or to work with the best-scoring
organisations per building block after all.
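The total-score approach described above can be sketched as a simple selection rule. The organisations, scores and thresholds are all invented; the two conditions stand in for 'a good overall score with no weak building block':

```python
# Best-practice selection by overall score across building blocks.
# All names, scores and thresholds are hypothetical.

scores = {  # per organisation: (financial, client survey, employee survey)
    "Org1": (8.2, 8.4, 7.9),
    "Org2": (9.1, 7.2, 6.8),  # tops the financial block, middling elsewhere
    "Org3": (7.9, 8.2, 8.0),
    "Org4": (6.5, 8.6, 7.1),
}

def mean_score(org: str) -> float:
    return sum(scores[org]) / len(scores[org])

# A best practice needs a good overall score AND no weak building block,
# mirroring the all-round speedskating analogy in the text.
best_practices = [org for org in scores
                  if mean_score(org) >= 8.0 and min(scores[org]) >= 7.5]
print(best_practices)  # ['Org1', 'Org3']
```

Note that Org2, the best performer on the financial block alone, is not selected: like the skater who wins a single distance, an extreme score on one block does not make a best practice.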
The home care benchmark study, the benchmark studies in nursing and care,
the pilot benchmark addressing care of the disabled and the pilot benchmark
for healthcare administration agencies have all made efforts to gain better
insight into business operations and culture. The questionnaires used for this
purpose range from a small set of questions about aspects considered to be
promising to hundreds of questions addressing such issues as healthcare
processes, facility management, financial policy, organisational set-up, etc.
Blood, sweat and tears have been put into the questionnaires, not only to
design them but also, no doubt, to complete them. For weren’t we all
determined to find an explanation for good performance? New questions were
invented and reinvented, drawing from the most recent literature or yet
another round of interviews with well-performing organisations. The expertise
of specialist agencies was called in, indicators were developed and tested,
hypotheses were formulated. And so it was rather frustrating to have to
conclude that the results of these efforts were meagre. Or, to put it more
accurately, that the results did not yield a clear picture. The findings showed
that good performers did not all have the same pattern of business operations.
The expected relationships between performance and processes failed to
materialise, correlations found in one benchmarking study did not reappear in
the next, or certain processes were found to be linked not only to performance
excellence but also to exceptionally bad performance.
Still, we did manage to bring to light a number of relationships over the years
(see box). But this gave rise to yet another issue: if we know which factors
explain good performance, what reason would there be to continue to
investigate this?
5.11 Innovation
In developing the new nursing, care and home care benchmark, ActiZ (the
Dutch association for nursing, care and home care) decided to introduce the
building block ‘innovation’. ActiZ sees the capacity to innovate as a crucial
factor for success and survival. As discussed in Section 8, this notion is
corroborated in the contemporary literature. We are currently co-designing an
instrument to measure the innovative power of organisations, enabling
benchmark partners to learn from each other in this area too.
as age, sex and whether or not the client was a resident of a care home or
nursing home.
The right side of the benchmark model shows that the benchmark outcomes,
the data measured and analysed, are reported at various levels. The central
level is that of the benchmarked organisation itself. In practice, defining this
level may not be as simple as it might seem. Does it refer to a group comprising
several locations or to an individual location? And can a large group of
organisations be compared with a foundation or trust that runs a single, small
organisation? These are questions that need to be answered prior to the
benchmark survey to avoid disappointment afterwards.
A special application has been developed for the most recent benchmarks for
care of the disabled and for nursing, care and home care (VVT), enabling
The industry level is also the level at which relationships between the building
blocks, or relationships between performance and business operations, can be
identified. This is not always clear to all parties concerned. Individual
benchmarking participants tend to expect that industry-wide reporting will
yield an ‘integral analysis’ of their own organisation, but this is usually not
possible. The finding that a positive assessment by clients of their daily
activities is related to their carers’ positive assessment of their work, for
instance, is only statistically valid if the number of observations is sufficiently
large. Conclusions about individual care providers can only be drawn for very
large organisations that have access to data specified per organisational entity.
The last dimension of the benchmark model – presented at the bottom of the
figure – shows that participants can use the benchmark results as strategic
management information for their own organisation. This information will
enable participants to consider possible actions for improvement. If
improvements are needed, the care provider may decide to make adjustments
to its strategy or to existing solutions addressing strategic issues, or it may
decide to employ its people and resources in a different way. Benchmarking is
therefore a cyclical process.
6 The step-by-step benchmarking
process
This section looks into the different phases of benchmarking. A review of the literature reveals general agreement that benchmarking should at least include preparation, data collection and analysis, and follow-up. While some researchers present more extensive step-by-step plans than others, they all distinguish these core phases.
The design phase consists of three activities. The first is to identify goals and
parameters. When identifying the goals, the following elements should be
addressed: learning versus accountability, an integral approach versus a partial
approach, and a general description of the indicators to be compared. The
parameters include such things as the research costs and how these costs are to
be divided, whether or not to engage third parties during the implementation
phase, the role of other stakeholders (financial backers, regulators,
government bodies) and the manner in which the findings should be disclosed.
The second activity during the design phase is the identification of the
indicators against which benchmarking takes place. A common framework has
proven to be a good starting point for a discussion of the indicators to be
selected. For example, the much-used INK management model referred to
earlier, designed by the Dutch Quality Institute, offers a framework that
encompasses the organisation’s performance, client assessments, employee
assessments and the views of society at large, as well as organisational
parameters concerning leadership, organisation of the primary process, HRM, strategy and policy, and resource organisation.
The third activity during the design phase entails designing a comparative and explanatory model to determine the depth of analysis. The preparation phase
is followed by an implementation phase consisting of data gathering, analysis
and reporting. The last phase in the benchmark process is follow-up, when
participants are supported in their efforts to work towards improvement and
the benchmark is evaluated.
Keehley sees the selection of the processes to be benchmarked as the very first
step, which should not be taken lightly. In doing this, the organisation should
take account of the degree to which it is ‘ready’ for benchmarking. It should
address questions such as ‘How experienced are we in benchmarking?’, ‘Do we
have a learning organisation culture, or do we suffer from We Are Different or
Not Invented Here syndromes?’ Other factors are the strategic importance of
the activities to be benchmarked and possible external pressure from clients,
competitors or politicians. It is judicious to prioritise the processes to be benchmarked using explicit criteria.
PricewaterhouseCoopers’ perspective
We would argue in favour of opting for an integrated benchmark rather than
selecting individual processes to be benchmarked. Selecting one or a few issues
is only useful if these issues exist more or less in isolation and are barely related
to other issues. An example would be a benchmark of an organisation’s
treasury activities (for further details, see Section 7). We would also make a case
for addressing an organisation’s performance in the broadest sense. Of course,
this would also entail a selection, as it is not possible to include all aspects of an
organisation’s performance in a benchmarking exercise. A selection that is
made as part of an integral approach is different, however, in the sense that it
entails a selection of an organisation’s core activities, or the (temporary)
exclusion of issues for which information is lacking.
PricewaterhouseCoopers’ perspective
In our view, two aspects should be distinguished: involving all relevant actors
in the organisation, and staying focused. The first element is addressed in more
detail elsewhere in this report. As for remaining focused, we share Keehley’s
view that this should feature prominently during the preparations. It is vital
that the scope of the project is spelled out in a plan of action, for instance, as
well as in agreements with individual participants. Only then is it possible to
keep control of the process and avoid disappointment afterwards. This does not
mean, of course, that we should stick rigidly to what has been agreed at the
start of the benchmarking exercise. Each benchmarking study should be a
voyage of discovery for the participants and for ourselves. The need to
incorporate innovations in a benchmark or to respond to changing legislation
means that there will always be things that have to be reinvented. In such cases
parties would do well to give each other the space to discard categorisation that
has no basis in reality, say, or to re-examine that ‘one and only’ promising
explanatory factor one more time.
Keehley underlines the need to make adequate preparations and take stock of
the existing situation. When this step has been completed, the benchmark
team will have a detailed flowchart of its own processes and some idea of
potential performance indicators, bottlenecks and solutions. This
introspection will yield a project plan for the remaining benchmark project.
The plan details what needs to be done when, and by whom.
PricewaterhouseCoopers’ perspective
As the healthcare industry tends to use ready-made instruments, some of the
work in this step does not always need to be done. Even so, it is important in any
organisation that a dedicated team focus on the benchmarking exercise so that
the organisation will not be taken by surprise when it is asked to provide
information or, at a later stage, by the benchmark findings themselves.
This step begins with the organisation drawing up an extensive list of potential
partners. Important sources of information are research literature, personal
contacts, benchmark databases (such as the Benchmarking Exchange of the
American Society for Quality Control) and the services of consultants
specialised in benchmarking. The initial list will then be trimmed using
carefully chosen selection criteria, leaving only the ‘perfect few’. One would,
for example, need to check whether a potential partner actually has a best
practice and whether the best practice is not only proven to be successful, but is
also transferable and repeatable in the sense that it is not linked to unique circumstances. Other criteria should also be borne in mind, such as the
degree to which the potential partner is similar to, or differs from, one’s own
organisation. Organisations initiating a benchmarking study for the first time
would do well to restrict the exercise to internal or similar partners. As a rule,
one could say that the more experienced an organisation is in benchmarking,
the more capable it will be of benchmarking against unlike partners. Note that
real breakthroughs are most likely to happen when unlike partners are
compared.
PricewaterhouseCoopers’ perspective
In healthcare benchmarking the search for partners is usually initiated by the
industry associations: they announce to their members when the next
benchmark study is due to take place, provide information and recruit. By
contrast, the benchmarking exercise for housing corporations opted for the
establishment of an independent benchmarking body charged with recruiting
the benchmarking partners.
PricewaterhouseCoopers’ perspective
In healthcare benchmarking, selecting indicators is a process that necessarily
involves only a representative selection of the participants (brought together in
a sounding-board group). Whereas we agree that the list of indicators should
stay manageable, it should not become too short. Organisations truly striving
to launch actions for improvement should not expect to be able to identify such
actions on the basis of rough outcomes. Too small a number of indicators could
lead to disappointment (‘We already know this’) or to distorted findings (‘We
didn’t take into account the outcomes that were not AWBZ-related because
that would have been too cumbersome’).
PricewaterhouseCoopers’ perspective
In healthcare benchmarking, information relating to all participating
organisations is requested simultaneously to allow for comparison. Keehley
suggests that, in future, each organisation should be able to decide for itself
when it wishes to embark on a benchmarking exercise, assuming that it has
access to a database of recent comparative data. We fully support Keehley’s
idea, in the sense that it is important for organisations to closely examine their
own processes.
PricewaterhouseCoopers’ perspective
As things stand now, data collection and data analysis in healthcare
benchmarking are coordinated and carried out by research agencies in
consultation with the principal and the sounding-board groups. Participants in
healthcare benchmarks typically come into contact with each other later on in
the process – usually after the results have been released.
Keehley gives the following tips:
• Avoid paralysis by overanalysing. There is a danger of getting bogged down
in peripheral details.
• Keep an open mind for the unexpected. If you focus on the differences and
exceptions, you will be more likely to detect opportunities for fundamental
improvement of your own processes.
• Take a systematic approach to selecting recommendations. Select various
excellent practices on the basis of explicit criteria.
PricewaterhouseCoopers’ perspective
The participants in healthcare benchmarking are given feedback on these
analyses. But the real pointers for improvement only come to the fore in
consultations with other organisations. That is why healthcare benchmarking
exercises are often concluded with a series of workshops involving a limited
number of participants, or at least with an invitation to take part in such
workshops.
Once best practices have been identified and selected and management has
adopted the recommendations of the benchmarking team, the main focus of
the project is to incorporate opportunities for improvement. The start of this
phase is the development of an action or implementation plan that provides an
answer to the key questions: who does what and when? Resistance to change
can be reduced by ensuring the active participation of all parties concerned in
the implementation process.
One should not assume, however, that simply adopting a best practice will
automatically lead to the desired results. Keehley argues that a copycat
approach tends to result from superficial internal research and anecdotal external research, also referred to as stop-and-shop benchmarking. Cloning
the practices and processes of model organisations can have dramatic
repercussions that widen the performance gap rather than narrowing it.
PricewaterhouseCoopers’ perspective
Strictly speaking, implementing actions for improvement may not be a part of
the benchmarking process. That said, it may well be the most important phase
for the organisations concerned. Appropriate timing of actions for
improvement is just as important as a sound approach. Actions for
improvement that fit logically into the planning and monitoring cycle are
more likely to become firmly embedded in the organisation than actions taken
independently of the cycle. It is more efficient, of course, to have access to the
benchmarking findings when the budget and policy plans are being drawn up.
Improvement measures need special care and attention shortly after they have
been introduced. Organisations are always at risk of slipping back into old,
familiar practices. It is therefore important that, during the implementation
phase and the initial stages of change, the participants monitor whether the
measure is being introduced as planned and whether the desired results are
being achieved in practice. Do the performance indicators reflect
improvements?
PricewaterhouseCoopers’ perspective
See step 11.
PricewaterhouseCoopers’ perspective
See step 11.
PricewaterhouseCoopers’ perspective
We have noticed that the call for continuous benchmarking is becoming ever
louder. Participants want a better understanding of the impact of the measures
taken on their performance and position. Whilst most benchmarking partners
are in favour of a frequency of once every two years, there are also partners who
would like to repeat the benchmarking exercise, or at least one or several
aspects of it, more frequently. This is one of the reasons why we are now
designing a system in which the organisations themselves can determine the
frequency and starting dates. We have been commissioned to do so by ActiZ.
The five phases in the figure can be clustered into three main phases:
preparations (phase 1), the research itself (phases 2, 3 and 4) and actions to be
taken in response to feedback on findings (phase 5). The latter phase feeds into
the actual implementation of improvements. As the actions for improvement
no longer strictly form part of the benchmarking process, they have not been
included in the figure. Needless to say, bringing about organisational
improvement is the very reason for benchmarking in the first place.
The sign-up procedure, during which partners register for participation in the
benchmark, concludes the first phase.
Once the benchmark model and its constituent building blocks have been
developed, participants will begin to gather information for the benchmark.
This can be done in a variety of ways, such as:
• conducting interviews using questionnaires (in particular in the client
survey)
• asking client and employee contacts to complete questionnaires
• asking the organisation benchmarked to provide information
• requesting data about the organisation from other information sources
All data are saved and structured in a dedicated benchmark database. Using
sophisticated ICT and well-structured data can simplify analysis considerably.
Sanford Berg (University of Florida) does not mince his words: 'If managers do
not have the data required for such comparisons, then one must question what
they are actually managing.'
With the aid of software analysis tools and the assistance of research agencies,
the client and the sounding-board group, we first carry out analyses for each
individual building block and subsequently analyses that go beyond the level of
individual building blocks and address the relationship between the building
blocks and the identification of best practices. The analytical phases are what
make research agencies tick. We are always interested in knowing what the
outcomes are and hope to discover unexpected relationships and further
insights.
[Figure: final scores ranked on the 'financial performance' (FP) building block]
[Figure 7.1 Benchmark model: continuous nursing, care and home care benchmark. Building blocks: financial performance, clients, staff, HPOs, innovative strength. Source: VVT continuous benchmark; PricewaterhouseCoopers]
In 2006 a new benchmarking model was set up for the nursing, care and home
care sectors, which jointly participate in the benchmark. It has several
modules, with the basic package offering data-gathering and reporting at
overall organisation level plus one level below – all other levels are optional.
Time-keeping is a separate option.
Nearly all building blocks include innovations. 'Quality of care', for instance,
will build on the Responsible Care Standards Project as soon as this is
feasible. This project aims to arrive at a broad-based description of the quality
of care and, as we have noted, draws on a survey of clients and/or their families
as well as the views of the Dutch Healthcare Inspectorate.
‘Quality of work’ has been simplified and alignment with other building blocks
improved. The staff monitor includes an employee survey and data gathered
via ‘financial performance’ such as FTE breakdown and sick leave. While the
‘financial performance’ building block itself has been simplified – e.g. no
compulsory time-keeping – it has, at the same time, been extended to include
treasury data – e.g. debt servicing charges and gap analyses. These gap analyses
compute the difference between the current budget and the budget based on
care complexity, and between expected costs and expected budget.
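The gap analyses described here reduce to two simple differences. A minimal sketch, with hypothetical figures and field names:

```python
def gap_analysis(current_budget, complexity_based_budget,
                 expected_costs, expected_budget):
    """Return the two gaps used in the 'financial performance' building block.

    budget_gap  : current budget minus the budget implied by care complexity
    forecast_gap: expected costs minus expected budget
    """
    return {
        "budget_gap": current_budget - complexity_based_budget,
        "forecast_gap": expected_costs - expected_budget,
    }

# Hypothetical figures for one provider (in euros). A positive budget_gap
# means the provider is currently funded above what its case mix implies.
gaps = gap_analysis(current_budget=10_500_000,
                    complexity_based_budget=10_200_000,
                    expected_costs=10_900_000,
                    expected_budget=10_600_000)
```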
The ‘innovation’ building block is being developed. It has been agreed to focus
not so much on innovation projects, as every organisation would interpret this
differently, but rather on the ability to innovate and the parameters for
innovation. Innovation has been defined as the organisation’s ability to keep
reinventing itself and so adapt to changing circumstances. This ability is not
restricted to finding totally new solutions, but also involves the ability to adopt
solutions devised by others and/or adapt these to the organisation’s needs.
Working with Van Naem & Partners, we drew up a plan of action for this
benchmark in 2006. One key consideration underlying this benchmark is that
these organisations’ funding is more diverse than that of many other
healthcare providers, with resource allocation criteria deciding what goes
where. At the same time, municipalities are free to buy additional, customised
care. Moreover, the community health service is embedded in the municipal
set-up, with each municipality deciding its own financial performance and
care targets. The differences between the Dutch home care providers and the
GGDs also have a part to play here.
For the 2004 home care benchmark we conducted a financial benchmark of the
child healthcare supplied by the country’s home care providers. This
benchmark showed that organisations found it very difficult to attribute
revenues and costs to individual products, as records still reflected the system
of input funding that had been in force until 2003. At the time, some
organisations were already indicating that this would not happen again: they
wanted to allocate costs to products for the sake of their own business
operations as much as for any subsequent benchmark.
Figure 7.2 captures the planning process for the benchmark, with CS standing
for client survey, ES for employee survey and FP for financial performance.
[Figure: benchmark phases: preparing benchmark survey; integration and developing building blocks; integrated analyses; organisation reports and countrywide presentation]
Figure 7.2 Care for the disabled: a review of the benchmarking timeline
Source: 2004 care for the disabled benchmark
The timeline coincides with the one set out in the annual healthcare review. It
also includes workshops to discuss benchmark outcomes. All building blocks
making up this benchmark take account of a breakdown into client categories:
physically disabled; mentally disabled; sensory disabled; slightly mentally
disabled; and heavily behaviourally disturbed, slightly mentally disabled. If
relevant, data are sorted by client group, with the same applying to the
different services provided: outpatient day programmes; outpatient day care;
residential living; treatment; and inpatient day programmes. A
product/market matrix thus emerges.
At the end of the day, we extended the sign-up term by a number of weeks, as
we had failed to factor in that these aspects would cause time pressures.
However, the final outcome – sector-wide recognition of the benchmark –
proved worthwhile.
This benchmark, like many others before it, was testimony to the importance
of frequent communication with participants to secure ongoing commitment.
After all, participants are unable to see all the hard work going on behind the
scenes, and wonder when the next step will come. Keeping its member
organisations regularly updated through its network, by letter and email, VGN
publishes a newsletter as well as issuing brochures on different building
blocks. Posters are sent out to draw attention to the client and employee
surveys, while contacts receive sample letters they can send to clients/parents
and employees. And to top it all off, the organisations’ contacts are invited to
attend a series of informative meetings. Quite a package, then, and all centred
on the idea that participants will stay on board only if they know what is going
on. VGN has also plumped for its own benchmark logo, making all
communication immediately recognisable as benchmark-related.
The only benchmark tool that has seen sector-wide application before is the
employee survey. As in 2004, the survey uses the ‘quality of the job’
questionnaire, which has undergone a few minor changes since then. In
January 2007, organisations participating in the survey received boxes of
questionnaires for each internal unit enrolled during sign-up. This time,
employees were also given the option to complete the questionnaire online.
Results have since come in: a response in excess of 50 per cent, a solid
percentage.
Observing that benchmarking is now firmly rooted in the sector and no longer
needs the association to act as a key driving force, the brochure provides a
guide encouraging organisations to grasp the nettle. GGZ Nederland sees
benchmarking primarily as a tool for learning – and not therefore as an
accountability-driven instrument. It also feels that benchmarking has a part to
play in innovation, as a quality tool that can help generate and spread new
knowledge. Noting ten – not sector-wide – projects over the past ten years, the
brochure lists benchmarks on:
• the health information system ZORGIS’s key figures
• key facility figures, with an annual cycle covering a specific theme and
identifying relevant performance indicators
• clinical psychotherapy (care provision outcomes)
• inpatient rehabilitation centres (addiction care, care provision outcomes)
• addiction care, lifestyle training results
• overhead charges
As the brochure reveals, the mental healthcare sector has not yet opted for
multidimensional benchmarks, but it does run benchmarks that explicitly
investigate care outcomes. Incidentally, this also applies to an example of
mental healthcare benchmarking outside Dutch borders: in Towards National
Benchmarks for Australian Mental Health Services, its authors present a
comprehensive benchmark model covering both costs and effectiveness,
including quality. Their model comprises a set of performance indicators and
outcome indicators, with outcome defined as both the difference before and
after the intervention or treatment, and the difference with and without
intervention. The discussion paper also introduces an index of case complexity
as a performance indicator.
NVZ database
The Dutch Hospitals Association NVZ runs a database storing a plethora of data
on hospitals that allow for comparative analyses.
Both the newspaper Algemeen Dagblad and the weekly Elsevier publish annual
comparisons of Dutch hospitals. Although controversial, these reviews have
become more comprehensive over time. The hospitals themselves do not
initiate these comparisons.
Consumentenbond questionnaires
In 2005 and 2006 the Dutch consumers’ association sent out questionnaires to
hospitals and independent treatment centres a total of eight times. Each
questionnaire focused on a different condition and investigated the quality of
care. The association wrapped up its review with a league table revealing which
hospitals typically scored best and which lagged behind. It also gave hospitals
brownie points for each time they agreed to participate. Industry associations
initially came out against these surveys, as they felt that fragmented
investigations such as these put too great a burden on the organisations.
Ernst & Young periodically releases a review of financial key figures per
hospital, drawing on the hospitals’ annual accounts.
www.snellerbeter.nl
The Sneller Beter programme sees Dutch hospitals share best practices in the
field of business operations.
allocated to individual DBCs, the model has been widely implemented across
the hospital sector. As it is based on uniform definitions and allocation, the
model serves as a very appropriate vehicle for comparing hospitals, while
providing comprehensive transparency on the breakdown of costs. This also
makes it very useful for analyses of factors explaining cost differences.
International benchmarking
United States
United Kingdom
Germany
Every year, the hospitals receive a report comparing their data with those of
other participants in their subgroup, without being told who those others are.
The drawback to this low threshold, of course, is that organisations are unable
to sit down and exchange experiences.
Clinical treatment
The most notable feature of this benchmark is the brevity of its timeline: as
soon as a subgroup has enough members to enable the creation of reliable
comparisons, participants can obtain their reports with a single mouse click.
The reports take two to four hours to produce. The German benchmark’s
motto: Wer heute den Kopf in den Sand steckt, knirscht morgen mit den Zähnen (‘Head
in the sand today, gnash your teeth tomorrow.’)
Treasury benchmarks
We will end this section with an example of a process benchmark, born out of
the treasury benchmark we briefly touched on above. In mid 2006 it transpired
that hospitals were losing interest income as they were invoicing later than
had been assumed in the standard interest rate payment. A number of
hospitals then decided to compare their invoicing processes.
Organisational design: The organisation is straightforward and flat and has no barriers between units.
Organisational members: The organisation sees and treats its members as its main instrument to achieve its objectives.
To illustrate how the drive for performance excellence can spark far-reaching changes in company cultures, here is what management guru Tom Peters had to say on several visits to the Netherlands:
In its 2005 Management Tools & Trends study, Bain & Company identifies
innovation as the next big organisational challenge reported by its
respondents.
The idea is that performance excellence scores will provide greater insight into
ways of improving operations and more information about any gaps between
an organisation’s own operations and those of excellent performers. Needless
to say, we are very eager to see the first results emerge in the autumn of 2007.
surveys, but draws on surveys carried out as part of its responsible care
programme and accountability to the Dutch Healthcare Inspectorate.
Most benchmarking exercises in the past few years have asked participants for
their post-benchmarking views on cost-to-reward ratios among other matters.
On the whole, respondents were positive – if they had not been, we would not
have been able to continue the benchmarks – but that is not to say there were
no critical comments. Invariably, some organisations felt a benchmark was too
general and not sufficiently instructive, while others cited too much focus on
detail.
they wish to see, applying colour codes to indicate specific areas for
improvement, listing points of action – but all to no avail.
Perhaps the following suggestion might prove useful: keep the backbone of the
benchmark unchanged with identical questions for all participants, but add
the option of additional questions that individual participants can select. The
requirements of the individual organisations would underpin the creation of a
library of such additional questions, which would be clearly defined so that all
users know exactly what is meant. Single questions would indicate their
building block category and, if applicable, the relevant subsection of the
building block. Participants electing to include additional questions would
receive individual reports on these and, if other participants were found to
have answered a specific question in this or any previous rounds, would also
receive comparative data. This approach would ensure the continued quality of
the information, retain the nature of comparative data and still meet
individual requirements.
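Such a question library could be represented as a simple data structure. The sketch below is purely illustrative; the question codes, texts and building-block tags are invented:

```python
# Hypothetical library of optional, uniformly defined extra questions.
LIBRARY = {
    "Q101": {"text": "Does the unit offer evening activities?",
             "block": "quality_of_care", "subsection": "daily_activities"},
    "Q202": {"text": "Can employees swap shifts online?",
             "block": "quality_of_work", "subsection": "scheduling"},
}

def comparative_report(question_id, answers):
    """Report one participant's optional question alongside any peers.

    `answers` maps organisation -> answer, for every organisation that
    selected this question in the current or an earlier round.
    """
    meta = LIBRARY[question_id]
    return {
        "question": meta["text"],
        "block": meta["block"],
        "n_participants": len(answers),
        # Comparative data only exist if at least one other organisation
        # has answered the same uniformly defined question.
        "comparable": len(answers) > 1,
    }
```

Because every question carries its building-block category and subsection, optional answers slot into the same reporting structure as the fixed backbone.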
commitment. One idea mooted is to design a simple tool that would enable
benchmarking teams in an organisation to indicate what they think their
scores will be and what scores they are aiming to achieve. These figures could
then be compared with the actual scores later in the process.
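A tool of that kind could be as simple as recording three figures per benchmarking team. A minimal, purely illustrative sketch:

```python
def score_card(predicted, target, actual):
    """Compare a team's predicted and targeted scores with the actual outcome.

    prediction_error  : how well the team knew its own organisation
    distance_to_target: how far the ambition still is from reality
    """
    return {
        "prediction_error": actual - predicted,
        "distance_to_target": target - actual,
    }
```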
All this is, of course, predicated on the assumption that the data for all
participants show up sufficient interrelationships for the computer models to
actually work. A word of warning here: causal or statistically significant
relationships have by no means been found to exist in all cases. And even if
such relationships are uncovered at the aggregate level, this does not mean
that these apply to all organisations. We are occasionally asked to produce
integrated analyses at the level of the organisation – but that is simply not
possible, as any correlation at organisation level might be coincidental. What
we can do is offer tools to show what would happen if the same
interrelationships applied to the organisation as to the aggregate.
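Such a what-if tool might apply a relationship estimated at the aggregate level to a single organisation's own scores. In this sketch the slope and scores are invented, and the result is a projection, not a prediction for that specific organisation:

```python
def what_if(own_score, target_score, aggregate_slope):
    """Project the change in a related outcome IF the aggregate-level
    relationship also held for this one organisation.

    aggregate_slope: estimated across all participants, e.g. the change in
    client satisfaction per extra point of employee satisfaction.
    """
    return (target_score - own_score) * aggregate_slope

# Hypothetical: raising employee satisfaction from 6.8 to 7.5, with an
# aggregate slope of 0.4, projects roughly +0.28 on client satisfaction.
projected_gain = what_if(own_score=6.8, target_score=7.5, aggregate_slope=0.4)
```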
There are many advantages to such a system, perhaps the most important
being that the process will be shortened considerably. One-time
benchmarking, by contrast, always needs a certain amount of time for sign-up
and data collection, allowing for a pre-agreed timeline. Researchers
subsequently need time for analysis, even if they are using ICT. Moreover, a
continuous benchmark allows organisations to benchmark when they see fit,
making for optimum embedding in their own management cycles.
A third benefit is that there is no longer any need for organisations to wait for
the next benchmarking exercise to roll around. If an organisation wants to
know whether its improvement measures have had any effect, nothing need
keep it from investigating this by starting a new benchmarking exercise – even
when other organisations benchmark less frequently. In the case of repeat
benchmarks, organisations may choose not to participate in all building blocks
but to use their most recent benchmark figures as a reference point. The system
should also reduce the costs of benchmark participation – an advantage not to
be underestimated.
will look like in detail is as yet unknown. A number of issues still need
resolving, such as data validation – in-built automatic testing? – and the vexed
matter of constantly moving averages: even if it records the exact same
performance, an organisation might score above average in one benchmarking
exercise and below average in the next. This issue might perhaps be addressed
by adding another comparison based on fixed values alongside that based on
performances by other organisations.
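A small, invented example makes the moving-average problem concrete: an organisation scoring exactly 7.0 in two successive rounds can flip from above to below average purely because the field has shifted, whereas a fixed standard gives a stable verdict:

```python
# Invented illustration of the moving-average problem: an identical
# performance can fall above average in one round and below it in the next.

def position(score, peer_scores):
    """Classify a score against the average of all participants."""
    avg = sum(peer_scores) / len(peer_scores)
    return "above average" if score > avg else "below average"

own_score = 7.0                       # identical in both rounds

round_1 = [6.2, 6.8, 7.1, own_score]  # field average 6.775
round_2 = [7.4, 7.6, 7.2, own_score]  # field average 7.3

print(position(own_score, round_1))   # above average
print(position(own_score, round_2))   # below average

# A fixed standard alongside the peer comparison avoids the flip:
FIXED_NORM = 6.5                      # hypothetical absolute target
print("meets norm" if own_score >= FIXED_NORM else "below norm")
```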
All that said, further streamlining is a good thing. Moreover, aggregate data
are often collated from lower-level data, and as long as definitions are the same
it should be less of a problem for organisations to supply these data than any
other information that might be requested.
The Ministry of Health, Welfare and Sport has now joined the Dutch Taxonomy
Project, a collaborative venture between the Dutch justice and finance
ministries that is looking to standardise and simplify the financial information
that organisations are expected to supply to the government. Working
together with Dutch trade and industry, intermediary bodies such as
accountancy firms, trust offices, tax advisers and software suppliers, the
ministries are developing a Dutch XBRL taxonomy (XBRL stands for eXtensible
Business Reporting Language). This taxonomy is a data dictionary that is built
into financial software. Marking the relevant data in an organisation’s records,
XBRL allows for rapid and efficient collation of data for accountability
purposes.
The objective of the project is to ease the administrative burden: the
government creates a basic structure, and intermediary bodies adapt their own
organisations and infrastructures to the taxonomy in good time.
The annual healthcare review and benchmarking also require other data, such
as client and employee numbers. It is possible to develop software drawing on
XML (eXtensible Markup Language, the language underpinning XBRL) to allow
relatively easy integration of all the requisite information from a care
provider’s systems.
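By way of illustration (the element names and namespace below are invented, not an actual XBRL taxonomy), XML-tagged facts can be collated mechanically once the definitions are shared:

```python
# Sketch of XBRL-style data extraction with Python's standard library.
# The element names and namespace are invented for illustration; a real
# XBRL instance would reference an official taxonomy.
import xml.etree.ElementTree as ET

instance = """\
<report xmlns:care="http://example.org/care-taxonomy">
  <care:ClientCount>1250</care:ClientCount>
  <care:EmployeeCount>430</care:EmployeeCount>
  <care:TotalRevenue unit="EUR">48200000</care:TotalRevenue>
</report>
"""

NS = {"care": "http://example.org/care-taxonomy"}
root = ET.fromstring(instance)

# Because every fact is tagged with a shared definition, collation is a
# matter of looking the tags up rather than re-keying figures by hand.
facts = {tag: root.findtext(f"care:{tag}", namespaces=NS)
         for tag in ("ClientCount", "EmployeeCount", "TotalRevenue")}
print(facts)
```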
Appendices
A Steering committee and
sounding-board group
Representing PwC
B. Quality managers
G. Gerritsen Alysis Zorggroep Arnhem
M. Roelink Amerpoort ASVZ Baarn
C. Board
B. van den Dungen Viataal Sint Michielsgestel
L. van Eijck Alysis Zorggroep Arnhem
D. Industry associations
M. Straks ActiZ Utrecht
M. Dopper VGN Utrecht
E. Personnel manager
L. De Braal GGZ Regio Breda Breda
Representing PwC
Literature/reports
• Accenture study entitled Assessment of Benchmarking Within Government,
quoted in Accenture press release dated 31 July 2006.
• Argyris, C. and Schön, D. (1978) Organizational Learning: A Theory of Action
Perspective. Addison-Wesley.
• Arcares (April, 2002) Eerste test benchmarkinstrumentarium. Algemeen rapport.
Utrecht. (First pilot of benchmarking tools. General report, in Dutch only)
• Arcares (November, 2002) Tweede test benchmarkinstrumentarium.
Algemeen rapport. Utrecht. (Second pilot of benchmarking tools. General
report, in Dutch only)
• Arcares (February, 2003) Tweede test benchmarkinstrumentarium, onderzoek
toepasbaarheid benchmarkinstrumentarium in extramurale zorg. Algemeen
rapport. Utrecht. (Second pilot of benchmarking tools, review of
applicability benchmarking tools in outpatient care. General report, in
Dutch only)
• Arcares (February, 2004) Benchmark verpleeg- en verzorgingshuizen 2003.
Prestaties van zorgaanbieders gemeten. Utrecht. (2003 nursing and care homes
benchmark: Measuring performance of care providers, in Dutch only)
• Arcares (November, 2005) Benchmark verpleeg- en verzorgingshuizen 2004/2005.
Prestaties van zorgaanbieders gemeten. Utrecht. (2004/2005 nursing and care
homes benchmark: Measuring performance of care providers, in Dutch
only)
• Arcares (May, 2006) Rapportage vooronderzoek continue benchmark V&V.
Utrecht. (Report on preliminary investigations into continuous benchmark
of nursing and care, in Dutch only)
• Berg, S. et al. (March, 2006) Water Benchmarking Support System: Survey of
Benchmarking Methodologies (abstract), Public Utility Research Center,
University of Florida.
• Bendell, T. et al. (1998) Benchmarking for Competitive Advantage. London.
• Bentlage, F., Boelens, J.B. and Kip, J.A.M. (1998) De excellente
overheidsorganisatie. Kluwer. (The excellent government organisation, in Dutch only)
• Bullivant, J. (1994) Benchmarking for Continuous Improvement in the Public Sector,
Longman, Harlow.
• Camp, R. (1989) Benchmarking: The Search for Industry Best Practices that Lead to
Superior Performance. ASQ Quality Press.
Internal materials
• Internal materials of the industry associations and
PricewaterhouseCoopers.
Websites
• http://www.12manage.com/methods_valuedisciplines_nl.html
• http://hcro.enigma.co.nz/website/print_issue.cfm?issueid=61
• http://kb.webebi.com/article.aspx?id=10003&cNode=5K3B4O
• http://nation.ittefaq.com/artman/publish/printer_29837.shtml
• http://www.allbusiness.com/management/benchmarking/491524-1.html
• http://www.firstyear.org/fyi/detrickandpica.html
• http://www.fortherecordmag.com/archives/ftr_04172006p22.shtml
• http://www.inova.org/inovapublic.srt/news/pressreleases/benchmarkIFH.html
• http://www.management-development.com/organizational_performance
• http://www.managementstart.nl
• http://www.totalbenchmarksolution.com
• http://www.uow.edu.au/arts/sts/bmartin/dissent/documents/health/benchmark.html
C Benchmark studies
Anneke van Mourik-van Herk has also been involved in healthcare
benchmarks since 1996, with analyses and reporting as her specific field
of expertise in addition to benchmark development.
D Abbreviations
ActiZ the Dutch association for nursing, care and home care
AWBZ Exceptional Medical Expenses Act
CVZ the Health Care Insurance Board
GGD community health services
GGZ the mental healthcare association
HEAD Dutch Association of Finance Managers in Healthcare
IGZ the Dutch Healthcare Inspectorate
JGZ youth healthcare
NVZ Dutch Hospitals Association
RVZ the Dutch Council for Public Health and Care
VGN Association for Care of the Disabled in the Netherlands
VVT nursing, care and home care
ZBR care and treatment provided
ZonMw the Netherlands organisation for health research and development
ZZP care and complexity module
E Endnotes