
FACTORS AFFECTING THE MANAGEMENT

OF SERVICE QUALITY

PHILIP CALVERT*
ROWENA CULLEN*

Introduction

The quality service model of Zeithaml, Parasuraman and Berry defines five gaps or
discrepancies which may impinge on service quality. These are:
Gap 1. The discrepancy between customers’ expectations and management’s
perceptions of these expectations
Gap 2. The discrepancy between management’s perceptions of customers’
expectations and service quality specifications
Gap 3. The discrepancy between service quality specifications and actual service
delivery
Gap 4. The discrepancy between actual service delivery and what is communicated
to customers about it
Gap 5. The discrepancy between customers’ expected service and perceived service
delivered. (Zeithaml, Parasuraman, and Berry, 1990)

Recent studies of service quality in library and information services, using the
SERVQUAL instrument or variants of it, have focused on Gap 5, the discrepancy between
customers’ expectations of service and their perceptions of service delivery. This has been
shown to be a useful tool for diagnosing areas where libraries could improve their services
and make them more responsive to customer expectations.

There is a perception in the LIS literature (Hernon and Altman, 1998; Cullen, 1997) that
managers lack an understanding of customer needs and lack the motivation to re-allocate
resources to meet these needs. In the quality service model referred to above, Gap 1, the
gap between managers’ perceptions of customers’ expectations and customers’ actual
expectations “is the first and possibly most critical step in delivering quality service”
(Zeithaml, Parasuraman, and Berry, 1990, 51). It is also assumed that the organisational
culture and leadership style of the manager may have an impact on the manager’s
perceptions of customers’ expectations, willingness to take action on this information, and
the discrepancy between customer expectations and perceptions of service delivery.

There have been no formal studies of Gap 1 in the field of LIS studies, although some
studies have investigated managers’ perceptions of customer expectations (Edwards and
Browne, 1995). There is no research on the impact of other factors on the ability of an
organisation to respond to perceived differences or gaps revealed by the SERVQUAL
instrument.

Organisational Culture
In order to investigate aspects of organisational culture that might account for these gaps,
the project employed two models. In the first model, Lakos argues that the extent to which
an organisation fosters a ‘Culture of Assessment’ is critical to the effective use of
evaluation (Lakos, 1998). Lakos focuses on organisational factors as the key to evaluation:
“A Culture of Assessment is an organizational environment in which decisions are based
on facts, research, and analysis, and where services are planned and delivered in ways that
maximize positive outcomes and impacts for customers and stakeholders” (Lakos, 2002).

The second model was proposed by Cullen (1997) and was an attempt to understand why
libraries and their managers have been slow to use evaluation, despite a growing focus on
accountability, and extensive research into the evaluation of library services. The model
highlights some of the organisational factors that influence the use of evaluation in
libraries: whether the organisation’s focus is internal or external, whether it places greater
value on inputs or outputs, and the strength of its commitment to use evaluation methods to
help the organisation plan and change. These factors were built into a new model, the
Focus/Values/Purpose matrix, which attempted to explain some of the factors affecting
Lakos’s ‘Culture of Assessment’ (Lakos, 1998). Some libraries have adopted the matrix as
an approach to assessing their readiness for evaluation, and ensuring the organisation
develops a leadership focused on outputs and external relationships with stakeholders
(Phipps, 1999).

Figure 1. The Focus/Values/Purpose matrix

Research objectives
The project had two specific objectives:
1. To analyse the discrepancy between the perceptions held by library staff about the
expectations of customers, and the actual expectations of their customers, i.e. a formal
Gap 1 analysis.

2. To investigate a range of factors that might influence the organisation’s ability to


respond to and reduce any apparent discrepancies identified in Gap 1 and Gap 5
surveys. These factors might include:

• organisational culture, including the level of trust within the organisation, and staff
anxiety about negative repercussions following evaluation;
• a culture of assessment in the organisation, including the extent to which resource
allocation and strategic planning is based on evaluation (Lakos, 1998);
• orientation of the organisation on the Focus/Values/Purpose matrix (Cullen, 1997).

The underlying hypotheses were:

1. That a positive correlation will exist between the factors listed above and Gap 1—
the discrepancy between managers’ perceptions of customer expectations and
customers’ expectations of service;

2. That a high correlation will exist between results of a Gap 1 analysis and a Gap 5
analysis in the same organisation.

While Zeithaml, Parasuraman and Berry would consider that they have demonstrated the
validity of this second hypothesis in a number of studies, it has not been tested in the
library/information service environment.
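
The second hypothesis can be checked with a simple correlation across institutions. As a minimal sketch only (the study does not specify its correlation method, and with four libraries any coefficient is merely suggestive), the largest-gap figures reported later in Table 1 can be correlated directly:

```python
from math import sqrt

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Largest Gap 1 and Gap 5 scores for Libraries A-D (Table 1)
gap1 = [0.92, 0.54, 0.57, 0.57]
gap5 = [2.01, 2.12, 1.80, 1.62]

# A positive r would be consistent with hypothesis 2
r = pearson(gap1, gap5)
```

With only four data points this sketch cannot establish the hypothesised correlation; it merely illustrates how the two gap analyses could be compared quantitatively.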

Method
The study was carried out in four university libraries in New Zealand, and at Yale
University in the United States. The primary data reported here is focused on the New
Zealand part of the study, although it is hoped to complete the comparison with Yale for
later publication.

Two separate questionnaires were used to collect data for the three main parts of the study.
Firstly, in each of the New Zealand libraries, a survey of library staff was carried out using
a ServQual instrument with 54 indicators and a 7-point Likert scale, asking staff to project
how they believed their customers would rate their expectations of service, and how they
rated their own expectations of the services they offered. Secondly, in each library a
random sample of customers was surveyed with the same instrument, and respondents were
asked to rank their expectations for ideal service and perceptions of service delivery. This
questionnaire was also used to collect data on customer perceptions of the library’s
performance on the Focus/Values/Purpose axes. Factual data was collected from the
participating libraries, on library policies, communications systems, advisory committees,
and interactions with customers, that might also help determine the placement of the
library on the Focus, Values, and Purpose axes.
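
The per-indicator gap scores implied by this design can be sketched as follows. This is an illustration only: the sign convention (staff minus customer for Gap 1, perception minus expectation for Gap 5) is an assumption consistent with how negative gaps are characterised in the study, and the example ratings are hypothetical.

```python
def gap_scores(staff_projection, customer_expectation, customer_perception):
    """Per-indicator Gap 1 and Gap 5 scores from mean 7-point Likert ratings.

    Each argument is a list of mean ratings, one entry per indicator.
    Gap 1: staff projection of customer expectations minus customers'
           actual expectations (negative = staff underestimate expectations).
    Gap 5: customers' perception of delivered service minus their
           expectations (negative = service falls short of expectations).
    """
    gap1 = [s - e for s, e in zip(staff_projection, customer_expectation)]
    gap5 = [p - e for p, e in zip(customer_perception, customer_expectation)]
    return gap1, gap5

# Hypothetical mean ratings for three of the 54 indicators
g1, g5 = gap_scores(
    staff_projection=[5.2, 6.0, 4.8],
    customer_expectation=[5.8, 6.1, 4.5],
    customer_perception=[4.9, 5.0, 4.7],
)
```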

Questions related to the Focus/Values/Purpose instrument were devised and tested in the
Yale study and then used for the New Zealand questionnaire. They were formulated as
follows:
1. Focus: A library can be described as being either inward looking and concerned
with the efficiency of its internal processes, or as having an external, client-centered
focus. Two questions were posed for this factor:
• Library operations and activities are highly efficient (Q13)
• The library nurtures excellent relationships within the campus community (Q14)

2. Values: A library can be described as placing more value on acquiring materials
and building its collections than on putting resources into customer services. The
question posed for this factor was:
• Library resources are directed primarily to
a) building outstanding collections (Q15a)
b) providing outstanding services to assist me (Q15b)

3. Purpose: A library can be described as an organization with a strong sense of
purpose, or it may be perceived by either its staff or customers to lack strong resolve
and a sense of purpose. As the study was concerned with the library’s projected
service quality commitment, the question posed to address this factor was:
• The library demonstrates a strong commitment to excellence (Q16)

Comparison between universities: Gap 1 versus Gap 5
Gap 1 is measured by a single score: the difference between management’s (in this case
the library staff’s) estimate of what customers’ expectations will be and customers’
actual expectations. Negative gaps occur where staff perceptions of student
expectations fall below students’ reported expectations on any of the 54 indicators. In this
study, no Gap 1 score was larger than 1.0, but two institutions had approximately twice
the number of negative Gap 1 gaps as the other two (Table I). That is, their perceptions of
what customers expected were less well aligned with customers’ actual expectations.

            Gap 1:        Gap 1:            Gap 5:        Gap 5:
            largest gap   no. of negative   largest gap   no. equal to or
                          gaps                            above 1.0
Library A   0.92          30                2.01          40
Library B   0.54          35                2.12          39
Library C   0.57          17                1.80          19
Library D   0.57          22                1.62          14

Table 1. Summary of Gap 1 and Gap 5 scores

Gap 5 is the gap between students’ expectations and their perceptions of reality (service
actually delivered). Library performance on this measurement is assessed here by the size
of the largest gap, and the number of gaps equal to or greater than 1.0.
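
The two summary figures reported per library can be derived from the per-indicator gap scores. As a minimal sketch (assuming, as the table layout suggests, that gap magnitudes are summarised by their maximum and by a count at or above a 1.0 threshold):

```python
def summarise_gaps(gaps, threshold=1.0):
    """Summarise per-indicator gap scores as in Table 1: the largest gap
    magnitude and the number of gaps at or above the threshold."""
    magnitudes = [abs(g) for g in gaps]
    return max(magnitudes), sum(1 for m in magnitudes if m >= threshold)

# Hypothetical per-indicator Gap 5 scores (negative = service below expectation)
largest, count = summarise_gaps([-2.01, -0.4, -1.3, 0.2, -1.0])
```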
The two institutions with the lowest gap between customer expectations, and customer
perceptions of service delivery (i.e. the least Gap 5 discrepancy), Libraries C and D are
also the institutions with the smallest discrepancy between management’s perception of
customer expectations, and customers actual expectations ( i.e. the lowest Gap 1 scores).
(Although Library B had the lowest actual Gap 1 discrepancy in this table, at .54, it had
the largest number of gaps overall.)

We observe that Gap 1 scores are lower than anticipated, but nevertheless appear to affect
performance as measured by Gap 5 scores.

Management factors affecting gaps


The four questions in the customer questionnaire which were directed towards exploring
where the libraries sat on the Focus/Values/Purpose matrix were designed to assess the
extent to which the focus of the organisation is internal or external, whether it places
greater value on inputs or outputs, and the strength of its commitment to use evaluation
methods to help the organisation plan and change. If the instrument can detect these
placements, questions 13 and 14 should be in opposition, representing whether the focus is
internal or external; questions 15a and 15b are in opposition detecting whether the
organisation values inputs or outputs, and question 16 focuses on the strength of purpose in
pursuing excellence through evaluation.
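
One simple way to turn these question means into axis placements is to score each axis as the difference between its opposed pair of questions. This scoring is a hypothetical illustration, not the method used in the study:

```python
def matrix_orientation(q13, q14, q15a, q15b, q16):
    """Place a library on the Focus/Values/Purpose axes from mean ratings.

    focus  > 0 suggests an external (relations) rather than internal
             (efficiency) orientation;
    values > 0 suggests outputs (services) valued over inputs (collections);
    purpose is the raw commitment-to-excellence score.
    """
    return q14 - q13, q15b - q15a, q16

# Library A's question means from Table 2
focus, values, purpose = matrix_orientation(5.39, 5.09, 5.19, 5.23, 5.45)
```

On this scoring, Library A’s negative focus difference would indicate an internal orientation, consistent with the interpretation offered below.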

Table II summarises the means of responses to these questions. Raw scores are not
comparable between institutions, but are used here only to try to detect which ‘dimension’
the institution favours.

            13.          14.          15a.          15b.        16.
            Efficiency   Relations    Collections   Services    Excellence
Library A   5.39         5.09         5.19          5.23        5.45
Library B   4.96         4.86         4.53          4.83        5.07
Library C   5.58         5.08         5.07          5.38        5.53
Library D   5.70         5.47         5.35          5.56        5.85

Table 2. Means of user responses to questions 13-16 on management aspects in each
institution

It seems from Table II that Library A should be placed towards the internal rather than the
external end of the focus axis, since its customers consider that it values inputs more than
outputs, and it ranks third in terms of excellence and commitment to evaluation. Library B
is also perceived by customers to have an internal rather than external focus but is
perceived by its customers to value outputs rather than inputs (neither of these score well
with customers, however, and Library B’s collection is considered by its own staff to be
less than adequate.) Overall its score on excellence, and strength of resolve to use
evaluation is the lowest score awarded by users. Both Libraries C and D are also perceived
to focus on efficiency rather than customer relations, although like Library B they score
more highly on services than collections. Their scores on excellence are higher than the
other two libraries, but only Library D’s stands out.

Results of the survey do not appear to show any clear correlation between position on the
three axes and better performance in terms of customer service.

Factual data collected from the libraries
Using factual data about management practices in each library, the project proceeded to
examine whether each institution could be placed on the axes using this information, and
whether this placement appeared to have any impact on the delivery of service quality.

Library A
Managers at Library A reported that although the library would once have been considered
internally focussed, they are trying to move to a more external view. Traditionally the
emphasis has been on developing collections, but recently there has been an attempt to
shift energies into services. An example given was that rather than spend time and money
on trying to achieve the perfect catalogue record, staff now simply download records from
the national bibliographic utility. Although they did not claim that they had achieved the
objective of putting more human resources (staff) into the library to serve customers
directly, there was a greater emphasis on service rather than on trying to achieve
efficiency. Most reporting in the past has concentrated on traditional input and output
statistics, but starting in 1999 the library began to use more evaluative, qualitative and
customer-focused evaluation activities. These activities place the library somewhere
around the central point on the Focus, Values and Purpose axes, at a point where it is
shifting from the left-hand side of each axis towards the stronger customer focus of the
right-hand side.

A Management Group of four senior librarians assists the University Librarian. Members
of the Management Group spend about 80% of their time on strategy and 20% on
operational matters. Team leaders of functional units were once involved in the ‘big
picture’ but are no longer regularly involved and members of the Management Group
agreed there was potential for team leaders to feel disempowered, though this is only an
assumption unsupported by evidence, and there is no clear connection between the
management structure and service quality.

The library uses a variety of mechanisms to communicate with customers:
• A full Gap 5 service quality survey conducted in 1996;
• The university conducts regular student surveys that include general questions
on satisfaction with the Library;
• A survey of academic staff use of inter-library loans;
• An annual survey of law students;
• Focus groups held in the Management School;
• Comments Sheets and a Suggestion Box. The Administration Librarian writes
an annual summary of them, and each suggestion receives a reply;
• Evaluations following library tours, information literacy classes, etc.;
• Regular meetings with the student shelvers and student assistants;
• A Library Committee, although it has no student representatives;
• Library staff attend some Boards of Study.
Library B
Senior staff in Library B indicated that the library positioned itself towards the left-hand
end of all axes; that is, it is more focused on internal issues than external, it tends to value inputs
more than outputs, and that it does not have a strong commitment to evaluation.

This was evidenced by current policies focused on collection development, which was
believed to have been somewhat neglected in past years, and policies focused on good
management practice rather than service. Because the library is spread over several
locations, without centralised policies, there is no strong centralised ethos about service,
evaluation and performance. The librarian said he is currently seeking to increase library
outputs by making processes more efficient, but also by getting more information about
customer needs for collection development. Organisational flexibility and responsiveness
arose through the devolution of management decisions to each branch, which can provide
for high levels of customer responsiveness, but there are few policy frameworks to provide
guidance on how managers should focus their efforts.

The library does not use any formal performance measurement. It does not measure any
outputs, except through the annual management review of staff performance. Relations
with customers are maintained through a number of mechanisms:
• The Librarian attends Academic Board and tables the Library Annual Report
there;
• Attendance at Faculty Board meetings,
• The library liaison scheme, in which each academic group is assigned to a
member of the collection management team;
• The Library Committee (meets irregularly);
• A separate liaison committee for the Law Library;
• Good relations between branch librarians and their user groups;
• Occasional interaction with the Students Association;
• Occasional notices, and informal responses to student issues in the student
magazine;
• A suggestions board.

The library also employs a number of students as part time workers, and believes that they
provide good information about issues concerning library services.

The library regarded its relations with the rest of the institution as good, because of
positive feedback it gets from informal networks, user satisfaction surveys, and anecdotal
evidence. However, the Library Manager described the lack of formal collection of the
views of students as ‘a yawning gap’, and offered the view that the library needs to rethink
its liaisons and relationships. He acknowledged that the library takes the question of
‘library goodness’ for granted, and does little formal evaluation of its services – certainly
less than other service organisations on campus, such as Information Technology Services,
whose level of service he felt it did not match.

Library C
Senior staff at Library C placed it towards the right hand end of each of the axes, stating
overtly that the library is more focused on external issues than internal issues, that it values
outputs more than inputs, and that it feels it has a strong commitment to evaluation. While
the managers acknowledge that staff well-being is a major issue (and this is regarded as an
input), and agreed that they tend to use performance measures based on inputs, they stated
that their focus was on outputs, and they are beginning to consider outcomes.

Mechanisms used to assess user needs and satisfaction include:


• A major survey of users conducted as part of an annual general university-wide
Student Services survey;
• Library Committee with staff and student reps on it;
• Attendance at Academic and Faculty boards;
• Several surveys of students attending summer courses, about opening hours, etc.;
• A library survey of staff and students - What are we doing well? What aren’t
we doing?
• Survey input to the Annual Plan;
• Focus groups to evaluate law library training groups;
• A web usability survey on the effectiveness of library’s web site, using
observations of users;
• Focus groups with special user groups—Maori, International students;
• Library suggestion box;
• Liaison librarians in branch and central libraries;
• Evaluations of information literacy programmes;
• Library newsletter;
• Contributions to a student magazine.

The Library is conscious that its statistics are focused on raw data (attendances, number of
reference transactions, etc.) rather than on the accuracy of reference answers or the actual
use its resources are put to. It also reports that it does not use many performance measures - it is
‘still trying to work out what to do’ – and it reports annual statistics covering inputs,
outputs and ratios between these (to judge efficiency), and key performance indicators.

However, the library believes it has good customer relations, a high level of credibility,
that it is proactive in communicating with customers, and that it is seen as responsive and
willing, with a real service culture. It reports that it is constantly re-evaluating services,
trying to be responsive and innovative, following best practice. This culture goes back to a
strategic planning exercise some years ago when the staff took the opportunity to say ‘let’s
put the customer first.’ Managers say this attitude came from within the library.

Library D
Library D also placed itself firmly to the right of the centre point on all three axes,
although the Librarian stated that while the library valued outputs, its focus was balanced
between internal and external concerns. While it had a strong commitment to evaluation,
the Librarian believed the organisation as a whole was not yet totally united in this.
Flexibility, and ability of staff to be responsive to customers, were seen in terms of trust,
and as being based on firm protocols and policies at the senior management level, within
which individuals and branches had autonomy over day to day decision making, and were
encouraged to be flexible.

The library reports on both inputs and outputs in its Annual Report, focusing on one
service area each year, and is working towards identifying and reporting outcomes, in
conjunction with initiatives by CONZUL1 and CAUL.2 It is also engaged in benchmarking
activities with the University of Queensland.

Consultation processes included:


• Library Committee, and committees for Law and other special libraries;
• Library/Student Liaison Committee;
• Circulating all policy issues to the Library Committee, the Student Liaison
Committee, and University Senate;
• Extensive use of customer focus group meetings on all aspects of policy
development, and new initiatives (e.g. electronic resources, automation,
buildings, collection management, etc.);
• Annual surveys of student opinion run by university;
• Staff attendance at student forums;
• Visits to halls of residence;
• Articles and responses in each issue of student newspaper, and university
newsletter;
• Feedback boxes/books in each branch;
• Informal contacts, and feedback from children of staff enrolled as students.

All staff have had customer services training, and the library believes that it has a strong
customer focus, and enjoys high standing in the university community.

Identifying the management factors that lead to service quality


The data in Table I shows that in the two institutions where staff perceptions of customer
expectations were closer to actual customer expectations, service delivery was also closer
to customer expectations. This in itself would not be sufficient to indicate cause and effect,
as the sample size is too small to prove the proposition, although the data does support the
second hypothesis.

However, the interpretation of responses to questions 13-16 in the customer surveys did
not provide any evidence of differences between institutions in terms of where they placed
their focus and values, or suggest any differences that might lead to the differences in
service quality scores, as measured by the Gap 5 analysis. That evidence shows up to some
extent in the reports by the various libraries, which highlight some differences that one
could tentatively suggest might contribute to service quality ratings.

The two universities which have significantly better scores in the Gap 1 analysis, as
shown in Table I, both indicated that their focus, values, and commitment to
evaluation were towards the right-hand end of each axis, and were able to point to a wide
range of methods by which they assessed user needs and maintained relationships with
users. These methods were more extensive than those used by the other two libraries. While it is too early to
postulate that there is a direct link between these two elements, it seems likely that the
management factors being investigated here, and placement on the axes in the matrix in
Figure I, go some way to explaining service quality in university libraries. The data also
suggest that better methods to ascertain where a library is placed on these key axes will
involve investigation of actual library practice, as this is not well assessed by use of
questionnaires on customer perceptions.

1 Committee of the New Zealand University Librarians
2 Council of Australian University Librarians
While the authors are confident that the basic propositions of the research are borne out by
the results, especially the correlations between Gap 1 and Gap 5 results, there are many
questions still arising from this project, and much work to be done on appropriate
methodologies for assessing the culture of assessment in libraries.

REFERENCES

Cullen, Rowena (1997). Does performance measurement improve organisational
effectiveness? A postmodern analysis. Pp. 3-20 in Proceedings of the 2nd Northumbria
International Conference on Performance Measurement in Libraries and Information
Services. Newcastle upon Tyne: Department of Information and Library Management,
University of Northumbria at Newcastle.
Edwards, Susan and Mairead Browne (1995). Quality in information services: do users and
librarians differ in their expectations? Library & Information Science Research 17, 163-182.
Hernon, Peter and John R. Whitman (2001). Delivering Satisfaction and Service Quality: a
Customer-based Approach for Libraries. Chicago: American Library Association.
Lakos, Amos (2002). Defining a “Culture of Assessment” (pdf file). Culture of Assessment
Toolkit. http://www.library.ucla.edu/libraries/yrl/reference/aalakos/Cutoolkit.html
[Accessed 15/7/03]
Nitecki, Danuta A. and Peter Hernon (2000). Measuring service quality at Yale University’s
Libraries. Journal of Academic Librarianship 26(4): 259-273.
Phipps, Shelley and Carrie Russell (1999). Performance measurement as a methodology
for assessing team and individual performance. Proceedings of the Third Northumbria
International Conference on Performance Measurement in Library and Information
Services.
Zeithaml, Valarie A., A. Parasuraman, and Leonard L. Berry (1990). Delivering Service
Quality: Balancing Customer Perceptions and Expectations. New York: Free Press.
