
Int. J. Services and Standards, Vol. 4, No. 1, 2008

Levels of analysis issues relevant in the assessment of information systems service quality

Robert E. Miller*
Department of Accounting/MIS
Ashland University
Ashland, OH 44805, USA
E-mail: rmiller9@ashland.edu
*Corresponding author

Bill C. Hardgrave and Thomas W. Jones


Department of Information Systems
University of Arkansas
Fayetteville, AR 72701, USA
E-mail: whardgra@uark.edu
E-mail: twjones@uark.edu

Abstract: To date, researchers in Information Systems (IS) and other disciplines have devoted considerable effort to devising measures
of service quality. Given this extensive work, it is interesting to note that levels
of analysis issues have not been discussed. To address this deficiency, this
paper defines levels of analysis and applies the concept to the assessment of
service quality. Issues related to specifying the level of analysis are discussed,
including the level of theory, data collection, and statistical analysis.
Additionally, the use of service quality in unit-level models is discussed, with
special emphasis given to issues of data aggregation.

Keywords: services; service quality; levels of analysis; standards; composition models; aggregation.

Reference to this paper should be made as follows: Miller, R.E., Hardgrave, B.C. and Jones, T.W. (2008) ‘Levels of analysis issues relevant in
the assessment of information systems service quality’, Int. J. Services and
Standards, Vol. 4, No. 1, pp.1–15.

Biographical notes: Robert E. Miller is an Assistant Professor in the Dauch College of Business and Economics at Ashland University. He earned his PhD
in Information Systems from the University of Arkansas. His research interests
include information systems service quality, the uses and impact of Radio
Frequency Identification (RFID) and technologically mediated social networks.
Before pursuing his PhD, he worked for ten years as an applications developer
in the power and telecommunications industries.

Bill C. Hardgrave is the Edwin & Karlee Bradberry Chair in Information Systems, and Executive Director of the Information Technology Research Institute in the Sam M. Walton College of Business at the University of Arkansas. His research on software development (primarily people and process issues) has appeared in the Journal of Management Information Systems, Communications of the ACM, IEEE Software, IEEE Transactions on Software Engineering, IEEE Transactions on Engineering Management, The DATA BASE for Advances in Information Systems, Information and Management, Journal of Systems and Software, and Educational and Psychological Measurement, among others.

Thomas W. Jones is a University Professor in the Information Systems Department of the Sam M. Walton College of Business at the University of
Arkansas. He received a PhD in Statistics from Virginia Polytechnic Institute
and State University. His research interests focus on the applications of
statistical and operations research techniques; publications appear in, among
others, Decision Sciences, The Journal of Social Psychology, Journal of
Accounting Research, Journal of Statistical Computation and Simulation,
Journal of Accounting, Auditing & Finance, Industrial Management &
Data Systems, MIS Quarterly and Communications of the ACM. Jones has
served in numerous officer positions, including President, of three professional
organisations: Decision Sciences Institute, Southwest Decision Sciences
Institute and the Federation of Business Disciplines.

1 Introduction

The increasing significance of services within the Information Systems (IS) function
has made service quality an important determinant of IS success (DeLone and McLean,
2003). Service quality has been shown to be significant in areas from end-user support
(Shaw et al., 2002) and e-commerce (Cao et al., 2005) to the application service provider
market (Gupta and Herath, 2005; Lee et al., 2007). Despite its significance, service quality has received little attention beyond measurement in IS. This limited view of
service quality has left many aspects of the phenomenon to be addressed. One such
aspect concerns issues related to levels of analysis. A number of authors have called
for researchers to more fully address levels of analysis issues when developing theory
(Klein et al., 1994; Rousseau, 1985). According to Klein et al. (1994, p.198), this is
necessary because:
“By their very nature, organizations are multilevel… To examine
organizational phenomena is thus to encounter levels issues.”
What are these levels issues and how should they be addressed relative to service quality?
This paper is motivated by the need to answer these questions. Specifically, the paper
investigates levels of analysis issues relevant in the assessment of service quality. As
such, it will address the following research questions:
• What level(s) of analysis is(are) appropriate when assessing service quality?
• What issues are involved in the selection of the level(s)?
• How should these issues be addressed?
The paper begins by defining the concept of service quality and reviewing its development
in the literature. The paper then presents a model of service quality assessment, using
constructs developed in marketing. The paper defines levels of analysis and the related
concept ‘units of analysis’. Working from these definitions and the research model, the
paper addresses three specific levels issues: level of theory, data collection and statistical
analysis. The paper also examines the use of service quality in unit-level models.
Particular attention is paid to aggregation issues and how they should be addressed. The
paper concludes with a discussion of future research topics.

2 Service quality

In order to understand the impact of levels of analysis on the assessment of service quality, it is first important to define what we mean by service quality. Along with
a definition, it is also important to examine the process by which service quality
assessments are made. Relevant questions to keep in mind include:
• What is really being assessed?
• Who is qualified to make the assessment?
• How is the assessment made?
As with any investigation of a latent construct, the process should begin with a definition.

2.1 Service quality defined


As stated earlier, service quality has been widely researched in multiple disciplines. As
such, a number of definitions exist to describe the construct. Examples of these
definitions include:
• “service quality is the result of the consumer’s comparison of expected service with
perceived service” (Bojanic, 1991, p.29)
• “the consumer’s overall impression of the relative inferiority/superiority of the
organization and its services” (Bitner and Hubbert, 1994, p.77)
• “the extent in which the service, the service process and the service organization can
satisfy the expectations of the user” (Kasper et al., 1999, p.188).
Although these definitions differ slightly, they share a number of key concepts which
have become standard in the academic conceptualisation of service quality. Specifically,
they all view service quality as a perceptual phenomenon and as the product of a
disconfirmation process. While these may seem like fairly innocuous points, they are
critical to the ultimate goal of properly conceptualising the phenomenon.
Why is service quality defined as a perceptual construct? To answer this question,
it is necessary to define the terms ‘service’ and ‘quality’. At their most basic, services
are deeds, processes and performances (Zeithaml and Bitner, 2000). Services can also
be defined by stating what they are not, namely products. In this line of reasoning,
services and products form two opposing ends of a continuum. Determining whether
something is a service or a product is a matter of placing it on the continuum based on a
specific set of characteristics. These characteristics include: intangibility, inseparability
and heterogeneity. Pure services are considered intangible because they have no physical
manifestation. Pure services are also considered to be inseparable because the production
of the service cannot be separated from its consumption. In essence, production
and consumption occur simultaneously. Finally, pure services are considered to be
heterogeneous because they involve the interaction of human beings in their production,
delivery and consumption. As such, no two services can ever be exactly the same.
Obviously, not all services are pure services. However, all services do exhibit the
characteristics of intangibility, inseparability and heterogeneity to a greater degree than
do products.
While services may be easy to define, quality is a more complicated concept. For
example, the concept of quality can be viewed from three distinct perspectives: philosophical, technical and user-based (Kasper et al., 1999). The philosophical view argues that
quality is innate and that it can neither be defined nor analysed scientifically. In this view,
people know quality when they see it but they cannot define what it is. Needless to say,
the philosophical view has limited application in scientific research. The technical
approach views quality as conformance to a specified technical standard. Although this
view of quality can be used to assess services (e.g., number of customers served), it is
most often applied to the assessment of products (e.g., number of defects). This is, in
large part, due to the characteristics of services mentioned previously. Specifically, the
intangibility of services and the inseparability of production and consumption make
services more difficult to measure than products. The fact that services are, by nature,
heterogeneous also makes standards conformance highly impractical. The user-based
approach views quality as the subjective perception of the consumer. Given that services
are largely intangible, there are few, if any, characteristics to assess physically.
Therefore, the service is mostly perceived in the mind of the consumer. As such, the
perception of the consumer becomes the key to assessing the service. This line of
reasoning is supported by researchers such as Hernon and Altman (1996, pp.5–6) who
make the point rather eloquently:
“Quality is in the eyes of the beholder, and although it sounds like a cliché, it is
literally true. If customers say there is service quality then there is. If they do
not, then there is not. It does not matter what an organization believes about its
level of service.”
Besides being a perceptual phenomenon, service quality is also defined as the product of
a disconfirmation process. Specifically, it is the disconfirmation resulting from the
comparison of a consumer’s expectations of service with their perceptions of service
received. This view of service quality was first proposed by Parasuraman et al. (1985) in
their model of marketer-consumer interaction. In this model, a gap was identified
between the consumer’s expectations and perceptions of the marketer’s service.
Parasuraman et al. (1985) defined this gap as perceived service quality. When the
perceptions exceed the expectations, the consumer perceives service quality as positive.
When the expectations exceed the perceptions, the consumer perceives service quality
as negative.
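Expressed compactly (this simply restates the gap definition above in generic symbols; the notation is ours and is not drawn from the cited sources), perceived service quality for a given consumer can be written as

SQ = P – E

where P is the perception of the service actually received, E is the expectation formed before delivery, SQ > 0 when perceptions exceed expectations, and SQ < 0 when expectations exceed perceptions.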
The disconfirmation process used to explain service quality’s formation relies heavily
on disconfirmation theories developed by Churchill and Surprenant (1982). These
theories were originally used to explain the formation of user satisfaction. The process
begins with the expectations. As in user satisfaction research, service quality expectations
refer to predictions. In effect, the expectations describe what the consumer predicts the service provider will offer rather than what it should offer. According to Kettinger and Lee (1994), consumers
form expectations about a service prior to its delivery. These expectations are based
on such factors as personal needs, word-of-mouth and past experiences. Consumption of
the service reveals a perceived level of quality. The consumer then either confirms or
disconfirms the original expectation based on this perceived quality. As evidenced in the
definitions of service quality provided earlier, the disconfirmation paradigm has become
widely accepted.
Incorporating the previous points, the definition of service quality used in this paper
is taken from Mangold and Babakus (1991, p.60). Specifically, service quality is “the
outcome of a process in which consumers’ expectations for the service are compared with
their perceptions of the service actually delivered”. Although the definition does not
specifically require human interaction in the service encounter, this is assumed given the
heterogeneous nature of services and the factors that make them so (i.e., human beings).

3 Service quality assessment

Given the above definition, how then should service quality be assessed? The answer to
this question can be found by examining the service encounter. According to Shostack
(1987), a service encounter is the period of time during which the service provider and
the consumer interact either in person, over the phone or by other media. Various
researchers (e.g., Bitner, 1992; Gronroos, 1990; Gummesson, 1992; Parasuraman et al.,
1988; Rust and Oliver, 1994) have identified factors relevant in a typical service
encounter. These factors can be broadly grouped into three categories: service delivery,
service product and service environment. Service delivery captures those aspects of
the service encounter that involve the service provider and his or her interaction with the
consumer. In essence, service delivery measures the service provider’s performance
during the service encounter. While service delivery measures the quality of the
performance, service product measures the quality of the service itself. Said another way,
service delivery is the ‘how’ of the service encounter while service product is the ‘what’.
Typical aspects of the service encounter captured by service product include: the features
of the service, the usefulness of the service, the effectiveness of the service, etc.
Along with the ‘how’ and the ‘what’, a service encounter also includes a setting. Aspects
of this setting are captured by the service environment. Specific factors captured
by service environment include: lighting, layout, temperature, etc. In addition to these
physical setting factors, it is also possible to capture aspects of a virtual setting (e.g.,
ease of use, convenience). This is particularly relevant in the assessment of IS service
quality considering that many IS services are delivered via electronic media (e.g., phone,
e-mail, internet).
By comparing expectations of the service delivery, service product, and service
environment with actual perceptions, the consumer forms his or her assessment of
service quality. This is illustrated by the proposed model shown in Figure 1.

Figure 1 Research model (service delivery, service product and service environment as antecedents of service quality)

4 Level of analysis

In developing the concept of service quality in the previous sections, there was no
mention of levels of analysis. This is due to the fact that levels issues are effectively
absent from the extant service quality literature (with the notable exception of service
climate research). Unfortunately, this is a phenomenon that is not restricted to service
quality research. Levels of analysis issues are consistently overlooked by researchers,
regardless of discipline or topic. Even when levels issues are addressed, the researchers
often do so without specifically defining what levels of analysis are. As a result, they
often confuse levels of analysis with units of analysis. Confusion of this type is not
uncommon. Glick (1980) argues that the two terms are frequently used synonymously. In
order to avoid such confusion, a definition of levels of analysis is clearly needed.
According to Miller (1978), levels imply a hierarchical ordering of systems such that
higher level, more advanced systems are composed of lower level, less complex systems.
Levels are thus differentiable by their complexity, the size of their constituent units, the
proximity of the units and the distinctive processes characterising the units (Rousseau,
1985). In terms of organisational research, there are two basic levels: the individual and
the unit. The individual is the most atomic element in organisational research. When
individuals are grouped, these groupings constitute units. Conceptually, a unit can be as
simple as a dyad and as complex as a team, department, organisation, etc. In effect, a unit
is any higher level system composed of lower level systems. A given unit can also
contain multiple levels of subsystems. As an example, organisations are composed of
departments which are, in turn, composed of teams, etc.
Rousseau (1985, p.4) thus defines level of analysis as “the unit to which the data
are assigned for hypothesis testing and statistical analysis”. In this case, the word ‘unit’
refers to all systems including the individual. Rousseau (1985) is careful to note that the
level of analysis is not necessarily the level to which generalisations are made. This is,
instead, called the focal unit, which is often different from the level of analysis. The focal
unit is equivalent to the unit of analysis which is defined by Glick (1980) as the research
focus. According to Glick (1980), the unit of analysis is represented by the dependent
variable used in a study.
Given this definition of level of analysis, a number of questions become salient:
• What is the appropriate level of analysis for service quality assessment?
• Is only one level of analysis appropriate?
• Can service quality be assessed at multiple levels?

5 Levels issues

When determining the appropriate level(s) of analysis, there are a number of issues to
consider. These issues can be broadly classified into three groups:
1 issues dealing with the level of theory
2 issues dealing with data collection
3 issues dealing with the statistical analysis of collected data (Klein et al., 1994).

The following sections describe each of these issues in general, and in terms of service
quality specifically.

5.1 Level of theory


The level of theory refers to the target that the researcher is trying to explain. This target
can be an individual or a higher level unit (dyad, group, organisation, etc.). In order to
determine the level of theory, it is necessary to assume that the individuals within a
higher level unit are either homogeneous or independent relative to the construct of
interest. If the researcher assumes that the individuals within a unit are homogeneous,
then the individuals within that unit are all expected to have identical responses relative
to the construct of interest. Any variance in responses is expected to occur at the unit
level and not at the individual level. In this case, the level of theory is the unit. If the
researcher assumes that the individuals within a higher level unit are independent, then
membership in the unit is not expected to exert influence on responses relative to the
construct of interest. Put simply, individuals respond as if they were not members of a
higher level unit. As such, variance in responses is caused by differences in individual
personality traits, attitudes, values, experiences, etc. In this case, the level of theory is
the individual.
In addition to these straightforward cases, Klein et al. (1994) also define a third
case which they describe as unit-specific heterogeneity. According to Klein et al. (1994),
unit-specific heterogeneity implies that the responses of individuals vary for the construct
of interest relative to the unit in which the individuals are members. In essence,
unit-specific heterogeneity implies a ranking of individual responses within the unit.
Even though variance in responses is expected within the unit, individual membership
in the unit is still meaningful. In this case, the level of theory is the individual within
the unit.
Which of these three cases applies to the assessment of service quality? The key to
answering this question lies in the characteristics that differentiate services from
products. As discussed earlier, services are heterogeneous. This means that no two
service encounters can ever be exactly alike. In addition, service quality is defined as
the gap between the consumer’s expectations and perceptions. Individual perceptions
vary even when different individuals experience the same phenomenon. Since service
encounters are heterogeneous and individual perceptions of these encounters differ,
individuals within a higher level unit cannot be assumed to be homogeneous in their assessment of service quality. In short, individuals within a unit cannot all be expected to respond with identical assessments of service quality. At the same time, the
assumption of unit-specific heterogeneity is also not applicable. Although individual
responses are expected to vary within a unit, it cannot be inferred that these responses
vary relative to unit membership. Said another way, there is no indication that
ranking responses within a unit would be meaningful, or even appropriate. Only the
assumption of independence satisfies the conditions of service encounter heterogeneity
and individual perceptual differences. Therefore, the level of theory for service quality
assessment is the individual.

5.2 Data collection


Of all issues related to specifying the level of analysis, data collection is the most
straightforward. In essence, the issue of data collection involves determining the level of
measurement, or the source of the data. The source of the data should be the level at
which variance in the data is expected. As an example, if the data vary by individual
(e.g., beliefs, attitude), then the level of measurement is the individual and the data
should be collected from individuals. Likewise, if the data vary by unit (e.g., group size),
then the level of measurement is the unit and the data should be collected from a
surrogate representing the unit (e.g., manager).
Based on the definition of service quality as the gap between an individual’s
expectations and perceptions, the level of measurement is obviously the individual. The
fact that only the individual experiencing the service can form a perception (again, per
the definition) precludes the use of surrogates in the assessment of service quality. No
other individual, at the same level (peer) or at a higher level (manager, director), can
assess a service encounter in which they were not directly involved. As such, no other
level of measurement beyond the individual is permissible.
Given that a service encounter must necessarily involve two parties (service provider
and consumer), it could be argued that service quality could be assessed at the level of the
dyad. While a dyadic interaction is required for the service exchange, the definition of
service quality leaves no direct role for the service provider in the assessment of quality.
That is to say, the individual consumer decides the quality based on his or her own
expectations and perceptions. Service providers have no say in the assessment beyond
their part in the service delivery.

5.3 Statistical analysis


Statistical analysis issues deal with the way the collected data are used in statistical
procedures. The level of statistical analysis depends on the phenomenon being studied
and the data that are being collected. When service quality is being assessed purely as a
measure of performance for a single organisation, the individual assessments are simply
averaged to create an aggregate score. While this is a common use of service quality
assessment, other uses are possible. Service quality can also be assessed in order to
determine its effect on other phenomena. As an example, the model in Figure 2 posits
an indirect relationship between service quality and performance which is mediated by
user satisfaction.

Figure 2 Service quality’s effect on performance (service quality → user satisfaction → performance)

According to the model, an individual who perceives that he or she has received a higher level of service quality will, as a result, be more satisfied. This higher level of user satisfaction will, in turn, result in higher levels of individual performance. To test such a model, service quality, user satisfaction and performance data would have to be collected
and analysed at the individual level. This would then constitute an individual level of
statistical analysis.
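As a minimal illustration (not the authors’ analysis; the variable names, the simulated data and the use of ordinary least squares are assumptions made for this sketch), an individual-level test of the Figure 2 chain might look as follows, with one row per respondent:

```python
# Sketch only: individual-level analysis of service quality -> user satisfaction -> performance.
# Simulated data; in practice each row would hold one respondent's survey scores.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
n = 200
sq = rng.normal(0, 1, n)                      # individual gap-based service quality scores
sat = 0.6 * sq + rng.normal(0, 1, n)          # user satisfaction
perf = 0.5 * sat + rng.normal(0, 1, n)        # individual performance
df = pd.DataFrame({"service_quality": sq, "satisfaction": sat, "performance": perf})

# Service quality -> satisfaction, then satisfaction (and service quality) -> performance
m1 = smf.ols("satisfaction ~ service_quality", data=df).fit()
m2 = smf.ols("performance ~ satisfaction + service_quality", data=df).fit()
print(m1.params)
print(m2.params)
```

Because every observation is a single respondent, the level of statistical analysis in this sketch is the individual.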
While the model presented in Figure 2 attempts to explain individual level
phenomena, the reasoning behind the model is not limited to the individual level. In fact,
it seems reasonable to argue that the same process may also work at the unit level.
Specifically, it could be argued that an organisation with high levels of service quality
also has more satisfied users and, consequently, higher levels of performance than
an organisation with low levels of service quality. Following this line of reasoning
requires a change in the level of analysis from the individual to the unit. Testing such a
model would require the aggregation of individual service quality, user satisfaction and
performance data in order to create unit-level constructs. This aggregation is shown in
Figure 3 by the bold vertical arrows connecting the individual and unit levels.

Figure 3 Aggregating from the individual to the unit level (individual-level service quality, user satisfaction and performance are aggregated, shown by bold vertical arrows, to form the corresponding unit-level constructs)

Although this type of data aggregation is common in research (Rousseau, 1985), its
use does not come without risks. If the aggregation is not theoretically supported and
empirically valid, the aggregated value may be extremely misleading (Klein and
Kozlowski, 2000). This is especially true for a construct like service quality, which cannot be directly measured at the unit level. As such, the aggregation of individual service
quality assessments to create unit-level proxies raises a number of questions. Two of the
most salient are:
1 Does service quality mean the same thing when viewed at the individual and unit
levels of analysis?
2 How is the change in level of analysis justified?

6 Aggregation

When data are aggregated from a lower level to a higher level, the process raises
an important question: Does the aggregated construct have the same meaning as the
lower level construct from which it was derived? In order to answer this question, it
is necessary to specify how the constructs relate to each other at different levels (Roberts
et al., 1978). This is accomplished through the use of composition models. According to
Chan (1998), composition models specify how constructs which reference the same
content but are qualitatively different relate to each other across different levels of
analysis. Using service quality as an example, a composition model would explain how
the unit level aggregate is functionally related to the individual level assessments.
Chan (1998) provides a typology of composition models to guide researchers
when specifying relationships between multi-level constructs. The models in the
typology include:
• additive model
• direct consensus model
• referent-shift model
• dispersion model
• process composition model.
Of the five models in Chan’s (1998) typology, the additive model is, by far, the most
straightforward. The additive model specifies that the higher level construct is simply the
summation of its lower level units, regardless of the amount of variance among those units.
With additive models, the lower level scores are either summed or averaged to produce
the higher level score. The validity of the sum, or mean, is then used to empirically
support the aggregation.
According to Chan (1998), the direct consensus model is the most popular
composition model used in multi-level research. The direct consensus model specifies
that the higher level construct represents the amount of agreement in the lower level
units. Using this composition model, if individual perceptions of a phenomenon are
aggregated to a higher level, the higher level construct represents the shared perceptions
of the individuals. In order to justify this type of model, the amount of agreement within
the unit must be sufficiently high. To measure such within-unit agreement, measures such
as the rwg index have been proposed by James (1982) and James et al. (1984). These
researchers argue that the rwg should exceed 0.60 before aggregation based on within-unit
agreement is justified.
The referent-shift model also relies on consensus among lower level units to compose
higher level units. In these models, the consensus is based on within-unit agreement,
making the referent-shift and direct consensus model very similar. The major distinction
between the two models is the change in focal unit that takes place during the aggregation
process. In a referent shift model, the researcher begins by conceptually defining
the construct of interest at the lower level. Working from this conceptualisation, the
researcher then derives a new construct at the same level, but with a new focal unit. The
data drawn for this new focal unit, or referent, are then aggregated. This aggregation
composes the higher level construct which is related to the original lower level units
through the referent shift. As an example, a researcher could start with an individual’s
perception of a phenomenon and then shift the referent to the individual’s beliefs about
the perceptions of others in the unit. This kind of referent shift is important because it
creates a construct which is distinct from the original focal unit. Chan (1998) notes that
referent-shift models have been used by multi-level researchers for such constructs as
team level self-efficacy (Guzzo et al., 1993). Unfortunately, the composition of these
models is seldom made explicit.

The dispersion model and process composition model are designed to handle special
cases making them less applicable in explaining aggregation in service quality research.
Dispersion models view within-unit agreement not as a statistical prerequisite for
aggregation but, instead, as a significant focal unit in its own right. These models actually
use the variance within the unit to operationalise the unit level construct. Process
composition models are more concerned with changes in behaviour than with stable
attributes. In essence, these models are used to explain how a process, working at a lower
level, can be conceptualised at a higher level.
Based on these composition models, how then should service quality aggregation
be justified? Given the widespread use of the direct consensus model, it appears to be
the natural choice. Unfortunately, the level of agreement required by the direct consensus
model runs counter to service quality’s assumption of independence. Using the direct
consensus model to support aggregation assumes that individuals within a unit agree
in their assessment of service quality. Given the heterogeneity of service encounters
and the differences in individual perceptions, this assumption of agreement cannot be
justified. This does not mean, however, that agreement should not be measured during
service quality assessments. In fact, reporting the amount of agreement in a service
quality assessment could be extremely beneficial (as is addressed later in this paper).
What it does mean is that within-unit agreement should not be used to support
aggregation of individual assessments when those assessments are considered
independent of unit membership.
Along with the direct consensus model, the referent-shift, dispersion and process
composition models also seem ill-suited to support aggregation of service quality
assessments. The referent-shift model calls for aggregating the responses based on a
shifted focal unit. This type of shift runs counter to the spirit, and conceptualisation, of
service quality. Following this model draws the emphasis away from the individual’s
perception of the encounter. The dispersion model calls for treating the within-unit
variance as the focal unit. Much like the referent-shift, following such a model draws the
emphasis away from the individual. Finally, process composition models emphasise how
things happen instead of the constructs themselves. As such, they cannot be used to
justify aggregation of individual level assessments to the unit level.
Of Chan’s (1998) five composition models, only the additive model meets service
quality’s requirement of independence. Since the model places no restriction on the
amount of within-unit variance, service quality assessments can be aggregated without
the need for unit-wide consensus. The unit-level construct simply represents the sum or
average of the individual-level assessments regardless of how much they may vary.
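A minimal sketch of additive-model aggregation is shown below (the column names and values are illustrative assumptions): the unit-level proxy is simply the mean of the individual assessments, with no within-unit agreement requirement.

```python
# Sketch only: additive-model aggregation of individual service quality assessments.
import pandas as pd

ratings = pd.DataFrame({
    "unit":     ["help_desk", "help_desk", "help_desk", "app_support", "app_support"],
    "sq_score": [5.2, 6.1, 4.8, 3.5, 4.0],   # individual gap-based assessments
})

# Unit-level service quality under the additive model: the per-unit mean,
# regardless of how much the individual scores vary within the unit.
unit_sq = ratings.groupby("unit")["sq_score"].mean()
print(unit_sq)
```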
While the additive model provides a straightforward solution to the problem of
relating individual and unit-level constructs, a problem still exists. In order to use the
unit-level construct in a model like Figure 3, it is necessary to show between-unit
variance. Without sufficient variance between the units in the study, the model would not
be significant. To state the problem simply, while the amount of variance within the units
is irrelevant, the amount of variance between the units is essential. If the aggregated units
do not vary sufficiently, then it could be argued that the unit is not the appropriate level
of analysis. This calls into question the empirical validity of the aggregation. How then is
this unit-level aggregation validated?

Researchers such as Ostroff (1992) have used intraclass correlations, ICC(1) and
ICC(2), to address this issue. ICC(1) compares the amount of within-unit variance to the
amount of variance between units. Conceptually, ICC(1) represents the amount of
individual variance accounted for by unit membership. The higher the ICC(1) value, the
more variance that is attributable to the unit. While there is no set cutoff, James (1982)
reported the median ICC(1) value as 0.12 in organisational literature. Like ICC(1),
ICC(2) also examines within-unit variance relative to between-unit variance. ICC(2)
differs in the fact that it indicates whether units can be reliably differentiated based on
their means. According to Glick (1985), the ICC(2) value should be at least 0.60 to justify
unit-level analysis.
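A minimal sketch of how ICC(1) and ICC(2) might be computed from a one-way ANOVA is given below; the data layout, column names and the use of the average unit size for k are assumptions of the sketch rather than a prescription from the sources cited above.

```python
# Sketch only: ICC(1) and ICC(2) from a one-way ANOVA with units as the grouping factor.
import pandas as pd

def icc1_icc2(df, unit_col="unit", score_col="sq_score"):
    groups = df.groupby(unit_col)[score_col]
    n_units = groups.ngroups
    k = groups.size().mean()                      # (average) number of individuals per unit
    grand_mean = df[score_col].mean()

    # Between-unit and within-unit mean squares
    ss_between = (groups.size() * (groups.mean() - grand_mean) ** 2).sum()
    ss_within = ((df[score_col] - groups.transform("mean")) ** 2).sum()
    ms_between = ss_between / (n_units - 1)
    ms_within = ss_within / (len(df) - n_units)

    icc1 = (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)
    icc2 = (ms_between - ms_within) / ms_between  # reliability of the unit means
    return icc1, icc2
```

The resulting values could then be compared against the benchmarks noted above (a median ICC(1) of 0.12 reported by James (1982) and an ICC(2) of at least 0.60 per Glick (1985)).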

7 Agreement and assessment consistency

By using the additive model and intraclass correlations, it is clear that service quality
assessments can be justifiably aggregated for unit-level analysis. Given that the additive
model specifically ignores within-unit consensus, is there any place for agreement in
service quality assessment? As hinted at earlier herein, the answer is ‘Yes’. The addition
of agreement statistics to a service quality assessment could help address one of the
nagging issues in managing quality of service – consistency.
By definition, services are heterogeneous. This means that no two service encounters
will ever be exactly the same: service providers do not always perform to the same level;
consumers do not always perceive the performance the same way; and expectations vary
widely by individual. While the additive model allows for such variance, its use of a
simple average also camouflages potential inconsistency in assessments. As an example,
if a user who assesses service quality as poor (score of 1) is averaged with a user who
assesses the quality as high (score of 7), the resulting score (4) represents neither user’s
assessment. The organisation would be left believing that its level of service was
adequate when, in fact, there was no real consensus among the unit’s members about the
level of service. This situation argues for reporting more than just a simple average. By
adding a measure of agreement, the organisation in this case not only has an assessment
of its quality but also an implied indication of the consistency of service. This
information could be particularly beneficial to managers. In a situation where within-unit
agreement was high, the manager could infer that services were being delivered at a
consistent level. In a situation where agreement was low, the manager could infer that
consistency was a problem and that the aggregation of responses has little value.
While ICC(1) and ICC(2) can be used to justify aggregation of service quality
assessments for unit-level analysis, another measure is needed to report the level of
within-unit agreement. The most popular agreement measure is the rwg index developed
by James in the 1980s (Schneider and White, 2004). The rwg compares the variance
within a unit to an expected distribution of responses. The higher the rwg, the closer the
match between the distributions. An rwg of 1.0 indicates perfect agreement. Given the
heterogeneity of services and the independence of service quality assessment, perfect
agreement is unlikely. However, as an indicator of service quality consistency an
agreement measure could still be extremely valuable.
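As a minimal sketch (assuming a single-item measure, a 7-point response scale and a uniform null distribution, all of which are choices made for illustration), the rwg index for one unit could be computed as follows:

```python
# Sketch only: rwg within-unit agreement index (James et al., 1984) for a single item.
import numpy as np

def rwg(scores, n_options=7):
    """Agreement among one unit's raters; 1.0 indicates perfect agreement."""
    expected_var = (n_options ** 2 - 1) / 12.0           # variance of the uniform null distribution
    observed_var = np.var(scores, ddof=1)                # within-unit variance of the ratings
    return max(0.0, 1.0 - observed_var / expected_var)   # negative values are usually truncated to 0

# Example: seven users rate help-desk service quality on a 1-7 scale
print(rwg([6, 6, 7, 5, 6, 6, 7]))    # high agreement, rwg close to 1
```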

8 Future research and conclusion

Based on the discussion presented herein, it is clearly important to understand how levels
of analysis impact the assessment of service quality. This paper is the first to address
issues such as level of theory, data collection and statistical analysis for service quality.
Moreover, the paper has examined the aggregation issues related to the use of service
quality in unit-level models. That said, much work is left to be done. Additional research
is needed to investigate the use of intraclass correlations in justifying unit-level
aggregation. Research is also needed in the use and reporting of agreement statistics such
as rwg. Since perfect agreement levels are theoretically and practically unachievable, what
level of agreement should managers work towards? This is one area where researchers
have the opportunity to truly inform management practice.
Future studies should also expand the proposed service quality assessment model to
incorporate constructs beyond the individual level. As an example, it would be interesting
to include an organisational culture construct to ascertain if different cultures affect the
expectations of consumers. If the culture of the organisation emphasises timeliness and
polite behaviour, how does that affect what the consumer expects during the service
encounter? Other constructs for consideration include service climate, organisation size
and type of industry.
Given the significance of the service component in the IS function, the importance of
service quality to IS managers can only be expected to increase. Our knowledge of
service quality is currently limited by our reliance on the research conducted in other
disciplines. IS researchers need to pick up the mantle and begin to explore this important
construct for the benefit of IS practitioners. This paper’s investigation of service quality
issues moves the discipline in that direction.

References
Bitner, M.J. (1992) ‘Servicescapes: the impact of physical surroundings on customers and
employees’, Journal of Marketing, Vol. 56, pp.57–71.
Bitner, M.J. and Hubbert, A.R. (1994) ‘Encounter satisfaction versus overall satisfaction versus
quality’, in R.T. Rust and R.L. Oliver (Eds.) Service Quality: New Dimensions in Theory and
Practice, Thousand Oaks, CA: Sage, pp.72–94.
Bojanic, D.C. (1991) ‘Quality measurement in professional services firms’, Journal of Professional
Services Marketing, Vol. 7, No. 2, pp.27–36.
Cao, M., Zhang, Q. and Seydel, J. (2005) ‘B2C e-commerce web site quality: an empirical
examination’, Industrial Management & Data Systems, Vol. 105, No. 5, pp.645–661.
Chan, D. (1998) ‘Functional relations among constructs in the same content domain at different
levels of analysis: a typology of composition models’, Journal of Applied Psychology,
Vol. 83, No. 2, pp.234–246.
Churchill, G.A., Jr. and Surprenant, C. (1982) ‘An investigation into the determinants of customer
satisfaction’, Journal of Marketing Research, Vol. 19, pp.491–504.
DeLone, W.H. and McLean, E.R. (2003) ‘The DeLone and McLean model of information systems
success: a ten-year update’, Journal of Management Information Systems, Vol. 19, No. 4,
pp.9–30.
Glick, W. (1980) ‘Problems in cross-level inferences’, New Directions for Methodology of Social
and Behavioral Science, Vol. 6, pp.17–30.
Glick, W.H. (1985) ‘Conceptualizing and measuring organizational and psychological climate:
pitfalls in multilevel research’, Academy of Management Review, Vol. 10, pp.601–616.
Gronroos, C. (1990) Services Management and Marketing: Managing the Moments of Truth in
Service Competition, Lexington, MA: Lexington Books.
Gummesson, E. (1992) ‘Quality dimensions: what to measure in service organizations’, in
T.A. Swartz, D.E. Bowen and S.W. Brown (Eds.) Advances in Services Marketing and
Management, Greenwich, CT: JAI, pp.177–205.
Gupta, A. and Herath, S.K. (2005) ‘Latest trends and issues in the ASP service market’, Industrial
Management & Data Systems, Vol. 105, No. 1, pp.19–25.
Guzzo, R.A., Yost, P.R., Campbell, R.J. and Shea, G.P. (1993) ‘Potency in groups: articulating a
construct’, British Journal of Social Psychology, Vol. 32, pp.87–106.
Hernon, P. and Altman, E. (1996) Service Quality in Academic Libraries, Norwood, NJ: Ablex.
James, L.R. (1982) ‘Aggregation bias in estimates of perceptual agreement’, Journal of Applied
Psychology, Vol. 67, pp.219–229.
James, L.R., Demaree, R.G. and Wolf, G. (1984) ‘Estimating within-group interrater reliability
with and without response bias’, Journal of Applied Psychology, Vol. 69, pp.85–98.
Kasper, H., van Helsdingen, P. and de Vries, W., Jr. (1999) Services Marketing and Management:
An International Perspective, Chichester, England: Wiley.
Kettinger, W.J. and Lee, C.C. (1994) ‘Perceived service quality and user satisfaction with the
information services function’, Decision Sciences, Vol. 25, No. 5, pp.737–766.
Klein, K.J., Dansereau, F. and Hall, R.J. (1994) ‘Levels issues in theory development, data
collection, and analysis’, Academy of Management Review, Vol. 19, No. 2, pp.195–229.
Klein, K.J. and Kozlowski, S.W.J. (2000) ‘From micro to meso: critical steps in conceptualizing
and conducting multilevel research’, Organizational Research Methods, Vol. 3, No. 3,
pp.211–236.
Lee, S.M., Lee, H., Kim, J. and Lee, S. (2007) ‘ASP system utilization: customer satisfaction and
user performance’, Industrial Management & Data Systems, Vol. 107, No. 2, pp.145–165.
Mangold, W.E. and Babakus, E. (1991) ‘Service quality: the front-stage vs. the back-stage perspective’, Journal of Services Marketing, Vol. 5, No. 4, pp.59–70.
Miller, J.G. (1978) Living Systems, New York, NY: McGraw-Hill.
Ostroff, C. (1992) ‘The relationship between satisfaction, attitudes, and performance: an
organizational level analysis’, Journal of Applied Psychology, Vol. 77, No. 6, pp.963–974.
Parasuraman, A., Zeithaml, V.A. and Berry, L.L. (1985) ‘A conceptual model of service quality
and its implications for future research’, Journal of Marketing, Vol. 49, No. 4, pp.41–50.
Parasuraman, A., Zeithaml, V.A. and Berry, L.L. (1988) ‘SERVQUAL: a multiple-item scale for
measuring consumer perceptions of service quality’, Journal of Retailing, Vol. 64, No. 1,
pp.12–40.
Roberts, K.H., Hulin, C.L. and Rousseau, D.M. (1978) Developing an Interdisciplinary Science of
Organizations, San Francisco, CA: Jossey-Bass.
Rousseau, D.M. (1985) ‘Issues of level in organizational research: multi-level and cross-level
perspectives’, in L.L. Cummings and B.M. Staw (Eds.) Research in Organizational Behavior,
Greenwich, CT: JAI Press, Vol. 7, pp.1–37.
Rust, R.T. and Oliver, R.L. (1994) ‘Service quality: insights and managerial implications from the
frontier’, in R.T. Rust and R.L. Oliver (Eds.) Service Quality: New Dimensions in Theory and
Practice, Thousand Oaks, CA: Sage, pp.1–19.
Schneider, B. and White, S.S. (2004) Service Quality: Research Perspectives, Thousand Oaks, CA:
Sage Publications.
Shaw, N.C., DeLone, W.H. and Niederman, F. (2002) ‘Sources of dissatisfaction in end-user
support: an empirical study’, Database for Advances in Information Systems, Vol. 33, No. 2,
pp.41–56.
Shostack, G.L. (1987) ‘Service positioning through structural change’, Journal of Marketing,
Vol. 51, pp.34–43.
Zeithaml, V. and Bitner, M.J. (2000) Services Marketing: Integrating Customer Focus Across the
Firm, 2nd ed., Boston, MA: McGraw-Hill.
