
Bibliography for the Higher Education TechQual+ Project

Babakus, Emin and Gregory W. Boller. 1992. "An empirical assessment of the SERVQUAL scale." Journal of Business Research 24:253-268. The definition and measurement of service quality as a five-dimensional construct, as in SERVQUAL, appears to suffer from a number of methodological shortcomings. This article reviews the potential problems and presents findings from an empirical study. The findings suggest that the dimensionality of service quality may depend on the type of services under study, and that the use of mixed-item wording and the current operationalization of service quality on the basis of gap scores call for caution when using SERVQUAL. Suggestions are provided with implications for theory development and measurement in the services marketing area.

Bergkvist, Lars and John R. Rossiter. 2007. "The Predictive Validity of Multiple-Item Versus Single-Item Measures of the Same Constructs." Journal of Marketing Research 44:175-184. This study compares the predictive validity of single-item and multiple-item measures of attitude toward the ad (AAd) and attitude toward the brand (ABrand), two of the most widely measured constructs in marketing. The authors assess the ability of AAd to predict ABrand in copy tests of four print advertisements for diverse new products. They find no difference in the predictive validity of the multiple-item and single-item measures and conclude that for the many constructs in marketing that consist of a concrete singular object and a concrete attribute, such as AAd or ABrand, single-item measures should be used.

Cronin, J. Joseph, Jr. and Steven A. Taylor. 1994. "SERVPERF versus SERVQUAL: Reconciling Performance-Based and Perceptions-Minus-Expectations Measurement of Service Quality." Journal of Marketing 58:125-131. The authors respond to concerns raised by Parasuraman, Zeithaml, and Berry (1994) about the relative efficacy of performance-based and perceptions-minus-expectations measures of service quality. They demonstrate that the major concerns voiced by these authors are supported neither by a critical review of their discussion nor by the emerging literature. Several research issues relevant to service quality measurement and strategic decision making are also identified.

Cronin, J. Joseph, Jr. and Steven A. Taylor. 1992. "Measuring Service Quality: A Reexamination and Extension." Journal of Marketing 56:55-68. The authors investigate the conceptualization and measurement of service quality and the relationships between service quality, consumer satisfaction, and purchase intentions. A literature review suggests that the current operationalization of service quality confounds satisfaction and attitude. Hence, the authors test (1) an alternative method of operationalizing perceived service quality and (2) the significance of the relationships between service quality, consumer satisfaction, and purchase intentions. The results suggest that (1) a performance-based measure of service quality may be an improved means of measuring the service quality construct, (2) service quality is an antecedent of consumer satisfaction, (3) consumer satisfaction has a significant effect on purchase intentions, and (4) service quality has less effect on purchase intentions than does consumer satisfaction. Implications for managers and future research are discussed.
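The performance-based versus gap-score debate in the Cronin and Taylor entries above comes down to a simple arithmetic difference. As a minimal sketch of the two scoring approaches (the item ratings below are hypothetical illustrations, not items from the published instruments):

```python
# SERVQUAL: quality = perception minus expectation (a "gap score") per item.
# SERVPERF: quality = perception (performance) alone.
# Ratings are illustrative 7-point responses, not from the actual instruments.

def servqual_score(perceptions, expectations):
    """Mean perceptions-minus-expectations gap across items."""
    gaps = [p - e for p, e in zip(perceptions, expectations)]
    return sum(gaps) / len(gaps)

def servperf_score(perceptions):
    """Mean performance-only score across items."""
    return sum(perceptions) / len(perceptions)

expectations = [6, 7, 6, 5]  # how good the respondent expects the service to be
perceptions  = [5, 6, 6, 4]  # how good the respondent perceives it actually is

print(servqual_score(perceptions, expectations))  # -0.75: service falls short
print(servperf_score(perceptions))                # 5.25
```

The diagnostic appeal of the gap score, as argued in the entries that follow, is that a negative mean flags where service falls short of expectations, while the performance-only score discards that reference point.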


Jia, Ronnie and Blaize Horner Reich. 2013. "IT service climate, antecedents and IT service quality outcomes: Some initial evidence." The Journal of Strategic Information Systems 22:51-69. Although many IT service management frameworks exist, we still have limited theoretical understanding of IT service quality within a broader nomological network. Building on recent conceptual work on the IT service climate construct, this study empirically establishes it as a predictor of IT service quality using survey data from both IT units and their clients. Also examined is a set of antecedents that provide a foundation upon which a favorable service climate can be built. The IT service climate instrument, when incorporated into employee feedback initiatives, can provide guidance to IT executives about practices that improve service quality.

Kettinger, William J. and Choong C. Lee. 2005. "Zones of Tolerance: Alternative Scales for Measuring Information Systems Service Quality." MIS Quarterly 29:607-623. The expectation norm of Information Systems SERVQUAL has been challenged on both conceptual and empirical grounds, drawing into question the instrument's practical value. To address the criticism that the original IS SERVQUAL's expectation measure is ambiguous, we test a new set of scales positing that service expectations exist at two levels that IS customers use as a basis to assess IS service quality: (1) desired service, the level of IS service desired, and (2) adequate service, the minimum level of IS service customers are willing to accept. Separating these two levels is a "zone of tolerance" (ZOT) that represents the range of IS service performance a customer would consider satisfactory. In other words, IS customer service expectations are characterized by a range of levels rather than a single expectation point. This research note adapts the ZOT and its generic operational definition from marketing to the IS field and assesses its psychometric properties. Our findings support the validity of a four-dimensional IS ZOT SERVQUAL measure for desired, adequate, and perceived service quality levels and identify 18 commonly applicable question items. This measure addresses past criticism while offering a practical diagnostic tool.

Kettinger, William J. and Choong C. Lee. 1997. "Pragmatic Perspectives on the Measurement of Information Systems Service Quality." MIS Quarterly 21:223-240. In this research note, we join the debate between Van Dyke, Kappelman, and Prybutok and Pitt, Watson, and Kavan pertaining to the conceptual and empirical relevance of SERVQUAL as a measure of IS service quality. Adopting arguments from marketing, Van Dyke et al. (1997) question the SERVQUAL gap measurement approach, the interpretation and operationalization of the SERVQUAL expectation construct, and the reliability and validity of SERVQUAL dimensionality. In response to those arguments, Pitt et al. (1997) defend their previous work (1995) in a point-by-point counterargument suggesting that the marginal empirical benefit of a performance-based (SERVPERF) service quality measure does not justify the loss of managerial diagnostic capability found in a gap measure. While siding with many of the positions taken by Pitt et al. (1997), we attempt to add value to the debate by presenting discrepancies we have with the two other research teams and by suggesting alternative approaches to resolve, or at least alleviate, problems associated with SERVQUAL. We believe that the theoretical superiority of an alternative IS service quality measure should be backed by empirical evidence in the IS context, hence answering some of the criticism by Van Dyke et al. and offering a construct-valid version of the IS-adapted SERVQUAL. From a pragmatic viewpoint, we believe that the justification for using SERVQUAL's gap measure should be driven by more effective ways to utilize expectations in IS service management. To this end, we introduce the newer Parasuraman et al. (1994b) measures, the concept of a "zone of tolerance" for expectation management, and an illustration of its practical use in an IS setting. Overall, we attempt to set the direction of where we think this debate should lead the IS field, namely, toward practical and timely IS service quality measures.

Pitt, Leyland F., Richard T. Watson, and C. Bruce Kavan. 1997. "Measuring Information Systems Service Quality: Concerns for a Complete Canvas." MIS Quarterly 21:209-221. This paper responds to the research note in this issue by Van Dyke et al. concerning the use of SERVQUAL, an instrument to measure service quality, in the IS domain. It attempts to balance some of the arguments they raise from the marketing literature with the well-documented counterarguments of SERVQUAL's developers, as well as the authors' own research evidence and observations in an IS-specific environment. Specifically, evidence is provided to show that the service quality perceptions-expectations subtraction in SERVQUAL is far more rigorously grounded than Van Dyke et al. suggest; that the expectations construct, while potentially ambiguous, is generally a vector in the case of an IS department; and that the dimensions of service quality seem to be as applicable to the IS department as to any other organizational setting. The paper then demonstrates that the problems of reliability in difference score calculations in SERVQUAL are not nearly as serious as Van Dyke et al. suggest, and that while perceptions-only measurement of service quality might have marginally better predictive and convergent validity, this comes at considerable expense to managerial diagnostics; it also reiterates some of the problems of dimensional instability found in the authors' previous research, highlighted by Van Dyke et al. and discussed in many other studies of SERVQUAL across a range of settings. Finally, four areas for further research are identified.

Pitt, Leyland F., Richard T. Watson, and C. Bruce Kavan. 1995. "Service Quality: A Measure of Information Systems Effectiveness." MIS Quarterly 19:173-187. The IS function now includes a significant service component. However, commonly used measures of IS effectiveness focus on the products, rather than the services, of the IS function. Thus, there is a danger that IS researchers will mismeasure IS effectiveness if they do not include a measure of IS service quality in their assessment package. SERVQUAL, an instrument developed in the marketing area, is offered as a possible measure of IS service quality. SERVQUAL measures the service dimensions of tangibles, reliability, responsiveness, assurance, and empathy. The suitability of SERVQUAL was assessed in three different types of organizations in three countries. After examination of content validity, reliability, convergent validity, nomological validity, and discriminant validity, the study concludes that SERVQUAL is an appropriate instrument for researchers seeking a measure of IS service quality.

Parasuraman, A., Valarie A. Zeithaml, and Leonard L. Berry. 1994. "Alternative Scales for Measuring Service Quality: A Comparative Assessment Based on Psychometric and Diagnostic Criteria." Journal of Retailing 70:201-230. Compares alternative measurement scales for customer service quality based on psychometric and diagnostic criteria; describes the development of alternative questionnaire formats to address unresolved issues; reports the results of an empirical study that evaluated the alternative formats in different sectors; and discusses practical implications for management and directions for future research.

Tate, Mary and Joerg Evermann. 2010. "The End of ServQual in Online Services Research: Where to from here?" e-Service Journal 7:60-85.

Page 3 of 4

Service quality, and the ServQual model, with origins in face-to-face marketing before the age of the internet, have been drafted into the role of explaining the perceived outcomes of computer-mediated self-service encounters. These, however, differ in important ways from face-to-face service encounters. In this conceptual paper, we offer a number of arguments as to why researchers of computer-mediated services should not look back to ServQual for the basis of their theoretical constructs, models, and survey items. We suggest instead that established information systems theory has greater salience and explanatory power for this phenomenon. We also offer some areas of theory that we believe have potential for the study of online service quality but that have so far received little attention.

Van Dyke, Thomas P., Leon A. Kappelman, and Victor R. Prybutok. 1997. "Measuring Information Systems Service Quality: Concerns on the Use of the SERVQUAL Questionnaire." MIS Quarterly 21:195-208. A recent MIS Quarterly article rightfully points out that service is an important part of the role of the information systems (IS) department and that most IS assessment measures have a product orientation (Pitt et al. 1995). The article went on to suggest the use of an IS-context-modified version of the SERVQUAL instrument to assess the quality of the services supplied by an information services provider (Parasuraman et al. 1985, 1988, 1991). However, a number of problems with the SERVQUAL instrument have been discussed in the literature (e.g., Babakus and Boller 1992; Carman 1990; Cronin and Taylor 1992, 1994; Teas 1993). This article reviews that literature and discusses some of the implications for measuring service quality in the information systems context. Findings indicate that SERVQUAL suffers from a number of conceptual and empirical difficulties. Conceptual difficulties include the operationalization of perceived service quality as a difference or gap score, the ambiguity of the expectations construct, and the unsuitability of using a single measure of service quality across different industries. Empirical problems, which may be linked to the use of difference scores, include reduced reliability, poor convergent validity, and poor predictive validity. These findings suggest that (1) some alternative to difference scores is preferable and should be used, (2) caution should be exercised in interpreting IS-SERVQUAL difference scores where they are used, and (3) further work is needed in the development of measures for assessing the quality of IS services.
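The zone-of-tolerance approach described in the Kettinger and Lee entries above can be made concrete: each item carries two expectation levels, adequate (the minimum acceptable) and desired, and perceived performance is judged against that range. The sketch below is an illustration of the concept only, not the published 18-item instrument; the classification labels and rating values are hypothetical:

```python
# Zone of tolerance (ZOT): a respondent holds two expectation levels per item.
# Perceived service falling between "adequate" and "desired" lies within the
# zone of tolerance. Values below are illustrative, not from the instrument.

def zot_assessment(perceived, adequate, desired):
    """Classify a perceived rating against the adequate-to-desired range."""
    if perceived < adequate:
        return "below tolerance"           # service inadequacy
    elif perceived > desired:
        return "exceeds desired"           # service superiority
    return "within zone of tolerance"

def adequacy_gap(perceived, adequate):
    """Perceived minus adequate: positive means minimally acceptable or better."""
    return perceived - adequate

def superiority_gap(perceived, desired):
    """Perceived minus desired: usually negative; zero or above means delight."""
    return perceived - desired

print(zot_assessment(perceived=5, adequate=4, desired=6))  # within zone of tolerance
print(adequacy_gap(5, 4), superiority_gap(5, 6))           # 1 -1
```

Reporting the two gaps separately, rather than a single perceptions-minus-expectations score, is what the Kettinger and Lee entries argue preserves diagnostic value while answering the ambiguity criticism of the single expectation point.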

