
The Measurement of Web-Customer Satisfaction: An Expectation and Disconfirmation Approach

Vicki McKinney Kanghyun Yoon Fatemeh Mariam Zahedi*


Sam M. Walton College of Business, Information Systems Department, University of Arkansas, 204 Business Building, Fayetteville, Arkansas 72701-0201, vmckinney@walton.uark.edu
School of Business Administration, University of Wisconsin at Milwaukee, P.O. Box 742, Milwaukee, Wisconsin 53201, kyoon@uwm.edu
School of Business Administration, University of Wisconsin at Milwaukee, P.O. Box 742, Milwaukee, Wisconsin 53201, zahedi@uwm.edu

Online shopping provides convenience to Web shoppers, yet its electronic format changes the information-gathering methods traditionally used by customers. This change raises questions concerning customer satisfaction with the online purchasing process. Web shopping involves a number of phases, including the information phase, in which customers search for information regarding their intended purchases. The purpose of this paper is to develop theoretically justifiable constructs for measuring Web-customer satisfaction during the information phase. By synthesizing the expectation-disconfirmation paradigm with empirical theories in user satisfaction, we separate Web site quality into information quality (IQ) and system quality (SQ), and propose nine key constructs for Web-customer satisfaction. The measurements for these constructs are developed and tested in a two-phase study. In the first phase, the IQ and SQ dimensions are identified, and instruments for measuring them are developed and tested. In the second phase, using the salient dimensions of Web-IQ and Web-SQ as the basis for formulating first-order factors, we develop and empirically test instruments for measuring IQ- and SQ-satisfaction. Moreover, this phase involves the design and test of second-order factors for measuring Web-customer expectations, disconfirmation, and perceived performance regarding IQ and SQ. The analysis of the measurement model indicates that the proposed metrics have a relatively high degree of validity and reliability. The results of the study provide reliable instruments for operationalizing the key constructs in the analysis of Web-customer satisfaction within the expectation-disconfirmation paradigm.
(Web Customer; Satisfaction; Information Quality; System Quality; Web-Information Satisfaction; Web-System Satisfaction; Construct Validity; MTMM Analysis)

Introduction
In a turbulent e-commerce environment, Internet companies need to understand how to satisfy customers to

*Names listed alphabetically.

sustain their growth and market share. Because customer satisfaction is critical for establishing long-term client relationships (Patterson et al. 1997) and, consequently, is significant in sustaining profitability, a fundamental understanding of factors impacting Web-customer satisfaction is of great importance to
1047-7047/02/1303/0296$05.00 1526-5536 electronic ISSN

Information Systems Research, 2002 INFORMS, Vol. 13, No. 3, September 2002, pp. 296-315

McKINNEY, YOON, AND ZAHEDI The Measurement of Web-Customer Satisfaction

e-commerce. Furthermore, the need for research in Web-customer satisfaction (called e-satisfaction by Szymanski and Hise 2000) has been accentuated by the increasing demand for the long-term profitability of dot-com companies and of traditional companies that are Net-enhanced (Straub et al. 2002a).

Satisfaction is the consequence of the customer's experiences during various purchasing stages: (a) need arousal, (b) information search, (c) alternatives evaluation, (d) purchase decision, and (e) post-purchase behavior (Kotler 1997). During the information-search stage, the Internet offers extensive benefits to Web customers by reducing their search cost and increasing shopping convenience, vendor choices, and product options (Bakos 1998, Alba et al. 1997). However, the online shopping experience depends on Web site information to compensate for the lack of physical contact, and it causes customers to rely heavily on technology and system quality to keep them interested and serviced as they explore e-stores with ease and pleasure. In other words, consumers make inferences about product attractiveness on the basis of: (1) information provided by retailers and (2) design elements of the Web site, such as ease and fun of navigation (Wolfinbarger and Gilly 2001). Palmer and Griffith (1998) observed that Web site design is an interaction between marketing and technological characteristics. Lohse and Spiller (1998) showed that designing online stores with user-friendly interfaces critically influences traffic and sales, and Szymanski and Hise (2000) found product information and site design critical in creating a satisfying customer experience. Given the roles of information content and system design in Web-customer satisfaction, this study focuses on identifying and measuring the constructs for the process by which Web-customer satisfaction is formed at the information-search stage.
In doing so, we synthesize the information systems (IS) research on user satisfaction with the marketing perspectives on customer satisfaction to explore the role of expectation and disconfirmation regarding information quality (IQ) and system quality (SQ), which may shed some light on the process by which Web satisfaction is formed. Insight into this process could help Web-based businesses improve their customers' satisfaction, thus enhancing the effectiveness of e-commerce for both

sellers and buyers. Hence, the purpose of this research is to identify key constructs and corresponding measurement scales for examining the expectation-disconfirmation effects on Web-customer satisfaction. In the identification and development of constructs, a model of the expectation-disconfirmation effects on Web-customer satisfaction (EDEWS) provides the underlying foundation for the measurement model that explains the structure and dimensionality of the proposed constructs.

Theoretical Perspectives
End-user satisfaction is an important area of IS research because it is considered a significant factor in measuring IS success and use (Ives and Olson 1984, Doll and Torkzadeh 1988, DeLone and McLean 1992, Doll et al. 1994, Seddon 1997). Although many studies of end-user satisfaction do not explicitly separate information and system features when identifying the structure and dimensionality of the user-satisfaction construct, DeLone and McLean (1992) made an explicit distinction between information aspects and system features as determinants of satisfaction. Based on the IS success literature, DeLone and McLean's highly cited model (1992) identified IQ and SQ as antecedents of user satisfaction and use. A similar separation of theoretical constructs can be found in marketing. In modeling overall satisfaction, Spreng et al. (1996) identified attribute satisfaction and information satisfaction as antecedents of satisfaction. Information satisfaction is based on the quality of the information used in deciding to purchase a product, whereas attribute satisfaction measures "the consumer's level of contentment with a product" (Spreng et al. 1996, p. 17). Szymanski and Hise (2000) found that aspects associated with product information and Web site design are important determinants in forming customer satisfaction. For online shopping, the experience of using a Web site during the information-search phase could be affected by IQ factors (e.g., a richer product description) and SQ factors (e.g., other links; see Jarvenpaa and Todd 1996, 1997). Considering satisfaction in the Web-usage environment, Pitt et al. (1995) observe that information is the dominant concern of the user, while


Figure 1. The Model for Expectation-Disconfirmation Effects on Web-Customer Satisfaction (EDEWS)

the delivery mechanism is secondary. Furthermore, Katerattanakul and Siau (1999) and Zhang et al. (2000) note that an important role of Web sites is information delivery and that the quality of information is considered critical in e-commerce. At the same time, the Web site's performance in delivering information can be independent of the quality or nature of the information, thus making it possible to draw a clearer distinction between Web site information and its system. While distinguishing between IQ and SQ may not be widespread in traditional IS studies, such a distinction is clearly possible on the Web due to the feasibility of separating content from the content-delivery system. Recognizing and modeling information and system aspects separately may elucidate the process by which Web-customer satisfaction is formed. Based on the nature of Web site development for online shopping and the models proposed by DeLone and McLean (1992) and Spreng et al. (1996), we posit that Web-customer satisfaction has two distinctive sources: satisfaction with the quality of a Web site's information content, and satisfaction with the Web site's system performance in delivering information. Web customers' satisfaction with a Web site's IQ and SQ is in turn affected by their prior expectations

(formed by their prior experiences and exposure to vendors' marketing efforts), by possible discrepancies (i.e., disconfirmation) between such expectations and perceived performance, and by the perceived performance of the Web site. This concept is captured in the expectancy-disconfirmation paradigm, which has been the popular approach for measuring customer satisfaction in marketing. Based on this paradigm, customer satisfaction has three main antecedents: expectation, disconfirmation, and perceived performance. When applied to Web-customer satisfaction, Web-IQ satisfaction has three antecedents: IQ expectation, IQ disconfirmation, and IQ-perceived performance. Similarly, Web-SQ satisfaction has three antecedents: SQ expectation, SQ disconfirmation, and SQ-perceived performance. Figure 1 shows the EDEWS model, which is the conceptual motivation for identifying the key constructs in studying Web-customer satisfaction, as discussed below.

Satisfaction. Based on Spreng et al. (1996), Cadotte et al. (1987), and Oliver (1980), we define overall satisfaction as an affective state representing an emotional reaction to the entire Web site search experience. This definition focuses on the process evaluation associated with the purchase behavior, as opposed to the outcome-oriented approach, which emphasizes the

buyer's cognitive state resulting from the consumption experience. IQ satisfaction and SQ satisfaction in this study have an evaluative nature similar to that of overall satisfaction. Furthermore, following DeLone and McLean (1992), we define Web IQ as the customer's perception of the quality of information presented on a Web site and Web SQ as the customer's perception of a Web site's performance in information retrieval and delivery. In extending the DeLone and McLean model by addressing issues related to the relevance, timeliness, and accuracy of information, Seddon (1997) also emphasized the importance of IQ and SQ in perceived usefulness and user satisfaction. The distinction between IQ satisfaction and SQ satisfaction is useful in developing a business Web site and in gauging customers' satisfaction with it. For example, customers dissatisfied with site retrieval and delivery mechanisms (such as cluttered pages) are likely to leave the site even if the information available on the Web site is of high quality. Conversely, if a Web site lacks the information that customers need, its entertaining design or ease of search will not keep customers from leaving the site. Therefore, the distinction between IQ and SQ pertaining to customer satisfaction has practical implications for the Web-design process.

Expectation. When consumers consider buying a product, they utilize prior purchasing experiences or external information to form internal standards of comparison, which are used in forming their expectations (Olson and Dover 1979, Oliver 1980). Expectation is conceptualized as the aggregation of individual belief elements in a consumer's cognitive structure (Olson and Dover 1979), and is a precursor in predicting a variety of phenomena involved in buying behaviors and subsequent perceptions. There has been a lack of consensus regarding the conceptual definition of the expectation construct in the expectancy-disconfirmation and SERVQUAL literature.
In the debate over the validity of expectation measurement in SERVQUAL (Van Dyke et al. 1997, Pitt et al. 1997, Kettinger and Lee 1997), Van Dyke et al. observed that expectation lacks a concise conceptual definition because of its multiple definitions and corresponding operationalizations. For example, three types of expectation have been suggested: the "should"

expectation, the "ideal" expectation, and the "will" expectation (Teas 1993, Boulding et al. 1993, Tse and Wilton 1988). The "should" expectation highlights a normative standard for performance, whereas the "ideal" expectation characterizes optimal performance. The "will" expectation focuses on predicting future performance. Following the conceptual definition by Olson and Dover (1979), we define customer expectation as customers' pretrial beliefs about a product (a Web site in the current study). Our definition of expectation is in line with the "will" expectation suggested by Teas (1993) and with Szajna and Scamell's (1993) conceptualization of expectation as a set of beliefs held by IS users about a system's performance and their own performance when using the system. Furthermore, it also corresponds with Spreng et al.'s (1996) definition of expectation as beliefs about a product's attributes or performance at some time in the future.

Perceived Performance. Perceived performance is defined as customers' perception of how product performance fulfills their needs, wants, and desires (Cadotte et al. 1987). The general role of this construct in the expectation-disconfirmation paradigm has been as a standard of comparison included in the disconfirmation of expectations. In this respect, empirical research has attempted to investigate the impact of perceived performance on satisfaction directly (Churchill and Surprenant 1982, LaTour 1979) or as mediated by disconfirmation (Cadotte et al. 1987, Churchill and Surprenant 1982, Churchill 1979, Oliver 1980, Swan and Trawick 1980).

Disconfirmation. Disconfirmation is defined as consumers' subjective judgments resulting from comparing their expectations and their perceptions of the performance received. This definition is similar to the concept of expectation congruency suggested by Spreng et al. (1996).
Specifically, once consumers form their expectations, they compare their perceptions of product performance (based on their purchasing experiences) to the pre-established levels of expectation. Disconfirmation occurs when consumers' evaluations of product performance differ from their pretrial expectations about the product (Olson and Dover 1979). Conceptually, there has been a debate regarding


how to measure the disconfirmation construct. There are two main approaches: (i) compute disconfirmation by subtracting expectation from perceived performance, or (ii) measure disconfirmation directly, as an independent construct capturing the perceived gap. Advocating the subtractive approach, Pitt et al. (1997) argue for including expectations when studying quality issues, rather than relying solely on a perception measurement, so that richer diagnostic information can be obtained. Likewise, Swan and Trawick (1980) introduce the subtractive disconfirmation approach, based on comparison theory. However, in the SERVQUAL literature, Van Dyke et al. (1997) advocate the direct measurement of one's perception of service quality with a disconfirmation measurement. Furthermore, several studies in marketing use the subjective disconfirmation approach, considering disconfirmation as an independent construct that influences consumer satisfaction (Oliver 1977, 1980, Churchill and Surprenant 1982, Spreng et al. 1996, Cronin, Jr. and Taylor 1992). We opted for the direct measurement of disconfirmation because it has been the more established approach in the expectation-disconfirmation paradigm.

Salient Dimensions of Information and System Quality

In empirical studies examining expectation-disconfirmation constructs and models in marketing, the candidate product's salient attributes are easily identifiable and directly measurable. For example, in setting up their experiments, Churchill and Surprenant (1982) used a plant and a videodisk player. They chose the number of blossoms and plant size as the important features of the plant, and focus and hum as the important features of the videodisk player. Similarly, Spreng et al. (1996) used versatility and video outcome as two salient features of a camcorder. However, in measuring IQ and SQ expectation, performance, and disconfirmation, the salient dimensions of IQ and SQ are not pre-established, nor are such dimensions directly measurable.
Therefore, the salient dimensions of Web IQ and Web SQ should be identified and measured as latent variables. The salient dimensions can then be used to construct second-order factors to represent IQ and SQ expectation, performance, and disconfirmation.
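The two measurement approaches discussed above, subtractive versus direct disconfirmation, can be contrasted in a short sketch (hypothetical 0-10 responses; the variable names and values are illustrative, not data from the study):

```python
import numpy as np

# Hypothetical 0-10 ratings for one dimension (e.g., reliability) from five subjects.
expectation = np.array([8, 9, 7, 6, 9])  # pretrial expectation
performance = np.array([6, 9, 8, 4, 7])  # perceived performance after using the site

# (i) Subtractive approach: disconfirmation computed as a difference score.
subtractive = performance - expectation  # negative values: expectations not met

# (ii) Direct approach (the one adopted in this study): disconfirmation is
# measured with its own scale item, e.g., "much worse than expected (0) ...
# much better than expected (10)"; simulated here as a separate response vector.
direct = np.array([3, 5, 6, 1, 4])

print(subtractive.tolist())  # [-2, 0, 1, -2, -2]
```

The difference score in (i) conflates measurement error in two items, which is one reason the direct measurement in (ii) is preferred in the expectation-disconfirmation literature cited above.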

Higher-order factors have been used in measuring complex constructs. For example, Segars and Grover (1998) developed a second-order factor for measuring the success of strategic planning. Doll et al. (1994) developed a second-order factor to measure end-user computing satisfaction as a multifaceted construct. To measure the second-order constructs, we devised a two-phase process for instrument development. The objective of the first phase was to identify the salient dimensions of Web IQ and Web SQ. In the second phase, the instrument for construct measurement was developed, and the measurement model was tested using controlled lab experiments in which the IQ and SQ expectations of participants were manipulated for controlling and measuring the levels of expectation and disconfirmation and their impacts on Web-customer satisfaction. This section reports the results of the first phase.

Phase 1 required the identification of the factors considered important by Web customers in judging the IQ and SQ of Web sites. A number of researchers have examined various factors determining Web IQ, but a standard measure has not emerged. After an extensive review of the literature, we identified five IQ dimensions: (1) relevance, (2) timeliness, (3) reliability, (4) scope, and (5) perceived usefulness (Table 1), and four SQ dimensions: (1) access, (2) usability, (3) navigation, and (4) interactivity (Table 2). The literature search contributed to the content validity of the constructs to be measured.

Methods
Construct Validation

To create instruments to measure the constructs of Web IQ and Web SQ, we began the instrument development process with previously tested instruments (Zmud and Boynton 1991, Bailey and Pearson 1983), which has been designated as an efficient practice for IS researchers (Boudreau et al. 2001). The draft instruments used an 11-point semantic differential scale with values ranging from 0 (not important at all) to 10 (extremely important). In accordance with Churchill's (1979) general principles for construct development, a draft 42-item instrument was created (33 items as

Table 1. First-Order Factors and Subscales for Web-Information Quality

Relevance
  Definition: Concerned with such issues as relevancy, clearness, and goodness of the information
  Subscales: Applicable, Related, Clear
  Supporting literature: Bailey and Pearson 1983, Bruce 1998, Davis et al. 1989, Doll and Torkzadeh 1988, Eighmey 1997, Eighmey and McCord 1998, Saracevic et al. 1988, Seddon 1997, Wilkerson et al. 1997, Zmud 1978

Timeliness
  Definition: Concerned with the currency of the information
  Subscales: Current, Continuously Updated
  Supporting literature: Abels et al. 1997, Bailey and Pearson 1983, Doll and Torkzadeh 1988, King and Epstein 1983, Wilkerson et al. 1997, Zmud 1978

Reliability
  Definition: Concerned with the degree of accuracy, dependability, and consistency of the information
  Subscales: Believable, Accurate, Consistent
  Supporting literature: Bailey and Pearson 1983, Doll and Torkzadeh 1988, Eighmey 1997, Eighmey and McCord 1998, King and Epstein 1983, Wilkerson 1997, Zmud 1978

Scope
  Definition: Evaluates the extent of information, range of information, and level of detail provided by the Web site. This new dimension of information quality, similar to a library search, is needed for Web site evaluation.
  Subscales: Sufficient, Complete, Covers a Wide Range, Detailed
  Supporting literature: Bailey and Pearson 1983, Doll and Torkzadeh 1988, King and Epstein 1983, Schubert and Selz 1998, Wilkerson et al. 1997, Zmud 1978

Perceived Usefulness
  Definition: Users' assessment of the likelihood that the information will enhance their purchasing decision
  Subscales: Informative, Valuable, Instrumental
  Supporting literature: Abels et al. 1997, Bailey and Pearson 1983, Davis et al. 1989, Seddon 1997, Doll et al. 1998, Eighmey 1997, Eighmey and McCord 1998, Larcker and Lessig 1980, Moore and Benbasat 1991, Venkatesh and Davis 1996, Venkatesh and Davis 2000

Table 2. First-Order Factors and Subscales for Web-System Quality

Access
  Definition: Refers to the speed of access and the availability of the Web site at all times
  Subscales: Responsive, Loads Quickly
  Supporting literature: Bailey and Pearson 1983, Novak et al. 2000, Selz and Schubert 1998, Wilkerson et al. 1997

Usability
  Definition: Concerned with the extent to which the Web site is visually appealing, consistent, fun, and easy to use
  Subscales: Simple Layout, Easy to Use, Well Organized, Visually Attractive, Fun, Clear Design
  Supporting literature: Abels et al. 1997, Bailey and Pearson 1983, Davis 1989, Doll et al. 1998, Doll and Torkzadeh 1988, Doll et al. 1994, Dumas and Redish 1993, Eighmey 1997, Nielsen 1993, Moore and Benbasat 1991, Schubert and Selz 1998, Selz and Schubert 1998, Eighmey and McCord 1998, Venkatesh and Davis 1996, Wilkinson et al. 1997, Zmud 1978

Navigation
  Definition: Evaluates the links to needed information
  Subscales: Adequate Links, Clear Description for Links, Easy to Locate, Easy to Go Back and Forth, A Few Clicks
  Supporting literature: Abels et al. 1997, Wilkinson et al. 1997

Interactivity
  Definition: Evaluates the search engine and the personal design (i.e., the shopping-cart feature) of the Web site
  Subscales: Customized Product, Search Engine, Create List of Items, Change List of Items, Find Related Items
  Supporting literature: Abels et al. 1997, Eighmey 1997, Eighmey and McCord 1998, Selz and Schubert 1998, Wilkinson et al. 1997


shown in Tables 1 and 2, plus one direct question for each construct). The participants in the measurement process were undergraduate and graduate students at a large metropolitan university. Characterizing Web users as highly educated (88% had at least some college experience) with an average age of 38 years (average age decreased as the number of years on the Internet and the level of skill increased), the GVU WWW survey (1998) described generic Web users with educational profiles similar to those of our participants. Web users in the GVU WWW survey (1998) were also quite experienced with the Internet: 74% had more than one year of experience with the Internet. The participants in this study had an average age of about 27 years, and more than 80% had more than two years of experience with the Internet. Initially, 10 Internet customers and experts reviewed the instrument to evaluate its face and content validity. The comments collected from the respondents did not indicate any problems. As recommended by a respondent, two versions of the instrument were created and used to avoid order bias. The first pilot test was performed based on a convenience sample of 47 usable responses. An examination of the factor-analysis results showed the existence of additional factors, leading to the addition of understandability and adequacy as two more IQ dimensions, and entertainment as an additional SQ dimension. Furthermore, the "easy to locate" item had a very low loading. Because its meaning did not correspond with the concept of navigation (the factor it was intended to measure), it was dropped at this stage. Six new items and three general questions (one per new construct) were added. The changes resulted in a purified instrument (Churchill 1979) with 50 items for measuring the IQ and SQ dimensions and their importance. A second pilot test was performed to test the modified instrument, based on another convenience sample of 47 usable responses.
Examination of the findings showed the instrument to be reliable, with no major bias. The twice-piloted instrument was used for data collection on the Web-IQ and Web-SQ dimensions in the first phase of the study. Data were collected in two rounds, yielding 330 usable responses in the first round and

238 in the second round, for a total of 568 observations. There were no overlaps between the subjects in the two rounds. Examination of the t-test results gave no indication of item-order bias. In the analysis of the IQ dimensions, the construct for timeliness showed extensive cross-loadings with the reliability factor. It seems that Web customers view out-of-date information as unreliable for making purchase decisions. Therefore, for further purification (Churchill 1979), this factor was dropped. One item for usefulness (instrumental) was dropped due to its low factor loading in measuring perceived usefulness. The results indicated six factors for Web IQ (Table 3). The high factor loadings indicate convergent validity, and the lack of noticeable cross-loadings supports the discriminant validity of the reported factors for Web IQ. The mean importance rating is the average of subject ratings of the items comprising each factor and thus indicates the strength of conviction that the subjects had concerning the importance of the construct. The last row of Table 3 reports the mean importance rating, which is used in the second phase of the study for selecting the salient dimensions.

The factor-analysis results for Web SQ indicated that the navigation factor should be divided into (internal) navigation and (external) hyperlinks, which is quite meaningful in the context of information search for online shopping. Due to low factor loading in measuring interactivity, the search engine item was removed from the analysis. Table 4 reports the results of the factor analysis of the Web-SQ dimensions. Again, the high factor loadings for the reported factors and the absence of significant cross-loadings support the convergent and discriminant validity of the proposed factors. Appendix A includes the instrument and Cronbach alphas for the factors reported in Tables 3 and 4. The alpha values for all factors in Web IQ exceed 0.85.
High reliability was also present for usability, entertainment, hyperlinks, and interactivity. However, the Cronbach alpha was 0.51 for access and 0.68 for navigation. Whereas the 0.68 value might be acceptable in exploratory research (Nunnally 1967), the same cannot be said of the 0.51 value. But because usefulness in IQ and navigation, access, and hyperlinks in SQ are two-item factors, the interitem correlation can be used as an appropriate check for these factors.
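The reliability checks described above, Cronbach's alpha for multi-item factors and interitem correlation for two-item factors, can be sketched as follows (synthetic data; this illustrates the computation only, not the study's responses):

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for a respondents-by-items score matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

def mean_interitem_corr(items: np.ndarray) -> float:
    """Mean pairwise Pearson correlation -- the check used for two-item factors."""
    r = np.corrcoef(items, rowvar=False)
    upper = r[np.triu_indices_from(r, k=1)]
    return upper.mean()

# Synthetic responses: 200 subjects, 3 items driven by one underlying factor.
rng = np.random.default_rng(0)
true_score = rng.normal(size=(200, 1))
items = true_score + 0.5 * rng.normal(size=(200, 3))

alpha = cronbach_alpha(items)
print(round(alpha, 2))                     # high alpha: internally consistent items
print(round(mean_interitem_corr(items), 2))
```

For a two-item factor, alpha understates reliability, which is why the interitem correlation is the more informative check in that case.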

Table 3. Factor Analysis for Information Quality (N = 568)

Construct (Items)        Manifest Variable        F1     F2     F3     F4     F5     F6
Relevance (3)            Applicable             0.165  0.037  0.815  0.213  0.109  0.140
                         Related                0.126  0.183  0.815  0.123  0.173  0.159
                         Pertinent              0.188  0.112  0.785  0.204  0.196  0.157
Understandability (3)    Clear in Meaning       0.349  0.045  0.290  0.703  0.174  0.250
                         Easy to Understand     0.204  0.106  0.180  0.857  0.197  0.113
                         Easy to Read           0.168  0.171  0.171  0.823  0.122  0.123
Reliability (3)          Trustworthy            0.843  0.070  0.174  0.225  0.201  0.133
                         Accurate               0.863  0.037  0.186  0.228  0.189  0.129
                         Credible               0.856  0.121  0.136  0.156  0.199  0.110
Adequacy (3)             Sufficient             0.259  0.207  0.175  0.086  0.760  0.164
                         Complete               0.308  0.151  0.272  0.288  0.659  0.238
                         Necessary Topics       0.194  0.299  0.181  0.253  0.699  0.183
Scope (3)                Wide Range             0.129  0.854  0.128  0.104  0.173  0.081
                         Wide Variety of Topics 0.076  0.924  0.099  0.127  0.141  0.085
                         # of Different Subjects 0.004 0.878  0.073  0.061  0.146  0.173
Usefulness (2)           Informative            0.180  0.272  0.246  0.207  0.235  0.799
                         Valuable               0.227  0.161  0.299  0.249  0.285  0.759
Variance Explained (%)                           46.6   12.5    7.9    6.4    5.0    3.8
Mean Importance Rating                           8.96   6.91   7.80   8.43   8.12   8.15

Table 4. Factor Analysis for System Quality (N = 568)

Construct (Items)   Manifest Variable             F1     F2     F3     F4     F5     F6
Access (2)          Responsive                  0.149  0.204  0.065  0.024  0.041  0.854
                    Quick Loads                 0.084  0.264  0.066  0.023  0.347  0.657
Usability (4)       Simple Layout               0.180  0.759  0.025  0.189  0.047  0.161
                    Easy to Use                 0.119  0.791  0.114  0.132  0.134  0.236
                    Well Organized              0.125  0.767  0.223  0.091  0.208  0.155
                    Clear Design                0.205  0.691  0.247  0.019  0.308  0.034
Entertainment (3)   Visually Attractive         0.221  0.218  0.734  0.112  0.195  0.032
                    Fun                         0.187  0.133  0.888  0.149  0.091  0.053
                    Interesting                 0.138  0.117  0.866  0.193  0.051  0.076
Hyperlinks (2)      Adequate # of Links         0.179  0.132  0.278  0.836  0.121  0.022
                    Clear Description for Links 0.179  0.199  0.151  0.839  0.209  0.027
Navigation (2)      Easy to Go Back and Forth   0.189  0.202  0.119  0.259  0.741  0.193
                    A Few Clicks                0.127  0.294  0.170  0.120  0.699  0.126
Interactivity (4)   Create List of Items        0.799  0.108  0.174  0.063  0.244  0.020
                    Change List of Items        0.818  0.114  0.072  0.034  0.306  0.102
                    Create Customized Product   0.785  0.174  0.209  0.197  0.091  0.117
                    Select Different Features   0.770  0.212  0.178  0.189  0.003  0.104
Variance Explained (%)                           39.0   10.7    9.0    6.4    5.3    4.5
Mean Importance Rating                           7.36   8.17   7.14   6.70   8.09   8.40
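The convergent- and discriminant-validity reading of Tables 3 and 4 can be sketched as a simple rule: assign each item to the factor on which it loads highest, and flag any loading on another factor above a cutoff (the 0.40 cutoff below is an illustrative convention, not a value stated in the paper). The three example rows use loadings from Table 4:

```python
import numpy as np

def assign_and_flag(loadings: np.ndarray, cutoff: float = 0.40):
    """Assign each item to its highest-loading factor; flag cross-loadings."""
    primary = loadings.argmax(axis=1)
    cross = [(item, factor)
             for item in range(loadings.shape[0])
             for factor in range(loadings.shape[1])
             if factor != primary[item] and loadings[item, factor] >= cutoff]
    return primary, cross

# Loadings for three Table 4 rows (Responsive, Quick Loads, Simple Layout).
loadings = np.array([
    [0.149, 0.204, 0.065, 0.024, 0.041, 0.854],
    [0.084, 0.264, 0.066, 0.023, 0.347, 0.657],
    [0.180, 0.759, 0.025, 0.189, 0.047, 0.161],
])

primary, cross = assign_and_flag(loadings)
print(primary.tolist())  # [5, 5, 1] -> 0-based indices: Factor 6, Factor 6, Factor 2
print(cross)             # [] -> no cross-loading at or above the cutoff
```

High primary loadings with an empty cross-loading list correspond to the convergent- and discriminant-validity pattern the text reports for both tables.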


We also examined interitem correlations for each factor. These correlations were quite high for Web IQ and relatively high for Web SQ. Of the two-item factors, usefulness and hyperlinks had relatively high interitem correlations, whereas access and navigation exhibited relatively lower correlations. However, all correlations were statistically significant. General questions showed high correlations with factor items in most cases. High correlation values among items and with the general question in each factor indicated support for the presence of convergent validity.

Construct Measurements, Experimental Design, and the Measurement Model

In the second phase, the three most important dimensions of Web IQ and Web SQ were selected for manipulating expectations and measuring perceived performance and disconfirmation. As shown in Appendix A, the importance ratings of the Web-IQ and Web-SQ dimensions were measured on an 11-point semantic differential scale ranging from 0 (not important at all) to 10 (extremely important). The criterion was to select the three factors with the highest mean importance ratings. The importance rating measures the importance of each dimension to subjects. Using such ratings, Brancheau et al. (1996) identified key research issues of IS management. In the same context, Wang and Strong (1996) classified attributes of data quality to create a hierarchical representation of consumers' data-quality needs. Furthermore, in selecting the most important features of Web IQ and Web SQ based on importance ratings, we have followed the common practice of selecting the most important attributes in designing experiments for testing expectation-confirmation models in marketing. Therefore, based on this criterion, the three most salient dimensions of Web IQ were reliability, understandability, and usefulness. Similarly, access, usability, and navigation were selected as the top three salient dimensions of Web SQ (Tables 3 and 4).
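The selection rule, ranking dimensions by mean importance rating and keeping the top three, can be sketched with the means reported in Tables 3 and 4:

```python
# Mean importance ratings (0-10 scale) reported in Tables 3 and 4.
iq_ratings = {"Reliability": 8.96, "Understandability": 8.43, "Usefulness": 8.15,
              "Adequacy": 8.12, "Relevance": 7.80, "Scope": 6.91}
sq_ratings = {"Access": 8.40, "Usability": 8.17, "Navigation": 8.09,
              "Interactivity": 7.36, "Entertainment": 7.14, "Hyperlinks": 6.70}

def top_three(ratings: dict) -> list:
    """Return the three dimensions with the highest mean importance ratings."""
    return sorted(ratings, key=ratings.get, reverse=True)[:3]

print(top_three(iq_ratings))  # ['Reliability', 'Understandability', 'Usefulness']
print(top_three(sq_ratings))  # ['Access', 'Usability', 'Navigation']
```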
The rationale for using three dimensions was that second-order constructs had to be created from these factors. Chin (1998, p. x) suggested that "to adequately test the convergent validity, the number of first-order factors should be four or greater (three, while statistically adequate, would represent a just-identified model for congeneric models)." Kline (1998) suggested that for a confirmatory factor analysis model with a second-order factor to be identified, at least three first-order factors are needed. On the other hand, using more than three dimensions would increase the complexity of the measurement model to an unacceptable level in terms of estimation and sample size. In most experiments designed for testing expectation-disconfirmation models, only two salient attributes have been used (Churchill and Surprenant 1982, Spreng et al. 1996). Hence, using three salient dimensions for IQ and SQ provides adequate data for testing the EDEWS measurement model while keeping the complexity of the experiments at a manageable level. The three selected salient dimensions of Web IQ and Web SQ were used in developing the measurement model (shown in Appendix F) as well as the instruments for measuring expectation, perceived performance, and disconfirmation for Web IQ and Web SQ, as reported below.

Expectation Measurement. Expectations regarding the reliability, understandability, and usefulness of Web sites were measured as first-order factors, which were used in creating a second-order factor for measuring the IQ-expectation construct. Similarly, the first-order factors for measuring expectations regarding access, usability, and navigation were used to create a second-order factor for measuring SQ expectation. Manifest variables for expectations were measured on an 11-point semantic differential scale ranging from not likely at all to highly likely, as shown in Appendix B.

Perceived-Performance Measurement. Conceptually, two different definitions of the performance construct are possible: perceived (subjective) product performance and objective product performance. Because the expectation-disconfirmation paradigm focuses on customers' subjective judgments of product performance, this study measured perceived performance.
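The identification rule cited above (Kline 1998: at least three first-order factors per second-order factor) can be stated as a mechanical check. A minimal sketch, assuming the construct-to-dimension mapping described in the text; the dictionary layout is purely illustrative:

```python
# Second-order constructs and their first-order factors, as selected
# in the text (three dimensions each for IQ and SQ).
EDEWS_STRUCTURE = {
    "IQ-expectation": ["reliability", "understandability", "usefulness"],
    "SQ-expectation": ["access", "usability", "navigation"],
}

def is_identified(structure):
    """Kline's (1998) rule of thumb: each second-order factor needs
    at least three first-order factors to be identified."""
    return all(len(factors) >= 3 for factors in structure.values())
```

A structure with only two first-order factors under some second-order factor would fail this check, which is why two-attribute designs from earlier marketing experiments were insufficient here.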
The construct for IQ-perceived performance was measured as a second-order factor using the first-order factors for perceived performance regarding reliability, understandability, and usefulness. Similarly, the second-order construct for SQ-perceived performance was measured using the SQ first-order dimensions of access, usability, and navigation. Manifest variables for perceived performance were measured on an 11-point semantic differential scale ranging from very poor to very good, as shown in Appendix C.

Information Systems Research Vol. 13, No. 3, September 2002

Disconfirmation Measurement. This study follows the subjective disconfirmation approach by measuring disconfirmation directly. The first- and second-order factors for IQ and SQ disconfirmation were created in a manner similar to the factors used for measuring IQ and SQ expectation and perceived performance. The instrument for disconfirmation measurement directly evaluates disconfirmation as an independent construct, with an 11-point semantic differential scale ranging from 0 (much lower than you thought) to 10 (much higher than you thought), with 5 as the neutral midpoint (the same as you expected), as shown in Appendix D. Positive disconfirmation is measured by scale values above 5 (5 to 10); negative disconfirmation is measured by scale values below 5 (0 to 5); and 5 represents zero disconfirmation.

Satisfaction Measurement. Using a single-item measure, Westbrook (1980) measured consumer satisfaction on a delightful-terrible scale, capturing the degree of delight consumers experienced in consuming a cognitively fulfilling product. Churchill and Surprenant (1982), following Pfaff's (1977) approach, described overall satisfaction with cognitive and affective models and used multi-item measures of belief and affect to assess satisfaction. Similarly, Spreng et al. (1996) based their definition of satisfaction on a summary evaluation of the entire product-use experience and developed four scales using cognitive and affective components to describe satisfaction. We developed the measurements of Web-IQ satisfaction and Web-SQ satisfaction, as well as overall Web-user satisfaction, based on published instruments with Cronbach alpha values greater than 0.96 (Oliver 1989, Spreng et al. 1996).[1] Spreng et al. (1996) measured satisfaction using four 11-point semantic differential scales. As shown in Appendix E, we adopted these scales to measure satisfaction with IQ, SQ, and overall satisfaction, and added two items to elicit overall satisfaction with a Web site through the intention to reuse it and to recommend it to others.

[1] The assumption in adopting this procedure is that high alphas represent reliable scales. However, it is possible that high alphas (Straub et al. 2000b) could result from common methods bias (Cook and Campbell 1979). It is important to assess whether the instrumentation process used maximally different methods to examine different variable types; in that case, high alphas would represent more reliable scales.

Experimental Design. The experiment was a 4 × 4 factorial design, intended to estimate the EDEWS measurement model via confirmatory factor analysis. A total of 16 cells were created for this study: 4 actual combinations of IQ and SQ levels crossed with 4 manipulated expectations. Churchill and Surprenant (1982) used a similar factorial design, setting up three performance categories for plants and videodisk players and using credible printed messages to manipulate subject expectations across the three performance categories, producing a total of nine cells for each product. Such manipulations are needed to create a common standard of comparison and to control the levels of expectation. In this study, four Internet travel-agent Web sites were selected to fit the high-high, high-low, low-high, and low-low levels of the Web-IQ and Web-SQ constructs. Web site selection was based on ratings of Internet travel agents by PC World, ComputerWorld, and Gomez.com.[2] The authors evaluated, categorized, and synthesized the quality dimensions and rating information provided by these sources and used the results to create the rating reports employed in implementing the experimental design. High IQ and high SQ indicate that the chosen Web site possesses a high level of IQ and SQ in terms of the three selected salient dimensions. The experimental protocol required manipulating subject expectations by setting them to high-high, high-low, low-high, and low-low for IQ and SQ. Expectations were manipulated at the start of the experiment by showing subjects rating reports with credible rating information for each salient dimension, along with descriptions of the assigned Web site.
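The 4 × 4 design just described (four actual IQ/SQ quality combinations crossed with four manipulated expectation levels) can be enumerated directly, together with a seeded random assignment of subjects to cells. This is a sketch of the design logic only; the cell labels and seed are illustrative:

```python
import itertools
import random

# The four IQ-SQ level combinations used in the study.
LEVELS = ["high-high", "high-low", "low-high", "low-low"]

# 16 cells: (actual Web-site IQ/SQ quality) x (manipulated expectation).
CELLS = list(itertools.product(LEVELS, LEVELS))

def assign_subjects(subject_ids, cells, seed=0):
    """Randomly assign each subject to one of the experimental cells."""
    rng = random.Random(seed)  # seeded for reproducibility of the sketch
    return {sid: rng.choice(cells) for sid in subject_ids}

print(len(CELLS))  # -> 16
```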
To ensure the experiment's objective of setting expectations, the researchers created one true and three mock rating reports for each Web site. For example, for the travel Web site with high IQ and high SQ, one rating report had true high-high ratings and three mock rating reports indicated high-low, low-high, and low-low levels for IQ and SQ. Subjects were randomly assigned to 1 of the 16 cells. Subjects participated in an information-search experiment requiring them to purchase an airline ticket over the Web. Usable data were collected from 312 subjects. During the experiment session, each subject received a rating report for a Web site to review. Based on their review of this information, subjects completed a questionnaire (Appendix B) designed to measure their IQ and SQ expectations. The questionnaire also collected demographic information about the subjects and their Web experience. Upon completing the questionnaire, subjects searched the assigned Web sites for 20 minutes. Following the search period, subjects completed a second questionnaire designed to measure their perceived performance and disconfirmation, as well as their Web-IQ and Web-SQ satisfaction. Overall satisfaction was also measured at this time. Appendices C-E report the instruments used in developing these questionnaires.

[2] PC World, ComputerWorld, and Gomez.com compared 9, 6, and 22 online travel agents, respectively. PC World (using a five-point scale: excellent, very good, good, fair, and poor) and ComputerWorld (using scores of A to F) compared these agents' Web sites on various attributes of information and system features. Gomez.com rated Web sites using an 11-point scale (0 to 10) for various criteria and computed the resulting overall score.
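The disconfirmation scale used in the second questionnaire (0 to 10, with 5 meaning "the same as you expected") implies a simple recoding when responses are interpreted: centering at the neutral midpoint gives the sign and magnitude of disconfirmation. A minimal sketch of that recoding; the function name is ours, not the paper's:

```python
def disconfirmation(score):
    """Interpret a response on the 0-10 disconfirmation scale,
    where 5 is the neutral midpoint ('the same as you expected')."""
    if not 0 <= score <= 10:
        raise ValueError("scale runs from 0 to 10")
    centered = score - 5  # > 0: positive, < 0: negative disconfirmation
    if centered > 0:
        return centered, "positive"
    if centered < 0:
        return centered, "negative"
    return 0, "zero"

print(disconfirmation(8))  # -> (3, 'positive')
```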

Analysis and Results


Structural equation modeling (SEM) was the designated tool for estimating the EDEWS measurement model in the confirmatory factor analysis; we used the most recent software for such an analysis, Mplus (developed by Muthen and Muthen 2001 and based on Muthen 1984). The estimation algorithm was mean-adjusted maximum likelihood, which adjusts the estimation results for nonnormality in the data.

The internal validity of the experiments was tested by manipulation checks for verifying that the manipulations had taken (Perdue and Summers 1986). Manipulation checks are intended to measure the extent to which treatments have been perceived by the subjects (Boudreau et al. 2001, p. 5). The manipulations of expectation were investigated to confirm whether different levels of expectations were successfully set. We estimated two logistic regressions with the IQ and SQ manipulations as the dependent (categorical) variables and the factor scores for IQ expectation and SQ expectation as independent variables. The coefficients of the estimated functions were significant with p values of 0.000, and tests of the significance of the estimated functions had p values of 0.000. The results were further confirmed by additional analysis using ANOVA, in which the F statistics had p values of 0.000. These findings indicate the successful manipulation of the participants' expectations.

We then estimated the measurement model, containing the confirmatory factor analysis for the constructs. The normed chi-square (the ratio of chi-square to the degrees of freedom) for the measurement model was two, below the recommended cutoff of three. The t values (estimated factor loadings divided by their standard errors) for the loadings of manifest variables were well above two, supporting the statistical significance of the parameter estimates (Muthen and Muthen 2001, p. 74). The t values of the factor loadings in the measurement model ranged from 18 to 119, indicating strong convergent and discriminant validity. Furthermore, the high squared multiple correlations (R2 values) for the indicators support the assertion that the indicators are good measures of the constructs (Gefen et al. 2000, Bollen 1989, p. 288). (Appendix F contains details on the measurement model, confirmatory factor loadings, R2, and t values.) Although there is debate regarding the use of multitrait-multimethod (MTMM) analysis (Alwin 1973, 1974; Bagozzi et al. 1991; Bagozzi and Yi 1991), we used MTMM to further examine the convergent and discriminant validity of the factors (Campbell and Fiske 1959, Straub 1989).
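The two fit diagnostics used above are simple ratios, and a sketch makes the arithmetic explicit. The input numbers below are illustrative, not the study's estimates:

```python
def normed_chi_square(chi_square, df):
    """Normed chi-square: chi-square divided by degrees of freedom;
    values below about 3 are commonly taken as acceptable fit."""
    return chi_square / df

def t_values(loadings, std_errors):
    """t value of each factor loading = estimate / standard error;
    |t| > 2 is treated as statistically significant."""
    return [est / se for est, se in zip(loadings, std_errors)]

# Illustrative: a model chi-square of 500 on 250 df gives a normed
# chi-square of 2.0, matching the value reported in the text.
print(normed_chi_square(500, 250))  # -> 2.0
```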
Although no clear criterion exists as to what makes methods maximally dissimilar (Pedhazur and Schmelkin 1991), we applied the approach used by Davis (1989) to argue that an item, say clear in meaning, used for measuring the expectation of understandability (in Appendix B) should be different from the clear in meaning item used for measuring the perceived performance of understandability (in Appendix C) or the disconfirmation of understandability (in Appendix D). In this sense, the clear in meaning item may be said to be a method used for measuring different traits (expectation, perceived performance, and disconfirmation of understandability) and should show low correlations across the traits (heterotrait-monomethod).[3] Acknowledging that this definition of method is not a use of maximally different methods, we follow Davis in arguing that the items for measuring the expectation of understandability (as reported in Appendix B) are different methods measuring the same trait and should have high correlations with each other (monotrait-heteromethod). Similar arguments can be made for the other five IQ and SQ factors. In the satisfaction case, the different satisfaction types (IQ, SQ, and overall) were used as traits and the common satisfaction items were used as methods. Thus, seven correlation matrices were created: one for each of the six IQ and SQ first-order factors (understandability, reliability, usefulness, access, usability, and navigation) and one for satisfaction. These matrices were examined for evidence of the convergent and discriminant validity of the expectation, perceived-performance, and disconfirmation constructs. To investigate convergent validity, the monotrait-heteromethod triangles for each construct were examined for high values; 100% of the correlations were significant for the traits, supporting convergent validity. To examine discriminant validity, each matrix was analyzed individually, resulting in a total of 1,608 comparisons and 20 violations (a 1.2% exception rate), a rate that meets the discriminant validity criterion set by Campbell and Fiske (1959).

The evidence for the reliability of the first-order factors is reported in Table 5. Cronbach alphas were all above 0.79, with most above 0.90. (The interitem correlations for the two-item factors are also reported in Appendices B-D.) Table 5 also shows the composite factor reliability values for the constructs, which are at or above the recommended threshold of 0.70 (Segars 1997).
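Cronbach's alpha, the reliability statistic reported throughout Table 5 and the appendices, has a closed form: alpha = k/(k-1) * (1 - sum of item variances / variance of the total score). A self-contained sketch with sample (n-1) variances; the data here are illustrative:

```python
def cronbach_alpha(items):
    """Cronbach's alpha for a list of item-score columns
    (each column holds one item's scores across all subjects)."""
    k = len(items)
    n = len(items[0])

    def variance(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    sum_item_vars = sum(variance(col) for col in items)
    totals = [sum(col[i] for col in items) for i in range(n)]
    return k / (k - 1) * (1 - sum_item_vars / variance(totals))

# Three perfectly parallel items yield alpha = 1.0.
print(cronbach_alpha([[1, 2, 3, 4], [1, 2, 3, 4], [1, 2, 3, 4]]))
```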
Average variance extracted (AVE) shows the amount of variance captured by a construct relative to the variance due to measurement error (Fornell and Larcker 1981, Segars 1997). The AVE values for all measures exceeded the recommended threshold of 0.50 (Segars 1997), which indicates that the constructs captured a relatively high level of variance (Column 4 of Table 5).

[3] Note that Straub et al. (2000b) raise a serious concern about Davis's (1989) MTMM analysis in this regard. They argue that Davis's methods were not different and not maximally different as described and demonstrated in Campbell and Fiske (1959).

Table 5    Reliability Measures for Model Constructs

First-Order Factors             Cronbach Alpha   Composite Factor Reliability   Average Variance Extracted (AVE)
E-Understandability             0.95             0.90                           0.74
E-Reliability                   0.97             0.90                           0.75
E-Usefulness                    0.95             0.78                           0.64
P-Understandability             0.95             0.88                           0.71
P-Reliability                   0.97             0.89                           0.72
P-Usefulness                    0.95             0.70                           0.52
D-Understandability             0.94             0.88                           0.71
D-Reliability                   0.95             0.84                           0.65
D-Usefulness                    0.95             0.78                           0.65
E-Access                        0.80             0.87                           0.60
E-Usability                     0.96             0.80                           0.67
E-Navigation                    0.93             0.75                           0.62
P-Access                        0.80             0.91                           0.74
P-Usability                     0.97             0.79                           0.66
P-Navigation                    0.86             0.75                           0.61
D-Access                        0.79             0.90                           0.71
D-Usability                     0.96             0.79                           0.66
D-Navigation                    0.81             0.77                           0.63
Web-information satisfaction    0.97             0.90                           0.69
Web-system satisfaction         0.98             0.91                           0.70
Overall satisfaction            0.98             0.96                           0.84

Following Doll et al. (1994) and Segars and Grover (1998), three first-order factors were used to create each second-order factor: understandability, reliability, and usefulness for the IQ constructs, and access, usability, and navigation for the SQ constructs. Based on Doll et al. (1994), R2 values for the second-order factors were computed (Table 6). High R2 values indicate an acceptable level of reliability for the second-order factors (Doll et al. 1994, Bollen 1989, Gefen et al. 2000). Significant factor loadings for the second-order factors (Appendix F) indicate their validity (Doll et al. 1994).
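Composite reliability and AVE, the other two columns of Table 5, can be computed from standardized loadings under the common convention that each item's error variance is 1 minus its squared loading. A sketch under that assumption; the loadings below are illustrative, not the study's estimates:

```python
def composite_reliability(loadings):
    """CR = (sum of loadings)^2 / ((sum of loadings)^2 + sum of error
    variances), with error variance 1 - loading^2 for standardized
    loadings (Fornell-Larcker convention)."""
    s = sum(loadings)
    theta = sum(1 - lam * lam for lam in loadings)
    return s * s / (s * s + theta)

def average_variance_extracted(loadings):
    """AVE = sum of squared loadings / (sum of squared loadings +
    sum of error variances)."""
    sq = sum(lam * lam for lam in loadings)
    theta = sum(1 - lam * lam for lam in loadings)
    return sq / (sq + theta)

# Three standardized loadings of 0.8 give AVE = 0.64, above the
# 0.50 threshold cited in the text.
print(average_variance_extracted([0.8, 0.8, 0.8]))
```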

Implications, Limitations, and Future Directions


In measuring Web-customer satisfaction, a critical task is to identify key constructs of Web-customer satisfaction and to develop validated instruments to measure them. Hence, the results of this study have immediate implications for businesses operating on the Web and for research in Web-customer satisfaction.


Table 6    R2 Values for Second-Order Factors

First-Order Factors   IQ Expectation   IQ Performance   IQ Disconfirmation
Understandability     0.59             0.64             0.67
Reliability           0.77             0.82             0.85
Usefulness            0.88             0.97             0.87

First-Order Factors   SQ Expectation   SQ Performance   SQ Disconfirmation
Access                0.79             0.86             0.86
Usability             0.88             0.81             0.79
Navigation            0.84             0.70             0.78

Note. The R2 values were computed based on Doll et al. (1994) using the CALIS procedure in SAS. IQ = information quality; SQ = system quality.

Implications for Practice. As online shopping becomes common practice, online retailers are increasingly held to the same business-performance standards as businesses operating in traditional markets. Managers of online retailers need to monitor customers' satisfaction with their Web sites to compete in the Internet market. In doing so, they need to recognize the distinctive roles of information content and Web site performance in retrieving and delivering product information. This imperative arises because customers dissatisfied with a Web site's information content will leave the site without making a purchase. Similarly, no matter how thorough the information content of a site is, a customer who has difficulty searching for and getting the needed information is likely to leave the site. Therefore, one can add value and create insight by examining Web-customer satisfaction with the information content as well as the system quality. With access to reliable and scientifically tested metrics, practitioners can examine the structure and dimensionality of Web-customer satisfaction. Our proposed metrics for separately measuring the IQ and SQ constructs can assist managers in this regard, because our analysis focuses distinctly on both the information content and its delivery.

Furthermore, online customers commonly have repeated experiences with various Web sites. Therefore, gauging their expectations and the disconfirmation of those expectations can be of value in analyzing Web-customer satisfaction. Online retailers can examine whether their Web sites meet their customers' expectations by examining Web customers' IQ and SQ expectations and disconfirmation. Moreover, the introduction of the expectation and disconfirmation constructs brings the marketing aspect of Web sites into focus for such retailers, an aspect crucial to the effective design of Web sites for online business.

Implications for Research. Our work paves the way for researchers to investigate the impact of expectations and disconfirmation on Web-customer satisfaction by clearly delineating the Web-IQ and Web-SQ dimensions. It shows the complex nature of the constructs and of the experimental design needed to accurately analyze the process by which Web-customer satisfaction is formed and to test hypotheses regarding the relationships among these constructs. In addition, validated measures could provide consensus among researchers of customer satisfaction and encourage them to develop more refined measurement models (Segars and Grover 1998). This study provides the metrics needed to initiate future studies on Web-customer satisfaction.

Limitations. The reported results are limited by the type of subject, the nature of laboratory experimentation, and the choice of Web sites. Using students as subjects could have an impact on the results (Szymanski and Henard 2001); testing the measurement model with other strata of Web customers will add to the generalizability of our results. Second, the nature of lab experiments and the choice of Web sites limit the reported results. Because the purchase of airline tickets is a prevailing practice among Web users, this study employed the Web sites of online travel agents for its experiments.
However, Web-customer satisfaction may depend on the distinctive nature of the products or services offered online. Replicating this study for other types of products and services can enhance the generalizability of the reported results.

Directions for Future Research. The results of this study facilitate further research in analyzing the antecedents of Web-customer satisfaction. Such an analysis can provide valuable insight into the process by which Web-customer satisfaction is formed and into the identification of factors that could lead to a more satisfying experience in the information phase of online shopping. For success in e-commerce, the information-search stage must lead to a purchase decision. Because the present study's focus was on the measurement of Web-customer satisfaction, the proposed constructs did not contain purchase intention. However, a comprehensive approach is needed to examine the influence of satisfaction on purchase intention in the Web context. Furthermore, in his model, Seddon (1997) includes the net benefits of the IS to individuals, organizations, and society. It would be instructive to examine these benefits in the context of Web sites, and the role of these factors in the formation of expectations about Web sites.

Conclusions

In this study, two perspectives, from the user-satisfaction literature in IS and the customer-satisfaction literature in marketing, were synthesized to identify nine key constructs for analyzing Web-customer satisfaction. Based on the IS literature, we argued that measuring Web-customer satisfaction with information quality and system quality provides insight into a customer's overall satisfaction with a Web site. By synthesizing IS and marketing theories related to customer satisfaction, key constructs were identified for Web-customer satisfaction within a model of Expectation-Disconfirmation Effects on Web-Customer Satisfaction (EDEWS), demonstrating the role these constructs play in the formation of overall Web-customer satisfaction. The EDEWS measurement model provided strong support for the reliability and validity of the proposed metrics for measuring the key constructs of Web-customer satisfaction.

Acknowledgments
The authors thank Fred Davis, the Guest Editor, the Associate Editor, and the reviewers for their helpful comments on this paper.

Appendix A.    Information Quality and System Quality Measurement Scales and Reliabilities (Phase 1)

All items were measured on a continuous 11-point semantic differential scale, where 0 = not important at all and 10 = extremely important. (Each construct has a general question that is reported here but is not used in computing the Cronbach alpha. Additionally, interitem correlations are reported for two-item factors.)

Information Quality

Relevance: (Cronbach α = 0.85)
Information that is applicable to your purchase decision is:
Information that is related to your purchase decision is:
Information that is pertinent to your purchase decision is:
In general, information that is relevant to your purchase decision is:

Understandability: (Cronbach α = 0.88)
Information that is clear in meaning is:
Information that is easy to comprehend is:
Information that is easy to read is:
In general, information that is understandable for you in making your purchase decision is:

Reliability: (Cronbach α = 0.92)
Information that is trustworthy is:
Information that is accurate is:
Information that is credible is:
In general, information that is reliable for making your purchase decision is:

Adequacy: (Cronbach α = 0.82)
Information that is sufficient for your purchase decision is:
Information that is complete for your purchase decision is:
Information that contains necessary topics for your purchase decision is:
In general, information that is adequate for your purchase decision is:

Scope: (Cronbach α = 0.91)
Information that covers a wide range is:
Information that contains a wide variety of topics is:
Information that contains a number of different subjects is:
In general, information that covers a broad scope for your purchase decision is:

Usefulness: (Cronbach α = 0.88, interitem correlation = 0.78)
Information that is informative to your purchase decision is:
Information that is valuable to your purchase decision is:
In general, information that is useful in your purchase decision is:

System Quality

Access: (Cronbach α = 0.57, interitem correlation = 0.40)
A Web site that is responsive to your request is:
A Web site that quickly loads all the text and graphics is:
In general, a Web site that provides good access is:

Usability: (Cronbach α = 0.84)
A Web site that has a simple layout for its contents is:
A Web site that is easy to use is:
A Web site that is well organized is:
A Web site that has a clear design is:
In general, a Web site that is user-friendly is:

Entertainment: (Cronbach α = 0.87)
A Web site that is visually attractive is:
A Web site that is fun to navigate is:
A Web site that is interesting to navigate is:
In general, a Web site that is entertaining is:


Hyperlinks[4]: (Cronbach α = 0.83, interitem correlation = 0.70)
A Web site that has an adequate number of links is:
A Web site that has clear descriptions for each link is:

Navigation: (Cronbach α = 0.68, interitem correlation = 0.52)
A Web site, on which it is easy to go back and forth between pages, is:
A Web site that provides a few clicks to locate information is:
In general, a Web site, on which it is easy to navigate, is:

Interactivity: (Cronbach α = 0.87)
A Web site that provides the capability to create a list of selected items (such as a shopping cart) is:
A Web site that provides the capability to change items from a created list (such as changing the contents of a shopping cart) is:
A Web site that provides the capability to create a customized product (such as a computer configuration, or creating clothes to your taste and measurements) is:
A Web site that provides the capability to select different features of the product to match your needs is:
In general, a Web site, on which one can actively participate in creating the desired product, is:

[4] Since this factor was created after the completion of data collection, it does not have a general question.

Appendix B.    Measurement Scales for Expectations

All items were measured on a continuous 11-point semantic differential scale, where 0 = not likely at all and 10 = highly likely. (The Cronbach alpha is reported for each factor. Additionally, the interitem correlations are reported for two-item factors.)

Expectation About Information Quality
Based on the reports provided to you about the Web site, do you expect information on the Web site to be:

Understandability: (Cronbach α = 0.95)
clear in meaning
easy to comprehend
easy to read
In general, understandable for you in making your purchase decision

Reliability: (Cronbach α = 0.97)
trustworthy
accurate
credible
In general, reliable for making your purchase decision

Usefulness: (Cronbach α = 0.95, interitem correlation = 0.90)
informative to your purchase decision
valuable to making your purchase decision
In general, useful in your purchase decision

Expectation About System Quality
Based on the reports provided to you about the Web site, do you expect that the Web site:

Access: (Cronbach α = 0.80, interitem correlation = 0.67)
is responsive to your request
quickly loads all the text and graphics
In general, provides good access

Usability: (Cronbach α = 0.96)
has a simple layout for its contents
is easy to use
is well organized
has a clear design
In general, is user-friendly

Navigation: (Cronbach α = 0.93, interitem correlation = 0.86)
is easy to go back and forth between pages
provides a few clicks to locate information
In general, is easy to navigate

Appendix C.    Measurement Scales for Perceived Performance

All items were measured on a continuous 11-point semantic differential scale, where 0 = very poor and 10 = very good. (The Cronbach alpha is reported for each factor. Additionally, the interitem correlations are reported for two-item factors.)

Performance in Information Quality
Based on your experience of using the given Web site, please provide your evaluation of its performance in terms of the following features. The Web site's performance in providing information that is:

Understandability: (Cronbach α = 0.95)
clear in meaning was
easy to comprehend was
easy to read was
In general, understandable for you in making your purchase decision was

Reliability: (Cronbach α = 0.97)
trustworthy was
accurate was
credible was
In general, reliable for making your purchase decision was

Usefulness: (Cronbach α = 0.95, interitem correlation = 0.90)
informative to your purchase decision was
valuable to making your purchase decision was
In general, useful in your purchase decision was

Performance in System Quality
The Web site's performance that:

Access: (Cronbach α = 0.80, interitem correlation = 0.66)
is responsive to your request was
quickly loads all the text and graphics was
In general, provides good access was

Usability: (Cronbach α = 0.97)
has a simple layout for its contents was
is easy to use was
is well organized was
has a clear design was
In general, is user-friendly was

Navigation: (Cronbach α = 0.86, interitem correlation = 0.75)
is easy to go back and forth between pages was
provides a few clicks to locate information was
In general, is easy to navigate was

Figure F1    The Structure of the Measurement Model

Appendix D.    Measurement Scales for Disconfirmation

All items were measured on a continuous 11-point semantic differential scale, where 0 = much lower than you thought, 5 = the same as you expected, and 10 = much higher than you thought. (The Cronbach alpha is reported for each factor. Additionally, the interitem correlations are reported for two-item factors.)

Disconfirmation in Information Quality
We are interested in knowing how the Web site performed compared to your expectations in terms of the following features. The Web site's performance in providing information is:

Understandability: (Cronbach α = 0.94)
clear in meaning was
easy to comprehend was
easy to read was
In general, understandable for you in making your purchase decision was


Table F1    Factor Loadings, t Values, and R2 for First-Order Factors in the Measurement Model

                                           Expectation             Performance             Disconfirmation
Items                                 Loading  t value  R2     Loading  t value  R2     Loading  t value  R2
Understandability
Clear in meaning (underst-i1)          1.00     0.0    0.88    1.00     0.0    0.89    1.00     0.0    0.85
Easy to understand (underst-i2)        0.98    45.5    0.92    1.02    43.0    0.94    1.02    40.8    0.89
Easy to read (underst-i3)              0.90    36.2    0.78    0.96    36.8    0.80    0.98    30.9    0.80
Reliability
Trustworthy (reliab-i1)                1.00     0.0    0.87    1.00     0.0    0.88    1.00     0.0    0.83
Accurate (reliab-i2)                   1.03    50.7    0.93    1.07    53.5    0.93    1.07    40.4    0.90
Credible (reliab-i3)                   1.07    51.0    0.95    1.08    55.1    0.95    0.97    27.1    0.84
Usefulness
Informative (useful-i1)                1.00     0.0    0.89    1.00     0.0    0.87    1.00     0.0    0.88
Valuable (useful-i2)                   1.03    38.5    0.90    1.01    51.3    0.94    1.01    43.1    0.93
Access
Responsive (access-i1)                 1.00     0.0    0.56    1.00     0.0    0.74    1.00     0.0    0.73
Quick loads (access-i2)                1.26    17.6    0.79    0.90    19.8    0.59    0.89    18.1    0.58
Usability
Simple layout (usability-i1)           1.00     0.0    0.73    1.00     0.0    0.79    1.00     0.0    0.79
Easy to use (usability-i2)             1.19    33.1    0.87    1.15    32.6    0.91    1.09    29.6    0.84
Well organized (usability-i3)          1.26    31.8    0.93    1.15    33.0    0.96    1.11    38.2    0.90
Clear design (usability-i4)            1.24    37.6    0.93    1.08    31.5    0.92    1.06    34.9    0.87
Navigation
Easy to go back and forth (navigation-i1)  1.00  0.0  0.84    1.00     0.0    0.63    1.00     0.0    0.72
A few clicks (navigation-i2)           1.03    37.5    0.88    1.13    20.7    0.91    1.03    19.7    0.65

Table F2

Factor Loadings, T Values, and R² for Satisfaction Factors in the Measurement Model

Items                                     Loading  T value    R²
Information-Quality Satisfaction
  Satisfied (Inf-sat-i1)                   1.00      0.0     0.93
  Pleased (Inf-sat-i2)                     0.97     57.0     0.95
  Contented (Inf-sat-i3)                   0.96     34.0     0.80
  Delighted (Inf-sat-i4)                   0.99     43.2     0.88
System-Quality Satisfaction
  Satisfied (Sys-sat-i1)                   1.00      0.0     0.96
  Pleased (Sys-sat-i2)                     0.99     99.2     0.97
  Contented (Sys-sat-i3)                   0.93     37.9     0.83
  Delighted (Sys-sat-i4)                   0.95     49.6     0.89
Overall Satisfaction
  Satisfied (Sat-i1)                       1.00      0.0     0.94
  Pleased (Sat-i2)                         0.97     46.7     0.95
  Contented (Sat-i3)                       0.94     38.0     0.87
  Delighted (Sat-i4)                       0.94     39.1     0.92
  Will recommend to friends (Sat-i5)       1.06     40.6     0.89
  Will use the site again (Sat-i6)         1.09     34.0     0.83

Table F3

Factor Loadings (T Values) for Second-Order Factors in the Measurement Model

First-Order Factors      Expectation     Performance     Disconfirmation
Information Quality
  Understandability      1.00 (0.0)      1.00 (0.0)      1.00 (0.0)
  Reliability            1.18 (21.7)     1.03 (21.7)     0.98 (19.6)
  Usefulness             1.17 (21.8)     1.28 (27.1)     1.21 (21.9)
System Quality
  Access                 1.00 (0.0)      1.00 (0.0)      1.00 (0.0)
  Usability              1.14 (13.6)     1.00 (23.7)     0.94 (21.2)
  Navigation             1.26 (15.1)     0.92 (16.1)     0.83 (16.9)
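A note on reading Tables F1-F3: the first item of each factor has a loading fixed at 1.00 and a t value of 0.0, a convention that sets the latent factor's scale, which is why the remaining unstandardized loadings can exceed one while every R² stays below one. The following is a minimal numeric sketch of that relationship; the variances used are hypothetical and are not the study's data.

```python
import math

def r_squared(loading: float, factor_var: float, item_var: float) -> float:
    """Item reliability (R^2) implied by an unstandardized CFA loading.

    The standardized loading is loading * sd(factor) / sd(item);
    R^2 is its square.
    """
    std_loading = loading * math.sqrt(factor_var) / math.sqrt(item_var)
    return std_loading ** 2

# Hypothetical numbers: an unstandardized loading of 1.26 combined with
# factor variance 0.50 and item variance 0.85 still implies an item
# reliability below one.
print(f"R^2 = {r_squared(1.26, 0.50, 0.85):.2f}")  # prints R^2 = 0.93
```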


Reliability: (Cronbach 0.95)
trustworthy was
accurate was
credible was
In general, reliable for making your purchase decision was

Usefulness: (Cronbach 0.95, interitem correlation 0.90)
informative to your purchase decision was
valuable to making your purchase decision was
In general, useful in your purchase decision was

Disconfirmation in System Quality
The performance of the Web site that:

Access: (Cronbach 0.79, interitem correlation 0.65)
is responsive to your request was
quickly loads all the text and graphics was
In general, provides good access was

Usability: (Cronbach 0.96)
has a simple layout for its contents was
is easy to use was
is well organized was
is a clear design was
In general, is user-friendly was

Navigation: (Cronbach 0.81, interitem correlation 0.68)
is easy to go back and forth between pages was
provides a few clicks to locate information was
In general, is easy to navigate was
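The reliability statistics above can be cross-checked: for a two-item factor, the standardized Cronbach alpha reduces to 2r/(1 + r), which reproduces the reported pairs (e.g., an interitem correlation of 0.65 gives alpha of about 0.79 for Access). A small sketch follows; the ratings matrix is hypothetical, since the raw responses are not reproduced here.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, k_items) score matrix."""
    k = items.shape[1]
    sum_item_var = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - sum_item_var / total_var)

def alpha_from_interitem_r(r: float) -> float:
    """Standardized alpha of a two-item factor: 2r / (1 + r)."""
    return 2 * r / (1 + r)

# The two-item pairs reported above are internally consistent:
# Usefulness (r = 0.90), Access (r = 0.65), Navigation (r = 0.68).
for r, alpha in [(0.90, 0.95), (0.65, 0.79), (0.68, 0.81)]:
    assert abs(alpha_from_interitem_r(r) - alpha) < 0.01

# Hypothetical ratings (6 respondents x 2 items) on the 0-10 scale.
ratings = np.array([[7., 8.], [5., 5.], [9., 8.], [4., 6.], [6., 6.], [8., 9.]])
print(f"alpha = {cronbach_alpha(ratings):.2f}")  # prints alpha = 0.90
```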

Appendix E.

Measurement Scales for Satisfaction with the Information and Features of the Web Site

All items (except the last item) are measured on a continuous 11-point semantic differential scale.

Satisfaction with Information Quality (Cronbach 0.97)
Only based on the information provided by the assigned Web site, please indicate your views regarding the overall quality of information in making your purchase decision. After using the Web site, the information that you obtained made you:
Very dissatisfied vs. Very satisfied
Very displeased vs. Very pleased
Frustrated vs. Contented
Disappointed vs. Delighted

Satisfaction with System Quality (Cronbach 0.98)
Only based on the information provided by the assigned Web site, please indicate your views regarding the overall quality of the Web site's features in making your purchase decision. In terms of the features of the Web site that provide the information you need, using the Web site made you:
Very dissatisfied vs. Very satisfied
Very displeased vs. Very pleased
Frustrated vs. Contented
Disappointed vs. Delighted

Overall Satisfaction (Cronbach 0.98)
After using this Web site, I am . . . Very dissatisfied vs. Very satisfied
After using this Web site, I am . . . Very displeased vs. Very pleased
Using this Web site made me . . . Frustrated vs. Contented
After using this Web site, I am . . . Terrible vs. Delighted
Using this Web site, I . . . Will never recommend it to my friends vs. Will definitely recommend it to my friends
After using this Web site, I . . . Will never use it again vs. Will definitely use it again

Appendix F.

Confirmatory Factor Loadings in the Measurement Model

The measurement model is shown in Figure F1, and confirmatory factor loadings for the constructs using first-order factors are reported in Table F1. Table F2 contains the confirmatory factor loadings for satisfaction. Table F3 reports factor loadings for the constructs based on second-order factors. The factor loadings are computed within the estimated model using Mplus; the t values are reported in parentheses. The robust method of estimation in Mplus can produce factor loadings above one when loadings are high.

References

Abels, Eileen G., Marilyn Domas White, Karla Hahn. 1997. Identifying user-based criteria for Web pages. Internet Res.: Electronic Networking Appl. Policy 7(4) 252-262.
Alba, Joseph, John Lynch, Barton Weitz, Chris Janiszewski, Richard Lutz, Alan Sawyer, Stacy Wood. 1997. Interactive home shopping: Consumer, retailer, and manufacturer incentives to participate in electronic marketplaces. J. Marketing 61(3) 38-53.
Alwin, Duane. 1973-1974. Approaches to the interpretation of relationships in the multitrait-multimethod matrix. Sociological Methodology 79-105.
Bagozzi, Richard, Youjae Yi. 1991. Multitrait-multimethod matrices in consumer research. J. Consumer Res. 17(4) 426-439.
Bagozzi, Richard, Youjae Yi, Lynn Phillips. 1991. Assessing construct validity in organizational research. Admin. Sci. Quart. 36(3) 421-458.
Bailey, James, Sammy W. Pearson. 1983. Development of a tool for measuring and analyzing computer user satisfaction. Management Sci. 29(5) 530-545.
Bakos, Yannis. 1998. The emerging role of electronic marketplaces on the Internet. Comm. ACM 41(8) 35-42.
Bollen, K. A. 1989. Structural Equations with Latent Variables. John Wiley, New York.
Boudreau, Marie-Claude, David Gefen, Detmar Straub. 2001. Validation in IS research: A state-of-the-art assessment. MIS Quart. 25(1) 1-24.


Boulding, William, Ajay Kalra, Richard Staelin, Valarie A. Zeithaml. 1993. A dynamic process model of service quality: From expectations to behavioral intentions. J. Marketing Res. 30(February) 7-27.
Brancheau, J. C., Brian D. Janz, James C. Wetherbe. 1996. Key issues in information systems management: 1994-1995 SIM Delphi results. MIS Quart. 11(1) 225-241.
Bruce, Harry. 1998. User satisfaction with information seeking on the Internet. J. Amer. Soc. Inform. Sci. 49(6) 541-556.
Cadotte, Ernest R., Robert B. Woodruff, Roger L. Jenkins. 1987. Expectations and norms in models of consumer satisfaction. J. Marketing Res. 24(3) 305-314.
Campbell, Donald T., Donald W. Fiske. 1959. Convergent and discriminant validation by the multitrait-multimethod matrix. Psych. Bull. 56(2) 81-105.
Chin, Wynne W. 1998. Issues and opinion on structural equation modeling. MIS Quart. 22(1) vii-xvi.
Churchill, Gilbert A., Jr. 1979. A paradigm for developing better measures of marketing constructs. J. Marketing Res. 16(1) 64-73.
Churchill, Gilbert A., Jr., Carol Surprenant. 1982. An investigation into the determinants of customer satisfaction. J. Marketing Res. 19(4) 491-504.
Cook, Thomas D., Donald T. Campbell. 1979. Quasi-Experimentation: Design and Analysis Issues for Field Settings. Houghton Mifflin, Boston, MA.
Cronin, J. Joseph, Jr., Steven A. Taylor. 1992. Measuring service quality: A reexamination and extension. J. Marketing 56(3) 55-68.
Davis, Fred D. 1989. Perceived usefulness, perceived ease of use, and user acceptance of information technology. MIS Quart. 13(3) 319-339.
Davis, Fred D., Richard P. Bagozzi, Paul R. Warshaw. 1989. User acceptance of computer technology: A comparison of two theoretical models. Management Sci. 35(8) 982-1003.
DeLone, W. H., E. R. McLean. 1992. Information systems success: The quest for the dependent variable. Inform. Systems Res. 3(1) 60-95.
Doll, William J., Gholamreza Torkzadeh. 1988. The measurement of end-user computing satisfaction. MIS Quart. 12(2) 259-274.
Doll, William J., Anthony Hendrickson, Xiaodong Deng. 1998. Using Davis's perceived usefulness and ease-of-use instruments for decision making: A confirmatory and multigroup invariance analysis. Decision Sci. 29(4) 839-869.
Doll, William J., Weidong Xia, Gholamreza Torkzadeh. 1994. A confirmatory factor analysis of the end-user computing satisfaction instrument. MIS Quart. 18(4) 453-461.
Dumas, Joseph S., Janice C. Redish. 1993. A Practical Guide to Usability Testing. American Institutes for Research, Ablex Publishing Corporation, Norwood, NJ.
Eighmey, John. 1997. Profiling user responses to commercial Web sites. J. Advertising Res. (June) 59-66.
Eighmey, John, Lola McCord. 1998. Adding value in the information age: Uses and gratifications of sites on the World Wide Web. J. Bus. Res. 41(3) 187-194.
Fornell, C., D. F. Larcker. 1981. Evaluating structural equation models with unobservable variables and measurement error. J. Marketing Res. 18(1) 39-50.

Gefen, David, Detmar Straub, Marie-Claude Boudreau. 2000. Structural equation modeling and regression: Guidelines for research practice. Comm. Assoc. Inform. Systems 4(August) 1-79.
GVU WWW Survey. 1998. www.gvu.gatech.edu/user_surveys/survey-199810.
Ives, Blake, Margrethe H. Olson. 1984. User involvement and MIS success: A review of research. Management Sci. 30(5) 586-603.
Jarvenpaa, Sirkka L., Peter A. Todd. 1996-1997. Consumer reactions to electronic shopping on the World Wide Web. Internat. J. Electronic Commerce 1(2) 59-88.
Katerattanakul, Pairin, Keng Siau. 1999. Measuring information quality of Web sites: Development of an instrument. Proc. Internat. Conf. Inform. Systems, Charlotte, NC, 279-285.
Kettinger, William J., Choong C. Lee. 1997. Pragmatic perspectives on the measurement of information systems service quality. MIS Quart. 21(2) 223-240.
King, William R., Barry J. Epstein. 1983. Assessing information system value: An experimental study. Decision Sci. 14(1) 34-45.
Kline, Rex B. 1998. Principles and Practice of Structural Equation Modeling. Guilford Press, New York.
Kotler, Philip. 1997. Marketing Management: Analysis, Planning, Implementation, and Control. Prentice Hall, Englewood Cliffs, NJ.
Larcker, David F., V. Parker Lessig. 1980. Perceived usefulness of information: A psychometric examination. Decision Sci. 11 121-134.
LaTour, Stephen A. 1979. Conceptual and methodological issues in consumer satisfaction research. William L. Wilkie, ed. Advances in Consumer Research. Association for Consumer Research, Ann Arbor, MI, 431-437.
Lohse, Gerald L., Peter Spiller. 1998. Electronic shopping. Comm. ACM 41(7) 81-87.
Moore, Gary C., Izak Benbasat. 1991. Development of an instrument to measure the perceptions of adopting an information technology innovation. Inform. Systems Res. 2(3) 192-222.
Muthen, B. O. 1984. A general structural equation model with dichotomous, ordered categorical, and continuous latent variable indicators. Psychometrika 49(1) 115-132.
Muthen, B. O., Linda Muthen. 2001. Mplus: The Comprehensive Modeling Program for Applied Researchers: User's Guide. Muthen and Muthen, Los Angeles, CA.
Novak, Thomas P., Donna L. Hoffman, Yiu-Fai Yung. 2000. Measuring the customer experience in online environments: A structural modeling approach. Marketing Sci. 19(1) 22-42.
Nunnally, J. C. 1967. Psychometric Theory. McGraw-Hill, New York.
Oliver, Richard L. 1977. Effect of expectation and disconfirmation on postexposure product evaluations: An alternative interpretation. J. Appl. Psych. 62(4) 480-486.
Oliver, Richard L. 1980. A cognitive model of the antecedents and consequences of satisfaction decisions. J. Marketing Res. 17 460-469.
Oliver, Richard L. 1989. Processing of the satisfaction response in consumption: A suggested framework and research propositions. J. Consumer Satisfaction, Dissatisfaction and Complaining Behavior 2 1-16.
Olson, Jerry, Philip A. Dover. 1979. Disconfirmation of consumer expectations through product trial. J. Appl. Psych. 64(2) 179-189.


Palmer, Jonathan W., David A. Griffith. 1998. An emerging model of Web site design for marketing. Comm. ACM 41(3) 45-51.
Patterson, Paul G., Lester W. Johnson, Richard A. Spreng. 1997. Modeling the determinants of customer satisfaction for business-to-business professional services. Acad. Marketing Sci. J. 25(1) 4-17.
Pedhazur, E. J., Liora Pedhazur Schmelkin. 1991. Measurement, Design, and Analysis: An Integrated Approach. Lawrence Erlbaum Associates, Hillsdale, NJ.
Perdue, Barbara C., John O. Summers. 1986. Checking the success of manipulations in marketing experiments. J. Marketing Res. 23(4) 317-326.
Pfaff, Martin. 1977. The index of consumer satisfaction: Measurement problems and opportunities. H. Keith Hunt, ed. Conceptualization and Measurement of Consumer Satisfaction and Dissatisfaction. Marketing Science Institute, Cambridge, MA, 36-71.
Pitt, Leyland F., Richard T. Watson, C. Bruce Kavan. 1995. Service quality: A measure of information systems effectiveness. MIS Quart. 19(2) 173-187.
Pitt, Leyland F., Richard T. Watson, C. Bruce Kavan. 1997. Measuring information systems service quality: Concerns for a complete canvas. MIS Quart. 21(2) 209-221.
Saracevic, Tefko, Paul Kantor, Alice Y. Chamis, Donna Trivison. 1988. A study of information seeking and retrieving, I: Background and methodology. J. Amer. Soc. Inform. Sci. 39(3) 161-177.
Schubert, Petra, Dorian Selz. 1998. Web assessment: Measuring the effectiveness of electronic commerce sites going beyond traditional marketing paradigms. Proc. 31st HICSS Conf., Hawaii. www.businessmedia.org/businessmedia/businessmedia.nsf/pages/publications.html.
Seddon, Peter B. 1997. A respecification and extension of the DeLone and McLean model of IS success. Inform. Systems Res. 8(3) 240-253.
Segars, Albert H. 1997. Assessing the unidimensionality of measurement: A paradigm and illustration within the context of information systems research. Omega 25(1) 107-121.
Segars, Albert H., Varun Grover. 1998. Strategic information systems planning success: An investigation of the construct and its measurement. MIS Quart. 22(2) 139-148.
Spreng, Richard A., Scott B. MacKenzie, Richard W. Olshavsky. 1996. A reexamination of the determinants of consumer satisfaction. J. Marketing 60(3) 15-32.
Straub, Detmar W. 1989. Validating instruments in MIS research. MIS Quart. 13(2) 147-169.
Straub, Detmar W., Marie-Claude Boudreau, David Gefen. 2002b. Validation heuristics for IS positivist research. Working paper, Georgia State University, Atlanta, GA.
Straub, Detmar W., Donna Hoffman, Bruce Weber, Charles Steinfield. 2002a. Measuring e-commerce in Net-enabled organizations. Inform. Systems Res. 13(2) 115-124.
Swan, John E., I. Frederick Trawick. 1980. Inferred and perceived disconfirmation in consumer satisfaction. Marketing in the 80's: Proc. AMA Educators' Conf., Chicago, IL, 97-101.
Szajna, Bernadette, Richard W. Scamell. 1993. The effects of information system user expectations on their performance and perceptions. MIS Quart. 17(4) 493-516.
Szymanski, David M., David H. Henard. 2001. Customer satisfaction: A meta-analysis of the empirical evidence. J. Acad. Marketing Sci. 29(1) 16-35.
Szymanski, David M., Richard T. Hise. 2000. e-Satisfaction: An initial examination. J. Retailing 76(3) 309-322.
Teas, R. Kenneth. 1993. Expectations, performance evaluation, and consumers' perceptions of quality. J. Marketing 57(4) 18-34.
Tse, David K., Peter C. Wilton. 1988. Models of consumer satisfaction formation: An extension. J. Marketing Res. 25(2) 204-212.
Van Dyke, Thomas P., Leon A. Kappelman, Victor R. Prybutok. 1997. Measuring information systems quality: Concerns on the use of the SERVQUAL questionnaire. MIS Quart. 21(2) 195-208.
Venkatesh, Viswanath, Fred D. Davis. 1996. A model of the antecedents of perceived ease of use: Development and test. Decision Sci. 27(3) 451-481.
Venkatesh, Viswanath, Fred D. Davis. 2000. A theoretical extension of the technology acceptance model: Four longitudinal field studies. Management Sci. 46(2) 186-204.
Wang, R., Diane M. Strong. 1996. Beyond accuracy: What data quality means to data consumers. J. Management Inform. Systems 12(4) 5-34.
Watson, Richard, Leyland F. Pitt, C. Bruce Kavan. 1998. Measuring information systems service quality: Lessons from two longitudinal case studies. MIS Quart. 22(1) 61-79.
Westbrook, Robert A. 1980. A rating scale for measuring product/service satisfaction. J. Marketing 44(9) 68-72.
Wilkerson, Gene L., Lisa T. Bennett, Kevin M. Oliver. 1997. Evaluation criteria and indicators of quality for Internet resources. Ed. Tech. 37(May-June) 52-59.
Wolfinbarger, Mary, Mary C. Gilly. 2001. Shopping online for freedom, control, and fun. California Management Rev. 43(2) 34-55.
Zhang, Xiaoni, Kellie B. Keeling, Robert J. Pavur. 2000. Information quality of commercial Web site home pages: An explorative analysis. Proc. Internat. Conf. Inform. Systems, Brisbane, Australia, 164-175.
Zmud, Robert E. 1978. An empirical investigation of the dimension of the concept of information. Decision Sci. 9(2) 187-195.
Zmud, Robert E., Andrew Boynton. 1991. Survey measures and instruments in MIS: Inventory and appraisal. K. Kraemer, J. Cash, J. Nunamaker, eds. The Information Systems Research Challenge: Survey Research Methods, Vol. 3. Harvard Business School, Boston, MA.

Detmar W. Straub, Senior Editor. This paper was received on January 14, 2001, and was with the authors 3 months for 3 revisions.
