
HEdPERF versus SERVPERF: the quest for ideal measuring instrument of service quality in higher education sector
Firdaus Abdullah
MARA University of Technology, Selangor, Malaysia
Abstract
Purpose - The purpose of this paper is to empirically test a new industry-specific scale, HEdPERF (Higher Education PERFormance), designed to capture the authentic determinants of service quality within the higher education sector.
Design/methodology/approach - The primary goal of this research was to test and compare the relative efficacy of HEdPERF against SERVPERF in order to determine which instrument had the superior measuring capability.
Findings - In terms of unidimensionality, reliability and validity, HEdPERF explained variance within the higher education setting better than SERVPERF.
Research limitations/implications - Since this study only examined the respective utilities of each instrument within a single industry, in only one national setting, any suggestion that HEdPERF is generally superior would still be premature.
Practical implications - The current findings provide some important insights into how instruments of service quality compare with one another in a typical higher education context.
Originality/value - The paper develops critical insights into the comparative evaluation of service quality measurement instruments.
Keywords - Service quality assurance, Higher education, Performance measurement (quality)
Paper type - Research paper


Introduction
Service industries are playing an increasingly important role in the overall economy of many nations. In today's world of global competition, rendering quality service is a key to survival and success, and many experts concur that the most powerful competitive trend currently shaping marketing and business strategy is service quality (Zeithaml et al., 1996). Since the 1980s, service quality has been linked with increased profitability, and it is seen as providing an important competitive advantage by generating repeat sales, positive word-of-mouth feedback, customer loyalty and competitive product differentiation. As Zeithaml and Bitner (1996, p. 76) point out, "...the issue of highest priority today involves understanding the impact of service quality on profit and other financial outcomes of the organisation". Service quality has since emerged as a pervasive strategic force and a key strategic issue on management's agenda (Bowers, 1997). It is no surprise that practitioners and academics alike are keen on accurately measuring service quality in order to better understand its essential antecedents and consequences and, ultimately, to establish methods for improving quality to achieve competitive advantage and build customer loyalty (Bitner, 1993). The pressures driving successful organisations toward top-quality services make the measurement of service quality and its subsequent


management of utmost importance (Webster, 1989). However, implementation of such a strategy is complicated by the elusive nature of the service quality construct, which is extremely difficult to define and measure (Parasuraman et al., 1985; Carman, 1990; Bolton and Drew, 1991b). Although researchers have devoted a great deal of attention to service quality, some unresolved issues still need to be addressed, the most controversial of which concerns the measurement instrument (Babakus and Boller, 1992; Buttle, 1996; Robinson, 1999).

Attempts to define an evaluation standard independent of any particular service context have stimulated several methodologies. In the last decade, the emergence of diverse measurement instruments such as SERVQUAL (Parasuraman et al., 1988), SERVPERF (Cronin and Taylor, 1992) and evaluated performance (EP) (Teas, 1993a) has contributed enormously to the development of service quality research. SERVQUAL operationalises service quality by comparing perceptions of the service received with expectations, while SERVPERF retains only the perceptions of service quality. The EP scale, on the other hand, measures the gap between perceived performance and the ideal amount of a feature, rather than the customer's expectations. Diverse studies using these scales have demonstrated difficulties arising from the conceptual or theoretical component as much as from the empirical component (Carman, 1990; Babakus and Boller, 1992; Boulding et al., 1993; Quester et al., 1995). Nevertheless, many authors concur that customers' assessments of continuously provided services may depend solely on performance, suggesting that performance-based measures explain more of the variance in an overall measure of service quality (Oliver, 1989; Bolton and Drew, 1991a, b; Cronin and Taylor, 1992; Boulding et al., 1993; Quester et al., 1995). These findings are consistent with other research comparing these methods across service activities, confirming that SERVPERF (performance-only) results in more reliable estimations, greater convergent and discriminant validity, greater explained variance, and consequently less bias than the SERVQUAL and EP scales (Cronin and Taylor, 1992; Parasuraman et al., 1994a; Quester et al., 1995; Llusar and Zornoza, 2000).

Whilst its impact in the service quality domain is undeniable, SERVPERF, being a generic measure of service quality, may not be a wholly adequate instrument for assessing perceived quality in as unique a sector as higher education. Firdaus (2004) therefore proposed HEdPERF (Higher Education PERFormance), a new and more comprehensive performance-based measuring scale that attempts to capture the authentic determinants of service quality within the higher education sector. The 41-item instrument has been empirically tested for unidimensionality, reliability and validity using both exploratory and confirmatory factor analysis. The primary issue addressed by this paper is therefore the comparison of different measures of the service quality construct within a single empirical study, utilising customers of a single industry, namely higher education. Specifically, the ability of the more concise HEdPERF scale is compared with that of two alternatives, namely the SERVPERF instrument and the merged HEdPERF-SERVPERF as a moderating scale. The goal is to assess the relative strengths and weaknesses of each instrument in order to determine which has the superior measurement capability in terms of unidimensionality, reliability, validity and explained variance of service quality.

Research foundations
Many researchers (Parasuraman et al., 1985; Carman, 1990; Bolton and Drew, 1991b) concur that service quality is an elusive concept, and there is considerable debate about how best to conceptualise the phenomenon; accordingly, they agree that a comprehensive definition of service quality is notoriously difficult to produce. Lewis and Booms (1983, p. 100) were perhaps the first to define service quality as a "...measure of how well the service level delivered matches the customer's expectations". Thereafter, there seems to be broad consensus that service quality is an attitude of overall judgement about service superiority, although the exact nature of this attitude is still hazy. Some suggest that it stems from a comparison of performance perceptions with expectations (Parasuraman et al., 1988), while others argue that it derives from a comparison of performance with ideal standards (Teas, 1993a) or from perceptions of performance alone (Cronin and Taylor, 1992).

In terms of measurement methodologies, a review of the literature provides plenty of service quality evaluation scales. Some stem from conceptual models produced to understand the evaluation process (Parasuraman et al., 1985), and others come from empirical analysis and experimentation in different service sectors (Cronin and Taylor, 1992; Franceschini and Rossetto, 1997b; Parasuraman et al., 1988). The most widely used methods for measuring perceived quality can be characterised as primarily quantitative multi-attribute measurements. Within the attribute-based methods a great number of variants exist, and among these the SERVQUAL and SERVPERF instruments have attracted the greatest attention. Generally, most researchers acknowledge that customers have expectations and that these serve as standards or reference points for evaluating the performance of an organisation. However, the unresolved issue of expectations as a determinant of perceived service quality has resulted in two conflicting measurement paradigms: the disconfirmation paradigm (SERVQUAL), which compares perceptions of the service received with expectations, and the perception paradigm (SERVPERF), which retains only the perceptions of service quality. These instruments share the same concept of perceived quality; the main difference between the scales lies in the formulation adopted for their calculation, and more concretely in the utilisation of expectations and the type of expectations that should be used.

Most research studies do not support the five-factor structure of SERVQUAL posited by Parasuraman et al. (1988), and administering expectation items is also considered unnecessary (Carman, 1990; Parasuraman et al., 1991a; Babakus and Boller, 1992). Cronin and Taylor (1992) were particularly vociferous in their critique, developing their own performance-based measure dubbed SERVPERF. In fact, the SERVPERF scale comprises the unweighted perception components of SERVQUAL: 22 perception items, excluding any consideration of expectations. In their empirical work in four industries, Cronin and Taylor (1992) found that the unweighted SERVPERF measure (performance-only) performs better than any other measure of service quality, and that it has greater predictive power (the ability to provide an accurate service quality score) than SERVQUAL. They argue that current performance best reflects a customer's perception of service quality, and that expectations are not part of this concept. Likewise, Boulding et al. (1993) reject the value of an expectations-based SERVQUAL, concurring that service quality is influenced by perceptions alone.
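The computational difference between the two paradigms is small but consequential. The following minimal sketch (Python, with hypothetical 22-item Likert responses for one respondent) contrasts SERVQUAL's gap score with SERVPERF's perception-only score:

    import numpy as np

    # Hypothetical 7-point Likert ratings over the 22 SERVQUAL/SERVPERF items.
    rng = np.random.default_rng(0)
    expectations = rng.integers(4, 8, size=22)  # E: expected service level
    perceptions = rng.integers(1, 8, size=22)   # P: perceived performance

    # Disconfirmation paradigm (SERVQUAL): quality = perception - expectation.
    servqual_score = (perceptions - expectations).mean()

    # Perception paradigm (SERVPERF): quality = perception only.
    servperf_score = perceptions.mean()

    print(f"SERVQUAL (P - E): {servqual_score:.2f}")
    print(f"SERVPERF (P only): {servperf_score:.2f}")

Dropping the expectations battery also halves the number of items administered, which is part of the practical appeal of performance-only measurement noted above.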


Quester et al. (1995) performed a similar analysis to Cronin and Taylor in the Australian advertising industry, and their empirical tests showed that SERVPERF performed best, while SERVQUAL performed worst, although the differences were small. Teas (1993a), on the other hand, discusses the conceptual and operational difficulties of using the expectations-minus-performance approach, with a particular emphasis on expectations. His empirical test subsequently produced two alternative measures of perceived service quality, namely EP and normed quality (NQ). He concludes that the EP instrument, which measures the gap between perceived performance and the ideal amount of a feature rather than the customer's expectations, outperforms both SERVQUAL and NQ.

A review of the service quality literature brings forward diverse arguments concerning the advantages and disadvantages of these instruments. In general, the arguments refer to the characteristics of the scales, notably their reliability and validity. Recently, Llusar and Zornoza (2000) confirmed that SERVPERF results in more reliable estimations, greater convergent and discriminant validity, greater explained variance, and consequently less bias than the EP scale. These results are consistent with earlier research that compared these methods across service activities (Cronin and Taylor, 1992; Parasuraman et al., 1994a). In fact, the marketing literature appears to offer considerable support for the superiority of simple performance-based measures of service quality (Mazis et al., 1975; Churchill and Surprenant, 1982; Carman, 1990; Bolton and Drew, 1991a, b; Boulding et al., 1993; Teas, 1993a; Quester et al., 1995).

Research methodology
Research objectives
On the basis of the conceptual and operational concerns associated with the generic measures of service quality, the present research attempts to compare and contrast empirically the HEdPERF scale against two alternatives, namely the SERVPERF and the merged HEdPERF-SERVPERF scales. The primary goal is to assess the relative strengths and weaknesses of each instrument in order to determine which has the superior measurement capability in terms of unidimensionality, reliability, validity and explained variance of service quality. The findings were eventually used in transforming HEdPERF into an ideal measuring instrument of service quality for higher education. The various steps involved in this comparative study are shown by means of a flow chart (Figure 1).

Research design
Data were collected by means of a structured questionnaire comprising four sections, namely A, B, C and D (see the Appendix, Tables AI-AIII). Section A contained nine questions pertaining to the student respondent profile, while Sections B and C required respondents to evaluate the service components of their tertiary institution; only perceptions data were collected and analysed. Specifically, Section B consisted of 22 perception items extracted from the original SERVPERF scale (Cronin and Taylor, 1992) and modified to fit the higher education context. Section C, on the other hand, comprised 41 items extracted from the original HEdPERF (Firdaus, 2004), a scale uniquely developed to embrace different aspects of a tertiary institution's service offering.


Figure 1. Comparing HEdPERF, SERVPERF and HEdPERF-SERVPERF: the quest for ideal measuring instrument


As the items were generated and validated within the higher education context, no modification was required. All the items in Sections B and C were presented as statements on the questionnaire, with the same rating scale used throughout, measured on a seven-point Likert-type scale ranging from 1 = "strongly disagree" to 7 = "strongly agree". In addition to the main scale addressing individual items, respondents were asked in Section D to provide an overall rating of service quality, satisfaction level and future visit intentions. There were also three open-ended questions allowing respondents to give their personal views on how any aspect of the service could be improved.

The draft questionnaire was subjected to pilot testing with a total of 30 students, who were asked to comment on any perceived ambiguities, omissions or errors in the draft. The feedback received was rather ambiguous, thus only minor changes were made; for instance, technical jargon was rephrased to ensure clarity and simplicity. The revised questionnaire was subsequently submitted to three experts (an academician, a researcher and a practitioner) for feedback before being administered in a full-scale survey. These experts indicated that the draft questionnaire was rather lengthy, which in fact coincided with the preliminary feedback from students. Nevertheless, in terms of the number of items in the questionnaire, the current study conforms broadly with similar research (Cronin and Taylor, 1992; Teas, 1993a; Lassar et al., 2000; Mehta et al., 2000; Robledo, 2001) that attempted to compare various instruments for measuring service quality.

In the subsequent full-scale survey, data were collected from students of six higher learning institutions (two public universities, one private university and three private colleges) in Malaysia between January and March 2004. Data were collected using the personal-contact approach suggested by Sureshchandar et al. (2002), whereby contact persons (the Registrar or Assistant Registrar) were approached personally and the survey explained in detail. The final questionnaire, together with a cover letter, was then handed personally or mailed to the contact persons, who in turn distributed it randomly to students within their respective institutions. A total of 560 questionnaires were distributed to the six tertiary institutions. Of these, 390 were returned and nine were discarded due to incomplete responses, leaving 381 usable questionnaires and a response rate of 68 per cent. A sample of 381 for a population of nearly 400,000 students in Malaysian tertiary institutions is in line with the generalised scientific guidelines for sample size decisions proposed by Krejcie and Morgan (1970), as the sketch below illustrates. Hence, the sample size for the analysis is N = 381.
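As a side check on the sampling guideline, the sketch below implements the Krejcie and Morgan (1970) formula as it is commonly stated; the constants (chi-square at one degree of freedom, an assumed proportion of 0.5, a 5 per cent margin of error) are standard defaults assumed for this example rather than details reported in the paper.

    import math

    def krejcie_morgan(population, chi2=3.841, p=0.5, margin=0.05):
        """Required sample size per Krejcie and Morgan (1970).

        chi2: table value for 1 degree of freedom at the 0.05 level;
        p: assumed population proportion; margin: degree of accuracy.
        """
        num = chi2 * population * p * (1 - p)
        den = margin ** 2 * (population - 1) + chi2 * p * (1 - p)
        return math.ceil(num / den)

    # For roughly 400,000 students the guideline gives ~384, so the
    # study's 381 usable responses sit essentially at the recommended size.
    print(krejcie_morgan(400_000))  # -> 384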
Results and discussion
Developing the merged HEdPERF-SERVPERF scale
The literature appears to offer considerable support for the superiority of SERVPERF in comparison with other generic instruments. HEdPERF, on the other hand, has been empirically tested as a more comprehensive and industry-specific scale, uniquely designed for higher education. Thus, it seemed rational to combine the two finest scales in developing, possibly, a superior one, and subsequently to determine which of the three instruments had the superior measurement capability in terms of unidimensionality, reliability, validity and explained variance of service quality. In developing the new scale, factor analysis was used to determine a new dimensional structure of service quality by merging the HEdPERF and SERVPERF items.

Specifically, this technique allowed the reduction of a large number of overlapping variables to a much smaller set of factors. One critical assumption underlying the appropriateness of factor analysis is that the data matrix has sufficient correlations to justify its application (Hair et al., 1995, p. 374). A first step is visual examination of the correlations, identifying those that are statistically significant: if visual inspection reveals no substantial number of correlations greater than 0.30, then factor analysis is probably inappropriate. Here, visual inspection of the correlation matrix revealed that practically all correlations were significant at the 0.01 level, with no correlations lower than 0.30, which certainly provides an excellent basis for factor analysis. The next step involves assessing the overall significance of the correlation matrix with Bartlett's test of sphericity, which provides the statistical probability that the correlation matrix has significant correlations among at least some of the variables. The results were significant, χ²(50, N = 381) = 13,073 (p < 0.01), a clear indication of suitability for factor analysis. Another measure that quantifies the degree of intercorrelation among the variables, and hence the appropriateness of factor analysis, is the Kaiser-Meyer-Olkin (KMO) measure of sampling adequacy; both checks are sketched in code below. The KMO measure can be interpreted with the following guidelines: 0.90 or above, marvelous; 0.80 or above, meritorious; 0.70 or above, middling; 0.60 or above, mediocre; 0.50 or above, miserable; and below 0.50, unacceptable (Hair et al., 1995, p. 374). The computed KMO index was 0.89, a meritorious sign of adequacy for factor analysis (Kaiser, 1970). As for the adequacy of the sample size, there is a 7-to-1 ratio of observations to variables in this study, which falls within acceptable limits.

HEdPERF's proposed measure of service quality was a 41-item scale, consisting of 13 items adapted from SERVPERF and 28 items generated from the literature review and various qualitative research inputs, namely focus groups, a pilot test and expert validation (Firdaus, 2004). In developing the merged HEdPERF-SERVPERF scale, all 50 items (28 HEdPERF and 22 SERVPERF items) were subjected to factor analysis. The decision to include a variable in a factor was based on factor loadings greater than ±0.3. This choice was not based on any mathematical proposition but relates more to practical significance: loadings greater than ±0.3 are considered to meet the minimal level, loadings of ±0.4 are considered more important, and loadings of ±0.5 or greater are considered practically significant. According to Hair et al. (1995, p. 385), factor loadings of 0.3 and above are significant at p = 0.05 with a sample size of 350 respondents (N = 381 in this study). The scree test was used to identify the optimum number of factors to extract. It is derived by plotting the latent roots (eigenvalues) against the number of factors in their order of extraction, and the shape of the resulting curve is used to evaluate the cut-off point: the point at which the curve first begins to straighten out indicates the maximum number of factors to extract. The scree test provided four factors, which were subsequently rotated using a varimax procedure.
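Both suitability checks can be computed directly from the item correlation matrix. A minimal sketch of Bartlett's test of sphericity and the KMO index from their standard formulas follows; the data matrix is a random placeholder standing in for the actual N x 50 responses, so the printed values are illustrative only.

    import numpy as np

    def bartlett_sphericity(data):
        """Bartlett's test: H0 is that the correlation matrix is an
        identity matrix, i.e. there is no structure worth factoring."""
        n, p = data.shape
        R = np.corrcoef(data, rowvar=False)
        chi2 = -(n - 1 - (2 * p + 5) / 6) * np.log(np.linalg.det(R))
        return chi2, p * (p - 1) / 2  # statistic and degrees of freedom

    def kmo(data):
        """KMO measure: squared correlations relative to squared
        correlations plus squared partial correlations (off-diagonals)."""
        R = np.corrcoef(data, rowvar=False)
        inv_R = np.linalg.inv(R)
        d = np.sqrt(np.outer(np.diag(inv_R), np.diag(inv_R)))
        partial = -inv_R / d              # partial correlation matrix
        off = ~np.eye(R.shape[0], dtype=bool)
        r2, pr2 = (R[off] ** 2).sum(), (partial[off] ** 2).sum()
        return r2 / (r2 + pr2)

    items = np.random.default_rng(1).normal(size=(381, 50))  # placeholder data
    chi2, df = bartlett_sphericity(items)
    print(f"Bartlett chi2(df={df:.0f}) = {chi2:.1f}; KMO = {kmo(items):.2f}")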
The communalities for each variable, which represent the amount of variance accounted for by the factor solution for each variable, were also assessed to ensure acceptable levels of explanation; it was proposed that at least one-half of the variance of each variable must be accounted for.


The results showed that the communalities of 13 variables (B3, B4, B9, B10, B11, B12, B17, B22, C9, C14, C17, C20 and C38 from the questionnaire in the Appendix) were below 0.50, "...too low for having sufficient explanation" (Hair et al., 1995, p. 387). Consequently, a new factor solution was derived with the non-loading variables eliminated; the results yielded four factors, which accounted for 41.0 per cent of the variation in the new factor solution (compared with 37.2 per cent of the variance explained in the first factor solution). Table I shows the results of the factor analysis in terms of factor name, the variables loading on each factor and the variance explained by each factor.

Factor 1 - non-academic aspects. This factor contains variables that are essential to enable students to fulfil their study obligations, and it relates to duties and responsibilities carried out by non-academic staff. In other words, it is concerned with the ability and willingness of administrative or support staff to show respect, provide equal treatment and safeguard confidentiality of information. Additionally, this factor describes the importance of being approachable and accessible, having positive attitudes and good communication skills, allowing a fair amount of freedom to students, and providing services within the stipulated time frame.

Factor 2 - academic aspects. This factor represents the responsibilities of academics, and it highlights key attributes such as having a positive attitude, good communication skills, allowing sufficient consultation, and being able to provide regular feedback to students. Other important elements centre on the academic reputation of the institution, notably its ability to offer prestigious and wide-ranging programmes with flexible structures, degrees that are recognised locally and internationally, and highly educated and experienced academic staff.

Factor 3 - reliability. This factor consists of items that emphasise the ability to provide the pledged service on time, accurately and dependably. It is also concerned with the ability to fulfil promises and the willingness to solve problems in a sympathetic and reassuring manner.

Factor 4 - empathy. This factor relates to the provision of individualised and personalised attention to students, with a clear understanding of their specific and growing needs while keeping their best interests at heart.

The four-factor structure is certainly related to the determinants of service quality and supports the existing literature. Factor 1 (non-academic aspects) and Factor 2 (academic aspects) were identified as important quality indicators (Surprenant and Solomon, 1987; Crosby et al., 1990; Soutar and McNeil, 1996; Leblanc and Nguyen, 1997; Firdaus, 2004). Reliability (Factor 3) is extensively described as an important determinant of service quality across the service sector, and higher education is no exception (Parasuraman et al., 1985; Garvin, 1983; Watts, 1987, p. 56; Haywood-Farmer, 1988; Gronroos, 1988; Stewart and Walsh, 1989; Gaster, 1990, p. 75; Cronin and Taylor, 1992; Owlia and Aspinwall, 1997; Mehta et al., 2000). Empathy (Factor 4) was also identified as an equally important dimension (Parasuraman et al., 1988; Cronin and Taylor, 1992). Nevertheless, it is important to note that the four factors identified did not conform exactly to either the six-factor structure of HEdPERF or the five-factor structure of SERVPERF. In fact, the new dimensions extracted were the result of the amalgamation of the HEdPERF and SERVPERF scales, in which two factors (non-academic aspects and academic aspects) originate in HEdPERF and the other two (reliability and empathy) in SERVPERF.
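A minimal sketch of the extraction procedure just described (eigenvalues for the scree criterion, varimax rotation, communality screening) might look as follows, assuming scikit-learn's FactorAnalysis and a placeholder data matrix in place of the real responses:

    import numpy as np
    from sklearn.decomposition import FactorAnalysis

    # Placeholder N x 50 response matrix, standardised column-wise.
    X = np.random.default_rng(2).normal(size=(381, 50))
    X = (X - X.mean(axis=0)) / X.std(axis=0)

    # Scree criterion: eigenvalues of the correlation matrix in
    # descending order; extract factors up to where the curve flattens.
    eigenvalues = np.sort(np.linalg.eigvalsh(np.corrcoef(X, rowvar=False)))[::-1]
    print("First six eigenvalues:", np.round(eigenvalues[:6], 2))

    # Four factors with varimax rotation, as retained in the study.
    fa = FactorAnalysis(n_components=4, rotation="varimax").fit(X)
    loadings = fa.components_.T          # rows = items, columns = factors

    # Communality of an item = sum of its squared loadings; the study
    # dropped items below 0.50 for lacking sufficient explanation.
    communalities = (loadings ** 2).sum(axis=1)
    print(f"{(communalities < 0.50).sum()} items fall below the 0.50 threshold")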

Table I. Results of factor analysis (factor loadings)

The four factors extracted were: Factor 1, non-academic aspects; Factor 2, academic aspects; Factor 3, reliability; Factor 4, empathy. The 35 retained variables were:

1. Promises kept; 2. Sympathetic and reassuring in solving problems; 3. Dependability; 4. On-time service provision; 5. Responding to requests promptly; 6. Trust; 7. Feeling secured with the transaction; 8. Politeness; 9. Individualised attention; 10. Giving personalised attention; 11. Knowing student needs; 12. Keeping student interests at heart; 13. Knowledge in course content; 14. Showing positive attitude; 15. Good communication; 16. Feedback on progress; 17. Sufficient and convenient consultation time; 18. Excellent quality programmes; 19. Variety of programmes/specialisations; 20. Flexible syllabus and structure; 21. Reputable academic programmes; 22. Educated and experienced academicians; 23. Efficient/prompt dealing with complaints; 24. Good communication; 25. Positive work attitude; 26. Knowledge of systems/procedures; 27. Providing service within reasonable time; 28. Equal treatment and respect; 29. Fair amount of freedom; 30. Confidentiality of information; 31. Easily contacted by telephone; 32. Counseling services; 33. Students union; 34. Feedback to improve service performance; 35. Standardised and simple delivery procedures.

                                  Factor 1   Factor 2   Factor 3   Factor 4
    Eigenvalues                     10.29       2.79       2.68       1.72
    Percentage of variance           26.2        5.9        5.8        3.0
    Cumulative percentage            26.2       32.2       38.0       41.0


Comparative test of unidimensionality
A necessary condition for checking construct validity and reliability is the unidimensionality of the measure, which refers to the existence of a single construct/trait underlying a set of measures (Hattie, 1985; Anderson and Gerbing, 1991). The importance of unidimensionality has been stated succinctly by Hattie (1985, p. 49): "...that a set of items forming an instrument all measure just one thing in common is a most critical and basic assumption of measurement theory". In order to perform a comparative check of unidimensionality, a measurement model was specified for each construct (factor/dimension) of the three scales, and confirmatory factor analysis was run for all the constructs by means of structural equation modelling within the LISREL framework (Joreskog and Sorbom, 1978). Specifically, LISREL 8.3 (Scientific Software International, Chicago, IL) for Windows was used to analyse and compare the underlying factor models of the three scales, in which the individual items are examined to see how closely they represent the same construct.

Table II presents the measures of model fit for all three scales. The overall fit of each model to the data was evaluated in various ways. An exact fit of a model is indicated when the p-value for chi-square (χ²) is above a certain level (usually set to p > 0.05), as well as by other goodness-of-fit measures. Because chi-square is sensitive to sample size and tends to be significant in large samples, the relative likelihood ratio between chi-square and its degrees of freedom was used; according to Eisen et al. (1999), a relative likelihood ratio of five or less is considered an acceptable fit, a prerequisite attained by all three scales. A number of goodness-of-fit measures have been proposed to eliminate or reduce the dependence on sample size. The purpose of assessing a model's overall fit is to determine the degree to which the model is consistent with the empirical data at hand. Of the many fit indices LISREL 8.3 provides, the goodness-of-fit index (GFI), the adjusted goodness-of-fit index (AGFI), the comparative fit index (CFI), the non-normed fit index (NNFI) and the incremental fit index (IFI) were used. The GFI, an indicator of the relative amount of variances and covariances accounted for by the model, is generally considered the most reliable measure of absolute fit in most circumstances (Diamantopoulos and Siguaw, 2000, p. 88). These goodness-of-fit indices range between zero and one, with higher values indicating a better fit.
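The study's confirmatory factor analysis was run in LISREL 8.3. Purely as an illustration of the same workflow in open-source tooling, the sketch below fits a one-factor measurement model with the semopy package; the package choice, the simulated data and the item names (rel1 to rel5) are all assumptions of this example, not part of the original analysis.

    import numpy as np
    import pandas as pd
    from semopy import Model, calc_stats  # open-source SEM package

    # Simulate five indicators of one latent construct (hypothetical
    # stand-ins for the items of a single service quality dimension).
    rng = np.random.default_rng(3)
    latent = rng.normal(size=381)
    data = pd.DataFrame({f"rel{i}": 0.7 * latent + rng.normal(scale=0.5, size=381)
                         for i in range(1, 6)})

    # lavaan-style syntax: one construct measured by five items.
    model = Model("Reliability =~ rel1 + rel2 + rel3 + rel4 + rel5")
    model.fit(data)

    # calc_stats reports chi-square, df, GFI, AGFI, CFI, TLI (NNFI) and
    # RMSEA, i.e. the indices compared across the three scales in Table II.
    print(calc_stats(model).T)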
Table II. Unidimensionality check

    Measures of fit                        HEdPERF(a)   SERVPERF   HEdPERF-SERVPERF
    Chi-square (χ²) at p = 0.01              2404.57      613.89            1938.79
    Degrees of freedom (df)                      726         200                540
    Relative likelihood ratio (χ²/df)           3.31        3.07               3.59
    GFI                                         0.75        0.87               0.76
    AGFI                                        0.71        0.83               0.72
    CFI                                         0.95        0.91               0.92
    NNFI                                        0.95        0.89               0.92
    IFI                                         0.95        0.91               0.92
    RMSEA                                       0.07        0.08               0.08

Note: (a) The overall fit assessment was computed with the exclusion of the dimension Understanding due to its poor fit (RMSEA = 0.08).
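Two quantities in Table II can be checked by hand: the relative likelihood ratio is simply chi-square divided by degrees of freedom, and a standard point estimate of RMSEA follows from chi-square, df and N. The sketch below reproduces the reported ratios exactly; the RMSEA values it prints may differ from LISREL's figures in the second decimal, since packages differ in estimation details.

    import math

    # Chi-square and degrees of freedom as reported in Table II.
    scales = {
        "HEdPERF": (2404.57, 726),
        "SERVPERF": (613.89, 200),
        "HEdPERF-SERVPERF": (1938.79, 540),
    }
    N = 381

    for name, (chi2, df) in scales.items():
        ratio = chi2 / df  # relative likelihood ratio; <= 5 is acceptable
        # Standard RMSEA point estimate (Browne and Cudeck, 1993).
        rmsea = math.sqrt(max(chi2 - df, 0) / (df * (N - 1)))
        print(f"{name}: chi2/df = {ratio:.2f}, RMSEA ~ {rmsea:.3f}")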

As Table II shows, the indices for the three scales are all close to one, indicating that there is evidence of unidimensionality for the scales (Byrne, 1994, p. 117). The next measure to consider is the root mean square error of approximation (RMSEA), which measures the discrepancy per degree of freedom. In other words, it shows how well the model, with unknown but optimally chosen parameter values, would fit the population covariance matrix if it were available (Browne and Cudeck, 1993, pp. 137-8). The RMSEA is generally regarded as one of the most informative fit indices (Diamantopoulos and Siguaw, 2000, p. 95), and the criteria for approximate model fit are: RMSEA < 0.05, close fit; RMSEA 0.05 to 0.08, fair fit; RMSEA 0.08 to 0.10, poor fit (Browne and Cudeck, 1993, p. 135; Kelloway, 1998, p. 95; Chow et al., 2001). As illustrated in Table II, the RMSEA value for HEdPERF was 0.07 (with the dimension Understanding removed), evidence of a fair fit to the data, while both the SERVPERF and the moderating HEdPERF-SERVPERF scales showed a poorer fit of 0.08. It was therefore concluded that the modified HEdPERF model fits fairly well and represents a reasonably close approximation in the population.

Comparative test of reliability
Unidimensionality alone is not sufficient to ensure the usefulness of a scale; the reliability of the composite score should be assessed after unidimensionality has been acceptably established. As Gerbing and Anderson (1988, p. 187) put it, "...even a perfectly unidimensional scale would be of little or no practical use if the resultant composite score were determined primarily by measurement error, with the values of the scores widely fluctuating over repeated measurements". Reliability of a scale indicates the stability and consistency with which the instrument measures the concept. In this study, coefficient alpha (Cronbach's alpha), an estimator of internal consistency developed by Cronbach (1951), was computed for the service quality dimensions of all three instruments; a minimal sketch of the computation follows. As a guideline, an alpha value of 0.70 or above is considered the criterion for demonstrating internal consistency of both new and established scales (Nunnally, 1988, p. 96). Cronbach's alpha for the HEdPERF dimensions ranged from 0.81 to 0.92, with the exception of the dimension Understanding (α = 0.63); owing to its low reliability score, Understanding was removed as part of the scale modification process. The results indicated that the reliability scores of the modified HEdPERF were comparatively superior to SERVPERF's range of 0.68 to 0.76, and slightly better than HEdPERF-SERVPERF's range of 0.77 to 0.91. In fact, previous replication studies that compared various measuring instruments within a single setting (Carman, 1990; Finn and Lamb, 1991; Babakus and Boller, 1992) were unable to demonstrate the superiority of a single scale (Table III).
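The alpha computation reduces to a few lines; a minimal sketch with hypothetical item responses:

    import numpy as np

    def cronbach_alpha(items):
        """Cronbach's (1951) alpha for an N x k matrix of item scores:
        alpha = k/(k-1) * (1 - sum of item variances / variance of total)."""
        items = np.asarray(items, dtype=float)
        k = items.shape[1]
        item_vars = items.var(axis=0, ddof=1)
        total_var = items.sum(axis=1).var(ddof=1)
        return k / (k - 1) * (1 - item_vars.sum() / total_var)

    # Hypothetical 7-point responses to the five items of one dimension.
    dim = np.random.default_rng(4).integers(1, 8, size=(381, 5))
    print(f"alpha = {cronbach_alpha(dim):.2f}")  # judge against the 0.70 rule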

Table III. Reliability coefficients

    HEdPERF dimensions       α      SERVPERF dimensions   α      HEdPERF-SERVPERF dimensions   α
    Non-academic aspects    0.92    Responsiveness       0.68    Non-academic aspects         0.91
    Academic aspects        0.89    Assurance            0.75    Academic aspects             0.87
    Reputation              0.85    Empathy              0.76    Reliability                  0.88
    Access                  0.88    Tangible             0.73    Empathy                      0.77
    Programme issues        0.81    Reliability          0.74
    Understanding(a)        0.63

Note: (a) Dimension discarded.

Comparative test of validity
While internal consistency estimates of reliability show higher values for the modified HEdPERF scale, the next comparative test involves assessing the validity of the three instruments. Validity is the extent to which a measure or set of measures correctly represents the concept under study. For the purpose of this study, two validity tests were conducted, using the following definitions:

(1) Criterion validity concerns "...the degree of correspondence between a measure and a criterion variable, usually measured by their correlation... When the criterion exists at the same time as the measure, this is called concurrent validity" (Bollen, 1989, p. 186).

(2) Construct validity concerns the degree to which "...a measure relates to other observed variables in a way that is consistent with theoretically derived predictions" (Bollen, 1989, p. 188).

The criterion variable used to compare the three scales was the respondents' global assessment of service quality (question D1), whereas the variable used in the construct validity tests was global preference, measured by the summation of overall satisfaction and future visit intentions (questions D2 and D3, respectively). This approach concurred with recommendations from Zeithaml et al. (1996) and Teas (1993a, b), who implied that service quality can be expected to affect such factors as business growth, market share, customer preferences and customer loyalty. The degree of criterion and construct validity was subsequently computed using the pairwise correlations between the global assessment of service quality, the global preference and each of the HEdPERF, SERVPERF and HEdPERF-SERVPERF measures. The validity coefficients for the three scales are all significant at the 0.01 level. The criterion and construct validity coefficients were 0.58 and 0.57 respectively for the modified HEdPERF scale, 0.27 and 0.34 respectively for the SERVPERF scale, and 0.53 and 0.57 respectively for the moderating HEdPERF-SERVPERF scale. The results indicated that the validity coefficients for the modified HEdPERF scale were greater than those of both the SERVPERF and HEdPERF-SERVPERF scales. In other words, these findings demonstrated yet again the superiority of the modified HEdPERF in terms of criterion and construct validity (Table IV).
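Operationally, both coefficients are ordinary pairwise Pearson correlations between a summated scale score and the relevant global rating. A minimal sketch with simulated (hypothetical) scores:

    import numpy as np
    from scipy.stats import pearsonr

    rng = np.random.default_rng(5)
    scale_score = rng.normal(size=381)                 # summated scale score
    noise = lambda: rng.normal(scale=0.8, size=381)
    global_quality = 0.6 * scale_score + noise()       # question D1
    global_preference = 0.6 * scale_score + noise()    # questions D2 + D3

    r_crit, p_crit = pearsonr(scale_score, global_quality)     # criterion validity
    r_cons, p_cons = pearsonr(scale_score, global_preference)  # construct validity
    print(f"criterion r = {r_crit:.2f} (p = {p_crit:.3g}); "
          f"construct r = {r_cons:.2f} (p = {p_cons:.3g})")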

Table IV. Criterion and construct validity coefficients

    Service quality scales    Criterion validity coefficient    Construct validity coefficient
    HEdPERF(a)                0.58                              0.57
    SERVPERF                  0.27                              0.34
    HEdPERF-SERVPERF          0.53                              0.57

Note: (a) Validity coefficients were identical at 0.56 before the exclusion of the dimension Understanding.


Comparative regression analysis
The objectives of performing regression analysis in this study were twofold: (1) to assess the overall effect of the three instruments on service quality level (in other words, how well they predicted service quality); and (2) to determine the relative importance of the individual dimensions of the scales.

The effect size. The regression model considered the global assessment of service quality (question D1) as the dependent variable and the service quality scores for the individual dimensions of HEdPERF, SERVPERF and HEdPERF-SERVPERF as the independent variables. A multiple regression analysis (sketched below) was subsequently conducted to evaluate how well these scales predicted service quality level. The linear combination of the modified five-dimension HEdPERF (dimension Understanding dropped) was significantly related to the service quality level, R² = 0.35, adjusted R² = 0.34, F(5, 354) = 38.4, p < 0.01. The sample multiple correlation coefficient was 0.59, indicating that approximately 34.8 per cent of the variance of the service quality level in the sample can be accounted for by the linear combination of the five HEdPERF dimensions. Likewise, the five dimensions of SERVPERF were also significantly related to the service quality level, R² = 0.24, adjusted R² = 0.23, F(5, 354) = 22.1, p < 0.01. However, the sample multiple correlation coefficient of 0.49 was much lower than HEdPERF's, indicating that only 24 per cent of the variance of the service quality level can be accounted for by the linear combination of the five SERVPERF dimensions. As for the moderating HEdPERF-SERVPERF scale, the linear combination of the four dimensions was also significantly related to the service quality level, R² = 0.31, adjusted R² = 0.30, F(4, 355) = 28.92, p < 0.01. The sample multiple correlation coefficient was 0.55, slightly lower than HEdPERF's, indicating that approximately 30.5 per cent of the variance of service quality level can be explained by the four dimensions. The findings from the regression analysis once again demonstrated that the modified HEdPERF scale was better at explaining the variance of service quality level.
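A minimal sketch of this kind of regression with statsmodels, using hypothetical dimension scores (the simulation coefficients are placeholders, not the study's estimates):

    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(6)
    # Columns stand in for the five modified HEdPERF dimension scores.
    X = rng.normal(size=(381, 5))
    y = X @ np.array([0.1, 0.0, 0.3, 0.4, 0.2]) + rng.normal(size=381)

    model = sm.OLS(y, sm.add_constant(X)).fit()
    # R-squared / adjusted R-squared give the effect size of each scale;
    # the coefficient p-values flag which dimensions contribute significantly.
    print(round(model.rsquared, 3), round(model.rsquared_adj, 3))
    print(model.pvalues.round(3))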

Table V. Effect size and relative importance of the individual dimensions

    Measuring scales                  Standardised coefficients (β)    Significance (p)
    HEdPERF(a) (adjusted R² = 0.34)
      Non-academic aspects            -0.11                            0.39
      Academic aspects                -0.02                            0.85
      Reputation                       0.29                            0.03
      Access                           0.41                            0.01
      Programme issues                 0.17                            0.12
    SERVPERF (adjusted R² = 0.23)
      Responsiveness                  -0.06                            0.27
      Assurance                        0.28                            0.01
      Empathy                         -0.10                            0.06
      Tangible                         0.21                            0.01
      Reliability                      0.09                            0.12
    HEdPERF-SERVPERF (adjusted R² = 0.30)
      Non-academic aspects             0.26                            0.03
      Academic aspects                 0.22                            0.01
      Reliability                      0.11                            0.32
      Empathy                         -0.07                            0.10

Note: (a) The modified version with dimension Understanding removed.

The relative influence. Table V shows the results of the relative influence of the individual dimensions of the three scales. The dependent variable was the global assessment of service quality level. The resultant output of HEdPERF had an adjusted R² of 0.34 (p < 0.01) and yielded only one significant dimension, namely Access, which concurred with the findings of Firdaus (2004). Access alone accounted for 13.7 per cent (0.37² = 0.14) of the variance of service quality level, while the other dimensions contributed an additional 21.1 per cent (34.8 - 13.7 per cent). This implied that the dimensions Non-academic aspects, Academic aspects, Reputation and Programme issues did not contribute significantly towards explaining the variance in the overall rating. The SERVPERF scale, on the other hand, had an adjusted R² of 0.23 (p < 0.01) and yielded two significant dimensions, namely Tangible and Assurance, which explained almost all of the variance in the service quality level. The other dimensions, namely Responsiveness, Reliability and Empathy, did not contribute significantly towards explaining the variance in the overall rating. As for the HEdPERF-SERVPERF scale, the adjusted R² was 0.30, yielding two significant dimensions, namely Non-academic aspects and Academic aspects. These two dimensions accounted for 23.0 per cent ((0.26 + 0.22)² ≈ 0.23) of the variance of service quality level, while the other two dimensions contributed an additional 7.5 per cent (30.5 - 23.0 per cent). This implied that the dimensions Reliability and Empathy did not contribute significantly towards explaining the variance in the overall rating.

The modified HEdPERF scale
The empirical analysis indicated that the modified five-factor structure with 41 items resulted in more reliable estimations, greater criterion and construct validity, greater explained variance and, consequently, a better fit. Besides the better quantitative results, the modified HEdPERF scale also had the advantage of being more specific in areas that are important in evaluating service quality within the higher education sector. Hence, service quality in higher education can be considered a five-factor structure with conceptually clear and distinct dimensions, namely Non-academic aspects, Academic aspects, Reputation, Access and Programme issues. The current study also showed that SERVPERF performed poorly: although SERVPERF was developed and has subsequently proven to be the superior generic scale for measuring service quality in a wide range of service industries, it did not provide a better perspective for the higher education setting (Table VI).

Conclusions and implications
The objectives of this research were twofold. The primary goal was to test and compare the relative efficacy of the three conceptualisations of service quality in order to determine which instrument had the superior measurement capability in terms of unidimensionality, reliability, validity and explained variance. The other objective was concerned with enhancing the HEdPERF scale, thus transforming it into an ideal instrument for measuring service quality for the higher education sector.

Table VI. Comparison of the three scales

    Criteria                     Modified HEdPERF   SERVPERF    HEdPERF-SERVPERF
    Cronbach alpha (α) range     0.81-0.92          0.68-0.76   0.77-0.91
    Criterion validity           0.58               0.27        0.53
    Construct validity           0.57               0.34        0.57
    Adjusted R²                  0.34               0.23        0.30
    RMSEA                        0.07               0.08        0.08


The tests were conducted utilising a sample of students from Malaysian tertiary institutions, and the findings indicated that the three measuring scales did not perform equivalently in this particular setting. In fact, the results led us to conclude that the measurement of service quality by means of the HEdPERF method resulted in more reliable estimations, greater criterion and construct validity, greater explained variance and, consequently, better fit than the other two instruments, namely SERVPERF and HEdPERF-SERVPERF. In short, the findings demonstrated an apparent superiority of the modified five-factor structure of the HEdPERF scale in this context.

Likewise, the regression analyses compared the three scales so as to determine how well they predicted service quality level. Although the modified five-factor structure of HEdPERF clearly outperformed the SERVPERF and the moderating HEdPERF-SERVPERF dimensions in terms of explaining the variance in service quality level, the implications of these findings are less clear. Since this study only examined the respective utilities of each instrument within a single industry, any suggestion that HEdPERF is generally superior would still be premature. Nonetheless, the current findings do provide some important insights into how these instruments of service quality compare with one another in the Malaysian higher education context.

The current results also suggest that the dimension Access is the most important determinant of service quality in higher education, thus reinforcing the recommendation made by Firdaus (2004). In other words, students perceived Access to be more important than the other dimensions in determining the quality of service they received. As the only HEdPERF dimension to achieve significance, Access is concerned with such elements as approachability, ease of contact and availability of both academic and non-academic staff (Firdaus, 2004). Tertiary institutions should therefore concentrate their efforts on the dimension perceived to be important rather than dispersing their energies across a number of different attributes which they feel are important determinants of service quality. While the idea of providing adequate service on all dimensions may seem attractive to most service marketers and managers, failure to prioritise these attributes may result in inefficient allocation of resources. In conclusion, the current study provides empirical support for the idea that the modified five-factor structure of HEdPERF with 41 items may be the superior instrument for measuring service quality within the higher education context.

Limitations and suggestions for future research
The current study allows us to understand how three measuring instruments of service quality compare to one another. To date, these scales had not been compared and contrasted empirically, thus making this research a unique contribution to the services marketing literature. However, this work may pose more questions than it provides answers. The present findings suggest that the modified HEdPERF scale is better suited to higher education service settings. Caution is necessary in generalising the findings, although considerable evidence of relative efficacy was found for the modified HEdPERF scale. Given that the current study is limited to one service industry, this assertion would need to be validated by further research. Future studies should apply the measurement instrument in other countries, in other industries and with different types of tertiary institution in order to test whether the results obtained are general and consistent across different samples. Likewise, it may be worthwhile to compare different measuring instruments from a different perspective, that is, from other customer groups, namely internal customers, employers, government, parents and the general public. Although in higher education students must now be considered primary customers (Crawford, 1991), the industry generally has a number of complementary and contradictory customers. This study has concentrated on the student customer only, but it is recognised that education has other customer groups which must be satisfied.
References
Anderson, J.C. and Gerbing, D.W. (1991), "Predicting the performance of measures in a confirmatory factor analysis with a pretest assessment of their substantive validities", Journal of Applied Psychology, Vol. 76 No. 5, pp. 732-40.
Babakus, E. and Boller, G.W. (1992), "An empirical assessment of the SERVQUAL scale", Journal of Business Research, Vol. 24 No. 3, pp. 253-68.
Bitner, M.J. (1993), "Tracking the evolution of the service marketing literature", Journal of Retailing, Vol. 69, pp. 61-103.
Bollen, K.A. (1989), Structural Equations with Latent Variables, Wiley, New York, NY.
Bolton, R.N. and Drew, J.H. (1991a), "A longitudinal analysis of the impact of service changes on customer attitudes", Journal of Marketing, Vol. 55, pp. 1-9.
Bolton, R.N. and Drew, J.H. (1991b), "A multi-stage model of customers' assessments of service quality and value", Journal of Consumer Research, Vol. 17, pp. 375-84.
Boulding, W., Kalra, A., Staelin, R. and Zeithaml, V.A. (1993), "A dynamic process model of service quality: from expectations to behavioural intentions", Journal of Marketing Research, Vol. 30, pp. 7-27.
Bowers, M.R. (1997), "Improving service quality: achieving high performance in the public and private sectors by Milakovich, M.E.", Journal of the Academy of Marketing Science, Vol. 25 No. 3, pp. 265-6.
Browne, M.W. and Cudeck, R. (1993), "Alternative ways of assessing model fit", in Bollen, K.A. and Long, J.S. (Eds), Testing Structural Equation Models, Sage, Newbury Park, CA.
Buttle, F. (1996), "SERVQUAL: review, critique, research agenda", European Journal of Marketing, Vol. 30 No. 1, pp. 8-32.
Byrne, B.M. (1994), Structural Equation Modelling with EQS and EQS/Windows: Basic Concepts, Applications and Programming, Sage, Thousand Oaks, CA.
Carman, J.M. (1990), "Consumer perceptions of service quality: an assessment of the SERVQUAL dimensions", Journal of Retailing, Vol. 66, pp. 33-55.
Chow, J.C.C., Snowden, L.R. and McConnell, W. (2001), "A confirmatory factor analysis of the BASIS-32 in racial and ethnic samples", The Journal of Behavioural Health Services & Research, Vol. 28 No. 4, pp. 400-11.
Churchill, G.A. and Surprenant, C. (1982), "An investigation into the determinants of customer satisfaction", Journal of Marketing Research, Vol. 19, pp. 491-504.
Crawford, F. (1991), Total Quality Management, Committee of Vice-Chancellors and Principals Occasional Paper, London, December.


Cronbach, L.J. (1951), "Coefficient alpha and the internal structure of tests", Psychometrika, Vol. 16, pp. 297-334.
Cronin, J.J. and Taylor, S.A. (1992), "Measuring service quality: a reexamination and extension", Journal of Marketing, Vol. 56, pp. 55-68.
Crosby, L.A., Evans, K.R. and Cowles, D. (1990), "Relationship quality in services selling: an interpersonal influence perspective", Journal of Marketing, Vol. 54, pp. 68-81.
Diamantopoulos, A. and Siguaw, J.A. (2000), Introducing LISREL, Sage, London.
Eisen, S.V., Wilcox, M. and Leff, H.S. (1999), "Assessing behavioural health outcomes in outpatient programs: reliability and validity of the BASIS-32", The Journal of Behavioural Health Services & Research, Vol. 26 No. 4, pp. 5-17.
Finn, D.W. and Lamb, C.W. (1991), "An evaluation of the SERVQUAL scale in a retailing setting", in Holman, R. and Solomon, M.R. (Eds), Advances in Consumer Research, Association for Consumer Research, Provo, UT, pp. 483-90.
Firdaus, A. (2004), "The development of HEdPERF: a new measuring instrument of service quality for the higher education sector", paper presented at the Third Annual Discourse Power Resistance Conference: Global Issues Local Solutions, University of Plymouth, Plymouth, 5-7 April.
Franceschini, F. and Rossetto, S. (1997b), "On-line service quality control: the Qualitometro method", De Qualitac, Vol. 6 No. 1, pp. 43-57.
Garvin, D.A. (1983), "Quality on the line", Harvard Business Review, Vol. 61, pp. 65-73.
Gaster, L. (1990), "Can quality be measured?", Going Local, School for Advanced Urban Studies, No. 15.
Gerbing, D.W. and Anderson, J.C. (1988), "An updated paradigm for scale development incorporating unidimensionality and its assessment", Journal of Marketing Research, Vol. 25, pp. 186-92.
Gronroos, C. (1988), "Service quality: the six criteria of good perceived service quality", Review of Business, Vol. 9 No. 3, pp. 10-13.
Hair, J.F. Jr, Anderson, R.E., Tatham, R.L. and Black, W.C. (1995), Multivariate Data Analysis with Readings, Prentice-Hall International Editions, Englewood Cliffs, NJ.
Hattie, J. (1985), "Methodology review: assessing unidimensionality of tests and items", Applied Psychological Measurement, Vol. 9, pp. 139-64.
Haywood-Farmer, J. (1988), "A conceptual model of service quality", International Journal of Operations and Production Research, Vol. 8 No. 6, pp. 5-9.
Joreskog, K.G. and Sorbom, D. (1978), Analysis of Linear Structural Relationships by Method of Maximum Likelihood, National Educational Resources, Chicago, IL.
Kaiser, H.F. (1970), "A second-generation little jiffy", Psychometrika, Vol. 35, pp. 401-15.
Kelloway, E. (1998), Using LISREL for Structural Equation Modeling: A Research Guide, Sage, Thousand Oaks, CA.
Krejcie, R. and Morgan, D. (1970), "Determining sample size for research activities", Educational and Psychological Measurement, Vol. 30, pp. 607-10.
Lassar, W.M., Manolis, C. and Winsor, R.D. (2000), "Service quality perspective and satisfaction in private banking", Journal of Services Marketing, Vol. 14 No. 3, pp. 244-71.
Leblanc, G. and Nguyen, N. (1997), "Searching for excellence in business education: an exploratory study of customer impressions of service quality", International Journal of Education Management, Vol. 11 No. 2, pp. 72-9.

Lewis, R.C. and Booms, B.H. (1983), "The marketing aspects of service quality", in Berry, L., Shostack, G. and Upah, G. (Eds), Emerging Perspectives on Services Marketing, American Marketing Association, Chicago, IL, pp. 99-107.
Llusar, J.C.B. and Zornoza, C.C. (2000), "Validity and reliability in perceived quality measurement models: an empirical investigation in Spanish ceramic companies", International Journal of Quality & Reliability Management, Vol. 17 No. 8, pp. 899-918.
Mazis, M.B., Ahtola, O.T. and Klippel, R.E. (1975), "A comparison of four multi-attribute models in the prediction of consumer attitudes", Journal of Consumer Research, Vol. 2, pp. 38-52.
Mehta, S.C., Lalwani, A.K. and Han, S.L. (2000), "Service quality in retailing: relative efficiency of alternative measurement scales for different product-service environments", International Journal of Retail & Distribution Management, Vol. 28 No. 2, pp. 62-72.
Nunnally, J.C. (1988), Psychometric Theory, McGraw-Hill, Englewood Cliffs, NJ.
Oliver, R.L. (1989), "Processing of the satisfaction response in consumption: a suggested framework and research propositions", Journal of Consumer Satisfaction, Dissatisfaction, and Complaining Behaviour, No. 2, pp. 1-16.
Owlia, M.S. and Aspinwall, E.M. (1997), "TQM in higher education: a review", International Journal of Quality & Reliability Management, Vol. 14 No. 5, pp. 527-43.
Parasuraman, A., Zeithaml, V.A. and Berry, L.L. (1985), "A conceptual model of service quality and its implications for future research", Journal of Marketing, Vol. 49, pp. 41-50.
Parasuraman, A., Zeithaml, V.A. and Berry, L.L. (1988), "SERVQUAL: a multiple-item scale for measuring consumer perceptions of service quality", Journal of Retailing, Vol. 64 No. 1, pp. 12-40.
Parasuraman, A., Berry, L.L. and Zeithaml, V.A. (1991a), "Refinement and reassessment of the SERVQUAL scale", Journal of Retailing, Vol. 67 No. 4, pp. 420-50.
Parasuraman, A., Zeithaml, V.A. and Berry, L.L. (1994a), "Reassessment of expectations as a comparison standard in measuring service quality: implications for future research", Journal of Marketing, Vol. 58, pp. 111-24.
Quester, P., Wilkinson, J.W. and Romaniuk, S. (1995), A Test of Four Service Quality Measurement Scales: The Case of the Australian Advertising Industry, Working Paper 39, Centre de Recherche et d'Etudes Appliquees, Groupe ESC Nantes Atlantique, Nantes.
Robinson, S. (1999), "Measuring service quality: current thinking and future requirements", Marketing Intelligence & Planning, Vol. 17 No. 1, pp. 21-32.
Robledo, M.A. (2001), "Measuring and managing service quality: integrating customer expectations", Managing Service Quality, Vol. 11 No. 1, pp. 22-31.
Soutar, G. and McNeil, M. (1996), "Measuring service quality in a tertiary institution", Journal of Educational Administration, Vol. 34 No. 1, pp. 72-82.
Stewart, J.D. and Walsh, K. (1989), In Search of Quality, Local Government Training Board, Luton, November.
Surprenant, C. and Solomon, M. (1987), "Predictability and personalization in the service encounter", Journal of Marketing, Vol. 51, pp. 73-80.
Sureshchandar, G.S., Rajendran, C. and Anantharaman, R.N. (2002), "Determinants of customer-perceived service quality: a confirmatory factor analysis approach", Journal of Services Marketing, Vol. 16 No. 1, pp. 9-34.


Teas, R.K. (1993a), "Expectations, performance evaluation, and consumers' perceptions of quality", Journal of Marketing, Vol. 57 No. 4, pp. 18-34.
Teas, R.K. (1993b), "Consumer expectations and the measurement of perceived service quality", Journal of Professional Services Marketing, Vol. 8 No. 2, pp. 33-54.
Watts, R.A. (1987), Measuring Software Quality, The National Computing Centre, Oxford.
Webster, C. (1989), "Can consumers be segmented on the basis of their service quality expectations?", Journal of Services Marketing, Vol. 3 No. 2, pp. 35-53.
Zeithaml, V.A. and Bitner, M.J. (1996), Services Marketing, McGraw-Hill, Singapore.
Zeithaml, V.A., Berry, L.L. and Parasuraman, A. (1996), "The behavioural consequences of service quality", Journal of Marketing, Vol. 60, pp. 31-46.

Further reading
Byrne, B.M. (1998), Structural Equation Modeling with LISREL, PRELIS and SIMPLIS: Basic Concepts, Applications and Programming, Lawrence Erlbaum Associates, Mahwah, NJ.
Cattell, R.B. (1966), "The scree test for the number of factors", Multivariate Behavioural Research, Vol. 1, pp. 245-76.
Parasuraman, A., Berry, L.L. and Zeithaml, V.A. (1991b), "More on improving service quality measurement", Journal of Retailing, Vol. 69 No. 1, pp. 140-7.

Appendix. Survey questionnaire

Section A
The following personal information is necessary for validation of the questionnaire. All responses will be kept confidential. Your co-operation in providing this information will be greatly appreciated. Please circle the response.

A1 Gender: (1) Female; (2) Male.
A2 University/institute/college: (1) Public institution; (2) Private institution.
A3 Age: (1) 20 years old and below; (2) 21-35 years old; (3) 36-45 years old; (4) 46 years old and above.
A4 Ethnicity: (1) Bumiputera; (2) Chinese; (3) Indian; (4) Others (please specify): ......

A5 Status: (1) Full-time (2) Part-time (3) Distance learning
A6 Level of study: (1) Certificate (2) Diploma (3) Bachelor degree (4) Master degree (5) PhD
A7 Course: (1) Accounting and finance (2) Business administration (3) Information technology (4) Engineering (5) Applied science (6) Others (please indicate): ..........
A8 Current year of study: (1) Year 1 (2) Year 2 (3) Year 3 (4) Year 4 (5) Year 5 (6) Year 6
A9 Highest qualification planned for: (1) Certificate (2) Diploma (3) Bachelor degree (4) Master degree (5) PhD

Sections B and C
These sections (Tables AI and AII) relate to certain aspects of the service that you experience in your Institution (University/institute/college). For each of the following statements, please circle the number which best reflects your opinion of such service.


Section B
Each statement is rated on a seven-point scale from 1 (strongly disagree) to 7 (strongly agree).

B1 The institution has up-to-date equipment
B2 The institution's physical facilities are visually appealing
B3 The institution's employees are well dressed and appear neat
B4 The appearance of the physical facilities of the institution is in line with the type of service provided
B5 When the institution promises to do something by a certain time, it does so
B6 When you have problems, the institution is sympathetic and reassuring
B7 The institution is dependable
B8 The institution provides its services at the time it promises to do so
B9 The institution keeps its records accurately
B10 The institution does not tell its students exactly when services will be performed
B11 You do not receive prompt service from the institution's employees
B12 Employees of the institution are not always willing to help students
B13 Employees of the institution are too busy to respond to student requests promptly
B14 You can trust employees of the institution
B15 You can feel safe in your transactions with the institution's employees
B16 Employees of the institution are polite
B17 Employees get adequate support from the institution to do their jobs well
B18 The institution does not give you individual attention
B19 Employees of the institution do not give you personal attention
B20 Employees of the institution do not know what your needs are
B21 The institution does not have your best interests at heart
B22 The institution does not have operating hours convenient to all their customers

Table AI.
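As an illustration only, responses to Table AI might be prepared for analysis along the lines of the following minimal sketch. It is not part of the original instrument: the column names (B1-B22) are assumptions, and the set of items treated as reverse-scored is inferred from the negative wording of the statements above.

```python
# Minimal scoring sketch (illustrative only; not from the original paper).
# Column names B1-B22 and the reverse-scoring convention are assumptions.
import pandas as pd

# Hypothetical responses: one row per student, items rated from
# 1 (strongly disagree) to 7 (strongly agree).
responses = pd.DataFrame({f"B{i}": [4, 5, 3] for i in range(1, 23)})

# Negatively worded items ("does not", "not always", "too busy") are
# commonly reverse-scored so higher values always mean better performance.
negative_items = ["B10", "B11", "B12", "B13", "B18", "B19", "B20", "B21", "B22"]
responses[negative_items] = 8 - responses[negative_items]  # maps 1<->7, 2<->6, ...

# Performance-only (SERVPERF-style) score: the mean of all 22 items.
responses["score_B"] = responses[[f"B{i}" for i in range(1, 23)]].mean(axis=1)
print(responses["score_B"])
```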

Section C
Each statement is rated on a seven-point scale from 1 (strongly disagree) to 7 (strongly agree).

C1 Academic staff have the knowledge to answer my questions relating to the course content
C2 Academic staff deal with me in a caring and courteous manner
C3 Academic staff are never too busy to respond to my request for assistance
C4 When I have a problem, academic staff show a sincere interest in solving it
C5 Academic staff show positive attitude towards students
C6 Academic staff communicate well in the classroom
C7 Academic staff provide feedback about my progress
C8 Academic staff allocate sufficient and convenient time for consultation
C9 The institution has a professional appearance/image
C10 The hostel facilities and equipment are adequate and necessary
C11 Academic facilities are adequate and necessary
C12 The institution runs excellent quality programmes
C13 Recreational facilities are adequate and necessary
C14 Class sizes are kept to a minimum to allow personal attention
C15 The institution offers a wide range of programmes with various specialisations
C16 The institution offers programmes with flexible syllabus and structure
C17 The institution has an ideal location with excellent campus layout and appearance
C18 The institution offers highly reputable programmes
C19 Academic staff are highly educated and experienced in their respective field
C20 The institution's graduates are easily employable
C21 When I have a problem, administrative staff show a sincere interest in solving it
C22 Administrative staff provide caring and individual attention
C23 Inquiries/complaints are dealt with efficiently and promptly
C24 Administrative staff are never too busy to respond to a request for assistance
C25 Administration offices keep accurate and retrievable records
C26 When the staff promise to do something by a certain time, they do so
C27 The opening hours of administrative offices are personally convenient for me
C28 Administrative staff show positive work attitude towards students
C29 Administrative staff communicate well with students
C30 Administrative staff have good knowledge of the systems/procedures
C31 I feel secure and confident in my dealings with this institution
C32 The institution provides services within a reasonable/expected time frame
C33 Students are treated equally and with respect by the staff
C34 Students are given a fair amount of freedom
C35 The staff respect my confidentiality when I disclose information to them
C36 The staff ensure that they are easily contacted by telephone
C37 The institution operates an excellent counselling service
C38 Health services are adequate and necessary
C39 The institution encourages and promotes the setting up of a Students' Union
C40 The institution values feedback from students to improve service performance
C41 The institution has standardised and simple service delivery procedures

Table AII.
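The internal consistency of a multi-item battery such as the 41 items in Table AII is conventionally summarised with Cronbach's alpha. The sketch below is purely illustrative: the simulated responses are hypothetical and do not reproduce any result reported in the paper.

```python
# Cronbach's alpha for a multi-item scale; illustrative sketch with
# simulated data, not results from the paper.
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: 2-D array with rows = respondents, columns = scale items."""
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()   # sum of item variances
    total_var = items.sum(axis=1).var(ddof=1)    # variance of the total score
    return (k / (k - 1)) * (1 - item_var / total_var)

rng = np.random.default_rng(42)
# Hypothetical 1-7 responses from 200 students on the 41 items C1-C41.
data = rng.integers(1, 8, size=(200, 41)).astype(float)
print(f"alpha = {cronbach_alpha(data):.3f}")  # near zero here: items are random
```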

Section D (Table AIII)
Please circle the number which best reflects your feelings about your Institution (University/institute/college).

D1 The quality of the institution's services is: Very poor 1 2 3 4 5 6 7 Excellent
D2 My feelings towards the institution's services can best be described as: Very dissatisfied 1 2 3 4 5 6 7 Very satisfied
D3 My visit to the institution on future occasions will be: Not at all 1 2 3 4 5 6 7 Very frequent
D4 I would praise the institution for ..........
D5 I would criticise the institution for ..........
D6 If I were in charge of the institution, I would ..........

Table AIII.
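Items D1-D3 provide overall ratings that can serve as criterion measures against which a composite item score is assessed. The following is a minimal sketch of that kind of check under assumed variable names; the data are simulated, not drawn from the study.

```python
# Illustrative only: relating a composite item score to the overall
# quality rating D1. Data are simulated; variable names are assumptions.
import numpy as np

rng = np.random.default_rng(7)
composite = rng.uniform(1, 7, size=100)                        # mean of items C1-C41
d1_overall = np.clip(composite + rng.normal(0, 0.8, 100), 1, 7)  # D1 rating, 1-7

r = np.corrcoef(composite, d1_overall)[0, 1]                   # Pearson correlation
print(f"r(composite, D1) = {r:.2f}")
```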
