

IMPROVING USER SATISFACTION:
THE QUESTIONNAIRE FOR USER INTERACTION SATISFACTION VERSION 5.5

Ben D. Harper Kent L. Norman


University of Maryland, College Park

ABSTRACT

The Questionnaire for User Interaction Satisfaction (QUIS) is a usability testing tool
designed to gauge computer users' subjective satisfaction with the computer interface.
The QUIS contains a demographic questionnaire, an overall measure of satisfaction, and
measures of user satisfaction in four specific interface aspects (screen factors,
terminology and system feedback, learning factors, and system capabilities). The current
study establishes some normative values for QUIS evaluations. Undergraduate students
attending one of six classes in the AT&T Teaching Theater evaluated their experiences
using a variety of software packages in the course of one semester. A new on-line
version, QUIS 5.5, ran in the Teaching Theater's Windows™ 3.1 environment and
collected the students' satisfaction data. The on-line version of the QUIS and the results
it yielded are discussed.

INTRODUCTION
The QUIS

The Questionnaire for User Interaction Satisfaction (QUIS) is a tool developed by a
multi-disciplinary team of researchers in the Human-Computer Interaction Laboratory
(HCIL) at UMCP (Chin, Diehl, & Norman, 1988). The QUIS was designed to assess users'
subjective satisfaction with specific aspects of the human-computer interface. Previous
efforts to build tools for evaluating the human-computer interface suffered from
validation, reliability, and standardization problems (Ives, Olson, & Baroudi, 1983). The
QUIS team successfully addressed these problems, creating a measure that is highly
reliable across many types of interfaces.

The QUIS is currently licensed to 76 sites. Approximately 44% of these are
commercial/industrial users, 32% are international education and research users, and
24% are domestic education and research users. Most sites use the QUIS in conjunction
with a usability testing lab.

The QUIS contains a demographic questionnaire, a measure of overall system
satisfaction along six scales, and hierarchically organized measures of four specific
interface factors (screen factors, terminology and system feedback, learning factors, and
system capabilities). Each area measures the overall satisfaction with that facet of the
interface, as well as the factors that make up that facet, on a 9-point scale.
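
To make this hierarchical organization concrete, the question sets can be sketched as
nested sections, each holding the items rated on the same 9-point scale. This is only an
illustration, not the actual QUIS implementation: the section names follow the paper, the
item counts follow the short version summarized in Table 1, and the item labels are
placeholders rather than the real questionnaire wordings.

```python
# Illustrative sketch of the QUIS question hierarchy; item labels are
# placeholders, not the actual questionnaire items.
QUIS_SECTIONS = {
    "overall_reaction": [            # six general satisfaction scales
        "reaction_1", "reaction_2", "reaction_3",
        "reaction_4", "reaction_5", "reaction_6",
    ],
    "screen": [                      # screen factors
        "screen_1", "screen_2", "screen_3", "screen_4",
    ],
    "terminology_and_system_information": [
        "terms_1", "terms_2", "terms_3",
        "terms_4", "terms_5", "terms_6",
    ],
    "learning": [
        "learning_1", "learning_2", "learning_3",
        "learning_4", "learning_5", "learning_6",
    ],
    "system_capabilities": [
        "capability_1", "capability_2", "capability_3",
        "capability_4", "capability_5",
    ],
}

# Every item, general or specific, is rated on the same 9-point scale.
SCALE = range(1, 10)
```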
Even though the QUIS is a powerful tool for interface evaluation, the on-line
version has been limited by some interface issues. Previous versions of the QUIS were
laid out in a linear fashion, with one question per screen. This format fails to represent
the hierarchical nature of the question sets, limiting continuity between questions.
QUIS 5.5 presents related sets of questions on the same screen, lending continuity to the
set and reducing the amount of time subjects spend navigating between questions. Users
of the QUIS often avoided the on-line version because it did not record specific
comments about the system. These comments are often vital for usability testing. In
response to this need, QUIS 5.5 collects and stores comments on-line for each set of
questions. Users were also frustrated with the format of the QUIS output, which made
the data time consuming and confusing to analyze. QUIS 5.5 now stores data in a format
that is easily imported into most popular spreadsheet and statistical programs. By far the
most important change in QUIS 5.5 is its new flexibility. Past versions required
experimenters to use all of the questions in all of the areas. Most often, only a subset of
the 80 questions is applicable to the interface being tested. QUIS 5.5 allows
experimenters to select the subsets of the QUIS that are of interest to them, saving both
subjects and experimenters time and effort.
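
The paper does not specify the exact export layout, so the following is only a plausible
sketch of a flat, comma-separated file of the kind described: one row per subject, one
column per selected item, plus a free-text comment field per question set. All field
names and values here are hypothetical.

```python
import csv

# Hypothetical flat export in the spirit of QUIS 5.5's spreadsheet-friendly
# output; field names and data are invented for illustration.
fieldnames = ["subject_id", "q3_1", "q3_2", "q4_1", "comment_screen"]
rows = [
    {"subject_id": 1, "q3_1": 7, "q3_2": 6, "q4_1": 8,
     "comment_screen": "Large screens made the text easy to read."},
    {"subject_id": 2, "q3_1": 5, "q3_2": 6, "q4_1": 7,
     "comment_screen": "Response time felt slow."},
]

with open("quis_export.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=fieldnames)
    writer.writeheader()
    writer.writerows(rows)
```

A flat layout like this imports directly into spreadsheets and statistical packages, which
is the property the redesign was after.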

The AT&T Teaching Theater

The AT&T Teaching Theater provides an electronically supported learning
environment for undergraduate education. Currently, classes in a wide range of subjects
(e.g., psychology, computer science, mathematics, business and management) are taking
advantage of the networked computers and multi-media environment using both
commercial and custom software.

METHOD
Subjects

Students (55 males and 48 females, with an average age of 26 years) completed the
QUIS at the conclusion of a 12-week course, one of six offered in the Teaching Theater.
Each course was taught by a different instructor on a different subject, often using
different software in the classroom.

Materials

Subjects completed the QUIS on personal workstations in the Teaching Theater.
The short version of QUIS 5.5 ran in the Spinnaker Plus™ environment under the
Microsoft Windows™ 3.1 operating system. Keyboard and mouse input devices were
used to complete the questionnaire.

Procedure
Instructors of each of the six courses administered the QUIS in the last week of
classes. Students were asked to evaluate the software they used in the Teaching Theater
and were instructed in the use of the QUIS. Subjects completed the QUIS, and their
answers were written to a central file server.

RESULTS

Summary data for each question in the short version of QUIS 5.5 are presented in
Table 1. The six general satisfaction questions and 22 specific questions are listed by
question number.
Table 1.

User satisfaction ratings for the AT&T Teaching Theater. (Scores are on a 1 to 9 scale.)

#      Mean    SD
General Satisfaction
3.1    6.930   1.725
3.2    6.430   1.964
3.3    6.614   2.000
3.4    7.050   1.486
3.5    6.660   1.953
3.6    6.284   1.858
Screen
4.1    7.825   1.256
4.2    7.374   1.454
4.3    7.188   1.356
4.4    6.958   1.557
Terminology & Sys. Info.
5.1    6.955   1.484
5.2    6.849   1.620
5.3    6.949   1.582
5.4    6.719   1.665
5.5    6.071   2.022
5.6    5.846   2.175
Learning
6.1    7.089   1.543
6.2    6.866   1.599
6.3    6.554   1.786
6.4    6.828   1.457
6.5    6.312   1.763
6.6    6.000   2.198
System Capabilities
7.1    6.404   2.281
7.2    6.336   2.062
7.3    7.970   1.488
7.4    6.800   1.871
7.5    6.211   1.850
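
The entries in Table 1 are ordinary per-item means and standard deviations over
subjects. As a sketch of the computation (using made-up ratings, not the study data):

```python
import statistics

# Made-up 1-9 ratings per item; in the study each list would hold one
# value per subject who answered that item.
ratings = {
    "3.1": [7, 8, 6, 7, 9],
    "4.1": [8, 9, 7, 8, 8],
}

for item, scores in ratings.items():
    mean = statistics.fmean(scores)
    sd = statistics.stdev(scores)  # sample standard deviation
    print(f"{item}: mean = {mean:.3f}, SD = {sd:.3f}")
```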

The relationships among the questionnaire items are shown in Table 2. Questions
within the general satisfaction measure are highly related, and specific questions also
tend to be highly correlated with the general satisfaction questions. Correlations
between specific questions from different interface areas are weaker.

Table 2.

The correlation matrix for all the measures in the QUIS 5.5 short version.

3.1 3.2 3.3 3.4 3.5 3.6 4.1 4.2 4.3 4.4 5.1 5.2 5.3 5.4 5.5 5.6 6.1 6.2 6.3 6.4 6.5 6.6 7.1 7.2 7.3 7.4
3.1 1
3.2 .76 1
3.3 .73 .75 1
3.4 .53 .67 .41 1
3.5 .61 .63 .76 .43 1
3.6 .41 .59 .56 .5 .52 1
4.1 .44 .48 .41 .37 .48 .28 1
4.2 .57 .43 .42 .45 .42 .47 .45 1
4.3 .64 .54 .44 .55 .45 .57 .43 .78 1
4.4 .55 .49 .46 .33 .49 .51 .31 .39 .66 1
5.1 .69 .67 .61 .61 .6 .58 .46 .75 .76 .54 1
5.2 .67 .52 .59 .41 .57 .45 .49 .52 .71 .74 .64 1
5.3 .64 .55 .43 .47 .5 .5 .46 .58 .74 .68 .75 .65 1
5.4 .65 .57 .47 .44 .53 .56 .47 .46 .71 .73 .64 .69 .84 1
5.5 .59 .48 .41 .41 .52 .59 .34 .58 .72 .69 .67 .7 .7 .68 1
5.6 .56 .49 .46 .31 .46 .6 .23 .48 .59 .68 .61 .55 .72 .61 .73 1
6.1 .51 .49 .3 .66 .35 .46 .4 .43 .67 .53 .57 .56 .65 .7 .52 .38 1
6.2 .63 .54 .6 .47 .52 .47 .35 .42 .58 .61 .58 .66 .65 .68 .49 .58 .64 1
6.3 .53 .47 .5 .45 .44 .44 .37 .47 .58 .43 .63 .7 .49 .43 .48 .31 .63 .59 1
6.4 .53 .55 .55 .5 .42 .71 .28 .55 .63 .55 .67 .64 .53 .51 .52 .54 .7 .6 .76 1
6.5 .64 .59 .54 .43 .52 .66 .27 .45 .63 .68 .69 .63 .77 .76 .77 .77 .68 .71 .55 .7 1
6.6 .59 .5 .61 .35 .51 .62 .24 .43 .52 .57 .59 .65 .55 .51 .73 .66 .49 .62 .66 .69 .82 1
7.1 .41 .43 .45 .13 .58 .52 .4 .32 .49 .56 .53 .7 .58 .51 .62 .66 .32 .42 .49 .53 .6 .6 1
7.2 .55 .45 .42 .34 .6 .34 .62 .39 .61 .49 .5 .73 .54 .51 .62 .42 .4 .4 .47 .34 .46 .46 .69 1
7.3 .38 .34 .43 .18 .56 .24 .32 .15 .18 .16 .36 .34 .24 .42 .22 .18 .35 .4 .29 .3 .35 .3 .35 .34 1
7.4 .55 .43 .47 .24 .5 .46 .4 .43 .68 .66 .58 .75 .65 .7 .65 .56 .59 .63 .57 .6 .67 .59 .66 .65 .44 1
7.5 .47 .35 .26 .4 .27 .59 .33 .37 .6 .46 .53 .53 .65 .69 .69 .52 .6 .43 .42 .5 .7 .64 .46 .51 .24 .67
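
Table 2 contains pairwise Pearson correlations between items. Given a subjects-by-items
matrix of ratings, the full matrix can be computed in one step; the sketch below uses
invented data rather than the study responses.

```python
import numpy as np

# Rows are subjects, columns are QUIS items (e.g., 3.1, 3.2, 3.3);
# the ratings below are made up for illustration.
X = np.array([
    [7, 6, 8],
    [8, 7, 9],
    [5, 5, 6],
    [6, 7, 7],
    [9, 8, 9],
])

# np.corrcoef treats rows as variables, so transpose to put items in rows.
R = np.corrcoef(X.T)
print(np.round(R, 2))  # symmetric matrix; Table 2 reports the lower triangle
```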

Mean scores for each class were also computed. Table 3 shows the overall score
that each software package or group of programs received.

Table 3.

Scores for each class, listing the software used. Scores are on a 1 to 9 scale.

Software Mean SD
Vision Quest™, Chat, WordPerfect™ 5.1 7.401 0.709
Paradox™ 6.671 1.114
Quattro Pro™, SAS™ 7.492 0.766
GEDS™ 7.358 1.188
HyperCourseware in Spinnaker Plus™ 5.969 1.320
Vision Quest™ 6.499 1.099

DISCUSSION

This study establishes a set of norms that can be used for the on-going evaluation of
the AT&T Teaching Theater. As courseware is developed to take advantage of the
unique strengths of the theater, improvement can be gauged with respect to past efforts.
The QUIS is proving to be an excellent tool for guiding the development of new software.

The hierarchical nature of the QUIS is also demonstrated here. The four separate
interface factors measured in the QUIS are hypothesized to explain unique facets of total
satisfaction. The six general satisfaction questions are intended to predict total
satisfaction from a different direction. It is not surprising that the four specific interface
factors each relate strongly to the general satisfaction questions, because both measure
the same quantity, total satisfaction. Similarly, it is not surprising that questions from
different interface factors are not highly related. Each factor measures a unique part of
total satisfaction, and therefore questions from different factors measure different
aspects of the system.

Despite the utility of these data for research in the AT&T Teaching Theater, the
unique nature of the classroom must be taken into account before any generalizations
can be made. First, the unique hardware found in the classroom may facilitate or deter
satisfaction with the software. The large workstation screens used in the classroom may
have tended to increase satisfaction in some instances. On the other hand, some
applications, specifically Spinnaker Plus™, tended to run so slowly on the system that
satisfaction measures may be dominated by this single effect. Second, the fact that class
instructors administered the QUIS may bias the results toward being more favorable
than they should be.

Future research should focus on obtaining normative data for real-world software
packages and applications. Anchor points need to be established so that satisfaction can
be compared in terms of contemporary competitors. And even though QUIS 5.5 has
been re-designed to be itself more satisfying, the QUIS needs to be held up to its own
light and subjected to investigation. A meta-QUIS study could improve the QUIS, as
well as examine some issues of halo effect.

ACKNOWLEDGMENTS

Partial support for this project was provided by a grant from AT&T Information
Systems and the Computer Science Center at the University of Maryland. We wish to
thank Walt Gilbert, Project Director of the AT&T Teaching Theater, and Ellen Yu,
Project Manager, for their support. Finally, we wish to thank the instructors who
allowed their students to participate in this study.

REFERENCES

Chin, J. P., Diehl, V. A., & Norman, K. L. (1988). Development of an instrument
measuring user satisfaction of the human-computer interface. In CHI '88
Conference Proceedings: Human Factors in Computing Systems (pp. 213-218).
New York: Association for Computing Machinery.

Ives, B., Olson, M. H., & Baroudi, J. J. (1983). The measurement of user information
satisfaction. Communications of the ACM, 26, 785-793.
