
Understanding Human-Computer Interaction for Information Systems Design

Author(s): James H. Gerlach and Feng-Yang Kuo


Source: MIS Quarterly, Vol. 15, No. 4 (Dec., 1991), pp. 527-549
Published by: Management Information Systems Research Center, University of Minnesota
Stable URL: http://www.jstor.org/stable/249456

Understanding Human-Computer Interaction for Information Systems Design
By: James H. Gerlach
Graduate School of Business
Administration
University of Colorado at Denver
Campus Box 165
P.O. Box 173364
Denver, Colorado 80217-3364
Feng-Yang Kuo
Graduate School of Business
Administration
University of Colorado at Denver
Campus Box 165
P.O. Box 173364
Denver, Colorado 80217-3364

Abstract
Over the past 35 years, information technology
has permeated every business activity. This
growing use of information technology promised
an unprecedented increase in end-user productivity. Yet this promise is unfulfilled, due primarily to a lack of understanding of end-user
behavior. End-user productivity is tied directly to
functionality and ease of learning and use. Furthermore, system designers lack the necessary
guidance and tools to apply effectively what is
known about human-computer interaction (HCI)
during systems design. Software developers
need to expand their focus beyond functional requirements to include the behavioral needs of
users. Only when system functions fit actual work
and the system is easy to learn and use will the
system be adopted by office workers and
business professionals.
The large, interdisciplinary body of research literature suggests HCI's importance as well as its complexity. This article is the product of an extensive effort to integrate the diverse body of HCI literature into a comprehensible framework that provides guidance to system designers. HCI design is divided into three major divisions: system model, action language, and presentation language. The system model is a conceptual depiction of system objects and functions. The basic premise is that the selection of a good system model provides direction for designing action and presentation languages that determine the system's look and feel. Major design recommendations in each division are identified along with current research trends and future research issues.
Keywords: User-computer interface, user mental model, human factors, system
model, presentation language, action
language
ACM Categories: D.2.2, H.1.2, K.6.1

Introduction
The user is often placed in the position of
an absolute master over an awesomely
powerful slave, who speaks a strange and
painfully awkward tongue, whose obedience is immediate and complete but
woefully thoughtless, without regard to the
potential destruction of its master's things,
rigid to the point of being psychotic, lacking sense, memory, compassion, and -- worst of all -- obvious consistency (Miller and Thomas, 1977, p. 512).
The problems of human-computer interaction (HCI), such as cryptic error messages and inconsistent command syntax, are well-documented (Carroll, 1982; Lewis and Anderson, 1985; Nickerson, 1981) and trace back to the beginning of the computer revolution (Grudin, 1990). The impact of problematic HCI designs is magnified greatly by the advent of desktop computers, employed mainly by professionals for enhancing their work productivity. A faulty HCI design traps the user in unintended and mystifying circumstances. Consequently, the user may not adopt the system in his or her work because learning and using the system are too difficult and time-consuming; the business loses its investment in the system.
As concern about HCI problems grew, research was conducted by both practitioners and scholars to find solutions. Initially, researchers focused on enhancing programming environments in order
to improve programmers' productivity. With the proliferation of desk-top computers, it was discovered that non-technical users were not satisfied with the same type of environment that programmers used. Research has since expanded beyond technical considerations to investigating behavioral issues involving human motor skills, perception, and cognition for developing functional, usable, and learnable software. HCI is now an important scientific discipline built upon computer science, ergonomics, linguistics, psychology, and social science.

Today's system designers are expected to apply these interdisciplinary principles to improve user satisfaction and productivity. This is a formidable task because HCI development is not an aspect of software design that can be illuminated by a single design approach. More importantly, there is a lack of guidance in applying HCI research findings to design practice. Consider a typical interface design based upon many decisions: which functions and objects to include; how they are to be labeled and displayed; whether the interface should use command language, menus, or icons; and how online help can be provided. As will be discussed later, each of these decisions involves consideration of complicated, and sometimes conflicting, human factors. When all decisions are considered at once, interface design becomes overwhelming. Therefore, our first objective in writing this article is to separate HCI design into major divisions and identify the most relevant design goals and human factors. In each division, design subtasks are analyzed within the context of current HCI research. The intent of this classification is to assist designers in relating the research findings to the HCI design process.
Early research emphasized the development of design guidelines. But, after attempts to both write and use guidelines, it was recognized that when a design is highly dependent upon task context and user behavior, the usefulness of guidelines diminishes (Gould and Lewis, 1985; Moran, 1981). The answer to this problem for a particular design is to model the behavior of users doing specific tasks. The model provides a basis for analyzing why a design works or fails. This leads to the emphasis of understanding cognitive processes employed in HCI; Model Human Processor (Card, et al., 1983), SOAR (Laird, et al., 1987), and Task Action Grammars (Payne and Green, 1986) are examples of HCI theoretic models for studying user behavior (to be discussed later). These models provide a basis for explaining why some design guidelines work. Our second objective is to elaborate existing guidelines with their task constraints and theoretic bases so a designer can relate them to new, untested situations.
Our third and last objective is to identify opportunities for HCI research. An exhaustive review of guidelines and theories in user interface design reveals gaps in our knowledge regarding the impact of design choices on human behavior. By noting these opportunities, we hope to interest both practitioners and research scholars in furthering our knowledge of user interface design.

We begin with a framework for organizing HCI design and several theoretic approaches to investigating HCI issues. This is followed by design recommendations and research opportunities for each issue in the framework, and our conclusions.

Overview of User Interface Framework and Theories
Card, et al. (1983) propose the user's recognition-action cycle as the basic behavior for understanding the psychology of HCI. This cycle includes three stages: the user perceives the computer presentation and encodes it, searches long- and short-term memory to determine a response, and then carries out the response by setting his or her motor processors in motion. A more elaborate seven-stage HCI model is proposed by Norman (1986) (see Figure 1). Norman's model expands the memory stage to include mental activities, such as interpretation and evaluation of system response, formulation of personal goals and intentions, and specification of action sequences.

Four cognitive processors are employed in the elaborated recognition-action cycle: motor movements, perception, cognition, and memory (Olson and Olson, 1990). Except for long-term memory, these processors have limited capacity and constrain users' behavior and, thus, HCI design. Most obvious is the need to satisfy users' motor and perceptual needs: signals must be perceivable, and responses should be within the range of a user's motor skills. But more importantly, the interface must empower the memory
and cognitive capacity of its users to learn and reason easily about the system's behavior. Otherwise, the user interface will hinder the user's ability to learn all aspects of the system; a bad interface means the user will not use the system to solve new, difficult problems.

Figure 1. Physical and Mental Processes in Operating a Computer
(Adapted from Norman, 1986, and reprinted from Olson and Olson, 1990, p. 229, by permission of Lawrence Erlbaum Associates)

Overview of the framework


While HCI objectives are clear, it is less obvious how the designer should go about developing interfaces that meet these objectives. Recent research suggests that a system model be
employed as the basis of HCI design (Norman, 1986). The system model is a conceptual depiction of the set of objects, permissible operations over the objects, and relationships between objects and operations underlying the interface (Jagodzinski, 1983).
Figure 2. The HCI Design Framework
(The framework links the user's expectation, evaluation, and interpretation to two design layers: conceptual design of the system model, based on task analysis and metaphor/abstract model analysis, and physical design of the action language (dialog style, syntax, protection mechanism) and the presentation language (object representation, presentation format, spatial layout, attention and confirmation, user assistance).)

Norman (1986) points out that the selection of a good system model enables the development of clear and consistent interfaces. This is the premise of the interface design framework described in Figure 2. The conceptual aspect of the framework concerns design of the system model such that the underlying process the computer is performing is directly pertinent to the user in a manner compatible with the user's own understanding of that process (Fitter, 1979). The physical aspect of the framework involves the
design of action and presentation languages, which consist of patterns of signs and symbols enabling the user to communicate to and from the system (Bennett, 1983).

Designing action and presentation languages based on a coherent system model enables the user to easily develop a mental model of the system through repetitive use. The mental model is the user's own conceptualization of the system components, their interrelations, and the process that changes these components (Carroll and Olson, 1988). The mental model provides predictive and explanatory power for understanding the interaction, enabling the user to reason about how to accomplish goals (Halasz and Moran, 1983; Norman, 1986). Hence, the closer the system model is matched to user expectations, the more easily and quickly user learning takes place. Developing the system model, therefore, requires a study of what the user expectations are.
A system model provides direction for designing action and presentation languages that determine the system's look and feel. When there is close correspondence between the system model and these two languages, the user can manipulate all parts of the system with relative ease. This creates an interface of "naive realism" (diSessa, 1985): one that the user operates unaware of the computational technicalities embedded in the system software. But this naive realism cannot be easily achieved because technological restrictions limit the choice of dialog style and impose rigid syntax rules and recovery procedures. Hence, in specifying an action language, design tradeoffs must be made between satisfying the user's cognitive requirements and satisfying technological constraints. The presentation language complements the action language by displaying the results of system execution such that the user can easily evaluate and interpret the results. It also involves design tradeoffs in choosing proper object representations, data formats, spatial layout, confirmative mechanisms, and user assistance facilities.
Note that in Figure 2 the system model serves as the basis for developing action and presentation languages. The importance of this principle is illustrated by the user interfaces of two spreadsheet packages: IFPS (Execucom, 1979) and 1-2-3 (Lotus, 1989). IFPS's system model resembles linear algebra with a Fortran-like programming language; 1-2-3's resembles a paper spreadsheet and an electronic calculator. The system model choice results in clear differentiation in the action and presentation languages of these two packages. IFPS's action language requires the user to follow strict syntax rules to enter a spreadsheet model. Its presentation is that of an accounting report that can only be viewed in a top-down manner. Also, user actions and system presentations are clearly disjointed in IFPS; that is, the user first enters the algebraic formulae, waits for the system to process them, and receives the output when the system is finished.
In contrast, 1-2-3's action and presentation languages are intertwined. 1-2-3 allows the user to enter the spreadsheet by moving to any cell, row, or column in any order to enter data or specify formulae. Its presentation utilizes the same row-column format used for input; the user obtains an instant result for each action. The properties of 1-2-3's action and presentation languages are more generally accepted than those of IFPS, even though both provide similar capability. Hutchins, et al. (1986) attribute the success of spreadsheet packages like 1-2-3 to their use of a conceptual model that matches the user's understanding of spreadsheet tasks.

Cognitive modeling
As previously mentioned, developing the system model requires a study of user expectations. One approach is to create prototypes, which provide an environment for testing and refining the system model. This, however, is expensive and time-consuming. Alternatively, several cognitive models can be used to analyze and clearly describe user behavior. This type of theoretical analysis can help designers select the best design from several alternatives, resulting in less time needed for HCI design (Lewis, et al., 1990).

GOMS Model

A family of cognitive models based on the GOMS model is proposed by Card, et al. (1983) for predicting user performance. A GOMS model consists of four cognitive components: (1) goals and subgoals for the task; (2) operators, including both overt operators (like key presses)
and internal operators (like memory retrieval); (3) methods composed of a series of operators for achieving the goals; and (4) selection rules for choosing among competing methods to achieve the same goal. The majority of GOMS research has centered on the study of experts performing well-learned, repetitive tasks. This has led to the discovery of parameters, such as times for keystroke entry and the scanning of system outputs, useful for predicting skilled-user performance (Card, et al., 1983). But other important aspects of user behavior cannot be easily modeled in GOMS, such as the production of and recovery from errors (Olson and Olson, 1990) and the use of sub-optimal goals or methods in performing routine editing tasks, even when more efficient goals or methods are known (Young, et al., 1989).
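To make the prediction idea concrete, the sketch below sums keystroke-level operator times for a hypothetical "replace a word" method; the operator values are the approximate skilled-user estimates reported by Card, et al. (1983), while the task breakdown and code itself are illustrative assumptions rather than part of the original GOMS tools.

    # Keystroke-level sketch: predict skilled-user execution time by summing
    # operator times (approximate values from Card, et al., 1983).
    OPERATOR_TIME = {
        "K": 0.28,  # press a key or button (average typist), seconds
        "P": 1.10,  # point with a pointing device to a target on the display
        "H": 0.40,  # home hands between keyboard and pointing device
        "M": 1.35,  # mental preparation for an action
    }

    def predict_time(operators):
        """Return the total predicted execution time for a sequence of operators."""
        return sum(OPERATOR_TIME[op] for op in operators)

    # Hypothetical method for replacing a four-letter word in a text editor:
    # home to the mouse, point at the word, click, home back, prepare, type.
    replace_word = ["H", "P", "K", "H", "M"] + ["K"] * 4
    print(f"Predicted time: {predict_time(replace_word):.2f} seconds")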
SOAR
SOAR (Laird, et al., 1987) is a general cognitive architecture of human intelligence. Although it has not been applied extensively in HCI research, SOAR has the potential for answering questions not addressed by GOMS. SOAR is an application of artificial intelligence that models users doing both routine and new tasks. In addition to a knowledge base and an engine that performs tasks it knows, SOAR has a learning mechanism. It provides an account of how a user evaluates system responses and formulates a new goal or intention. With SOAR, one can estimate how long it takes a user to recognize an impasse in his or her skill and set up a new goal and action sequence to overcome that impasse.
Formal Grammars
Formal grammars expressed in Backus-Naur form (BNF) can be used to describe the rules of an action language. From these, an analyst can predict the cognitive effort needed to learn the language by examining the volume and consistency of the rules (Reisner, 1981). Task Action Grammars (TAG) are similar languages, which make explicit the knowledge needed for a user to comprehend the semantics and syntax of a user interface (Payne, et al., 1986). In addition to identifying the consistency of grammar rules, TAG can be applied to study how well the task features of the language match user goals.
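As a rough illustration of this kind of analysis, the sketch below encodes a few BNF-style production rules for a hypothetical file-management command language and uses the number of distinct rules as a crude index of how much a learner must master; the command names and the metric are invented for illustration and are not taken from Reisner's or Payne's grammars.

    # BNF-style production rules for a small, hypothetical command language.
    # Each non-terminal maps to its alternative expansions.
    GRAMMAR = {
        "command":   [["operation", "object"]],
        "operation": [["DELETE"], ["COPY"], ["RENAME"]],
        "object":    [["FILE", "name"], ["FOLDER", "name"]],
        "name":      [["<string>"]],
    }

    def rule_count(grammar):
        """Count distinct production rules; fewer, more uniform rules suggest
        less for the user to learn (Reisner, 1981)."""
        return sum(len(expansions) for expansions in grammar.values())

    print("Distinct rules:", rule_count(GRAMMAR))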

Discussion
GOMS, SOAR, and formal grammars collectively provide guidance in the design of system models and action and presentation languages. For example, GOMS suggests that system model design should be guided by analysis of user goals in order to identify methods for achieving these goals; SOAR demonstrates the importance of modeling user knowledge of the system model for solving new, difficult problems; TAG indicates how an action language's organization affects user learning.
It should be noted that each of these theories can explain some, but not all, aspects of human behavior in HCI. For example, the GOMS model can explain the task of selecting an option from a list of choices, but it fails to predict errors a person makes when using a line editor; TAG provides a reason why errors might occur but cannot predict moment-by-moment performance. In addition, psychological attributes, such as preference and attitude, and cognitive functions, such as mental imagery and cognitive style, are not considered in these theories (Olson and Olson, 1990). The specificity of each of these theories results in areas of uncertainty in HCI design, restricting our ability to apply them to practice. A great need for integrating theory and practice remains in HCI research.

System Model Design


Central to the entire HCI design question is the design of the system model, a conceptual description of how the system works. This requires an analysis of user tasks so the system model can be organized to match the user's understanding of these tasks (Carroll and Thomas, 1982; Halasz and Moran, 1982; Moran, 1981). It also requires an analysis of metaphors and abstract models that can adequately portray system functionality (Carroll, et al., 1988). The result of the latter analysis may also help in selecting representations for system objects/functions and in user training.

Analysis of task
The work by Card, et al. (1983) and Norman (1986) indicates that during computer interaction, the user's mental activities center around goal determination and action planning. To ensure
that the system model supports these activities, task analysis should emphasize identifying user goals and the methods and objects employed to achieve these goals (Grudin, 1989; Phillips, et al., 1988).
Work Activities and Scenarios
Goals, methods, and objects can be discovered by analyzing users acting out work-related scenarios (Young, et al., 1989). A scenario is a record of a user interacting with some device in response to an event, which is carefully constructed so that the user performs a definite action (like reordering paragraphs of a document or computing the return on a financial investment). A carefully constructed set of events assures that a comprehensive range of situations is studied and the results are applicable to brief, real-life work situations (Young and Barnard, 1987). Scenario analysis produces records of user actions from which specific user goals, methods, and objects needed to achieve these goals are identified. In addition, records of several users completing the same scenario enable the designer to compare different approaches to the same work situation and generate a set of methods and objects for a wide range of users.
Routine Tasks and Complex Work
Task analysis proceeds by studying cognitive processes involved in handling the events. Researchers have observed that users' mental processes occur at two levels (Bobrow, 1975). Low-level processing involves well-learned, rehearsed procedures for handling routine operations such as data entry or word deletion. High-level processing, which relies upon knowledge of the system model, is used to generate plans of action to handle non-routine tasks.

To support low-level processing, objects need to be organized into logical chunks, and operations need to match the actions users normally make with these objects in the real world (Phillips, et al., 1988). In so doing, learning to associate operations with objects is easy; with practice, operations can be applied almost automatically, and even in parallel, because examination of data content and the meaning of each user action is unnecessary (Shiffrin and Schneider, 1977). For example, the spreadsheet system model supports low-level processing by organizing spreadsheets into cells, rows, and columns; operations like "delete" can be applied to any of these data levels with simple cursor movement and the same menu action choices.
High-level processing is top-down and is guided by user goals and motives; planning is slow, serial, and conscious (Newell and Simon, 1972; Rasmussen, 1980). A plan of action is a goal structure that describes how the user decomposes the problem into a sequence of methods which, when executed, properly handles the work situation. When facing a complex task, a user may divide the entire task into many subtasks and perform these subtasks separately at different times (diSessa, 1986). Thus, to support higher-level processing, one must ensure that nearly all user goals can be easily achieved through combinations of operations described in the system model in either a sequential or distributed manner. This flexibility can be seen in Xerox's Star workstation, where operations for one goal (like creating a document) can be easily suspended to perform operations for another goal (like creating a spreadsheet) (Bewley, et al., 1983). Star also allows the user to cut a portion of one object (like a spreadsheet) and paste it to another object (a document) to achieve a higher-level goal of creating a report.
Task analysis results can be documented using GOMS, BNF, TAG, or SOAR. To complete the interface design, details of the methods and the operations to be performed on the objects need to be specified later during physical design.

Analysis of metaphors and abstract models
In designing the system model, it is beneficial to search for metaphors analogical to the system model. Presenting metaphors to users helps them relate the concepts in the system model to those already known by a wide set of users. This enables the user to make inferences regarding what system actions are possible and how the system model will respond to a given action.
Metaphors and Composite Metaphors
Metaphors can be drawn from tools and systems that are used in the task domain and the
common-sense real world (Carroll, et al., 1988). For example, many use a typewriter as a metaphor for a word processor. Unfortunately, the analogy between a word processor and a typewriter breaks down for depicting block insertion and deletion in word processing. For these actions, the word processor works more like a magnetic tape splicer. Hence, complex systems can be more completely described by a composite of several metaphors, each examined closely for its correspondence to the system's actual goal-action sequence. Since users generally develop disjointed, fragmented models to explain different kinds of system behavior (Waren, 1987), it is easy for them to accommodate composite metaphors in learning the system (Carroll and Thomas, 1982).

Even with composite metaphors, mismatches may still occur. Typical computer systems are more powerful than manual tools and may contain features not embodied in the metaphors, and vice versa. These mismatches may lead the user to form misconceptions about how the system works (Halasz and Moran, 1982). For example, in word processing, document changes need to be saved or the entire work session is lost; there is no such concept applicable to typewriters. Explicitly pointing out the mismatches to the user should prevent such misconceptions (Carroll, et al., 1988).
Abstract Models
Abstract models explicitly represent a system model as a simple, abstract mechanism, which the user can mentally "run" to generate expected system responses (Young, 1981). For example, a hierarchical chart depicting the organization of messages, folders, and files serves as the abstract model of storage for an electronic mail system, while a file cabinet serves as the metaphor (Sein and Bostrom, 1989). Like a metaphor, the abstract model is not intended to fully document every detail of a system model; rather, both provide a semantic interpretation and a framework to which the user can attach each new system concept (Carroll, et al., 1988; Mayer, 1981). But unlike a metaphor, there is a one-to-one mapping from the attributes of an abstract model to those of the system model, although not vice versa. Abstract models are particularly useful for depicting system models that have no real-world counterparts; for instance, a pictorial depiction of interactions among memory, instructions, input, and output can provide a useful high-level description of a BASIC program's execution.
Applying Metaphors and Abstract Models
Metaphors and abstract models are powerful means for conveying the system model to novices. Mayer (1981) reports that novices who lack requisite knowledge are aided by learning abstract models, which enable them to understand system concepts during interactions with the system. Sein and Bostrom (1989) find that abstract models work best for novices who are able to create and manipulate mental images. For other novices, the metaphor is better. Hence, the choice between metaphor and abstract model is dependent upon the user's task knowledge and the ability to conceptually visualize the system model.
In conceptual design, candidate metaphors and abstract models can be identified to provide the designer with building blocks for constructing a consistent, logical system model based upon the user's task model (Waren, 1987). But basing the system model entirely on metaphors may be too limiting for harnessing the full power of the computer. The designer's objective should be to properly balance the users' descriptive model of the task, the normative model of how the task ought to be done, and the new opportunities provided by computer technology.

Iterative system model development methodologies and tools
Task and metaphor analysis must be user-centered and iterative. Initial attempts produce a crude system model; iterative design and testing rework this crude model into a successful system model. For example, questionnaires help determine the basic attributes of the user group like age, computer training, and education. Interviews can be used to identify the basic system capabilities (Olson and Rueter, 1987). Other useful approaches include psychological scaling methodologies and simulation and protocol analysis.
Psychological Scaling Methodologies
To identify the grouping of objects/methods, the designer can solicit user similarity judgments on
all pairs of objects/operations based upon user judgment of frequency of occurrence, temporal distance, or spatial distance (McDonald and Schvaneveldt, 1988). From this similarity measurement, clusters of objects/methods can be identified by applying psychological scaling methodologies, such as hierarchical clustering, multidimensional scaling, and network structuring techniques (e.g., Pathfinder) (McDonald, et al., 1988; Olson and Rueter, 1987). These methodologies can be applied to organize system documentation or menu hierarchy. For example, Kellogg and Breen (1987) developed users' views of how various elements of documents (footnotes, captions, etc.) are interrelated; McDonald and Schvaneveldt (1988) organized UNIX documentation according to perceived functionality.
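As a minimal sketch of how such similarity judgments might be clustered, the fragment below applies standard hierarchical clustering (via SciPy) to a small, made-up similarity matrix over four interface objects; the object names and similarity values are illustrative assumptions, not data from the cited studies.

    import numpy as np
    from scipy.cluster.hierarchy import linkage, fcluster
    from scipy.spatial.distance import squareform

    objects = ["message", "folder", "print queue", "printer"]

    # Hypothetical averaged user similarity ratings on a 0-1 scale (1 = most similar).
    similarity = np.array([
        [1.0, 0.8, 0.2, 0.1],
        [0.8, 1.0, 0.3, 0.2],
        [0.2, 0.3, 1.0, 0.9],
        [0.1, 0.2, 0.9, 1.0],
    ])

    # Convert similarity to distance and cluster hierarchically.
    distance = squareform(1.0 - similarity, checks=False)
    tree = linkage(distance, method="average")
    labels = fcluster(tree, t=2, criterion="maxclust")

    for obj, lab in zip(objects, labels):
        print(lab, obj)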
Simulation and Protocol Analysis
Requiring users to describe their work requirements in their own language can identify useful metaphors and abstract models (Mayer, 1981). Pencil-and-paper simulations of a proposed interface enable the user to act out typical work scenarios (Gould and Lewis, 1985). This technique, coupled with think-aloud protocol analysis, makes it possible to determine how work is actually done. It is useful for deriving an initial estimate of the users' set of basic functions and data objects.
Another approach is called the Wizard of Oz (Carroll and Aaronson, 1988). This approach employs two linked machines, one for the user and the other for the designer. Both the user's display and the designer's display show a simulated view of the system. To attempt a task, the user enters a command, which is routed to the designer's screen. The designer simulates the computer by evaluating the user input and sending a response to the user's display. This approach has the advantage of putting the user in a work-like situation well before the final system is fully programmed. Finally, user interface management systems like GUIDE, Domain/Dialog, and Prototyper (Hartson and Hix, 1989) or hypermedia tools like HyperCard (Halasz, 1988) can be used for rapid prototyping to evaluate user needs. They are, however, more expensive than the Wizard of Oz in terms of manpower and time needed for creating the prototype.

Discussion
Much research is still needed if we are to thoroughly understand system model design. Our knowledge of cognitive processes in HCI is still limited, although recent emphases in this area indicate an increasing awareness of its significance among researchers and practitioners (Olson and Olson, 1990). One important strategy is to apply theories like GOMS, TAG, and SOAR to study a broad range of computer tasks for understanding mental activities involved in solving routine and novel problems. An attempt at this research has been underway; an AI program incorporating means-ends analysis and multiple problem spaces has been used to analyze user task knowledge (Young and Whittington, 1990). This analysis can alert the designer to potential problems of a proposed interface.
Another important strategy is to improve psychological methods for studying users' prior knowledge and cognitive processes. The methods may be applied to investigate how a user forms a mental model of a system and to evaluate the discrepancies between the user's mental model and the system model. This provides feedback regarding the quality of system model design to designers, who can then improve their design strategies.
In addition, guidance is needed for applying metaphors to system model design. Whether or not system models are based upon metaphors, users are likely to generate metaphoric comparisons on their own (Mack, et al., 1983). What happens if this comparison creates user confusion because of the discrepancy between the designer's metaphor choice and the user's own comparative idea? Strategies are needed for portraying metaphors so that the metaphoric comparison is obvious but not distracting. There is also a need for methodologies for evaluating alternative metaphors. Carroll, et al. (1988) hypothesize that the user transforms metaphors into a precise understanding of the system model via a three-stage process: (1) establishing a metaphoric comparison; (2) elaborating aspects of the metaphoric comparison that map meaningfully to the system model; and (3) consolidating to produce a system model from what was learned from each comparison. However, it is unclear how this theory can be applied to analyze metaphor learnability.

Finally, user confusion may arise when system concepts have no analogical descriptions, such as the difference between a line wraparound and a hard carriage control. How can abstract models be useful in these situations? Research is needed to provide principles to guide the development of abstract models and strategies for using these models effectively in user training.

Action Language Design


The next component of the HCI framework to be addressed is the action language design. It involves the creation of a means for the user to easily translate his or her intentions to actions accepted by the system. Because natural language is not yet a viable option, designers must rely upon dialog styles unnatural to novices, relying primarily on keyboards and pointing devices. Designers must also choose a syntax and vocabulary for action specifications, and mechanisms for protecting the user from unintentionally destroying completed work.

Dialog style
Many conversation-based dialog styles have been employed in HCI. In Table 1, these styles are classified according to who initiates the dialogs and the choices available for action specifications (Miller and Thomas, 1977). Recently, direct manipulation styles using pointing and graphics devices have become popular; they differ from conversational styles in many aspects (see Table 2) (Hutchins, et al., 1986; Shneiderman, 1987). The system model, when designed in accord with user perception of how tasks are conducted, may suggest the dialog style. For example, the "form" style is the natural choice for a system involving database inquiries because forms are widely used for storing data manually and, as a consequence, become the metaphor for that system. But choosing a dialog style often requires considering human factors other than the system model. The tasks may be complex, suggesting that no single style is sufficient. For example, accounting application interfaces are often a mix of forms, menus, and command languages, each tailored to specific task requirements. User difference also plays an important role. Performance on relatively low-skill, computer-based
tasks can vary as much as 9:1 (Egan, 1988). This variance in user performance can be partially attributed to individual differences such as skill level, technical aptitude, age, and cognitive style. The level of user experience and technical skill is a dominant factor in selecting an appropriate dialog style (Mozeico, 1982). For novices, computer-guided, constrained-choice interfaces are better because the time spent on mental activities, shown in Figure 1, is reduced. Conversely, with experience comes a clear understanding of how tasks can be achieved, decreasing the need for a computer-guided interface and creating a preference for a user-initiated language. Direct manipulation styles, like Star's iconic desktop interface, are easy to learn because they closely reflect the system model, which in turn closely matches the user's task knowledge. They are easy to use for both novices and experts because of simple push-button actions and a continuous display of the "system states" that guide user actions (Shneiderman, 1987). Still, direct manipulation styles may be slower than conversational styles for experts to use (Hutchins, et al., 1986).
Novices can become expert through experience. This transition is easier if the user possesses technical aptitude, which involves high spatial memory and visualization and/or deductive reasoning ability. These abilities help the user remember, visualize, and locate objects and generate syntactically correct instructions (Egan, 1988).
Cognitive style and age also affect the dialog style decision. A study by Fowler, et al. (1985) shows that field-independent users, autonomous and self-reliant, prefer a user-initiated command structure, while field-dependent users tend to prefer constrained interfaces. Age is a significant factor in predicting user performance, particularly for interfaces requiring the user to possess a technical aptitude (Egan, 1988). The loss in performance due to aging can be countered with a simplified interface that reduces the necessity of visualizing important displays.
Multi-style interfaces can be employed to satisfy users varying in skill level, cognitive style, and age. For example, styles ranging from question-answer to menu and command language can all be included within the interface; the user can then choose any style to achieve better performance
and satisfaction (Mozeico, 1982). Recently, an implementation integrating natural language with direct manipulation (Cohen, et al., 1989) and another combining command language and direct manipulation (Gerlach and Kuo, 1991) show the practicality of this approach.

Table 1. Taxonomy of Dialog Styles Based on Initiation and Choice

User-guided, free-response: Database language; Command language; Data mnemonics; Text (word) processing
System-guided, free-response: Question/free answer; Form filling
User-guided, forced-choice: Expert system questions; Input-in-the-context-of-output
System-guided, forced-choice: Question/forced answer; Command menu selection; Data menu selection; Embedded menu; Accelerated menu

Table 2. Comparison of Conversational and Direct Manipulation Styles

Conversational style: sequential dialog, which requires the user to enter parts of an instruction in a predetermined order.
Direct manipulation style: asynchronous dialog, which enables the user to enter parts of an instruction in virtually any order.

Conversational style: language of strict syntax to describe the user intention.
Direct manipulation style: direct manipulation of objects.

Conversational style: complete specification of user intention is required.
Direct manipulation style: incremental specification of user intention is allowed.

Conversational style: discrete display of states of system executions; this includes errors if the command fails to execute.
Direct manipulation style: continuous update of objects to reflect system execution results; few error messages are needed.

Conversational style: single-threaded dialogs, which force the user to perform tasks serially.
Direct manipulation style: multi-threaded dialogs, which permit the user to switch back and forth between tasks.

Conversational style: command first, object next is typical.
Direct manipulation style: object first, command next is typical.

Conversational style: modes are often used to increase keystroke efficiency.
Direct manipulation style: modeless user operations, which are less confusing to the user.

User interface syntax


In interactingwith a computer, the user is requiredto translatehis or her goals and intentions
into actions understoodby the system. Hence,
in syntax design, designers must select words
that not only representsystem objects and functions butalso matchuser expectations.Likewise,

the action sequence of entering these words


needs to be specified so it can be easily recognized and remembered by users.
Vocabulary
One way to select vocabulary is for designers to select keywords based upon the system model. This approach to vocabulary design, although intuitively appealing, is shown to be impractical because designers' word choices vary significantly among themselves and may differ from users' choices (Carroll, 1985). Barnard (1988) suggests user testing for obtaining specific words.

Novices prefer general, frequently used words that are not representative of system concepts (Black and Sebrechts, 1981; Bloom, 1987). Different novices often assign different words to the same concept (Good, et al., 1984; Landauer, et al., 1983). As a result, words used by some novices may not help others learn the action language.
A better alternative is to have expert users select terms that are highly representative of system concepts; these terms can then be evaluated by novices for learnability (Bloom, 1987). To accommodate both novices' and experts' preferences, synonyms should be included as a part of the action language (Good, et al., 1984). The alternative word choices, even if synonyms are not implemented, can be presented to novice users for learning the concept of the chosen word (Bloom, 1987).
Action Consistency
Consistent keystrokes within and across different systems lend themselves to easy memorization, resulting in faster, easier learning. This helps users in transferring knowledge of a well-learned system to a new system (Polson, 1988; Polson, et al., 1986). It also reduces user errors and the time and assistance needed to enter commands (Barnard, et al., 1981).
Action inconsistency typically occurs in systems employing modes. For example, line editors typically have two modes: one for input and the other for editing. Modes are confusing to novices because identical keystroke sequences generate different results in different modes (Norman, 1983). However, they are efficient for applications in which the number of commands exceeds the number of keys available. With practice, modes allow experts to use fewer keystrokes for command entry; elimination of modes may penalize the experienced user. Norman recommends that modes be employed judiciously. We suggest that techniques for focusing user attention (discussed later) should be used to make modes obvious to the user to reduce confusion.
An action language's consistency is affected by its orthogonality. In an orthogonal language, each basic keystroke component is assigned a unique meaning representing a single action parameter, which can be an operation, an object, or any other qualifier (Bowden, et al., 1989). A single set of rules determines how these unique keystroke components can be combined to form commands. For example, in a word processing system, commands must obey the rule: first, operation (e.g., DELETE); next, object (e.g., LETTER); and last, direction qualifier (e.g., RIGHT). In an orthogonal language, keystrokes per command increase in proportion to the size of the command set; more time is therefore needed to enter commands. But less effort is needed to memorize and recall each keystroke's meaning. This reduction in mental effort and time may make the memorability-efficiency tradeoff beneficial if ease of learning is critical to the user.
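To illustrate the idea of orthogonal composition, the sketch below builds every legal command from independent operation, object, and qualifier components under a single ordering rule; the specific component names are illustrative assumptions rather than an actual editor's command set.

    from itertools import product

    # Orthogonal components: each keystroke or word carries exactly one meaning.
    OPERATIONS = ["DELETE", "COPY", "MOVE"]
    OBJECTS    = ["LETTER", "WORD", "LINE"]
    QUALIFIERS = ["LEFT", "RIGHT"]

    # A single composition rule: operation first, object next, qualifier last.
    def all_commands():
        return [" ".join(parts) for parts in product(OPERATIONS, OBJECTS, QUALIFIERS)]

    commands = all_commands()
    print(len(commands), "commands from",
          len(OPERATIONS) + len(OBJECTS) + len(QUALIFIERS), "components")
    print(commands[:2])  # e.g., ['DELETE LETTER LEFT', 'DELETE LETTER RIGHT']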
Action Efficiency
Many system implementations concentrate on minimizing keystrokes to reduce motor activities through the use of function keys, command abbreviations, and recognition of an option's first letter. But as noted earlier, keystroke efficiency is also a function of memorizing and recalling the keystrokes. For example, when a function key is given multiple meanings whose interpretation depends upon the context in which it is applied, a user can be easily confused because of the increased mental load in recall (Morland, 1983). Offering both whole and abbreviated commands is one way to increase motor efficiency while reducing the mental load. With these options, the user can initially enter the whole command and then quickly make use of abbreviated commands (Landauer, et al., 1983). The importance of reducing the mental load is further illustrated by Lerch, et al.'s (1989) study of spreadsheet users performing financial planning tasks. They found that users perform better using relative referencing of spreadsheet variables (e.g., PREVIOUS REVENUES) than when using absolute row and column coordinates. Absolute row and column coordinates require less keystroke time to enter but additional mental overhead. Overall, relative referencing schemes reduce user errors and allow the user to devote mental capacity to planning the task solution.
Another way of increasing efficiency is for a system to offer multiple methods for doing the same type of task; the efficiency of each method varies in accordance with the task situation. But the user may fail to choose the method that requires the least number of keystrokes for a given
task because of the additional mental cost expended in choosing between two methods (Olson and Nilsen, 1987). Further investigation may focus on trade-off decisions between using a well-rehearsed single general method and learning and employing several context-specific methods.

Protection mechanisms
The majority of beginners act recklessly; they make little effort to read user manuals to acquire system knowledge. A survey shows that trial-and-error learning is most widely used (Hiltz and Kerr, 1986). A major concern, therefore, is to ensure that the action language protects the user from being penalized for trying the system.
One common technique for this is to provide the user with an "undo" function that reverses a series of actions. Another is to prompt the user to reconsider planned actions that can lead to damaging, irreversible results, such as deleting a file.
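A minimal sketch of the first technique appears below: each reversible action records how to undo itself on a history stack, so a series of actions can be reversed in order. The class, command names, and data structure are illustrative assumptions, not a description of any particular system.

    # Minimal undo mechanism: every executed action records its own reversal.
    class Editor:
        def __init__(self):
            self.text = ""
            self.history = []  # stack of undo functions, most recent last

        def insert(self, s):
            self.text += s
            self.history.append(lambda n=len(s): self._truncate(n))

        def _truncate(self, n):
            self.text = self.text[:-n]

        def undo(self):
            if self.history:
                self.history.pop()()  # run the most recent reversal

    editor = Editor()
    editor.insert("Hello, ")
    editor.insert("world")
    editor.undo()            # reverses the last insertion
    print(editor.text)       # -> "Hello, "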
A third, more interesting approach is "training wheels," which encourage novices to explore system features during the initial learning stage while protecting them from disaster (Carroll and Carrithers, 1984). They block invocation of non-elementary system features and respond with a message stating that the feature is unavailable. The "training wheels" approach effectively supports exploratory learning by reducing the amount of time users spend recovering from their errors. But they do not help the learner acquire system concepts needed for performing tasks not attempted previously (Catrambone and Carroll, 1987). Research is needed to study what users learn or do not learn from their mistakes. Another interesting question is the effect of combining the abstract model and the "training wheels" approach for providing the user with an interface for learning the system model. We hypothesize this combination will result in deeper user understanding of system concepts.
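The blocking behavior itself is simple to express; the sketch below gates commands against a set of elementary features and answers anything else with an "unavailable" message. The particular command names and message wording are assumptions for illustration.

    # "Training wheels" gating: only elementary features are allowed at first;
    # everything else is intercepted with a message instead of executing.
    ELEMENTARY = {"type", "delete", "save", "print"}

    def dispatch(command, handlers):
        if command not in ELEMENTARY:
            return f"'{command}' is not available in the training version."
        return handlers[command]()

    handlers = {"save": lambda: "document saved"}
    print(dispatch("save", handlers))    # allowed
    print(dispatch("merge", handlers))   # blocked with a message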

Discussion
An important issue of action-language design concerns trade-offs between efficiency and consistency. Keystroke consistency may increase learnability for novices but decrease efficiency for experts. This issue requires further research in understanding the user's cognitive processes for memorization and recall when interacting with a computer.
Another research issue concerns how to design an interface or suite of interfaces to satisfy all users. For example, multi-style interfaces can be created so all styles are equally functional. The user can then express the same intention in his or her preferred style. To do so, research must address questions related to how interfaces can assist users in transferring knowledge from one dialog style to another. How can one build multi-style interfaces so that mastery of one style is instrumental and perhaps sufficient to facilitate progress to another? Can users move from a style that is system-initiated to one that is user-initiated? Future research should focus on understanding cognitive processes for knowledge transfer, building on the work by Kieras, et al. (e.g., Kieras and Bovair, 1984; Kieras and Polson, 1985).

Finally, there is a need for developing principles to guide the use of speech and gesture devices. Preliminary studies have shown that users prefer these devices (Hauptmann, 1989; Weimer and Ganapathy, 1989). Effective incorporation of such devices in the action language requires further studies to assess their impact on the motor, sensory, perceptual, and cognitive processes of the user.

Presentation Language Design
The last section of the HCI framework concerns presentation language design. An important design objective is for interface displays to guide user actions (Bennett, 1983). This objective requires selecting representations that fit the user's task knowledge; the format of data produced by the system must satisfy task needs and preferences. A display's layout is to be organized so that the collective presentation of various outputs eases user perception and interpretation. Presentations also convey feedback to attract the user's attention and confirm user actions. Finally, online assistance must be designed to help users learn system operations and correct their errors.

Object representation
If the presentation is to adequately reflect the metaphors on which the system model is based, the designer must choose a display appearance that assists users in establishing the analogy between that display and the metaphors. A familiar appearance enables the user to recognize and interpret the representation easily. Examples of this principle are found in the spreadsheet-like interfaces of 1-2-3 and the electronic desktop of Star.

Icons can represent much information and be easily differentiated (Blattner, et al., 1989). An icon can be a concrete picture replicate of a familiar object, such as the trash can icon in Star. System concepts having no pictorial replicates can be depicted by abstract icons composed of geometric shapes and figures. Concrete and abstract icons may also be combined to create hybrid icons, e.g., Ix for deleting a character. Unlike concrete icons, abstract and hybrid icons must be taught to the user. Once learned, however, they are effective in conveying important system concepts.

Presentation formats: table vs. graph
Presenting results in graph or table formats to satisfy both user decision style and task requirements is of great interest to designers of decision support systems. When the task requires a large volume of data, graphs are more effective than tables for allowing the user to summarize the data (Jarvenpaa and Dickson, 1988). Graphs are also good for tasks (such as interpolation, trend analysis, and forecasting) that require identification of patterns from large volumes of data. Conversely, if the task requires pinpointing data with precision, tables are better. Tables also outperform graphs for simple production scheduling decisions. But for complex decisions, graphs are superior (Remus, 1984; 1987). Finally, combining graph and table formats can result in better decisions, albeit with slower performance, compared to using either display alone (Powers, et al., 1984).

Our understanding of the cognitive processes involved in handling tables and graphs is still limited. Johnson and Payne (1985) and Johnson, et al. (1988) demonstrate that if information is presented in a format difficult for the user to comprehend, the user may employ an easier but less effective decision strategy than one that requires more sophisticated reasoning but leads to a better result. Lohse (1991) shows that graphs and tables differ in their cognitive effort. Lohse's research is interesting because it is based on a cognitive model that includes perceptual stores, short-term memory, algorithms for discrimination and encoding, and timing parameters. The model can predict the time needed for a user to understand a graph. It can be an advisory tool for choosing formats to match task needs and has the potential to answer questions regarding how and when graphs and tables can be applied to facilitate problem solving.

Spatial layout
User productivity is enhanced when all needed information is readily available. To display as much information as possible in a limited area, the designer should consider information chunking, placement consistency, and the use of windows and 3-D displays.

Chunking
The display, partitioned into well-organized chunks that match the user's expectations and natural perception abilities, provides a basis for the user to select and evaluate actions (Mehlenbacher, et al., 1989). Chunks can be identified using the psychological techniques discussed in the system model section. The layout can be organized following Gestalt principles: the principles of proximity and closure suggest enclosing each chunk of objects in a separated area; the principle of similarity suggests using the same font or color for objects of the same chunk. Also, spatial consistency of chunks is important because memorization of location is effortless (Mandler, et al., 1977); labels can be used with chunking to improve recognition and recall (Burns, et al., 1986; Jones and Dumais, 1986).
Placement Consistency
One way proposed to reduce the time in searching menu items is arranging menus according to
frequency of use (Witten, et al., 1984). But this approach may have only a short-term advantage over a menu with fixed configuration; it may even cause slower performance because the mental effort for searching the menu increases with change and the user becomes disoriented (Somberg, 1987; Trevellyan and Browne, 1987). In the long term, a fixed configuration facilitates searching better than, or as well as, a dynamic menu. The fixed configuration lends itself to memorization, and, therefore, menu selection is effortless once it is learned by the user.
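The dynamic alternative examined in these studies is easy to state in code; the sketch below reorders a menu by observed selection counts, which is precisely the shifting layout that can disorient users. The item names are invented for illustration.

    from collections import Counter

    items = ["Open", "Save", "Print", "Export", "Close"]
    usage = Counter()

    def record_selection(item):
        usage[item] += 1

    def frequency_ordered_menu():
        # Most frequently selected items float to the top; ties keep original order.
        return sorted(items, key=lambda it: (-usage[it], items.index(it)))

    record_selection("Print")
    record_selection("Print")
    record_selection("Save")
    print(frequency_ordered_menu())  # ['Print', 'Save', 'Open', 'Export', 'Close']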
Windows and 3-D Displays
A window is a clearly defined portion of the screen that provides a working area for a particular task. Windowing has several benefits. Using multiple windows enables the user to simultaneously perform multiple tasks that may be unrelated. The content of the unfinished task in a window is preserved so the user can easily continue that task later. Windows also serve as visible memory caches for integrating information from multiple sources or monitoring changes in separate windows. These benefits collectively enable windowing to support separate but concurrent task execution.

A drawback of windowing is that operating multiple windows demands higher cognitive processes, i.e., memory, perception, and motor skills. Overuse of windows can cause information overload and loss of user control such that the user may employ an inefficient search strategy in scanning multiple windows (Hendrickson, 1989). Window manipulation is also shown to be difficult for the user, probably caused by the complexity in arranging windows (Carroll and Mazur, 1986). Users perform tasks more slowly, although more accurately, with windows (Hendrickson, 1989). Thus, operations for managing windows should be simplified. The window design should employ consistent placement and avoid overcrowded windows to ease user perception and memory load.

Also, 3-D displays can be used to accommodate and condense a large volume of data (Card, et al., 1991). A 3-D display is divided into many 3-D rooms, each used for a distinct application. The user can manipulate objects in the 3-D space to differentiate images, investigate for hidden information, and zoom in for details.

Attention and confirmation
Video and audio effects are useful in drawing a user's attention to important system responses and confirming user actions. Both are important for helping the user judge the status of his or her actions.

People typically have an orienting reflex to things that change in their visual periphery. Hence, video effects such as color, blinking, flashing, and brightness contrast can stimulate user curiosity for critical information (Benbasat, et al., 1986; Morland, 1983). Audio effects can be used to complement video effects or reveal information difficult to represent with video (Gaver, 1986; 1989). In addition, audio feedback can reduce space needs and synchronize user input with system response (Nakatani, et al., 1986).
Often there is delay between user actions and
system presentations.Inthis situation,confirmatory feedback, such as immediate cursor
response and changing shapes and shades of
icons, is useful (Bewley, et al., 1983; Gould, et
al., 1985).Similarlyusefulare progressindicators
to display the percentage of work completed.
Graphic-based progress indicators, like a
percent-donethermometeror a clock, are considered fun to use (Myers, 1985). Progress indicatorsalso aid in conductingmultipletasks. For
example, a user informedthat a long time is required for printinga document may decide to
spend that time editinganotherfile or retrieving
a cup of coffee.
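
As a purely illustrative sketch (not from the article), a text-mode "percent-done" indicator
of the kind Myers (1985) discusses can be written in a few lines; the page counts and timing
are invented.

# Illustrative sketch (hypothetical): a text-mode percent-done indicator.
import sys
import time

def show_progress(done, total, width=30):
    fraction = done / total
    filled = int(width * fraction)
    bar = "#" * filled + "-" * (width - filled)
    sys.stdout.write(f"\rPrinting document [{bar}] {fraction:4.0%}")
    sys.stdout.flush()

total_pages = 20
for page in range(1, total_pages + 1):
    time.sleep(0.1)                       # stands in for printing one page
    show_progress(page, total_pages)
print("\nDone.")
# Seeing that the job will take a while, the user can decide to edit another
# file (or fetch coffee) instead of waiting at the screen.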
Both visual and auditory cues have been shown to motivate users to explore unknown system
features (Malone, 1984). Incorporating both video and audio feedback may have a significant
impact on user learning and satisfaction. Auditory icons, or "earcons," provide intuitive ways
to use sound for presenting information to users (Blattner, et al., 1989; Gaver, 1986; 1989).
Like visual icons, auditory icons can be constructed by digitizing natural sounds with which
the user is familiar; abstract auditory icons can also be created by composing a series of
sound pitches (Blattner, et al., 1989). For example, in SonicFinder (Gaver, 1989), a wooden
sound is used for opening a file and a metal sound for opening an application, while a
scraping sound indicates the dragging of an object. Research in this area could focus on
creating game-like interfaces that are fun to learn (Carroll and Mazur, 1986) and on assisting
visually impaired users.
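
A hypothetical sketch in the spirit of the SonicFinder shows one way such a mapping could be
organized; the event names and sound files below are invented for illustration and are not
taken from Gaver's system.

# Hypothetical sketch: interface events mapped to sounds whose character
# reflects the object involved (cf. Gaver, 1989).
EARCONS = {
    ("open", "file"):        "wood_tap.wav",     # wooden sound for files
    ("open", "application"): "metal_clang.wav",  # metallic sound for applications
    ("drag", "any"):         "scrape.wav",       # scraping sound while dragging
}

def earcon_for(action, object_type):
    """Pick the most specific registered sound for an interface event."""
    return EARCONS.get((action, object_type)) or EARCONS.get((action, "any"))

def play(sound_file):
    # A real implementation would hand the file to an audio device;
    # here we only report which sound would be played.
    print(f"[audio] {sound_file}")

play(earcon_for("open", "file"))      # wood_tap.wav
play(earcon_for("drag", "folder"))    # scrape.wav (falls back to the generic rule)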

User assistance
Three types of information have been shown to be valuable for providing user assistance
(Carroll and Aaronson, 1988; Kieras and Bovair, 1984). One is "how-to-do-it" information that
defines specific action steps for operating the system. Another is "what-it-is-for"
information that elaborates on the purpose of each step; this helps users associate steps with
individual goals. Third is "how-it-works" information that explains the system model; this is
useful for advanced troubleshooting and creative use of the system. All three types of
information can be used in writing online error messages and user instructions.
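
One way to picture the three types together is as a single help record for a command, from
which messages and instructions can draw whichever level is needed. The sketch below is
hypothetical; the command and wording are invented.

# Hypothetical sketch: one help record carrying all three kinds of assistance
# information for a single command.
from dataclasses import dataclass

@dataclass
class HelpEntry:
    command: str
    how_to_do_it: str     # specific action steps
    what_it_is_for: str   # purpose, tied to the user's goal
    how_it_works: str     # system-model explanation for troubleshooting

save_help = HelpEntry(
    command="SAVE",
    how_to_do_it="Press F10, type a file name, and press Enter.",
    what_it_is_for="Keeps a permanent copy of the document you are editing.",
    how_it_works="The editor writes the working buffer to a named file on disk; "
                 "the buffer itself is unchanged, so editing can continue.",
)

# An error message or instruction can then draw on whichever level is needed:
print(save_help.how_to_do_it)
print(save_help.what_it_is_for)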

Query-in-Depth
Query-in-depth is a technique designed to provide multi-level assistance to help users at
various levels of expertise learn the system (Gaines, 1981; Houghton, 1984). Its low-level
help includes brief "how-to-do-it" and "what-it-is-for" information that instructs users'
immediate actions. If not satisfied, the user can request more advanced "how-it-works"
information for troubleshooting.
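
A minimal sketch of the idea, with invented help texts, is to let repeated requests on the
same topic return successively deeper levels of explanation.

# Minimal sketch (hypothetical) of query-in-depth: repeated help requests on
# one topic escalate from "how-to-do-it" to "what-it-is-for" to "how-it-works".
HELP_LEVELS = {
    "PRINT": [
        "How to do it: choose Print from the File menu and press Enter.",
        "What it is for: sends the current document to the default printer.",
        "How it works: the document is spooled to a print queue; printing "
        "continues in the background while you keep editing.",
    ]
}

def query_in_depth(topic, times_asked):
    levels = HELP_LEVELS[topic]
    return levels[min(times_asked, len(levels) - 1)]  # stay at the deepest level

for attempt in range(4):
    print(query_in_depth("PRINT", attempt))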

Error Correction
When novices make errors and are uncertain about what to do next, they often look for
instructions from the system message (Good, et al., 1984). Thus, error messages should
pinpoint corrective, "how-to-do-it" information and state "what-it-is-for" (Carroll and
Aaronson, 1988). In addition, immediate feedback on user errors facilitates learning better
than delayed feedback because the user can easily associate the correct action with the exact
point of error (Catrambone and Carroll, 1987). The style of error messages is also important:
they should reflect users' words, avoid negative tones, and clearly identify the portion of
the action in error (Shneiderman, 1987).
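
An illustrative sketch (not from the article) shows one way a message following these
guidelines could be composed; the command, file name, and wording are invented.

# Illustrative sketch (hypothetical): composing an error message that identifies
# the faulty part of the action, avoids a negative tone, and gives corrective
# "how-to-do-it" plus "what-it-is-for" information.
def error_message(command, bad_part, correction, purpose):
    return (f'The part "{bad_part}" of your {command} command was not recognized. '
            f"{correction} {purpose}")

# Instead of: "ERROR 37: ILLEGAL OPERAND."
print(error_message(
    command="COPY",
    bad_part="repot.txt",
    correction='Check the file name and retype it, for example "COPY report.txt A:".',
    purpose="The file name tells the system which document to copy.",
))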
Online Manuals
When users know the task they wish to perform, brief "guided exploration cards" (Catrambone
and Carroll, 1987) help users perform better than long manuals. Specific "how-to-do-it"
information can be included for novices to do complete tasks quickly in the beginning (Carroll
and Aaronson, 1988; Catrambone, 1990). In addition, instructions describing general rules of
the system model encourage novices to infer unstated details of the interface, resulting in
better user learning of the system (Black, et al., 1989).

The GOMS model described earlier can be used to create online manuals (Gong and Elkerton,
1990). To do so, the designer conducts a GOMS analysis of user tasks. The result is then
applied to organize the manual based on possible user goals; for each goal, specific
"how-to-do-it" information on methods and operators is then provided. Error avoidance and
recovery information can be included to improve user performance.
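
A hypothetical sketch of a goal-organized minimal manual, loosely in the spirit of Gong and
Elkerton (1990), illustrates the result of such an analysis; the goals, steps, and recovery
advice below are invented examples, not material from their study.

# Hypothetical sketch: a goal-organized minimal manual, where each user goal
# lists its method (operator steps) and brief error-recovery advice.
MANUAL = {
    "Print a document": {
        "steps": ["Open the document", "Choose Print from the File menu",
                  "Select a printer", "Press Enter"],
        "recovery": "If nothing prints, check that the printer is switched on "
                    "and reissue the Print command.",
    },
    "Save your work": {
        "steps": ["Press F10", "Type a file name", "Press Enter"],
        "recovery": "If the name is already in use, type a different name "
                    "or confirm that you want to replace the old file.",
    },
}

def print_manual(manual):
    for goal, entry in manual.items():
        print(f"GOAL: {goal}")
        for number, step in enumerate(entry["steps"], start=1):
            print(f"  {number}. {step}")
        print(f"  If something goes wrong: {entry['recovery']}\n")

print_manual(MANUAL)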

Discussion
In the past 10 years, engineers have created sophisticated video and audio technologies for
computer input and output. New technologies, like Virtual Reality and Speech I/O, will likely
be integrated into normal presentations. To apply them effectively, we need to better
understand how they affect the user in performing work. Studies have shown that while auditory
memory has less storage capacity than visual memory, it retains signals more than twice as
long as visual memory (Cowan, 1984). These differences in attention and memory phenomena must
be examined within the context of human-computer interaction. What is the impact on the user's
cognitive processes given that only limited capacity is available for motion and perception?
How should the various devices be integrated? What are the costs and benefits in terms of
hardware, software, user training, and actual user performance? Providing guidance in
designing video and audio interfaces is challenging but critical for HCI research in the near
future.
Windowing offers many advantages in action and presentation language design that have yet to
be explored. For example, one way to implement multi-style interfaces is to allow each style
to be operated in a separate window. Or, to adapt to a user's pattern of menu usage, a window
for the most recently used menu options, another for the most frequently used options, and a
third for the regular menu options can be used in combination. Windows are ideal for user
assistance: error messages, online manuals, or confirmatory feedback can be located in windows
separated from work dialogs. Complex tasks can also be supported by allowing subtasks in
separate windows or 3-D rooms. Again, research is needed to study how windows and 3-D rooms
can be effectively applied for these various purposes. The central issue is to understand how
they impact the user's cognitive processes, as discussed in the work by Card, et al. (1991).
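
The adaptive-menu idea above can be sketched as three coordinated views of one command set,
each of which could occupy its own window. The sketch is hypothetical; the menu items and the
sequence of selections are invented.

# Hypothetical sketch: three coordinated views of one command set -- most
# recently used, most frequently used, and the full (regular) menu.
from collections import Counter, deque

class MenuUsageTracker:
    def __init__(self, regular_menu, recent_size=3):
        self.regular = list(regular_menu)
        self.counts = Counter()
        self.recent = deque(maxlen=recent_size)

    def record(self, option):
        self.counts[option] += 1
        if option in self.recent:
            self.recent.remove(option)
        self.recent.appendleft(option)

    def panels(self, top_n=3):
        return {
            "recent": list(self.recent),
            "frequent": [opt for opt, _ in self.counts.most_common(top_n)],
            "regular": self.regular,
        }

tracker = MenuUsageTracker(["Open", "Save", "Print", "Copy", "Paste", "Find"])
for choice in ["Paste", "Copy", "Paste", "Save", "Paste", "Find"]:
    tracker.record(choice)
print(tracker.panels())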
Finally, there is a need for research in online advising. Research so far has shown that
online advising, even that provided by an expert using the Wizard-of-Oz technique, is of
limited use for the novice user (Carroll and Aaronson, 1988). The difficult issues to be
addressed are what information should be given and when, what ideas should be left to user
inference, and how to use motivational feedback to make learning enjoyable. Studies could also
explore the use of video and audio feedback in assisting the user.

Conclusion
Interfaces are complex, cybernetic-like systems that can be built quickly but are difficult to
build well. Their complexity necessitates decomposing the entire user-interface design problem
into small, manageable subproblems, along with reexamining how those subproblems interrelate
to form a whole. The framework presented in this article serves this purpose; it organizes
research findings into three major divisions: system model, action language, and presentation
language. This article reviews current HCI research findings and illuminates their practical
implications. The aim of this work is to enable HCI design practice to become more systematic
and less intuitive than it is today.
Throughout the literature, two major philosophies of interface design and research can be
identified. One is that interface design is often driven by technological advancement;
research is conducted to address problems that occur after a design is implemented. This
approach generated the mouse, voice, windows, and graphics. The other is that we still know
little about the psychological make-up of the user. The work on the psychology of HCI by Card,
et al. (1983) and Norman (1986) provides a solid theoretical beginning; much research is
needed to expand these theories so they can be useful in addressing a wide range of interface
design issues based upon user and task considerations.
Great challenges remain ahead in interface research. We should not limit ourselves to the
study of problems concerning only existing technologies. We should explore new, creative uses
of advanced technologies to know what, when, and how to apply them effectively. We can save
substantial research effort by ceasing to emphasize problems inherent in poorly developed
technologies unless they illuminate cognitive processes that will be important to interfaces
of the future (Wixon, et al., 1990).
We need to broaden research concerning how people organize, store, and retrieve concepts
(Carroll and Campbell, 1986; Newell and Card, 1985; 1986). Theories of exemplar memory,
prototype memory, episodic memory, and semantic memory are probably applicable to HCI
research. We also need to investigate psychological attributes (such as attitude and
preference), work-related factors (such as fatigue and organizational culture), and certain
physical limitations (such as hearing and vision impairment). We must study how user
interfaces should cope with the limitations imposed by varying user characteristics. More
importantly, we must focus on what aspects of user characteristics are important, how they are
related to each stage of HCI design, and when during the design stage they must be considered.
This focus ensures the applicability of research findings to design.
Finally, we must interrelate the research findings if we are to develop comprehensive theories
for the design, implementation, and testing of functional, usable, and learnable interfaces.
In this pursuit, the role of the designer in documenting his or her design rationales is
especially important. A design rationale is a record of design alternatives and an explanation
of why a specific choice is made. To further our understanding of HCI, design rationales
should be a co-product of the design process (Maclean, et al., 1989). Comparing and
contrasting the design rationales of various systems enables us to capture the range of
constraints affecting HCI design and to gain insights into why a choice works or does not
work.
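
A hypothetical sketch of what such a record might contain (cf. Maclean, et al., 1989) is
given below; the fields and the example content are invented and are not a prescription from
that work.

# Hypothetical sketch of a design-rationale record: the design question, the
# alternatives considered, the option chosen, and the reasons and assumptions.
from dataclasses import dataclass, field

@dataclass
class DesignRationale:
    question: str
    alternatives: list
    choice: str
    reasons: list = field(default_factory=list)
    assumptions: list = field(default_factory=list)   # psychological/technical assumptions

rationale = DesignRationale(
    question="How should the menu be ordered?",
    alternatives=["Fixed positions", "Ordered by frequency of use"],
    choice="Fixed positions",
    reasons=["Locations can be memorized, so selection becomes effortless",
             "Reordering was observed to disorient users"],
    assumptions=["Users rely on spatial memory for familiar menus"],
)
print(rationale.choice, "--", "; ".join(rationale.reasons))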
Some excellent exploratory work has been done in this area. For example, Wixon, et al. (1990)
propose collecting usability data in the context of user tasks to identify both general
principles and detailed guidelines for HCI design. Carroll and Kellogg (1989) and Carroll
(1990) emphasize the identification of psychological claims embodied in an interface and the
application of artifacts as bases for assessing the appropriateness of these claims. In
conclusion, data regarding user tasks, user achievement and problems, and changes in the
overall environment should be collected on a continuous basis. Assumptions about the
psychology of the user performing the task and the limitations of technology must be
explicitly stated. The collection of design rationales can then be used to develop practical
guidelines and principles, which should be repeatedly evaluated to develop theories governing
HCI design.

Acknowledgements
We are indebted to the anonymous reviewers for their considerable effort in reviewing this
article. We are particularly thankful to the associate editor, Judith Olson, for her insights
into the field of HCI. Their many recommendations contributed significantly to this article's
development.

References
Barnard, P.J. "Command Names," in Handbook of Human-Computer Interaction, M. Helander (ed.), Elsevier Science Publishers, Amsterdam, 1988, pp. 181-199.
Barnard, P.J., Hammond, N.V., Morton, J., Long, J.B., and Clark, I.A. "Consistency and Compatibility in Human/Computer Dialogue," International Journal of Man-Machine Studies (15), 1981, pp. 87-134.
Benbasat, I., Dexter, A.S., and Todd, P. "An Experimental Program Investigating Color-Enhanced and Graphical Information Presentation: An Integration of the Findings," Communications of the ACM (29:11), December 1986, pp. 1094-1105.
Bennett, J. "Analysis and Design of the User Interface for Decision Support Systems," in Building Decision Support Systems, J. Bennett (ed.), Addison-Wesley, Reading, MA, 1983, pp. 41-64.
Bewley, W.L., Roberts, T.L., Schroit, D., and Verplank, W.L. "Human Factors Testing in the Design of Xerox's 8010 STAR Workstation," Proceedings of CHI'83 Human Factors in Computing Systems, Boston, MA, 1983, pp. 72-77.
Black, J.B., Bechtold, J.S., Mitrain, M., and Carroll, J.M. "On-line Tutorials: What Kind of Inference Leads to the Most Effective Learning?" Proceedings of CHI'89 Human Factors in Computing Systems, Austin, TX, 1989, pp. 81-83.
Black, J.B. and Sebrechts, M.M. "Facilitating Human-Computer Communication," Applied Psycholinguistics (2), 1981, pp. 149-177.
Blattner, M.M., Sumikawa, D.A., and Greenberg, R.M. "Earcons and Icons: Their Structure and Common Design Principles," Human-Computer Interaction (4:1), 1989, pp. 11-44.
Bloom, C.P. "Procedures for Obtaining and Testing User-Selected Terminologies," Human-Computer Interaction (3:2), 1987-1988, pp. 155-177.
Bobrow, D.G. "Dimensions of Representations," in Representation and Understanding, D.G. Bobrow and A. Collins (eds.), Academic Press, New York, NY, 1975, pp. 1-34.
Bowden, E.M., Douglas, S.A., and Stanford, C.A. "Testing the Principle of Orthogonality in Language Design," Human-Computer Interaction (4:2), 1989, pp. 95-120.
Burns, M.J., Warren, D.L., and Rudisill, M. "Formatting Space-Related Displays to Optimize Expert and Nonexpert User Performance," Proceedings of CHI'86 Human Factors in Computing Systems, Boston, MA, 1986, pp. 274-280.
Card, S.K., Moran, T.P., and Newell, A. The Psychology of Human-Computer Interaction, Lawrence Erlbaum Associates, Hillsdale, NJ, 1983.
Card, S.K., Robertson, G.G., and Mackinlay, J.D. "The Information Visualizer: An Information Workspace," Proceedings of CHI'91 Human Factors in Computing Systems, New Orleans, LA, 1991, pp. 181-188.
Carroll, J.M. "The Adventure of Getting to Know a Computer," IEEE Computer (15:11), November 1982, pp. 49-58.
Carroll, J.M. What's in a Name, Freeman, New York, NY, 1985.
Carroll, J.M. "Infinite Detail and Emulation in an Ontologically Minimized HCI," Proceedings of CHI'90 Human Factors in Computing Systems, Seattle, WA, 1990, pp. 321-327.
Carroll, J.M. and Aaronson, A.P. "Learning by Doing with Simulated Intelligent Help," Communications of the ACM (31:9), September 1988, pp. 1064-1079.
Carroll, J.M. and Carrithers, C. "Training Wheels in a User Interface," Communications of the ACM (27:8), August 1984, pp. 800-806.
Carroll, J.M. and Campbell, R.L. "Softening Up Hard Science: Reply to Newell and Card," Human-Computer Interaction (2:3), 1986, pp. 227-249.
Carroll, J.M. and Kellogg, W.A. "Artifact as Theory-Nexus: Hermeneutics Meets Theory-Based Design," Proceedings of CHI'89 Human Factors in Computing Systems, Austin, TX, 1989, pp. 7-14.
Carroll, J.M., Mack, R.L., and Kellogg, W.A. "Interface Metaphors and User Interface Design," in Handbook of Human-Computer Interaction, M. Helander (ed.), Elsevier Science Publishers, Amsterdam, 1988, pp. 67-86.
Carroll, J.M. and Mazur, S.A. "Lisa Learning," IEEE Computer (19:11), November 1986, pp. 35-49.
Carroll, J.M. and Olson, J.R. "Mental Models in Human-Computer Interaction," in Handbook of Human-Computer Interaction, M. Helander (ed.), Elsevier Science Publishers, Amsterdam, 1988, pp. 45-65.
Carroll, J.M. and Thomas, J.C. "Metaphor and the Cognitive Representation of Computing Systems," IEEE Transactions on Systems, Man, and Cybernetics (12:2), 1982, pp. 107-116.
Catrambone, R. "Specific Versus General Procedures in Instructions," Human-Computer Interaction (5:1), 1990, pp. 49-93.
Catrambone, R. and Carroll, J.M. "Learning a Word Processing System with Training Wheels and Guided Exploration," Proceedings of CHI + GI 1987 Human Factors in Computing Systems, Toronto, Ontario, 1987, pp. 169-174.
Cohen, P.R., Dalrymple, M., Moran, D.B., Pereira, F.C.N., Sullivan, J.W., Gargan, R.A., Jr., Schlossberg, J.L., and Tyler, S.W. "Synergistic Use of Direct Manipulation and Natural Language," Proceedings of CHI'89 Human Factors in Computing Systems, Austin, TX, 1989, pp. 227-234.
Cowan, N. "On Short and Long Auditory Stores," Psychological Bulletin (96), 1984, pp. 341-370.
diSessa, A.A. "A Principled Design for an Integrated Computational Environment," Human-Computer Interaction (1:2), 1985, pp. 1-47.
diSessa, A.A. "Models of Computation," in User Centered System Design, D.A. Norman and S.W. Draper (eds.), Lawrence Erlbaum Associates, Hillsdale, NJ, 1986, pp. 201-218.
Egan, D.E. "Individual Differences in Human-Computer Interaction," in Cognitive Science and its Application for Human-Computer Interaction, H. Helander (ed.), Elsevier Science Publishers B.V., Hillsdale, NJ, 1988, pp. 543-568.
Execucom Systems Corporation. Cases and Models Using IFPS, Execucom, Austin, TX, 1979.
Fitter, M. "Towards More Natural Interactive Systems," International Journal of Man-Machine Studies (11:3), 1979, pp. 339-350.
Fowler, C.J.H., Macaulay, L.A., and Fowler, J.F. "The Relationship Between Cognitive Style and Dialogue Style: An Explorative Study," in People and Computers: Designing the Interface, P. Johnson and S. Cook (eds.), Cambridge University Press, New York, NY, 1985, pp. 186-198.
Gaines, B. "The Technology of Interaction-Dialogue Programming Rules," International Journal of Man-Machine Studies (14:1), 1981, pp. 133-150.
Gaver, W. "Auditory Icons: Using Sound in Computer Interfaces," Human-Computer Interaction (2:2), 1986, pp. 167-177.
Gaver, W.W. "The SonicFinder: An Interface that Uses Auditory Icons," Human-Computer Interaction (4:1), 1989, pp. 67-94.
Gerlach, J.H. and Kuo, F.Y. "Formal Development of Hybrid User-Computer Interfaces with Advanced Forms of User Assistance," Journal of Systems and Software (16:3), November 1991, pp. 169-184.
Gong, R. and Elkerton, J. "Designing Minimal Documentation Using a GOMS Model: A Usability Evaluation of an Engineering Approach," Proceedings of CHI'90 Human Factors in Computing Systems, Seattle, WA, 1990, pp. 99-106.
Good, M.D., Whiteside, J.A., Wixon, D.R., and Jones, S.J. "Building a User-Derived Interface," Communications of the ACM (27:10), October 1984, pp. 1032-1043.
Gould, J.D., Lewis, C., and Barnes, V. "Cursor Movement During Text Editing," ACM Transactions on Office Information Systems (3:1), January 1985, pp. 22-34.
Gould, J.D. and Lewis, C. "Designing for Usability: Key Principles and What Designers Think," Communications of the ACM (28:3), March 1985, pp. 300-311.
Grudin, J. "The Case Against User Interface Consistency," Communications of the ACM (32:10), October 1989, pp. 1164-1173.
Grudin, J. "The Computer Reaches Out: The Historical Continuity of Interface Design," Proceedings of CHI'90 Human Factors in Computing Systems, Seattle, WA, 1990, pp. 261-268.
Halasz, F.G. "Reflections on Notecards: Seven Issues for the Next Generation of Hypermedia Systems," Communications of the ACM (31:7), July 1988, pp. 836-852.
Halasz, F.G. and Moran, T.P. "Analogy Considered Harmful," Proceedings of the Conference on Human Factors in Computing Systems, Gaithersburg, MD, 1982, pp. 383-386.
Halasz, F.G. and Moran, T.P. "Mental Models and Problem Solving in Using a Calculator," Proceedings of CHI'83 Human Factors in Computing Systems, Austin, TX, 1983, pp. 212-216.
Hartson, H.R. and Hix, D. "Human-Computer Interface Development: Concepts and Systems for Its Management," Computing Surveys (21:1), March 1989, pp. 5-92.
Hauptmann, A.G. "Speech and Gestures for Graphic Image Manipulation," Proceedings of CHI'89 Human Factors in Computing Systems, Austin, TX, 1989, pp. 241-245.
Hendrickson, J.J. "Performance, Preference, and Visual Scan Patterns on a Menu-Based System: Implications for Interface Design," Proceedings of CHI'89 Human Factors in Computing Systems, Austin, TX, 1989, pp. 217-222.
Hiltz, S.R. and Kerr, E.B. "Learning Modes and Subsequent Use of Computer-Mediated Communication Systems," Proceedings of CHI'86 Human Factors in Computing Systems, Boston, MA, 1986, pp. 149-155.
Houghton, R.C. "Online Help Systems: A Conspectus," Communications of the ACM (27:2), February 1984, pp. 126-133.
Hutchins, E.L., Hollan, J.D., and Norman, D.A. "Direct Manipulation Interfaces," in User Centered System Design, D.A. Norman and S.W. Draper (eds.), Lawrence Erlbaum Associates, Hillsdale, NJ, 1986, pp. 87-124.
Jagodzinski, A.P. "A Theoretical Basis for the Representation of On-Line Computer Systems to Naive Users," International Journal of Man-Machine Studies (18), 1983, pp. 215-252.
Jarvenpaa, S.L. and Dickson, G.W. "Graphics and Managerial Decision Making: Research-Based Guidelines," Communications of the ACM (31:6), June 1988, pp. 764-774.
Johnson, E.J. and Payne, J.W. "Effort and Accuracy in Choice," Management Science (31:4), April 1985, pp. 395-414.
Johnson, E.J., Payne, J.W., and Bettman, J.R. "Information Displays and Preference Reversals," Organizational Behavior and Human Decision Processes (42), 1988, pp. 1-21.
Jones, W.P. and Dumais, S.T. "The Spatial Metaphor for User Interfaces: Experimental Tests of Reference by Location versus Names," ACM Transactions on Office Information Systems (4:1), January 1986, pp. 42-63.
Kellogg, W.A. and Breen, T.J. "Evaluating User and System Models: Applying Scaling Techniques to Problems in Human-Computer Interaction," Proceedings of CHI + GI 1987 Human Factors in Computing Systems, Toronto, Ontario, 1987, pp. 303-308.
Kieras, D.E. and Bovair, S. "The Role of Mental Knowledge in Learning to Operate a Device," Cognitive Science (8), 1984, pp. 191-219.
Kieras, D.E. and Polson, P.G. "An Approach to the Formal Analysis of User Complexity," International Journal of Man-Machine Studies (22), 1985, pp. 365-394.
Laird, J.E., Newell, A., and Rosenbloom, P.S. "SOAR: An Architecture for General Intelligence," Artificial Intelligence (33), 1987, pp. 1-64.
Landauer, T.K., Galotti, K.M., and Hartwell, S. "Natural Command Names and Initial Learning: A Study of Text-Editing Terms," Communications of the ACM (26:7), July 1983, pp. 495-503.
Lerch, F.J., Mantei, M.M., and Olson, J.R. "Skilled Financial Planning: The Cost of Translating Ideas into Action," Proceedings of CHI'89 Human Factors in Computing Systems, Austin, TX, 1989, pp. 121-126.
Lewis, C., Polson, P., Wharton, C., and Rieman, J. "Testing a Walkthrough Methodology for Theory-Based Design of Walk-Up and Use Interfaces," Proceedings of CHI'90 Human Factors in Computing Systems, Seattle, WA, 1990, pp. 235-242.
Lewis, M.W. and Anderson, J.R. "Discrimination of Operator in Problem Solving: Learning from Examples," Cognitive Psychology (17), 1985, pp. 26-65.
Lohse, J. "A Cognitive Model for the Perception and Understanding of Graphs," Proceedings of CHI'91 Human Factors in Computing Systems, New Orleans, LA, 1991, pp. 137-144.
Lotus Development Corporation. Lotus 1-2-3, Lotus Development Corporation, Cambridge, MA, 1989.
Mack, R.L., Lewis, C.H., and Carroll, J.M. "Learning to Use Word Processors: Problems and Prospects," ACM Transactions on Office Information Systems (1:3), July 1983, pp. 254-271.
Maclean, A., Young, R.M., and Moran, T.P. "Design Rationale: The Argument Behind the Artifact," Proceedings of CHI'89 Human Factors in Computing Systems, Austin, TX, 1989, pp. 247-252.
Malone, T.W. "Heuristics for Designing Enjoyable User Interfaces: Lessons from Computer Games," in Human Factors in Computing Systems, J.C. Thomas and M. Schneider (eds.), Ablex, Norwood, NJ, 1984, pp. 1-12.
Mandler, J.M., Seegmiller, D., and Day, J. "On the Encoding of Spatial Information," Memory & Cognition (5), 1977, pp. 10-16.
Mayer, R.E. "The Psychology of How Novices Learn Computer Programming," Computing Surveys (13:1), March 1981, pp. 121-141.
McDonald, J.E. and Schvaneveldt, R.W. "The Application of User Knowledge to Interface Design," in Cognitive Science and Its Application for Human-Computer Interaction, R. Guinden (ed.), Lawrence Erlbaum Associates, Hillsdale, NJ, 1988, pp. 289-338.
Mehlenbacher, B., Duffy, T.M., and Palmer, J. "Finding Information on a Menu: Linking Menu Organization to the User's Goals," Human-Computer Interaction (4:3), 1989, pp. 231-251.
Miller, L.A. and Thomas, J.C., Jr. "Behavior Issues in the Use of Interactive Systems," International Journal of Man-Machine Studies (9), 1977, pp. 509-536.
Moran, T. "An Applied Psychology of the User," Computing Surveys (13:1), March 1981, pp. 1-12.
Morland, D.V. "Human Factors Guidelines for Terminal Interface Design," Communications of the ACM (26:7), July 1983, pp. 100-104.
Mozeico, H. "A Human/Computer Interface to Accommodate User Learning Stages," Communications of the ACM (25:2), February 1982, pp. 100-104.
Myers, B.A. "The Importance of Percent-Done Indicators for Computer-Human Interfaces," Proceedings of CHI'85 Human Factors in Computing Systems, San Francisco, CA, 1985, pp. 11-17.
Nakatani, L.H., Egan, D.E., Ruedisueli, L.W., Hawley, P.M., and Lewart, D.K. "TNT: A Talking Tutor 'N' Trainer for Teaching the Use of Interactive Computer Systems," Proceedings of CHI'86 Human Factors in Computing Systems, Boston, MA, 1986, pp. 29-34.
Newell, A. and Card, S. "The Prospects of Psychological Science in Human-Computer Interaction," Human-Computer Interaction (1:3), 1985, pp. 209-242.
Newell, A. and Card, S. "Straightening Out Softening Up: Response to Carroll and Campbell," Human-Computer Interaction (2:3), 1986, pp. 251-267.
Newell, A. and Simon, H.A. Human Problem Solving, Prentice-Hall, Englewood Cliffs, NJ, 1972.
Nickerson, R.S. "Why Interactive Computer Systems Are Sometimes Not Used by the People Who Might Benefit from Them," International Journal of Man-Machine Studies (4), 1981, pp. 469-483.
Norman, D.A. "Design Rules Based on Analysis of Human Error," Communications of the ACM (26:4), April 1983, pp. 254-258.
Norman, D.A. "Cognitive Engineering," in User Centered System Design, D.A. Norman and S.W. Draper (eds.), Lawrence Erlbaum Associates, Hillsdale, NJ, 1986, pp. 31-61.
Olson, J.R. and Nilsen, E. "Analysis of the Cognition Involved in Spreadsheet Software Interaction," Human-Computer Interaction (3:4), 1987, pp. 309-349.
Olson, J.R. and Olson, G.M. "The Growth of Cognitive Modeling in Human-Computer Interaction Since GOMS," Human-Computer Interaction (5:2-3), 1990, pp. 221-266.
Olson, J.R. and Rueter, H.H. "Extracting Expertise from Experts: Methods for Knowledge Acquisition," Journal of Expert Systems (4:3), 1987, pp. 152-168.
Payne, S.J. and Green, T.R.G. "Task-Action Grammars: A Model of the Mental Representation of Task Languages," Human-Computer Interaction (2:2), 1986, pp. 93-134.
Phillips, M.D., Howard, B.S., Ammerman, H.L., and Fligg, C.M., Jr. "A Task Analytic Approach to Dialogue Design," in Handbook of Human-Computer Interaction, M. Helander (ed.), Elsevier Science Publishers, Amsterdam, 1988, pp. 835-857.
Polson, P., Muncher, E., and Englebeck, G. "A Test of a Common Elements Theory of Transfer," Proceedings of CHI'86 Human Factors in Computing Systems, Boston, MA, 1986, pp. 78-83.
Polson, P. "The Consequences of Consistent and Inconsistent Interfaces," in Cognitive Science and Its Application for Human-Computer Interaction, R. Guinden (ed.), Lawrence Erlbaum Associates, Hillsdale, NJ, 1988, pp. 59-107.
Powers, M., Lashley, C., Sanchez, D., and Shneiderman, B. "An Experimental Comparison of Tabular and Graphical Data Presentation," International Journal of Man-Machine Studies (20), 1984, pp. 545-566.
Rasmussen, J. "The Human as a System Component," in Human Interaction with Computers, H.T. Smith and T.R.G. Green (eds.), Academic Press, London, 1980, pp. 67-96.
Reisner, P. "Using a Formal Grammar in Human Factors Design of an Interactive Graphics System," IEEE Transactions on Software Engineering (7:2), March 1981, pp. 1409-1411.
Remus, W. "An Experimental Investigation of the Impact of Graphical and Tabular Data Presentations on Decision Making," Management Science (30:5), May 1984, pp. 533-542.
Remus, W. "A Study of Graphical and Tabular Displays and Their Integration with Environmental Complexity," Management Science (33:9), September 1987, pp. 1200-1204.
Sein, M.K. and Bostrom, R.P. "Individual Differences and Conceptual Models in Training Novice Users," Human-Computer Interaction (4:3), 1989, pp. 197-229.
Shiffrin, R.M. and Schneider, W. "Controlled and Automatic Information Processing: Perceptual Learning, Automatic Attending, and a General Theory," Psychological Review (84:2), March 1977, pp. 127-190.
Shneiderman, B. Designing the User Interface, Addison-Wesley, Reading, MA, 1987.
Somberg, B.L. "A Comparison of Rule-Based and Positionally Constant Arrangements of Computer Menu Items," Proceedings of CHI + GI 1987 Human Factors in Computing Systems, Toronto, Ontario, 1987, pp. 255-260.
Trevellyan, R. and Browne, D.P. "A Self-Regulating Adaptive System," Proceedings of CHI + GI 1987 Human Factors in Computing Systems, Toronto, Ontario, 1987, pp. 103-107.
Waern, Y. "Mental Models in Learning Computerized Tasks," in Psychological Issues of Human Computer Interaction in the Work Place, M. Frese, E. Ulich, and W. Dzida (eds.), Elsevier Science Publishers, Amsterdam, 1987, pp. 275-294.
Weimer, D. and Ganapathy, S.K. "A Synthetic Visual Environment with Hand Gesturing and Voice Input," Proceedings of CHI'89 Human Factors in Computing Systems, Austin, TX, 1989, pp. 235-240.
Witten, I.H., Cleary, J., and Greenberg, S. "On Frequency-Based Menu-Splitting Algorithms," International Journal of Man-Machine Studies (21), 1984, pp. 135-148.
Wixon, D., Holtzblatt, K., and Knox, S. "Contextual Design: An Emergent View of System Design," Proceedings of CHI'90 Human Factors in Computing Systems, Seattle, WA, 1990, pp. 329-336.
Young, R.M. "The Machine Inside the Machine: Users' Models of Pocket Calculators," International Journal of Man-Machine Studies (15), 1981, pp. 51-85.
Young, R.M. and Barnard, P.J. "The Use of Scenarios in Human-Computer Interaction Research: Turbo-Charging the Tortoise of Cumulative Science," Proceedings of CHI + GI 1987 Human Factors in Computing Systems, Toronto, Ontario, 1987, pp. 291-296.
Young, R.M., Barnard, P., Simon, T., and Whittington, J. "How Would Your Favourite User Model Cope with These Scenarios?" SIGCHI Bulletin (20:4), April 1989, pp. 51-55.
Young, R.M. and Whittington, J. "Using a Knowledge Analysis to Predict Conceptual Errors in Text-Editor Usage," Proceedings of CHI'90 Human Factors in Computing Systems, Seattle, WA, 1990, pp. 91-97.

About the Authors


James H. Gerlach is associate professor of information systems at the University of Colorado
at Denver. In addition to human-computer interaction, his research interests include software
engineering and EDP auditing. His work has appeared in ACM Transactions on Information
Systems, IEEE Computer, Decision Support
Systems, Journal of Systems and Software, The
Accounting Review, and Auditing. Dr. Gerlach
received an M.S. in computer science and a
Ph.D. in management, both from Purdue
University.

Feng-Yang Kuo is assistant professor of information systems in the Graduate School of
Business, University of Colorado at Denver. He received his Ph.D. in management information
systems from the University of Arizona. His research interests include human-computer
interaction, database management, office automation, and decision support systems. Dr. Kuo's
work has appeared in MIS Quarterly, Communications of the ACM, Information & Management, and
Decision Support Systems.
