Available at www.ictom.info

Conference Proceedings 2012. ISBN: 978-979-15458-4-6
www.sbm.itb.ac.id | www.cob.uum.edu.my

The 3rd International Conference on Technology and Operations Management
Sustaining Competitiveness through Green Technology Management
Bandung, Indonesia, July 4-6, 2012

A Comparative Analysis on Methods for Measuring Web Usability

Rohana Husin1,*, Nurul Huda Che Ali1, Abd Aziz Othman1

1 School of Technology Management and Logistics (STML), College of Business (CoB), Universiti Utara Malaysia (UUM), Sintok 06010, Kedah Darul Aman, Malaysia

Abstract. There are many methods for measuring web usability, each with its own strengths and limitations. This article identifies the three methods most commonly used to measure web usability. These top three methods were then applied and tested using a prototype deliberately designed with major flaws. The results show that usability testing, heuristic evaluation, and cognitive walkthrough are the top three methods, and that the usability testing method (using the think-aloud technique) is preferable to the other two in terms of the number of usability problems identified and cost effectiveness.

Keywords: Web usability; usability testing; heuristic evaluation; cognitive walkthrough

1. Introduction
Nowadays, many people have good knowledge of and experience with computers, and with the Internet in particular. Internet technology allows people to retrieve web sites that contain massive amounts of information, and new web pages are added to the World Wide Web (WWW) at a rapid rate every day (Comber, 1995). For web sites to be widely accepted and regularly visited, they should be well designed.
One of the most important aspects of web design to be considered is usability. According to Nielsen (1993), usability is a quality attribute that assesses how easy a user interface is to use. The main aim of usability is to allow users to navigate a web site easily and effectively (Petterson, 1995). Many methods for measuring web sites have been proposed in the literature, but only a few are widely used, and these methods differ from one another in terms of coverage, approach and criteria.
According to Rubin (1991), a good user interface is an optimal combination of a few good ideas, a thousand tradeoffs and a million details. Typically, designers evaluate their software in the first few design iterations, and problems are identified based on that evaluation. According to Whiteside (1998), usability testing methods have been dominant since the 1980s and remain important. At the same time, field study and heuristic evaluation methods have the following limitations (Rubin et al., 1991): (1) they require experienced user-interface people; (2) they are difficult to apply before an interface exists; (3) recommendations come late in development; (4) user-interface people are not aware of technical or practical constraints; and (5) usability testing is expensive and time consuming.

* Corresponding author. Tel.: +60-49282541; fax: +60-49286860.
E-mail address: rohana.husin@uum.edu.my


To overcome these limitations, several other evaluation methods, such as the cognitive walkthrough, software guidelines and user surveys, have been developed. Nonetheless, the relative utility and effectiveness of these methods remain largely unknown (Rubin et al., 2002). According to Perlman (1997), a practical method is one that offers a cost-effective, low-skill and low-investment approach to usability assessment.
The intentions of this research paper are:
1) To identify the top three most commonly used methods for measuring web usability.
2) To compare the methods identified in (1) in terms of:
- usability problems identified;
- cost effectiveness; and
- human factor involvement.
This paper first describes the concept of web usability, followed by a description of some existing web evaluation methods. The methods used in this study are then explained briefly, and the findings are discussed in detail. Finally, the paper ends with some suggestions for future studies.

2. Concept of Web Usability


Usability is a broad area and is defined differently by different scholars. Generally, however, usability refers to the user's ability to manipulate a site's features in order to accomplish particular goals (Preece et al., 2002). According to Webster's dictionary (1999), usability originates from the word "usable", which means capable of being used, or convenient and practicable for use (Shahizan & Norshuhada, 2003). The Institute of Electrical and Electronics Engineers (IEEE, 1990) defines usability as the ease with which a user can learn to operate, prepare inputs for, and interpret outputs of a system or component (Shahizan & Norshuhada, 2003).
According to Shahizan and Norshuhada (2003), most usability models (Nielsen, 1993; Lu & Yeung, 1998; ISO 9241-11, 1998) emphasize the importance of usability because it relates to four main aspects:
- Effectiveness relates to the accuracy and completeness of users' tasks while using a particular web site;
- Efficiency relates to users' level of performance while using a particular web site;
- Learnability relates to users' ability to learn a particular system; and
- User satisfaction refers to users' subjective assessment of how useful a particular web site is and how easy it is to use.
Therefore, highly usable web sites would eventually improve effectiveness, efficiency, learnability and user satisfaction. From these definitions, it can be concluded that usability relates to how easy it is for users to use web sites and find what they are looking for.

3. Methods for Measuring Web Usability


Evaluation is driven by questions about how well the design, or a particular aspect of it, satisfies users' needs. According to Gediga et al. (1999), evaluation techniques can be classified into two categories: descriptive evaluation and predictive evaluation.
Descriptive evaluation techniques are used to describe the status and actual problems of the software in an objective, reliable and valid way. These techniques are user-based and can be subdivided into several approaches (Gediga et al., 1999):
- Behaviour-based evaluation methods record user behavior while working with a system, producing some kind of data. The procedures include observational and thinking-aloud protocols.
- Opinion-based evaluation methods aim to elicit the users' (subjective) opinions, and include interviews and questionnaires.
- Usability testing combines behavior-based and opinion-based measures with some amount of experimental control, and includes questionnaires and interviews, observation, think-aloud and video confrontation.
Predictive evaluation techniques aim to make recommendations for future software development and to prevent usability errors (Gediga et al., 1999). These techniques are expert-based, such as:
- Usability walkthrough - a prominent example is the cognitive walkthrough, a method for evaluating a user interface by analyzing the mental processes required of a user; and

- Heuristic review - an evaluator checks the software using only a few guidelines, such as the ten usability heuristics.

There have been past efforts to compare several usability evaluation methods, including Karat et al. (1992) and Rubin et al. (1991). However, those studies concentrated on the usability of an office system and of a software product respectively. Hence, there is a need to perform a similar study applied to the web environment.

4. Methodology
This study consisted of four phases of activities:
1) Content analysis of previous studies on web usability, to identify the top three most commonly used methods for measuring web usability;
2) Prototype development;
3) Evaluation of the prototype using all identified methods;
4) Comparative analysis of the results.
In the first phase, a total of 30 to 35 previous studies on web usability reported in refereed journals and conference proceedings were identified through web and library searches. The articles and reports were then analyzed to identify the top three most commonly used methods for measuring web usability.
After identifying the top three most commonly used methods, a prototype was developed. The prototype, named Online Learning for Management of Technology (MOT) Courses (OLMOT), was used as the basis for comparing the identified methods. The interface was seeded with 10 flaws: (1) untitled pages, (2) continuous animation, (3) small fixed font size, (4) unclear headings for text, (5) redundant animation, (6) inappropriate typography and skimming layout (e.g. bold fonts and highlighted words), (7) inconsistent use of text in terms of type and colors, (8) broken links, (9) background images in the content display area, and (10) inappropriate choice of fonts in terms of readability (e.g. Times vs. Verdana).
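For concreteness, this flaw list can be written down as a simple coding scheme against which each evaluator's free-text reports are later tallied (as in Table 3 below). The following is an illustrative sketch only; the identifiers and the matching function are ours, not part of the original study:

```python
# The ten flaws deliberately seeded into the OLMOT prototype, used here
# as a coding scheme for tallying evaluator reports (illustrative only).
SEEDED_FLAWS = {
    1: "untitled pages",
    2: "continuous animation",
    3: "small fixed font size",
    4: "no heading for text",
    5: "redundant animation",
    6: "inappropriate typography",
    7: "inconsistent text type and colors",
    8: "broken links",
    9: "background image in display area",
    10: "hard-to-read fonts",
}

def code_report(report: str) -> list[int]:
    """Return the IDs of seeded flaws mentioned in a free-text report."""
    text = report.lower()
    return [fid for fid, phrase in SEEDED_FLAWS.items() if phrase in text]

print(code_report("Several broken links; page uses a background image in display area"))
# -> [8, 9]
```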
In the third phase, the top three most commonly used methods identified in the first phase were used to evaluate the prototype, with five evaluators participating for each method. The choice of five evaluators per method was based on the suggestion of Nielsen and Landauer (n.d.), who determined that the maximum benefit-cost ratio of a usability evaluation is achieved with three to five subjects (Barnum, 2002).
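For reference, this benefit-cost claim is usually derived from Nielsen and Landauer's problem-discovery model. The sketch below uses their commonly cited average detection rate (lambda ≈ 0.31), an assumption taken from the wider literature rather than a figure measured in this study:

\[
\text{Found}(n) = N\bigl(1 - (1 - \lambda)^n\bigr), \qquad
1 - (1 - 0.31)^5 \approx 0.84
\]

Under this model, five evaluators are expected to uncover roughly 84% of the N problems present in an interface, after which each additional subject contributes little new information.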
The usability testing method was applied using the think-aloud technique. The prototype was tested on its real users, i.e. the target audience: students and MOT lecturers. The evaluation sessions were conducted in the place where the real users actually perform their tasks; such tests offer excellent opportunities for observing how well the situated interface supports the users' work environment (Rubin et al., 1991). The observer gave a set of prescribed tasks, and the evaluators were encouraged to vocalize their thoughts, feelings and opinions while interacting with the prototype. The evaluators were asked to describe the usability flaws they encountered, and the problems were recorded. The observer set a time of about one hour for each session, beginning with an explanation of the purpose of the evaluation and the evaluation process.
In heuristic evaluation, user-interface specialists study the interface in depth and look for properties that they know, from experience, will lead to usability problems (Rubin et al., 1991). The goal of heuristic evaluation is to find the usability problems in a design (Nielsen, 1993). Heuristic evaluation was originally developed for evaluators who had some knowledge of usability principles, but who were not necessarily usability experts (Nielsen, 1992). For this research paper, five suitable evaluators who had knowledge of usability principles were identified before the evaluation sessions started. The evaluators were given a list of 10 usability heuristics adapted from Nielsen's 10 heuristics for the web (Barnum, 2002). The evaluators inspected all the interfaces and contents of the prototype based on those principles, and reported the amount of time they spent completing the evaluation.
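As an illustration only (the paper itself contains no code), the bookkeeping behind a heuristic evaluation can be sketched as a tally of reported problems against Nielsen's ten heuristics; the findings below are hypothetical examples, not results from this study:

```python
# Tally hypothetical heuristic-evaluation findings against
# Nielsen's ten usability heuristics (illustrative sketch).
from collections import Counter

NIELSEN_HEURISTICS = [
    "Visibility of system status",
    "Match between system and the real world",
    "User control and freedom",
    "Consistency and standards",
    "Error prevention",
    "Recognition rather than recall",
    "Flexibility and efficiency of use",
    "Aesthetic and minimalist design",
    "Help users recognize, diagnose, and recover from errors",
    "Help and documentation",
]

# Each finding pairs the violated heuristic with a short description.
findings = [
    ("Consistency and standards", "text colors vary between pages"),
    ("Consistency and standards", "mixed font types in body text"),
    ("Aesthetic and minimalist design", "redundant animation on home page"),
]

tally = Counter(heuristic for heuristic, _ in findings)
for heuristic in NIELSEN_HEURISTICS:
    print(f"{tally[heuristic]:2d}  {heuristic}")
```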
According to Nielsen and Mack (1994), a cognitive walkthrough involves simulating a user's problem-solving process at each step in the human-computer dialog, checking to see whether the user's goals and memory for actions lead to the next correct action (Preece, 2002). The cognitive walkthrough also uses experts as evaluators, but the evaluators assess the interface in the context of the tasks that users would perform. In conducting the evaluation sessions, the observer gave each evaluator a set of prescribed tasks. The evaluator and observer discussed each step after it was performed, based on these questions: (1) will the user try to achieve the right effect?; (2) will the user notice that the correct action is available?; (3) will the user associate the correct action with the effect to be achieved?; and (4) if the correct action is performed, will the user see that progress is being made toward the solution of the task? The evaluation sessions were conducted in a place where the experts actually perform their tasks. The observer set a time of about one hour for each session, beginning with an explanation of the purpose of the evaluation and the evaluation process.
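A minimal sketch of how an observer might log the four walkthrough questions per task step (the record structure and example values are ours, assumed for illustration; this is not the instrument used in the study):

```python
# Record the four cognitive-walkthrough questions for each task step.
from dataclasses import dataclass, field

CW_QUESTIONS = (
    "Will the user try to achieve the right effect?",
    "Will the user notice that the correct action is available?",
    "Will the user associate the correct action with the effect?",
    "If the correct action is performed, will the user see progress?",
)

@dataclass
class StepRecord:
    step: str
    answers: dict = field(default_factory=dict)  # question -> "yes" or "no: note"

    def failures(self) -> list[str]:
        # Any "no" answer marks a potential usability problem at this step.
        return [q for q, a in self.answers.items() if a.startswith("no")]

record = StepRecord(step="Open the OLMOT course page")
record.answers = {q: "yes" for q in CW_QUESTIONS}
record.answers[CW_QUESTIONS[1]] = "no: link label is unclear"
print(record.failures())  # -> the second question, flagged as a problem
```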
For data collection, the observer observed each evaluator in a separate evaluation session for each evaluation method, following Rubin's guideline (Rubin, 1994) for monitoring the evaluation. In this research paper, a usability problem is defined as anything that impacts ease of use, from a core dump to a misspelled word. Evaluators were encouraged to report problems they found even if the method being used did not lead them to the problem. All comments from the evaluators were considered data and were useful for making a comparison between the methods.
Finally, in the fourth phase, the results of the evaluations were compared and analyzed based on three criteria (Karat et al., 1992):
- usability problems identified;
- cost effectiveness; and
- human factor involvement.

5. Findings
5.1. Identifying the Top Three Most Commonly Used Methods for Measuring Web Usability
A total of 35 past studies on web usability were selected and analyzed. The types and numbers of methods for measuring web usability found in these 35 studies are shown in Table 1, and the methods used in each study are summarized in Table 2.
Table 1: Method, number of studies and researchers

Usability Testing (14 studies): Byun et al. (2011), Cooke (2010), Olmsted-Hawala et al. (2010), Bakken (2010), Moritz (2010), Olsen et al. (2010), Bastien (2009), Zahariah et al. (2009), Barravale et al. (2003), Grady (2001), No Author (2001), Soon et al. (2001), Kutti (2001), Scholtz (2001).
Heuristic Evaluation (9 studies): Omar et al. (2010), Thyvalikakath et al. (2009), Delice et al. (2009), Tan et al. (2008), Panille et al. (2008), Hvanberg et al. (2005), Catherine (2003), Myra et al. (2003), Athanasis et al. (2001).
Cognitive Walkthrough (6 studies): Fang (2009), Jaspers (2007), Mahatody et al. (2010), Huges et al. (2003), Huges et al. (2002), Clayton et al. (2000).
Focus Group (3 studies): Moreno et al. (2010), Cowley et al. (2011), Gwyneth et al. (2002).
Benchmarking (2 studies): Kim et al. (2003), Christy (1998).
Classic Experiment (1 study): Penny et al. (1999).

Table 2: Summary of the methods used in the reviewed web usability studies

Usability Testing:
- An AHP method for evaluating usability of electronic government portals (Byun et al., 2011): proposes an Analytic Hierarchy Process (AHP) method to assess e-government website usability, and shows the evaluation and ranking of the portals of the Australian state governments.
- Assessing concurrent think-aloud protocol as a usability test method: a technical communication approach (Cooke, 2010): discusses the concurrent think-aloud (CTA) protocol used in usability test settings to gain insight into participants' thoughts during their task performance.
- Think-aloud protocols: a comparison of three think-aloud protocols for use in testing data-dissemination web sites for usability (Olmsted-Hawala et al., 2010): describes an empirical, between-subjects study on the use of think-aloud protocols in usability testing of a federal data-dissemination web site.
- Web-based education for low-literate parents in Neonatal Intensive Care Unit: development of a website and heuristic evaluation and usability testing (Bakken, 2010): reports the development of a website for low-literate parents in the Neonatal Intensive Care Unit (NICU), and the findings of a heuristic evaluation and usability testing of this website.
- Mobile web usability evaluation - combining the modified think aloud method with the testing of emotional, cognitive and conative aspects of the usage of a web application (Moritz, 2010): proposes a usability testing method that alters a given usability testing method to make it less costly and time consuming for the investigator.
- Comparing different eye tracking cues when using the retrospective think aloud method in usability testing (Olsen et al., 2010): presents the results of a study comparing the outcomes of four different retrospective think-aloud (RTA) methods in a web usability study.
- Usability testing: a review of some methodological and technical aspects of the method (Bastien, 2009): reviews work conducted in the field of user testing that aims at specifying or clarifying test procedures and at defining and developing tools to help conduct user tests.
- Developing a usability evaluation method for e-learning applications: beyond functional usability (Zahariah et al., 2009): describes the development of a questionnaire-based usability evaluation method for e-learning applications.
- Remote web usability testing (Barravale et al., 2003): introduces Open Web Survey, a software system for remote usability testing that can record users' behavior while they surf the Internet.
- Web site: a case study in usability testing using paper prototype (Grady, 2001): discusses the benefits of using a paper prototype to conduct usability testing of a web site for Mercer University.
- The ergonomics of hypertext narrative: usability testing as a tool for evaluation and redesign (No Author, 2001): performs usability testing on texts designed for pleasure reading, such as hypertext narrative.
- Web usability test findings and analysis issues (Soon et al., 2001): discusses the findings and data analysis issues resulting from a usability test case, conducted as part of a redesign project aimed at updating the existing homepage of Indiana University Bloomington.
- Virtual prototype in usability testing (Kutti, 2001): describes a study of a three-dimensional virtual prototype intended for usability testing and concept validation over the Internet.
- Adaptation of traditional usability testing methods for remote testing (Scholtz, 2001): discusses an approach to designing a remote usability testing method adapted from traditional usability testing.

Heuristic Evaluation:
- Development and potential analysis of Heuristic Evaluation for Courseware (Omar et al., 2010): proposes the development of Heuristic Evaluation for Courseware (HECW), addressing the important elements of courseware that must be considered in order to develop usable and acceptable courseware.
- Comparative study of heuristic evaluation and usability testing methods (Thyvalikakath et al., 2009): compares the results of a heuristic evaluation with those of formal user tests in order to determine which usability problems are detected by both methods.
- Web evaluation: heuristic evaluation vs. user testing (Tan et al., 2008): compares the efficiency and effectiveness of user testing and heuristic analysis in evaluating four different commercial web sites.
- The usability analysis with heuristic evaluation and analytic hierarchy process (Delice et al., 2009): discusses the usability problems of web sites in detail; heuristic evaluation (HE) is used to identify the usability problems, and the Analytic Hierarchy Process (AHP) is used to rate their severity.
- Heuristic evaluation for games: usability principles for video game design (Panille et al., 2008): introduces a new set of heuristics that can be used to carry out usability inspections of video games, developed to help identify usability problems in both early and functional game prototypes.
- Heuristic evaluation: comparing ways of finding and reporting usability problems (Hvanberg et al., 2005): presents an empirical study of a framework comparing two sets of heuristics, Nielsen's heuristics and the cognitive principles of Gerhardt-Powals.
- A framework for evaluation of session reconstruction heuristics in web usage analysis (Myra et al., 2003): evaluates the performance of heuristics employed to reconstruct sessions from server data logs.
- Usability evaluation of an NHS library web site (Catherine, 2003): reports a usability evaluation of the recently launched South London and Maudsley NHS Trust library web site; the standard methodologies employed included heuristic evaluation.
- Heuristic evaluation of web sites: the evaluator expertise and the heuristic list (Athanasis et al., 2001): suggests a heuristic approach to the evaluation of web sites.

Cognitive Walkthrough:
- State of the art on the Cognitive Walkthrough method, its variants and evolutions (Mahatody et al., 2010): discusses interactive system evaluation from the perspective of inspection methods, specifically the Cognitive Walkthrough (CW) method.
- Adaptation of Cognitive Walkthrough in response to the mobile challenge (Fang, 2009): discusses the use of the cognitive walkthrough during the early stages of mobile device development.
- A comparison of usability methods for testing interactive health technologies: methodological aspects and empirical evidence (Jaspers, 2007): describes two expert-based and one user-based usability methods: the heuristic evaluation, the cognitive walkthrough, and the think-aloud.
- Cognitive Walkthrough for the web (Huges et al., 2002): proposes transforming the cognitive walkthrough into a cognitive walkthrough for the web, a method better suited to measuring how well a web site supports user navigation and information search tasks.
- Repairing usability problems identified by the cognitive walkthrough for the web (Huges et al., 2003): reports a series of two experiments that develop and demonstrate the effectiveness of both full-scale and quick-fix repair methods for the cognitive walkthrough for the web.
- A study of usability of web-based software repositories (Clayton et al., 2000): presents a study of the usability of web-based software repositories using the cognitive walkthrough usability inspection method.

Focus Group:
- A quality evaluation methodology for health-related websites based on a 2-tuple fuzzy linguistic approach (Moreno et al., 2010): presents a qualitative and user-oriented methodology for assessing the quality of health-related websites based on a 2-tuple fuzzy linguistic approach; to identify the set of quality criteria, qualitative research was carried out using the focus group technique.
- Qualitative data differences between a focus group and online forum hosting a usability design review (Cowley et al., 2011): reviews a new test method, the Internet forum, and compares data outcomes related to participant participation between this forum and the traditional focus group.
- User perceptions of the library's web pages: a focus group study at Texas A&M University (Gwyneth et al., 2002): presents a focus group study at Texas A&M University which explored library patrons' opinions about the library web pages.

Benchmarking:
- Recommendations for benchmarking web site usage among academic libraries (Christy, 1998): presents a study of web site usage by academic libraries; the benefits of measuring web site usage and recommendations for benchmarking web site usage among academic libraries are proposed.
- Web site design benchmarking within industry groups (Kim et al., 2003): determines the differences in web site design between industries, providing information for the benchmarking process.

Classic Experiment:
- A web accessible tutorial for PsyScope based on classic experiments in human cognition (Penny et al., 1999): describes a web-based tutorial for PsyScope, a graphically oriented, script-based program for the control of experiments, based on classic experiments in human cognition using the Macintosh computer.


5.2. Comparative Analysis of the Results

The data were analyzed, and the results were then compared based on the following three criteria (Karat et al., 1992):
- Usability problems identified - each method is compared in terms of the number of usability problems identified.
- Cost effectiveness - the relative cost effectiveness of each method is compared. The analysis focuses on the time and cost required to complete the three phases of evaluation: preparation, administration of sessions, and data analysis.
- Human factor involvement - the amount of human factors involvement required by each method. The analysis focuses on the people involved in administering the sessions and on how usability problems were documented.

5.2.1. Problem Identification

Descriptions of the usability flaws, together with the evaluators' comments, were recorded during the evaluation sessions. The data were interpreted and analyzed based on the three criteria mentioned previously, to identify the number of flaws or usability problems found by each method and to determine which method identified the largest number of usability problems. Table 3 shows the results.
Table 3: Total problems identified, by flaw type and evaluation method

Type of Usability Flaw                             Usability   Heuristic    Cognitive
                                                   Testing     Evaluation   Walkthrough
1) Untitled pages                                  0           3            0
2) Continuous animation                            2           3            2
3) Small fixed font size                           0           1            1
4) No heading for text document                    2           3            1
5) Redundant animation                             0           0            0
6) Inappropriate typography and skimming layout    2           1            3
7) Inconsistent use of text type and colors        5           5            5
8) Broken links                                    5           0            2
9) Background image in display area                5           1            2
10) Inappropriate, hard-to-read fonts              4           4            4
Total                                              25          21           20

The table shows that, among the three groups of evaluators, the usability testing evaluators found the largest number of usability problems, about 50% of the total. The heuristic evaluation and cognitive walkthrough evaluators found 42% and 40% of the total respectively. The results are summarized graphically in Figure 1; the arithmetic behind the percentages is shown below.
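The percentages follow from the maximum possible number of detections per method, 10 seeded flaws x 5 evaluators = 50; this denominator is implied by the reported figures rather than stated explicitly in the text:

\[
\frac{25}{50} = 50\%, \qquad \frac{21}{50} = 42\%, \qquad \frac{20}{50} = 40\%
\]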

Figure 1: Percentage of usability problems found by each method

5.2.2. Cost Effectiveness

Table 4 shows the relative cost-effectiveness data for the three methods. The analysis includes the time and cost required to complete the evaluation sessions, covering:
- preparation of all materials (task sheets, server);
- getting/approaching the subjects;
- administration of the sessions (scheduling participants/evaluators for a specific day and time); and
- data analysis (analysis of problem identification, and of cost effectiveness in terms of the cost and time required to complete the evaluation sessions).


Table 4: Cost-effectiveness data

- Task sheet preparation: Usability Testing - yes (2 days); Heuristic Evaluation - not needed; Cognitive Walkthrough - yes (2 days).
- Approaching the subjects: Usability Testing - real users (2 days); Heuristic Evaluation - interface design specialists (4 days); Cognitive Walkthrough - experts (6 days).
- Administration of sessions: Usability Testing - yes, participants scheduled for a specific day and time; Heuristic Evaluation - not necessary; Cognitive Walkthrough - yes, participants scheduled for a specific day and time.
- Data analysis: Usability Testing - easy to analyze; Heuristic Evaluation - difficult to analyze; Cognitive Walkthrough - easy to analyze.
- Evaluation handling (computer performance, speed, browser, platform, evaluator understanding): in the heuristic evaluation, each evaluator assessed the interface in a different room, so computer performance may have differed.

According to Table 4, usability testing appears to be the best method in terms of getting/approaching the subjects (participants), analyzing the data, identifying the problems, and the average time spent completing the tasks (see Figure 2).
In the heuristic evaluation, by contrast, the evaluators assessed the prototype based on the 10 usability principles introduced by Nielsen. Each evaluator worked alone, so no administration session occurred. Getting/approaching the subjects took 4 days, since each evaluator needed to know at least the 10 usability heuristics. The data were difficult to analyze because the evaluators inspected all elements of the prototype contents, and the observer had to determine which of the reported problems related to the 10 seeded flaws.
For the cognitive walkthrough, the observer needed to prepare the task sheet. Getting/approaching the subjects took 6 days, longer than for the other methods, because it was difficult to find evaluators well trained in cognitive psychology. It was also very time consuming for the evaluators to complete the given tasks (see Figure 2).
Table 5: Usability Testing - time spent by each evaluator

Evaluator            Time start   Time stop   Total time
Evaluator 1          10.15        10.30       15 minutes
Evaluator 2          11.00        11.20       20 minutes
Evaluator 3          10.20        10.45       30 minutes
Evaluator 4          11.00        11.30       30 minutes
Evaluator 5          12.30        12.50       20 minutes
Average time spent                            23 minutes

Table 6: Heuristic Evaluation-time spent for each evaluator


Evaluators
Evaluator 1
Evaluator 2
Evaluator 3
Evaluator 4
Evaluator 5
Average time spent

Time starts Time stop


3.30
4.10
10.55
11.50
10.30
11.05
12.30
1.20
3.50
4.30

Total time
40 minutes
60 minutes
35 minutes
50 minutes
40 minutes
45 minutes

Table 7: Cognitive Walkthrough - time spent by each evaluator

Evaluator            Time start   Time stop   Total time
Evaluator 1          4.45         5.30        45 minutes
Evaluator 2          1.20         1.45        30 minutes
Evaluator 3          3.30         4.10        35 minutes
Evaluator 4          10.00        10.50       50 minutes
Evaluator 5          12.30        1.30        60 minutes
Average time spent                            44 minutes
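As a quick arithmetic check of the reported averages, the per-evaluator totals from Tables 5-7 can be recomputed directly (a sketch; the variable names are ours, not from the paper):

```python
# Recompute the average session time per method from Tables 5-7.
times = {
    "Usability Testing":     [15, 20, 30, 30, 20],
    "Heuristic Evaluation":  [40, 60, 35, 50, 40],
    "Cognitive Walkthrough": [45, 30, 35, 50, 60],
}

for method, minutes in times.items():
    average = sum(minutes) / len(minutes)
    print(f"{method}: {average:.0f} minutes")  # 23, 45 and 44, as reported
```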


Figure 2 shows graphically the average time spent by evaluators to complete the given tasks under the three methods. The results suggest that the usability testing method is the most cost effective in terms of the time required to complete the evaluation sessions.

Figure 2: The average time spent for each method

5.2.3. Human Factor Involvement

Human factor involvement concerns the amount of people effort required to administer the sessions and the way usability problems were documented. The usability testing and cognitive walkthrough sessions differed in the human factor involvement during the session and in how the usability problems were documented. According to Karat et al. (1992), in usability testing two usability engineers administered each session in its entirety with an individual user: one person in the control studio interacted with the user, controlled the videotape equipment and observed usability problems, while the second person logged user comments, usability problems, time on task, and task success or failure.
The cognitive walkthrough, however, utilized fewer staff than usability testing, particularly in terms of the testing resources required by the method. One administrator was available on call during the session in case of unexpected events. A few sample sessions were videotaped and observed by human factors staff; no session logging occurred. During the evaluation session, one administrator introduced the session and instructed the cognitive walkthrough evaluator to perform the documented tasks.
This research project conducted the usability testing and cognitive walkthrough in separate sessions for each evaluator, but in a controlled environment, and no administration session occurred for the heuristic evaluation. In the usability testing and cognitive walkthrough, the sessions were administered by the observer. No usability facilities such as video cameras and videotape were used, because of cost and time constraints.

6. Conclusions
This study found that, when the three methods were tested on a prototype specifically developed with major usability flaws, usability testing (with the think-aloud technique) appeared to be the best, for the following reasons:
- it found the largest number of usability problems;
- its data were easy to analyze;
- samples of real users were easy to obtain; and
- less time was required to complete the evaluation sessions.
Previous studies also show that usability testing is the best method for identifying serious or major flaws, compared to heuristic evaluation and the cognitive walkthrough (Rubin et al., 1991; Karat et al., 1992; Galitz, 2002). The heuristic evaluation method is better than the cognitive walkthrough in terms of usability problem identification, but the heuristic data were more difficult to analyze than the usability testing and cognitive walkthrough data. The cognitive walkthrough can also identify usability problems, but it is very time consuming.

7. Problems and Limitations


This research project's main objectives were to identify the top three most commonly used methods for measuring web usability and to perform a comparative analysis of those methods in terms of problem identification, cost effectiveness and human factor involvement. In completing this research paper, however, some problems and limitations were encountered:
- The time allocated for completing this research project was very short; as such, the observer could not spend more time searching for previous studies on web usability.


- It was difficult and time consuming for the observer to bring all participants together and conduct the sessions in one lab.
- The observer conducted the sessions in different rooms, with computers of different specifications and performance.
- The research project was accomplished on a small scale, without involving many users or equipment such as video cameras, videotape recorders and adjustable lighting.
- The results for the cognitive walkthrough may be less accurate because the evaluators were not trained in cognitive psychology; according to Barnum (2002), untrained evaluators produce poor results. In this study, it was hard to find experts trained in cognitive psychology, especially for the cognitive walkthrough, because of time and cost constraints.

Acknowledgements
In the name of Allah, the Most Gracious and Most Merciful, we would like to extend our thanks and gratitude to all individuals involved in the accomplishment of this research project.

References
[1] Athanasis, K. & Andreas, P. (2001). Heuristic Evaluation of Web-Sites: The Evaluator Expertise and Heuristic List. World Conference on WWW and Internet Proceedings, Orlando.
[2] Avouris, N.M. (n.d.). An Introduction to Software Usability. Retrieved March 3, 2004 from http://www.ee.upatras.gr/hci/usabilitynet/5Avouris_intro_in_usability.pdf.
[3] Bakken, S. & Choi, J. (2010). Web-based Education for Low-literate Parents in Neonatal Intensive Care Unit: Development of a Website and Heuristic Evaluation and Usability Testing. International Journal of Medical Informatics, 79(8), pp. 565-575.
[4] Barnum, C. (2002). Usability Testing and Research. New York: Longman.
[5] Barravale, A. & Lanfranchi, V. (2003). Remote Web Usability Testing. Behavior Research Methods, Instruments, & Computers, 35(7), 364-368.
[6] Bastien, J.M.C. (2010). Usability Testing: A Review of Some Methodological and Technical Aspects of the Method. International Journal of Medical Informatics (Human Factors Engineering for Healthcare Applications Special Issue), 79(4), pp. e18-e23.
[7] Battleson, T., Booth, A. & Weintrop, J. (2001). Usability Testing of an Academic Library Web Site: A Case Study. The Journal of Academic Librarianship, 27(3), 188-198.
[8] Benny, A. (2002). Web Site Development Process - The Life-Cycle Steps. Retrieved March 27, 2004 from http://www.macronimous.com/resources/web_development_lifecycle.asp.
[9] Byun, D. & Finnie, G. (2011). An AHP Method for Evaluating Usability of Electronic Government Portals. Electronic Government, an International Journal, Volume 8.
[10] Cooke, L. (2010). Assessing Concurrent Think-Aloud Protocol as a Usability Test Method: A Technical Communication Approach. IEEE Transactions on Professional Communication, 53(3), 202-215.
[11] Cowley, J.A. & Radford-Davenport, J. (2011). Qualitative Data Differences between a Focus Group and Online Forum Hosting a Usability Design Review: A Case Study. Proceedings of the Human Factors and Ergonomics Society Annual Meeting, 55(1), pp. 1356-1360.

Cite this paper

Husin, R., Ali, N.H.C., and Othman, A.A. (2012). A Comparative Analysis on Methods for Measuring Web Usability. Proceedings of The 3rd International Conference on Technology and Operations Management: Sustaining Competitiveness through Green Technology Management, Bandung, Indonesia (July 4-6), pp. 603-612. ISBN: 978-979-15458-4-6.
