
Healthcare Education with Virtual-World Simulations

David Chodos1, Eleni Stroulia1, Patricia Boechler2, Sharla King2, Pawel Kuras1,
Michael Carbonaro2, Erik de Jong2

1 Department of Computer Science
University of Alberta
221 Athabasca Hall
Edmonton, Alberta, Canada, T6G 2G5
{chodos, stroulia, pkuras}@cs.ualberta.ca

2 Department of Educational Psychology
University of Alberta
5-147 Education North
Edmonton, Alberta, Canada, T6G 2G5
{patricia.boechler, sharla.king}@ualberta.ca,
{mike.carbonaro, eadejong}@ualberta.ca

ABSTRACT

Becoming a skilled professional requires the acquisition of theoretical knowledge and the practice of skills under the guidance of an expert. The idea of learning-through-apprenticeship is long accepted in medicine and, more generally, in the health sciences, where practicum courses are an essential part of most curricula. Because of the high cost of apprenticeship programs (mentors can usually supervise only a few trainees, and trainees may need long apprenticeship periods), simulation has long been adopted as a learning-by-doing training method that can supplement apprenticeship in many professional and engineering programs, including the health sciences. In this paper, we describe our experience developing virtual world-based training systems for two healthcare contexts. In one, procedural training was emphasized, while the other focused on teaching communication skills. In each case, we developed a custom set of tools to meet the needs of that context. We present an analysis of the case studies, and lessons drawn from this analysis.


Categories and Subject Descriptors


K.3.1 [Computers and Education]: Computer Uses in Education - Collaborative learning

General Terms
Design

Keywords
Virtual worlds, medical education

1. INTRODUCTION
Becoming a skilled professional requires both the acquisition of
theoretical knowledge and the practice of skills under the
guidance of an expert. It is through the act of apprenticing with an
expert that a student learns how the theory applies to practice. The
idea of learning-through-apprenticeship is long accepted in
medicine and, more generally, in the health sciences, where
practicum courses are an essential part of most curricula.

The apprenticeship model is inherently costly. The number of trainees that an expert can mentor at any point in time is limited. Moreover, in order to reach a high quality of training, the length of the apprenticeship period has to be such that the trainee experiences most of the variance of the professional practice he or she is about to encounter. Because of these limitations, simulation has long been adopted as a learning-by-doing training method that can supplement apprenticeship in many professional and engineering programs, including the health sciences.

Simulation-based training has been used extensively in medical training for a long time. Recently, Eder-Van Hook proposed a method of classifying medical simulation technology according to its fidelity [14]:
1. Low-tech simulators: models or mannequins used to practice simple physical maneuvers or procedures.
2. Simulated/standardized patients: actors, trained to role-play patients, with whom students interact to practice their skills of history taking, conducting physical examinations, and communication.
3. Screen-based computer simulators: programs used to train and assess clinical knowledge and decision making, e.g., perioperative critical incident management, problem-based learning, physical diagnosis in cardiology, and acute cardiac life support.
4. Complex task trainers: computer-driven physical models of body parts and environments that offer high-fidelity visual, audio, and touch cues and replicate a clinical setting, e.g., ultrasound, bronchoscopy, cardiology, laparoscopic surgery, arthroscopy, sigmoidoscopy, and dentistry.
5. Realistic patient simulators: computer-driven, full-length mannequins that simulate anatomy and physiology, enabling handling of complex and high-risk clinical situations.

The emergence of virtual worlds that offer a rich, immersive communication and collaboration experience and can easily be integrated with existing systems gives rise to opportunities for novel types of simulations (somewhere between the second and third types of the above taxonomy). A simulation-based training system in a virtual world can provide the student with a safe, realistic environment in which to practice, while requiring fewer resources than real-life techniques such as standardized-patient-based training or running scenarios with actors.


Permission to make digital or hard copies of all or part of this work for
personal or classroom use is granted without fee provided that copies are
not made or distributed for profit or commercial advantage and that
copies bear this notice and the full citation on the first page. To copy
otherwise, or republish, to post on servers or to redistribute to lists,
requires prior specific permission and/or a fee.
SEHC '10, May 3-4, 2010, Cape Town, South Africa.
Copyright 2010 ACM 978-1-60558-973-2/10/05 ...$10.00.

However, the development of a virtual world-based training


system for a healthcare context poses several challenges, such as
ensuring that system requirements are well understood,
developing auxiliary components to meet these needs, and
creating an environment that is sufficiently realistic to provide the
students with the necessary sense of presence and immersion.



In this paper, we describe two case studies of virtual world


training systems developed for healthcare education contexts. In
the first study, we have developed a system for training EMTs
(Emergency Medical Technicians) in accident rescue procedures
and in the process of handing off the patient to ER (Emergency
Room) personnel who are also trained in the procedure of
receiving and assessing a patient. This training system
encompasses both procedural training as well as the
communication skills required to successfully execute the relevant
processes. In the second case study, we have developed a virtual
world-based component for a first-year communication skills
course for all health-science students. In this case, the students are
placed in a virtual medical interview setting, and either interview
a patient or critically observe other students' interviews. This case study, in keeping with the aims of the course, focused on developing students' communication skills.


The remainder of the paper is organized as follows. Section 2


reviews relevant literature on virtual world-based training and
educational psychology. Section 3 describes the two case studies
in detail. Section 4 presents an analysis of the case studies, and
some lessons learned from these experiences. Section 5 reviews
some of the interesting software-engineering problems involved in
building software systems around virtual worlds. Finally, Section
6 presents several promising areas for future work.


2. RELATED WORK


The issue of using virtual worlds for education and training has
received an increasing amount of attention from the academic
community in recent years, as virtual worlds have become better established in both mainstream culture and in educational
institutions. The following sections present a sample of this work,
which indicates both the steadily increasing interest in the topic,
and the variety of approaches that are being taken.


2.1 Established Projects


Hong Cai, of IBM, has taken a broad view of the issue, examining
the potential of virtual worlds for any kind of training program
[6]. He compared several virtual worlds - Second Life, Active
Worlds, OpenSim, and the Torque game engine - in terms of their
fitness for educational activities, and analyzed various common
learning activities with respect to their implementation in a virtual
environment. He also presented a development lifecycle for
creating virtual learning environments, and analyzed several
virtual learning projects at IBM according to these measures.
Edward Carpenter, along with several colleagues at Purdue
University, has developed a 3D crisis-communication training tool
to provide communication students with opportunities to practice
what are, in a standard classroom setting, largely theoretical
approaches to dealing with crises [10]. Through the immersive
tool, students get hands-on training, and can experience events,
rather than absorbing and interpreting them through written
information. The tool uses facial modeling for virtual characters, a
range of story settings, and VR-based user interface devices (a
head tracker and wand) to provide an immersive experience for
the student. The tool uses a narrative, storyboard-based technique
to deliver the educational content, where each student is offered a
set of choices at key points in the story. Afterwards, the students
are debriefed and the instructor analyzes and evaluates their
choices. Because the system uses storyboards to structure the
educational content, a student's interaction with the system is
largely pre-determined and quite rigid. As well, the system does not support collaborative learning, since it is meant to be used by one student at a time.

Victor Vergara and colleagues at the University of New Mexico have developed a virtual environment-based tool to teach medical students about hematomas [37]. They have developed a 3D, multi-user virtual environment (MUVE) within which students can interact with a virtual character, nicknamed Mr. Toma, and other associated objects. Several rigorous studies of the system's effectiveness have demonstrated that it is as effective as conventional, paper-and-pencil education methods. Furthermore, it offers additional advantages, including the chance to collaborate with geographically dispersed students, and an increased sense of immersion when using the MUVE system. A considerable amount of effort was put into ensuring that the content was presented accurately and effectively, including consulting with an interdisciplinary team of subject matter experts.

Randolph Jackson and Eileen Fagan, at the University of Washington, have developed a VR-based world, called Global Change World (GCW), to teach high school students about global warming [17]. This project explored the ways in which students interacted within the virtual environment, and the overall educational effectiveness of the GCW project. It was found that students were able to communicate effectively, and that the GCW project supported collaborative learning.

Finally, Forterra Systems has developed the On-Line Interactive Virtual Environment (OLIVE) platform, which allows clients from government, healthcare and other contexts to create virtual world-based systems [3]. One such system was developed to train first responders to car accidents on the Interstate 95 Corridor. A prototype system was developed at the University of Maryland, and tested with a small number of potential students.

While the preceding projects are quite varied in terms of the technology used and the context areas to which the projects were applied, there are several common characteristics that should be pointed out. First, each project found that the students' educational needs, whether those of high school students, undergraduates, professionals, or medical students, were met by the virtual world-based projects. This offers evidence that this type of system is effective in a wide range of contexts, and with a broad range of students. Second, with the exception of the Interstate 95 Corridor project undertaken by Forterra Systems, most projects were created by computing-science researchers for use in a single context area. The Forterra system, while based on the modular, generally applicable OLIVE platform, required the expertise of researchers from the University of Maryland to implement the system, and even then was only usable by a handful of students. This indicates that there is a need for a broadly applicable framework for virtual world-based training programs. This framework should enable the creation and maintenance of learning modules by non-technical content experts.
2.2 Emerging Projects


The following projects are currently being developed within
virtual worlds, such as Second Life, to educate healthcare
professionals.
From the healthcare simulation field, the most relevant and well-developed project in Second Life is Second Health, a healthcare
simulation project at Imperial College, London [33]. This project



simulates several key points of care in a proposed model for the


British healthcare system, including a hospital and a clinic. The
project includes a number of virtual representations of healthcare-related objects, such as hospital beds, IV poles and crash carts, and allows the user to take actions such as checking the patient's pulse and listening to the patient's breathing. However, there is no clinical-skills training component in the project. Rather, it is
intended to provide stakeholders with an interactive view of the
previously mentioned proposed healthcare model.

Finally, Heinrichs et al. described the characteristics of a disaster


training system, and assessed the system's effectiveness [15]. The
paper presents three virtual world-based training scenarios:
individual trauma cases, disaster preparedness training, and mass
casualties after a disaster. The paper finds that the scenarios are
lifelike enough for students to be able to suspend disbelief, and
intuitive enough to be easily learned. This allows repeated
practice opportunities in dispersed locations with uncommon, life-threatening trauma cases in a safe, reproducible, flexible setting.

The Ann Myers Medical Centre, supported by Sprott Shaw


Community College in British Columbia, takes a slightly different
approach to medical education in a virtual environment [1]. The
AMMC focuses on providing a well-established meeting place for
medical educators and students, in order to facilitate educational
sessions in a virtual environment. Recent sessions have covered
topics such as molecular oncology, post-traumatic stress disorder,
H1N1, and osteosarcoma. The AMMC meeting space includes
some medical equipment and screens for showing presentation
slides and medical images. While the goal of the project,
providing training in a virtual world, is similar to that of our
research, this project seeks to reach that goal through lectures and
presentations, rather than process simulation.

2.3 Educational Psychology


From an educational psychology standpoint, simulation-based
training is supported by Situated Cognition Theory, proposed by
Brown et al [5]. According to this theory, knowledge is not a set
of abstract concepts to be absorbed by the student; instead, it is
dependent on the context and culture in which it is used. Adhering
to situated cognition principles, Collins et al developed the
Cognitive-Apprenticeship Model of educational practice, which
incorporates the situated nature of the knowledge being conveyed
to students [12]. According to this model, learning needs to occur
in an embedded manner within the relevant context. Instruction
should involve an authentic activity with active, participatory
learning. There are three critical activities involved in the
cognitive-apprenticeship model: modeling, scaffolding and
reflection [18]. Modeling refers to student observation of an
expert enacting an authentic and relevant situation in the early
stages of learning. Scaffolding involves providing supports for the
students in the form of expert feedback on students' strategies as
well as their results. Reflection entails students reviewing their
own strategies and discussing alternatives with their peers [31].
This model was later evaluated by Järvelä, who found it to be
effective within a technologically rich learning environment [18].

The Razorback Hospital of the University of Arkansas [27] uses


Second Life to model healthcare logistics. The objective is to
merge the real and virtual worlds with ubiquitous computing
technologies, location-aware systems, RFID, sensors and smart
devices, using natural language to talk to devices. The
environment is also meant to be used as a platform for usability
evaluation of new devices.
Another related project is Pulse!!, a virtual medical education
project led by Claudia McDonald at Texas A&M University,
Corpus Christi [21]. The centrepiece of this project is the Virtual
Clinical Learning Lab, which is an interactive virtual environment simulating operational health-care facilities, procedures and systems [26]. While the system was initially focused on a naval hospital setting (the majority of the project's $14.7 million in funding is from the US Office of Naval Research), it has since been tested in the Yale School of Medicine and Johns Hopkins School of Medicine, and the technology has been licensed to BreakAway Ltd. for commercial development.

Both Situated Cognition Theory and the Cognitive Apprenticeship


Model fall under a broader perspective of learning referred to as
Social Constructivism. Constructivism in general refers to the idea
that learning is an active process of constructing, rather than acquiring, knowledge [24]. Learning occurs as the learner constructs knowledge from his or her own experiences of interacting with the environment, rather than as the passive receipt of information. Thus, each learner's understandings, or schemas, of the world are unique to his or her own
experiences. Instruction within a constructivist context focuses on
supporting that construction, rather than conveying knowledge.
Social constructivism builds on this notion to include the impact
of interaction with others as part of the environment [39]. Hence,
learning involves the negotiation of understanding with others that
experience the shared environment. Within the perspective of
social constructivism, teachers function as facilitators to
demonstrate and guide learners through their exploration of new
information and experiences, and peers provide opportunities for
shared reflection on new knowledge. These theories support the
value of simulation-based training, which is based upon
presenting students with knowledge and teaching them skills in a social context similar to the one in which they will be using that knowledge and those skills.

Yet another medical training example comes from Duke


University, in its nursing training program [19]. As in the Ann
Myers Medical Centre, the nurse training program uses Second
Life as a virtual meeting space in which instructors can present
lectures and show educational materials, and students can interact
with each other. In a study conducted with nursing informatics
students at Duke University, it was found that the students
expressed a higher level of satisfaction with the environment and
level of instruction in the virtual world, as compared with other
online learning systems.
One ambitious, generally applicable project is Project TOUCH,
being undertaken by Mowafil et al. [22]. Project TOUCH
(Telehealth Outreach for Unified Community Health) is intended
to use advanced technology to overcome geographical barriers to
deliver medical education and enhance group learning. The
project hopes to address several key elements of virtual world-based systems: consistency, networking, scalability and system integration. The researchers involved also plan to evaluate the system based on use patterns, but the results of this evaluation have not yet been published.

Another psychological construct, which may be related to learning


in VWs, is social presence. The construct of social presence
represents the perception that one is communicating with people



rather than with inanimate objects. It is an illusion created by the human mind's ability to manufacture feelings of connection and interaction, even when separated by distance [42]. Social
presence involves the assumption of cognizance and intentionality
in the person or agent with which you are communicating. It is
best established when verbal and non-verbal cues are present
together in a transaction between two or more separated
individuals. Verbal cues include vocal pace and inflections, and
paraverbal sounds like sighing. Non-verbal cues include body
posture and gestures, facial expressions and eye gaze. The
construct of social presence has been linked to the assumption that
people work and communicate more effectively when the sense of "being with another" is evident [35]. However, results in
computer-mediated communications research are mixed. For
instance, perceptions of social presence have been reported as
unrelated to learning outcomes [41] but related to student
satisfaction [34]. Also, students who are more experienced with
VWs report a higher degree of social presence [41], suggesting
computer experience may play a role in VW learning with social
presence as a mediating variable.

3.1 The EMT/ER Scenario Simulation


We are currently working with colleagues from health sciences
education (the Interdisciplinary Health Education Partnership
conducted by the Health Sciences Education and Research
Commons), who want to use a virtual world-based system to
create simulation-based training scenarios for their students. In
one such scenario, EMT students encounter a victim at an
accident scene, and must determine which actions to take in order
to transport the victim to a nearby hospital. A screenshot of this
scenario, implemented in a Second Life-based prototype system,
is shown in Figure 1. This scenario will eventually be expanded to
encompass a handoff scenario between emergency medical
technicians (EMT) and emergency room staff when a victim is
being transferred by ambulance from the scene of the accident to
the hospital ER.

Previous research investigating the educational effectiveness of


VWs (e.g., video games) has shown an increase in student
motivation and improvement in classroom dynamics [8], [9], [29].
The additional dimension of immersiveness is related to increased
capture and maintenance of attention [11], [16], [28] and
increased motivation [44]. Skill transfer from VWs to real-world
situations has been documented for both spatial skills and
procedural skills [40], [30]. The studies presented in this section
suggest elements in VWs that impact learning and the range of
positive influences that can result from their use.

Two key goals of the training program are teaching EMT students
the procedures that they need to know in order to perform their
jobs effectively, and providing them with the communication
skills necessary to interact with colleagues both within the EMT
field and from a variety of other disciplines, such as ER doctors
and nurses, radio dispatch operators, and other rescue workers,
such as police officers and firefighters.
The EMT and ER personnel need to acquire basic skills, and to
learn medical knowledge. Moreover, procedures must be learned
and correctly applied in unpredictable, high-stress situations.
Finally, they must have the communication skills required to
coordinate their activities with co-workers, hospital staff, other
emergency workers, and others.

3. CASE STUDIES
To investigate the feasibility and utility of delivering virtual
world-based healthcare education, we have undertaken two case
studies. These case studies are drawn from diverse contexts, and
are qualitatively quite different from each other.

The first two areas, basic skills and medical knowledge, are
typically conveyed in a classroom setting and via textbooks.
Procedural training is often conveyed through enacting training
scenarios with a limited number of students and professional
actors playing the roles of accident victims, emergency workers,
hospital staff, and so forth. While these training scenarios offer
life-like experiences for the students, there is a severe limitation
on how many students can participate at any one time.
Furthermore, distance education students are entirely excluded
from this type of training.

The objective of the EMT/ER training simulation is to deliver an


environment where EMT and ER personnel are trained in the
basic procedures of assessing and stabilizing an accident victim
before transferring him to the hospital (for EMT personnel) and of
receiving, assessing and starting the treatment for this person (for
ER personnel). Furthermore, through the simulation, EMT and ER
personnel are trained in the handoff process, which involves the
exchange of all pertinent information about the patient obtained
by the EMT personnel and required by the ER personnel to
effectively start treating the patient. In fact, it has been reported in
the literature that problems in this handoff process, resulting in information miscommunication, are the underlying cause of a substantial number of preventable accident fatalities.

Recently, instructors have seen stand-alone training software used


for procedural training, which offers each student a predetermined set of options at various key points of a scenario, and
assesses their aptitude based on the student's choices. While these software products help address the accessibility issues mentioned previously, they offer the student an artificial, "canned" experience. In personal communication, one of our EMT training
experts dismissed these programs as simply teaching the student
how to correctly execute a given process, rather than giving the
student experience in the situation the process is intended to
address.

The second simulation is a teaching module developed as part of


the InterD-410 course at the University of Alberta. This is a
mandatory course, designed to teach professional competencies to
students across the health disciplines (rehabilitation medicine,
nutrition, physical education, nursing and medicine). The intent is
to instill in students the idea that health delivery is a team process
and to help them develop the skills necessary to effectively
communicate and collaborate with each other and their patients.
The simulation we have developed for this course involves
student teams working together to develop a home-care plan for
an elderly patient. The teams meet in a virtual conference room, watch a video of the patient's admission to the hospital, develop a draft plan, interview a virtual simulated patient, and refine their plan afterwards. This contextual diversity is a good starting point for assessing the flexibility and robustness of virtual worlds for a variety of educational situations in the health sciences.
Finally, little to no scaffolding of communication skills occurs


during health education programs. Communication is taught in a
discipline-specific context, with some clinically specific aspects

and inter-professional communication left for students to learn on


the job.
Thus, given the shortcomings identified above, a virtual worldbased training tool could offer advantages in several key areas.
For one, a workflow-based system, with independent objects and
characters, enables flexible scenarios. Thus, the student doesn't
just learn a rigid sequence of steps; rather, the student can interact
with active objects in any order (subject to constraints imposed by the context or the workflow), determine his or her path through the scenario, and thus engage in self-directed learning.
A blend of character types (students, instructors and automated characters) creates a variety of communication possibilities, and
means that the system can offer several types of student-instructor
involvement. First, students can interact with other students, either
in the EMT field or in other, complementary disciplines. This,
technically speaking, is the simplest of the possibilities, but even
this offers compelling advantages to EMT students and instructors
alike. Getting EMT students to interact with students in nursing,
emergency dispatch operations, or medicine poses significant
logistical difficulties. Schedules must be coordinated among
hundreds of students, often across institutions. Space must be
found to host the combined population of multiple classes.
Transportation and funding are just two of many other issues that
must be addressed. By providing a common virtual meeting space,
the virtual world-based system solves many of these practical
problems. In this way, students from many disciplines not only get
experience with the processes relevant to their own area, but also
get experience communicating with people from a variety of other
areas. Second, students can interact with a mix of other students,
instructors, and automated characters. Automated characters,
following pre-defined workflows that involve little or no
interactivity, can be developed to stand in for an unconscious
patient or a radio dispatcher. More complex roles, meanwhile, can
be played either by students (as described above), or by
instructors playing roles. This possibility offers a compromise
between the realism of interacting with other people and the
flexibility and accessibility of using automated characters. Finally,
in cases where all of the external characters may be modeled and
their behavior defined, students can interact entirely with
automated characters. This situation offers maximum accessibility
for the student, who can train using the scenario whenever he or
she chooses. Moreover, the number of students who can use the
system concurrently is limited only by technical factors such as
bandwidth. Finally, since the workflow engine records the
progress of each process, and the virtual world can be configured
to record a video of all in-world interactions, the system can also
meet instructors' need to record student experiences for
later reflection and instruction.
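
To make the workflow-based flexibility described above concrete, the following minimal Python sketch (our illustration; the class, action names and constraints are hypothetical, not the system's actual implementation) shows how a scenario engine might let students perform actions in any order that satisfies the workflow's constraints, while recording each student's path for later reflection.

    # Minimal sketch of a workflow-constrained scenario engine (hypothetical names).
    class ScenarioWorkflow:
        def __init__(self, prerequisites):
            # prerequisites: maps each action to the set of actions that must precede it
            self.prerequisites = prerequisites
            self.completed = []          # the student's path through the scenario

        def allowed_actions(self):
            # An action is allowed once all of its prerequisites have been completed.
            return [action for action, before in self.prerequisites.items()
                    if action not in self.completed and before.issubset(self.completed)]

        def perform(self, action):
            if action not in self.allowed_actions():
                raise ValueError("Action '%s' is not permitted yet" % action)
            self.completed.append(action)

    # Fragment of an EMT accident-scene scenario: ordering is only partially constrained.
    scenario = ScenarioWorkflow({
        "don_gloves": set(),
        "check_pulse": {"don_gloves"},
        "check_breathing": {"don_gloves"},
        "radio_dispatch": set(),
        "load_spine_board": {"check_pulse", "check_breathing"},
    })
    scenario.perform("don_gloves")
    print(scenario.allowed_actions())   # e.g. ['check_pulse', 'check_breathing', 'radio_dispatch']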

Figure 1: EMT Scenario Screenshot

Figure 2: Wiki for Collaborative Requirements Specification

In creating a virtual-world scenario representative of a real-world one, experts in the field were asked to provide the
content by describing the individual scenes that make up the
scenario. For each scene, the relevant artifacts (e.g., equipment)
and actions were identified and transformed into workflow
diagrams, which assisted in designing the artifacts and scripting
the actions. A screenshot presenting the wiki page for one such
scene is shown in Figure 2. The screenshot shows the storyboard
image and description, which were provided by the context
experts, at the top of the page. The objects and actions elicited by
a technical expert are shown at the bottom of the page.

Figure 3: Ambulance model


We can describe this scenario in terms of its actions and artifacts.
The actions taken include using medical diagnostic equipment and
interacting with the accident victim. These actions are interpreted
by workflows, and the results are conveyed through a variety of
artifacts. Most of the artifacts in this scenario are pieces of
medical equipment (a spine board, for example, or the two-way radio in an ambulance), which are used in treating the victim and


provide immediate feedback. Other, larger-scale artifacts include the victim's vehicle and the EMT ambulance.
It should be noted that, in this case, the victim is also considered
an artifact, since it helps convey the results of actions through its
condition and location. The implementation of each of these
components in the EMT training prototype will be discussed in
detail in the following paragraphs.
Representing an action in a virtual world can pose a variety of challenges, depending on the affordances and capabilities provided by the virtual world. For example, in Second Life, the user has a limited set of interactions with an object: an object can be touched, worn, sat on, driven, or taken. Thus, an action such as having an object follow an avatar is difficult to encode, since it must incrementally follow the avatar, be worn by the avatar, or be driven by the avatar, and each of these approaches has limitations. Having an object incrementally follow an avatar produces choppy visualizations, since there is a noticeable delay between the avatar's movement and the object's recognition that the avatar has moved. Wearing an object has limitations as well, since an object can only be worn by its owner, which poses problems in a group setting, such as a class of students. Surprisingly, although driving an object would seem to be the least intuitive approach, it produces the most accurate results. However, this solution still requires some modification of the object to ensure that, when driven, the object does not tip over.

Finally, the artifacts relevant to the process must be modeled in the virtual world. For this process, the artifacts that were modeled included (a) surgical gloves, (b) a spine board, (c) the unconscious victim, (d) a two-way radio, and (e) the ambulance. There are two distinct issues here. First, the artifact must be represented in the virtual world. Depending on the virtual world that is chosen, one may be able to create a virtual representation of the artifact using external 3D modeling software, import 3D models created by other users, or use in-world 3D modeling tools. Each of these options has pros and cons, and the correct choice of virtual world and 3D modeling technique depends largely on the context of the process. For this process, we used 3D models created by other users for Second Life. Figure 3 shows a close-up of an ambulance, one such 3D model. Second, the artifact must be able to exhibit the behavior implied by the associated actions. This typically involves the addition of native code to the virtual world representation of the artifact. This code calls web services responsible for interpreting the result of the action, both in terms of any immediate changes to the artifact's appearance and in terms of the impact of the action on the process workflow. Finally, the artifact must be able to change its appearance in an appropriate manner.
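
As an illustration of the artifact-to-web-service interaction just described, the sketch below shows a hypothetical service (built here with Flask; the endpoint, field names and responses are our own assumptions, not the actual implementation) that an in-world artifact script could call when a student acts on it. The service records the action against the workflow state and tells the artifact how to update its appearance.

    # Hypothetical web service called by in-world artifact scripts (sketch only).
    from flask import Flask, request, jsonify

    app = Flask(__name__)
    workflow_state = {"completed": []}    # stand-in for the real workflow engine

    @app.route("/artifact/action", methods=["POST"])
    def artifact_action():
        # Expected payload, e.g. {"artifact": "spine_board", "action": "place", "avatar": "Student1"}
        event = request.get_json()
        workflow_state["completed"].append(event["action"])
        # A real service would consult the workflow model to decide the artifact's response.
        return jsonify({"appearance": "in_use",
                        "feedback": "%s recorded for %s" % (event["action"], event["artifact"])})

    if __name__ == "__main__":
        app.run(port=8080)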

We have not yet empirically evaluated the effectiveness of our service-delivery process simulations for training. We will be conducting a pilot test of the accident scene/handoff scenario at the end of March with paramedic students from NAIT (the Northern Alberta Institute of Technology) and emergency medicine students from the University of Alberta. Students and instructors will be given some time in-world before the pilot, in order to familiarize themselves with the virtual world interface and basic functionality, such as movement, camera control and communication. The study itself will consist of three phases:

i. The students will be given a preliminary survey, which will assess their familiarity with virtual worlds, their expectations and attitudes regarding the training experience, and their proficiency in handling the simulated situation.

ii. The students will go through the scenario in small, interdisciplinary groups: two paramedic students (a lead and an assistant) and two emergency medicine students (playing the roles of an ER nurse and a doctor). This process, which is expected to take 5-10 minutes, will be both digitally recorded and supervised by instructors from the paramedic and emergency medicine disciplines.

iii. The students will be given a post-study questionnaire, which will elicit their opinions on the quality of the scenario and determine what the students felt they learned from the experience. The instructors will also be interviewed, in order to elicit their assessment of the students' performance.
Through this study, we will seek to answer the following
preliminary research questions:


1. What are student expectations towards virtual world-based learning, and how do those expectations compare to the experience of using this system?

2. What is the quality of the learning experience provided by this system? How does a student's assessment of this experience differ from that of an instructor?

3. How does a student's prior expertise with a virtual world and with the scenario being simulated affect the quality of the student's experience?

Based on the results of this analysis, we will refine the system and conduct a more extensive study in June 2010.

3.2 The InterD 410 Simulation


The purpose of this study is to develop a research-based virtual
environment (VE) aimed at the creation of a communication skills
instructional program for health science education. The program
would address some of the resource issues (cost, time and
infrastructure) associated with traditional training approaches
using Standardized Patients.
Standardized Patients (SPs) provide a safe, standardized and
supportive learning environment for the instruction, practice or
assessment of communication and/or examination skills of a
health science student [7]. An SP is a real person, often an actor, trained to portray or simulate an actual patient. The SP performs the history, physical findings and emotional/personality characteristics of the actual patient, and can be used both for teaching and for standardized assessment. However, SP sessions are time and resource
intensive, and require specific infrastructure for program support
[36]. In addition, finding the appropriate SPs to enact socially or
culturally sensitive scenarios may be challenging.


In the first phase of the project, currently being conducted in an


undergraduate health sciences course in team skills, the directed
scenarios used in the traditional Standardized Patient (SP)
communication skills program will be used in the Second Life
environment. These scenarios consist of collaborative patient
assessment across the nine health science disciplines represented
within each student team. SPs and students will have
predetermined group sign-on schedules in which SPs will enact




the scenarios using their Second Life avatars. Students will use
their avatars to collaboratively develop a patient interview plan,
execute the interview and discuss their group's performance
afterwards. This first version of the Second Life program
represents a removal of the physical presence of the SP, which is
the most salient marker of social presence. The Second Life
sessions will be recorded for use in the development of
subsequent programs.
Two key challenges in the development of the Second Life-based
clinical setting were the elicitation of the course requirements
from instructional staff and the development of the appropriate
tools to enable student communication. These challenges will be
discussed in detail in the following paragraphs.

Finally, a significant amount of effort was put into developing the


virtual spaces in which the students would be interacting. The
goal was to make these spaces realistic enough that students
would feel immersed in the context, and would thus have a more
realistic, engaging educational experience. Two spaces were
developed: a clinical interview room, to be used by one group of
students while interviewing an SP, and an observation room, to be used by a second group while observing the first group's
interview. These spaces are shown in Figures 6 and 7.

To understand the course requirements, several meetings were


held between the course staff (coordinators and instructors) and
educational and technical experts (educational and computing
science researchers). Through these informal discussions,
requirements for the tools to be used by the students, and the
virtual spaces the students would inhabit, were gradually elicited.
Although the educational activities that occur within this course
were well developed by those delivering the course, the process
and characteristics of classroom interactions were unknown to the
educational and computing science researchers. The first step was
to clearly define these activities using commonly understood
terms. As the educational and computing science researchers were
not health science content experts, this process involved iterative
questioning to accurately determine the class process and the optimal features and tools to support these
processes. Once this understanding was reached, computing
science researchers could present technical options for executing
these tools in Second Life. The development of each tool required
multiple consultations with the health science team with recurring
revisions. To support the use of the custom tools, detailed tutorial
materials were developed consisting of a text-based tutorial
package and a walk-through tutorial in Second Life. Face-to-face
tutorial sessions were also implemented for both students and
facilitators.

The first phase of the project, the collaborative patient assessment,


was conducted over three hours on February 23, 2010. Students in
the InterD 410 course gathered in the Second Life clinical setting
along with instructors, standardized patients, and a few others who
provided digital recording, troubleshooting and technical support.
The patient assessment process was conducted in four steps: a pre-brief and conference-planning session, admission conference,
discharge conference, and scenario reflections. In the first step,
the students divided up into small groups and discussed their plans
for conducting the admission and discharge conferences, using the
flipchart and private voice chat channels. In the second and third
steps, the students used a public voice chat channel to conduct the
conferences with standardized patients playing the roles of an elderly patient and the patient's daughter. Additionally, the whiteboard
was used between the admission and discharge conferences to
show the students a video that provided the appropriate
background for conducting the discharge conference. Finally, the
students discussed the results of the conferences among
themselves and with course staff in the reflection stage.

Although Second Life provides a number of communication


methods, such as text-based messaging and audio communication
(using microphones and speakers), there were several modalities
that were required by the course, but lacking in Second Life. One
of these was the ability to perform a wide range of non-verbal
communications. These included head shaking, facial expressions
such as anger or surprise, and leaning forward to indicate interest.
To address this shortcoming, a set of custom gestures was
developed, and a heads-up display (HUD) was created to allow
the user to perform these gestures, as well as some relevant built-in gestures. Without a HUD, a user must type the appropriate code
(e.g., /nodYes) using the text-based chat tool to perform a gesture
in Second Life. The HUD provides a more intuitive way for users
to perform gestures, and allows these gestures to be distributed
easily to a group of people. The HUD is shown in Figure 4.

Through this process, several problems came up. The most


significant of these was that the public chat channel, which was
used for the admission and discharge conferences, repeatedly
ejected users from the conversation. To re-join the conversation,
students needed to log out of Second Life and then log back in,
which was both inconvenient and time-consuming and, more
importantly, meant that some students were unable to participate
in significant portions of the conferences. There were other, less
significant, technical problems, such as the flipchart getting stuck
on a page and needing to be reset, and one of the microphones
providing poor quality audio. In addition, some students were not
comfortable with the Second Life interface (particularly the
camera controls), while others were unclear about what they were
expected to do, especially during transitions between steps.

Despite all these challenges, the students were able to successfully conduct the patient conferences, and collaborate with each other before, during and after the conferences. The difficulties described above will help instructors and researchers improve the Second Life environment for the second phase of the project.

Another important feature that was missing from Second Life was the ability to work collaboratively using tools such as a whiteboard or flipchart. These tools allow the students to share ideas with each other, in real time, without leaving the virtual world environment. To meet this need, a whiteboard (based on one originally developed by Annabeth Robinson, Senior Lecturer in Digital Media at the Leeds College of Art) was developed to fit our educational context. Specifically, the whiteboard is able to play videos, and students can superimpose text, lines, or other shapes on top of the video, thus allowing them to annotate the video as it is being shown. The flipchart was developed by the authors to allow the students to make short comments in response to a set of questions. The flipchart saves the responses to each question, and can retrieve the appropriate set of responses when a given question is shown. Moreover, these responses are stored in an online database, so they can be read and analyzed by course staff. The whiteboard and flipchart are shown in Figure 5.



4. ANALYSIS
One lesson that came out of the two case studies is the importance
of a well-established process for designing and implementing a
virtual world-based training system. In developing the EMT
training scenario, a collaborative wiki and a series of prototypes
were used in creating the system. Thus, development of the
system proceeded in continuous consultation with relevant
stakeholders, who were able to offer guidance through well-structured methods and tools. In the case of the communication
skills course, requirements were communicated through a small
number of meetings with course instructors and the course
coordinator. While this method eventually produced a system that
met the needs of the students and instructors, defining these needs
in a precise, commonly understood manner was a challenge, some
tools went through numerous revisions, and there were some
requirements that were not properly addressed until quite late in
the development of the system. On the whole, using a well-defined process for identifying and clarifying roles and needs, and
defining actions and objects based on those needs, would have
helped in developing clear requirements and effectively designing
and implementing a system based on these requirements.

Figure 4: Gesture HUD

Another common theme is the importance of developing the


appropriate virtual world tools for teaching the required skills. In
the case of the EMT training scenario, the built-in tools were
sufficient for the communication skills being taught. However, for
the procedural skills, additional tools were required to enable the
students to learn the proper procedures in a flexible, free-form
manner. These tools included process definitions, running in the
background, and interactive objects, which were connected to
these processes and allowed the students to experience the
processes being taught. In the communication skills course, the
tools served to augment the communication capabilities provided
by the virtual world. These tools included a HUD enabling
students to gesture and a flipchart that students can write on using
the built-in text-based chat tool. In each case, these tools were
developed in consultation with stakeholders (instructors, course
coordinators, etc.), and addressed key deficiencies in the virtual
world's capabilities.

Figure 5: Whiteboard and flipchart

Finally, respecting the interdisciplinary nature of the educational


context was crucial for both case studies. In the EMT training
case, the goal of training EMTs, radio dispatchers, ER nurses, and
other emergency rescue workers meant that the system had to be
extensible and encompass the needs of each group of students.
However, providing a common virtual space for these students
meant that the logistical difficulties involved in bringing the
students together were greatly reduced, which is an important
advantage of the system. In the case of the communication skills
course, the course brought together healthcare students from a
variety of disciplines that would, in a real-world situation, be
expected to collaborate in performing a patient interview. Thus, as
with the EMT training context, an important benefit of the virtual
world-based system was providing the common virtual space for
these students to meet, thus overcoming logistical hurdles.
Another challenge was ensuring that the course content would be
general enough to be applicable to all students, and yet would still
be meaningful and useful. By focusing on communication
principles, rather than healthcare domain-specific knowledge, this
requirement was addressed.

Figure 6: Clinical interview room

Figure 7: Observation room




5. SOFTWARE ENGINEERING IN
VIRTUAL WORLDS


The work we have discussed in this paper on training


environments within Virtual Worlds builds on our Smart Condo
work [37]. Through these three activities, we are starting to
recognize some common themes that characterize the
development of software systems that include virtual world
platforms and a set of general research questions in this area.

In the context of the EMS/ER scenario simulation, we have started developing such an intermediary language, in which the set of basic avatar actions is defined as an attribute grammar, ultimately represented as an XML Schema Definition (XSD) file, so that it can guide the creation of XML descriptions for specific actions. The structure and content of the grammar we have developed are based on work by Schank et al. on codifying behavior in terms of scripts and plans [31]. The non-verbal portion of the grammar is based on work by Mehrabian [22].
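
By way of illustration, the sketch below shows what an XML description of a basic avatar action might look like under such a grammar, parsed with Python's standard library; the element and attribute names are our own assumptions, not the actual grammar, and a real system would validate such documents against the XSD-encoded grammar.

    # Hypothetical XML description of a basic avatar action (illustrative names only).
    import xml.etree.ElementTree as ET

    action_xml = """
    <action name="hand_off_report" actor="emt_lead" target="er_nurse">
        <verbal channel="voice">Patient found unconscious at the accident scene.</verbal>
        <nonverbal gesture="point" object="spine_board"/>
    </action>
    """

    action = ET.fromstring(action_xml)
    # In the real system, documents like this would be checked against the XSD file.
    assert action.tag == "action" and "actor" in action.attrib
    print(action.get("name"), "->", [child.tag for child in action])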

The virtual-world systems involved in these three projects


exemplify three different classes of systems, with distinct degrees
of character realism and behavior specificity. In the Smart Condo,
the characters in the virtual world are artificial and their behaviors
are completely specified and programmatically controlled. In the
EMS/ER handoff scenario some of the characters involved are
artificial (NPCs i.e,, non-playing characters enacting roles of
bystanders) and some are real (the characters of the students being
trained). The behaviors of the NPCs are again completely
specified while the behaviors of the student characters are
unscripted and can involve any composition of the basic abilities
of the avatar module in the VW and their interactions with the
behaviors afforded by the scenario artifacts. Finally, in the
InterD410 environment, all characters are realistic and their
behaviors can also include the extended sets of gestures we
developed. The emerging software-engineering research question
here is: what is an appropriate systematic mechanism for modeling character behavior? We are currently investigating
the use of BPEL models for modeling the behavior of the scenario
artifacts and the relevant (to the scenario) activities of the scenario
characters. When the character behavior is programmatically
controlled (as is the case with the Smart Condo occupant and the
EMS/ER scenario NPCs) the task is fairly straightforward. It
simply involves (a) controlling the location of the character in the
environment (which may include some coordinate-system
transformation if the location has to correspond to some real-world location, as is the case with the Smart Condo) and (b) invoking
the VW procedures corresponding to the avatar basic behaviors.
When a player controls a character, she is not necessarily
following a prescribed procedure; in fact, the intention behind
designing training scenarios in VWs is to provide students with
open-ended situations in which they have to decide what to do,
based on their background knowledge. In this case, the challenge
becomes to monitor the player actions in the VW and translate
them into relevant messages that can be forwarded to BPEL
models of the discipline-specific procedures that the students may
be enacting. This is essentially the task of recognizing which of
the modeled character's possible behaviors the students are actually enacting. Although BPEL is quite appropriate for modeling and implementing programmatically controlled behaviors, there is a great conceptual gap between the BPEL
primitives and the types of behaviors that need to be modeled,
enacted and recognized in VWs. What is missing is a VW
character-behavior specification language that will bridge the
gap between the particulars of how characters act and gesticulate
in a particular VW and their information exchanges and
coordinated interactions as they can be modeled at the level of
BPEL. This language will enable a degree of independence of
such systems from the vagaries of individual VWs, will lead to the
development of an interesting avatar behavior specification which,
in turn, will systematize and simplify the development of different
scenarios. Of course this specification will have to be informed by
the range of systems to be developed: avatars for commercial games will require a different set of basic behaviors than avatars for professionals in different disciplines. Irrespective of the possible variations of domain-specific languages for avatar behavior specification, the emergence of these languages is a necessary prerequisite for the wide adoption of this technology.
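
As a sketch of the translation layer this implies, the following Python fragment maps low-level virtual-world events to the coarser messages a process model could consume; the event names, subjects and message types are hypothetical.

    # Illustrative mapping from raw VW events to process-level messages.
    EVENT_TO_MESSAGE = {
        ("touch", "two_way_radio"): "ContactDispatch",
        ("say", "handoff_report"):  "DeliverHandoffReport",
        ("touch", "spine_board"):   "ImmobilizePatient",
    }

    def translate(event_kind, subject):
        # Map a raw virtual-world event to a message a process model could consume;
        # events that are irrelevant to the modeled procedure are silently dropped.
        message = EVENT_TO_MESSAGE.get((event_kind, subject))
        if message is None:
            return None
        return {"message": message, "source": subject}

    print(translate("touch", "spine_board"))   # {'message': 'ImmobilizePatient', 'source': 'spine_board'}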
A different type of research question that needs to be addressed


before these platforms become widely adopted is that of the
scalability of the software/hardware architectures of VWs.
Today, commodity VWs (our experience is limited to Second Life, http://secondlife.com/; OpenSim, http://opensimulator.org/; and Wonderland, http://lg3d-wonderland.dev.java.net/) do not easily scale to large-scale simulated environments, with complex artifact models and many concurrent users. To support acceptable levels of interactivity and performance, they place limits on the number and complexity of artifacts included in the world and on the number of users simultaneously present. The mechanisms that are currently being explored to increase scalability are (a) the deployment of the VW on a virtualized platform, i.e., a cloud that can be extended with computational resources (Wonderland, for example, can be deployed on the cloud: http://blogs.sun.com/wonderland/entry/elastic_wonderland), (b) the flexible distribution of the computational load around the management and rendering of the world state between the world server and the clients' machines, and (c) the development of specialized hardware for rendering the
VW state. A particularly interesting design decision involved in
the maintenance of the world state is the distribution metaphor
adopted. The conceptually simplest choice is that of distribution
according to geography, where all activity within a geographical
parcel is managed by a single computational resource. This is a
rather restrictive decision, especially as the complexity of the user
behaviors in the world and the simulation of their effects
increases. The community is now looking at alternative models
[41] that will allow the distribution of distinct services within a
region across computational resources, thus supporting and
enabling this increase in complexity.
In our work, as end users of Second Life, we have not examined
this question at all, in spite of having faced several ad-hoc
limitations of the platform through which we have identified
potential services. For example, we had to develop a mechanism
for recursive compositions of artifacts all of which move together
and can be examined through their overall container in order to
support the physical examination of the various body parts of the
accident victim through his clothing. We also had to develop a
radio transmission system to communicate information from one
area in Second Life to another far away. These are just two small
examples of the infrastructure services that will have to be
supported by virtual worlds, which may cut across regions and, as
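
A minimal sketch of the recursive artifact composition mentioned above is given below (a standard composite pattern; the artifact names are illustrative): moving the container moves every nested part, and examination requests reach nested parts through the overall container.

    # Sketch of recursive artifact composition (composite pattern; illustrative names).
    class Artifact:
        def __init__(self, name, children=None):
            self.name = name
            self.children = children or []
            self.position = (0.0, 0.0, 0.0)

        def move(self, position):
            # Moving the container moves every nested artifact along with it.
            self.position = position
            for child in self.children:
                child.move(position)

        def find(self, name):
            # Examination requests reach nested parts through the overall container.
            if self.name == name:
                return self
            for child in self.children:
                found = child.find(name)
                if found is not None:
                    return found
            return None

    victim = Artifact("victim", [Artifact("jacket", [Artifact("left_arm")]),
                                 Artifact("legs")])
    victim.move((12.0, 4.5, 0.0))
    print(victim.find("left_arm").position)   # (12.0, 4.5, 0.0)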



and Ken Brisbin for their contributions in modeling and refining


the EMT scenario over several lively discussions.

A more general research question around this activity relates to


the pedagogy of using VWs for teaching and learning. What can
be taught in the context of VWs? Who might learn in VWs, and why? And in what respects can teaching and learning in VWs enhance traditional classrooms? It is clearly too early to talk about a consensus within the community of educators who explore VWs, but inspecting the landscape of education-related projects, we see that VWs are being used quite extensively for simulations of natural phenomena to teach physics, chemistry and biology [24], [20], [42], architecture [3], and language [45], as well as for simulations of social phenomena in the context of serious games, the domain of our work. To our knowledge, all the work in this area is fairly new and not quite mature, but the case for simulation-based learning is compelling according to the educational theories on peer and situated learning methods we reviewed in Section 2.
As the community collects more experience and knowledge, the
challenge will become to extend the existing learning-object
standards, such as SCORM for example [1], to include these more
interesting media.
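Purely as an illustration of what such an extension might carry (the field names below are hypothetical and are not defined by SCORM [1] or any other standard), a learning-object record for a VW scenario could bundle the usual descriptive metadata with the information needed to place a learner into a running simulation:

    # Hypothetical, illustrative metadata record for a virtual-world learning
    # object; the keys are invented for discussion and not defined by SCORM [1].
    vw_learning_object = {
        "title": "EMT/ER patient hand-off",
        "description": "Team-communication practice during a simulated hand-off.",
        "platform": "Second Life",                # or OpenSim, Wonderland, ...
        "entry_point": "secondlife://Example%20Region/128/128/25",  # illustrative location URL
        "roles": ["EMT", "ER nurse", "observer"],
        "expected_duration_minutes": 30,
        "assessment": {"type": "rubric", "items": ["hand-off completeness", "clarity"]},
    }

    print(vw_learning_object["entry_point"])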

6. FUTURE WORK
There are several areas of future work that we will be pursuing
over the coming months, to address existing issues, improve the
specification process, explore diverse context areas, and provide
empirical validation of the framework.
Implementing a process in Second Life (SL) poses platform-specific challenges. The limited interaction affordances provided
by SL make it quite challenging, if not impossible, to implement
some types of simple actions (such as picking up or pushing an
object) in a natural, realistic way. Other aspects of the system,
such as its conceptual division of entities into wearable clothing
and non-wearable objects, make other seemingly natural modeling
tasks similarly difficult. Thus, we would like to either a) develop a
consistent SL API which could be used to model processes in a
natural way or b) migrate to another virtual world platform (such
as Wonderland) that provides better native support for the
processes being modeled. Finally, we are planning on conducting
a series of empirical studies of the effectiveness of the EMT/ER
hand-off training scenario. The first phase of this research will be
a small-scale pilot study, currently under way, to be followed by a
larger-scale study beginning in the Fall term of 2010.
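As a rough indication of what option (a) might look like, the sketch below wraps a "pick up" step behind a uniform interface; the WorldObject and PickUp names and the attach/follow fallback are assumptions of ours, not an existing Second Life API.

    # Hypothetical sketch of a consistent process-modeling layer over SL-style
    # objects; none of these names correspond to an existing Second Life API.

    class WorldObject:
        def __init__(self, name, wearable=False):
            self.name = name
            self.wearable = wearable   # SL separates wearable clothing from other objects
            self.holder = None

    class PickUp:
        """One process step: an avatar takes possession of an object."""
        def __init__(self, avatar, obj):
            self.avatar, self.obj = avatar, obj

        def execute(self):
            # SL has no native "pick up", so a wrapper has to approximate it:
            # wearables are attached to the avatar, everything else is made to
            # follow the avatar (e.g., via a follower script).
            action = "attach" if self.obj.wearable else "follow"
            self.obj.holder = self.avatar
            return f"{self.avatar} -> {action} {self.obj.name}"

    if __name__ == "__main__":
        gauze = WorldObject("gauze pack")
        vest = WorldObject("EMT vest", wearable=True)
        for step in (PickUp("Student1", gauze), PickUp("Student1", vest)):
            print(step.execute())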
With regard to the inter-professional communication skills
research project, two areas for future work are refining the custom-built
communication tools and pursuing a second phase of the project.
In the second phase of the project, two programs will be
developed that represent further decreases in social presence and
increases in automation: a Blended Second Life program and a
Stand-alone Second Life program. In the Blended program,
students will have Second Life sessions with an SP and Second
Life sessions with an automated character. These automated
characters, referred to as non-player characters, will be
preprogrammed to lead students through several communication
scenarios, providing strategies and prompts throughout. In the
Stand-alone Second Life program, all scenarios will take place
with the automated character.
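A minimal sketch of how such a preprogrammed character might walk a student through a scenario, assuming a simple scripted sequence of prompts and strategy hints (the ScriptedNPC name and the prompts are illustrative only):

    # Illustrative only: a scripted non-player character that leads a student
    # through a communication scenario with prompts and strategy reminders.

    class ScriptedNPC:
        def __init__(self, name, script):
            self.name = name
            self.script = script      # list of (prompt, strategy) pairs
            self.step = 0

        def next_prompt(self, student_reply=None):
            # student_reply would feed a real dialogue manager; it is ignored here.
            if self.step >= len(self.script):
                return f"{self.name}: That concludes the scenario. Well done."
            prompt, strategy = self.script[self.step]
            self.step += 1
            return f"{self.name}: {prompt}  [Strategy hint: {strategy}]"

    if __name__ == "__main__":
        npc = ScriptedNPC("Virtual Patient", [
            ("Introduce yourself and your role on the team.", "open with name and role"),
            ("Ask me what brought me in today.", "use an open-ended question"),
            ("Summarize what you heard back to me.", "teach-back / closed loop"),
        ])
        print(npc.next_prompt())
        print(npc.next_prompt("Hi, I'm a nursing student..."))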

7. ACKNOWLEDGMENTS
The authors would like to acknowledge the generous support of
iCORE, NSERC, and IBM. We are also grateful to Andrew Reid and Ken Brisbin for their
contributions in modeling and refining the EMT scenario over several lively discussions.

8. REFERENCES
[1] Advanced Distributed Learning. 2010. SCORM 2004, 4th Edition, Version 1.1. Website, retrieved March 2, 2010 from http://www.adlnet.gov/Technologies/scorm/SCORMSDocuments/2004%204th%20Edition/Documentation.aspx
[2] Ann Myers Medical Centre. 2009. Website, retrieved November 15, 2009, http://ammc.wordpress.com
[3] The Arch Network. 2010. University of Western Australia hosts architecture competition in SL, gathering ideas for a real-world building. Website, retrieved March 2, 2010 from http://archvirtual.com/?p=2035
[4] Armentrout, M. 2008. Transportation Incident Management: Using 3D Virtual Worlds to Train First Responders. Published by Forterra Systems.
[5] Brown, J.S., Collins, A. and Duguid, P. 1989. Situated Cognition and the Culture of Learning. Educational Researcher, 32-42.
[6] Cai, H., Sun, B., Farh, P., Ye, M. 2008. Virtual Learning Services Over 3D Internet: Patterns and Case Studies. In Proceedings of the IEEE International Conference on Services Computing, 2, 213-219.
[7] Cantrell, M.J. and Deloney, L.A. 2007. Integration of Standardized Patients into Simulation. In Anesthesiology Clinics, 25, 2 (June 2007), 377-383.
[8] Carbonaro, M., Cutumisu, M., Duff, H., Gillis, S., Onuczko, C., Siegel, J., Schaeffer, J., Schumacher, A., Szafron, D., and Waugh, K. 2008. Interactive story authoring: a viable form of creative expression for the classroom. Computers & Education, 51, 2, 687-707.
[9] Carbonaro, M., King, S., Taylor, E., Satzinger, F., Snart, F., and Drummond, J. 2008. Integration of e-learning technologies in an inter-professional health science course. Medical Teacher, 30, 1, 25-33.
[10] Carpenter, E., Kim, I., Arns, L., Dutta-Berman, M.J., and Madhavan, K. 2006. Developing a 3D Simulated Bio-terror Crises Communication Training Module. In VRST '06: Proceedings of the ACM symposium on Virtual reality software and technology, New York, NY, USA, 342-345.
[11] Cho, B., Jeonghun, K., Dong, P. J., Saebyul, K., Yong, H. L., In, Y. K., Jang, H. L., and Sun I. K. 2002. The effect of virtual reality cognitive training for attention enhancement. Cyberpsychology & Behavior, 5, 2, 129-137.
[12] Collins, A. 1991. Cognitive Apprenticeship and Instructional Technology. In Educational Values and Cognitive Instruction: Implications for Reform, Idol, L. and Jones, B.F., editors, pages 121-138, Lawrence Erlbaum.
[13] Duffy, T.M. and Jonassen, D.H. 1992. Constructivism: New Implications for Instructional Technology. In Constructivism and the technology of instruction: a conversation, T.M. Duffy and D.H. Jonassen, editors, Lawrence Erlbaum.
[14] Eder-Van Hook, J. 2004. Building a National Agenda for Simulation-based Medical Education. Retrieved from http://www.medsim.org/articles/AIMS_2004_Report_Simulation-based_Medical_Training.pdf in February, 2009.
[15] Heinrichs, L.W., et al. 2008. Simulation for team training and assessment: case studies of online training with virtual worlds. World Journal of Surgery, 32, 2, 161-170.
[16] Hoffman, H. 2004. Virtual-reality therapy. Scientific American, 291, 2, 58-65.
[17] Jackson, R.L. and Fagan, E. 2000. Collaboration and Learning within Immersive Virtual Reality. In Proceedings of the Third International Conference on Collaborative Virtual Environments, New York, NY, USA, 83-92.
[18] Järvelä, S. 1995. The Cognitive Apprenticeship Model in a Technologically Rich Environment: Interpreting the Learning Interaction. Learning and Instruction, 5, 231-259.
[19] Johnson, C., Vorderstrasse, A., and Shaw, R. 2009. Virtual Worlds in Health Care Higher Education. Journal of Virtual Worlds Research, 2, 2.
[20] Lang, A. and Bradley, J.C. 2009. Chemistry in Second Life. Chemistry Central Journal, 3, 14. Available online at http://journal.chemistrycentral.com/content/3/1/14
[21] Mcdonald, C.L. 2009. The Pulse!! Collaboration: Academe & Industry, Building Trust. Presentation given at the 17th Medicine Meets Virtual Reality Conference, January 19-22, 2009, Long Beach, CA.
[22] Mehrabian, A. 1972. Nonverbal Communication. Chicago: Aldine-Atherton.
[23] Mowafil, M.Y., et al. 2004. Distributed interactive virtual environments for collaborative medical education and training: design and characterization. Studies in Health Technology and Informatics, 98, 259-61.
[24] Nakasone, A., Prendinger, H., Holland, S., Hut, P., Makino, J., and Miura, K. 2009. AstroSim: Collaborative Visualization of an Astrophysics Simulation in Second Life. IEEE Computer Graphics and Applications, 29, 5, 69-81.
[25] Piaget, J. 1968. Six Psychological Studies, Anita Tenzer (Trans.), New York: Vintage Books.
[26] Pulse!! 2009. Website, retrieved October 24, 2009, http://www.sp.tamucc.edu/pulse/home.asp
[27] Razorback Hospital. 2010. Website, retrieved January 13, 2010, http://vw.ddns.uark.edu/index.php?page=overview
[28] Rizzo, A.A., Buckwalter, J.G., Bowerly, T., Van Der Zaag, C., Humphrey, L., Neumann, U., Chua, C., Kyriakakis, C., Van Rooyen, A., and Sisemore, D. 2000. The Virtual Classroom: A Virtual Reality Environment for the Assessment and Rehabilitation of Attention Deficits. In CyberPsychology & Behavior, (June 2000), 3, 3, 483-499.
[29] Rosas, R., Nussbaum, M., Cumsille, P., Marianov, V., Correa, M., Flores, P., Grau, V., Lagos, F., López, X., López, V., Rodriguez, P., and Salinas, M. 2003. Beyond Nintendo: design and assessment of educational video games for first and second grade students. In Computers & Education, 40, 1, 71-94.
[30] Rose, F.D., Attree, E.A., Brooks, B.M., Parslow, D.M., Penn, P.R., and Ambihaipahan, N. 2000. Training in virtual environments: Transfer to real world tasks and equivalence to real task training. Ergonomics, 43, 4 (April 2000), 494-511.
[31] Schank, R. and Abelson, R.P. 1977. Scripts, Plans, Goals and Understanding: An inquiry into human knowledge structures. New Jersey: Lawrence Erlbaum.
[32] Schroeder, U. and Spannagel, C. 2006. Supporting the active learning process. International Journal on E-learning, 5, 2, 245-264.
[33] Second Health. 2009. Website, retrieved September 3, 2009,
http://secondhealth.wordpress.com
[34] So, H. J. and Brush, T.A. 2008. Student perceptions of
collaborative learning, social presence and satisfaction in a
blended learning environment: Relationships and critical
factors. Computers and Education, 51, 318-336.
[35] Stein, D. S. and Wanstreet, C. E. 2003. Role of Social Presence, Choice of Online or Face-to-Face Format, and Satisfaction with Perceived Knowledge Gained in a Distance Learning Environment. Paper presented at the Midwest Research to Practice Conference in Adult, Continuing and Community Education. Online at http://www.alumniosu.org/midwest%20papers/Stein%20&%20Wanstreet-Done.pdf, retrieved 9 May, 2004.
[36] Stevens, A., Hernandez, J., Johnsen, K., Dickerson, R., Raij,
A., Harrison, C., DiPietro, M., Allen, B., Ferdig, R., Foti, S.,
Jackson, J., Shin, M., Cendan, J., Watson, R., Duerson, M.,
Lok, B., Cohen, M., Wagner, P., and Lind, D.S. 2006. The
use of virtual patients to teach medical students history
taking and communication skills. The American Journal of
Surgery, 806-811.
[37] Stroulia, E., Chodos, D., Boers, N. M., Huang, J.,
Gburzynski, P., and Nikolaidis, I. 2009. Software
Engineering for Health Education and Care Delivery
Systems: The Smart Condo Project. Software Engineering
and Healthcare workshop (at ICSE 2009), May 18-19, 2009,
Vancouver, B.C., Canada.
[38] Vergara, V., Caudell, T., Goldsmith, T., Panaiotis, Alverson,
D. 2008. Knowledge-driven Design of Virtual Patient
Simulations. In Innovate: Journal of Online Education, 5.
[39] Vygotsky, L. 1978. Mind in Society. London: Harvard
University Press.
[40] Waller, D., Knapp, D., and Hunt, E. 2001. Spatial
representations of virtual mazes: The role of visual fidelity
and individual differences. Human Factors. 43, 1, 147-158.
[41] Wilfred, L., Hall, R., Hilgers, M., Leu, M., Hortenstine, J.,
Walker, C., and Reddy, M. 2004. Training in Affectively
Intense Virtual Environments. In G. Richards (Ed.),
Proceedings of World Conference on E-Learning in
Corporate, Government, Healthcare, and Higher Education,
Chesapeake, VA, 2233-2240.
[42] Wolf, P.G. Fern Seed: Population Genetics & Lifecycle
Simulations. Website, retrieved March 2, 2010,
http://fernseed.usu.edu/virtual-ferns
[43] Wheeler, S. 2005. Creating social presence in digital learning
environments: A presence of mind? Proceedings of TAFE
Conference, Queensland, Australia.
[44] Yee, N. 2006. Motivations for Play in Online Games. CyberPsychology & Behavior, (December 2006), 9, 6, 772-775.
[45] Zheng, D., Young, M.F., Wagner, M.M., and Brewer, R.A.
2009. Negotiation for Action: English Language Learning in
Game-Based Virtual Worlds. The Modern Language Journal,
93, 4, 489-511.

