
Proceedings of the ASME 2010 International Mechanical Engineering Congress & Exposition

IMECE2010
November 12-18, 2010, Vancouver, British Columbia, Canada

IMECE2010-38308
ROBOTIC INTERFACES THROUGH VIRTUAL REALITY TECHNOLOGY
David B. Streusand
Graduate Research Assistant
Colorado School of Mines
Division of Engineering
1500 Illinois St.
Golden, Colorado, USA 80401
dstreusa@mines.edu

John Steuben
Graduate Research Assistant
Colorado School of Mines
Division of Engineering
1500 Illinois St.
Golden, Colorado, USA 80401
jsteuben@mines.edu

Cameron J. Turner
Assistant Professor
Colorado School of Mines
Division of Engineering
1500 Illinois St.
Golden, Colorado, USA 80401
cturner@mines.edu

Abstract
Virtual reality, the ability to view and interact with virtual
environments, has changed the way the world solves problems
and accomplishes goals. The ability to control a person's
perceptions of and interactions with a virtual environment allows
programmers to create situations that can be used in numerous
fields. Virtual interaction can range from a simple computer program to
an immersive experience with realistic sounds, smells, visuals,
and even touch. Research in virtual reality has covered human
interaction with virtual reality, different potential applications,
and different techniques for creating virtual environments.
This paper reviews several key areas of virtual reality
technology and related applications.
An application that has large implications for our research
is the control of robotic systems. Robotic systems are only as
smart as their programming, a limitation that often restricts the
utility of robotic applications in otherwise desirable
circumstances. Virtual reality technologies offer the ability to
couple the intelligence of a human operator with a physical
robotic implementation through a user-friendly virtualized
interface. This early-stage research aims to develop a
technological foundation that will ultimately lead to a virtual
teleoperation interface for robotics in hazardous applications.
The resulting system may have applications in nuclear material
handling, chemical and pharmaceutical manufacturing, and
biomedical research fields.

1. INTRODUCTION
Virtual reality (VR) technologies allow numerous
innovations in robotics and manufacturing. In particular, VR
techniques support the development and implementation of
novel decision-making and user interfaces for robotic systems.
The following sections of this paper describe some of the
relevant literature supporting these developments, some of the
applications specific to the field of robotics, and finally the
research underway with Los Alamos National Laboratory to
use virtual reality to develop innovative solutions enabling
intelligent robotic systems for nuclear materials handling.
In nuclear material handling operations, the safeguards that
protect workers from the hazards posed by the materials also
reduce or eliminate their ability to interact with the
environment. Tactile sensations are dramatically reduced,
there is little or no auditory information, and even visual cues
are often limited by obstructions and complicated by parallax
effects. Integrating VR technologies and sensors into the
robotic systems deployed in these environments would allow
operators to recover some, if not all, of this sensory information
about the environment while maintaining protection against
the environmental hazards. Restoring these inputs is essential
to safe and efficient material handling operations.

2. LITERATURE REVIEW
Virtual reality research covers an extensive and wide-ranging
set of fields. Due to the versatility of virtual
environments and their applications, this review has been
divided into three subsections. The first subsection describes
research focusing on the effects of human interaction with
virtual systems. The second subsection reviews some of the
applications of virtual environments explored thus far. The
final subsection discusses several techniques for developing an
environment to meet specific requirements.

2.1. HUMAN-VR INTERACTION
One of the most highly valued aspects of VR is the ability
to become immersed in a virtual environment. Immersion can
be traced to the application of realistic sensory information to
users, effectively providing natural interaction and reaction to
the applied sensory input. Lepecq et al. [1] demonstrated that
physical presence, or the feeling of immersion in a VR, can be
determined from unconscious actions when moving in a virtual
environment.

Copyright 2010 by ASME

In his experiment, subjects walked through doorways of
varying width, and the majority of users interacted with the
virtual environment as if the virtual objects were solid.
Evidence of physical presence was further corroborated, as the
relationship between the shoulder-to-aperture width ratio and
the angle turned was consistent for all participants.
Similar research was conducted by Zudilova-Seinstra et al.
[2], which examined user differences in 2D and 3D
interaction. Their research looked at user differences in
selecting and positioning when using three different types of
input devices: mouse pointers, 2D glove interactions, and 3D
glove interactions. Findings indicate that while the mouse was
the easiest for object selection, the 3D glove input showed
considerable potential for both object selection and positioning.
Other results from the research show that women, on average,
spend more time than men exploring the virtual environment.
Additionally, and perhaps unsurprisingly, younger participants
showed better ability to use the 3D glove interface, and also
provided better reviews of the new technology.
Research by Kruszynski and van Liere [3] looked into the
use of tangible props as input devices. They suggested that
tangible props enhance the feeling of presence and provide a
more intuitive and easy-to-use interface for object selection
and manipulation. Their approach used rapid prototyping
techniques to generate basic representations of objects in the
virtual environment. These objects could be outfitted with
multiple sensors (position sensors, accelerometers, buttons,
etc.) to provide ways to interact with their virtual
representations. In conjunction with a stylus, two-handed
interaction allows several degrees of freedom when
manipulating an object. Results from the research indicate that
the use of props gives an increased sense of immersion in a
virtual environment and increases the usability of the
application.

2.2. APPLIED VR
Virtual reality has the ability to provide input for the
different senses. Most common are visual and audio inputs,
as in computer and video games, but a virtual environment can
also provide smells and touch through the use of either haptic
feedback or physical props. Since virtual reality can provide
experiences that are as real as the designer can make them, it
has an unparalleled ability to allow users to gain experiences,
in a controlled environment, that would normally be available
only through the corresponding real-world experiences.
One of the explored applications for virtual environments
is teaching. When teaching fire safety to children, Smith and
Erickson [4] found that using virtual scenarios to supplement
regular training provided higher motivation for children to learn
fire safety. Additionally, they demonstrated that immersive
VR systems have the advantage of allowing children to obtain
realistic experiences that could not be achieved through
traditional video or lectures. This study indicates that future
experiential learning in classrooms can be enhanced with
tailored virtual environments that allow students to experience
the subjects they are learning.
Additional training using VR is done using simulators.
Many simulators have been developed to teach certain skills,
from aircraft piloting to military tactics and even medical
training. Luciano et al. [5] tested the use of haptic feedback in a
virtual reality system for periodontal training. Haptic feedback
in periodontal training is important because many of the
conditions that dentists look for are discovered through tactile
sensation; therefore, any realistic periodontal training needs to
include haptic feedback from the virtual objects. In this case,
haptic feedback used in conjunction with virtual reality was an
integral part of training dentists with the simulator.
Not only can virtual reality be used to supplement learning,
it can also be used to assess learning tools. Experimentation
with collaborative e-learning has provided a new tool in
electronic learning [6]. Monahan et al. used a three-dimensional,
interactive school environment that allows students access to
learning tools, lectures, and social areas. The system, called
CLEV-R, was designed to be a step beyond course
management systems such as Blackboard or WebCT by
including social interaction, a three-dimensional environment,
and interactive media to provide a complete online-course
system. Initial evaluations have shown promise that the system
could provide a more dynamic platform for online learning.
Another approach to learning with virtual reality systems is
through the use of augmented reality. These systems, usually
built around a head-mounted display (HMD), take reality and
add to it. In recent studies, augmented reality has been used in
teaching vehicle maintenance [7]. The system uses an HMD to
display interactive instructions during maintenance. While
making repairs, marine mechanics are given instructions while
also being provided with identifying labels and arrows
indicating where tasks need to be performed. These
computer-provided instructions are able to visually identify
object locations and provide in-depth instructions on
maintenance. When compared to other methods, such as
copying maintenance instructions from a laptop screen, using
the interactive HMD proved 47% faster in performing
maintenance tasks [7].
Another application for virtual reality is in medicine.
Specifically, virtual reality has shown considerable promise in
treating victims of Post-Traumatic Stress Disorder (PTSD),
anxiety, and specific phobias. A study [8] on the effects of
virtual reality therapy on an Operation Iraqi Freedom veteran
suffering from PTSD found that the patient's symptoms were
significantly decreased after attending only four therapy
sessions. The sessions were conducted using visual, audio, and
olfactory stimuli to simulate the conditions leading to the
disorder. Additionally, the floor was rigged to provide motion
sensation corresponding to the triggered memory. Sessions
lasted only an hour and were held once a week. The significant
decrease in PTSD symptoms after only four weeks of therapy
indicates that prolonged use of virtual reality therapy may lead
to further recovery. These findings corroborate those of
Parsons and Rizzo [9], which provided a quantified basis to
support the more qualitative prior research results in anxiety
and phobia disorders, research that relied on either subjective
tests or, more often, few test subjects.

One of the most common applications for virtual reality is
the simulation of real-world objects, environments, and
conditions. This may be as simple as the design of a
three-dimensional object, or something as complex as an
interactive model simulating the activity of individual particles
in a system. In some cases, virtual reality has been used to
determine the effects objects have on the perception of an
environment. Jallouli et al. [10] used virtual reality to determine
wind turbines' ability to affect visual and audio perception of
the surrounding area. Additionally, the use of virtual reality has
allowed scientists to model the behavior of wild animals and
insects [11]. In this case, virtual reality can modify the visual
and audio stimulation for insects without constricting their
potential movements, thus allowing scientists to study their
natural reactions to controlled stimuli.
Other techniques have been developed for the teleoperation
of robotic devices [16-19]. Huijun and Aiguo [16] demonstrate
techniques that determine predictive force data for remote
devices, providing more accurate force data for virtual control
of remote robots. Khedder et al. [17] show that for complex
teleoperation of a robot, augmented reality or virtual
environments can decrease complexity by reducing the
workspace to include only its needed functions. He and Chen
[18] extend previous research done on simpler robots by using
a haptic feedback system to control a six degree-of-freedom
(DOF) robotic arm. Their results indicate that haptic feedback
is an important control in the manipulation of 6-DOF robots.
He and Chen [19] also show that the use of human input in
addition to semi-automatic path planning of a virtual robot is an
improvement over human-only programming or computer-based
algorithms. Essentially, they demonstrated that having the robot
propose options for critical alignments allows the user to
choose the most advantageous path for the robot. This takes
advantage of the human brain's intuitive grasp of path planning
along with the computational power of the computer.

2.3. VR TECHNIQUES
As most programmers know, developing the algorithms
used to accomplish a programming goal can be the hardest part
of coding. A significant amount of research has gone into
finding algorithms to help develop virtual technologies. In
many cases, faster and more efficient memory systems and
larger memory caches have enabled VR code to become more
interactive and complex than early versions. Despite these
advances in technology, efficient code is still a necessity when
designing a virtual environment.
One of the most researched aspects of coding techniques is
the interaction between multiple objects in a virtual
environment. Research in this area has included object
interaction with an input [12], object penetration determination
[13], and deformation with haptic feedback [14]. In all cases,
the goal was to determine pseudo-code that can be used in
programming, and otherwise to develop techniques that can be
turned into algorithms to accomplish a goal.
Prada and Payandeh [12] describe the implementation of
virtual fixtures: geometric fields that can provide different
outputs depending on the programmed response. These fixtures
can be programmed to act like physical barriers, or can be used
to apply haptic feedback that keeps users in a specified area [15].
Initial conclusions from tests indicate that the use of virtual
fixtures increases user performance in tasks such as following
a designated path, while still allowing the user to maintain
control of the task.
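The virtual-fixture idea can be made concrete with a small sketch: a hypothetical guidance fixture that pushes the user's probe back toward a designated path once it strays beyond a permitted corridor. The function name, corridor width, and stiffness are illustrative assumptions, not Prada and Payandeh's implementation.

```python
import numpy as np

def fixture_force(probe_pos, path_a, path_b, corridor=0.05, stiffness=200.0):
    """Guidance virtual fixture: spring force pulling the probe back
    toward the segment path_a->path_b once it strays beyond the corridor."""
    a, b, p = map(np.asarray, (path_a, path_b, probe_pos))
    ab = b - a                              # path segment direction (assumed nonzero)
    t = np.clip(np.dot(p - a, ab) / np.dot(ab, ab), 0.0, 1.0)
    closest = a + t * ab                    # nearest point on the path segment
    offset = p - closest
    dist = np.linalg.norm(offset)
    if dist <= corridor:                    # inside the corridor: no force applied
        return np.zeros(3)
    # spring force proportional to penetration beyond the corridor boundary
    return -stiffness * (dist - corridor) * (offset / dist)
```

A fixture like this acts as a soft constraint: the user retains full control inside the corridor, which matches the reported finding that users maintain control of the task while performance improves.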
Fares and Hamam [13] looked into developing algorithms
to determine penetration depth vectors, a measure of object
overlap. These algorithms, known as penetration queries, have
potential in applications that depend upon object depth.
They also developed an algorithm whose computation time for
penetration depth grows linearly with the number of facets on
the object's surface.
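A penetration query is easiest to see for analytic shapes. The sketch below computes the penetration depth vector for two spheres (an illustrative simplification; Fares and Hamam's algorithms operate on faceted surfaces, and the function name is assumed).

```python
import numpy as np

def sphere_penetration(c1, r1, c2, r2):
    """Penetration query for two spheres: returns the penetration depth
    vector (the minimum translation separating the objects), or None
    if they do not overlap."""
    c1, c2 = np.asarray(c1, float), np.asarray(c2, float)
    d = c2 - c1
    dist = np.linalg.norm(d)
    depth = r1 + r2 - dist                  # overlap along the center line
    if depth <= 0.0:
        return None                         # no overlap
    # arbitrary direction if the centers coincide
    n = d / dist if dist > 0 else np.array([1.0, 0.0, 0.0])
    return depth * n                        # translate sphere 2 by this to separate
```

For faceted meshes the same quantity must be searched for over many candidate face pairs, which is why the linear-time result with respect to facet count is significant.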
Vafai and Payandeh [14] sought to model virtual dissection
with haptic feedback. Their research examined the mechanics of
deformation and collision detection between multiple bodies.
An additional aspect explored was the use of haptic feedback
during dissection related to the deformation algorithms. The
final product of their paper laid the groundwork for future use
of virtual reality with deformation and collision controlled
haptic feedback.

3. RELEVANT PRIOR APPLICATIONS


Virtual reality applications can take many forms. Several
of these forms are particularly interesting in robotics
applications, as they offer a far more advanced spectrum of user
interactions than a simple graphical interface. A virtual
environment can be tuned to the needs of the user, and
introduce distortion into the virtual model in order to
emphasize or de-emphasize some model features. This could
include color changes or transparency adjustments based on
model conditions, enhancing the sense of depth-perception in a
model, or simplifying model elements that are not active.
Additional instructions can be provided to the user via
graphical (color indicators, arrows, etc.), aural (warning tones),
haptic, or other methods.
This section focuses on previous applications of VR
techniques to solve several robotics challenges. These
demonstrations show the ability of commonly available
software to generate useful interactive virtual models that aid or
increase the insight of the user.
In one application, the goal is to generate a decision-making
representation of the world around a robot to enable
efficient path planning within a workspace. In this application,
the virtual representation of the workspace is actually a
distorted perception that is designed to facilitate the
decision-making process. Key to this application is the realization that
a distorted VR representation can be more useful than a true or
undistorted representation.
The second highlighted application uses a virtual
simulation of a desired weld to reverse engineer the necessary
path plan for a robot to produce the weld. This application is a
marriage of simulation and virtual reality technology for
robotic path planning that utilizes computer-aided design
(CAD) tools and the popular MATLAB computational suite.
In the third application, the Microsoft Robotics Developer
Studio, one of several emerging packages designed to facilitate
offline and simulated interactions with robotic hardware, is
used to provide a virtual environment for exploring the
effectiveness of robotic path-planning algorithms.


3.1. VISUALIZATION OF PATH OPTIMIZATION


Mobile-robot path planning is one of the most widely
researched topics within the field of modern robotics. In this
application, intuitive advanced visualization techniques are of
great benefit to the planner. In this section, we apply
visualization to the problem of path optimization for a mobile
robot traveling over curved topography rather than a flat
plane.
3.1.1. BACKGROUND
Autonomous robots are being used in situations such as
space exploration, counter terrorist operations, search and
rescue, and other environments that are not accommodating to
human life. Mobile robots carry a finite amount of energy that
they are reliant upon for movement, and it is obviously
beneficial to conserve as much energy as possible. This can be
accomplished by following an energy-optimized path over a
given topography containing obstacles.
In an extraterrestrial environment, common obstructions
could include mountains, craters, and other large features
located between the robot and its goal. NASA engineers have
found the traversal of sloped terrain to be quite hazardous, with
even small slopes greatly increasing the danger of a rover
becoming stuck [20]. Using a stereoscopic camera array, or
another visual input system, path planning and navigation could
be calculated by the robot in real time. This could be a great
boon to the exploration and mapping of partially known
regions on another planet, where detailed topographic
information is not available in advance.
information will be of great importance to mission planners, as
it will afford engineers the opportunity to observe the robot's
progress, and adjust the robot's course to prevent the occurrence
of potentially dangerous circumstances.

3.1.2. VISUALIZATION
Visualization techniques are also essential to the designer
in terms of understanding the path optimization of the robot.
Consider a general approach to path planning on a 3D surface,
as illustrated in Fig. 1.

Figure 1. Path planning overview: input of topography, robot
characteristics, obstacles, and start and goal points; conversion of
the continuous topography into a discrete workspace; a conditioning
step in which the workspace is bounded by a high potential and
obstacles are assigned a similarly high potential; warping of the
topography to include a distance-to-goal potential; search of the
discrete workspace elements by varying methods; and path cost
calculation and results display.


3.1.3. CONTOURED WORKSPACE VISUALIZATION
In Fig. 1, steps in which visualization techniques will assist
the path planner are shaded (in blue). The first highlighted step
involves the warping of the terrain topography to reflect the
distance of any given point from the goal point. For ease of
optimization, we wish to construct a potential field such that
the edges of the workspace and any obstacles have a very high
potential, while the goal point is at the global minimum
potential. In the current challenge of navigation, we wish to
maintain information regarding the topography of the
workspace in this potential-field representation as well. The
warping step benefits from visualization, as the underlying
distance-to-goal potential is inherently difficult to visualize
without assistance. Figure 2 shows a warping function applied
to the test topography as an illustration.
The visualization provided in Fig. 2 allows an intuitive
understanding of the robot's workspace, and simplifies the
identification of the optimum route to the goal. This warping is
made possible by the virtualization of the environment: VR
provides the freedom to present a distorted view of reality in
order to reveal hidden realities.
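The warping step can be sketched on a gridded workspace. The weighting terms, grid discretization, and function name below are illustrative assumptions, not the authors' implementation: the terrain height is blended with a distance-to-goal potential, while obstacles and the workspace boundary receive a very high potential, as described above.

```python
import numpy as np

def warped_potential(height, goal, obstacles, k_dist=1.0, k_terrain=0.5, wall=1e6):
    """Build the distance-to-goal potential over a gridded topography.
    height: 2D array of terrain elevations; goal: (row, col) index;
    obstacles: boolean mask of blocked cells."""
    rows, cols = np.indices(height.shape)
    dist = np.hypot(rows - goal[0], cols - goal[1])   # Euclidean cell distance to goal
    pot = k_dist * dist + k_terrain * height          # warp terrain by distance-to-goal
    pot[obstacles] = wall                             # obstacles: very high potential
    pot[0, :] = pot[-1, :] = wall                     # bound the workspace edges
    pot[:, 0] = pot[:, -1] = wall
    return pot
```

By construction the goal cell sits at the global minimum of the field, so descending the potential leads toward the goal while avoiding obstacles and the boundary.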
3.1.4. VISUALIZATION OF OPTIMIZATION RESULTS
Aside from a better understanding of the robot's operating
area, visualization techniques are essential to understanding the
results of path optimization studies. While the list of Cartesian
points along the path may be of interest, it is of little use
until visualized. Using even rudimentary visualization techniques,
the path planner can gain much more insight into the planned
path, as shown in Fig. 3.
In Fig. 3, the red line represents the calculated optimal
path, while black regions indicate areas explored by the search
algorithm. The black border on the edge of the space represents
an arbitrarily high potential used to ensure that the planned path
remains within the workspace. Even the very simple
visualization of Fig. 3 shows the planner the calculated path,
and gives insight into the operation of the searching and
optimization algorithms.
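The search over the discrete workspace, together with the explored-region bookkeeping that Fig. 3 visualizes, can be sketched with a plain Dijkstra search. This is one possible instance of the "varying methods" mentioned above, under assumed conventions (4-connected grid, edge cost equal to the destination cell's potential), not the authors' planner.

```python
import heapq
import numpy as np

def grid_search(pot, start, goal):
    """Dijkstra search over the discrete workspace; the edge cost is the
    destination cell's potential. Returns (path, explored) so both the
    optimal path and the searched region can be visualized."""
    rows, cols = pot.shape
    dist = {start: 0.0}
    prev, explored = {}, set()
    pq = [(0.0, start)]
    while pq:
        d, cell = heapq.heappop(pq)
        if cell in explored:
            continue
        explored.add(cell)                      # cells searched (black in Fig. 3)
        if cell == goal:
            break
        r, c = cell
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                nd = d + pot[nr, nc]
                if nd < dist.get((nr, nc), np.inf):
                    dist[(nr, nc)] = nd
                    prev[(nr, nc)] = cell
                    heapq.heappush(pq, (nd, (nr, nc)))
    path, cell = [], goal                       # walk predecessors back to start
    while cell != start:
        path.append(cell)
        cell = prev[cell]
    path.append(start)
    return path[::-1], explored
```

Because cells with an arbitrarily high potential are prohibitively expensive to enter, the returned path stays inside the workspace and around obstacles, exactly the behavior the Fig. 3 visualization makes apparent.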



3.1.5. ADVANCED VISUALIZATION APPLICATIONS


The preceding example uses only rudimentary
visualization techniques to convey information to the planner,
and might be more accurately termed a computer graphics (CG)
exercise. However, the method shown could be easily expanded
so that planners could adjust optimization parameters in real time and receive immediate visual feedback in a virtual
environment. The use of more advanced visualization
techniques undoubtedly would increase the engineer's insight
into this problem. While an eye in the sky view of the
workspace is adequate for many uses, visualizing the path with
VR techniques also could reveal interesting information
concerning obstacles that are located to the side or above the
robot. This is a tantalizing possibility, where the path planner
could be immersed in the workspace. In this application, the
planner would be able to examine the terrain, and the planned
path in detail, from many points of view, in order to ascertain
the validity and safety of the planned path.
3.2. INDUSTRIAL ROBOTICS
The study of industrial robotic manipulators has been
greatly augmented by the use of advanced 3D visualization
techniques. However, it is inherently difficult to understand the
pose of a robot from a list of joint dimensions and angles (i.e.,
the DH parameters of a robot). As a result, many visualization
suites have been developed and commercialized, such as
Microsoft's Robot Development Studio. Additionally, modules
and plug-ins for software tools not explicitly intended for
robotics applications have been developed. The Robotics
Toolbox for MATLAB/Simulink is a prime example of this.
Some other tools, such as CAD suites, can be used in robotics
visualization without modification. In this section, we will
apply several of these tools to a complex industrial problem,
the robotic welding of tubes.
3.2.1. BACKGROUND
While the automation of welding procedures has become
commonplace, welds along complex paths, such as the
intersection of two round tubes, have largely been left to human
welders. This is largely due to the difficulties in path planning
introduced by the requirements of proper welding technique. In
order to properly join two members the welder tip (robot end
effector) must be held at a proscribed angle to, and distance
from, the work piece. This introduces constraints of both
position and orientation on the end effector, which require a
sophisticated path planning approach. While this problem may
seem to resemble a common computer-aided machining (CAM)
path planning challenge, it is in fact far more difficult. CAM
tools consider only one work piece and the tool that is to
perform the work. To address the welding of parts, two work
pieces must be considered. The correct weld path must be
calculated from the actual (not the expected) positions and
orientations of these work pieces.
In this section, an overview of one such approach is
described, showing that modern virtual-reality techniques
are vital in this application.

Figure 2. Visualization of the warping applied to a test topography.

Figure 3. Visualization of optimization results.



3.2.2. VISUALIZATION
In order to conduct path planning in this situation, it is
necessary to use several visualization and VR tools. Camera
data and image processing algorithms are used to find vectors
describing the central axis and diameter of each tube to be
welded. These data are passed to a CAD tool, which produces a
virtual model of the tubes, and of the welding robot, in true
relation to each other. This process is illustrated in Fig. 4.
As can be seen from Fig. 4, the proper orientation for the
welding tip can be described by a series of vectors normal to a
45-degree fillet between the tubes. In order to efficiently
process these vectors, along with the robot kinematics, a virtual
robot is constructed using the popular MATLAB computational
package, as shown in Fig. 5.

Figure 5. Virtual robot created using MATLAB.


The use of MATLAB allows easy computation of the joint
angles required for proper orientation at close intervals along
the weld path. The resultant robot orientations or waypoints can
be shown sequentially as an animation, as shown by Fig. 6.
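The weld-path construction described above can be illustrated with a simplified sketch. It assumes two perpendicular tubes (a branch of radius r on a main tube of radius R, with r no larger than R) and takes the tip direction as the bisector of the two outward surface normals, a rough stand-in for the 45-degree fillet normals. This is a hypothetical Python reconstruction, not the authors' MATLAB implementation.

```python
import numpy as np

def weld_waypoints(R, r, n=36):
    """Waypoints along the saddle curve where a branch tube (radius r,
    axis z) meets a main tube (radius R, axis x), with a tip direction
    taken as the bisector of the two outward surface normals."""
    theta = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    x, y = r * np.cos(theta), r * np.sin(theta)
    z = np.sqrt(R**2 - y**2)                 # saddle height on the main tube (r <= R)
    pts = np.stack([x, y, z], axis=1)        # waypoint positions on the joint curve
    n_main = np.stack([np.zeros_like(y), y, z], axis=1) / R    # main-tube normal
    n_branch = np.stack([x, y, np.zeros_like(x)], axis=1) / r  # branch-tube normal
    tips = n_main + n_branch                 # bisector of the two normals
    tips /= np.linalg.norm(tips, axis=1, keepdims=True)
    return pts, tips
```

Each (position, direction) pair is then one waypoint of the kind shown in Fig. 6; solving the inverse kinematics at each waypoint yields the joint angles for the animation.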

Figure 6. Robot end effector, with blue markers showing path waypoints.
3.2.3. INDUSTRIAL ROBOTICS APPLICATIONS
It can be seen that VR techniques provide great potential
for analyzing the movements of industrial robots, especially
when it is necessary to follow complex paths. The ability to
extract geometry information from a CAD package is of great
benefit in this situation, and many robotic design suites now
include this ability. The ability to construct a virtual-reality
workspace also provides benefits in terms of robot safety. The
virtual workspace can be used to simulate activities and detect

Figure 4. Virtual work piece reconstructed from image data


(top), virtual welding robot (center) and work piece detail
(bottom).

Copyright 2010 by ASME

glovebox acts as a containment vessel, which limits access to


the interior to a series of gloved openings, and a set of window
ports. Often, the interior of the glovebox is maintained at
subatmospheric pressure, so that any containment breaches
result in an air inflow into the glovebox. This causes system
monitoring and path planning to become difficult because of
the limited feedback received by the technician.
The challenges resulting from the combination of safety
features often leads to ergonomic injuries [21]. The gloves and
reduced air pressure makes it difficult for workers o grasp
objects, and virtually impossible for workers to perform tasks
requiring dexterous motions such as threading a nut on a bolt.
More importantly for this paper, the windows restrict visual
feedback to the operator and can cause parallax issues.
Additionally, current methods for moving and adjusting the
robot do not provide haptic feedback that would be valuable for
determining the status of the system.
Our goal is to develop a robotic system and virtual reality
interface, which enables operators to work in these insolated
environments virtually, while the robotic system is intelligently
controlled by the operator who works in a shirtsleeve
environment, following the demonstration of Khedder et al [17]
that complex teleoperation of a robot can be simplified through
the use of a simplified virtual workspace. The virtual system
will include a visual representation of the robotic system and
environment and haptic feedback from the robot. This will
improve current path planning and obstacle avoidance
techniques within the system, and create the ability for the user
to monitor the system from any angle required. The use of a
virtual environment will also help provide feedback to the user,
utilizing augmented reality similar to that demonstrated by
Henderson and Feiner [7] in their research. Initially, this
environment is being applied to nuclear material handling
applications, but similar opportunities exist in the chemical
processing, pharmaceuticals, and biomedical research and
production fields. To develop a system that can be certified for
operation in this nuclear environment, several integration issues
need to be addressed.

potential collisions or other undesirable occurrences. Advances


in the methods by which industrial equipment designers interact
with computers can only increase this effect.
3.3. VIRTUAL PROGRAMMING OF MOBILE ROBOTS
Many types of software exist for programming robots.
Through the expanding capabilities of computers, the user
interface for these programs enable programmers to generate
realistic models of robots to test their programs and algorithms.
For one program, Microsoft Robotics Developer Studio, a
visual programming language, where connections between code
and the decision making is visible through a tree-like structure,
is used to either connect to a physical robot or to interact with a
virtual model. This structure, like a decision making tree, maps
out the flow of information in the program and develops values
from sensor inputs that correspond to actuator outputs. In
the case of mobile robots, an input to the system might be a
distance sensor, while the output actions may include a left or
right turn.
This programming language is designed so that the main
decision-making part of the program can be left unaltered and
be able to interface with different robots (virtual or physical) by
changing the robot manifest. These manifests hold the
information on how the robot operates and can translate
commands from the program into physical actions from the
robot. The manifest can also go to a virtual robot. In essence,
the virtual robot will act exactly like a physical one would, but
exists in a virtual environment, which is also programmable.
Microsoft Robotics includes several built-in mobile robots
for path planning, as well as several environments for these
robots to interact with. With built in collision detection, and
realistic physics, the built-in environments already provide a
virtual environment that can interact with the programmed
robot, controlled using anything from the built in functions in
the program, to robotic sensors, to hand control from a
keyboard or mouse.
This easy transfer from a virtual environment to a real robot is a valuable tool in robot programming and path design. Using a virtual robot as a test bed allows the programmer to find faults in a program before attempting to interact with an expensive physical robot. Additionally, the ease of swapping components on the virtual robot can help designers understand which components the robot needs to perform its required task. For instance, the range and sensitivity of the sensors used to detect objects and collisions can be adjusted to determine the most effective values.
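The kind of sensor-range study described above can be sketched as a toy parameter sweep: a simulated robot approaches a wall in fixed control steps, and we find the smallest sensor range that still lets it stop in time. The one-dimensional "simulation" and all numbers are illustrative only.

```python
# Toy parameter study of the kind described above: sweep a virtual distance
# sensor's range to find the smallest value that still lets a simulated
# robot stop before a wall.  All quantities are illustrative.

def stops_in_time(sensor_range_cm: int, step_cm: int = 10,
                  wall_at_cm: int = 500) -> bool:
    """Advance a simulated robot toward a wall in fixed control steps and
    report whether its sensor sees the wall before impact."""
    pos = 0
    while pos < wall_at_cm:
        if wall_at_cm - pos <= sensor_range_cm:
            return True          # wall detected in time: robot stops
        pos += step_cm           # one control step of travel
    return False                 # reached the wall undetected

candidates_cm = [5, 10, 25, 50, 100]
safe = [r for r in candidates_cm if stops_in_time(r)]
print(min(safe))   # -> 10: smallest tested range that still stops the robot
```

The same sweep pattern generalizes to sensitivity, update rate, or any other virtual-robot parameter the designer wants to size before buying hardware.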
4. VISION FOR NUCLEAR MATERIAL HANDLING
Several applications of virtual reality technologies have been explained previously in this paper. One that has not been explored is virtual reality's ability to provide a useable, informative interface for inaccessible systems.
Los Alamos National Laboratory (LANL) has been involved in the development of numerous automated systems for nuclear material handling operations. These systems by necessity operate inside enclosed gloveboxes, preventing the release of nuclear material. In general, glovebox applications are challenging because of the protective equipment employed.

4.1. PATH PLANNING
Robotic path planning is currently performed using relatively low-technology methods. The most prevalent method is to manually move the robot to each intermediate motion point using a teach pendant and program that waypoint into the robot motion program. This is a highly labor-intensive process, but it is often all that most robotic hardware supports. A few commercial robotic systems support offline programming, in which a virtual model of the robot and its environment is used to generate a program for the robot. While this enables robot programs to be generated offline, the process is essentially the same: the operator defines a set of intermediate waypoints, which are then interpolated to generate the robot motions.
In lieu of a teach pendant based interface, virtual reality technologies may offer a superior alternative. Instead of issuing commands to move the robot, the operator would perform the required motions in a virtual environment with their own hand and arm. The virtual reality controller would record these motions and use them to establish the necessary robot program elements for the robot to mimic the motion of the operator. He and Chen [19] demonstrate how human interaction mixed with computer algorithms is an improvement over using either technology alone. By combining built-in computer algorithms and human decision making, robotic path planning can be streamlined to provide the best path.
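The waypoint-interpolation step that underlies both teach-pendant and offline programming can be sketched as follows: a recorded sequence of poses is treated as waypoints, and intermediate points are generated by linear interpolation. A real controller would interpolate in joint space with velocity limits; this toy version works on 2-D points and is illustrative only.

```python
# Sketch of waypoint interpolation as described above: successive recorded
# poses are connected by linearly interpolated intermediate points.
# A real controller would interpolate in joint space with velocity limits;
# this 2-D version is illustrative only.

def interpolate(waypoints, points_per_segment=4):
    """Linearly interpolate between successive (x, y) waypoints."""
    path = []
    for (x0, y0), (x1, y1) in zip(waypoints, waypoints[1:]):
        for k in range(points_per_segment):
            t = k / points_per_segment          # fraction along the segment
            path.append((x0 + t * (x1 - x0), y0 + t * (y1 - y0)))
    path.append(waypoints[-1])                  # include the final pose
    return path

recorded = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0)]  # operator-taught poses
print(len(interpolate(recorded)))                # -> 9 points along the path
```

In the VR approach described in the text, `recorded` would come from sampling the operator's hand motion rather than from a teach pendant, but the interpolation step is the same.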
When using hand motions to program a robotic path, there are three interface technologies to consider. The first, virtual reality gloves, can sense movement and allow the user to interact with a virtual environment. The second, haptic gloves, additionally allow the operator to receive physical feedback from the virtual environment. These physical effects can be as simple as being able to touch objects, or as complex as applying an additional gravitational force. For path planning purposes, only an input into the virtual environment is needed; however, the capability to touch virtual objects and to restrict movement may be a useful tool for ensuring that the robot can emulate the physical motion of the glove. In other words, restricting the motion of the hand would prevent accidental motions that the robot cannot copy from being programmed into the robotic path, and would also allow the hand to move as if it experienced the same obstructions as the robot. These capabilities are particularly important when the robot must come into contact with an obstacle, as occurs when the robot needs to insert an object into, or remove one from, the lathe. Most gloves are built to output to a specific setup, but some come with a variety of programming options that allow individuals to access the code used to convert hand motions into useable quantities. The use of 3D gloves as a user interface shows considerable promise for object selection and positioning [2], and this capability can readily be adapted to assist with path planning.
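The motion-restriction idea above can be sketched as a filter between the glove and the path recorder: a hand position is accepted only after being clamped to the robot's reachable workspace. Here the workspace is modeled as a simple sphere of radius `reach`; a real check would use the robot's kinematics, so the sphere is a stand-in assumption.

```python
# Illustrative sketch of restricting glove motion to robot-reachable poses:
# a hand position is projected onto the robot's reachable workspace, modeled
# here as a sphere of radius `reach`.  A real system would use the robot's
# kinematics; the sphere is a simplifying assumption.
import math

def clamp_to_workspace(x, y, z, reach=0.8):
    """Project a glove position onto the robot's reachable sphere."""
    r = math.sqrt(x * x + y * y + z * z)
    if r <= reach:
        return (x, y, z)                 # already reachable: pass through
    s = reach / r                        # scale back onto the boundary
    return (x * s, y * s, z * s)

print(clamp_to_workspace(2.0, 0.0, 0.0))  # -> (0.8, 0.0, 0.0)
```

A haptic glove would additionally push back on the hand when the clamp activates, so the operator feels the same limit the robot obeys.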
The third and final control interface is not a glove but a handheld control device, such as those popularized by game systems like the Nintendo Wii. These controllers offer some of the same capabilities as the glove options, but often at reduced cost. They can often be readily interfaced with a computer control system and are becoming an increasingly popular alternative in research settings. The popularity of gaming systems makes this interface option almost as intuitive as the glove systems.
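A motion-controller interface of the kind just described can be sketched as a mapping from raw tilt readings to drive commands. The axis conventions, units, and thresholds below are invented for illustration and are not taken from any particular controller's SDK.

```python
# Hedged sketch of a game-controller interface as described above: raw
# accelerometer tilt (in g) is dead-banded and mapped to drive commands.
# Axis conventions and thresholds are invented for illustration.

def tilt_to_command(pitch_g: float, roll_g: float,
                    dead_band: float = 0.15) -> str:
    if abs(pitch_g) < dead_band and abs(roll_g) < dead_band:
        return "hold"                        # controller held level
    if abs(pitch_g) >= abs(roll_g):
        return "forward" if pitch_g > 0 else "reverse"
    return "right" if roll_g > 0 else "left"

print(tilt_to_command(0.4, -0.1))   # tilted forward -> "forward"
```

The dead band is what makes such controllers feel intuitive: small unintentional wobbles are ignored rather than sent to the robot.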

4.2. COLLISION & OBSTACLE AVOIDANCE
In this approach to path planning and robot control, the link between the operator and the robot is managed through a virtual interface. This interface is built around a model of the system, including the robot and its local environment. The purpose of this component is to perform a motion check on the movements of the operator so that potential problems with the motion of the robot, due to kinematic and geometric differences, can be detected and mitigated before the robot mimics the motion. Note that the kinematics of the operator and of the robot need not be identical; in fact, these systems can benefit from the use of nonhuman kinematics. However, central to this implementation is the ability to resolve differences between the control and implementation kinematics, and the need to internally and robustly detect potential collisions and avoid obstacles.
Collision detection is the ability of a robotic arm to detect potential collisions with objects in its path (i.e., obstacles). Collision detection is used in conjunction with obstacle avoidance algorithms to develop an alternative path around the obstacle that avoids the collision. A collision detection system will enable the robotic arm to avoid unexpected obstacles in its path, and ensure that known obstacles programmed into the virtual environment are both in the correct location and avoided.
Collision detection can be performed in several different ways. One of the most efficient is to build a virtual model of the system in which spheres represent the robotic parts and environmental features. For instance, a wall might be represented by an array of hemispheres, while a robotic arm would be made up of spheres at key locations on its body. When in proximity to each other, these spheres act as a buffer between known parts in the virtual space, so that inconsistencies between the measured and actual data can be accounted for. This method allows programmers to set up a suitable safety gap that can prevent collisions arising from faulty measurements [22-24].

4.3. HAPTIC FEEDBACK FOR GRIPPER
As indicated by He and Chen [18], haptic feedback is important in the manipulation of 6-DOF robots. As mentioned previously, haptic feedback can limit the human interface to the motions that the manipulated robot can actually follow. In addition to manipulation of the robot's path, haptic feedback can be a valuable tool for gripper interaction.
Robots manipulate objects within their workspace with end-effector tools known as grippers. When a worker performs work in a glovebox, the gloves compromise but do not eliminate haptic sensation. In a traditional robotic implementation, however, this sensation is lost. To retrieve useful information for the operator, grippers can be outfitted with a number of sensors. These sensors can indicate anything from whether there is an object within the gripper's grasp to the force being exerted by the gripper on the object. This information is useful for path and motion planning and can also be used to validate the gripper's grasp of an object. Using haptic feedback as a signal of grip strength would be instrumental in robotic control, as the need for grippers to apply the correct amount of force to hold and manipulate an object is paramount to their functionality. Haptic feedback provides a viable means of ensuring that a human interface keeps the robot's grasp on target and applying the correct amount of force for object manipulation.
5. FUTURE WORK
This project is in its early phases, but it offers a number of paths for realizing the advantages of VR in robotics. Our current focus is to integrate low-cost sensor systems into a robotic gripper that can provide haptic feedback to an operator. We hope to move beyond the gripper design into a project phase that would enable us to demonstrate the path-planning capability through a virtualized system. The basis for implementing this functionality is a leader-follower program using a VR glove. The robotic system would obtain the end-effector pose from the glove and would then derive the joint positions necessary to achieve this pose. Initially, the result will be an offline path-planning program, which could be used to download the robot program to the physical hardware. Eventually, further developments could bypass the offline programming step and directly connect robot movements to those of the VR gloves, as if the gloves were part of a teleoperation interface. Direct interaction will be highly predicated upon implementing a real-time obstacle avoidance and collision detection module that prevents the robot from colliding with objects within the workspace. This is particularly important if the kinematics of the robot do not match the kinematics of the operator.
Haptic feedback may be beneficial in these implementations. Feedback of collisions through a haptic connection to the user would enable the user to adapt their motions to the constraints of the glovebox.
This study has several limitations. Primarily, the various difficulties and obstacles to implementing virtual environments for robot control have not been discussed. These difficulties likely include both hardware and software issues, and they will require significant investigation in the future. Secondarily, the scope of both the VR tools and the robot types considered by this study is comparatively small. Future research may reveal powerful combinations of robotic and VR tools that we have not discussed.
The development of a virtualized glovebox has applications not only in an operational sense, but also for training purposes. Virtualization of the glovebox interface offers dramatic potential improvements to operational efficiency, user health, and system safety through the use of computer-enabled safety interlocks and user-interface assistance tools. Significantly, the operator could obtain numerous operational advantages from this approach, including a reduction in ergonomic injuries, fewer operator procedural errors due to built-in operational directions, and increased precision and operational efficiency attributable to the virtual nature of the system.

ACKNOWLEDGEMENTS
The authors of this paper gratefully appreciate the support of the Division of Engineering at the Colorado School of Mines in performing this work. Any opinions, findings, and conclusions or recommendations expressed in this publication are those of the authors and do not necessarily reflect the views of the Colorado School of Mines.

REFERENCES
[1] Lepecq, J.C., Bringoux, L., Pergandi, J.M., Coyle, T., Mestre, D., 2009, Afforded Actions as a Behavioral Assessment of Physical Presence in Virtual Environments, Virtual Reality, 13(3), pp. 141-151.
[2] Zudilova-Seinstra, E., van Schooten, B., Suinesiaputra, A., van der Geest, R., van Dijk, B., Reiber, J., Sloot, P., 2009, Exploring Individual User Differences in the 2D/3D Interaction With Medical Image Data, Virtual Reality, 14(2), pp. 105-118.
[3] Kruszynski, K.J., van Liere, R., 2009, Tangible Props for Scientific Visualization: Concept, Requirements, Application, Virtual Reality, 13(4), pp. 235-244.
[4] Smith, S., Ericson, E., 2009, Using Immersive Game-based Virtual Reality to Teach Fire-safety Skills to Children, Virtual Reality, 13(2), pp. 87-99.
[5] Luciano, C., Banerjee, P., DeFanti, T., 2009, Haptics-based Virtual Reality Periodontal Training Simulator, Virtual Reality, 13(2), pp. 69-85.
[6] Monahan, T., McArdle, G., Bertolotto, M., 2008, Virtual Reality for Collaborative e-Learning, Computers & Education, 50(4), pp. 1339-1353.
[7] Henderson, S.J., Feiner, S.K., 2007, Augmented Reality for Maintenance and Repair (ARMAR), Technical Report AFRL-RH-WP-TR-2007-0112, United States Air Force Research Lab, Wright-Patterson AFB.
[8] Gerardi, M., Rothbaum, B.O., Ressler, K., Heekin, M., Rizzo, A., 2008, Virtual Reality Exposure Therapy Using a Virtual Iraq: Case Report, Journal of Traumatic Stress, 21(2), pp. 209-213.
[9] Parsons, T.D., Rizzo, A.A., 2008, Affective Outcomes of Virtual Reality Exposure Therapy for Anxiety and Specific Phobias: A Meta-analysis, Journal of Behavior Therapy and Experimental Psychiatry, 39(3), pp. 250-261.
[10] Jallouli, J., Moreau, G., Querrec, R., 2008, Wind Turbines' Landscape: Using Virtual Reality for the Assessment of Multisensory Perception in Motion, Proceedings of the 2008 ACM Symposium on Virtual Reality Software and Technology, Bordeaux, France, pp. 257-258.
[11] Fry, S.N., Rohrseitz, N., Straw, A.D., Dickinson, M.H., 2008, TrackFly: Virtual Reality for a Behavioral System Analysis in Free-flying Fruit Flies, Journal of Neuroscience Methods, 171(1), pp. 110-117.
[12] Prada, R., Payandeh, S., 2009, On Study of Design and Implementation of Virtual Fixtures, Virtual Reality, 13(2), pp. 117-129.
[13] Fares, C., Hamam, Y., 2009, Optimisation-based Proximity Queries and Penetration Depth Computation, Virtual Reality, 13(2), pp. 131-136.
[14] Vafai, N.M., Payandeh, S., 2009, Toward the Development of Interactive Virtual Dissection with Haptic Feedback, Virtual Reality, 14(2), pp. 85-103.
[15] Abbott, J.J., Marayong, P., Okamura, A.M., 2007, Haptic Virtual Fixtures for Robot-Assisted Manipulation, Robotics Research, STAR 28, pp. 49-64.
[16] Huijun, L., Aiguo, S., 2007, Virtual-Environment Modeling and Correction for Force Reflecting Teleoperation with Time Delay, IEEE Transactions on Industrial Electronics, 54(2), pp. 1227-1233.
[17] Kheddar, A., Neo, E., Tadakuma, R., Yokoi, K., 2007, Enhanced Teleoperation Through Virtual Reality Techniques, Advances in Telerobotics, STAR 31, pp. 139-159.
[18] He, X., Chen, Y., 2008, Six-Degree-of-Freedom Haptic Rendering in Virtual Teleoperation, IEEE Transactions on Instrumentation and Measurement, 57(9), pp. 1866-1875.
[19] He, X., Chen, Y., 2009, Haptic-aided Robot Path Planning Based on Virtual Tele-operation, Robotics and Computer-Integrated Manufacturing, 25(4-5), pp. 792-803.
[20] Maimone, M., Biesiadecki, J., 2007, Overview of the Mars Exploration Rovers' Autonomous Mobility and Vision Capabilities, IEEE International Conference on Robotics and Automation (ICRA) Space Robotics Workshop, Roma, Italy.
[21] Cournoyer, M., Castro, J., Lee, M., Lawton, C., Park, Y., Lee, R., and Schreiber, S., 2009, Elements of a Glovebox Glove Integrity Program, Journal of Chemical Health and Safety, 16(1), pp. 4-10.
[22] Harden, T., 1997, The Implementation of Artificial Potential Field Based Obstacle Avoidance for a Redundant Manipulator, Master's Thesis, The University of Texas at Austin, Austin, Texas.
[23] Harden, T., 2002, Minimum Distance Influence Coefficients for Obstacle Avoidance in Manipulator Motion Programming, Ph.D. Dissertation, The University of Texas at Austin, Austin, Texas.
[24] Xu, J., Liu, D.K., Fang, G., 2007, An Efficient Method for Collision Detection and Distance Queries in a Robotic Bridge Maintenance System, Robotic Welding, Intelligence and Automation, Springer-Verlag, Berlin Heidelberg, pp. 71-82.

Copyright 2010 by ASME