
CAD model based virtual assembly simulation, planning and training

Ming C. Leu (1)a,*, Hoda A. ElMaraghy (1)b, Andrew Y.C. Nee (1)c, Soh Khim Ong (1)c, Michele Lanzetta (2)d, Matthias Putz (2)e, Wenjuan Zhua, Alain Bernard (1)f

a Department of Mechanical and Aerospace Engineering, Missouri University of Science and Technology, Rolla, MO, USA
b Intelligent Manufacturing Systems Center, Industrial and Manufacturing Systems Engineering Department, University of Windsor, Windsor, Ontario, Canada
c Department of Mechanical Engineering, National University of Singapore, Singapore
d Department of Civil and Industrial Engineering, University of Pisa, Italy
e Chemnitz University of Technology, Chemnitz, Germany
f LUNAM Université, École Centrale de Nantes, IRCCyN UMR CNRS 6597, France
* Corresponding author.

CIRP Annals - Manufacturing Technology 62 (2013) 799-822
http://dx.doi.org/10.1016/j.cirp.2013.05.005

Keywords: Assembly; CAD model; Simulation

A B S T R A C T

This paper reviews the state-of-the-art methodologies for developing computer-aided design (CAD) model based systems for assembly simulation, planning and training. Methods for CAD model generation from digital data acquisition, motion capture, assembly modeling, human-computer interfaces, and data exchange between a CAD system and a VR/AR system are described. Also presented is an integrated methodology for designing, planning, evaluating and testing assembly systems. The paper further describes the implementation of these methods and provides application examples of CAD model based simulation for virtual assembly prototyping, planning and training. Finally, the technology gaps and future research and development needs are discussed.

© 2013 CIRP.
1. Introduction
1.1. Motivation
To succeed in today's fiercely competitive global market, manufacturers must reconsider their assembly methods and strategies. More agile and responsive assembly methods and strategies have to be developed to meet the dynamic requirements of customers and the shortened product lifecycle. More efficient assembly systems must be designed in order for products to remain profitable and competitive. These goals could be achieved through assembly simulation, planning and assessment in a CAD model based virtual environment (VE) before launching a real factory, in order to identify potential problems without the use of physical mockups, thus shortening the design cycle and improving product quality. Even assembly training can be conducted using a VE in order to train workers and improve their skills.
Many companies use CAD model based simulations to improve the capability and efficiency of their assembly processes. For example, before implementing the V-Comm Digital Mockup program, 80% of Toyota's manufacturing problems were assembly issues. By using simulation and addressing assembly issues in the design phase, Toyota shortened its lead time by 33%, reduced design variations by 33%, and reduced product development costs by 50% [83,158]. By using motion capture and assembly simulation in a VE, Ford reduced its assembly-related worker injuries dramatically with better designed workstations through assembly assessment with ergonomic analysis; additionally, the quality of new vehicles measured 3 months after their sale improved by 11% [178].
1.2. Overview
1.2.1. CAD model based assembly simulation, planning and training
system
Fig. 1 shows the schematic of a hybrid digital and physical system for CAD model based assembly simulation integrating design, planning and training. It emphasizes the integration between product design, manufacturing planning, and actual production. Such a system also serves as the basis of learning factories, which are ideal for transferring research outcomes to industry. New changeable and reconfigurable manufacturing systems can be investigated, in which novel system concepts and changeability enablers can be developed, realized, tested, and evaluated [57].

Fig. 1. Schematic of a CAD model based assembly simulation, planning and training system [57].
CAD model based simulations have been developed with functions spanning from conceptual and detailed design to manufacturing process planning to product maintenance. They have provided insight into product design and have been shown to reduce manufacturing time and costs and to improve product quality significantly. A CAD model based simulation system for assembly planning, training and assessment is illustrated in Fig. 2. The 3D model of a physical part can be generated using CAD software or a reverse engineering process with the acquisition of part geometric data in digital form from an existing physical part. Then, the movement of the physical part and the human operator, as well as their interaction with each other and with other objects in the physical environment, can be tracked using a motion capture system and other input devices, such as a sensory glove, a microphone, etc. Furthermore, stereoscopic viewing augmented by auditory and haptic sensations can make the operator feel fully immersed in a virtual reality (VR) environment.

Generally speaking, there are three objectives in CAD model based assembly simulation: (1) evaluating the assembly process in the early design stage; (2) generating practical and suitable assembly operation sequences; and (3) creating a virtual assembly platform for offline training of operators on assembly tasks.

With motion capture and 3D visualization capabilities, interactions among products, processes and human operators can be analyzed and evaluated to identify potential problems during assembly, such as awkward postures, poor workcell layout, insufficient tools and fixtures, and inability to access parts. During the assembly operation, the contact force can be estimated and transmitted to the operator using a haptic device so that the operator can feel the physical contact. This can increase the fidelity of the simulation and can be used to simulate complex assembly tasks.
1.2.2. Evolution of assembly planning research
Assembly is a critical process in manufacturing that may
consume up to 50% of the total production time and account for
more than 20% of the total manufacturing cost in traditional
industrial manufacturing [144]. Assembly automation and opti-
mization have been studied thoroughly in several areas, including
the following [152,191,207]. Assembly design has been studied and
applied to reduce assembly costs in the product conception and
design stage [20]. Assembly sequence planning has been conducted
to determine the optimal assembly sequence of components and
other aspects, including tool changes, xture design, assembly
freedom, etc., in the component assembly stage [107]. It affects
how quickly and cost-effectively the product is assembled. At this
stage, ergonomic analyses also must be performed to consider
human factors in manual assembly. Systemconguration generation
is the next stage [192]. Traditionally, assembly systems are serial,
but non-serial congurations are now more widely used
[81,87,104]. Assembly line balancing has been investigated to
assign various assembly tasks to different workstations with the
objective of having equal or almost equal loads among the
workstations in the production planning stage [102,214]. All of
these research efforts aim to build a well-designed assembly
process in order to improve product quality and production
efciency and to reduce assembly costs and the products time to
market.
Historically, assembly personnel have scheduled assembly
plans for mechanical products based on existing assembly lines/
cells and their own experiences, and they have veried the plans by
assembling physical prototypes. With more complicated assembly
tasks or new products/plants, this method becomes more time-
consuming, expensive and error-prone.
Computer-aided assembly planning (CAAP), or computer-aided assembly process planning (CAAPP), has the ability to automate assembly planning to reduce manpower requirements and simplify the planning process. ElMaraghy [55] discussed the evolution and future perspectives of CAAP. Traditionally, CAAP generates assembly sequences by studying the disassembly process. Later CAAP systems utilize intelligent identification and group geometric features based on automatic feature recognition [39,116] to generate assembly sequences. Contact/connection/interference features and part surfaces/volumes can be automatically extracted from CAD files [47]. Generally speaking, CAAP systems have three major limitations. First, the number of possible assembly sequences increases exponentially with the number of parts requiring assembly; therefore, selecting an optimal or near-optimal assembly sequence for a given product becomes more difficult. Secondly, CAAP cannot incorporate expert knowledge from the assembler, which is essential to developing an efficient and successful assembly sequence. Thirdly, CAAP does not involve human interaction with the assembled parts, so it cannot evaluate issues related to ergonomics, such as awkward postures and reaching angles. These limitations have led CAAP into the realm of VR based assembly planning.
VR technology can simulate an assembly operation with 3D human-computer interactions, including visual, haptic and auditory interfaces. With VR technology, human assembly planners can immerse themselves inside a VE, implement the design concept in the early stage, and evaluate assembly/disassembly sequences and operations to analyze the design of assembly processes and systems.
Assembly planning has evolved from manual planning to computer-aided planning to VR based planning, with the objectives of shortening assembly time, reducing costs, increasing operator safety, and improving production efficiency and product quality. This evolution is depicted in Fig. 3, which shows publications from the Compendex & GEOBASE databases [61] on computer-aided assembly planning and virtual assembly simulation from 1972 to 2011. CAAP research peaked during the 1990s, while virtual assembly simulation research has been increasing steadily. This implies that the simpler virtual assembly simulation technology is gradually replacing the paradigm of more complex algorithmic assembly planning.

Fig. 2. Key elements of a CAD model based assembly simulation system.

Fig. 3. Publications on CAAP and virtual assembly simulation.


New enabling technologies that can be utilized for assembly planning, simulation and training in a network-centric environment have continuously been developed in recent years. Key to these enabling technologies is the use of CAD model based simulation, including computer graphics, VR, and augmented reality (AR), as the basis for developing advanced tools (software and hardware) and systems for assembly planning and training.
1.2.3. Objectives and organization of keynote paper
The objectives of this keynote paper are to review the state-of-the-art enabling technologies, tools and systems in CAD model based virtual assembly simulation, planning and training; to identify research trends and technical issues; to provide relevant application examples; and to discuss technology gaps and future research and development needs. The paper is organized such that the enabling technologies, including digital data acquisition, motion capture, and multi-modal rendering, are reviewed first, followed by the methods of virtual assembly modeling, planning and training. Two sections are devoted to discussing virtual assembly based on augmented reality technologies and the design, planning, evaluation and testing of assembly systems.
2. Digital data acquisition for CAD modeling
Assembly simulation, planning and training based on CAD models utilize a virtual environment, so building 3D digital models of objects is a fundamental issue. One common approach for generating 3D models is to use commercial software, such as NX, CREO, CATIA and SolidWorks, to design CAD models of 3D objects. Another approach is Modeling from Reality [90], which starts with data acquisition from physical objects in a real environment and ends with 3D digital models representing these objects on the computer [156]. With this approach, the often time-consuming modeling process done by human programmers can be automated. Consequently, both the development time and the cost of computer models of 3D objects can be reduced drastically.
The process of creating a 3D model from a real object includes data acquisition, data processing, positional registration, modeling, and rendering [51,52]. Considering the acquisition of digital data, three alternative object modeling methods can be distinguished [156]:

(i) Image-based modeling. This modeling method uses 2D image data to recover 3D information through a mathematical model or methods such as shape from shading [84], shape from texture [99], shape from specularity [82], shape from contour [179], and shape from 2D edge gradients [196]. These techniques can be used to generate 3D data from a single view [98]. To obtain more information, multiple views can be used to construct the 3D model more quickly and/or accurately [46]. The image-based modeling method has the advantage of higher portability and lower cost compared with range-based modeling.

(ii) Range-based modeling. The 3D object's detailed geometric information is acquired directly in this modeling method, which relies on artificial lighting, such as structured light [63,76], coded light [100,143], and laser light [26,208]. The techniques are based on triangulation, time-of-flight, continuous wave, and interferometry reflectivity measurement principles.

(iii) Integrated image- and range-based modeling. Baltsavias [13], Bohler [19] and Remondino [155] compared the image-based and range-based modeling methods and showed that no single modeling technique can satisfy all desired features of high geometric accuracy, portability, full automation, photo-realism, low cost, flexibility, and efficiency. To achieve better performance and address the limitations associated with either of these two methods, the image-based and range-based modeling methods can be integrated. As an example, structured light can be used to create feature points for matching, and the point cloud obtained by a stereo vision system can be post-processed to obtain the object's 3D model.
2.1. Digital data acquisition techniques
No matter which of the above three modeling methods is used, the first step is to acquire the digital data of the 3D object's surface. Many data acquisition techniques exist, and they can be categorized as contact and non-contact methods [15]. A classification of these techniques is depicted in Fig. 4, which emphasizes optical methods with subcategories. Contact measurement techniques have been used for many years in reverse engineering and industrial inspection. The main contact measurement method is to use a Coordinate Measuring Machine (CMM), which is a mature and well-established technology. A CMM typically uses a probe to measure an object's surface. The probe may be mechanical, optical, laser, white light, etc. The machine can generate the X, Y, Z coordinates of each point with micrometer precision. A CMM has several drawbacks, however. First, an NC path must be planned in order to cover the entire surface, which may be difficult when measuring an object with complex geometry. Secondly, a CMM requires the probe to physically touch the surface when taking measurements. The probe may damage the surface under some conditions, or sudden surface changes may damage the probe. Thirdly, a CMM can pick only one point with each measurement, which increases the measurement time. In comparison, imaging-based methods can provide non-contact, relatively fast data acquisition. The following sections focus on optical techniques for acquiring the 3D data of an object's surface.

Fig. 4. 3D data acquisition techniques.
2.1.1. Image-based techniques
Image-based techniques are used widely to obtain a surface's geometric information by analyzing how the images are formed and how light affects them. To obtain depth information, the imaging system has to be calibrated to obtain the system's physical parameters, and some complicated image processing must be performed. Stereo vision, shape from shading, and shape from silhouette are some well-known techniques for doing this.
2.1.1.1. Stereo vision. Stereo vision works by acquiring an object's 3D geometric information from two or more images taken from different points of view. A stereo vision system determines which point in one image corresponds to which point in the other image (the Correspondence Problem), and then triangulation is used to calculate the 3D information. Stereo vision is also called passive vision because it does not use active lighting, and hence is usually a low-cost system. However, it is fundamentally difficult to find the correspondence between the images taken from different views. Correspondence algorithms can be classified as correlation-based [68,73] or feature-based algorithms [171]. In correlation-based algorithms, a similarity criterion is used to measure the elements inside a fixed window in the two or more images. These algorithms typically give dense measurements of depth data. On the other hand, feature-based algorithms find correspondence between the different images using a set of features. By comparing the distances between feature descriptors, which are measured with numerical and/or symbolic properties of the features, the corresponding elements can be found by determining the pair of most similar features. However, feature-based approaches can only recover the corresponding feature points to determine their 3D positions. The main drawback of this approach is that it is difficult to reconstruct 3D shapes from stereo images in real time with high resolution.
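For illustration, the sketch below computes dense correspondence with OpenCV's correlation-based block matcher on a rectified stereo pair and converts the resulting disparity to depth by triangulation. The file names, focal length and baseline are assumed values, not from the paper.

import cv2
import numpy as np

# Load a rectified stereo pair (file names are placeholders).
left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

# Correlation-based block matching: a similarity criterion is evaluated
# inside a fixed window to find corresponding elements (dense depth data).
matcher = cv2.StereoBM_create(numDisparities=64, blockSize=15)
disparity = matcher.compute(left, right).astype(np.float32) / 16.0  # fixed-point output

# Triangulation: depth Z = f * B / d for focal length f [px] and baseline B [m].
f, B = 700.0, 0.12  # assumed camera parameters
valid = disparity > 0
depth = np.zeros_like(disparity)
depth[valid] = f * B / disparity[valid]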
2.1.1.2. Shape from shading. Shape from shading deals with the
recovery of a shape from a gradual variation of shading in the
image [84]. To solve this problem, it is important to investigate
how the images are formed. A simple model of image formation is
the Lambertian model, in which the gray level at any given pixel in
the image depends on the light direction and the surface normal. In
the shape-from-shading method, given a gray-level image, the aim
is to recover the surface shape at each pixel in the image. However,
the Lambertian model does not always apply to real images, so
using the shape-from-shading method for reconstruction of real
3D objects is difficult.
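As a minimal illustration of the Lambertian image-formation model that shape from shading inverts, the following sketch computes the gray level of a surface patch from its normal and the light direction; the albedo and geometry are assumed values.

import numpy as np

def lambertian_intensity(normal, light_dir, albedo=0.8):
    """Gray level under the Lambertian model: I = albedo * max(0, n . l)."""
    n = normal / np.linalg.norm(normal)
    l = light_dir / np.linalg.norm(light_dir)
    return albedo * max(0.0, float(np.dot(n, l)))

# A patch tilted 45 degrees away from an overhead light appears darker.
print(lambertian_intensity(np.array([0.0, 0.0, 1.0]), np.array([0.0, 0.0, 1.0])))  # ~0.8
print(lambertian_intensity(np.array([1.0, 0.0, 1.0]), np.array([0.0, 0.0, 1.0])))  # ~0.57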
2.1.1.3. Shape from silhouette. Shape from silhouette (SFS), also known as visual hull (VH) construction, is a popular 3D model generation method that estimates an object's shape from multiple silhouette images. The algorithm for estimating the bounding volume of a physical object was described in [118]. The approximated shape of the object from only two silhouette images may be very coarse. The shape approximation can be improved greatly by combining multiple silhouette images captured using multiple cameras at different locations or using a single camera at different times [35].

Initially, SFS was used to acquire a 3D model non-invasively with silhouette images that were captured simultaneously or sequentially while the object was static [108]. Later, some researchers worked on recovering shape and motion data from moving objects using SFS [185], and on refining the shape over time with the help of stereo vision [35]. Recently, SFS has been extended to 3D articulated objects; consequently, the possibility of using SFS to track human motion has been investigated and developed [36,148,174]. Obtaining a 2D silhouette is a computationally simple task, so the SFS method has been regarded as an effective method by which to reconstruct 3D objects. Human motion tracking using the SFS technique excels as a marker-less system.
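The following is a minimal voxel-carving sketch of visual hull construction: each candidate voxel is projected into every calibrated silhouette image and kept only if it falls inside all silhouettes. The input conventions (boolean mask arrays, 3x4 projection matrices) are assumptions for illustration.

import numpy as np

def carve_visual_hull(silhouettes, projections, grid_pts):
    """Shape from silhouette by voxel carving (a sketch; inputs are assumptions).

    silhouettes: list of HxW boolean masks, one per calibrated camera
    projections: list of 3x4 camera projection matrices
    grid_pts:    Nx3 array of candidate voxel centers
    Returns a boolean mask of voxels inside the visual hull.
    """
    homog = np.hstack([grid_pts, np.ones((len(grid_pts), 1))])
    inside = np.ones(len(grid_pts), dtype=bool)
    for mask, P in zip(silhouettes, projections):
        uvw = homog @ P.T                      # project voxel centers into the image
        u = (uvw[:, 0] / uvw[:, 2]).astype(int)
        v = (uvw[:, 1] / uvw[:, 2]).astype(int)
        h, w = mask.shape
        visible = (u >= 0) & (u < w) & (v >= 0) & (v < h)
        hit = np.zeros(len(grid_pts), dtype=bool)
        hit[visible] = mask[v[visible], u[visible]]
        inside &= hit                          # carve away voxels outside any silhouette
    return inside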
2.1.2. Range-based techniques
Range-based methods can measure the range of a point on an object based on time delay, triangulation, continuous waves, interferometry, or reflectivity measurement principles [156]. Only the time-of-flight and structured light techniques are discussed in the following subsections.
2.1.2.1. Time of flight. The 3D data acquisition method that uses the time-of-flight principle, as shown in Fig. 5, directly measures the time taken for the light to travel from the transmitter to the object surface and back to a receiver. With the known speed of light and the measured time, the range can be calculated.

Fig. 5. Principle of time of flight.

The common light source used in this method is a laser, which can travel a long distance with good resolution. The projection of the laser used in this method is always a dot. In order to measure the entire surface of the object, the laser has to be moved around during the measurement, which reduces the measurement speed. In 2010, Panasonic Electric Works announced the release of its new time-of-flight 3D image sensor, the D-Imager [145]. Instead of using a laser, the system utilizes near-infrared LEDs, which are safer for the human body. With an array of LEDs, the whole object can be covered without moving the light sensor; therefore, the range between the entire surface and the camera can be obtained in real time. This camera has been used for gesture recognition.
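The underlying range computation is simple; a worked sketch with an assumed round-trip time:

C = 299_792_458.0  # speed of light, m/s

def tof_range(round_trip_seconds):
    """Range from time of flight: light travels to the surface and back, so divide by 2."""
    return C * round_trip_seconds / 2.0

print(tof_range(10e-9))  # a 10 ns round trip corresponds to a range of ~1.5 m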
2.1.2.2. Structured light. Using a structured light based technique to acquire surface data from a 3D object is one of the most reliable approaches to reconstructing 3D models of objects. A structured light based system is, in principle, an active stereo vision system. With a calibrated projector-camera pair, a known pattern is projected onto the object, allowing the correspondence between the images to be identified. The depth information can then be retrieved using the triangulation technique [162]. Structured light can be divided into one- and two-dimensional structured light, as described below.
(i) One-dimensional structured light. One-dimensional structured light is usually a single line of light projected onto a 3D object's surface, as illustrated in Fig. 6, where the depth of any point P on the projection line can be calculated as z = b/tan α (i.e., z = b·tan⁻¹ α), with b the baseline between the light source and the camera and α the projection angle; a numerical sketch of this triangulation is given at the end of this subsection.

Fig. 6. Principle of one-dimensional structured light based measurement.

To obtain information about the entire surface of the 3D object, the laser line has to scan along one axis. Although it provides good measurement accuracy, this method is time-consuming and requires highly specialized and expensive equipment [5].
(ii) Two-dimensional structured light. In order to avoid the time-consuming scanning of one-dimensional structured light, two-dimensional structured light based methods have been developed. Salvi et al. [161] performed an exhaustive analysis of the different coding strategies used in two-dimensional structured light and categorized them as either discrete or continuous, as shown in Fig. 7. The discrete coding methods consist of (i) spatial multiplexing methods, including De Bruijn based techniques, non-formal coding, M-arrays, etc., and (ii) time-multiplexing methods, including temporal binary codes, temporal n-ary codes, shifting codes, etc. The continuous coding methods consist of (i) phase shifting methods, including single-phase shifting methods and multiple-phase shifting methods, (ii) frequency multiplexing methods, including Fourier transform profilometry, wavelet transform profilometry, etc., and (iii) spatial multiplexing grading methods.

Fig. 7. Classification of 2D structured light coding methods.

During implementation, the surface of a 3D target is illuminated by a structured-light projection pattern, and the image captured by the imaging sensor varies accordingly. Based on the distortion of the structured-light pattern seen in the sensed image compared to the undistorted calibrated projection pattern, the geometric shape of the target can be computed. Usually, the source of structured light used in the sensor is a laser that is visible, bright and hazardous to human eyes. Therefore, invisible structured light, such as infrared, has increasingly been used because it can be projected onto the scene without disruption and danger [66,100].
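Referring back to the one-dimensional case, a numerical sketch of the depth-by-triangulation relation z = b/tan α; the baseline and angle are assumed values.

import math

def line_laser_depth(b, alpha_rad):
    """Depth by triangulation for 1D structured light: z = b / tan(alpha).

    b:         baseline between light source and camera [m] (assumed geometry)
    alpha_rad: angle of the observed laser ray [rad]
    """
    return b / math.tan(alpha_rad)

print(line_laser_depth(0.2, math.radians(30)))  # ~0.346 m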
2.1.3. Comparison between different techniques
Each of the digital data acquisition techniques discussed above has its own advantages and drawbacks. A comparison of these various techniques is given in Table 1, which can be used to help choose the right technique in a given situation for acquiring digital data for CAD modeling.

Table 1
Comparison between different data acquisition techniques.

Technique | Pros | Cons
Stereo vision | No need for active lighting; low cost; easy to implement | Correspondence between different views and real-time, high-resolution 3D reconstruction are difficult
Shape from shading | Able to reconstruct a 3D shape from a single image | Algorithms for 3D shape reconstruction in the real world are difficult to implement
Shape from silhouette | Computationally simple; able to reconstruct 3D shapes efficiently | Accuracy is relatively low
Time of flight | Able to provide high accuracy at a reasonable price | Data acquisition time is relatively high
1D structured light | Able to provide high accuracy; no need for complicated correspondence calculation | Laser scanning is time-consuming; equipment is relatively expensive
2D structured light | No need for scanning or complicated correspondence calculation | Less accurate than 1D structured light
2.2. CAD modeling from digital data acquisition
With the 3D digital data acquired from a physical object, the CAD model of the object can be generated by surface reconstruction, the objective of which is to determine the object's surface from a given finite set of points. Usually the acquired data are disorganized and noisy. Furthermore, the surface may not have a certain topological type and could be arbitrary in shape. The steps to generate CAD models from the acquired 3D digital data include the following [156]: (i) pre-processing, in which erroneous data are removed and the noise in the data is smoothed out; (ii) determination of the global topology of the object's surface; (iii) generation of the polygonal surface; and (iv) post-processing, in which edge correction, triangle insertion, hole filling, and polygon editing are used to optimize and smooth the shape.
According to the acquired data type, the surface reconstruction methods can be classified into the following three types:

(i) Unorganized point clouds: The point data are scattered in space without any information aside from their spatial positions. There are no constraints on them, such as object geometry, point adjacency or connectivity, other than that all the points can be expected to lie on a common surface. Mullen et al. [131] and Ye et al. [202] proposed methods that showed promise in terms of their ability to reconstruct surfaces from disorganized, unoriented, noisy data containing outliers (a pipeline sketch for this case follows the list).

(ii) Structured point clouds: The best-known structured point cloud for CAD modeling is obtained by slicing a closed surface with a stack of parallel planes. The result is usually a sequence of closed polygonal loops (more than two loops when there are branches) located in parallel planes. This technique has been researched for quite some time, and a survey of techniques can be found in [130]. More recently, Wang et al. [187] proposed an efficient method based on two-dimensional Delaunay triangulation. Triangulated surfaces are easily managed in STL format and are very common in rapid prototyping and other CAM systems.

(iii) Volume data: Based on the sample points, a spatial triangulation consisting of tetrahedral cells can be generated, and a surface can be constructed by adapting the marching cubes algorithm to the tetrahedral mesh. Some automatic tetrahedral mesh generation methods have been developed [128].
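As an illustration of the unorganized point cloud case, the sketch below runs a typical pipeline (pre-processing, normal estimation, polygonal surface generation) using the open-source Open3D library; the file names and parameter values are assumptions.

import open3d as o3d

# Load an unorganized, possibly noisy point cloud (file name is a placeholder).
pcd = o3d.io.read_point_cloud("scan.ply")
pcd = pcd.voxel_down_sample(voxel_size=0.002)  # pre-processing: thin the data
pcd, _ = pcd.remove_statistical_outlier(nb_neighbors=20, std_ratio=2.0)  # drop outliers

# Poisson reconstruction needs oriented points, so estimate normals first.
pcd.estimate_normals(
    search_param=o3d.geometry.KDTreeSearchParamHybrid(radius=0.01, max_nn=30))

# Generate a triangulated surface; low-density triangles can be trimmed afterwards.
mesh, densities = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(pcd, depth=9)
mesh.compute_vertex_normals()
o3d.io.write_triangle_mesh("scan_mesh.stl", mesh)  # STL is common in RP/CAM systems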
3. Motion capture and multi-modal rendering
Human-computer interfaces play a vital role in a virtual assembly (VA) system by providing the user with different visual, haptic and auditory sensations to increase the degree of immersion in the virtual environment (VE). Fig. 8 shows a typical virtual reality (VR) system configuration with physical input and output devices that transmit information between the user and the VE. These sensing technologies are essential to the realism of the VR system. The key technologies addressed here include motion capture, haptic modeling and rendering, and auditory modeling and rendering.

Fig. 8. VR system configuration with human-computer interface.

3.1. Motion capture
To animate objects represented by CAD models, either key-frame animation or motion capture based animation can be adopted [2]. In key-frame animation, the application developer sets key values for parameters, such as the position and orientation of each object, and saves these values at particular time instants. After the key frames have been set, the 3D animation software interpolates the parameter values between key frames to generate intermediate frames, thus producing smooth object movements in the animation. This technique depends on the person's ability to generate correct and accurate key frames and to create the interpolation between key frames. Generating a realistic animation using this technique is very time-consuming when complex movements are involved. Motion capture based animation employs sensors to measure the positions and orientations of the operators and parts (workpieces, tools, etc.) in physical space and records them as functions of time in a computer. These data are then used to describe the movements of the CAD models to generate the animation. With good motion capture capabilities, the animation can be generated easily and accurately in real time.
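A minimal sketch of the key-frame interpolation step for positions (the times and positions are assumed values); orientations would be handled analogously, typically with quaternion spherical linear interpolation rather than componentwise interpolation.

import numpy as np

def interpolate_position(keyframes, t):
    """Linear key-frame interpolation of an object's position (a sketch).

    keyframes: list of (time, position) pairs, sorted by time
    t:         query time; returns the in-between position
    """
    times = np.array([k[0] for k in keyframes])
    positions = np.array([k[1] for k in keyframes])
    return np.array([np.interp(t, times, positions[:, i]) for i in range(3)])

# Two key frames: the part moves 0.5 m along x between t = 0 s and t = 2 s.
keys = [(0.0, [0.0, 0.0, 0.0]), (2.0, [0.5, 0.0, 0.0])]
print(interpolate_position(keys, 1.0))  # [0.25 0.   0.  ]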
Motion capture and analysis research has been active in recent years. Various optical, acoustic, inertial, magnetic and other sensors, as well as their combinations, have been developed. The applications of motion capture include, among others, surveillance, control and analysis [126]. Surveillance applications include automatic monitoring of public structures such as airports and subways to track the flow of people and detect abnormal activities for security purposes. Control applications refer to manipulating CAD models using estimated motion or pose parameters for human-computer interfaces. Analysis applications are used to analyze particular events, e.g., to diagnose orthopedic patients in the clinic or to optimize the performance of athletes. Motion capture techniques were reviewed and highlighted by Menache [123].

Based on the sensing techniques used, motion capture can be divided into two categories: optical and non-optical. A common feature among the different optical motion capture techniques is that the motion data are calculated from digital images, so these techniques are also called image-based motion capture. The 3D position and orientation of an object can be calculated from the images taken by one or more cameras. Usually, special markers are attached to the object, and the position of each marker is tracked. However, some recent systems can generate data for the position and orientation of a 3D object by identifying the object's surface features without using any markers. The different optical motion capture techniques are discussed next.
3.1.1. Passive markers
A passive marker is usually a plastic ball covered with retro-reflective material; it is attached to the object being tracked and reflects IR light back to the camera. IR LEDs normally are installed in a ring around the camera lens and emit IR light into the tracking area. The marker does not emit any IR light and has no power supply, so it is called a passive marker. The camera used in the system is sensitive mainly to IR light, so the image from the camera contains essentially only the light reflected from the passive markers. This removes most of the noise from the background and makes image processing easier and faster. From the positions of the individual markers, the orientation of an object carrying multiple markers can be estimated. This type of motion capture system also can be made wireless and is easy to implement. It usually consists of multiple IR cameras to avoid occlusion and achieve larger coverage. It generally has better accuracy than using active markers.

Several passive optical motion capture systems are available commercially, such as ART [7], Vicon [183], OptiTrack [141], and IoTracker [93]. Their prices vary depending on the number of cameras, performance, accuracy, etc. The motion capture system that Ford Motor Company has been using to evaluate their assembly lines and product designs is a passive optical system [45]. A similar system was used to evaluate an existing workstation at an automobile assembly plant [48].
3.1.2. Active markers
In an active optical system, an IR LED with its own power supply is used as the marker. The LED emits IR light, which is picked up by the camera. With the LEDs on, the images taken by an active system are no different from the images taken by a passive system, so the same marker identification algorithms can be implemented. The LEDs can be turned on or off as desired. The price of an active optical system is generally lower than that of a passive optical system because the light is emitted from the marker itself and there is no need to add ring lights around the lens. However, the LEDs need power, which is less convenient than using passive markers.

The active optical motion capture technique has been adopted by Nintendo, whose Wiimote controller has an IR camera in the front, and whose sensor bar in the game console consists simply of two IR LED clusters in one line. Zhu et al. [210,212,213] and Chadda et al. [32] developed a low-cost active optical motion capture system with Wiimote and Firefly cameras to automate an assembly simulation for evaluating the ergonomics of a fastening operation. Kirk [101] developed an algorithm to estimate a subject's skeletal structure using the motion capture data from an active optical system.
3.1.3. Marker-less
A marker based motion capture system can offer good measurement accuracy, but it has drawbacks: (i) it is time-consuming to put markers onto the tracked object; and (ii) the attached markers may interfere with the object's normal movement.

Marker-less motion capture is attractive because it is non-invasive. Currently available marker-less human motion tracking systems mainly use the shape from silhouette technique and the structured light technique discussed in Section 2 to construct CAD models for humans and other 3D objects. Human model construction is just one component in a human motion tracking system, which includes initialization, tracking, pose estimation, and recognition [126]. For pose estimation and recognition, a full-body model is needed in the marker-less system, which contains surface morphological information and kinematics information that can be used to describe how the model moves.

Microsoft developed a method that uses a depth sensor to recognize the human pose from a single depth image [18] and introduced the commercial marker-less human body tracking device called Kinect, which performs skeleton-based motion tracking. Some researchers have tried to estimate not only the articulated rigid-body skeleton but also the potential surface deformation caused by tissue or garments [3]. Researchers at the Missouri University of Science and Technology have used Kinects to track the movement of the human body in an assembly operation [43]. The data provided by a Kinect have been used to generate the simulation of an assembly operation for ergonomic analysis. They also have developed a system with multiple Kinects to increase the coverage area of motion capture.
3.1.4. Non-optical techniques
Magnetic motion capture systems can provide an object's position and orientation based on measurements of a magnetic field. MotionStar from Ascension Technology Corp. is a commercial system with good accuracy and update rate [8]. However, it is expensive, requires a substantial power supply, and can be affected easily by metallic objects in the environment.

With the use of gyroscopes and accelerometers, an inertia-based motion capture system can measure the rotation angles of joints. Moven from Xsens Corp. [200] is a portable commercial system based on this principle, but it cannot provide positions directly. Also, measurement drift may accumulate over time.

An acoustic motion capture system computes a marker's position using the time-of-flight technique. The user wears an ultrasonic emitter, and multiple receivers are configured at known locations in the environment. The time differences measured at the different receivers are used to calculate the location of the emitter.

Hybrid systems can be developed to compensate for the individual shortcomings of different types of sensors and consequently improve system performance. For example, optical and inertial sensors have been combined by Ascension Technology Corp. [8] to resolve the occlusion problem in the optical system.
3.1.5. Comparison between different techniques
Each of the techniques discussed above has its own advantages and drawbacks. A comparison of the various techniques is given in Table 2, which is helpful for selecting the right technique for a given application.

Table 2
Comparison between different motion capture techniques.

Technique | Pros | Cons
Optical, passive markers | No cable required for activating the marker; high accuracy | Markers may interfere with human movement; occlusion may occur
Optical, active markers | Less expensive than passive markers; high accuracy | Power supply required for markers; markers may interfere with human movement; occlusion may occur
Optical, marker-less | No markers needed to track objects | Lower accuracy; less reliable
Non-optical, electromagnetic | Able to provide good accuracy | Relatively expensive; more power required; easily affected by metallic objects in the environment
Non-optical, inertial | Easily portable | Substantial measurement drift may accumulate over time
3.2. Haptic modeling and rendering
Haptic feedback can be used to increase the simulation's realism for the user performing a virtual assembly task. For example, Edwards et al. [50] evaluated the use of force feedback to perform an assembly task in an immersive virtual environment. Haptic feedback becomes essential when the simulation is poorly visible or the virtual object is partly or totally occluded [149]. Haptic feedback can be categorized into force feedback and tactile feedback. Force feedback relates to a virtual object's hardness, weight and inertia, while tactile feedback simulates the user's feel of the virtual object's surface geometry, smoothness, slippage, temperature, etc. [28].
3.2.1. Collision detection
A basis for planning and executing a virtual assembly operation is collision detection. The input is a set of objects (i.e., all objects in the scene graph) represented by CAD models, while the output is a set of intersecting or overlapping polygons. The ability of a VR system to simulate realistic object behavior at interactive frame rates is very important; thus, a collision detection algorithm should be able to compute the time and position of a collision quickly, given the positions of moving objects as functions of time [205].

Zachmann [205] classified collision detection algorithms as hierarchical and non-hierarchical. Most hierarchical algorithms start by enclosing objects with bounding boxes and then performing collision detection with a bounding box test [205]. A description of different bounding box algorithms can be found in [29]. Non-hierarchical algorithms use other representations, such as points and voxels, for the objects [205]. Regardless of the algorithm category, edge/polygon and polygon/polygon intersection tests are the basic operations of collision detection algorithms.
To decrease the number of faces that need to be tested in collision detection, hierarchical algorithms use bounding volumes, such as boxes and spheres, to exclude objects that are not interfering. The object is divided into different bounding volumes to allow the interfering regions to be identified quickly [204]. Next, the intersecting volumes are subdivided into smaller volumes for further interference tests. The subdivision continues until a pre-defined volume size is reached.
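The elementary test behind such bounding volume hierarchies can be sketched as follows; axis-aligned boxes are used here for simplicity.

import numpy as np

def aabb_overlap(min_a, max_a, min_b, max_b):
    """Axis-aligned bounding box test: boxes overlap iff they overlap on every axis."""
    return bool(np.all(max_a >= min_b) and np.all(max_b >= min_a))

# Hierarchical use: only if the parent boxes overlap are the children (smaller
# volumes, ultimately individual polygons) tested, pruning most geometry early.
a_min, a_max = np.array([0.0, 0.0, 0.0]), np.array([1.0, 1.0, 1.0])
b_min, b_max = np.array([0.5, 0.5, 0.5]), np.array([2.0, 2.0, 2.0])
print(aabb_overlap(a_min, a_max, b_min, b_max))  # True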
Zachmann [205] developed an algorithm consisting of a pipeline of bounding boxes that checks carefully chosen coordinate frames, combined with sorting of the bounding boxes. By sorting, one can quickly find a range of polygons within a certain region. The sorted list of polygons can be updated quickly between successive frames, because the deformation between frames is usually small. This algorithm can check two spheres with 10,000 polygons each within about 4.5 ms on average.

The consideration of flexible parts in assembly planning remains an important research issue. A first attempt to solve this problem involved integrating hoses and wires mounted at both ends of non-flexible parts [72]. The integration of flexible parts complicates the collision detection problem significantly because recalculations with deformations in the parts may find the initial calculations invalid if they do not consider the deformations [205].
3.2.2. Force computation
Haptic rendering must be conducted at a rate of 1000 Hz or higher in order to be realistic, so the interactive forces between two objects in contact have to be calculated at very high rates. This implies that the force computation must be done with a fairly simple mathematical model (e.g., no finite element modeling) in order to be computationally efficient.
One simple approach for force computation (after a collision has been detected) is to employ a single-point representation of the tool [119,215]. The force can then be computed using Hooke's law:

F = k·d·N   (1)

where k is the object's stiffness, d is the shortest distance from the tool point to the object's surface, and N represents the vector from the tool point to the contact point (i.e., this vector is along the surface normal at the surface's contact point). Besides the force caused by the object's stiffness, friction occurs through the relative motion of two surfaces in contact with one another and can be estimated using the Coulombic friction model:

F = F_C·sgn(v)   (2)

where v is the relative velocity and F_C is the Coulombic force, which equals the normal force multiplied by the coefficient of friction (which depends on the surface properties). More details can be found in [17] about other formulations that improve the Coulombic model by considering more subtle frictional effects. To include the inertial force, a mass-spring-damper model can be used to estimate the contact force as follows:

M·ü_t + D·u̇_t + K(u_t) = f_t   (3)

where M is the mass, D is the damping constant, K(u_t) represents the stiffness force, and u_t is the current contact point. The details of this method can be found in [49].
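A sketch combining Eqs. (1) and (2) for a single contact point; the parameter values in the example are assumptions.

import numpy as np

def contact_force(k, d, normal, mu, f_normal, v_rel):
    """Single-point haptic force: stiffness (Eq. (1)) plus Coulombic friction (Eq. (2))."""
    n = normal / np.linalg.norm(normal)
    spring = k * d * n                            # Eq. (1): F = k*d*N
    tangential = v_rel - np.dot(v_rel, n) * n     # sliding direction on the surface
    speed = np.linalg.norm(tangential)
    friction = np.zeros(3)
    if speed > 1e-9:
        friction = -mu * f_normal * tangential / speed  # Eq. (2): opposes relative motion
    return spring + friction

# Example: 1 mm penetration of a 500 N/m surface while sliding along x.
F = contact_force(k=500.0, d=0.001, normal=np.array([0.0, 0.0, 1.0]),
                  mu=0.3, f_normal=0.5, v_rel=np.array([0.02, 0.0, 0.0]))
print(F)  # approximately [-0.15, 0.0, 0.5]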
The single-point object representation for force computation has the following drawbacks: (i) it does not represent the 3D shape of a virtual tool, and (ii) it models a workpiece with inhomogeneous material as one having the properties of homogeneous material. Single-point force estimation rarely reflects the force magnitude and direction accurately, especially when the tools and/or workpieces are freeform objects. This problem can be overcome by using a multiple-point object representation for collision detection and force computation. In this approach, the workpiece in the virtual environment can be represented by a voxmap, and the tool can be represented by a point shell with a set of surface points and the associated inward-pointing surface normals at these points [121]. When a tool point interpenetrates a workpiece voxel (volumetric element), the interpenetration depth can be calculated as the distance d from the tool point to the tangent plane, which is constructed as a plane passing through the voxel's center and having the same normal as the surface normal at the tool point. The force at that point can be calculated using the distance d and the workpiece stiffness at that point. The net force acting between the tool and the workpiece can then be obtained by summing the vector forces computed at the various tool points from such point-voxel intersections.
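A sketch of the summation step just described, under assumed input conventions; the voxel lookup and the sign convention for the normals are simplifications of the voxmap/point-shell method of [121].

import numpy as np

def pointshell_force(tool_points, tool_normals, voxel_lookup, stiffness):
    """Net voxmap/point-shell contact force (a sketch of the summation step).

    tool_points:  Nx3 point-shell positions in workpiece coordinates
    tool_normals: Nx3 inward-pointing unit normals at those points
    voxel_lookup: function p -> center of the occupied voxel containing p, or None
    stiffness:    workpiece stiffness k used in F = k*d at each contact
    """
    net = np.zeros(3)
    for p, n in zip(tool_points, tool_normals):
        center = voxel_lookup(p)
        if center is None:
            continue                        # this tool point is outside the workpiece
        # Tangent plane through the voxel center with the tool point's normal;
        # d is the interpenetration depth of p relative to that plane.
        d = np.dot(center - p, n)
        if d > 0:
            net += stiffness * d * n        # per-point force along the surface normal
    return net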
3.2.3. Haptic rendering
Haptic rendering includes force rendering and tactile rendering. Sensors and actuators can be combined in a device to measure the tool's contact position with a virtual object and apply force or other haptic displays to the user at the contact position. The selection of haptic rendering hardware for a given application should take into consideration the number of degrees of freedom needed, maximum and sustainable force levels, friction, stiffness, etc. [28].

The Sensable PHANTOM haptic device from Geomagic [71], as shown in Fig. 9, has been used widely as a force feedback device. An advanced version of this device can provide not only force feedback in three translational degrees of freedom but also torque feedback in three rotational degrees of freedom. The force feedback is applied to the whole hand at the point of contact, which is called the haptic interface point. This makes the PHANTOM device suitable for VR applications with point interaction for the whole hand.

Fig. 9. The Sensable PHANTOM device.

If manipulation of virtual objects with haptic feedback provided to individual fingers (not just the whole hand) is of interest, a wearable haptic device such as the CyberGrasp [42] or Rutgers Master II [23] can be used. Shown on the left of Fig. 10, the CyberGrasp consists of a sensory glove (CyberGlove) and an exoskeleton mechanism. It can provide force feedback to each finger and the palm, so it is suitable for more complex manipulations in the VE. The position of each finger is measured by the sensory glove, and this information is used to compute the force to be provided by the exoskeleton mechanism to each finger for interaction with the virtual object. The forces generated by the CyberGrasp are grounded in the palm or the back of the hand; thus, this device can only be used to feel the size and shape of a virtual object, not its weight. CyberForce, shown on the right of Fig. 10, possesses the key features of both the PHANToM and CyberGrasp devices. It can provide a very natural haptic interface in the VE while interacting with a simulated graphical object, i.e., being able to sense the object's shape and size as well as its mass and inertia.

Fig. 10. CyberGrasp (left) and CyberForce (right).

Because all haptic feedback devices incur significant costs and have strict geometry, placement and workspace requirements, which prevent them from being used widely, pseudo-haptic feedback was proposed by Lecuyer et al. [110] to provide haptic illusions using visual feedback in the VE. These researchers conducted experiments to show the feasibility of providing a sense of touch without using complex mechanical devices. Lecuyer [109] surveyed research and applications of pseudo-haptic feedback, including simulations of various haptic properties such as the stiffness of a virtual spring, the texture of an image, and the mass of a virtual object.
3.3. Auditory modeling and rendering
In virtual assembly, audio cues can be used to augment visual and haptic displays. Auditory rendering is especially helpful when haptic feedback is not available. Synthetic sound can be used to approximate the real sound generated by the physical assembly, which makes the simulation more realistic.
Physics-based sound modeling is too computationally expensive for the real-time rendering required for virtual assembly. Spectral modeling can be used instead as the basis for sound synthesis in virtual assembly simulation. Its general form is [169]:

s(t) = Σ_{k=1}^{N} A_k·sin(ω_k·t + θ_k) + r(t)   (4)

where s(t) is the input sound signal; A_k, ω_k, and θ_k are the amplitude, frequency and phase of the kth sinusoid; and r(t) is the residue. The sinusoidal, or deterministic, components in the sound model correspond to the main modes of vibration in the physical system. The residue, which is stochastic in nature, comprises the energy that is not transformed into deterministic vibrations. The output of spectral modeling consists of a set of peak frequencies, magnitudes, and phases corresponding to the sinusoidal components, as well as the residual part of the time-varying signal. After performing the Fast Fourier Transform (FFT) for each windowed portion of a given signal, a series of complex spectra are obtained, from which the magnitude spectra are calculated. After the sound's sinusoids have been obtained, the next step is to obtain the spectrum of the residual part. This can be done by subtracting the sinusoids from the original sound in the time domain and then performing the FFT on the resulting signal using the same window function as that used for the original sound signal.
For auditory rendering, sound synthesis is performed first by transforming the input peak frequencies, magnitudes and phases into time-domain sinusoids and then adding the sinusoids frame by frame. The synthesis of the residual part of the sound takes the envelope of the residue's spectrum and applies the Inverse Fast Fourier Transform (IFFT) with a window function to this spectrum to generate a stochastic signal in the time domain. The sinusoidal and residual parts are then added together frame by frame to create the synthesized sound for auditory rendering, which outputs the result of the synthesis to sound generation hardware, such as a sound card and loudspeaker, so that the user of the VE can hear it [136].
3.4. Multi-modal rendering
A major challenge in developing a multi-modal VR system is the coordination of computations for rendering graphics, haptics and sound, which require very different update rates. Multi-threading can be used to simultaneously satisfy the different update-rate requirements of the various rendering modalities. For example, multi-modal rendering computations can be conducted in the following threads:

(i) Simulation thread: This thread conducts the computations using the collision detection algorithm and modeling methods described above.

(ii) Graphics thread: This thread runs with a 30 Hz timer to fulfill the graphic rendering of geometric models and to update these models during the simulation.

(iii) Haptics thread: This thread runs with a 1 kHz timer to render the contact force using the force modeling and rendering techniques described.

(iv) Sound thread: This thread generates sound data at a rate of 20 kHz using the sound synthesis and rendering techniques described.
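A minimal sketch of such a multi-threaded arrangement; the rendering callbacks are placeholders, and a production system would use real-time scheduling and drift-compensated timers rather than sleep-based loops.

import threading
import time

def run_periodic(task, rate_hz, stop):
    """Run one rendering task at a fixed rate on its own thread."""
    period = 1.0 / rate_hz
    while not stop.is_set():
        task()
        time.sleep(period)  # a real timer would compensate for the task's duration

def render_graphics():  # placeholder for 30 Hz graphic rendering
    pass

def render_haptics():   # placeholder for 1 kHz force rendering
    pass

stop = threading.Event()
threads = [threading.Thread(target=run_periodic, args=(render_graphics, 30, stop)),
           threading.Thread(target=run_periodic, args=(render_haptics, 1000, stop))]
for th in threads:
    th.start()
time.sleep(1.0)  # let the simulation run for one second
stop.set()
for th in threads:
    th.join()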
4. Virtual assembly modeling and simulation
Constraints such as mating and alignment exist in part
assembly processes in the real world. Thus, in virtual assembly,
generating accurate positions and orientations to update virtual
parts is important for generating realistic assembly simulations.
Much research has been conducted to develop effective methods
by which to model part movements in assembly in order to achieve
this realism. The developed methods can be categorized into
constraint-based modeling and physics-based modeling. Before
presenting these two modeling methods, the assembly part model
representation will be discussed because it serves as the basis of
both modeling methods.
In VR, geometric models usually are represented by polygons for the purpose of fast rendering. However, polygonal representations cannot be utilized for virtual assembly simulation, planning and training without additional part information. Therefore, when transforming CAD models into polygon models for a virtual assembly system, all of the geometry, physics, topology and assembly information must be transferred into the virtual assembly system. Geometry data are used for graphic display and collision detection, and topology data are used to build the hierarchical mapping relationships in the model data. Physics data and assembly data are used in the assembly modeling.
The hierarchical data model used to represent parts in the assembly simulation may consist of various levels, including the product level, subassembly level, part level, feature level, surface level, and polygon level [33,198]. For elements in different levels, hierarchical mapping relations are extracted from the topology data. For elements at the same level, there exist constraint relationships from the assembly data. These constraints include external constraints between different parts; external constraints between different features; external constraints between different surfaces due to parallelism, coincidence, perpendicularity, alignment, co-edges, etc.; internal constraints within each feature to define the part's shape and structure; and internal constraints within each surface to define the feature's shape and structure. The internal constraints mainly are used to maintain the objects' inner structures and shapes, while the external constraints are used to define assembly relationships between objects. The lowest level, the polygon level, is used for real-time graphic rendering and collision detection in the assembly simulation modeling process. As an example, Fig. 11 depicts the hierarchical constraint-based data model for a temporary fastener, which can be divided into subassembly A1, subassembly A2, and part 3.

Fig. 11. Example of hierarchical constraint-based modeling.
4.1. Constraint-based modeling
When the user operates objects in the VE and moves related
objects close to each other, the potential geometric constraints can
be captured. The precise position and orientation of each of these
objects can be calculated with a constraint solver, and the
constraint-based motion can be simulated. Constraint-based
modeling can be based on either positional or geometric
constraints [64].
4.1.1. Positional constraints
Positional constraints can be represented by a set of equations, which are then solved using a numeric method [111], a symbolic method [69], or a graph-based method. In the numeric method, an iterative technique such as Newton-Raphson can be used to solve the equations numerically. In the symbolic method, the equations are solved using a symbolic algebraic technique such as Gröbner bases. In the graph-based method, equations and variables first are maintained in an undirected bipartite graph, which then is directed to obtain a sequence of constraint satisfaction steps. The graph-based method lacks efficiency and cannot be used for real-time applications. The symbolic method is too slow for large-scale applications. The numerical method is generally applicable and can process loops, but may not always be stable. These are the trade-offs among the different methods.
The temporary fastener shown in Fig. 12 is provided as an example. Part A and part B of the temporary fastener are to be assembled together. One of the assembly constraints in this situation is co-axiality, which can be used to define the orientation of part B when it moves close to part A, so that part B can only move in a fixed allowable direction.

Fig. 12. Example of modeling based on positional constraints.

Suppose the constraint equation set has n variables and m equations (n ≥ m): F(X) = 0, where F: R^n → R^m. The equation G(X) = F(X) − F(X_0), with G: R^n → R^m, represents the distance from the initial position to the current position, where X_0 is the initial position of the system. The Newton-Raphson method can be applied to find this equation's solution, which includes the final positions o_1 and o_2 and the rotation matrices A_1 and A_2.
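A minimal numerical sketch of solving such a constraint system, here with SciPy's least-squares solver standing in for a hand-written Newton-Raphson iteration; the co-axiality residual equations are illustrative assumptions, not the paper's formulation.

import numpy as np
from scipy.optimize import least_squares

# Unknowns X = origin of part B's axis (3 values) + its axis direction (3 values).
def residuals(X, axis_a_point, axis_a_dir):
    o_b, d_b = X[:3], X[3:]
    r = [np.dot(d_b, d_b) - 1.0]                           # unit axis direction
    r.extend(np.cross(d_b, axis_a_dir))                    # co-axis: directions parallel
    r.extend(np.cross(o_b - axis_a_point, axis_a_dir))     # co-axis: origin lies on A's axis
    return r

x0 = np.array([0.1, -0.05, 0.3, 0.0, 0.1, 1.0])            # pose of B before snapping
sol = least_squares(residuals, x0, args=(np.zeros(3), np.array([0.0, 0.0, 1.0])))
print(sol.x)  # part B's axis aligned with, and positioned on, part A's axis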
4.1.2. Geometric constraints
Instead of translating positional constraints into equations, geometric constraint based modeling employs a set of steps to place geometric elements relative to each other through rigid-body transformations that satisfy a set of constraints describing the relationships between these geometric elements. Examples can be found in [22,62,64]. The basic idea of this method is to find the allowable motions for parts in the assembly. This enables the positioning of a solid model by automatically constraining its 3D movement.
The implementation steps are as follows. First, the assembly
constraints are stored in an assembly relationship graph, which is
an undirected graph. Each node of the graph represents either a
geometric entity (mating part, mating feature, or mating surface)
or a constraint. All of the related geometric entity nodes
are connected to a constraint node that shows the assembly
Fig. 11. Example for hierarchical constraint-based modeling.
Fig. 12. Example for modeling based on positional constraints.
relationships. Second, when the user moves a target model close to
a reference model within some given tolerance, a constraint
recognition process is triggered to automatically recognize
geometric constraints between the models, such as parallel
symmetry, angular symmetry, coincidence, tangency, concentricity,
and cylindrical fit. Third, the target model is transformed
following the allowable motion to satisfy the identified
constraints. Fa et al. [62] divided the allowable motions into
allowable translations (which include translation on a line,
translation on a plane, and translation in 3D space) and allowable
rotations (which include rotation about a point and rotation about a line).
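As a hedged illustration of these steps, the sketch below uses a dictionary as a stand-in for the undirected assembly relationship graph, fires a concentricity recognition test when a moved axis comes within an assumed tolerance of a reference axis, and reduces the subsequent allowable motion to translation on a line; all names and thresholds are assumptions.

```python
import numpy as np

# Undirected assembly relationship graph (dictionary stand-in):
# entity nodes connect through constraint nodes.
graph = {"entities": {"hole_axis", "pin_axis"}, "constraints": []}

def recognize_concentricity(axis_a, axis_b, tol=1e-3):
    """Fire when a moved axis is nearly parallel to, and nearly
    coincident with, a reference axis (assumed criterion)."""
    (pa, da), (pb, db) = axis_a, axis_b   # point on axis, unit direction
    parallel = np.linalg.norm(np.cross(da, db)) < tol
    coincident = np.linalg.norm(np.cross(db, pa - pb)) < tol
    return parallel and coincident

def allowable_translation(p, direction, t):
    """After a co-axis constraint the allowable motion reduces to
    translation on a line (cf. Fa et al. [62])."""
    return np.asarray(p) + t * np.asarray(direction)

# Target axis drifts near the reference axis: the constraint is
# captured and recorded as a constraint node linking both entities.
ref = (np.array([0.0, 0.0, 0.0]), np.array([0.0, 0.0, 1.0]))
tgt = (np.array([1e-4, 0.0, 0.2]), np.array([0.0, 0.0, 1.0]))
if recognize_concentricity(tgt, ref):
    graph["constraints"].append(("concentricity", "pin_axis", "hole_axis"))
```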
4.2. Physics-based modeling
A physics-based modeling method assembles parts together in
a VE by simulating physical interactions between them, which
ensures the correct physical behaviors of the objects. This method
is used primarily in interactive dynamic simulation with human
operators involved. Accurate and fast collision detection is a
prerequisite for this method. Once a collision is detected, a physics-
based algorithm calculates the forces and velocities at the contact
point, and then the motions of the colliding objects can be
simulated. The calculated force may be returned to the operator
through a haptic device; the operator can then decide what to do
next based on the object's properties and the forces and torques
that he/she feels, which helps avoid interpenetration of parts.
The operator's movement is tracked through a hardware interface
and applied to the object that is grasped by the operator
to dynamically simulate the real-world interaction. Physics-based
modeling can be based on penalty force or impulse [170].
4.2.1. Penalty force
The penalty force can be calculated by inserting a very stiff
spring between the points of deepest interpenetration between
two objects when a collision is detected [129]. Then, this force can
be used in the simulation to estimate reactions of objects. The force
can be estimated using Eq. (1). The spring force is applied to both
colliding objects equally but in opposite directions.
Variable elasticity must be considered in estimating the penalty
force. If e = 1, which indicates an elastic (hard) collision, the spring
constant K will be the same whether the objects are approaching or
receding. If e = 0, indicating a completely inelastic (soft) collision,
the spring will act the same as it does in a hard collision when
objects are approaching each other, but the spring constant will
decrease to 0 immediately once objects move away from each
other. If 0 < e < 1, the two spring constants will have the following
relationship: K (receding) = eK (approaching). Using Hooke's law to
calculate the penalty force is intuitive and easy to implement.
However, the very high spring stiffness may lead to stiff equations
that are numerically intractable.
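A minimal sketch of the penalty approach is given below; it encodes only the stiffness-switching rule just described (it is not the paper's Eq. (1)), and the function name and arguments are illustrative.

```python
import numpy as np

def penalty_force(deep_a, deep_b, k, e, approaching):
    """Spring penalty force between the deepest interpenetration
    points of two colliding objects (hedged sketch; not the
    paper's Eq. (1)).  e = 1: hard collision, same K both ways;
    e = 0: K drops to 0 once the objects recede;
    0 < e < 1: K(receding) = e * K(approaching)."""
    k_eff = k if approaching else e * k
    penetration = np.asarray(deep_b) - np.asarray(deep_a)
    f = k_eff * penetration        # Hooke's law on the virtual spring
    return f, -f                   # equal magnitude, opposite directions

# Hypothetical contact: 2 mm penetration along z, stiff spring.
fa, fb = penalty_force([0, 0, 0.002], [0, 0, 0], k=5e4, e=0.5, approaching=True)
```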
4.2.2. Impulse
Impulse-based modeling can be applied to all types of contacts
such as colliding, rolling, sliding, and resting because it simulates
contact as a series of micro-collisions. The impulse from a collision is
calculated to find the absolute velocity of an object at the
contact point. This method is conceptually simpler and more
robust than constraint-based modeling. In addition, it can simulate
the physical behaviors of colliding objects correctly and quickly,
which can be used in real-time simulation. Some examples can be
found in [75,125,197]. Impulse-based modeling is more stable and
robust than the penalty force method. However, it cannot model
stable and simultaneous contacts with static friction very well.
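The following sketch shows the standard frictionless point-contact impulse for two translating bodies; it omits the rotational inertia terms a full impulse-based simulator would include, and its interface is an assumption for illustration.

```python
import numpy as np

def collision_impulse(v_a, v_b, n, m_a, m_b, e):
    """Frictionless point-contact impulse for two translating rigid
    bodies (rotational inertia terms omitted).  n is the unit
    contact normal pointing from body A to body B; e is the
    coefficient of restitution."""
    n = np.asarray(n, dtype=float)
    v_rel = float(np.dot(np.asarray(v_a) - np.asarray(v_b), n))
    if v_rel <= 0.0:                       # bodies already separating
        return np.zeros(3), np.zeros(3)
    j = (1.0 + e) * v_rel / (1.0 / m_a + 1.0 / m_b)   # impulse magnitude
    return -(j / m_a) * n, (j / m_b) * n   # velocity changes of A and B

# Head-on micro-collision of two 1 kg parts closing at 1 m/s:
dva, dvb = collision_impulse([1, 0, 0], [0, 0, 0], [1, 0, 0], 1.0, 1.0, e=1.0)
# dva = [-1, 0, 0], dvb = [1, 0, 0]: the parts exchange velocities.
```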
4.3. Integrated constraint-based and physics-based modeling
Considering the limitations of the constraint-based and physics-
based modeling methods, some researchers have been working on
integrating them. In the integrated approach, physics-based
modeling is used to allow the virtual assembly system user to
manipulate objects and feel the physical interactions between
them. When related objects come together, the operator can decide
whether or not to assemble them. If an assembly order is given,
pre-defined geometric constraints can be retrieved to assemble
the objects.
5. Virtual assembly planning and training
The process of virtual assembly (VA) planning should take
various factors into consideration, including assembly time and
sequence, tooling and fixture requirements, ergonomics, operator
safety, and accessibility. Successful VA planning can reduce the
time and cost of the assembly process and, in addition, increase
production efficiency and product quality. Typical applications of
VA planning include planning and verification of part fitting,
analysis of service possibilities, analysis of different assembly
alternatives, and generation of an optimal assembly sequence. The
basis for the creation of most VA scenarios is the CAD data of the
product and its components. Traditionally, assembly planning is
realized within a CAD program; however, the CAD models are
normally displayed on a 2D computer screen, and physical
prototypes are fabricated for verication purposes. Using the
immersive virtual reality (VR) technology, the CAD data can be
used to create VA scenarios in which users can interactively
generate and train assembly sequences using natural human
motion and realistic 3D product models.
5.1. Data exchange between CAD and VR systems
The main prerequisite for nearly all VR applications is data
extraction from a CAD system. CAD systems are used to model
physical objects, while VR systems use scene graphs to animate the
motion and interaction of CAD models, such as in virtual assembly.
A VR system uses polygonal geometry to represent CAD models in
order to ensure fast frame rates for live interaction with a human
user. Although current commercial CAD systems can convert solid
models into triangular meshes (STL format), importing the
parametric information of the created solid models from a CAD
system into a VR system is a critical issue. The CAD data can be
imported either via a universal interchange format or by
implementing a native CAD system importer. If no CAD model
hierarchy and no kinematic constraints are necessary in the virtual
assembly training (VAT) scenario, the 3D model data can be
imported using a CAD interchange format such as VRML or STEP.
Otherwise, an importer for the native CAD formats of different CAD
systems must be implemented. After the import of the 3D data, the
3D objects have to be placed in the correct position and orientation
within the VE. Also, scaling may be necessary. Additional data, such
as separate instructions that can later be visualized on a 2D
information carrier in the VE, need to be implemented by the
creator of the VA training scenario [134]. Additional constraints
among objects also can be implemented in many of the modelers.
In some cases, additional parameters of the 3D models, such as
weight and surface conditions, may also need to be integrated. For
example, masses could be used to calculate inertial forces for the
haptic devices if used in the VA scenario. Additional sensors, such
as touch sensors, can be used to generate tactile feedback through
vibrations over an interaction device, e.g., a Wiimote [21].
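A minimal sketch of this placement step is shown below, assuming a generic scene-graph node that carries tessellated geometry, a 4 × 4 transform, and a metadata dictionary for parameters such as mass; the class and function names are illustrative, not any particular VR system's API.

```python
from dataclasses import dataclass, field
import numpy as np

@dataclass
class SceneNode:
    """Minimal scene-graph node for an imported CAD part: tessellated
    geometry plus the placement and metadata that must be
    re-established after import (illustrative sketch)."""
    name: str
    vertices: np.ndarray                              # (n, 3) triangle mesh
    transform: np.ndarray = field(default_factory=lambda: np.eye(4))
    metadata: dict = field(default_factory=dict)      # e.g. mass for haptics

def place(node: SceneNode, position, rotation, scale=1.0):
    """Position, orient, and (if necessary) scale the 3D object in the VE."""
    T = np.eye(4)
    T[:3, :3] = scale * np.asarray(rotation)          # 3x3 rotation * scale
    T[:3, 3] = position
    node.transform = T

# A part imported from an STL/VRML file, placed on the virtual bench.
part = SceneNode("housing", vertices=np.zeros((3, 3)), metadata={"mass": 1.2})
place(part, position=[0.5, 0.0, 0.9], rotation=np.eye(3), scale=0.001)  # mm -> m
```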
Depending on the architecture of the VA system, the model data
are drawn either directly from the CAD system or from a Product
Data Management (PDM) system that is connected with the CAD
system. Barbieri et al. [14] implemented the second approach, i.e.,
using a PDM system. Their virtual design data preparation (VDDP)
system realizes automatic data exchange between a PDM system
and a VR system. The VR system is integrated directly with the
PDM system and allows the user to navigate through the product
structure, select entries for conversion, correct geometric conver-
sion errors, and reduce the model's complexity [74]. In addition to
creating the VDDP system, Barbieri et al. [14] described two
different approaches for transferring CAD data directly to a VA
system. In the first approach, the CAD system writes all of the
necessary information in an exchange le, while in the second
approach, a more complex runtime linkage between a CAD system
and a VR system is established. Furthermore, they implemented a
semi-automatic procedure to extract CAD data to a VR system,
getting only those (linear) constraints necessary for the VA
application. Another system accesses the CAD data through the
CAD system's API [112]. Fig. 13 provides an overview of data
exchange between a CAD system and a VR system [72].
Bowland et al. [24], in contrast, integrated the PDM system into
their VA system, using PDM as a data pool to create and store
assembly planning data. The stored data contain information about
components, links, mating features, and files associated with the
configuration (such as a solid representation). This means that
PDM also can be used to manage complexity, as it subdivides the
product into different components. Also integrated into the VA
system are the CAD data and a liaison graph. The CAD integration
allows the visualization of components and congurations; the
addition, removal and viewing of mating features; the viewing of
feature parameters; and the recognition and storage of features.
The liaison graph displays structure definitions and allows
interaction with the underlying object-oriented structure, the
alteration of the assembly structure, and the addition of fasteners
and links.
Wang et al. [190] established a way to integrate CAD and VR
systems by using Autodesk Inventor as the automation server and
Inventor's automation interface for the CAD-VR integration. Two
major features are established in this VA application. The first is
a dynamic transformation mechanism for runtime adaptation to
changes and movements of the assembly structure. The second is a
visibility optimization for the reduction of the overall geometric
complexity. Interactive parametric design modifications from the
VR application are possible [190].
One of the main problems in extracting CAD data for VR use is
the loss of non-geometrical information such as kinematic
constraints and material properties [105], as well as topological
information and assembly structure and semantic data such as
dimensions, names, constraints and physical properties [14].
Hence, manual reprocessing of the CAD data is necessary [132].
Gomes de Sa and Zachmann [72] stated that on average about 70%
of the overall time spent to create a VA application is used to find
and prepare CAD data, and only 30% is used for authoring the VE.
The data reprocessing may include the redefinition of surface
normal orientations, missing geometries, and kinematic constraints,
as well as the deletion of unwanted geometries [14,72,133].
Seth et al. [170] stated that most VR systems use hierarchical
data structures, scene graphs, triangulated mesh geometry, spatial
transformations, lighting, material properties, and other metadata.
Within the necessary tessellation process to create a scene graph
using CAD data, the parametric information of a CAD model, such
as the procedural modeling history, constraints and texture maps,
is usually not exported into the VR application. Yun et al. [204]
developed an algorithm for reconstructing semantic information
exported from CAD software, as well as for automatic polygoniza-
tion. They established a data center that handles data exchange
between CAD and VR systems.
5.2. Virtual assembly planning
Virtual assembly planning (VAP) systems can target different
applications, some of which focus on the product design, others on
process planning, and some on virtual prototyping. The first
category of VAP systems, called the Design for Assembly (DFA)
systems, focuses on the optimal assembly structure, including the
recognition and elimination of unsuitable and infeasible features.
The second category of VAP systems analyzes the assembly
process, focusing on the tools, fixtures, and procedures. The last
category deals with the effects of force and deformation during the
assembly process of the components. Methods for shape precision
analysis and tolerance optimization have also been researched
[105].
Over the past 20 years, many researchers have developed
different VAP systems that have certain capabilities and char-
acteristics in common. They intend to describe each assembly task,
express time sequences between different assembly tasks, and
group different assembly tasks [204]. To fulfill these tasks, Lang
et al. [105] listed three environments and two interfaces that an
ideal VAP system should contain. First, it needs a CAD interface and
a user interface: the former to realize data exchange between the
CAD system and the VA system, and the latter to allow the user to
interact with the VA system. This includes interactive assembly
planning, performance evaluation, and documentation exchange.
In addition, the VA system should contain three environments: (i)
CAD modeling, (ii) VAP, and (iii) assembly documentation
generation and web training. The first environment realizes the
positioning of the parts in combination with their mating
relationships. The VAP environment, which is the most complex
of the three environments, contains methods for the efficient
recognition and management of geometric constraints, the
optimization of the assembly sequence and planned paths, the
selection of necessary tools and xtures, and the analysis and
evaluation of the generated assembly design. In addition, it
contains methods for the determination of key points, the
estimation of time and costs, and the computation of the assembly
planning scheme. The assembly documentation generation and
web training environment contains methods for the generation of
the assembly documentation, assembly operation visualization,
and web-based training to realize the assembly training applica-
tions.
Depending on the target applications, researchers have focused
on certain aspects and therefore have used specific methods when
developing VAP systems. The main point of Virtual Assembly
Process Planning (VAPP), developed by Yun et al. [204], is the
hierarchical assembly task list (HATL) that uses geometrical
positioning to automatically divide assembly tasks into hierarch-
ical groups according to existing subassembly tasks [112]. HATL
also logs the time and records the path of the planned assembly
sequence. In addition, the grouping and encapsulation of the
assembly tasks simplifies the replanning process.
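One possible in-memory structure for such a hierarchical task list is sketched below; the class design, the logging fields, and the task names (borrowed from the temporary fastener example of Fig. 11) are assumptions for illustration, not the HATL implementation of [204].

```python
class AssemblyTask:
    """Node of a hierarchical assembly task list: tasks group into
    subassembly tasks; each task can log its planned path and the
    time taken (illustrative sketch, not the HATL of [204])."""
    def __init__(self, name, subtasks=()):
        self.name = name
        self.subtasks = list(subtasks)
        self.path = []            # logged waypoints of the planned motion
        self.seconds = None       # logged duration of the planning step

    def log(self, waypoints, seconds):
        self.path = list(waypoints)
        self.seconds = seconds

# Grouping and encapsulation simplify replanning: swap one subtree,
# leave the rest of the plan untouched.
plan = AssemblyTask("fastener", [
    AssemblyTask("subassembly A1", [AssemblyTask("insert pin")]),
    AssemblyTask("subassembly A2"),
    AssemblyTask("attach part 3"),
])
plan.subtasks[1] = AssemblyTask("subassembly A2 (revised)")
```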
Bullinger et al. [27] focused on the integration of ergonomic
analysis and therefore integrated Virtual ANTHROPOS in their VAP
system. Bowland et al. [24] developed a computer-aided assembly
process planning (CAAPP) system in which an integrated
manufacturing assembly process sequencer (MAPS) system
creates assembly plans using the component global freedom that
checks every component and in turn determines whether any
blocking components lie in the possible assembly path. Jayaram's
VADE system [95], regarded as the first VAP system [105], used VR
to design assembly tasks. Case studies in industry have proven the
usefulness of this system [94].
Fig. 13. Data flow between CAD and VR systems.
Within the product lifecycle oriented virtual assembly technol-
ogy architecture (PLO-VATA), VAP has been adopted as an efficient,
intuitive, and convenient method for assembly process modeling,
simulation, and analysis [113]. This VAP system consists of four
basic elements: principles and methodology of DFA, assembly
analysis and evaluation, virtual assembly model, and virtual
assembly toolkits.
Barbieri et al. [14] described the integration of cabling into a
VAP system. They developed a method that allows cables to be
reproduced within the CAD system. They discussed the handling of
screws and bolts, which requires a space reservation analysis to
verify whether enough space is available to use tools such as screw
drivers or wrenches. The tool motion must be constrained to keep
the tool axis aligned with the screw axis and to keep the tool tip
coincident with the head of the screw. In addition, they defined
some disassembly rules during the preparation of the VE, which
allow screwed connections to be disassembled when collisions are
detected between screws and nuts.
5.3. Virtual assembly training
Virtual assembly training (VAT) varies from a one-person
application for small parts to collaborative training scenarios in
different VEs for complex assembly scenarios. Before using a VAT
application, the VAT scenario must be created. Most of the VAT
systems have separate software tools to build scenarios, which are
known as scenario modelers or authoring systems. To achieve a
good training result, the trainee needs support from a supervisor.
The supervisor can participate in the VA scenario by monitoring
the actions of the user and assisting him/her, enhancing the
trainee's understanding of the assembly or disassembly process
[25]. If the supervisor cannot take part in the VA scenario, the
trainee must be able to log his/her own interactions. This logging
feature enables the supervisor to evaluate the success of the VAT
later, which may be even more effective than real-time evaluation
because the recorded trainee's actions can be fast-forwarded or
rewound. The logging clarifies the cognitive insight of the human
operator [157]. In addition, the
assembly training session can be evaluated easily by the trainer
and the trainee together.
Gomes de Sa and Zachmann [72] created a three-layer
framework called SCENARIOS, which includes a scene graph layer
(CAD interface), a scripting layer (general user interface, story-
board driven) and an application layer (graphical user interface for
each specic application). The system allows both path recording
and editing in the VE. The recorded data are stored in the
integrated PDM system. In addition, the trainee has the oppor-
tunity to place markers to highlight problems occurring during the
training session.
Brough et al. [25] created a system called the Virtual Training
Studio (VTS), which enables the creation of virtual assembly
scenarios using tools called the Virtual Author and the Virtual
Mentor. The Virtual Mentor allows the classical master-apprentice
training model to be simulated by monitoring the actions of the
trainee in the VE and assisting at appropriate times. The system
contains no haptic feedback. A virtual ray is used for interaction,
which can be controlled by a joystick. Jayaram et al. [94] used a
scalable human model for a piston assembly test case, driven by
inverse kinematics through the use of six tracking sensors.
A major advantage of VAT applications is the ability to interact
with the assembly scenario intuitively in real time. In general, the
interaction with the VA system can be accomplished with a VR
joystick, a data glove, or a 3D mouse. To enhance the whole
scenario and make it more realistic, haptic devices such as the
PHANToM, exoskeleton arms, or motion capture suits can be
integrated. Ritchie et al. [157] developed a Haptic Assembly,
Manufacturing and Machining System (HAMMS) to investigate and
measure user interaction and response while performing various
VA tasks. They found that haptic feedback is critical for a successful
VA training application.
Bordegoni et al. [21] combined a haptic PHANToM arm and a
Wiimote to simulate user interaction with both hands. The haptic
device provides force feedback, and the Wiimote provides tactile
feedback through vibration. Froehlich et al. [67] developed a
Responsive Workbench that contains a spring-based interaction
concept. This concept allows multi-hand and multi-user interac-
tion. Garbaya and Zaldivar [70] implemented a similar concept,
also using a spring-damper model for interaction purposes. In
addition, they included a representation of the parts' dynamic
behavior.
Gomes de Sa and Zachmann [72] integrated a combination of
different input and output mechanisms into their VA system. Three
different feedback sources were used: acoustic, tactile (using
Cybertouch), and visual (highlighting of parts). They established
3D menus to support the user during task performance. In addition,
a clipping plane and measurement tools can be attached to the
virtual hand to improve the capabilities for investigating assembly
components. For interaction purposes, combining a data glove and
a speech recognition device proved most successful. Nevertheless,
they experienced some difculties with both input systems.
Regarding the speech input, occasional users often did not
remember the commands, and the speech recognition algorithm
was error-prone and therefore irritating to the established users.
Users did not rate the Cybertouch data glove with finger vibrators
very highly; instead, they preferred real force feedback. Zachmann
[205] noted that, especially for the force feedback rendering
application, very fast collision detection is needed because the
haptic rendering loop must run at least at 1000 Hz. With no force
feedback mechanisms established, Gomes de Sa and Zachmann
[72] used a snapping paradigm, which snaps parts together when
they are sufficiently close; the parts can be released by the user.
Before snapping, the parts follow the hand as long as they do not
collide with other parts, and the parts can glide along other parts.
This snapping paradigm can also be used for virtual tools during
their utilization in the assembly process.
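The core of the snapping paradigm fits in a few lines, as in the hedged sketch below; the tolerance value and function name are assumptions, and gliding along contacts is simplified to keeping the last valid pose.

```python
import numpy as np

SNAP_TOL = 0.005   # assumed snapping tolerance in metres

def update_grabbed_part(part_pos, hand_pos, target_pos, collides):
    """Snapping sketch: the grabbed part follows the hand while it is
    collision-free and snaps to its mating position once sufficiently
    close; a real system would also let the part glide along contacts
    and let the user release (un-snap) it."""
    hand, target = np.asarray(hand_pos), np.asarray(target_pos)
    if np.linalg.norm(hand - target) < SNAP_TOL:
        return target, True                 # snapped into place
    if not collides:
        return hand, False                  # follow the hand
    return np.asarray(part_pos), False      # blocked: keep last valid pose
```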
Bullinger et al. [27] used a combination of a head-mounted
display (HMD), a data glove, and a full-body electromagnetic
tracking device for user interaction. Kopp and Wachsmuth [103]
developed the CODY system, which combines VR technology and
artificial intelligence technology. Howard and Vance [85] estab-
lished an affordable desktop-based VA system. Chryssolouris et al.
[38] developed the VIRTUE (virtual reality environment for the
simulation of critical industrial processes involving human
intervention) system with spatially tracked user interaction,
allowing a realistic representation of human performance in VA
environments. The behavioral model proposed by Ikonomov and
Milkova [91] considered the assembly hierarchy and contained
constraint-based part movements. The interaction was realized
using an HMD and a data glove.
5.4. Cooperative virtual assembly planning and training
Some of the developed algorithms and software for VAP and
VAT are for a web platform. Those cooperative assembly planning
technologies allow cooperation and communication between
people from different departments or companies across geogra-
phical and temporal boundaries. Through the Internet, users of the
VA system from all over the world can design, analyze, plan, and
validate product performance cooperatively. Barbieri et al. [14]
proposed a web-based VE that allows designers from different
places and with different software platforms to cooperate in the
same assembly scenario and to complete the assembly task
synchronously. The system was based upon the key technologies of
multi-modal interaction and task processing [201]. The system
developed by Liu et al. [112] allowed transmission of assembly
relationships on the web using VRML. A combination of a 2D
planning system and a 3D visualization tool was implemented by
Neugebauer et al. [134]. Iglesias et al. [89] focused on a
collaborative assembly training application. The major problem
they experienced was synchronization among different users.
6. Virtual assembly using augmented reality technologies
6.1. AR assembly applications
Augmented reality (AR) technologies allow the user to see the
real environment with virtual objects superimposed upon the real
world. Over the past two decades, AR has become more popular in
industrial assembly applications. Specifically, augmented assem-
bly (AA) refers to the application of AR in assembly; real objects
(e.g., physical prototypes, tools, robots, etc.) are mixed with virtual
objects (e.g., virtual prototypes, information, tools, etc.) to create
an augmented environment (AE) for the user so as to enhance the
assembly design and planning process. An AE for product assembly
design combines physical parts, real feedback, and virtual contents
to analyze the behaviors and properties of the product assembly,
thus combining the benets of physical prototyping and virtual
prototyping.
AR has been applied to manual assembly station planning [154],
product assembly guidance [159,203,209], assembly workplace
design [139], assembly constraint analysis [140], digital virtual
prototype augmentation with physical products [78], data glove-
based virtual assembly [182], physical manual replacement with
augmented virtual contents [195], and humanrobot interaction
design for safety and efficiency [135,138]. Many of the research
efforts focused on AR-assisted assembly training and guidance.
6.2. AR-aided product and workplace design and planning
Manual assembly design and planning is a complex and time-
consuming process as technical, economic and human factors have
to be considered simultaneously. The two main assembly issues
are Product Design and Planning (PDP) and Workplace Design and
Planning (WDP). PDP aims to simplify the assembly process,
making it more efficient and reliable and less costly through Design
for Assembly (DFA) techniques [20] and assembly sequence
analysis. WDP involves workplace design, postural concerns, and
workplace layout analysis [9]. An improved workplace design
enables the assembly operators to work safely, thus reducing
hazardous and strenuous reaches and preventing potential serious
bodily injuries. Both PDP and WDP issues can greatly affect
assembly efficiency and operator comfort during manual assembly
operations.
Many CAD and computer-aided planning (CAP) systems have
been developed to support PDP and WDP processes. In these
systems, information primarily flows between these processes
unidirectionally, from the PDP process to the WDP process (Fig. 14).
Information from the assembly workplace, e.g., the position and
orientation of parts, spatial constraints, the actual viewpoint of an
operator, etc., which affects the PDP process, is not fed back to the
PDP process. This lack of WDP information in the product
assembly design stage typically leads to problems in the assembly
design and plan, which usually remain undetected until the
assembly operations are evaluated in a real assembly workplace
using physical prototypes, leading to costly and time-consuming
redesign processes.
AR can be applied in assembly to integrate the PDP and WDP
activities in order to improve the efficiency and quality of assembly
design and planning. Through AA, designers and engineers can
design and plan a product assembly and its assembly sequence by
manipulating virtual prototypes in a real assembly workplace. In
an AE, WDP information can be fed back to the designers and
engineers in real time to aid them in making better decisions in
assembly design and planning.
Ong et al. [139] have developed an AR-based assembly system
to provide a highly immersive and intuitive environment (Fig. 15)
that allows engineers to design and plan assemblies with sufficient
information about the assembly environment during the early
design stage. AR techniques were applied to enhance users'
perceptions of the surrounding world through the mixing of real
objects with virtual objects to create an AR environment. With this
environment, engineers can manipulate and evaluate the virtual
prototypes of new product designs in the real assembly environ-
ment and make design changes to enhance the assembly process. A
computer vision based tracking and registration technique [146] is
used to render virtual prototypes in the real assembly environment
(Fig. 16). Hierarchical feature based models [146,194] are used to
model the assemblies. Using this AR system, engineers can design
and plan the assemblies by manipulating virtual prototypes on a
real workstation to identify the drawbacks of an assembly. When
an assembly design is changed, only the related feature models
need to be updated rather than the entire product model. This
offers computational simplicity and is important for the real-time
Fig. 14. Integrating PDP and WDP processes [139].
Fig. 15. An AR assembly environment [139].
Fig. 16. Architecture of AR assembly system [139].
requirements of the AR environment. A CAD system is integrated
with this AR system to make use of the geometry modeling kernel
to model the assemblies. Design data can be exchanged between
the AR and CAD environments through the application program-
ming interface of the CAD system. The CAD system also can
interface with other computer-aided tools that support other
production activities. Thus, this system improves the assembly
design and planning and reduces re-designing and re-planning
activities.
6.3. AR-based assembly planning and verication
A typical assembly process involves grouping individual parts
together to form an assembly, which may be part of a larger
assembly. Accurate and efficient assembly of the final product is
crucial for successful product development. Many assembly
operations have been automated with the development of
advanced technologies and machinery. However, a significant
number of assembly operations still require manual assembly by
human operators. The assembly information used to guide the
human operators in these operations often is detached from the
equipment. This results in the operators having to alternate their
attention between the actual assembly operation and the assembly
instructions, which may be available as paper manuals or soft
copies on external computers or websites. This divergence of
attention is time-consuming and increases the cognitive load on
the operators, especially when the instructions are not conve-
niently placed relative to the operators. This can result in worker
fatigue, which may reduce productivity, increase errors and
assembly time, and contribute to repetitive-motion and strain-related
injuries. AR can be applied to provide useful, relevant assembly
instructions in the real environment in the operator's field of view
so that he/she does not need to exert additional body movements
to retrieve instructions. This will save time and allow vital
information that supports the operator's assembly tasks to be
retrieved and delivered conveniently so as to allow the operators to
concentrate on the task at hand without having to physically move
(e.g., change head or body position) to retrieve the next set of
assembly instructions.
Boeing has demonstrated an AR-based system to aid aircraft
assembly workers [31,173]. Raghavan et al. [151] and Molineros
[127] addressed some AR-related issues in the assembly domain in
which a multimedia augmentation guides human operators in
assembling an industrial object. Zauner et al. [206] developed a
mixed-reality based step-by-step furniture assembly system.
Reiners et al. [153] described a practical and realistic AR
demonstrator that teaches users to assemble the door-lock
mechanism in a car door. Reinhard and Patron [154] developed
a modular AR system for guiding manual assembly in assembly
planning. An AR-integrated environment based on CAD assembly
software and a wearable computer system for interactive
validation of assembly sequences has been developed by Liverani
et al. [114]. Day et al. [44] proposed a wearable AR system for
enhancing information delivery in high-precision defense assem-
bly, which demonstrated that the AR-based method reduces
disruption to the operators and increases their mobility, although
there are some latencies and errors.
The effectiveness of AR-assisted assembly methods for handling
assembly tasks has been investigated, and Boeing has demon-
strated their effectiveness [31]. Several other researchers
have investigated the effectiveness of AR displays over paper
manuals in aiding operators during manual assembly tasks
[12,41,114,177,195]. The results showed that the AR-assisted
conditions were more effective than the paper-based instructions.
In addition, operators made fewer errors under the AR-assisted
conditions than when using paper-based instructional media.
Moreover, AR proved to be more suitable for difficult tasks, though
for easier tasks the two conditions did not differ signicantly.
Studies conducted by Schlick et al. [166] and Odenthal et al. [137]
support these findings. To detect assembly errors in small
workpieces, laboratory experiments were conducted in collabora-
tion with BMW, VW, and Ford to compare a table mounted display
(TMD) to a see-through HMD and to investigate different
variations of presenting assembly information in the field of view.
The results showed that using HMD instead of TMD increases the
accuracy of assembly error detection significantly, but with a
longer detection time. Yuan et al. [203] proposed an AR-assisted
assembly system that incorporates Virtual Interaction Panels
(VirIPs) to directly acquire a relevant understanding of the
surrounding assembly scene from the human assembler's per-
spective. The main characteristic of this AR system is the novel and
intuitive way in which an assembly operator can step through a
pre-defined assembly plan (assembly sequence) easily without the
need for sensor schemes and markers attached to the assembly
components. Their approach uses a visual assembly tree structure
(VATS) to manage the assembly information and retrieve the
relevant instructions for the assembly operators in the AR
environment. VATS is a hierarchical tree structure that can be
maintained easily via a visual interface. It can be integrated
directly into an AR system or reside on a remote computer as a
central control station to control the assembly information data
flow during the entire assembly process. Based on the operator's
experience, different assembly information can be retrieved to
guide the assembly operation. Image-based instructions indicating
the assembly operations can be stored in the assembly instruction
database. At the same time, other means, such as video clips and
graphical primitives in the form of short labels, text, and arrows
that help the operator understand how to execute the assembly
operations, also can be stored. Fig. 17 shows two video captures in
the monitor-based display of an AR system guiding an operator in a
computer assembly.
6.4. AR-assisted assembly training and guidance
An AR assembly interface can provide assembly training
instructions and guidance to operators in real time in the actual
workspace where the assembly operations are performed without
the operators having to alternate their attention between the
assembly workspace and the instructions available on computers
or paper manuals. Thus, AR in assembly guidance can help improve
efficiency and lower overhead for each product.
An AR-assisted system in the aerospace industry was developed
to improve workers' performance of manufacturing activities
through the use of head-mounted display technology [31]. Haniff
et al. [79] used the AR technology for assembly training. A general
procedure for AR-assisted assembly training was developed [37]
for training operators in the assembly of a planetary gearbox with
the help of a hand-held device and using a variant approach with
feedback sensors in the work environment [92]. Traditional
assembly support media have been compared against AR-assisted
assembly, and AR support proved to be more suitable for difficult
assembly tasks [195].
The main constraint in using AR for assembly guidance and
training is the need to determine when, what, and where to display
the virtual information in the augmented world, which requires at
least a partial understanding of the assembly workspace. This
understanding requires sufficient sensor modality and interpretation
to communicate to the AR system the relevant changes in the
state of the assembly and the surrounding world
Fig. 17. Monitor-based computer assembly [203].
with which the assembly operators interact. Most reported
research efforts have focused on the development of mechanisms
that render assembly instructions to facilitate the fast accom-
plishment of assembly sequences or assembly skill transfer. Few
researchers have discussed the development of natural, interactive
mechanisms between the assembly operators, the assembly
components, and the instructions being rendered. To provide
timely and relevant assembly guidance information, assembly
information has to be rendered in accordance with the assembly
procedures, i.e., the AR system would need to know when and
where to render what information so that the guidance can be
correct and as intuitive as possible. Hence, there are two research
issues to be addressed, namely, (i) the recognition and tracking of
assembly components and (ii) the interaction among the
operators, the components, and the AR system.
Many reported AR-assisted assembly guidance systems rely on
ARToolKit markers, which can be attached, or even stamped, on
the assembly components or the component containers
[77,114,160,206] to achieve assembly component recognition
and tracking. Assembly components were stamped with markers
in the research conducted by Liverani et al. [114]. ARTag markers
were employed by Hakkarainen et al. [77] and Salonen et al. [160]
to achieve an assembly platform. Hakkarainen et al. [77] studied
the possibility of using mobile phones for an AR-assisted assembly
guidance system. Research has shown that the tracking and
rendering performance provided by ARToolKit can meet the
application requirements quite well. However, two problems are
associated with the use of square markers. Firstly, markers
attached to components can be occluded easily as the assembly
proceeds. Secondly, because ARToolKit markers must be planar
and relatively large in order to be recognized robustly using
computer vision techniques, these markers cannot be attached
onto small components or non-planar surfaces. In the research
reported by Zauner et al. [206] regarding furniture assembly, the
assembly positions of some small components, such as screws,
which are too small for markers to be attached to, were estimated
based on the nearby markers attached to flat surfaces so that the
assembly information could be rendered at these estimated
positions.
Research has been conducted on the interaction between the
operators, the components, and the AR-assisted systems during
assembly processes. This interaction can be facilitated by 2D
information rendering. However, 3D dynamic information render-
ing, such as of 3D CAD models, in the assembly workspace would
enhance the interaction as it can provide the operators with the
correct orientation and assembly directions of the components.
The assembly sequence may vary depending on the next
component or sub-assembly to be assembled in hierarchical
assembly sequences. In this case, a permutation of all possible
assembly steps and models must be generated and stored as data
files so they can be rendered according to the operator's activity. A
number of technologies, e.g., sensor technology including RFID,
inertial sensors, computer vision, infrared-enhanced computer
vision, etc., have been developed to provide just-in-time informa-
tion rendering and intuitive information navigation during the
assembly processes, which facilitate the implementation of such
augmented assembly environments so as to enhance the operator's
perception and experience.
Zhang et al. [209] implemented a model-based object tracking
approach based on the 3D-to-2D point matching method to
facilitate 3D information rendering. RFID technology and infrared
(IR)-enhanced computer vision-based technology have been
applied together to identify the assembly activities and retrieve
or generate 3D and 2D point sets. Their system consists of three
wearable modules (a camera, an HMD device, and an assembly
activity detector (AAD), as shown in Fig. 18a), and a computing unit
(Fig. 18b). Using a see-through HMD in the system enhances the
immersive feeling of the operator while simultaneously freeing his
hands for assembly activities. An AAD attached to the operator
detects any assembly movements of interest. These movements
include assembly-oriented movements (reaching for assembly
components) and hand movements pertaining to information
referencing and navigation, which are detected using the RFID
technology and the movement sensing technology, respectively.
The AAD facilitates information navigation as it helps the system to
recognize the assembly component that is being handled or to
identify the operator's intention of referencing certain instructions
directly from his hand movements. In this way, the operator can
proceed with the assembly operations without needing to use any
apparatus to interact with the information. Wireless data
communication technology is applied in the AAD so that the
operator's hand movements will not be hindered by the cables. The
system can highlight the most relevant information requested by
the operator, and that information can be rendered properly onto
the assembly scene to assist the operator in running through the
entire assembly sequence. Fig. 19 shows the scenes captured
during this case study in the order of the assembly steps.
6.5. Barehand interactions in augmented assembly
Ong and Wang [140] developed a 3D, natural, bare-hand
interaction (3DNBHI) method to achieve a dual-handed AA
interface that allows users to manipulate and orient parts, tools,
and subassemblies simultaneously. This allows close replication of
real-world interactions in an AE, as if the user is assembling the
real product, making the AA process more realistic and almost
identical to the real process. In the 3DNBHI method, the user's bare
hands are tracked to extract the hand contours, determine the
palm centers, and detect the fingertips [189]. The tips of the
thumbs and index fingers of both hands are differentiated
automatically and used to achieve interactions between the
fingers and the virtual objects. The 3DNBHI method can determine
the two hands in the camera's view and differentiate them. After
they have been differentiated, the hand centers are tracked using a
matching algorithm that minimizes the displacement of the pair of
hand centers over two successive frames so that these two hands
can always be differentiated from the live video stream. To achieve
interactions between the bare hands and virtual objects, a small
virtual sphere is rendered on each fingertip. A collision detection
algorithm is used to detect collisions between these spheres and
the virtual objects. When a virtual object is manipulated by a user,
the virtual sphere on each fingertip is highlighted.
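Two ingredients of this interaction scheme, frame-to-frame hand matching and fingertip-sphere collision, can be sketched as follows; the sphere radius, the point-sampled collision test, and the function names are illustrative assumptions.

```python
import numpy as np

def match_hands(prev_pair, new_pair):
    """Keep the left/right labels by minimizing the total displacement
    of the pair of hand centers over two successive frames."""
    (p, q), (a, b) = np.asarray(prev_pair), np.asarray(new_pair)
    keep = np.linalg.norm(a - p) + np.linalg.norm(b - q)
    swap = np.linalg.norm(a - q) + np.linalg.norm(b - p)
    return (a, b) if keep <= swap else (b, a)

def fingertip_touches(tip, surface_points, radius=0.008):
    """Collision test between the small virtual sphere on a fingertip
    and a virtual object, here approximated by sampled surface
    points (a real system uses a proper mesh collision test)."""
    d = np.linalg.norm(np.asarray(surface_points) - np.asarray(tip), axis=1)
    return bool(d.min() < radius)
```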
Fig. 18. Architecture of the assembly guidance system [209].
Fig. 19. Case study of a computer mouse assembly [209].
7. Design, planning, evaluation, and testing of assembly
systems
7.1. Simulation of product assembly
Products generally are produced from a variety of individual
parts, modules, and sub-assemblies using various methods and
assembly processes. The assembly process consists of several tasks,
each requiring specic tools and/or machines that must be
completed in a particular order according to the precedence
constraints. Feasible assembly plans and sequences should be
generated, and the necessary assembly tools and equipment
should be selected. The determined assembly tasks and equipment
affect the assembly system's design and performance, which also
should be modeled, analyzed, and optimized. The complexity of
assembled products greatly influences the complexity of the
assembly system, and they both require management. These goals
are accomplished using data from CAD models and product
assembly features, relations, and the functional requirements to be
achieved. Such relationships and data should follow certain
constraints and precedence rules through optimization, sequence
generation, and simulation. An assembly simulator can be used to
model the process, the assembly system, and interactions with
human operators, as well as to examine their performance. A
product assembly simulator, therefore, should consider these
various requirements, governing constraints, and relationships, as
well as available assembly tools and equipment, in order to be
effective and useful in making decisions about the assembly
processes and systems.
Although simulation is a good tool to support decision making,
it can lose its effectiveness when many decision alternatives exist,
especially in the presence of increased product variety; under
these conditions, simulation becomes a time-consuming and
impractical exercise. Many researchers have proposed the addition
of intelligent support systems, such as expert systems, to
simulators [168]. Fig. 20 illustrates the inputs, outputs, mechan-
isms, and controls that govern product simulators using an IDEF0
diagram.
7.2. Assembly representation
Assembly representation is needed to capture the relationships
between parts and sub-assemblies in an assembly. Several
methods, such as the bill-of-material (BOM), liaison graph,
adjacency matrix, AND/OR graph, and precedence graph, are used
to represent assembly tasks. A BOM has a hierarchical tree/graph or
tabular structure. It lists all parts, sub-assemblies, and materials, as
well as other information such as quantities, costs, and manu-
facturing/assembly methods. The BOM traditionally has been used
in industry for design, manufacturing, and purchasing. A liaison
graph is a graphical network wherein nodes represent parts and
arcs between nodes represent pre-defined relations, such as the
joining of or physical contact between parts. All assembly steps
are characterized by the establishment of one or more assembly
liaisons. The liaison graph also has been used to generate assembly
sequences. The adjacency matrix is a planning tool that shows
adjacency between graph nodes. Given a graph with n nodes, the
adjacency matrix has an entry of 1 where adjacency exists and 0 otherwise.
An AND/OR graph is a representation method in which the nodes of
the graph represent states, and their successors are labeled as
either AND or OR branches. The AND successors are sub-goals that
all must be achieved to satisfy the parent goal, while OR branches
indicate alternative sub-goals, any one of which could satisfy the
parent goal [122]. A precedence graph represents the different
possible sequences of tasks that have to be accomplished before
other tasks [10,86,194]. Appropriate assembly modeling is a key
factor in generating assembly sequences [96,199].
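As a small worked example of these representations, a liaison graph given as an edge list maps directly to its adjacency matrix; the four-part assembly below is hypothetical.

```python
import numpy as np

# Hypothetical liaison graph of a four-part assembly as an edge list.
parts = ["base", "shaft", "gear", "cover"]
liaisons = [("base", "shaft"), ("shaft", "gear"), ("base", "cover")]

# Adjacency matrix: entry 1 where two parts share a liaison, else 0.
idx = {p: i for i, p in enumerate(parts)}
A = np.zeros((len(parts), len(parts)), dtype=int)
for u, v in liaisons:
    A[idx[u], idx[v]] = A[idx[v], idx[u]] = 1
print(A)   # symmetric 4x4 matrix of 0/1 entries
```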
7.3. Assembly planning
CAD models, BOM, and relational data such as liaison diagrams
and precedence graphs are needed in order to analyze products
and begin assembly planning early in the product design and
development cycle [184,199]. Algorithmic methods, such as the
cut-set, are used to generate precedence data needed for assembly
planning [107]. Planning and optimizing the assembly design for a
new product is time-consuming. Such time investment is
justifiable for products with long life cycles, which are assembled
in high volumes. The current manufacturing environment is
characterized by products with shorter life cycles, and the
production volumes are relatively small due to variety. Therefore,
the time available for product development and assembly-
planning activities is rather short. In addition, the dynamic
manufacturing environment, with its continuously evolving
products and part families and its advanced, changeable manu-
facturing systems, requires the development of new assembly
planning concepts, models, and tools [60]. Azab et al. [11]
introduced a new process-planning approach to reconfigure
existing plans instead of generating new ones. The Reconfigurable
Process Planning (RPP) method transformed the act of planning
from one of sequencing to one of insertion. Fig. 21 shows the
process of finding the best position (x_n) to insert a new feature/
operation (f_n) into a master process plan. Master process plans of
existing parts/products are reconfigured on the fly to meet the
requirements of new parts/products and their features/operations,
with the objective of minimizing changes on the shop floor.
Therefore, instead of generating the new plans from scratch, only
new portions of the old process plan that correspond to the added
or removed features/operations are generated and inserted within
the existing process plan. This new approach enables local
reconfiguration of master process plans when needed, where
needed, and as needed, while minimizing the extent of change and
its associated cost and time.
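The insertion idea can be sketched as a small search over feasible positions, as below; the precedence handling and the tool-change cost measure are illustrative assumptions, not the RPP formulation of [11].

```python
def best_insertion(master_plan, new_op, must_follow, must_precede, cost):
    """Planning as insertion rather than resequencing: find the
    position x_n for a new feature/operation f_n that respects
    precedence and minimizes a shop-floor change cost."""
    lo = max((master_plan.index(p) + 1 for p in must_follow), default=0)
    hi = min((master_plan.index(s) for s in must_precede),
             default=len(master_plan))
    candidates = [master_plan[:i] + [new_op] + master_plan[i:]
                  for i in range(lo, hi + 1)]
    return min(candidates, key=cost)

# Hypothetical master plan; the cost counts tool changes between
# adjacent operations, so the new drilling step lands next to a drill op.
plan = ["mill A", "drill B", "mill C"]
cost = lambda p: sum(p[i].split()[0] != p[i + 1].split()[0]
                     for i in range(len(p) - 1))
print(best_insertion(plan, "drill D", ["mill A"], ["mill C"], cost))
```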
7.4. Simulation of assembly systems
Manufacturing and assembly systems, similar to any product,
also have a life cycle from initial design, construction, and use to
redesign due to expansion, changes, and reconfiguration, as shown
in Fig. 22. An effective system simulator is very useful in all stages
of a manufacturing or assembly system life cycle.
Fig. 20. IDEF0 diagram: inputs, outputs, mechanisms, and controls governing product simulators.
Fig. 21. Finding the best position (x_n) to insert a new feature/operation (f_n) [11].
Manufacturing and assembly systems are themselves products
[53]. They must satisfy functional requirements and objectives
regarding the processes and machinery needed to fabricate and
assemble certain products at specied rates. Their design and
planning is guided by constraints related to the machines to be
used, the process plans and precedence relationships to be
followed, certain parts/machine assignment priorities, and proces-
sing times and operating heuristics/rules to be observed, as well as
production rates and throughput/cycle time to be achieved; see
Fig. 23.
Simulation has long been used to assist system planners in
designing the best system that fulfills the requirements, as well as
a decision support tool for managers to consider what-if scenarios
when selecting among alternate layouts, or when selecting system
components and their characteristics and implementing changes.
Experimenting with a simulated model is much less costly than
purchasing and implementing a physical prototype system or
approving changes and then having to test the real system
[6,59,124]. In addition to their use in the initial system design,
simulators also are used to evaluate expansion or modification
plans before investing in equipment and construction. Another
important use of system simulators is to devise the most
appropriate operating rules and heuristics governing the move-
ment of materials and tools and all decisions used to control them.
Once these logical operating and control rules have been verified,
the logic built into the simulator can be used as the basis for the
real-time controller of the real system [59]. More information
about the uses of system simulators can be found in [167,175].
7.4.1. Digital assembly system simulators
Simulators of assembly systems may be digital models,
including virtual and augmented reality, physical models, or a
hybrid of both. Digital models are much less expensive compared
to physical implementation and experimentation. As in any
computer model, attention must be paid to the quality of the
model and how closely it resembles the behavior of the real
system, any simplifying assumptions made, and the input data.
Deficiencies in any one of these areas can render the results from
simulation models useless. Digital simulators may be analytical or
event-based models, and each type has applications for which it is
most suitable depending on whether the system operation is
continuous or event-driven. The most common digital system
simulators use: (i) Discrete Event Simulation (DES), (ii) System
Dynamics (SD), and (iii) Agent-Based (AB) Simulation. Virtual and
augmented reality models are increasingly used for simulation
enabling user interaction in an immersive VE.
With the increasing complexity of manufacturing/assembly
systems, comprehensive simulation models are needed to reflect
the interrelations among system entities. Modeling and simulating
production processes continues to become more challenging and
to require more expert knowledge and effort. Modeling production
lines for initial simulation studies typically requires a great deal of
time. During the operation stage, it also takes several months to
implement any engineering changes, such as increasing the
number of stations or buffers, or reducing the number of operators.
Traditional simulation modeling methods usually require expert
knowledge for development and modication, and much time is
needed for verification and validation [188].
The verification and validation of new designs are essential as
they directly influence production performance and ultimately
define product functionality and customer perception [117].
Research on aspects of verification and validation is widespread,
ranging from tools employed during the digital system design
phase to methods deployed for prototype verification and
validation.
7.4.2. Physical assembly system simulators
Physical system simulators can be full-size or scale prototypes,
with the guiding rule being that the more the model resembles the
operation of the real system, the better the results. Affordable,
small, table-top physical models are best used for demonstration
or hands-on training and education. Some scaled physical models
are very sophisticated and can be used effectively in education and
research. Full-size physical system prototypes may include real
assembly machines, robots, and material handling systems similar
to those used in industry. They are naturally more expensive and
should be designed and built in a modular fashion for re-use in the
simulation of different systems.
7.5. Integration of product design and assembly system synthesis and
simulation
The proliferation of product variety that recently has been
observed [54,87] is driven primarily by the heterogeneity of
customer requirements. It presents major challenges in product
design, process and production planning, and manufacturing/
assembly system design [56,58,60]. However, commonalities
across different manufacturing domains are key to mitigating
the potential negative effects of variety-induced complexity. The
degree of homogeneity of customer requirements affects the
balance of product design architectures between integral and
modular designs. Integrated product architecture may only require
a simple linear process layout. Modular design may necessitate a
more complex system conguration and process layout, but it
facilitates delaying the points of product differentiation to realize
efficient mass customization [4]. Common, unchanged product
components are candidates for mass production on a dedicated
manufacturing system, while diverse, evolving components may
require more changeable and adaptable, but also more complex
and expensive, flexible and reconfigurable manufacturing/assem-
bly systems [57]. Manufacturers, designers, and engineers must
find the right balance between design simplicity and component
commonality in integration. Core modules and corresponding core
assembly processes seen in dedicated, non-adaptable platforms
lead to more efficiency. Modularity and changeability, on the other
hand, may involve more complexity and higher investments, yet
would lead to more customer satisfaction and sustainability
Fig. 22. Role of simulation throughout manufacturing system life cycle [53].
Fig. 23. IDEF0 diagram: inputs, outputs, mechanisms, and controls governing system simulators.
M.C. Leu et al. / CIRP Annals - Manufacturing Technology 62 (2013) 799822 815
through adaptation. The unied commonality pattern illustrates a
recurring footprint that relates the different components of
manufacturing. Finding those patterns leads to the development
of targeted, integrated product design and assembly system
synthesis models.
The issues arising from the increased variety of products and the
high complexity of modern manufacturing systems make the use
of simulation during the manufacturing/assembly system design
stages a necessity. Simulation can be used to analyze, evaluate,
and compare system design alternatives, and to select those that
best suit the changing and more customized products, integrating
product design and synthesis with the design of their
manufacturing and assembly systems.
Early design demonstration, verification, and testing offer the
best chance to improve the design of new products, processes, or
systems, especially for complex assemblies, and to enhance the
quality of digital simulation models based on real data capture. The
development of didactic manufacturing systems has made more
real-time data available. The transition from digital to physical
factories is necessary to facilitate the intelligent utilization of
online data. This saves modeling time, helps industrial engineers
with limited simulation knowledge and experience to conduct
simulation studies, and provides a method for rapid system
prototyping.
An example of a full-scale physical assembly simulator is the
state-of-the-art transformable assembly platform at the Intelligent
Manufacturing System Center (IMSC) at the University of Windsor
[57] used for integrating product design with system design,
planning, and usage, as shown in Fig. 24. Assessing and managing
complexity at the earliest stages of product and assembly system
design and before the physical system exists is necessary for
avoiding time-consuming and costly changes in the physical
assembly system.
In today's manufacturing environment, change and increased
variety have become constant. Variety may increase profit because
of increased sales, but it can contribute substantially to increased
cost and complexity of manufacturing. In order to enhance profits
from increased variety, the complexity of the product/system
must be managed. The economic importance of assembly has led
to research efforts to improve the efficiency and cost effectiveness
of assembly by measuring product assembly complexity [164] and
system complexity [165], and by managing their mutual effects on
the integrated product/assembly system design [163].
7.6. Role of assembly system simulators in education and training
In the last few years, the concept of Learning Factories has
gained popularity, and some have been installed in Europe
[1,88,176,193] and North America [57]. The objective is to provide
engineers, students, and researchers with a valuable learning and
training experience in a realistic setting. Wagner et al. [186]
recently conducted a comprehensive literature survey to investi-
gate the existing learning factories as prototypes for changeable
and reconfigurable manufacturing systems. They established a
classication scheme to explore and evaluate the state of the art of
learning factories and to examine their suitability for teaching and
research. Learning factories are not present in developing countries
due to the high cost associated with establishing and operating
them. However, simpler or limited variants exist in many other
countries and prove very useful for education, research, and
industrial development purposes [186].
Learning factories comprise both physical and digital environ-
ments. The physical environment includes real system compo-
nents, such as machining, assembly, logistics, controls, and
information and energy flow modules. Integrated planning,
modeling, visualization, and simulation tools are part of the
digital environment, which is also integrated with the physical
system. This offers new possibilities for transferring digitally
created solutions to a real system for testing, evaluation, and
demonstration. Furthermore, there is automatic feedback from the
real system components to the digital environment for adaptation
and change planning [57].
Digital system simulation methods and tools have seen
significant advances in the last two decades and now are equipped
with powerful graphical user interfaces for model and data input
and dynamically animated displays of simulation results. Some
simulators also have 3D visualization capabilities, which yield a
realistic and immersive experience for evaluating the systems
being simulated and assessing their performance. All of these
simulation tools can effectively enrich the experiential learning of
students and allow users to make better decisions about the design
and operation of the modeled manufacturing/assembly systems.
They can be used by senior undergraduate and graduate students
as well as researchers and practicing engineers.
As an example of a learning factory, the iFactory [65] at the
University of Windsor can be changed physically in a short amount
of time. In response to changing products and production
demands, it can be reconfigured into different production lines
comprised of individual modules of production cells, such as
conveyors, branches, automated storage and retrieval systems,
various assembly cells, and inspection cells, including the newest
automation technology of drives, assembly robots, and vision
systems. The plug & play intelligent system interface and its
modularity enable quick and simple implementation of many
different production layouts and system component combinations
for effective and creative learning and experimentation. It can
physically demonstrate the impact of new technologies, product
and system innovations, and changes in market conditions. This
system is supported by advanced CAD software, designers'
interactive screens, and the latest rapid prototyping equipment,
system simulation software, and reconfigurable process and
production planning.
The original product assembled by the iFactory system was a
desk-set with 200 variants. Variety was created by the different cups
and gadgets that could be placed on the top of the product
platform. Other product platform, cup, and gadget variants also
have been produced using rapid prototyping to increase the ability
to respond to changing customer needs.
8. Applications
CAD model based simulation provides many benefits in product
development and assembly. In product development, it can reduce
the number of modifications, leading to reduced product cost and
time to market. In assembly design, it can be used for assembly
operation planning and the design of tools, fixtures, cells, and an
assembly line. In assembly process verification, it can be used for
accessibility verification, error prevention/reduction, interference
checking, and ergonomic analysis. In assembly training, it can be
used for documentation, computer-assisted training, VE training,

Fig. 24. Integrated digital and physical system simulator at IMSC, University of
Windsor.
and performance assessment. Furthermore, it can be used with
suppliers and customers to visually communicate shared solutions
and databases, and accumulate and manage technology know-
how. This section provides application examples that have
benefitted from utilizing some of the technologies discussed in
the previous sections.
8.1. Automobile assembly planning
At Husqvarna (BMW group), motorcycle assembly planning is
conducted at the design stage by product designers based on
simulations generated from CAD data [115]. Other aspects such as
machine layout design, line balancing, scheduling, etc. are carried
out at the manufacturing stage using various simulation tools that
are not always based on CAD data [181].
CAD based modeling and simulation has been implemented by
Piaggio, the biggest European manufacturer of motorcycles, as
shown in Fig. 25 and detailed in Fig. 26 [180]. Fig. 25 shows that
designers and manufacturing specialists concurrently design
scooter parts and assembly devices. Manual and automated
assembly tools are modeled using a CAD system. Both standard
tools (screwdrivers, gauges, etc.) and custom devices (calipers,
pallets, fixtures, etc.) are included in the simulated assembly
sequences in order to evaluate feasibility, check for interferences,
assess tolerances, and interactively make any necessary changes to
both parts and tools. Snapshots and short movies of assembly
phases are included in the manufacturing plan as instructions for
documentation and staff training purposes.
Fig. 26 details the main benefits of this CAD model based
approach. Tools such as screwdrivers and go/no-go gauges can be
assessed before they are purchased, and suppliers can visualize the
use of custom devices to reduce design errors, leading to co-
makership and co-design. Through the use of CAD model based
simulation, assembly fixtures can be matched with parts, and tool
accessibility can be virtually tested from different directions.
PROKON (PROduktionsgerechte KONstruktion), which means
design for good assemblability, is currently applied by Magna
International Inc., a world leader in automotive supply. Geometric
and physical information of parts and their relationships are
extracted from CAD models and evaluated alongside other product
information. Different assembly options are evaluated according to
a set of 10 rules by a team of PROKON designers and industrial
engineers in order to achieve easier assembly and consequently
reduce costs. The method has saved on the order of 20–40% of the
time required for product development. CAD models or product
specifications are received from car manufacturers, and parts
are ready for production in six months. From this experience, it can
be concluded that simplified and standardized methods often can
produce significant practical benefits and are easier to implement
in large-scale manufacturing.
Ford Motor Company [123] has developed a system called the
Human Occupant Package Simulator (HOPS). This system has a
large database of captured motions of drivers and passengers
inside vehicles. Ford designers use digital humans informed by
these motion datasets inside virtual vehicle designs to analyze
their interactions with the vehicles. This helps them improve the
ergonomics of their vehicle designs as much as possible before
building physical models and prototypes. As another example,
simulation tools are applied in [147] to a work cell with
cooperating robots in mass customization in the automotive
industry.
8.2. Aircraft assembly simulation and ergonomic analysis
Assembly processes usually involve a number of manual
operations performed by human operators working on the shop
floor. For example, fastening is a major operation performed in
aircraft assembly. Mechanics performing fastening operations at
awkward postures may risk ergonomic injuries [97]. Ergonomics is
an important issue because nearly one-third of workplace injuries
are ergonomically related [30]. To design safe workplaces, the
probable causes of injuries can be identied by simulating work
conditions and quantifying risk factors [16,80].
Researchers at the Missouri University of Science and Technol-
ogy have developed a methodology using a low-cost motion
capture system to track assembly operations using both a physical
mockup and an immersive virtual environment, with the captured
motion data used in a CAD model based simulation for ergonomic

Fig. 25. CAD model based assembly planning of scooter engines.

Fig. 26. CAD model based assembly tool and fixture design.
analysis [43,150]. They have demonstrated this system's utility for
investigating the fastening operation and its potential cause of
ergonomics related injuries in the aircraft manufacturing industry.
8.2.1. Simulated assembly using a physical mockup
A physical mockup for a fuselage belly section, as shown in
Fig. 27, was built to perform a simulated fastening operation.
Twelve Optitrack cameras were set up as a motion capture system
to eliminate any possible occlusion from the mockup. A Kalman
filter was implemented to increase the accuracy and stability of the
data obtained by the motion capture system. The generated data
were used in simulation with Siemens Jack software for ergonomic
analysis [150].
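The sketch below shows one common way such filtering can be done: a constant-velocity Kalman filter applied to a single marker coordinate. The sampling interval and noise variances are assumed values for illustration; the actual filter design used in [150] is not detailed here.

```python
import numpy as np

def kalman_smooth(zs, dt=1.0 / 120, q=1e-3, r=1e-2):
    # Constant-velocity Kalman filter over one marker coordinate.
    # zs: noisy positions; dt: assumed sampling interval;
    # q, r: assumed process/measurement noise variances.
    F = np.array([[1.0, dt], [0.0, 1.0]])   # state transition (pos, vel)
    H = np.array([[1.0, 0.0]])              # only position is measured
    Q = q * np.eye(2)
    R = np.array([[r]])
    x = np.array([[zs[0]], [0.0]])          # initial state estimate
    P = np.eye(2)                           # initial covariance
    smoothed = []
    for z in zs:
        x = F @ x                           # predict
        P = F @ P @ F.T + Q
        y = np.array([[z]]) - H @ x         # innovation
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)      # Kalman gain
        x = x + K @ y                       # update
        P = (np.eye(2) - K @ H) @ P
        smoothed.append(x[0, 0])
    return np.array(smoothed)
```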
8.2.2. Virtual assembly inside a CAVE
A 3 m 3 m 3 m four-walled CAVE (Cave Automatic Virtual
Environment) was utilized to provide a realistic 3D virtual
environment. The layout of this CAVE included three rear-
projected walls and a down-projected oor using CRT projectors.
The projections on the walls and oor of the CAVE were monitored
by four synchronized computers that formed a cluster, with one
computer serving as the master and the others as slaves. The
scenes rendered on the walls and oor were active stereo images
created at a frame rate of 85 Hz. Shutter glasses were used in sync
with the frequency of the stereo vision to create a stereoscopic
viewing effect. Virtual reality toolkits, including VR Juggler and
OpenGL, were used to create a VR environment in the CAVE. The VR
environment was congured using VRJCong, a Java based
graphical user interface. A CAD model of the belly section of an
aircraft fuselage to be displayed in the CAVE was created. A
triangular mesh representation of the CAD model and texture in
bitmap format were used. A polygon rendering algorithm was
developed and implemented with OpenGL to render the scene. The
VR scene was composed by placing the four rendered scenes side
by side in a predened layout using the information from the VR
Juggler conguration le.
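As a small illustration of one step in such a rendering pipeline, the sketch below computes per-face unit normals from a triangular-mesh representation of a CAD model, which a flat-shaded OpenGL renderer needs before lighting each triangle. It is a generic sketch, not the specific algorithm used in the CAVE system described above.

```python
import numpy as np

def face_normals(vertices, faces):
    # Per-face unit normals for a triangular mesh (flat shading).
    # vertices: (n, 3) points from the tessellated CAD model;
    # faces: (m, 3) vertex indices per triangle.
    v0, v1, v2 = (vertices[faces[:, i]] for i in range(3))
    n = np.cross(v1 - v0, v2 - v0)          # right-hand rule winding
    return n / np.linalg.norm(n, axis=1, keepdims=True)
```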
The left of Fig. 28 shows a virtual fastening operation on the
virtual fuselage inside the CAVE. After setting the world coordinate
system, the motion capture system recorded the initial position
and orientation of three body segments of the human wearing a
body suit with markers on it. This information was used to map the
human performing the virtual assembly task onto the digital
human model. Once the body pose information was recorded, the
system began recording the motion data and simulating the virtual
assembly in real time; see the right of Fig. 28.
8.2.3. Ergonomic analysis
Jack's Task Analysis Toolkit (TAT) is a set of human factor
analysis tools that can be used to perform ergonomic analysis of
simulated human movements. Lower Back Analysis, Static
Strength Prediction, NIOSH (National Institute for Occupational
Safety and Health) Lifting Analysis, Fatigue Analysis, and RULA
(Rapid Upper Limb Assessment) are some of the ergonomic
analysis tools available from TAT.
The fastening operation predominantly involves the upper body
of the operator, so RULA is a useful tool for ergonomic analysis.
RULA has been developed for use in ergonomic investigation of
workplaces [120], and is especially useful for scenarios in which
work-related upper limb disorders are reported. RULA uses a
scoring system based on posture, muscle use, and force exertion to
assign an action level to the evaluated task. After setting the values
of these parameters, the result of RULA analysis can be readily
obtained. The RULA analysis can be used to determine the risk
levels associated with particular postures and to suggest actions
needed in order to reduce the risk of long-term ergonomic injuries
and to design safer workplaces [86,211].
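The final decision step of RULA can be illustrated with the short sketch below, which maps a grand score to the four published action levels [120]. The posture, muscle-use, and force scoring tables that produce the grand score are omitted, and the wording of the recommendations is paraphrased.

```python
def rula_action_level(grand_score):
    # Map a final RULA grand score to its action level; wording paraphrased
    # from McAtamney and Corlett [120]. The tables that produce the grand
    # score from posture, muscle use, and force are not reproduced here.
    if grand_score <= 2:
        return 1, "Posture acceptable if not maintained for long periods"
    if grand_score <= 4:
        return 2, "Further investigation; changes may be required"
    if grand_score <= 6:
        return 3, "Investigate further and change soon"
    return 4, "Investigate and implement change immediately"
```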
8.3. Assembly inspection planning
Design of an assembly inspection system also can take
advantage of CAD modeling of a product and its components to
be inspected. As an example, CAVIS (Fig. 29) is a CAD model
based inspection system design tool developed for the car lock
manufacturer Motrol [106]. Synthetic images are generated from
CAD models in order to develop a vision system before the actual
assembly line and products are available. The basic principle is to
use a CAD modeler to identify potential assembly errors, e.g., use of
wrong components. Graphic rendering then is used to simulate the
position of cameras and the effect of lighting in order to enhance the
differences between correct and wrong components and to select
image analysis algorithms. Despite recent research in image
rendering, real part variability still requires an on-line learning
phase to fine-tune the inspection system (Fig. 29).
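As a simplified illustration of the kind of image analysis such a system might select, the sketch below uses OpenCV template matching to test whether a component's rendered appearance is found in a camera image; the template would come from the CAD-based synthetic rendering described above. The acceptance threshold is a made-up value of the sort that the on-line learning phase would tune; the actual CAVIS algorithms are not reproduced here.

```python
import cv2

def component_present(image, template, threshold=0.8):
    # Slide a rendered component template over a grayscale camera image;
    # the threshold is a hypothetical value that an on-line learning phase
    # would tune against real part variability.
    result = cv2.matchTemplate(image, template, cv2.TM_CCOEFF_NORMED)
    _, max_val, _, max_loc = cv2.minMaxLoc(result)
    return max_val >= threshold, max_loc
```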
9. Technology gaps and future R&D needs
Some technology gaps and future R&D needs for the develop-
ment of CAD model based assembly simulation, planning, and
training systems are discussed in this section.

Fig. 27. Motion data captured from a physical fuselage mockup by an Optitrack
motion capture system and real-time simulation in Jack.

Fig. 28. Virtual fastening operation inside a CAVE.

Fig. 29. CAD model based design of assembly inspection system.


9.1. Virtual assembly realism
To improve the realism of virtual assembly, research and
development efforts are needed to advance virtual reality
technologies aimed at accomplishing the following objectives:
(i) High-fidelity dynamic graphic displays. For HMDs, the
resolution and field of view need to be increased, and the
weight of the helmet should be reduced. The projection
systems can be improved by reducing the latency effect,
increasing robustness, and reducing costs.
(ii) More accurate, efficient, robust, and low-cost motion tracking
devices. Hybrid tracking devices that combine different
principles (optical, electromagnetic, etc.) need to be developed
that provide advantages over single-type tracking devices.
Advanced data fusion techniques will be the key to hybrid
tracking devices.
(iii) Less invasive and more intelligent sensors that can capture
information beyond motion, such as human emotions (e.g.,
stress level).
(iv) Haptic devices that can generate both tactile and force stimuli
to provide users with richer touch feedback and allow them to
feel the surface texture, shape, and softness/hardness of an
object. Self-grounded haptic devices that are lightweight and
yet can generate significant force feedback will be useful for
many applications.
(v) Algorithms and software for representing, organizing, and
manipulating large sets of geometric and physical data to meet
the different computational demands in multi-threading the
graphics, haptic, and auditory displays for highly realistic,
multi-modal virtual assembly capabilities.
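Concerning item (v), a core difficulty is that haptic rendering typically demands update rates around 1 kHz for stable force feedback, while graphics runs at tens of hertz. The sketch below is a minimal, generic illustration of such a multi-rate arrangement: two threads sharing one object pose under a lock, with assumed rates and with the force computation and rendering left as placeholders.

```python
import threading
import time

state_lock = threading.Lock()
shared_pose = [0.0, 0.0, 0.0]   # object pose shared by both loops

def haptic_loop(stop):
    # Force feedback is typically updated at roughly 1 kHz for stability.
    while not stop.is_set():
        with state_lock:
            pose = list(shared_pose)
        # ... compute and send the force response for `pose` here ...
        time.sleep(0.001)

def graphics_loop(stop):
    # Rendering commonly runs near 60 Hz and only reads the shared state.
    while not stop.is_set():
        with state_lock:
            pose = list(shared_pose)
        # ... render the scene for `pose` here ...
        time.sleep(1.0 / 60)

stop = threading.Event()
threads = [threading.Thread(target=f, args=(stop,))
           for f in (haptic_loop, graphics_loop)]
for t in threads:
    t.start()
time.sleep(0.1)
stop.set()
for t in threads:
    t.join()
```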
9.2. Internet-based collaborative virtual assembly planning and
training
Due to the increasing complexity of engineering artifacts and
other factors such as large variations in labor costs among different
countries in the world, it is becoming more and more essential for
product design and manufacturing to be done through global
collaboration. Internet-based VEs offer the possibility of major
breakthroughs in collaborative engineering, allowing team mem-
bers located in different geographical locations and time zones to
utilize internet-enabled collaborative environments with multi-
media VR tools to share a VE in collaborative design and
manufacturing. New capabilities are needed to enable effective
distant collaboration, including conflict detection and resolution
among the collaborators. In order to achieve simultaneous displays
of a complex VE and concurrency control among the collaborators
in different geographical locations, advanced computing, network-
ing, and communication architectures and methodologies, along
with the necessary hardware and software, must be developed and
implemented.
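One simple form of the needed concurrency control is pessimistic per-object locking, sketched below: a collaborator must acquire a part before manipulating it, and a second collaborator's attempt is detected as a conflict. This is a generic illustration with hypothetical names, not a scheme prescribed by the cited work; real systems must also handle network latency and failures.

```python
import threading

class SharedAssembly:
    # Pessimistic concurrency control: a part may be manipulated by at
    # most one collaborator at a time; a second acquire is a conflict.
    def __init__(self):
        self._owners = {}               # part id -> collaborator id
        self._lock = threading.Lock()

    def acquire(self, part_id, user_id):
        with self._lock:
            if self._owners.get(part_id, user_id) != user_id:
                return False            # conflict detected
            self._owners[part_id] = user_id
            return True

    def release(self, part_id, user_id):
        with self._lock:
            if self._owners.get(part_id) == user_id:
                del self._owners[part_id]
```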
Achieving effective, web-based, collaborative virtual assembly
planning and training requires substantial further research to
develop new VE collaboration utilities [34,172]. Toward this end,
advanced communication capabilities offered by Web 2.0, ultra-
high speed internet, cloud computing and sourcing, mobile
(internet) devices with high processing power and remote data
transfer, social networks, crowd-sourcing [40], etc. should be
explored and exploited.
10. Summary
This paper has reviewed the state-of-the-art methodologies for
developing CAD model based systems for assembly simulation,
planning, and training. In particular, it has discussed how to
acquire digital data from the surface of a 3D object, how
to construct a CAD model using the acquired data, and how to
exchange data between CAD and VR systems. It also has presented
methods for motion capture, force modeling, sound modeling, and
multi-modal (graphic, haptic, and auditory) rendering. Moreover,
virtual assembly modeling methods, including constraint-based
modeling and physics-based modeling, were discussed. The
description, categorization, and comparison of different technol-
ogies should be helpful for the selection of proper methods,
techniques, and tools to build an assembly simulation, planning,
and training system. Furthermore, the paper described an
integrated methodology for the design, planning, evaluation,
and testing of assembly systems, useful for making decisions on
the development and selection of assembly processes and systems
using digital and physical simulations at all levels, from products to
systems. This paper has provided some assembly planning and
training application examples that incorporate VR and AR
technologies, and has demonstrated the potential of such CAD
model based systems for shortening the product design cycle,
improving product quality, and enhancing worker skills via virtual
training. Finally, future R&D needs for further advancement of
these systems were discussed.
References
[1] Abele E, Tenberg R, Wennemer J, Cachay J (2010) Production Skills Development in Learning Factories. Zeitschrift für Wirtschaftlichen Fabrikbetrieb 105:909–913.
[2] Adamo-Villani N (2008) 3D Rendering of American Sign Language Finger
Spelling: A Comparative Study of Two Animation Techniques. International
Journal of Human and Social Sciences 3(4):314319.
[3] Aguiar ED, Theobalt C, Stoll C, Seidel H-P (2007) Marker-less Deformable
Mesh Tracking for Human Shape And Motion Capture. Proceedings of IEEE
Conference on Computer Vision and Pattern Recognition.
[4] AlGeddawy T, ElMaraghy H (2011) Design of Single Assembly Line for the
Delayed Differentiation of Product Variants. Flexible Services and Manufactur-
ing Journal 22(3):163182.
[5] Aliaga D, Xu Y (2008) Photogeometric Structured Light: A Self-Calibrating and Multi-viewpoint Framework for Accurate 3D Modeling. IEEE Conference on Computer Vision and Pattern Recognition, Anchorage, AK, 1–8.
[6] Andersson M, Olsson G (1998) A Simulation Based Decision Support
Approach for Operational Capacity Planning in a Customer Order Driven
Assembly Line. Proceedings of Simulation Conference, 935942.
[7] ART, 2013. http://www.ar-tracking.com/home/.
[8] Ascension Technology Corporation, 2013. http://www.ascension-tech.com/.
[9] Ayoub MM, Miller M (1991) Industrial Workplace Design. Workplace, Equipment and Tool Design.
[10] Azab A, ElMaraghy H (2007) Mathematical Modeling for Reconfigurable Process Planning. CIRP Annals – Manufacturing Technology 56:467–472.
[11] Azab A, ElMaraghy H, Samy SN (2009) Reconfiguring Process Plans: A New Approach to Minimize Change. Changeable and Reconfigurable Manufacturing Systems, Springer, London, 179–194.
[12] Baird KM, Barfield W (1999) Evaluating the Effectiveness of Augmented Reality Displays for a Manual Assembly Task. Virtual Reality 4(4):250–259.
[13] Baltsavias EP (1999) A Comparison Between Photogrammetry and Laser Scanning. ISPRS Journal of Photogrammetry and Remote Sensing 83–94.
[14] Barbieri L, Bruno F, Caruso F, Muzzupappa M (2008) Innovative Integration
Techniques Between Virtual Reality Systems and CAx Tools. International
Journal of Advanced Manufacturing Technology 38:10851097.
[15] Bernard A, Fischer A (2002) New Trends on Rapid Product Development. CIRP Annals – Manufacturing Technology 51(2):635–652.
[16] Bernard A, Hasan R (2002) Working Situation Model for Safety Integration During Design Phase. CIRP Annals – Manufacturing Technology 51(1):119–122.
[17] Bernstein NL, Lawrence DA, Pao LY (2005) Friction Modeling and Compensation
for Haptic Interfaces, Euro Haptics Conference, and Symposium on Haptic Inter-
faces for Virtual Environment and Teleoperator Systems.
[18] Blake JS (2011) Real-time Human Pose Recognition in Parts From Single
Depth Images. Proceedings of IEEE Conference on Computer Vision and Pattern
Recognition.
[19] Böhler W (2005) Comparison of 3D Laser Scanning and Other 3D Measurement Techniques. in Gruen A, Baltsavias EP, (Eds.) Recording, Modeling and Visualisation of Cultural Heritage, Taylor & Francis, London, UK, 89–99.
[20] Boothroyd G, Dewhurst P, Knight W (2001) Product Design for Manufacture
and Assembly, Marcel Dekker, New York.
[21] Bordegoni M, Cigini U, Belluco P, Aliverti M (2009) Evaluation of a Haptic-
Based Interaction System for Virtual Manual Assembly. Shumaker R, (Ed.)
Lecture Notes in Computer Science Virtual and Mixed Reality Third International
Conference on Virtual and Mixed Reality San Diego CA USA, vol. 5622. .
[22] Bouma W, Fudos I, Hoffmann C, Cai J, Paige R (1995) A Geometric Constraint
Solver. Computer-aided Design 27(6):487501.
[23] Bouzit M, Popescu G, Burdea G, Boian R (2002) The Rutgers Master II-ND Force
Feedback Glove, Haptic Interfaces for Virtual Environment and Teleoperator
Systems, Rolando.
[24] Bowland NW, Gao JX, Sharma R (2003) A PDM- and CAD-Integrated Assembly
Modeling Environment for Manufacturing Planning. Journal of Materials
Processing Technology 138:8288.
[25] Brough J, Schwartz M, Gupta S, Anand D, Kavetsky R, Pettersen R (2007)
Towards the Development of a Virtual Environment-Based Training System
for Mechanical Assembly Operations. Virtual Reality 11:189206.
[26] Bucksch A (2006) 3D Model Generation with Laser Scanners, Leonardo Times.
[27] Bullinger H-J, Richter M, Seidel KA (2000) Virtual Assembly Planning. Human
Factors and Ergonomics in Manufacturing 10(3):331341.
[28] Burdea GC (1999) Keynote Address: Haptic Feedback for Virtual Reality.
Proceeding of International Workshop on Virtual Prototyping 8796.
[29] Burdea GC, Coiffet P (2003) Virtual Reality Technology, Wiley, Hoboken, NJ.
[30] Bureau of Labor Statistics, 2007. http://www.bls.gov/opub/ted/2008/dec/
wk1/art02.htm.
[31] Caudell TP, Mizell DW (1992) Augmented Reality: An Application of Heads-
up Display Technology to Manual Manufacturing Processes. Proceedings of
25th Hawaii International Conference on System Sciences, 659669.
[32] Chadda A, Zhu W, Leu MC, Liu XF (2011) Design, Implementation, and
Evaluation of Optical Low-cost Motion Capture System. Proceedings of the
ASME International Design Engineering Technical Conferences & Computers and
Information in Engineering Conference IDETC/CIE, Washington, DC, USA.
[33] Chandrasegaran SK, Ramani K, Sriram RD, Horváth I, Bernard A, Harik RF, Gao W (2013) The Evolution, Challenges, and Future of Knowledge Representation in Product Design Systems. Computer-aided Design 45(2):204–228.
[34] Chen C-j, Yun-feng W, Yong Y (2010) A Modeling and Representation Method for Virtual Assembly System. Applied Mechanics and Materials 1057–1062.
[35] Cheung G, Baker S, Kanade T (2003) Visual Hull Alignment and Refinement Across Time: A 3D Reconstruction Algorithm Combining Shape-From-Silhouette With Stereo. Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, vol. 2, II-375–382.
[36] Cheung K, Baker S, Kanade T (2003) Shape-From-Silhouette of Articulated
Objects and Its Use for Human Body Kinematics Estimation and Motion
Capture. Proceedings of 2003 IEEE Computer Society Conference on Computer
Vision and Pattern Recognition (S. I-77I-84 vol. 1).
[37] Chimienti V, Iliano S, Dassisti M, Dini G, Failli F (2010) Guidelines for Imple-
menting Augmented Reality Procedures in Assisting Assembly Operations. IFIP
Advances in Information and Communication Technology 315:174179.
[38] Chryssolouris G, Mavrikios D, Fragos D, Karabatsou V, Alexopoulos K (2004) A Hybrid Approach to the Verification and Analysis of Assembly and Maintenance Processes Using Virtual Reality and Digital Mannequin Technologies. Virtual and Augmented Reality Applications in Manufacturing 97–110.
[39] Cicek A, Gülesin M (2007) A Part Recognition Based Computer Aided Assembly System. Computers in Industry 58(8–9):733–746.
[40] Corney JR, Torres-Sánchez C, Jagadeesan AP, Yan XT, Regli WC, Medellin H (2010) Putting the Crowd to Work in a Knowledge-Based Factory. Advanced Engineering Informatics 24(3):243–250.
[41] Curtis D, Mizell D, Gruenbaum P, Janin A (1998) Several Devils in the Details:
Making an AR Application Work in the Airplane Factory. Proceedings of the
International Workshop on Augmented Reality, 4760.
[42] CyberGlove Systems, 2013. http://www.cyberglovesystems.com/.
[43] Daphalapurkar CP (2012) Development of Kinect Applications for Assembly
Simulation and Ergonomic Analysis, Missouri University of Science and Tech-
nology, Rolla, MO, USA. (M.S. Thesis).
[44] Day PN, Ferguson RK, Holt OB, Hogg S, Gibson D(2005) Wearable Augmented
VR for Enhancing Information Delivery in High Precision Defense Assembly:
An Engineering Case Study. Virtual Reality 8:177185.
[45] Dearborn M (2013) Ford's Virtual Manufacturing, http://www.motionanalysis.com/html/temp/ford.html.
[46] Dick AR, Torr PHS, Cipolla R (2004) Modeling and Interpretation of Architecture From Several Images. International Journal of Computer Vision 60(2):111–134.
[47] Dini G, Santochi M (1992) Automated Sequencing and Subassembly Detection in Assembly Planning. CIRP Annals – Manufacturing Technology 41(1):1–4.
[48] Du J, Duffy V (2007) A Methodology for Assessing Industrial Workstations Using Optical Motion Capture Integrated With Digital Human Models. Occupational Ergonomics 7:11–25.
[49] Duriez D, Kheddar A (2006) Realistic Haptic Rendering of Interacting Deform-
able Objects in Virtual Environments. IEEE Transactions on Visualization and
Computer Graphics 12(1):3647.
[50] Edwards GW, Barfield W, Nussbaum MA (2004) The Use of Force Feedback and Auditory Cues for Performance of an Assembly Task in an Immersive Virtual Environment. Virtual Reality 7:112–119.
[51] El-hakim S (1998) Theme Issue on Imaging Modeling for Virtual Reality. ISPRS Journal of Photogrammetry and Remote Sensing 53(6):309–310.
[52] El-hakim S (2000) A Practical Approach to Creating Precise and Detailed 3D
Models From Single and Multiple Views. Proceedings of XIX Congress of the
International Society for Photogrammety and Remote Sensing.
[53] ElMaraghy H (2006) A Complexity Code for Manufacturing Systems. ASME International Conference on Manufacturing Science & Engineering, Symposium on Advances in Process & System Planning, Ypsilanti, USA, 1–10.
[54] ElMaraghy H (2009) Manufacturing Success in the Age of Variation, Keynote Paper. 3rd Conference on Changeable, Agile, Reconfigurable and Virtual Production (CARV), Munich, Germany, 5–15.
[55] ElMaraghy HA (1993) Evolution and Future Perspectives of Computer-Aided Process Planning CAPP. CIRP Annals – Manufacturing Technology 42(2):739–751.
[56] ElMaraghy HA (2007) Reconfigurable Process Plans for Responsive Manufacturing Systems, Digital Enterprise Technology: Perspectives and Future Challenges. Springer Science 35–44.
[57] ElMaraghy HA, AlGeddawy T, Azab A, ElMaraghy W (2011) Change in Manufacturing – Research and Industrial Challenges. Proceedings of 4th International Conference on Changeable, Agile, Reconfigurable and Virtual Production (CARV), Montreal, Canada, Springer, 2–9.
[58] ElMaraghy H, Azab A, Schuh G, Pulz C (2009) Managing Variations in
Products, Processes and Manufacturing Systems. CIRP Annals Manufacturing
Technology 58(1):441446.
[59] ElMaraghy HA, Ravi T (1992) Modern Tools for the Design, Modeling and Evaluation of Flexible Manufacturing Systems. International Journal of Robotics and Computer Integrated Manufacturing 9(4–5):335–340.
[60] ElMaraghy H, Wiendahl H-P (2009) Changeability – An Introduction. Changeable and Reconfigurable Manufacturing Systems, Springer-Verlag, 3–24.
[61] Engineering Village, 2012. http://www.engineeringvillage2.org.
[62] Fa M, Fernando T, Dew P (1993) Interactive Constraint-based Solid Modeling Using Allowable Motion. ACM/SIGGRAPH Symposium on Solid Modeling and Applications, 243–252.
[63] Fechteler P, Eisert P, Rurainsky J (2007) Fast and High Resolution 3D Face
Scanning. Proceedings of IEEE International Conference on Image Processing, vol.
3, III-81III-84.
[64] Fernando T, Murray N, Tan K, Wimalaratne P (1999) Software Architecture for
a Constraint-Based Virtual Environment. Proceedings of the ACM Symposium
on Virtual Reality Software and Technology, 147154.
[65] Festo-Didactic, 2011. iFactory, available from: http://www.festo-didactic.com/int-en/news/ifactory-innovative-training-factory.htm.
[66] Fofi D, Sliwa T, Voisin Y (2004) A Comparative Survey on Invisible Structured Light. Proceedings of SPIE Conference.
[67] Froehlich B, Tramberend H, Beers A, Agrawala M, Baraff D (2000) Physically-
based Manipulation on the Responsive Workbench. Proceedings of IEEE Virtual
Reality Conference, 511.
[68] Fusiello AVR (1997) Efficient Stereo With Multiple Windowing. Proceedings of IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 858–863.
[69] Gao XS, Chou SC (1998) Solving Geometric Constraint Systems. II. A Symbolic
Approach and Decision of Re-constructibility. Computer-aided Design
30(2):115122.
[70] Garbaya S, Zaldivar-Colado U (2007) The Affect of Contact Force Sensations
on User Performance in Virtual Assembly Tasks. Virtual Reality 11:287299.
[71] Geomagic, 2013. http://www.geomagic.com/.
[72] Gomes de Sá A, Zachmann G (1999) Virtual Reality as a Tool for Verification of Assembly and Maintenance Processes. Computers and Graphics 23(3):389–403.
[73] Gonzalez CG, Woods RE (2002) Digital Image Processing, Prentice Hall, Upper
Saddle River, NJ, USA.
[74] Graf H, Brunetti G, Stork A (2002) A Methodology Supporting the Preparation
of 3d-cad Data for Design Reviews in VR. Proceedings of the 7th International
Design Conference, 489496.
[75] Guendelman E, Bridson R, Fedkiw R (2003) Nonconvex Rigid Bodies With
Stacking. ACM Transactions on Computer Graphics 22(3):871879.
[76] Gupta M, Agrawal A, Veeraraghavan A, Narasimhan SG (2011) Structured
Light 3D Scanning in the Presence of Global Illumination. IEEE Computer
Vision and Pattern Recognition.
[77] Hakkarainen M, Woodward C, Billinghurst M (2008) Augmented Assembly
Using a Mobile Phone. Proceedings of the 7th IEEE International Symposium on
Mixed and Augmented Reality, Cambridge, UK, 167168.
[78] Halttunen V, Tuikka T (2000) Augmenting Virtual Prototyping With Physical
Objects. Proceeding of Conference on Advanced Visual Interfaces, 305306.
[79] Haniff DJ, Boud A, Baber C (1999) Assembly Training With Augmented Reality
and Virtual Reality. Proceedings of International Conference on HumanCom-
puter Interaction, 3536.
[80] Hasan R, Bernard A, Ciccotelli J, Martin P (2003) Integrating safety into the
Design Process: Elements and Concepts Relative to the Working Situation.
Safety Science 41(2/3):155179.
[81] Hazbany S, Gilad I, Shpitalni M (2007) About the Efciency and Cost Reduc-
tion of Parallel Mixed-model Assembly Lines. The Future of Product Develop-
ment 483492.
[82] Healey G, Binford TO (1987) Local Shape From Specularity. Proceedings of
the 1st IEEE International Conference on Computer Vision, London, UK, 151
160.
[83] Hills AM (2004) Daratech Study Names Delmia Corp. as Market Leader in Digital Manufacturing Process Management. Dassault Systems: http://www.3ds.com/de/company/news-media/press-releases-detail/release/daratech-study-names-delmia-corp-as-market-leader-in/single/268/?cHash=aea58052058f63098d97c74d157a9258.
[84] Horn BK, Brooks MJ (1989) Shape From Shading, MIT Press, Cambridge, MA,
USA.
[85] Howard B, Vance J (2007) Desktop Haptic Virtual Assembly Using Physically
Based Modeling. Virtual Reality 11:207215.
[86] Hu B, Ma L, Zhang W, Salvendy G, Chablat D, Bennis F (2011) Can Virtual
Reality Predict Body Part Discomfort and Performance of People in Realistic
World for Assembling Tasks? International Journal of Industrial Ergonomics
41(1):6471.
[87] Hu SJ, Ko J, Weyand L, ElMaraghy HA, Lien TK, Koren Y, Bley H, Chryssolouris
G, Nasr N, Shpitalni M (2011) Assembly System Design and Operations
for Product Variety. CIRP Annals Manufacturing Technology 60(2):
715733.
[88] Hummel V, Westkämper E (2007) Learning Factory for Advanced Industrial Engineering – Integrated Approach of the Digital Learning Environment and the Physical Model Factory, Production Engineering, University Publishing House, Krakow, Poland, 215–227.
[89] Iglesias R, Prada E, Uribe A, Garcia-Alonso A, Casado SGT (2007) Assembly
Simulation on Collaborative Haptic Virtual Environments. Proceedings of 15th
International Conference in Central Europe on Computer Graphics, Visualization
and Computer Vision, 241247.
[90] Ikeuchi K (2001) Modeling From Reality. Proceedings of 3rd International
Conference on 3-D Digital Imaging and Modeling, Norwell, MA, USA.
[91] Ikonomov P, Milkova E (2004) Virtual Assembly/Disassembly System Using
Natural Human Interaction and Control. Virtual and Augmented Reality Appli-
cations in Manufacturing 111125.
[92] Iliano S, Chimienti V, Dini G (2012) Training by Augmented Reality in
Industrial Environments: A Case Study. Proceedings of 4th CIRP Conference
on Assembly Technologies and Systems, Ann Arbor, USA.
[93] IoTracker, 2013. http://iotracker.com/.
[94] Jayaram S, Jayaram U, Kim YJ, DeChenne C, Lyons KW, Palmer C, et al (2007)
Industry Case Studies in the Use of Immersive Virtual Assembly. Virtual
Reality 11:217228.
[95] Jayaram S, Jayaram U, Wang Y, Lyons K (1999) VADE: A Virtual Assembly Design Environment. IEEE Computer Graphics and Applications 19(6):44–50.
[96] Jin S, Cai W, Lai X, Lin Z (2010) Design Automation and Optimization of
Assembly Sequences for Complex Mechanical Systems. International Journal
of Advanced Manufacturing Technology 48:10451059.
[97] Joshi AS, Leu MC, Murray S (2008) Ergonomic Impact of Fastening Operation.
Proceedings of 2nd CIRP Conference on Assembly Technologies and System.
[98] Yoshiki K, Saito H, Mochimaru M (2006) Reconstruction of 3D Face Model From Single Shading Image Based on Anatomical Database. Proceedings of 18th International Conference on Pattern Recognition.
[99] Kender JR (1979) Shape From Texture: An Aggregation Transform that Maps a Class of Textures into Surface Orientation. Proceedings of 6th International Joint Conference on Artificial Intelligence, vol. 1, 475–480.
[100] Kinect, 2013. http://en.wikipedia.org/wiki/Kinect.
[101] Kirk AG, O'Brien JF, Forsyth DA (2005) Skeletal Parameter Estimation From Optical Motion Capture Data. Proceedings of 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 782–788.
[102] Ko J, Hu SJ (2008) Balancing of Manufacturing Systems With Complex Configurations for Delayed Product Differentiation. International Journal of Production Research 46(15):4285–4308.
[103] Kopp S, Wachsmuth I (2002) Model-based Animation of Co-verbal Gesture.
Proceedings of Computer Animation, Los Alamitos, 252257.
[104] Koren Y, Shpitalni M (2010) Design of Reconfigurable Manufacturing Systems. Journal of Manufacturing Systems 29(4):130.
[105] Lang YD, Yao Y, Xia P (2008) A Survey of Virtual Assembly Technology. Applied Mechanics and Materials 10–12:711–716.
[106] Lanzetta M, Santochi M, Tantussi G (1999) Computer-aided Visual Inspection in Assembly. CIRP Annals – Manufacturing Technology 48(1):21–24.
[107] Laperriere L, ElMaraghy HA (1994) Assembly Sequences Planning for Simul-
taneous Engineering Applications. International Journal of Advanced Manu-
facturing Technology 9:231244.
[108] Laurentini A (1994) The Visual Hull Concept for Silhouette-based Image Understanding. IEEE Transactions on Pattern Analysis and Machine Intelligence 16(2):150–162.
[109] Lecuyer A (2009) Simulating Haptic Feedback Using Vision: A Survey of
Research and Applications of Pseudo-haptic Feedback. Teleoperators and
Virtual Environments MIT Press 18(1):3953.
[110] Lecuyer A, Coquillart S, Kheddar A, Richard P, Coiffet P (2000) Pseudo-Haptic
Feedback: Can Isometric Input Devices Simulate Force Feedback. Proceedings
of IEEE International Conference on Virtual Reality, 8390.
[111] Light R, Gossard D (1982) Modification of Geometric Models Through Variational Geometry. Computer-aided Design 14(4):209–214.
[112] Liu C, Zhang Y, Sun L (2008) Web Based 3D Assembly Sequence Planning
Prototype Integrated With CAD Model. Proceedings of 12th International
Conference on Computer Supported Cooperative Work in Design, 823828.
[113] Liu J, Ning R, Yao Y, Wan B (2006) Product Lifecycle-oriented Virtual Assem-
bly Technology. Frontiers of Mechanical Engineering in China 388395.
[114] Liverani A, Amati G, Caligiana G (2004) A CAD-augmented Reality Integrated
Environment for Assembly Sequence Check and Interactive Validation. Con-
current Engineering Research and Applications 12(1):6777.
[115] Manacorda G (2012) Plant Manager at Husqvarna srl (BMW Group R&D), Motorbike Development Manager (Aprilia, Piaggio Group). Private communication.
[116] Marchetta MG, Forradellas RQ (2010) An Artificial Intelligence Planning Approach to Manufacturing Feature Recognition. Computer-aided Design 42(3):248–256.
[117] Maropoulos PG, Ceglarek D (2010) Design Verication and Validation In
Product Lifecycle. CIRP Annals Manufacturing Technology 59:740759.
[118] Martin WN, Aggarwal J (1983) Volumetric Descriptions of Objects From Multiple Views. IEEE Transactions on Pattern Analysis and Machine Intelligence 150–158.
[119] Massie TH, Salisbury JK (1994) The Phantom Haptic Interface: A Device for
Probing Virtual Object. Proceedings of ASME Winter Annual Meeting, Sympo-
sium on Haptic Interfaces for Virtual Environment and Teleoperator Systems.
[120] McAtamney L, Corlett EN (1993) RULA: A Survey Method for the Investigation of Work-related Upper Limb Disorders. Applied Ergonomics 24(2):91–99.
[121] McNeely WA, Puterbaugh KD, Troy JJ (1999) Six Degree-of-freedom Haptic
Rendering using Voxel Sampling, SIGGRAPH, Los Angeles, USA.
[122] Mello HD, Sanderson L (1991) A correct and Complete Algorithm for the
Generation of Mechanical Assembly Sequences. IEEE Transaction on Robotics
and Automation 7(2):228240.
[123] Menache A (2010) Understanding Motion Capture for Computer Animation,
Elsevier, store.elsevier.com/.
[124] Miller S, Pegden D (2000) Introduction to Manufacturing Simulation, Proceeding of WSC. Winter Simulation Conference 1:63–66.
[125] Mirtich B, Canny J (1994) Impulse-based Dynamic Simulation. Proceedings of
Workshop on Algorithmic Foundations of Robotics.
[126] Moeslund TB, Hilton A, Krüger V (2006) A Survey of Advances in Vision-based Human Motion Capture and Analysis. Computer Vision and Image Understanding 104:90–126.
[127] Molineros JM (2002) Computer Vision and Augmented Reality for Guiding Assembly, The Pennsylvania State University, State College, PA, USA. (Ph.D. Dissertation).
[128] Montenegro R, Cascón JM, Escobar JM, Rodríguez E, Montero G (2009) An Automatic Strategy for Adaptive Tetrahedral Mesh Generation. Applied Numerical Mathematics 59:2203–2217.
[129] Moore M, Wilhelms J (1988) Collision Detection and Response for Computer
Animation. Computer Graphics 22(4):289298.
[130] Müller H, Klingert A (1993) Surface Interpolation From Cross Sections. Proceeding of Focus on Scientific Visualization 139–190.
[131] Mullen P, Goes FD, Desbrun M, Cohen-Steiner D, Alliez P (2010) Signing the Unsigned: Robust Surface Reconstruction From Raw Point Sets. Euro Graphics Symposium on Geometry Processing, vol. 29(5).
[132] Neugebauer R, Heinig R, Wittstock E, Junghans T, Riedel T, Richter A (2011)
Enhancing Technical Training With Virtual Reality: Case Study. Proceedings of
International Conference on Virtual and Augmented Reality in Education, 4148.
[133] Neugebauer R, Klimant P, Wittstock V (2011) Virtual Reality Based Simula-
tion of NC Programs for Milling Machines. Proceedings of 20th CIRP Design
Conference, Ecole Centrale de Nantes, Nantes, France, 697702.
[134] Neugebauer R, Pürzel F, Schreiber A, Riedel T (2011) Virtual Reality Aided Planning for Energy-autonomous Factories. Proceedings of IEEE International Conference on Industrial Informatics, 250–254.
[135] Neuhöfer J, Odenthal B, Mayer M, Jochems N, Schlick C (2010) Analysis and Modeling of Aimed Movements in Augmented and Virtual Reality Training Systems for Dynamic Human–Robot Cooperation. Proceedings of 11th IFAC/IFIP/IFORS/IEA Symposium on Analysis, Design and Evaluation of Human–Machine Systems.
[136] Niu Q, Chi X, Leu MC, Ochoa J (2008) Image Processing, Geometric Modeling
and Data Management for Development of a Virtual Bone Surgery System.
Computer Aided Surgery 13(1):3040.
[137] Odenthal B, Mayer M, Kabuss W, Schlick C (2012) A Comparative Study of
Head-mounted and Table-mounted Augmented Vision Systems for Assembly
Error Detection. Human Factors and Ergonomics in Manufacturing & Service
Industries.
[138] Odenthal B, Mayer M, Kabuss W, Schlick C (2012) Design and Evaluation of an
Augmented Vision System for HumanRobot Cooperation in Cognitively
Automated Assembly Cells. Proceedings of 9th International Multi-conference
on Systems, Signals and Devices.
[139] Ong SK, Pang Y, Nee AYC (2007) Augmented Reality Aided Assembly
Design and Planning. CIRP Annals Manufacturing Technology 56(1):4952.
[140] Ong SK, Wang ZB (2010) Augmented Assembly Technologies Based on 3D Barehand Interaction. CIRP Annals – Manufacturing Technology 60(1):1–4.
[141] OptiTrack, 2013. http://www.naturalpoint.com/optitrack/.
[142] Ozturk A, Halici U, Ulusoy I, Akagunduz E (2008) 3D Face Reconstruction
Using Stereo Images and Structured Light. Proceedings of IEEE 16th Signal
Processing, Communication and Applications Conference.
[143] Page D, Koschan A, Voisin S, Ali N, Abidi M (2005) 3D CAD Model Generation
of Mechanical Parts Using Coded-Pattern Projection and Laser Triangulation
Systems. Assembly Automation 25(3):230238.
[144] Pan C (2005) Integrating CAD Files and Automatic Assembly Sequence Planning,
Iowa State University, Ames, IA, USA. (Ph.D. Thesis).
[145] Panasonic, 2011. http://pewa.panasonic.com/components/built-in-sensors/
3d-image-sensors/d-imager/.
[146] Pang Y, Nee AYC, Ong SK, Yuan ML, Youcef-Toumi K (2006) Assembly Feature
Design in an Augmented Reality Environment. Assembly Automation
26(1):3443.
[147] Papakostas N, Alexopoulos K, Kopanakis A (2011) Integrating Digital Man-
ufacturing and Simulation Tools in the Assembly Design Process: A Cooperat-
ing Robots Cell Case. CIRP Journal of Manufacturing Science and Technology
4(1):96100.
[148] Pekelny Y, Gotsman C (2008) Articulated Object Reconstruction and Marker-
less Motion Capture From Depth Video. EUROgraphics 27:400408.
[149] Popescu V, Burdea G, Bouzit M(1999) Virtual Reality Simulation Modeling for
a Haptic Glove. Computer Animation 195200.
[150] Puthenveetil SC (2012) Development of Marker-based Human Motion Capture
Systems for Assembly Simulation and Ergonomic Analysis, Missouri University
of Science and Technology, Rolla, MO, USA. (M.S. Thesis).
[151] Raghavan V, Molineros J, Sharma R (1999) Interactive Evaluation of Assembly
Sequences Using Augmented Reality. IEEE Transaction on Robotics and Auto-
mation 15(3):435449.
[152] Rashid MFF, Hutabarat W, Tiwari A (2012) A Review on Assembly Sequence
Planning And Assembly Line Balancing Optimization Using Soft Computing
Approaches. International Journal of Advanced Manufacturing Technology
59:335349.
[153] Reiners D, Didier S, Gudrun K, Stefan M (1998) Augmented Reality for
Construction Tasks: Door Lock Assembly. Proceedings of the International
Workshop on Augmented Reality, 3146.
[154] Reinhart G, Patron C (2003) Integrating Augmented Reality in the Assembly Domain – Fundamentals, Benefits and Applications. CIRP Annals – Manufacturing Technology 52(1):5–8.
[155] Remondino F (2005) 3D Modeling of Close-range Objects: Photogrammetry
or Laser Scanning. Proceedings of SPIE Conference, IS&T Electronic Imaging:
Video Metrics VIII, vol. 5665(374), 216222.
[156] Remondino F, El-hakim S (2006) Image-based 3D Modeling: A Review.
Photogrammetric Record 21(115):269291.
[157] Ritchie J, Lim T, Sung RS, Corney J, Rea H (2008) Part B: The Analysis of Design and Manufacturing Tasks Using Haptic and Immersive VR – Some Case Studies. Product Engineering Tools and Methods Based on Virtual Reality 4507–4522.
[158] RobotWorx, 2011. Digital Tools Provide Solution to High-quality, Low Cost
Manufacturing Demands. http://www.robots.com/articles.php?tag=3142.
[159] Sääski J, Salonen T, Hakkarainen M, Siltanen S, Woodward C, Lempiäinen J (2008) Integration of Design and Assembly Using Augmented Reality. Micro-assembly Technologies and Applications 260:395–404.
[160] Salonen T, Sääski J, Hakkarainen M, Kannetis T, Perakakis M, Siltanen S, Potamianos A, Korkalo O, Woodward C (2007) Demonstration of Assembly Work Using Augmented Reality. Proceedings of 6th ACM International Conference on Image and Video Retrieval, 120–123.
[161] Salvi J, Fernandez S, Pribanic T, Llado X (2010) A State of the Art in Structured Light Patterns for Surface Profilometry. Pattern Recognition 2666–2680.
[162] Salvi J, Pages J, Batlle J (2004) Pattern Codification Strategies in Structured Light Systems. Pattern Recognition 37(4):827–849.
[163] Samy SN, ElMaraghy H (2012) A Model for Measuring Complexity of Auto-
mated and Hybrid Assembly Systems. International Journal of Advanced
Manufacturing Technology 62(6):513533.
[164] Samy SN, ElMaraghy H (2010) A Model for Measuring Products Assembly
Complexity. International Journal of Computer Integrated Manufacturing
23:10151027.
[165] Samy SN, ElMaraghy H (2012) Complexity Mapping of the Product and
Assembly System. Assembly Automation 32(2):135151.
[166] Schlick C, Odenthal B, Mayer M, Neuhöfer J, Grandt M, Kausch B, Mütze-Niewöhner S (2009) Design and Evaluation of an Augmented Vision System for Self-Optimizing Assembly Cells. Industrial Engineering and Ergonomics 539–560.
[167] Schriber TJ, Brunner DT (2007) Inside Discrete-event Simulation Software:
How It Works and Why It Matters. Proceedings of the Winter Simulation
Conference, Piscataway, NJ, USA, 113123.
[168] Seleim A, Azab A, Algeddawy T (2012) Simulation Methods for Changeable
Manufacturing. Proceedings of 45th CIRP Conference on Manufacturing Systems
(CIRP CMS), University of Patras, Athens, Greece, 179184.
[169] Serra X (1989) A System for Sound Analysis/Transformation/Synthesis Based on a Deterministic Plus Stochastic Decomposition, Stanford University, Stanford, CA, USA. (Ph.D. Dissertation).
[170] Seth A, Vance J, Oliver J (2011) Virtual Reality for Assembly Methods Prototyping: A Review. Virtual Reality 15:5–20.
[171] Shapiro LS, Brady JM (1992) Feature-Based Correspondence: An Eigenvector
Approach. Image and Vision Computing 10(5):283288.
[172] Shyamsundar N, Gadh R (2002) Collaborative Virtual Prototyping of Product
Assemblies Over the Internet. Computer-aided Design 34(10):755768.
[173] Sims D (1994) New Realities in Aircraft Design and Manufacture. IEEE
Computer Graphics and Applications 14(2).
[174] Singh M, Basu A, Mandal MK (2008) Human Activity Recognition Based on
Silhouette Directionality. IEEE Transactions on Circuits and Systems for Video
Technology 18(9):12801291.
[175] Smith JS (2003) Survey on the Use of Simulation for Manufacturing System
Design and Operation. Journal of Manufacturing Systems 22:157171.
[176] Ssemakula M, Liao G, Ellis D, Kim K-Y, Sawilowsky S (2009) Introducing a
Flexible Adaptation Framework for Implementing Learning Factory Based
Manufacturing Education. ASEE Annual Conference and Exposition.
[177] Tang A, Owen C, Biocca F, Mou WM (2003) Comparative Effectiveness of
Augmented Reality in Object Assembly. Proceedings of the SIGCHI Conference
on Human Factors in Computing Systems, 7380.
[178] Thornton, J., 2009. At Ford, Ergonomics Meets Immersive Engineering. http://
ehstoday.com/health/ergonomics/ford-ergonomics-simulation-0409/.
[179] Ulupinar F, Nevatia R (1995) Shape from Contour: Straight Homogeneous
Generalized Cylinders and Constant Cross Section Generalized Cylinders.
IEEE Transactions on Pattern Analysis and Machine Intelligence 17(2):
120135.
[180] Urso D, Bartoli D, Landini D, Lanzetta M (2012) Simulazione CAD di operazioni di montaggio [CAD Simulation of Assembly Operations], Il Progettista Industriale. Tecniche Nuove 2:30–33.
[181] Urso D, Bartoli D, Landini D, Lanzetta M (2012) Tutela della salute nelle operazioni di montaggio [Health Protection in Assembly Operations], Il Progettista Industriale. Tecniche Nuove 30–33.
[182] Valentini PP (2009) Interactive Virtual Assembling in Augmented Reality.
International Journal of Interactive Design and Manufacturing 3(2):109119.
[183] Vicon, 2013. http://www.vicon.com/.
[184] Viganò R, Osorio-Gómez G (2011) A Computer Tool to Extract Feasible Assembly Sequences From a Product CAD Model, in an Automated Way. Proceedings of the IMProVe 2011 International Conference on Innovative Methods in Product Design, Venice, Italy.
[185] Vijayakumar B, Kriegman DJ, Ponce J (1996) Structure and Motion of Curved
3D Objects From Monocular Silhouettes. Proceedings of IEEE Conference on
Computer Vision and Pattern Recognition, 327334.
[186] Wagner U, AlGeddawy T, ElMaraghy H, Müller E (2012) The State-of-the-art and Prospects of Learning Factories. Proceedings of CIRP Conference 3:109–114.
[187] Wang D, Hassan O, Morgan K, Weatherill N (2006) Efficient Surface Reconstruction From Contours Based on Two-dimensional Delaunay Triangulation. International Journal for Numerical Methods in Engineering 65:734–751.
[188] Wang J, Chang Q, Xiao G, Wang N, Li S (2011) Data Driven Production
Modeling and Simulation of Complex Automobile General Assembly Plant.
Computers in Industry 62:765775.
[189] Wang L, Keshavarzmanesh S, Feng H, Buchal RO (2009) Assembly Process
Planning and Its Future in Collaborative Manufacturing: A Review. Interna-
tional Journal of Advanced Manufacturing Technology 41:132144.
[190] Wang Q, Li J-R, Wu B-L, Zhang X-M (2010) Live Parametric Design Modifications in CAD-Linked Virtual Environment. International Journal of Advanced Manufacturing Technology 50:859–869.
[191] Wang ZB, Shen Y, Ong SK, Nee AYC (2009) Assembly Design and Evaluation
Based on Barehand Interaction in an Augmented Reality Environment.
Proceedings of International Conference on Cyber Worlds 2128.
[192] Webbink R, Hu SJ (2005) Automatic Generation of Assembly System
Solutions. IEEE Transactions on Automation Science and Engineering 2(1):
3239.
[193] Westkämper E, et al (2005) Smart Factory – Bridging the Gap Between Digital Planning and Reality. Proceedings of 38th International Seminar on Manufacturing Systems, Florianopolis, Brazil.
[194] Whitney DE (2004) Mechanical Assemblies: Their Design, Manufacture, and Role
in Product Development, Oxford University Press, New York.
[195] Wiedenmaier S, Oehme O, Schmidt L, Luczak H (2003) Augmented Reality (AR) for Assembly Processes – Design and Experimental Evaluation. International Journal of Human–Computer Interaction 16(3):497–514.
[196] Winkelbach S, Wahl FM (2001) Shape From 2D Edge Gradient. Proceedings of the 23rd DAGM Symposium on Pattern Recognition, vol. 2191(450), 377–384.
[197] Witkin A, Gleicher M, Welch W (1990) Interactive Dynamics. Computer
Graphics 24(2):1122.
[198] Xia P, Lopes AM, Restivo MT, Yao Y (2012) A New Type Haptics-based Virtual
Environment System for Assembly Training of Complex Products. Interna-
tional Journal of Advanced Manufacturing Technology 58:379396.
[199] Xing Y, Chen G, Lai X, Jin S, Zhou J (2007) Assembly Sequence Planning of
Automobile Body Components Based on Liaison Graph. Assembly Automation
27:157164.
[200] Xsens MVN. Inertial Motion Capture, 2013. http://www.xsens.com/en/
general/mvn.
[201] Xu Y, Meng X, Liu W, Xiang H(2006) A Collaborative Virtual Environment for
Real Time Assembly Design. Proceedings of ACM International Conference on
Virtual Reality Continuum and Its Applications 1417.
[202] Ye J, Bresson X, Goldstein T, Osher S (2010) A Fast Variational Method for Surface Reconstruction From Sets of Scattered Points, UCLA CAM Report.
[203] Yuan ML, Ong SK, Nee AYC (2008) Augmented Reality for Assembly Guidance
Using a Virtual Interactive Tool. International Journal of Production Research
46(7):17451767.
[204] Yun Y, Liu J, Ning R, Zhang Y (2005) Assembly Process Modeling for Virtual
Assembly Process Planning. International Journal of Computer Integrated
Manufacturing 18(6):442451.
[205] Zachmann G (2000) Virtual Reality in Assembly Simulation: Collision Detection, Simulation Algorithms, and Interaction Techniques, Technische Universität Darmstadt, Darmstadt. (Dissertation).
[206] Zauner J, Haller M, Brandl A (2003) Authoring of a Mixed Reality Assembly
Instructor for Hierarchical Structures. Proceedings of 2nd IEEE/ACM Interna-
tional Symposium on Mixed and Augmented Reality, 237246.
[207] Zha XF, Lim SYE, Fok SC (1998) Integrated Intelligent Design and Assembly
Planning: A Survey. International Journal of Advanced Manufacturing Technol-
ogy 14:664685.
[208] Zhai R, Lin C (2011) 3D Model Generation of Complex Objects From Multiple
Range Images. Proceedings of International Conference on Electric Information
and Control Engineering (ICEICE), 14.
[209] Zhang J, Ong SK, Nee AYC (2011) RFID-assisted Assembly Guidance System in an Augmented Reality Environment. International Journal of Production Research 49(13):3919–3938.
[210] Zhu W, Chadda A, Leu MC, Liu XF (2011) Real-time Automated Simulation Generation Based on CAD Modeling and Motion Capture. Journal of Computer Aided Design and Applications PACE(1):103–121.
[211] Zhu W, Daphalapurkar CP, Puthenveetil SC, Leu MC, Liu XF, Chang AM, Gilpin-
Mcminn JK, Hu PH, Snodgrass SD (2012) Motion Capture Of Fastening
Operation Using Wiimotes for Ergonomic Analysis. Proceedings of Interna-
tional Symposium on Flexible Automation, St. Louis, USA.
[212] Zhu W, Vader A, Chadda A, Leu MC, Liu XF, Vance J (2010) Low-cost Versatile
Motion Tracking for Assembly Simulation. Proceedings of International Sym-
posium on Flexible Automation, Tokyo, Japan.
[213] Zhu W, Vader A, Chadda A, Leu MC, Liu XF, Vance J (2011) Wii Remote Based Low-cost Motion Capture for Automated Assembly Simulation. Virtual Reality 1–12.
[214] Zhu X, Hu SJ, Koren Y, Huang N (2012) A Complexity Model for Sequence
Planning in Mixed-Model Assembly Lines. Journal of Manufacturing Systems
31(2):121130.
[215] Zilles CB, Salisbury JK (1995) A Constraint-based God-object Method for
Haptic Display. Proceedings of International Conference on Intelligent Robots
and Systems, vol. 3, 146151.