Fig. 1. Schematic of a CAD model based assembly simulation, planning and training
system [57].
Contents lists available at SciVerse ScienceDirect
CIRP Annals - Manufacturing Technology
journal homepage: http://ees.elsevier.com/cirp/default.asp
0007-8506/$ – see front matter © 2013 CIRP.
http://dx.doi.org/10.1016/j.cirp.2013.05.005
between product design, manufacturing planning, and actual
production. Such a system also serves as the basis of learning
factories, which are ideal for transferring research outcomes to
industry. New changeable and recongurable manufacturing
systems can be investigated, in which novel system concepts
and changeability enablers can be developed, realized, tested, and
evaluated [57].
CAD model based simulations have been developed with
functions spanning from conceptual and detailed design to
manufacturing process planning to product maintenance. They
have provided insight into product design and have been shown to
reduce manufacturing time and costs and to improve product
quality significantly. A CAD model based simulation system for
assembly planning, training and assessment is illustrated in Fig. 2.
The 3D model of a physical part can be generated using CAD
software or a reverse engineering process with the acquisition of
part geometric data in digital form from an existing physical part.
Then, the movement of the physical part and the human operator,
as well as their interaction with each other and with other objects
in the physical environment, can be tracked using a motion capture
system and other input devices, such as a sensory glove, a
microphone, etc. Furthermore, stereoscopic viewing augmented by
auditory and haptic sensations can make the operator feel fully
immersed in a virtual reality (VR) environment.
Generally speaking, there are three objectives in CAD model
based assembly simulation: (1) evaluating the assembly process in
the early design stage; (2) generating practical and suitable
assembly operation sequences; and (3) creating a virtual assembly
platform for ofine training of operators on assembly tasks.
With motion capture and 3D visualization capabilities, inter-
actions among products, processes and human operators can be
analyzed and evaluated to identify potential problems during
assembly, such as awkward postures, poor workcell layout,
insufficient tools and fixtures, and inability to access parts. During
the assembly operation, the contact force can be estimated and
transmitted to the operator using a haptic device so that the
operator can feel the physical contact. This could increase the
fidelity of simulation and can be used to simulate complex
assembly tasks.
1.2.2. Evolution of assembly planning research
Assembly is a critical process in manufacturing that may
consume up to 50% of the total production time and account for
more than 20% of the total manufacturing cost in traditional
industrial manufacturing [144]. Assembly automation and opti-
mization have been studied thoroughly in several areas, including
the following [152,191,207]. Assembly design has been studied and
applied to reduce assembly costs in the product conception and
design stage [20]. Assembly sequence planning has been conducted
to determine the optimal assembly sequence of components and
other aspects, including tool changes, fixture design, assembly
freedom, etc., in the component assembly stage [107]. It affects
how quickly and cost-effectively the product is assembled. At this
stage, ergonomic analyses also must be performed to consider
human factors in manual assembly. System configuration generation
is the next stage [192]. Traditionally, assembly systems are serial,
but non-serial configurations are now more widely used
[81,87,104]. Assembly line balancing has been investigated to
assign various assembly tasks to different workstations with the
objective of having equal or almost equal loads among the
workstations in the production planning stage [102,214]. All of
these research efforts aim to build a well-designed assembly
process in order to improve product quality and production
efficiency and to reduce assembly costs and the product's time to
market.
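The line-balancing objective described above can be illustrated with a minimal greedy sketch. The task names, durations, and the largest-candidate-rule heuristic below are illustrative assumptions, not a method taken from the cited works:

```python
def balance_line(task_times, precedence, cycle_time):
    """Greedy (largest-candidate-rule) line balancing sketch.

    task_times: {task: duration}; precedence: {task: set of predecessors};
    cycle_time: maximum total task duration allowed per workstation.
    """
    done, stations = set(), []
    remaining = set(task_times)
    while remaining:
        station, load = [], 0.0
        while True:
            # tasks whose predecessors are already assigned and that still fit
            candidates = [t for t in remaining
                          if precedence.get(t, set()) <= done | set(station)
                          and load + task_times[t] <= cycle_time]
            if not candidates:
                break
            t = max(candidates, key=task_times.get)  # largest candidate first
            station.append(t)
            load += task_times[t]
            remaining.discard(t)
        if not station:
            raise ValueError("a task exceeds the cycle time or precedence is cyclic")
        done |= set(station)
        stations.append(station)
    return stations
```

For four tasks A(4), B(3), C(2), D(5) with A preceding B and C, and B, C preceding D, a cycle time of 7 yields two stations with equal loads. Real balancing problems add station costs, parallel stations, and stochastic task times, which this sketch ignores.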
Historically, assembly personnel have scheduled assembly
plans for mechanical products based on existing assembly lines/
cells and their own experiences, and they have veried the plans by
assembling physical prototypes. With more complicated assembly
tasks or new products/plants, this method becomes more time-
consuming, expensive and error-prone.
Computer-aided assembly planning (CAAP), or computer-aided
assembly process planning (CAAPP), has the ability to automate
assembly planning to reduce manpower requirements and
simplify the planning process. ElMaraghy [55] discussed the
evolution and future perspectives of CAAP. Traditionally, CAAP
generates assembly sequences by studying the disassembly process.
Later CAAP systems utilize intelligent identification and group
geometric features based on automatic feature recognition
[39,116] to generate assembly sequences. Contact/connection/
interference features and part surfaces/volumes can be auto-
matically extracted from CAD files [47]. Generally speaking, CAAP
systems have three major limitations. First, the number of possible
assembly sequences will increase exponentially with the number
of parts requiring assembly; therefore, selecting an optimal or
near-optimal assembly sequence for a given part becomes more
difficult. Secondly, CAAP cannot incorporate expert knowledge
from the assembler, which is essential to developing an efficient
and successful assembly sequence. Thirdly, CAAP does not have
human interaction with the assembled parts, so it cannot evaluate
issues related to ergonomics, such as awkward postures and
reaching angles. These limitations have led CAAP into the realm of
VR based assembly planning.
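The traditional disassembly-based strategy mentioned above, and the combinatorial explosion that is CAAP's first limitation, can both be seen in a toy sketch. The part names, the `blocked_by` constraint encoding, and the depth-first enumeration are illustrative assumptions rather than any specific published algorithm:

```python
def assembly_sequences(parts, blocked_by):
    """Enumerate feasible assembly sequences by reverse disassembly.

    blocked_by[p]: parts that must be removed before p can be removed.
    A feasible assembly sequence is a feasible removal order reversed.
    """
    sequences = []

    def disassemble(remaining, order):
        if not remaining:
            sequences.append(list(reversed(order)))  # assembly = reverse removal
            return
        for p in sorted(remaining):
            # p is removable only if nothing still present blocks it
            if not (blocked_by.get(p, set()) & remaining):
                disassemble(remaining - {p}, order + [p])

    disassemble(frozenset(parts), [])
    return sequences
```

With no constraints, n parts yield n! sequences (three parts already give six), which is the exponential growth noted in the text; constraints prune the search, but selecting a near-optimal sequence among the survivors remains the hard part.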
VR technology can simulate an assembly operation with 3D
human-computer interactions, including visual, haptic and
auditory interfaces. With VR technology, human assembly
planners can immerse themselves inside a virtual environment (VE), implement the
design concept in the early stage, and evaluate assembly/
disassembly sequences and operations to analyze the design of
assembly processes and systems.
Assembly planning has evolved from manual planning to
computer-aided planning to VR based planning, with the
objectives of shortening assembly time, reducing costs, increasing
operator safety, and improving production efficiency and product
quality. This evolution is depicted in Fig. 3, which shows
publications from the Compendex & GEOBASE databases [61] on
computer-aided assembly planning and virtual assembly simula-
tion from 1972 to 2011. CAAP research peaked during the 1990s,
while virtual assembly simulation research has been increasing
steadily. This implies that the simpler virtual assembly simulation
technology is gradually replacing the paradigm of more complex
algorithmic assembly planning.
F = k d N (1)

where k is the object's stiffness, d is the shortest distance from the
tool point to the object's surface, and N represents the vector from
the tool point to the contact point (i.e., this vector lies along the
surface normal at the contact point). Besides the force caused by the
object's stiffness, friction occurs through the relative motion of two
surfaces in contact with one another and can be estimated using the
Coulombic friction model:

F = F_C \,\mathrm{sgn}(v) (2)

where v is the relative velocity and F_C is the Coulombic force,
which equals the normal force multiplied by the coefficient of
friction (which depends on the surface properties). More details can
be found in [17] about other formulations that have been used to
improve the Coulombic model by considering more subtle frictional
effects. To include the inertial force, a mass-spring-damper model
can be used to estimate the contact force as follows:

M\ddot{u}_t + D\dot{u}_t + K(u_t) = f_t (3)

where M is the mass, D is the damping constant, K(u_t) represents
the stiffness force, and u_t is the current contact point. The details of
this method can be found in [49].
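The three contact-force formulations in Eqs. (1)-(3) fit in a few lines of code. This is a hedged sketch: the function names, the scalar friction model, and the semi-implicit Euler step for the mass-spring-damper system are illustrative choices, not the formulations of the cited references:

```python
import numpy as np

def stiffness_force(k, tool_point, contact_point):
    """Eq. (1): F = k*d*N, with N the unit vector from tool point to contact."""
    vec = np.asarray(contact_point, float) - np.asarray(tool_point, float)
    d = np.linalg.norm(vec)           # shortest distance to the surface
    if d == 0.0:
        return np.zeros(3)
    return k * d * (vec / d)          # magnitude k*d along the normal N

def coulomb_friction(f_normal, mu, v_rel):
    """Eq. (2): friction of magnitude mu*|F_n| opposing the relative motion."""
    return -mu * np.linalg.norm(f_normal) * np.sign(v_rel)

def msd_step(u, u_dot, f_ext, M, D, K, dt):
    """Eq. (3): M*u'' + D*u' + K*u = f, one semi-implicit Euler step
    (linear stiffness assumed for simplicity)."""
    u_dd = (f_ext - D * u_dot - K * u) / M
    u_dot = u_dot + dt * u_dd
    u = u + dt * u_dot
    return u, u_dot
```

For example, a 1 cm penetration into an object of stiffness 100 N/m along +z yields a 1 N restoring force; the mass-spring-damper step would be iterated at the haptic update rate.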
The single-point object representation for force computation
has the following drawbacks: (i) it does not represent the 3D shape
of a virtual tool, and (ii) it models a workpiece with inhomoge-
neous material as one having the properties of homogeneous
material. Single-point force estimation rarely reflects the force
magnitude and direction accurately, especially when the tools and/
or workpieces are freeform objects. This problem can be overcome
by using a multiple-point object representation for collision
detection and force computation. In this approach, the workpiece
in the virtual environment can be represented by a voxmap, and
the tool can be represented by a point shell with a set of surface
Table 2
Comparison between different motion tracking techniques.

Optical systems
  Marker based, passive markers. Pros: no cable required for activating the markers; high accuracy. Cons: markers may interfere with human movement; occlusion may occur.
  Marker based, active markers. Pros: less expensive than using passive markers; high accuracy. Cons: power supply is required for the markers; markers may interfere with human movement; occlusion may occur.
  Marker-less. Pros: no markers needed to track objects. Cons: lower accuracy; less reliable.
Non-optical systems
  Electromagnetic. Pros: able to provide good accuracy. Cons: relatively expensive; more power supply required; easily affected by metallic objects in the environment.
  Inertial. Pros: easily portable. Cons: substantial measurement drift may accumulate over time.
M.C. Leu et al. / CIRP Annals - Manufacturing Technology 62 (2013) 799–822
points and the associated inward pointing surface normals at these
points [121]. When a tool point interpenetrates a workpiece voxel
(volumetric element), the interpenetration depth can be calculated
as the distance d from the tool point to the tangent plane, which is
constructed as a plane passing through the voxel's center and
having the same normal as the surface normal at the tool point. The
force at that point can be calculated using the distance d and
the workpiece stiffness at that point. The net force acting between
the tool and the workpiece can then be obtained by summing the
vector forces computed at the various tool points from such point-
voxel intersections.
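The point-shell/voxmap force summation just described can be sketched as follows. Representing the voxmap as a set of occupied voxel index triples, and the sign convention placing the force along the point's stored normal, are simplifying assumptions of this sketch (production implementations such as [121] use packed occupancy grids and more careful depth logic):

```python
import numpy as np

def point_shell_force(points, normals, voxmap, voxel_size, stiffness):
    """Sum penalty forces over point-shell / voxmap intersections.

    points:  (N,3) tool surface points; normals: (N,3) unit normals
             stored with each point (assumed inward-pointing).
    voxmap:  set of occupied voxel index triples (a sketch-level stand-in
             for a real occupancy grid); voxel_size: voxel edge length.
    """
    total = np.zeros(3)
    for p, n in zip(points, normals):
        idx = tuple(np.floor(p / voxel_size).astype(int))
        if idx in voxmap:                        # point penetrates a voxel
            center = (np.array(idx) + 0.5) * voxel_size
            # depth d: distance from the point to the tangent plane through
            # the voxel center with the point's surface normal
            d = float(np.dot(n, center - p))
            if d > 0.0:
                total += stiffness * d * n       # per-point force k*d along n
    return total
```

Summing per-point contributions in this way yields the net force between tool and workpiece, and a spatially varying `stiffness` lookup would address the inhomogeneous-material drawback noted above.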
3.2.3. Haptic rendering
Haptic rendering includes force rendering and tactile rendering.
Sensors and actuators can be combined in a device to measure the
tool's contact position with a virtual object and apply force or other
haptic displays to the user at the contact position. The selection of
haptic rendering hardware for a given application should take into
consideration the number of degrees of freedom needed, max-
imum and sustainable force levels, friction, stiffness, etc. [28].
The Sensable PHANTOM haptic device from Geomagic [71], as
shown in Fig. 9, has been used widely as a force feedback device. An
advanced version of this device can provide not only force feedback
in three translational degrees of freedom but also torque feedback
in three rotational degrees of freedom. The force feedback is
applied to the whole hand at the point of contact, which is called
the haptic interface point. This makes the PHANTOM device
suitable for VR applications with point interaction for the whole
hand.
If manipulation of virtual objects with haptic feedback provided
to individual fingers (not just the whole hand) is of interest, a
wearable haptic device such as the CyberGrasp [42] or Rutgers
Master II [23] can be used. Shown in the left of Fig. 10, CyberGrasp
consists of a sensory glove (CyberGlove) and an exoskeleton
mechanism. It can provide force feedback to each finger and the
palm, so it is suitable for more complex manipulations in the VE.
The position of each finger is measured by the sensory glove, and
this information is used to compute the force to be provided by the
exoskeleton mechanism to each finger for interaction with the
virtual object. The forces generated by the CyberGrasp are
grounded in the palm or in the back of the hand; thus, this device
can only be used to feel the size and shape of a virtual object, not its
weight. CyberForce, shown in the right of Fig. 10, possesses the key
features of both the PHANToM and CyberGrasp devices. It can
provide a very natural haptic interface in the VE while interacting
with a simulated graphical object, i.e., being able to sense the
object's shape and size as well as its mass and inertia.
Because all haptic feedback devices incur significant costs and
have strict geometry, placement and workspace requirements,
which prevent them from being used widely, pseudo-haptic
feedback was proposed by Lecuyer et al. [110] to provide haptic
illusions using visual feedback in the VE. These researchers
conducted some experiments to show the feasibility of providing a
sense of touch without using complex mechanical devices. Lecuyer
[109] surveyed research and applications of pseudo-haptic feed-
back, including simulations of various haptic properties such as the
stiffness of a virtual spring, the texture of an image, and the mass of
a virtual object.
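One common pseudo-haptic technique in this line of work modifies the control/display ratio, so the visual cursor moves less than the hand over a "stiff" or "heavy" virtual region, suggesting resistance without any actuator. The linear mapping below is an assumption for illustration, not Lecuyer et al.'s exact formulation:

```python
def cd_ratio(resistance):
    """Control/display ratio for a region; 0 = free space, values toward 1
    make the region feel progressively 'stiffer' (a linear mapping is an
    assumption of this sketch)."""
    return 1.0 - resistance

def move_cursor(cursor, input_delta, resistance):
    """Advance the visual cursor: the real hand motion (input_delta) is
    scaled down over resistant regions, creating a haptic illusion purely
    through visual feedback."""
    return cursor + input_delta * cd_ratio(resistance)
```

For instance, a 10 mm hand motion over a region with resistance 0.5 moves the cursor only 5 mm, which users tend to perceive as physical resistance.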
3.3. Auditory modeling and rendering
In virtual assembly, audio clues can be used to augment visual
and haptic displays. Auditory rendering is especially helpful when
haptic feedback is not available. Synthetic sound can be used to
approximate the real sound generated from the physical assembly,
which can make the simulation more realistic.
Physics-based sound modeling is too computationally expen-
sive for real-time rendering required for virtual assembly. Spectral
modeling can be used as the basis for sound synthesis in virtual
assembly simulation. Its general form is [169]:
s(t) = \sum_{k=1}^{N} A_k \sin(\omega_k t + \theta_k) + r(t) (4)

where s(t) is the input sound signal; A_k, \omega_k, and \theta_k are the
amplitude, frequency and phase of the kth sinusoid; and r(t) is the
residue. The sinusoidal, or deterministic, components in the sound
model correspond to the main modes of vibration in the physical
system. The residue, which is stochastic in nature, comprises the
energy that is not transformed into deterministic vibrations. The
output of spectral modeling consists of a set of peak frequencies,
magnitudes, and phases corresponding to the sinusoidal compo-
nents, as well as the residual part of the time-varying signal. After
performing Fast Fourier Transform (FFT) for each windowed
portion of a given signal, a series of complex spectra are obtained,
from which the magnitude spectra are calculated. After the sound's
sinusoids have been obtained, the next step is to obtain the
spectrum of the residual part. This can be done by subtracting the
sinusoids from the original sound in the time domain and then
performing FFT on the resulting signal using the same window
function as that used for the original sound signal.
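The analysis stage described above (window, FFT, peak picking) can be sketched with NumPy. As a simplification, this sketch takes the strongest spectral bins as "peaks" and zeroes them to approximate the residual spectrum, rather than performing the time-domain sinusoid subtraction the text describes; function and parameter names are illustrative:

```python
import numpy as np

def analyze_frame(signal, fs, n_peaks=3):
    """One spectral-analysis frame: window, FFT, pick the strongest peaks.

    Returns (peaks, residual): peaks as (frequency, magnitude, phase)
    tuples, residual as the spectrum with the peak bins removed.
    """
    n = len(signal)
    window = np.hanning(n)
    spectrum = np.fft.rfft(signal * window)
    mags = np.abs(spectrum)
    freqs = np.fft.rfftfreq(n, 1.0 / fs)
    order = np.argsort(mags)[::-1][:n_peaks]      # strongest bins first
    peaks = [(float(freqs[i]), float(mags[i]), float(np.angle(spectrum[i])))
             for i in order]
    residual = spectrum.copy()
    residual[order] = 0.0                         # crude residual estimate
    return peaks, residual
```

Running this on a frame containing a 440 Hz tone with a weaker 880 Hz partial recovers both frequencies to within a bin width or so; proper peak interpolation would refine them further.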
For auditory rendering, sound synthesis is performed first by
transforming the input peak frequencies, magnitudes and phases
into time-domain sinusoids and then adding the sinusoids frame
by frame. The synthesis of the residual part of the sound takes the
residue's enveloped spectrum and applies an Inverse Fast Fourier
Transform (IFFT) with a window function to this spectrum to
generate a stochastic signal in the time domain. The sinusoidal and
residual parts then are added together frame by frame to create the
synthesized sound for auditory rendering, which outputs the result
of the synthesis to sound generation hardware such as a sound card
and loud speaker, so that the user of the VE can hear it [136].
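The resynthesis step just described (sum the sinusoids, then add a stochastic residual shaped by the residual envelope) can be sketched for a single frame; random phases under the magnitude envelope stand in for the stochastic signal, and the names are illustrative:

```python
import numpy as np

def synthesize_frame(peaks, residual_env, n, fs):
    """Additive resynthesis of one frame.

    peaks:        (frequency, amplitude, phase) tuples of the sinusoids.
    residual_env: magnitude envelope of the residual, length n//2 + 1.
    """
    t = np.arange(n) / fs
    frame = np.zeros(n)
    for freq, amp, phase in peaks:                 # deterministic part
        frame += amp * np.sin(2 * np.pi * freq * t + phase)
    # stochastic part: random-phase spectrum under the residual envelope
    rng = np.random.default_rng()
    phases = np.exp(1j * rng.uniform(0, 2 * np.pi, len(residual_env)))
    frame += np.fft.irfft(residual_env * phases, n)
    return frame
```

Successive frames would be overlap-added and streamed to the sound card; with a zero residual envelope the output reduces to the pure sum of sinusoids.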
3.4. Multi-modal rendering
A major challenge in developing a multi-modal VR system is the
coordination of computations for rendering graphics, haptics and
sound, which require very different update rates. Multi-threading
can be used to simultaneously satisfy the different requirements of
update rates for the various rendering modalities. For example,
multi-modal rendering computations can be conducted in the
following threads:
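A minimal sketch of such a multi-rate, multi-threaded scheme follows. The thread names and rates are illustrative (haptics conventionally needs on the order of 1 kHz updates, graphics only 30-60 Hz; the 100 Hz "audio" control loop is an assumption, since real audio runs via buffered callbacks):

```python
import threading
import time

def render_loop(name, rate_hz, stop, counts):
    """Run one rendering modality at its own update rate."""
    period = 1.0 / rate_hz
    while not stop.is_set():
        counts[name] += 1          # placeholder for the real render call
        time.sleep(period)

stop = threading.Event()
counts = {"graphics": 0, "haptics": 0, "audio": 0}
threads = [threading.Thread(target=render_loop, args=(n, r, stop, counts))
           for n, r in [("graphics", 60), ("haptics", 1000), ("audio", 100)]]
for t in threads:
    t.start()
time.sleep(0.5)                    # let the loops run briefly
stop.set()
for t in threads:
    t.join()
```

After half a second the haptics counter dwarfs the graphics counter, which is exactly the rate mismatch the shared scene state must be synchronized across (e.g., with locks or lock-free snapshots, omitted here).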
Fig. 20. IDEF0 diagram: inputs, outputs, mechanisms, and controls governing
product simulators.
Fig. 22. Role of simulation throughout manufacturing system life cycle [53].
Fig. 23. IDEF0 diagram: inputs, outputs, mechanisms, and controls governing
system simulators.
through adaptation. The unified commonality pattern illustrates a
recurring footprint that relates the different components of
manufacturing. Finding those patterns leads to the development
of targeted, integrated product design and assembly system
synthesis models.
The issues that have arisen because of the increased variety of
products and the high complexity of modern manufacturing systems
make the use of simulation during the manufacturing/assembly
system design stages a necessity. Simulation can be used to analyze,
evaluate, and compare system design alternatives and to select
those that best suit the changing and more customized products,
integrating their design and synthesis with the design of their
manufacturing and assembly systems.
Early design demonstration, verification, and testing offer the
best chance to improve the design of new products, processes, or
systems, especially for complex assemblies, and to enhance the
quality of digital simulation models based on real data capture. The
development of didactic manufacturing systems has made more
real-time data available. The transition from digital to physical
factories is necessary to facilitate the intelligent utilization of
online data. Saving modeling time and helping industrial engineers
with limited simulation knowledge and experience to conduct
simulation studies is a benefit. It also provides a method for rapid
system prototyping.
An example of a full-scale physical assembly simulator is the
state-of-the-art transformable assembly platform at the Intelligent
Manufacturing System Center (IMSC) at the University of Windsor
[57] used for integrating product design with system design,
planning, and usage, as shown in Fig. 24. Assessing and managing
complexity at the earliest stages of product and assembly system
design and before the physical system exists is necessary for
avoiding time-consuming and costly changes in the physical
assembly system.
In today's manufacturing environment, change and increased
variety have become constant. Variety may increase profit because
of increased sales, but it can contribute substantially to the increased
cost and complexity of manufacturing. To enhance profits under
increased variety, the complexity of the product/system must be
managed. The economic importance of assembly has led to research
efforts to improve the efficiency and cost effectiveness of assembly
by measuring product assembly complexity [164] and system
complexity [165] and by managing their mutual effects on the
integrated product/assembly system design [163].
7.6. Role of assembly system simulators in education and training
In the last few years, the concept of Learning Factories has
gained popularity, and some have been installed in Europe
[1,88,176,193] and North America [57]. The objective is to provide
engineers, students, and researchers with a valuable learning and
training experience in a realistic setting. Wagner et al. [186]
recently conducted a comprehensive literature survey to investi-
gate the existing learning factories as prototypes for changeable
and reconfigurable manufacturing systems. They established a
classication scheme to explore and evaluate the state of the art of
learning factories and to examine their suitability for teaching and
research. Learning factories are not present in developing countries
due to the high cost associated with establishing and operating
them. However, simpler or limited variants exist in many other
countries and prove very useful for education, research, and
industrial development purposes [186].
Learning factories comprise both physical and digital environ-
ments. The physical environment includes real system compo-
nents, such as machining, assembly, logistics, controls, and
information and energy flow modules. Integrated planning,
modeling, visualization, and simulation tools are part of the
digital environment, which is also integrated with the physical
system. This offers new possibilities for transferring digitally
created solutions to a real system for testing, evaluation, and
demonstration. Furthermore, there is automatic feedback from the
real system components to the digital environment for adaptation
and change planning [57].
Digital system simulation methods and tools have seen
significant advances in the last two decades and now are equipped
with powerful graphical user interfaces for model and data input
and dynamically animated displays of simulation results. Some
simulators also have 3D visualization capabilities, which yield a
realistic and immersive experience for evaluating the systems
being simulated and assessing their performance. All of these
simulation tools can effectively enrich the experiential learning of
students and allow users to make better decisions about the design
and operation of the modeled manufacturing/assembly systems.
They can be used by senior undergraduate and graduate students
as well as researchers and practicing engineers.
As an example of a learning factory, the iFactory [65] at the
University of Windsor can be changed physically in a short amount
of time. In response to changing products and production
demands, it can be reconfigured into different production lines
comprised of individual modules of production cells, such as
conveyors, branches, automated storage and retrieval systems,
various assembly cells, and inspection cells, including the newest
automation technology of drives, assembly robots, and vision
systems. The plug & play intelligent system interface and its
modularity enable quick and simple implementation of many
different production layouts and system component combinations
for effective and creative learning and experimentation. It can
physically demonstrate the impact of new technologies, product
and system innovations, and changes in market conditions. This
system is supported by advanced CAD software, designers'
interactive screens, and the latest rapid prototyping equipment,
system simulation software, and reconfigurable process and
production planning.
The original product assembled by the iFactory system was a
desk set with 200 variants. Variety was created by the different cups
and gadgets that could be placed on top of the product
platform. Other product platform, cup, and gadget variants also
have been produced using rapid prototyping to increase the ability
to respond to changing customer needs.
8. Applications
CAD model based simulation provides many benefits in product
development and assembly. In product development, it can reduce
the number of modifications, leading to reduced product cost and
time to market. In assembly design, it can be used for assembly
operation planning and the design of tools, fixtures, cells, and an
assembly line. In assembly process verification, it can be used for
accessibility verification, error prevention/reduction, interference
checking, and ergonomic analysis. In assembly training, it can be
used for documentation, computer-assisted training, VE training,
Fig. 24. Integrated digital and physical system simulator at IMSC, University of
Windsor.
and performance assessment. Furthermore, it can be used with
suppliers and customers to visually communicate shared solutions
and databases, and accumulate and manage technology know-
how. This section provides application examples that have
benefited from utilizing some of the technologies discussed in
the previous sections.
8.1. Automobile assembly planning
At Husqvarna (BMW group), motorcycle assembly planning is
conducted at the design stage by product designers based on
simulations generated from CAD data [115]. Other aspects such as
machine layout design, line balancing, scheduling, etc. are carried
out at the manufacturing stage using various simulation tools that
are not always based on CAD data [181].
CAD based modeling and simulation has been implemented by
Piaggio, the biggest European manufacturer of motorcycles, as
shown in Fig. 25 and detailed in Fig. 26 [180]. Fig. 25 shows that
designers and manufacturing specialists concurrently design
scooter parts and assembly devices. Manual and automated
assembly tools are modeled using a CAD system. Both standard
tools (screwdrivers, gauges, etc.) and custom devices (calipers,
pallets, fixtures, etc.) are included in the simulated assembly
sequences in order to evaluate feasibility, check for interferences,
assess tolerances, and interactively make any necessary changes to
both parts and tools. Snapshots and short movies of assembly
phases are included in the manufacturing plan as instructions for
documentation and staff training purposes.
Fig. 26 details the main benets of this CAD model based
approach. Tools such as screwdrivers and go/no-go gauges can be
assessed before they are purchased, and suppliers can visualize the
use of custom devices to reduce design errors, leading to co-
makership and co-design. Through the use of CAD model based
simulation, assembly xtures can be matched with parts, and tool
accessibility can be virtually tested from different directions.
PROKON (PROduktionsgerechte KONstruktion), which means
design for good assembly ability, is currently applied by Magna
International Inc., a world leader in automotive supply. Geometric
and physical information of parts and their relationships are
extracted from CAD models and evaluated alongside other product
information. Different assembly options are evaluated according to
a set of 10 rules by a team of PROKON designers and industrial
engineers in order to achieve easier assembly and consequently
reduce costs. The method has saved on the order of 20–40% of the time
required for product development. CAD models or product
specications are received from car manufacturers, and parts
are ready for production in six months. From this experience, it can
be concluded that simplified and standardized methods often can
produce significant practical benefits and are easier to implement
in large-scale manufacturing.
Ford Motor Company [123] has developed a system called the
Human Occupant Package Simulator (HOPS). This system has a
large database of captured motions of drivers and passengers
inside vehicles. Ford designers use digital humans informed by
these motion datasets inside virtual vehicle designs to analyze
their interactions with the vehicles. This helps them improve the
ergonomics of their vehicle designs as much as possible before
building physical models and prototypes. As another example,
simulation tools are applied in [147] to a work cell with
cooperating robots in mass customization in the automotive
industry.
8.2. Aircraft assembly simulation and ergonomic analysis
Assembly processes usually involve a number of manual
operations performed by human operators working on the shop
floor. For example, fastening is a major operation performed in
aircraft assembly. Mechanics performing fastening operations at
awkward postures may risk ergonomic injuries [97]. Ergonomics is
an important issue because nearly one-third of workplace injuries
are ergonomically related [30]. To design safe workplaces, the
probable causes of injuries can be identified by simulating work
conditions and quantifying risk factors [16,80].
Researchers at the Missouri University of Science and Technol-
ogy have developed a methodology using a low-cost motion
capture system to track assembly operations using both a physical
mockup and an immersive virtual environment, with the captured
motion data used in a CAD model based simulation for ergonomic
Fig. 26. CAD model based assembly tool and xture design.
analysis [43,150]. They have demonstrated this system's utility for
investigating the fastening operation and its potential cause of
ergonomics related injuries in the aircraft manufacturing industry.
8.2.1. Simulated assembly using a physical mockup
A physical mockup for a fuselage belly section, as shown in
Fig. 27, was built to perform a simulated fastening operation.
Twelve Optitrack cameras were set up as a motion capture system
to eliminate any possible occlusion from the mockup. A Kalman
filter was implemented to increase the accuracy and stability of the
data obtained by the motion capture system. The generated data
were used in simulation with Siemens Jack software for ergonomic
analysis [150].
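The Kalman filtering mentioned above can be illustrated with a generic constant-velocity filter applied to one marker coordinate. This is a sketch under stated assumptions (scalar position measurements, hand-tuned `q` and `r` noise parameters), not the authors' actual implementation from [150]:

```python
import numpy as np

def kalman_smooth(measurements, dt, q=1e-3, r=1e-2):
    """Constant-velocity Kalman filter for one marker coordinate.

    q: process-noise scale, r: measurement-noise variance (tuning
    assumptions). Returns the filtered position estimates.
    """
    F = np.array([[1.0, dt], [0.0, 1.0]])      # state transition
    H = np.array([[1.0, 0.0]])                 # we observe position only
    Q = q * np.eye(2)
    R = np.array([[r]])
    x = np.array([[measurements[0]], [0.0]])   # state: position, velocity
    P = np.eye(2)                              # state covariance
    out = []
    for z in measurements:
        # predict
        x = F @ x
        P = F @ P @ F.T + Q
        # update with the new measurement
        y = np.array([[z]]) - H @ x
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ y
        P = (np.eye(2) - K @ H) @ P
        out.append(float(x[0, 0]))
    return out
```

In a motion capture pipeline one such filter (or a joint 3D state) runs per marker, suppressing jitter in the raw optical data before the poses are mapped onto the digital human model.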
8.2.2. Virtual assembly inside a CAVE
A 3 m × 3 m × 3 m four-walled CAVE (Cave Automatic Virtual
Environment) was utilized to provide a realistic 3D virtual
environment. The layout of this CAVE included three rear-
projected walls and a down-projected floor using CRT projectors.
The projections on the walls and floor of the CAVE were monitored
by four synchronized computers that formed a cluster, with one
computer serving as the master and the others as slaves. The
scenes rendered on the walls and floor were active stereo images
created at a frame rate of 85 Hz. Shutter glasses were used in sync
with the frequency of the stereo vision to create a stereoscopic
viewing effect. Virtual reality toolkits, including VR Juggler and
OpenGL, were used to create a VR environment in the CAVE. The VR
environment was configured using VRJConfig, a Java based
graphical user interface. A CAD model of the belly section of an
aircraft fuselage to be displayed in the CAVE was created. A
triangular mesh representation of the CAD model and texture in
bitmap format were used. A polygon rendering algorithm was
developed and implemented with OpenGL to render the scene. The
VR scene was composed by placing the four rendered scenes side
by side in a predefined layout using the information from the VR
Juggler configuration file.
The left of Fig. 28 shows a virtual fastening operation on the
virtual fuselage inside the CAVE. After setting the world coordinate
system, the motion capture system recorded the initial position
and orientation of three body segments of the human wearing a
body suit with markers on it. This information was used to map the
human performing the virtual assembly task onto the digital
human model. Once the body pose information was recorded, the
system began recording the motion data and simulating the virtual
assembly in real time; see the right of Fig. 28.
8.2.3. Ergonomic analysis
Jack's Task Analysis Toolkit (TAT) is a set of human factor
analysis tools that can be used to perform ergonomic analysis of
simulated human movements. Lower Back Analysis, Static
Strength Prediction, NIOSH (National Institute for Occupational
Safety and Health) Lifting Analysis, Fatigue Analysis, and RULA
(Rapid Upper Limb Assessment) are some of the ergonomic
analysis tools available from TAT.
The fastening operation predominantly involves the upper body
of the operator, so RULA is a useful tool for ergonomic analysis.
RULA has been developed for use in ergonomic investigation of
workplaces [120], and is especially useful for scenarios in which
work-related upper limb disorders are reported. RULA uses a
scoring system based on posture, muscle use, and force exertion to
assign an action level to the evaluated task. After setting the values
of these parameters, the result of RULA analysis can be readily
obtained. The RULA analysis can be used to determine the risk
levels associated with particular postures and to suggest actions
needed in order to reduce the risk of long-term ergonomic injuries
and to design safer workplaces [86,211].
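The mapping from a RULA grand score to an action level, as published in the original RULA method, can be sketched directly (the function name is illustrative; the score thresholds and action descriptions follow the standard RULA tables, not the proprietary Jack implementation):

```python
def rula_action_level(grand_score):
    """Map a RULA grand score (1-7) to its published action level."""
    if grand_score <= 2:
        return 1, "posture acceptable if not maintained or repeated for long periods"
    if grand_score <= 4:
        return 2, "further investigation needed; changes may be required"
    if grand_score <= 6:
        return 3, "investigation and changes required soon"
    return 4, "investigation and changes required immediately"
```

A simulated fastening posture scoring 5, for instance, maps to action level 3, flagging the workstation for near-term redesign.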
8.3. Assembly inspection planning
Design of an assembly inspection system also can take
advantage of CAD modeling of a product and its components to
be inspected. As an example, CAVIS (Fig. 29) is a CAD model
based inspection system design tool developed for the car lock
manufacturer Motrol [106]. Synthetic images are generated from
CAD models in order to develop a vision system before the actual
assembly line and products are available. The basic principle is to
use a CAD modeler to identify potential assembly errors, e.g., use of
wrong components, as shown in Fig. 29. Graphic rendering
(Fig. 29) then is used to simulate the position of cameras (Fig. 29)
and the effect of lighting (Fig. 29) in order to enhance the
differences between correct and wrong components and to select
image analysis algorithms (Fig. 29). Despite recent research in
image rendering, real part variability still requires an on-line
learning phase to fine-tune the inspection system (Fig. 29).
9. Technology gaps and future R&D needs
Some technology gaps and future R&D needs for the develop-
ment of CAD model based assembly simulation, planning, and
training systems are discussed in this section.
Fig. 27. Motion data captured from a physical fuselage mockup by an Optitrack
motion capture system and real-time simulation in Jack.