
Control module for adhesive application industrial robots

with automatic defect correction and trajectory optimization

João Miguel Antunes Cabrita

Thesis to obtain the Master of Science Degree in

Mechanical Engineering

Supervisors: Prof. Jorge Manuel Mateus Martins


Eng. Fábio Miguel Fitas Miranda

Examination Committee
Chairperson: Prof. Paulo Oliveira
Supervisor: Prof. Jorge Martins
Member of the Committee: Prof. Susana Vieira

November 2019
To my wife Ana Pereira and my son Elias, whose love makes me a better human being.

Acknowledgments

To professor Jorge Martins for giving me the opportunity to develop my work on this subject.

To the Introsys IDI department for all the support and knowledge, especially to Fábio Miranda, Magno Guedes and André Silva.

Last but not least, a special thanks to my parents for all the patience, love and for always pushing me forward.

Resumo

Reconciling the increased use of lighter materials in the automotive industry with the need to maintain safety levels has led to a rise in the use of adhesive joints. Due to uncontrollable environmental factors during the application of adhesive beads, bead interruptions may appear, compromising the quality of the final product. The goal of this work is to provide a solution to this problem, through the automatic correction of interruptions after the application and inspection of the bead, using a robotic manipulator. First, a study of the different machine vision technologies is carried out, with the goal of choosing the best option for inspection. A software module was designed and developed to receive and process inspection data, namely interruption coordinates, organize/reorder them through an optimization algorithm, produce trajectory commands and send them to the robot controller. Two robotic cells were used as test benches to prove the developed module, one with a UR5 collaborative robot and another consisting of a traditional industrial cell with an ABB IRB2400 robot. Results were also proven using sampling-based motion planning algorithms to execute collision-free trajectories through the ROS MoveIt! platform. A correction success rate of 90% was achieved, with the unsuccessful cases being due to calibration problems that can be circumvented through a more robust hardware installation. The Dijkstra and RRTConnect algorithms proved to be the most suitable for reordering coordinates and planning collision-free trajectories, respectively.

Keywords: Industrial Adhesives, Trajectory Optimization, Automatic Inspection, Automatic Defect Correction, Industrial Robotics

Abstract

A recent demand to increase the usage of lightweight materials in the automotive industry while maintaining overall safety has led to an increased usage of structural adhesive joints. Due to uncontrollable environmental factors when applying adhesive beads, gaps can occur, compromising the final product's overall quality. The goal of this work is to provide a solution to this problem, through the automatic correction of the gaps after the application and inspection of the bead, using a robotic manipulator. Firstly, a study on different machine vision technologies is performed, in order to assess the best option for inspection. A main software module was designed and developed to receive and process inspection data, reordering it using an optimization algorithm, producing trajectory commands and sending them to the robot controller. Two robotic cells were used as test benches to prove the developed module, one with a UR5 collaborative robot and the other consisting of a traditional industrial cell with an ABB IRB2400 robot. Results were also proven using sampling-based motion planners to execute collision-free trajectories within a ROS MoveIt! simulation environment. A correction success rate of 90% was achieved, with the unsuccessful cases being due to calibration problems that can be overcome through a more robust hardware integration. The Dijkstra and RRTConnect algorithms proved to be the most suitable to reorder targets and plan collision-free trajectories, respectively.

Keywords: Industrial Adhesives, Trajectory Optimization, Automatic Inspection, Automatic Defect Correction, Industrial Robotics

Contents

Acknowledgments . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . v
Resumo . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . vii
Abstract . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ix
List of Tables . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xiii
List of Figures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xv
Glossary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xix

1 Introduction 1
1.1 Motivation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
1.1.1 Adhesives in the automotive industry . . . . . . . . . . . . . . . . . . . . . . . . . . 2
1.1.2 The concept of Zero Defects in quality management . . . . . . . . . . . . . . . . . 3
1.1.3 Non Destructive Testing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
1.1.4 The importance of Machine Vision in bead inspection . . . . . . . . . . . . . . . . 4
1.2 State of the art . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
1.2.1 Competition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
1.2.2 Robot Guidance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
1.2.3 Technologies for Point Cloud acquisition . . . . . . . . . . . . . . . . . . . . 8
1.2.4 Sampling-Based motion planning . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
1.3 Objectives . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
1.4 Thesis Outline . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12

2 Theoretical Background 13
2.1 Kinematics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
2.1.1 Direct Kinematics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
2.1.2 Inverse Kinematics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
2.1.3 Singularities . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
2.2 Robot Frameworks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
2.2.1 UR5 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
2.2.2 ABB IRB2400 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26
2.2.3 ROS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27
2.3 Trajectory Planning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32

2.3.1 Trajectory Planning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32
2.3.2 MoveIt! and the OMPL . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36
2.3.3 Ordering the Targets . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37

3 Implementation 39
3.1 Demonstrator-UR5 on Post Process Inspection . . . . . . . . . . . . . . . . . . . . . . . . 40
3.1.1 Scene . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40
3.1.2 Designing the generic main software module . . . . . . . . . . . . . . . . . . . . . 41
3.1.3 Choosing the technology for Guidance and Inspection . . . . . . . . . . . . . . . . 44
3.1.4 Hardware . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45
3.1.5 Architecture . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46
3.1.6 Work Object and Tool Speed . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46
3.2 Industrial Cell-ABB IRB2400 on Post Process Inspection . . . . . . . . . . . . . . . . . . . 47
3.2.1 Simulation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47
3.2.2 Real environment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 51
3.3 Industrial Cell-ABB IRB2400 on In Process Inspection . . . . . . . . . . . . . . . . . . . . 56
3.4 ROS MoveIt! . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 57
3.5 User Interface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 61

4 Results 63
4.1 UR5 Demonstrator . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 64
4.2 Industrial Cell (ABB IRB2400 on post-process inspection) . . . . . . . . . . . . . . . . 67
4.3 ROS MoveIt! . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 70
4.4 Algorithm for ordering targets . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 73

5 Conclusions 75
5.1 Achievements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 75
5.2 Future Work . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 76

Bibliography 77

A Algorithms 81
A.1 Dijkstra . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 81
A.2 Sampling-based motion algorithms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 82
A.3 RAPID and URScript correction files . . . . . . . . . . . . . . . . . . . . . . . . . . . 85
A.4 PLC . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 88
A.5 ROS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 90

B Technical Datasheets 95
B.1 Adhesive Properties . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 95

List of Tables

1.1 Post-process solutions, taken from www.datalogic.com and www.cognex.com . . . . . . . 6


1.2 In-process solutions, taken from www.coherix.com and www.isravision.com . . . . . . . . 6
1.3 Comparative analysis of point cloud acquisition technologies [12] . . . . . . . . . . . . . 10

2.1 Denavit-Hartenberg parameter table for the UR5 . . . . . . . . . . . . . . . . . . . . . . . 16


2.2 Denavit-Hartenberg parameter table for the ABB IRB2400 . . . . . . . . . . . . . . . . . . 17
2.3 Joint angle and velocity limits for the ABB IRB2400 . . . . . . . . . . . . . . . . . . . . . . 26

3.1 Hardware for the demonstrator . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45


3.2 Technical characteristics for the external computers . . . . . . . . . . . . . . . . . . . . . 45
3.3 Hardware for the Industrial Cell . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52
3.4 Variables attributed to each slot on the PLC . . . . . . . . . . . . . . . . . . . . . . . 55
3.5 PLC logic for job enabling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 56

4.1 Classes for the vision system’s results classification . . . . . . . . . . . . . . . . . . . . . 63


4.2 Overall test classes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 63
4.3 Results for the demonstrator: Test 1 . . . . . . . . . . . . . . . . . . . . . . . . . . . 64
4.4 Results for the demonstrator: Test 1, Inspection 3 . . . . . . . . . . . . . . . . . . . . 64
4.5 Results for the demonstrator: Test 2 . . . . . . . . . . . . . . . . . . . . . . . . . . . 64
4.6 Results for the demonstrator: Test 2, Inspection 2 . . . . . . . . . . . . . . . . . . . . 65
4.7 Results for the demonstrator: Test 3 . . . . . . . . . . . . . . . . . . . . . . . . . . . 65
4.8 Results for the demonstrator: Test 4 . . . . . . . . . . . . . . . . . . . . . . . . . . . 66
4.9 Results for the demonstrator: Test 4 . . . . . . . . . . . . . . . . . . . . . . . . . . . 66
4.10 Results for the demonstrator: Test 5 . . . . . . . . . . . . . . . . . . . . . . . . . . . 66
4.11 Confusion matrix for the demonstrator . . . . . . . . . . . . . . . . . . . . . . . . . . 67
4.12 Test results for the industrial cell (ABB IRB2400 on post-process inspection) . . . . . . 67
4.13 Confusion matrix for the industrial cell (ABB IRB2400 on post-process inspection) . . . 68
4.14 Results for the industrial cell (ABB IRB2400 on post-process inspection): Test 1 . . . . 68
4.15 Results for the industrial cell (ABB IRB2400 on post-process inspection): Test 11 . . . . 69
4.16 Results for the industrial cell (ABB IRB2400 on post-process inspection): Test 15 . . . . 69
4.17 Trajectory segments used for testing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 70
4.18 Trajectory without obstacle . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 70

4.19 Trajectory with obstacle . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 71
4.20 4 segment trajectory without obstacle, implementation through MoveIt! using RRT . . . . 71
4.21 4 segment trajectory with obstacle, implementation through MoveIt! using RRT . . . . . . 72
4.22 Path length and computation time t, in dm and ms, respectively, obtained using different
algorithms for target ordering . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 73

A.1 RRTstar[56] . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 83

List of Figures

1.1 The rise in adhesive usage in the automotive industry, taken from www.adhesives.org . . 2
1.2 Types of defects on adhesive beads . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
1.3 Quiss RTVision www.quiss.net . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
1.4 Stereo triangulation[15] . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8

2.1 The UR5 and the ABB IRB2400, taken from www.universal-robots.com and www.grupro.com 13
2.2 UR5 frames assignment [31] and dimensions, taken from www.zacobria.com . . . . . . . 16
2.3 IRB2400 dimensions[33] and frame assignment . . . . . . . . . . . . . . . . . . . . . . . . 17
2.4 Eight possible configurations for a prescribed pose in a 6DOF manipulator [35] . . . . . . 18
2.5 Ortho-parallel manipulator with spherical wrist, side and back view . . . . . . . . . . . . . 18
2.6 Ortho-parallel substructure [34] . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
2.7 3-RRR planar arm structure obtained with θ1 = 0 [34] . . . . . . . . . . . . . . . . . . . . 19
2.8 Geometric construction of top view of the two configurations obtained by rotating θ1 [34] . 20
2.9 Geometric representations for finding θ1 (top left), θ5 (top right) and θ6 (bottom)[31] . . . . . 24
2.10 Universal Robot Teach Pendant [38] . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
2.11 The IRC5 and the FlexPendant[39] . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27
2.12 Basic publisher subscriber interaction www.mathworks.com . . . . . . . . . . . . . . . . . 28
2.13 Client to server relationship, taken from mathworks.com . . . . . . . . . . . . . . . . . . . 29
2.14 TF tree for a 6 DOF manipulator, each edge represents the transform from the parent
coordinate frame to the child coordinate frame . . . . . . . . . . . . . . . . . . . . . . . . 30
2.15 Moveit! high-level architecture [47] . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32
2.16 Blend radius for the MoveP instruction [37] . . . . . . . . . . . . . . . . . . . . . . . . . . 34
2.17 Zone for the MoveL/MoveLDO instruction[41] . . . . . . . . . . . . . . . . . . . . . . . . . 35
2.18 MoveL/MoveP and MoveJ [41] . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35

3.1 Scene for the demonstrator . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40


3.2 Flowchart for the demonstrator . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41
3.3 General trajectory for gap correction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41
3.4 SocketTest . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42
3.5 Architecture for the demonstrator . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46
3.6 Work Object frame(demonstrator) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46

3.7 Diagram of the workcell on post process inspection . . . . . . . . . . . . . . . . . . . . . . 47
3.8 Flowchart for the workcell on post process inspection . . . . . . . . . . . . . . . . . . . . 48
3.9 RobotStudio environment with the CAD of the workcell . . . . . . . . . . . . . . . . . . . . 49
3.10 Control architecture of the simulation of the industrial workcell on post process inspection 50
3.11 New trajectory for the correction segment . . . . . . . . . . . . . . . . . . . . . . . . . . . 51
3.12 Control architecture for workcell on post process inspection . . . . . . . . . . . . . . . . . 54
3.13 Work Object Frame(ABB IRB2400) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 54
3.14 MoveIt! setup assistant . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 57
3.15 Flowchart of files to launch the gazebo world . . . . . . . . . . . . . . . . . . . . . . . . . 58
3.16 Flowchart of files to launch the MoveIt! package, connected to gazebo . . . . . . . . . . . 59
3.17 Graph representing the action, arm_controller, responsible for connecting MoveIt! and
gazebo . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 59
3.18 Interaction between ros action server, MoveIt! and the main software module . . . . . . . 60
3.19 Graphical user interface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 61
3.20 Graphical user interface (child window) . . . . . . . . . . . . . . . . . . . . . . . . . 61

4.1 Test 2, Inspection 2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65


4.2 Simulating several defects to test target ordering algorithms . . . . . . . . . . . . . . . . . 73
4.3 Path length/computation time of the different algorithms for target ordering . . . . . . . . . 74

A.1 Recursive iterations on the roadmap constructions according to the PRM algorithm [29] . 82
A.2 Iterations on the tree construction using RRT [29] . . . . . . . . . . . . . . . . . . . . . . . 83
A.3 Example of KPIECE discretization using 3 levels, taken from [57] . . . . . . . . . . . . . . 84
A.4 Master grafcet for station 2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 88
A.5 Ladder diagram for station 2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 89
A.6 ROS control architecture[45] . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 90
A.7 Gazebo+ROS control architecture[46] . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 90
A.8 ROS Graph representing the active nodes and topics in the MoveIt! implementation . . . 92
A.9 TF graph for the MoveIt! implementation . . . . . . . . . . . . . . . . . . . . . . . . . . . 93

B.1 SikaPower-492 technical data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 95

Nomenclature

Manipulation

R Rotation matrix

T Transformation matrix

J Jacobian matrix

ω Generalized three-dimensional angular velocity

ε Vector part of a unit quaternion

η Scalar part of a unit quaternion

q Joint variable

Q Joint variables vector

Mathematics

t Time

sin, cos Sine and cosine

tan Tangent

Mechanics

N Newton

a Acceleration

v Velocity

Subscripts

e End-effector coordinate frame

x, y, z Cartesian components

i, j Coordinate reference frames notation

start Starting point of gap

end Ending point of gap

Glossary

API Application Programming Interface.

BIW Body In White is the production stage where the parts have been assembled, except for the motor, chassis or trim, and not yet painted.
CAD Computer Aided Design.
CSA Cuckoo Search Algorithm is a meta-heuristic.
Cycle Time Total elapsed time to move a unit of work from the beginning to the end of a physical process.
DOF Degree Of Freedom.
EFFRA European Factories Of the Future Research Association.
End-effector The last link of the manipulator.
FCL Flexible Collision Library.
FTP File Transfer Protocol.
GUI Graphical User Interface.
I/O Input/Output.
KPIECE Kinodynamic Planning by Interior-Exterior Cell Exploration is a sampling-based motion planning algorithm.
NDT Non-Destructive Testing is the determination of the physical condition of an object without affecting that object's ability to fulfill its intended function.
OMPL Open Motion Planning Library.
PID Proportional Integral Derivative controller.
PLC Programmable Logic Controller.
PRM Probabilistic Roadmap is a sampling-based motion planning algorithm.
Pose Expresses the end-effector position and orientation.
RGB Red Green Blue.
ROS Robot Operating System is an open source platform for the development of robotic applications.
RRT Rapidly-exploring Random Tree is a sampling-based motion planning algorithm.
SFTP Secure File Transfer Protocol.
SRDF Semantic Robot Description Format is intended to represent information about the robot that is not in the URDF file.
TCP/IP Transmission Control Protocol/Internet Protocol.
TCP Tool Center Point.
TF Transform is a ROS package that lets the user keep track of multiple coordinate frames over time.
TSP Traveling Salesman Problem is a combinatorial optimization problem.
UAS Unmanned Aerial Systems.
UDP User Datagram Protocol is a communication protocol.
URDF Unified Robot Description Format is an XML format for representing a robot model.

Chapter 1

Introduction

1.1 Motivation

When it comes to the automotive industry, safety, both for driver and passengers, is the number one priority. This implies the assurance that all the components are manufactured and assembled in a way that fulfills strict quality criteria. Typically, the vehicle quality is guaranteed by tests that are performed after the final assembly, which entails the risk of generating unnecessary waste, increasing costs and hampering fast delivery of products to the client, making it undesirable for manufacturers. Researchers and manufacturers have been working alongside each other in order to develop systematic quality control procedures to detect potential problems and defects earlier in the production line. This is accomplished by testing the quality of the parts and components, partially assembled, along the production line, as well as the operational state of automation equipment.

Introsys, a leader in the development of control and automation systems for the automotive industry, has been developing comprehensive solutions for product quality control, combining it with predictive maintenance, with the purpose of boosting industrial production according to the Zero-Defect philosophy. Currently, the company has been working on an industrial pilot solution, under the SEE-Q project. The SEE-Q project, co-financed in the scope of the P2020 program by the Portuguese Government and the European Commission, aims to industrially validate a groundbreaking solution for a current problem that comes from one of the most important manufacturing processes on an automotive assembly line: the continuous quality assurance of structural adhesive bead application.

This work intends to contribute to this subject, offering a concept that allows the automatic correction of defects found on adhesive beads, before the parts on which they are applied are joined.

1.1.1 Adhesives in the automotive industry

Nowadays, automakers are racing to meet the EU's mandatory emission reduction targets, such as those disclosed in the Climate Action EU no. 333 [1]. To meet these requirements, preference for lightweight and stronger materials such as aluminium alloys and carbon fibers has been growing among manufacturers. Not only do these materials improve efficiency and, consequently, reduce emissions, but they also represent an increase in safety. Carbon-fiber reinforced polymers are 70% lighter than steel and 40% lighter than aluminum, according to Plasan Carbon Composites, the leading U.S. Tier 1 supplier of CFRP (carbon fiber reinforced polymer) parts and assemblies [2]. Since these materials cannot be joined by welding, the need for using adhesive beads arises [3].

Figure 1.1: The rise in adhesive usage in the automotive industry, taken from www.adhesives.org

In North American light vehicles, between 2004 and 2014, the value of adhesive and sealant content jumped 52% according to [4]. Adhesive manufacturer Henkel claims that its Teroson line of adhesives is 20 times more resistant to fatigue than welding, making it much more appealing to automotive and aerospace manufacturers than older joining technologies. Even in welded parts, the pre-application of adhesive beads reduces vibration and infiltrations. BMW's first mass-produced, all-electric vehicle (the i3) featured a predominantly CFRP body, connected to an aluminum frame by 160 meters of cutting-edge adhesives [5]. According to Henkel North America [2], designs are being modified in order to replace fasteners with adhesive bonding, so the overall weight can be reduced.

The new trend of adhesive use in the automotive industry requires suitable quality control techniques that ensure well manufactured and assembled components, with minimum waste. Manufacturers such as Volkswagen currently resort to destructive testing in order to assess the quality of the joints. The procedure consists of unjoining hundreds of parts and analysing the glue distribution, consistency and continuity, entailing the loss of finished parts and implying the usage of human, financial and time resources, while providing only a statistical quality control.

Furthermore, guaranteeing the desired physical and chemical glue properties in a natural environment is almost impossible. The glue's viscosity is highly dependent on the environment temperature, and even if temperature control is implemented inside the injector, when the glue makes contact with the part to be joined its temperature changes drastically, which may lead to unpredictable phenomena on the glue bead, originating the defects depicted below:

Defects of type 1 Correspond to an interruption on the bead, from now on referred to as a "gap".

Defects of type 2 and 3 Correspond to a non-compliant width measurement, caused by either excess or lack of material.

Defects of type 4 Correspond to an excessive displacement of the adhesive bead.

Figure 1.2: Types of defects on adhesive beads

It should be noted that the interruptions on the bead can be corrected on the part itself. All the mentioned defects, except for the case where the defect originates from problems with the glue quality, come from a sub-optimal response of the equipment to the physical characteristics of the glue when it is applied. Typically, it is possible to identify a decrease in the quality of the adhesive beads applied along several successive parts before significant defects arise, i.e. defects outside the defined tolerances. Consequently, the correlation between the identified deviation, the atmospheric conditions and the equipment parametrization will allow the inference, with a certain degree of probability, of the necessity of equipment maintenance or adjustment.

1.1.2 The concept of Zero Defects in quality management

The Zero Defects philosophy aims for the elimination of waste in a project. The concept of waste extends from material to unproductive processes, tools, employees and so on. Anything that does not add value to the project is considered waste and should be eliminated. According to EFFRA (European Factories of the Future Research Association) [6], this leads to an improvement in quality and a decrease in costs; besides, the use of fewer resources reduces the environmental impact of manufacturing. The Zero-Defect philosophy centers on four main principles [7]:

• Quality is conformance to requirements

• The system of quality is prevention

• The performance standard is Zero Defects

• The measurement of quality is the price of nonconformance

In the scope of the industrial adhesive quality control problem, the second principle can be interpreted as the prevention of the production of beads that do not fulfill quality requirements. There should be an assurance that no part has a defective adhesive bead, and since the quality cannot be controlled in a first dispense, due to the chemical properties of the adhesive, the path to achieve this goal relies on the correction of the defects after the inspection, therefore preserving the part and eliminating material waste.

1.1.3 Non Destructive Testing

Resorting to destructive testing in order to assess the quality of the beads, as is currently done at Volkswagen, is not reconcilable with a Zero Defect philosophy. Destructive testing generates waste not only because it closes the possibility of a bead correction but also because it destroys the part itself. The American Society for Nondestructive Testing (ASNT) defines NDT as "The determination of the physical condition of an object without affecting that object's ability to fulfill its intended function. Non-destructive testing techniques typically use a probing energy form to determine material properties or to indicate the presence of material discontinuities (surface, internal or concealed)" [8].

There are several NDT techniques currently used in industry, such as Magnetic Particle Testing, Eddy Current, Dye Penetrant, Ultrasonic Testing, Radiographic Imaging, Stereo Vision or laser triangulation. In terms of the adhesive inspection problem, the first four techniques are discarded, since the adhesive is a non-rigid, high viscosity hybrid epoxy material, which has non-magnetic properties and whose inspection cannot be performed through direct contact. Radiographic Imaging has too many drawbacks, namely the fact that it is expensive, dangerous, complex and requires specialized staff.

The manual activity of inspection is subjective and highly dependent on human factors, for instance expertise, concentration, fatigue or mental and physical health. For these reasons, it cannot provide a guarantee of quality, and automatic inspection becomes of major importance [9]. It is important to bear in mind that there is no ultimate general solution for the inspection problem; the best NDT technique highly depends on the application. Decision aspects to bear in mind may include safety, cost, accessibility, layout, environmental conditions, computational capability and demanded accuracy [8].

1.1.4 The importance of Machine Vision in bead inspection

Adhesive inspection can occur post-process or in-process. In the first case, the inspection is performed after the complete adhesive bead is applied. The dispensing robot usually moves away from the part and then either the part is transported to an inspection zone or, if the part is stationary, it may be inspected by an array of fixed cameras, by tilting cameras that sequentially inspect checkpoints, or by a camera moved by a robot over the part [2]. Either way, post-process inspection adds to cycle time, which is a big issue. Integrating the concept of Zero Defects, the extra cycle time entails costs and adds no value to the project in comparison to in-process adhesive inspection, where the inspection occurs while the adhesive is being dispensed.

As mentioned before, bead inspection specifications vary between automakers. Some basic inspections, such as verification of adhesive presence, location, gaps and width, may be achievable with 2D sensors. According to Michael Mai, a research and development group leader at Darmstadt, Germany-based Isra Vision AG, for many BIW applications (applications performed in the production stage where the parts have been assembled, except for the motor, chassis or trim, and not yet painted) in which round beads are used, the inspection may be performed using 2D sensors, even though the bead volume can only be estimated. However, for the joining of some specific parts, such as windshields and sunroofs, which require triangular beads, 3D sensors are needed for volumetric bead measurements [2].

In conclusion, an ideal situation would be a fast, automatic, in-process, 3D inspection and correction, to assure the best quality while minimizing cycle time.

1.2 State of the art

This section is divided into three parts. In the first, the solutions to the adhesive bead quality control problem currently available on the market will be shown and compared. The second concerns robot guidance, namely a comparative study of point cloud acquisition technologies. Finally, since the goal is to implement the solution on a manipulator, state of the art path planning algorithms will be presented, as well as their advantages and drawbacks.

1.2.1 Competition

Currently, the solutions for the adhesive bead quality control problem present on the market do not fully answer the requested specifications in compliance with a zero defect philosophy. They can be divided into three main groups: post-process solutions, which consist of inspection systems decoupled from the dispensing nozzle; in-process solutions, which consist of inspection systems coupled with the dispensing nozzle; and, finally, a similar in-process solution with the added advantage of correcting the bead when its quality is not up to standards.

Post-process solutions

Data Logic offers a series of stereo cameras, such as the E100, the M-Series and the T-Series, in which the quality of the inspection is dependent on lighting conditions and a 3D estimation needs several cameras mounted at different angles. The COGNEX DSMax 3D sensor uses laser triangulation for 3D estimation, needing no additional sensors and performing independently of lighting conditions. It can scan at 18 kHz with a resolution of 2000 profile points in a 31 mm field of view (FOV). Both of these solutions are unsuitable for the problem, when taking into account the reduction of cycle time and material waste, because they are post-process inspection solutions.

Table 1.1: Post-process solutions, taken from www.datalogic.com and www.cognex.com

Data Logic E100 series    COGNEX DSMax 3D

In-process solutions

The ISRA Vision BeadMaster, the Coherix Predator 3D Beam Inspector and the VMT SpinTop 3D are some examples of products for in-process inspection. The ISRA Vision BeadMaster's sensor consists of 4 gigabit-ethernet cameras and a blue laser technology, which guarantees inspection of any kind of bead shape and colour, reaching accuracy levels beyond 0.5 mm at 200 Hz, i.e. if the nozzle travels at 200 mm/s a 3D measurement is made every millimeter [10]. The Coherix Predator 3D Beam Inspection uses 4 laser triangulation based sensors, with a field of view of 40 mm × 40 mm and a measuring accuracy of 0.1 mm, to provide a 3D view of the bead in any direction, detecting skips or neck-downs down to 10 mm depth at 400 Hz [11]. Finally, the VMT SpinTop 3D uses two lasers with a maximum of 300 scans per second, with an accuracy of 0.1 mm. All these products hold the advantage of performing in-process inspection, which saves cycle time and suits Zero Defect principles; on the other hand, they offer no solution in case of bead defects, turning the inspected part into waste or adding unnecessary cycle time, by moving the part to another workstation, if the bead is to be corrected.

Table 1.2: In-process solutions, taken from www.coherix.com and www.isravision.com

Coherix Predator 3D Beam Inspection    ISRA Vision BeadMaster

In-process solutions with bead correction

The Quiss RTVision performs in-process inspection through 3 gigabit-ethernet cameras with a 360 degree range and controllable RGB lighting. The bead is measured while the adhesive is dispensed and, if there are gaps, the robot repeats the trajectory in order to correct them. Although this product offers the most competitive solution in the scope of the Zero Defect philosophy, there is still some room for improvement. The procedure of repeating the trajectory in order to correct the bead may be more time and energy consuming than necessary, since sending the robot directly to the gaps using a shorter path could be more advantageous.

Figure 1.3: Quiss RTVision, www.quiss.net

1.2.2 Robot Guidance

Robot guidance can be described as the process of giving eyes to a robot for it to perform a certain task. There can be a need to collect information regarding the surroundings of the robot; in this case, the guidance system is called scene-related, and a given number of sensors, depending on the technology being used, are usually mounted on the mobile robot or on a fixed point anywhere in the working area. When there is a need to collect information regarding a more reduced region of interest (ROI) concerning the task to be performed (e.g. to pick an object), the system is called object-related and the sensors are usually attached to the robot's end effector, in what is called an eye-in-hand configuration [12, 13]. This process can be divided into two main stages: the first involves the acquisition of the point cloud and the second concerns its processing [12]. Both of them are decisive for the quality of the final solution. The software for point-cloud processing was developed by Introsys and its content is confidential; therefore, that subject is out of the scope of this work.

How to choose the best technology for Robot Guidance?

There are several approaches to robot guidance using machine vision techniques, such as Stereo Vision, Time of Flight, Structured Light, Light Coding and Laser Triangulation. According to [12, 14], the choice of machine vision sensors and techniques for robot guidance must be made taking into account the following factors:

• The range of the sensor

• The accuracy of resolution and point cloud alignment

• Light weight

• Processing time

• Safety issues

• Scanning environment

• Hardware and software integration

• Budget

The enunciated approaches to robot guidance using machine vision will be revised in the light of these
factors.

1.2.3 Technologies for Point Cloud acquisition

Stereo Vision

Stereo vision consists, in its most simple form, of a pair of cameras placed at a certain distance from each other, similarly to the human vision system. The distance to an object is estimated through the concepts of parallax and triangulation, where the displacement between the relative positions of that same object in different images inversely relates to the distance of that object from the camera system [15].

Figure 1.4: Stereo triangulation [15]

The difference XL − XR is called disparity, and it is obtained through the identification of the same object in the two different images. Having the advantage of low implementation costs, stereo vision is commonly used alongside other vision techniques, mainly to overcome lighting issues [16], and it is largely used to solve bin-picking and object tracking problems [16, 17]. In [16], a standard stereo camera system, alongside a projector and a machine learning algorithm, is used to successfully track house objects (doors and drawers) with a success rate above 75% up to a distance of 2.3 m from the tracked object. Regarding the bin-picking problem, where it is necessary to have robust object identification in terms of features and position for the robot controller to compute a collision-free path towards the target object [12], [17] proposed using a stereo-vision based robotic system with an eye-in-hand configuration alongside a machine learning algorithm. With a working distance up to 0.9 m, the position of the target objects was computed with an average error below 0.5 mm, and the problem of the vision system not being able to determine the part's location is overcome by the possibility of inspecting from multiple angles, due to the eye-in-hand configuration.
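To make the triangulation relation concrete, the sketch below recovers depth from the disparity XL − XR of a rectified camera pair. All numeric values (focal length, baseline, pixel coordinates) are illustrative assumptions, not parameters of any system cited above.

```python
# Hedged sketch of stereo triangulation for a rectified camera pair:
# Z = f * b / (xL - xR), with f the focal length in pixels, b the baseline
# between the two cameras, and (xL - xR) the disparity of the matched point.
def stereo_depth(x_left, x_right, focal_px, baseline_m):
    disparity = x_left - x_right  # pixels; shrinks as the object moves away
    if disparity <= 0:
        raise ValueError("a point in front of the rig must have positive disparity")
    return focal_px * baseline_m / disparity

# Illustrative numbers: a 14-pixel disparity with f = 700 px and b = 0.12 m
print(stereo_depth(412.0, 398.0, focal_px=700.0, baseline_m=0.12))  # -> 6.0 m
```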

Laser Triangulation

Laser triangulation is a machine vision technique used to capture three-dimensional measurements by pairing a laser emitter with a camera. The angle at which the laser is emitted is known, as is the distance between the laser emitter and the camera. These two components, plus the target where the laser is reflected, form a triangle for which there is enough information to compute, through trigonometry, its shape and size and, consequently, extract the target's 3D point. The major drawback of this technology comes from the fact that, to scan an object, there has to be relative motion between the sensor and the object, i.e. one of those two elements has to be moving [12].

Time of Flight

The concept behind time of flight technology is, basically, the measurement of the distance between the camera and a given point by resorting to the phase shift between the illumination and its reflection. A distance is attributed to every pixel, resulting in a collection of 3D points called a depth map [18]. Time of flight has the advantage of allowing 3D acquisition in real time, with high frame rates, using compact and lightweight sensors. On the other hand, the resulting raw data is very noisy and requires significant processing [12].
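As a numeric illustration of this principle, a continuous-wave ToF pixel converts the measured phase shift of the modulated illumination into a distance; the 20 MHz modulation frequency below is an assumed, illustrative value rather than a property of any specific sensor.

```python
import math

C = 299_792_458.0  # speed of light [m/s]

def tof_distance(phase_shift_rad, mod_freq_hz):
    """Distance for a continuous-wave ToF pixel: d = c * dphi / (4 * pi * f)."""
    return C * phase_shift_rad / (4.0 * math.pi * mod_freq_hz)

# A quarter-turn phase shift at 20 MHz modulation corresponds to roughly 1.87 m
print(tof_distance(math.pi / 2, 20e6))
```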

Combinations of time of flight and color cameras have been used to obtain a 3D point cloud with real features, consequently overcoming the issue regarding the noise in the raw data and providing more robust solutions to be applied in object recognition, person awareness, gesture recognition and mapping [19]. In robot guidance, regarding eye-in-hand configurations, the reconstruction of objects and surfaces using time of flight depends strongly on multiple 3D point clouds being obtained and combined, with an accuracy in the order of 10 mm over a range of distances from 0.8 to 8 m [12].

Structured Light

Structured light consists of projecting a known light pattern onto a given surface/object. The deformation of each point of the pattern reflected on the surface is captured by one or several cameras and indicates the corresponding depth. This approach overcomes the lighting challenge independently of the scene, which is a major advantage in comparison to stereo vision systems [20]; on the other hand, it has the drawback of its sensor (the projector) being large and difficult to attach to the end effector of the robot, under penalty of the projector's filaments breaking from vibrations [12]. Although [20] proposed a solution for this, using a fixed projector and a single camera in an eye-in-hand configuration, structured light can become challenging as the complexity of the target object increases (i.e. its number of features), since it demands high computational costs [20], and the pattern choice may require a previous study of the scene. According to [12], accuracy can reach 0.127 mm, but this technology is not industrially certified.

Light Coding

Being the most recent technology presented in this chapter, light coding was popularized by the Microsoft Kinect sensor, which was designed for human tracking in video games. The Microsoft Kinect sensor is composed of a laser emitter alongside an infrared camera and an RGB camera. The infrared laser emitter sends a laser beam which is scattered by diffraction, forming a constant, pseudo-random pattern of dots that are projected onto the scene. The pattern reflected by the scene is analyzed by the infrared camera and compared to a reference pattern, which is obtained by capturing a plane at a known distance, taking into account the distance between the two patterns and lens distortion. From the deformation of the projected dots, the depth of each dot can be computed and a point cloud is obtained. The RGB camera serves the purpose of adding colour to the point cloud [12, 21].

[21] concluded that the quality of the results depends strongly on accurate calibration of both cameras and that the optimal work range is between 1 and 3 m, to ensure minimum noise and maximum resolution. A study on quality performance [21] pointed out that error growth and resolution decrease (both in depth measurements) are quadratically related to the increase of the distance from the sensor. The maximum error occurs at a 5 m distance and its value is 4 cm; the maximum point spacing (in depth) is in the order of 7 cm, also occurring at a 5 m distance.

In the field of robotics, light coding has been used [22], combined with thermal sensors, to detect humans in scene-related tasks, using computer vision transformations allied with machine learning classifiers, reaching an accuracy level of 96.74% with 1.88% false negatives. Other heat sources originate phantom detections responsible for a false positive rate of 4.64%.

Table 1.3: Comparative analysis of point cloud acquisition technologies [12]

Technology       | Stereo Vision         | Laser Triangulation       | Time of Flight          | Structured Light           | Light Coding
Work range       | X                     | X                         | 0.8-8 m                 | X                          | 1-3 m
Accuracy         | 50 µm                 | X                         | 10 mm                   | 0.127 mm                   | 10 mm
Major advantages | Accuracy, inexpensive | Accuracy                  | Independent of lighting | Accuracy                   | Inexpensive
Major drawbacks  | Light dependent       | Object/camera has to move | Low accuracy            | Not industrially certified | Low accuracy
1.2.4 Sampling-Based motion planning

After the acquisition and processing of the point cloud, once the target for the robot is obtained, the following problem arises: how to determine the best path for the robot to follow in order to reach the target?

This question introduces the motion planning problem, which can be defined as the problem of finding a continuous, optimal path connecting a start state to a goal region while satisfying a set of constraints (e.g. collision avoidance and velocity limits) [23]. Although this notion is directly related to robotics, the applications of motion planning can be extrapolated to different scientific fields (such as computational biology and computer-aided verification [24]), since the problem can be more abstractly described as search in continuous spaces.

Early studies on the complexity of the motion planning problem showed it to be PSPACE-complete, and existing exact solutions are difficult to implement and extremely computationally demanding [23]. Currently, there are several approaches to solve this problem, such as Artificial Potential Fields, Cell Decomposition, Sampling-Based Motion Planning, Computational Intelligence or Grid Based algorithms [25]. [26] states that, in comparison to Computational Intelligence methods, Sampling-Based Motion planners are better suited to real production scenarios, because they generate smoother paths. Despite being easy, efficient and extensible to adding objects to the scene, Artificial Potential Fields algorithms have the drawback of getting stuck in local minima [25]. As for Grid Based algorithms, when it comes to planning for high Degree-of-Freedom (DOF) robots, they become inefficient due to the increase of the graph size, which translates into an increase in planning time, and Cell Decomposition entails complex implementation with poor time efficiency [25]. Recent studies focus on a Sampling-Based Motion Planning approach, which provides large amounts of computational savings [27] and has been successfully implemented to solve difficult planning problems on robots of practical interest, despite most of its algorithms being only probabilistically complete, i.e. a solution will eventually be found with probability 1 if one exists [23].

The OMPL includes several state of the art Sampling-Based Motion planning algorithms, and it can be integrated with the platform MoveIt! in order to apply those algorithms to path planning problems in the robotics field [23]. [28] successfully integrated a multi-rotor UAS equipped with a multi-joint robotic arm within MoveIt!, resorting to the OMPL's Sampling-Based motion planning algorithms. [29] also successfully implemented sampling-based motion planning algorithms, through the OMPL and MoveIt!, to perform trajectory planning in a manipulator for grapevine pruning. The OMPL presents several motion planning algorithms, such as the algorithms from the PRM family, RRT and its variations, EST, KPIECE, STRIDE, PDST, FMT or BFMT (single-query). It would be too exhaustive to describe them all in detail; therefore, a brief description of PRM, RRT, RRTstar, RRTConnect and KPIECE, which are among the most commonly used, is presented in A.2.

[29] and [28] performed benchmarking of sampling-based motion planning algorithms for multi-joint robotic arm trajectory planning. [28] came to the conclusion that the RRT family algorithms have the most suitable performance, specifically the shortest path planning times and the fewest failed planning attempts. [29] concluded the same: in the scope of tridimensional path planning for 6 DOF manipulators, the RRT family algorithms present the best computation time.
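As a minimal sketch of how one of these planners is selected in practice (assuming an already configured MoveIt! setup and a planning group named "manipulator", both placeholders; the exact planner id string depends on the MoveIt! configuration), the OMPL's RRTConnect can be requested through the moveit_commander Python interface:

```python
import sys
import rospy
import moveit_commander

# Minimal sketch: ask OMPL's RRTConnect, via MoveIt!, for a collision-free plan.
moveit_commander.roscpp_initialize(sys.argv)
rospy.init_node("rrtconnect_sketch")

group = moveit_commander.MoveGroupCommander("manipulator")  # assumed group name
group.set_planner_id("RRTConnect")   # planner id as named in ompl_planning.yaml
group.set_planning_time(5.0)         # seconds the planner is allowed to search

group.set_position_target([0.4, 0.1, 0.3])  # illustrative goal in the base frame [m]
plan = group.plan()                  # returns an empty trajectory on failure
```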

1.3 Objectives

The core of the presented work is the design of a control module for automatic adhesive bead defect correction. The module is intended to be transversal to several robot frameworks, and the implementation of an optimization algorithm to perform the correction with an optimized trajectory is proposed. The concept is demonstrated using a collaborative robot and improved in an industrial environment. On top of that, the integration of the module with state of the art sampling-based motion algorithms through the robot framework ROS MoveIt! is proposed, with the main goal of producing trajectories that avoid static objects in constantly changing scenarios.

1.4 Thesis Outline

The beginning of Chapter 2 covers the theoretical background in terms of kinematics for the robots on which the module was implemented. A brief notion of how to operate with the robot frameworks provided by the manufacturers of these robots, as well as an alternative framework, ROS, is also presented. In the second part, trajectory planning methods are described in the scope of the frameworks presented previously. Finally, a suggestion to optimize the correction trajectory is presented, covering different optimization algorithms. Chapter 3 covers the implementation of the module in detail, on four use cases: firstly, a demonstrator using a collaborative robot in a non-industrial environment; second, an implementation in an industrial environment using an industrial robot, on post-process inspection, followed by an implementation on the same scene, on in-process inspection; finally, an adaptation of the module to the framework ROS MoveIt! is presented. In Chapter 4, results for the different use cases are shown, as well as results for tests with optimization algorithms to produce shorter trajectories.

Chapter 2

Theoretical Background

The solution was developed to be implemented in 4 use cases, which will be explained in detail in further sections. The first use case concerns a demonstrator in a non-industrial environment, for which the UR5 collaborative robot will be used. The other three use cases concern implementations in industrial environments, for which the ABB IRB2400 will be used.

In this chapter, kinematic concepts like direct, inverse and differential kinematics, as well as singularities, will be approached in the scope of both robots. There is a brief description of how to operate them, in terms of the frameworks and programming languages provided by the manufacturers, and an alternative to these frameworks, using the platform ROS MoveIt!, will be addressed, explaining its basic concepts and architecture. There is also a description of trajectory planning methods, resorting to built-in functions present in the robots' programming languages and to sampling-based motion algorithms, implemented through the ROS platform MoveIt!. Finally, optimization algorithms to decrease the task's cycle time will be presented.

Figure 2.1: The UR5 and the ABB IRB2400, taken from www.universal-robots.com and www.grupro.com

2.1 Kinematics

A manipulator can be represented as a chain of rigid bodies (links) connected through joints (either revolute or prismatic). The motion of each link with respect to the previous one builds the motion of the structure and dictates the end effector's (the last link on the chain) position and orientation (pose) [30].

In terms of modelling, the robot state representation can be made in joint space or operational space. The first concerns the n-dimensional vector of joint variables that defines the state of the manipulator joints; the second includes only the information about the end effector's pose (in a 3-dimensional Cartesian space, it consists of a vector describing the end effector's position (x, y, z) plus its orientation, which will vary in size depending on the representation chosen). Three parameters are sufficient to describe the orientation of a rigid body in space [30], so a minimal representation of orientation can be obtained by using a set of three angles. The most usual representations resort to Euler angles $(\varphi, \theta, \psi)$ and angle/axis $(\theta, r_x, r_y, r_z)$, but since the classical angle/axis representation has the drawback of having two possible orientations per set of parameters, its most well known form is the unit quaternion $(\eta, \epsilon_x, \epsilon_y, \epsilon_z)$ representation, which provides a workaround for this issue. In the scope of operational space, to compute the end effector's pose, it is necessary to know each link's pose. Attaching a coordinate frame to each link will not only describe its position and orientation but will also allow the use of an important algebraic operator, the rotation matrix $R_j^i$, which expresses the orientation of the coordinate frame corresponding to link j with respect to the reference frame corresponding to link i. To be able to compute the full pose vector, it becomes necessary to have information about the translation, and that information is integrated with the rotation matrix in the homogeneous transformation matrix $A_i^{i-1}$. This allows the computation of a link's pose resorting to the information regarding the joint values of the previous links, plus the translations between the coordinate frames of each link.

The computation of a link's pose with respect to the previous link's coordinate frame can be extended to the whole robotic link chain, i.e. from the base frame to the end effector. Direct kinematics is a function, expressed by the homogeneous transformation matrix $T_e^b(q)$, from the base coordinate system to the end effector, and it opens the possibility of computing the end effector's pose as a function of the joint variables. The inverse operation is called inverse kinematics and consists of the determination of the joint variables corresponding to a given end effector pose. It is of major importance to convert a goal assigned to the end effector in operational space into the corresponding joint space values that allow execution of the desired motion [30].

The relationship between the joint velocities and the corresponding end effector velocities (both angular and linear) is given by the differential kinematics. This mapping is described by the geometric Jacobian matrix.

2.1.1 Direct Kinematics

In this section, an overview of the mathematical tools necessary to compute the direct kinematics is
presented.

As mentioned in the previous section, the rotation matrix

$$R_j^i = \begin{bmatrix} \cos(\theta_i) & -\sin(\theta_i) & 0 \\ \sin(\theta_i) & \cos(\theta_i) & 0 \\ 0 & 0 & 1 \end{bmatrix} \qquad (2.1)$$

which is orthogonal, i.e.

$$R_j^i = (R_i^j)^{-1} = (R_i^j)^T \qquad (2.2)$$

and can be (de)composed in several sub-rotations, i.e.

$$R_k^i = R_j^i R_k^j \qquad (2.3)$$

allows the orientation of the link j to be expressed in respect to the reference frame of the link i, using the
joint variable θi . In order to get the full pose, an integration of the rotation matrix with a vector describ-
ing the translation between the origins of the two reference frames is made, using the homogeneous
transformation matrix  
Rji oij
Aij =   (2.4)
0T 1

where $o_j^i$ denotes the position of the origin of reference frame $j$ with respect to reference frame $i$. Finally, the operation is extended through the whole kinematic chain, multiplying all the homogeneous transformation matrices from the base frame to the end effector, resulting in the homogeneous transformation matrix $T_e^b(q)$

$$T_e^b(q) = A_1^b(q_1) A_2^1(q_2) \cdots A_e^{e-1}(q_e) = \begin{bmatrix} n_e^b(q) & s_e^b(q) & a_e^b(q) & p_e^b(q) \\ 0 & 0 & 0 & 1 \end{bmatrix} \quad (2.5)$$

where $q$ is the $(n \times 1)$ vector of joint variables, $n_e = [n_x\ n_y\ n_z]$, $s_e = [s_x\ s_y\ s_z]$, $a_e = [a_x\ a_y\ a_z]$ are the unit vectors of the frame corresponding to the end effector, and $p_e = [p_x\ p_y\ p_z]$ is the position vector of the origin of that same frame with respect to the origin of the base frame [30]. The Denavit-Hartenberg table presents the variables needed to construct the homogeneous transformation matrices coherently and systematically. In the next section, the tables are presented for each robot along with a picture of the links and the corresponding reference frames. The tables were derived from [31] and [32] respectively.
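To make the chain multiplication of eq. 2.5 concrete, the following minimal Python sketch (illustrative only, not part of the module developed in this work) builds each homogeneous transformation from one row of Denavit-Hartenberg parameters, following the standard convention, and composes them along the kinematic chain:

import numpy as np

def dh_matrix(a, alpha, d, theta):
    # Homogeneous transformation A_i^{i-1} for one DH row (standard convention)
    ct, st = np.cos(theta), np.sin(theta)
    ca, sa = np.cos(alpha), np.sin(alpha)
    return np.array([[ct, -st * ca,  st * sa, a * ct],
                     [st,  ct * ca, -ct * sa, a * st],
                     [0.0,      sa,       ca,      d],
                     [0.0,     0.0,      0.0,    1.0]])

def direct_kinematics(dh_rows, q):
    # dh_rows: list of (a, alpha, d) per link, e.g. the values of Table 2.1; q: joint angles
    T = np.eye(4)
    for (a, alpha, d), theta in zip(dh_rows, q):
        T = T @ dh_matrix(a, alpha, d, theta)
    return T  # T_e^b(q): the last column holds p_e^b, the upper 3x3 block holds [n s a]

For the UR5, dh_rows would be filled with the (a, α, d) values of Table 2.1 below.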

UR5

Figure 2.2: UR5 frames assignment [31] and dimensions, taken from www.zacobria.com

Table 2.1: Denavit-Hartenberg parameter table for the UR5

Link i         a_i       α_i     d_i       θ_i
1 (shoulder)   0         π/2     0.08916   θ1
2 (shoulder)   0.425     0       0         θ2
3 (elbow)      0.39225   0       0         θ3
4 (wrist)      0         π/2     0.10915   θ4
5 (wrist)      0         −π/2    0.09456   θ5
6 (wrist)      0         0       0.0823    θ6

ABB IRB2400

Figure 2.3: IRB2400 dimensions[33] and frame assignment

Table 2.2: Denavit-Hartenberg parameter table for the ABB IRB2400

Link i         a_i     α_i     d_i     θ_i
1 (shoulder)   0.1     π/2     0.615   θ1
2 (shoulder)   0.705   0       0       θ2
3 (elbow)      0.135   −π/2    0       θ3 − π/2
4 (wrist)      0       π/2     0.755   θ4
5 (wrist)      0       −π/2    0       θ5
6 (wrist)      0       0       0.085   θ6

2.1.2 Inverse Kinematics

As described before, the goal of inverse kinematics is to find the set of joint configurations $Q = \{q_i\}$, where $q_i = [\theta_1^i, \ldots, \theta_n^i]$ are the vectors of joint variables that satisfy eq. 2.5.

Since the ABB IRB2400 is an ortho-parallel manipulator with a spherical wrist [34] and the UR5 is an ortho-parallel manipulator with a non-spherical wrist, for any prescribed end effector pose there will be 8 possible solutions for both robots, i.e. 8 possible joint vectors.

Figure 2.4: Eight possible configurations for a prescribed pose in a 6DOF manipulator [35]

ABB IRB2400

The equations for the ABB IRB2400 inverse kinematics presented in this section were derived from [34]. The first step is the kinematic decoupling, i.e. the problem is divided into two subproblems, since the solution for the position is decoupled from that for the orientation. The first part of the solution concerns only the ortho-parallel manipulator, and the second concerns only the spherical wrist.

Figure 2.5: Ortho-parallel manipulator with spherical wrist, side and back view

Ortho-parallel manipulator

Considering the ortho-parallel substructure in Fig. 2.6, the coordinates of point C are obtained by moving from the desired end effector position $u^0$ into the negative $z_e$ direction of the end effector coordinate frame, by the length $c_4$:

$$\begin{bmatrix} c_{x0} \\ c_{y0} \\ c_{z0} \end{bmatrix} = \begin{bmatrix} u_{x0} \\ u_{y0} \\ u_{z0} \end{bmatrix} - c_4 R_e^0 \begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix} \quad \text{with} \quad R_e^0 = \begin{bmatrix} n_x^0 & s_x^0 & a_x^0 \\ n_y^0 & s_y^0 & a_y^0 \\ n_z^0 & s_z^0 & a_z^0 \end{bmatrix}$$

Figure 2.6: Ortho-parallel substructure [34]

As Fig. 2.7 suggests, when $\theta_1 = 0$, the partial structure obtained is a 3-RRR planar arm. The coordinates of point C in coordinate frame 1 are denoted as

$$c_{x1} = c_2 \sin\theta_2 + k \sin(\theta_2 + \theta_3 + \psi_3) + a_1 \quad (2.6)$$

$$c_{y1} = b \quad (2.7)$$

$$c_{z1} = c_2 \cos\theta_2 + k \cos(\theta_2 + \theta_3 + \psi_3) \quad (2.8)$$

where $\psi_3 = \mathrm{atan}(a_2/c_3)$ and $k = \sqrt{a_2^2 + c_3^2}$.

Figure 2.7: 3-RRR planar arm structure obtained with θ1 = 0 [34]

The coordinates of point C in coordinate frame 0, as a function of the first three joint angles, are:

$$c_{x0} = c_{x1} \cos\theta_1 - c_{y1} \sin\theta_1 \quad (2.9)$$

$$c_{y0} = c_{x1} \sin\theta_1 + c_{y1} \cos\theta_1 \quad (2.10)$$

$$c_{z0} = c_{z1} + c_1 \quad (2.11)$$

To find the joint space solutions for a given point C in space, the following geometric relations are derived from Fig. 2.7.

The distance between point C and joint 2 in direction $x_1$:

$$n_{x1} = c_{x1} - a_1 \quad (2.12)$$

The normal distance between joint 2 and point C:

$$s_1 = \sqrt{n_{x1}^2 + c_{z1}^2} = \sqrt{(c_{x1} - a_1)^2 + c_{z1}^2} = \sqrt{c_2^2 + k^2 + 2 c_2 k \cos(\theta_3 + \psi_3)} \quad (2.13)$$

Two possible solutions for $\theta_3$ are:

$$\theta_3 = \pm\arccos\left(\frac{s_1^2 - c_2^2 - k^2}{2 c_2 k}\right) - \mathrm{atan2}(a_2, c_3) \quad (2.14)$$

Now, let us take into consideration the configuration where $\theta_1$ is rotated (shoulder backwards).

Figure 2.8: Geometric construction of the top view of the two configurations obtained by rotating $\theta_1$ [34]

Note the geometric correlation between the two configurations:

$$\tilde{n}_{x1} = n_{x1} + 2a_1 \quad (2.15)$$

Similarly to 2.13, for the second configuration:

$$s_2 = \sqrt{\tilde{n}_{x1}^2 + c_{z1}^2} = \sqrt{(n_{x1} + 2a_1)^2 + c_{z1}^2} = \sqrt{(c_{x1} + a_1)^2 + c_{z1}^2} \quad (2.16)$$

which yields another pair of solutions for $\theta_3$:

$$\theta_3 = \pm\arccos\left(\frac{s_2^2 - c_2^2 - k^2}{2 c_2 k}\right) - \mathrm{atan2}(a_2, c_3)$$
2c2 k
The angle of the projection of point C onto the ground plane, measured about the $z_0$ axis, can be computed as the sum of $\theta_1$ and

$$\psi_1 = \mathrm{atan2}(b, n_{x1} + a_1)$$

i.e. (according to Fig. 2.8)

$$\mathrm{atan2}(c_{y0}, c_{x0}) = \theta_1 + \psi_1$$

hence

$$\theta_1 = \mathrm{atan2}(c_{y0}, c_{x0}) - \mathrm{atan2}(b, n_{x1} + a_1).$$

The second solution for $\theta_1$, corresponding to the rotated configuration, is:

$$\theta_{1,ii} = \theta_1 - \tilde{\theta}_1 = \theta_1 + 2\psi_1 - \pi.$$

Resorting to Fig. 2.7, the solution for $\theta_2$ can be found as:

$$\theta_2 = \mathrm{atan2}(n_{x1}, c_{z1}) - \psi_2 \quad (2.17)$$

where

$$\psi_2 = \arccos\left(\frac{s_1^2 + c_2^2 - k^2}{2 s_1 c_2}\right). \quad (2.18)$$

This solution corresponds to an "elbow down" configuration; for the "elbow up" configuration the solution is

$$\theta_{2,ii} = \mathrm{atan2}(n_{x1}, c_{z1}) + \psi_2 \quad (2.19)$$

and the remaining solutions can be obtained by substituting $s_1$ with $s_2$. Note that, according to Table 2.2 and [34], $c_1 = 615$ mm, $c_2 = 705$ mm, $c_3 = 755$ mm, $c_4 = 85$ mm, $a_1 = 100$ mm, $a_2 = 135$ mm and $b = 0$.
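As an illustration, eqs. 2.12-2.18 translate directly into code. The following Python sketch (illustrative only, assuming the constants above converted to metres, a reachable target and the shoulder-forward configuration) returns the "+" branch of θ3 together with the elbow-down and elbow-up solutions of θ2:

import numpy as np

# Link constants from Table 2.2 / [34], in metres
c1, c2, c3, c4 = 0.615, 0.705, 0.755, 0.085
a1, a2, b = 0.100, 0.135, 0.0

def front_elbow_solutions(cx1, cz1):
    # Eqs. 2.12-2.18: theta2/theta3 pairs for the shoulder-forward case
    k = np.hypot(a2, c3)            # k = sqrt(a2^2 + c3^2)
    psi3 = np.arctan2(a2, c3)
    nx1 = cx1 - a1                  # eq. 2.12
    s1 = np.hypot(nx1, cz1)         # eq. 2.13
    theta3 = np.arccos((s1**2 - c2**2 - k**2) / (2 * c2 * k)) - psi3  # eq. 2.14, "+" branch
    psi2 = np.arccos((s1**2 + c2**2 - k**2) / (2 * s1 * c2))          # eq. 2.18
    theta2_down = np.arctan2(nx1, cz1) - psi2   # eq. 2.17, elbow down
    theta2_up = np.arctan2(nx1, cz1) + psi2     # eq. 2.19, elbow up
    return theta3, theta2_down, theta2_up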

Spherical Wrist

Finally, a solution for the spherical wrist will provide the remaining joint angles ($\theta_4$, $\theta_5$, $\theta_6$). The key to the solution lies in the rotation matrix

$$R_e^c = \begin{bmatrix} n_x^c & s_x^c & a_x^c \\ n_y^c & s_y^c & a_y^c \\ n_z^c & s_z^c & a_z^c \end{bmatrix}$$

and in the fact that, since these angles constitute a set of ZYZ Euler angles, they can be directly extracted from it.

Due to the properties 2.2 and 2.3, $R_e^c$ can be computed without the knowledge of $\theta_4$, $\theta_5$ and $\theta_6$, because:

$$R_e^c = (R_c^0)^T R_e^0 \quad (2.20)$$

Therefore, the only variables needed are the rotation matrix $R_3^0$, which can be computed through direct kinematics knowing the first three joint angles, and the end effector's pose. The general solution is:

$$\theta_4 = \mathrm{atan2}(R_e^c(2,3),\ R_e^c(1,3))$$

$$\theta_5 = \mathrm{atan2}\left(\sqrt{R_e^c(1,3)^2 + R_e^c(2,3)^2},\ R_e^c(3,3)\right)$$

$$\theta_6 = \mathrm{atan2}(R_e^c(3,2),\ -R_e^c(3,1)) \quad (2.21)$$

for $\theta_5 \in (0, \pi)$, i.e. with the wrist pointing downwards, and

$$\theta_4 = \mathrm{atan2}(-R_e^c(2,3),\ -R_e^c(1,3))$$

$$\theta_5 = \mathrm{atan2}\left(-\sqrt{R_e^c(1,3)^2 + R_e^c(2,3)^2},\ R_e^c(3,3)\right)$$

$$\theta_6 = \mathrm{atan2}(-R_e^c(3,2),\ R_e^c(3,1)) \quad (2.22)$$

for $\theta_5 \in (-\pi, 0)$, i.e. with the wrist pointing upwards.
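A minimal Python sketch of the extraction in eqs. 2.21 and 2.22 (illustrative only; R_ec is a 3×3 NumPy array, and the indices are shifted because Python arrays start at 0):

import numpy as np

def wrist_angles(R_ec, wrist_down=True):
    # Extracts (theta4, theta5, theta6) from R_e^c as ZYZ Euler angles (eqs. 2.21/2.22)
    # R_ec[0, 2] corresponds to R_e^c(1,3) in the thesis notation
    r13, r23, r33 = R_ec[0, 2], R_ec[1, 2], R_ec[2, 2]
    r31, r32 = R_ec[2, 0], R_ec[2, 1]
    if wrist_down:   # theta5 in (0, pi)
        return (np.arctan2(r23, r13),
                np.arctan2(np.hypot(r13, r23), r33),
                np.arctan2(r32, -r31))
    else:            # theta5 in (-pi, 0)
        return (np.arctan2(-r23, -r13),
                np.arctan2(-np.hypot(r13, r23), r33),
                np.arctan2(-r32, r31))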

UR5

The equations for the UR5 inverse kinematics presented in this subsection were derived from [31].

First, using the position of the 5th joint, $\theta_1$ shall be computed resorting to the transformation from frame 1 to frame 5 using eq. 2.5, which results in:

$$\sin\theta_1 (p_x - d_6 z_x) + \cos\theta_1 (p_y - d_6 z_y) = -d_4$$

which is known as a phase-shift equation, where $p_x$ and $p_y$ are the x and y coordinates of the end effector, respectively. Its solution, considering Fig. 2.9, can be found as:

$$\tan\alpha_1 = \frac{{}^0p_{5y}}{{}^0p_{5x}}$$

$$\cos\alpha_2 = \frac{d_4}{R} = \frac{d_4}{\sqrt{{}^0p_{5x}^2 + {}^0p_{5y}^2}}$$

Hence

$$\theta_1 = \alpha_1 + \alpha_2 + \frac{\pi}{2} = \mathrm{atan2}({}^0p_{5y},\ {}^0p_{5x}) \pm \cos^{-1}\frac{d_4}{R} + \frac{\pi}{2} \quad (2.23)$$
Note that, since there is no possible physical configuration where $\sqrt{{}^0p_{5y}^2 + {}^0p_{5x}^2} \leq |d_4|$, which is easy to see in Fig. 2.9, both $\alpha_1$ and $\alpha_2$ always exist if an inverse solution exists. Also note that, similarly to the ABB case, there are two solutions for $\theta_1$, with the shoulder facing forward or backward. Using $\theta_1$, $\theta_5$ can be found resorting to the transformation from frame 1 to frame 6. The following equality can be formed:

$$\begin{bmatrix} -\sin\theta_1 & \cos\theta_1 & 0 \end{bmatrix} \begin{bmatrix} p_x \\ p_y \\ p_z \end{bmatrix} = -d_4 - d_6\cos\theta_5$$

which leads to:

$$\theta_5 = \pm\cos^{-1}\left(\frac{p_x \sin\theta_1 - p_y \cos\theta_1 - d_4}{d_6}\right) \quad (2.24)$$

Once again, there are 2 possible solutions, corresponding to the wrist facing upwards or downwards. To solve for $\theta_6$, we look at the ${}^6y_1$ coordinate axis:

$$\begin{bmatrix} -x_x \sin\theta_1 + x_y \cos\theta_1 \\ -y_x \sin\theta_1 + y_y \cos\theta_1 \\ -z_x \sin\theta_1 + z_y \cos\theta_1 \end{bmatrix} = \begin{bmatrix} -\cos\theta_6 \sin\theta_5 \\ \sin\theta_6 \sin\theta_5 \\ -\cos\theta_5 \end{bmatrix}$$

From Fig. 2.9, it can be seen that this equality forms a spherical coordinate expression for the vector ${}^6y_1$. The x and y coordinates of this vector form a system solved as:

$$\theta_6 = \mathrm{atan2}\left(\frac{y_y \cos\theta_1 - y_x \sin\theta_1}{\sin\theta_5},\ \frac{x_x \sin\theta_1 - x_y \cos\theta_1}{\sin\theta_5}\right) \quad (2.25)$$

$\theta_2$, $\theta_3$ and $\theta_4$ can be derived considering that the corresponding links act as a 3-RRR planar arm. Once the other joints are found, the locations of the base and end effector of this planar arm are known, and the computation of these 3 joints is trivial. There are 2 possible solutions for this planar arm, corresponding to the "elbow up" and "elbow down" configurations. Note that, from Table 2.1, $d_4 = 109.15$ mm, $d_5 = 94.56$ mm and $d_6 = 82.3$ mm.

Figure 2.9: Geometric representations for finding θ1 (top left), θ5 (top right) and θ6 (bottom)[31]
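For illustration, eqs. 2.23 and 2.24 translate almost directly into code. The sketch below (not the module developed in this work; the sign arguments merely select among the eight solutions) computes θ1 and θ5 from the end effector position p and the z-axis of the end effector frame, both expressed in the base frame:

import numpy as np

def ur5_theta1_theta5(p, z, d4=0.10915, d6=0.0823, shoulder="left", wrist="down"):
    # p: end effector position; z: z-axis of the end effector frame (base coordinates)
    p5 = p - d6 * z                         # position of joint 5
    R = np.hypot(p5[0], p5[1])
    s = 1.0 if shoulder == "left" else -1.0
    theta1 = np.arctan2(p5[1], p5[0]) + s * np.arccos(d4 / R) + np.pi / 2   # eq. 2.23
    w = 1.0 if wrist == "down" else -1.0
    theta5 = w * np.arccos((p[0] * np.sin(theta1) - p[1] * np.cos(theta1) - d4) / d6)  # eq. 2.24
    return theta1, theta5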

2.1.3 Singularities

As explained in the previous section, the $(6 \times 6)$ Jacobian matrix $J$ relates the operational space velocity, described by the vector $\nu$, to the joint space velocity, described by the vector $\dot{q}$, according to the following equation

$$\nu = \begin{bmatrix} \dot{p} \\ \omega \end{bmatrix} = J(q)\dot{q} \quad (2.26)$$

A singular configuration means that $J$ becomes ill-conditioned, i.e. $\mathrm{rank}(J) < 6$, which leads to the occurrence of constraints on the feasible end effector velocity [36]. In simple terms, a singularity occurs in a configuration where there can be infinite joint space solutions for a given operational space goal. It is not possible to impose an arbitrary motion to the end effector in this situation; moreover, in the neighbourhood of a singularity, small velocities in the operational space may cause large velocities in the joint space [30], which raises safety issues.

In the case of both 6-DOF manipulators considered, three main configurations corresponding to sin-
gularities should be taken into account:

• Elbow singularity It occurs when the elbow is stretched out.

• Shoulder singularity If the intersection point of the axes of joints 5 and 6 lies in one plane with the axes of joints 1 and 2 (UR5), or if the wrist point intersects the axis of joint 1 (ABB), its position cannot be changed by rotating that joint and the manipulator is in a shoulder singularity.

• Wrist singularity When the axes of joints 4 and 6 are parallel (UR5) or coincident (ABB), the robot is in a wrist singularity. For this to happen all it takes is

$$\sin\theta_5 = 0 \quad (2.27)$$
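In practice, proximity to a singularity can be monitored numerically. A possible sketch (the threshold value is an arbitrary assumption) checks the smallest singular value of the Jacobian, which tends to zero as rank(J) drops below 6, and also computes the classical manipulability measure as an alternative scalar indicator:

import numpy as np

def near_singularity(J, threshold=1e-3):
    # Near a singularity the smallest singular value of J approaches zero,
    # and small operational space velocities require large joint velocities (eq. 2.26)
    sigma_min = np.linalg.svd(J, compute_uv=False)[-1]
    return sigma_min < threshold

def manipulability(J):
    # Yoshikawa's manipulability measure w = sqrt(det(J J^T)); w = 0 at a singularity
    return np.sqrt(np.linalg.det(J @ J.T))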

2.2 Robot Frameworks

In this section, a description of how to operate both robots through the frameworks and programming languages provided by the manufacturers is presented. An alternative to these frameworks, using the ROS platform MoveIt!, is also described in terms of basic concepts and structure.

2.2.1 UR5

The UR5 is an ortho-parallel manipulator with a non-spherical wrist, with a maximum payload of 5 kg and a reach of 85 cm. Each joint has a maximum range of ±360° and a maximum joint speed of ±180°/s. An emergency shutdown is triggered when the force acting on the TCP exceeds 100 N or the momentum exceeds 10 kgm/s. The Universal Robot can be controlled at three different levels: the Graphical User-Interface Level (using PolyScope), the Script Level and the C-API Level. The first two run on the original firmware with a low-level robot controller named URControl. The communication between URControl and the joint motors is performed at 125 Hz [37].

C-API, PolyScope, URSim, URScript and Ethernet connections

Controlling the UR5 through the C-API can be done resorting to a C library provided by Universal Robots, which allows the direct control of all joint motors. It is possible to create a custom low-level control that runs with a maximum update rate of 125 Hz. The Graphical User-Interface is accessed through the teach pendant, which runs a software called PolyScope, allowing online programming through a visual programming language. Mainly developed for users without a programming background, PolyScope can be useful for fast implementation, control of basic actions, prototyping or debugging, and it can be used to write scripts in the URScript language as well.

Figure 2.10: Universal Robot Teach Pendant [38]

The URScript language is the programming language used to control the robot at the Script Level. It contains types, variables, flow of control statements (e.g. IF-statements and WHILE-loops), functions, etc. It also has some built-in functions and variables to monitor and control I/Os and trajectories (which will be addressed in section 2.3.1). In terms of communication, data regarding the robot state is continuously streamed through TCP/IP server sockets in the controller. Through port 29999, the dashboard server is used to load, play, pause and stop a PolyScope program. Ports 30001 and 30002 are reserved for data streams that provide robot state information such as joint position, joint orientation, temperatures, Cartesian position, Cartesian orientation, etc. Both ports accept commands at a 10 Hz update rate; however, commands are not read in a deterministic manner [37][38]. For streaming information, either receiving robot data or sending commands, at the update rate of 125 Hz, port 30003 can be used. URSim is an offline simulator developed by Universal Robots which uses the same GUI as the physical UR manipulator. It supports programming of the UR manipulator through the PolyScope visual language and/or URScript, similarly to the Teach Pendant, and it mainly runs on Linux (or inside a Linux-based virtual machine).
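As an illustration of the Script Level, a single URScript line can be pushed to the controller over one of these sockets. The sketch below (the robot IP address is a hypothetical example) sends a movej command through port 30002:

import socket

ROBOT_IP = "192.168.0.100"   # hypothetical address of the UR controller

def send_urscript(line):
    # Each command must be a complete URScript statement terminated by a newline
    with socket.create_connection((ROBOT_IP, 30002), timeout=2.0) as s:
        s.sendall((line + "\n").encode("utf-8"))

send_urscript("movej([0.0, -1.57, 1.57, 0.0, 1.57, 0.0], a=1.0, v=0.5)")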

2.2.2 ABB IRB2400

As mentioned before, the ABB IRB2400 is an ortho-parallel arm with a spherical wrist. With a maximum payload of 16 kg (20 kg with some limitations), each one of the six joints is actuated by a 3-phase electric motor rotating around its central axis, following the right-hand rule. The joints' ranges of motion, as well as their maximum velocities, are shown in the table below [33].

Table 2.3: Joint angle and velocity limits for the ABB IRB2400

Joint   Range of motion                           Maximum speed
1       +180° to −180°                            150°/s
2       +110° to −100°                            150°/s
3       +65° to −60°                              150°/s
4       +200° to −200° (unlimited as optional)    360°/s
5       +120° to −120°                            360°/s
6       +400° to −400°                            400°/s

The IRB2400 triggers an emergency stop when a maximum load of 4100 ± 1900 N or a maximum torque of ±900 Nm is reached. In manual mode, the TCP has a maximum velocity of 250 mm/s.

FlexPendant, RobotStudio and RAPID

The controller for the ABB IRB2400 is the IRC5, a PC-based controller internally divided into two modules: the drive module, which contains the drive system and is responsible for powering the joint motors, and the control module, which contains the main computer, the main switch, communication interfaces, the FlexPendant connection, service ports and some space for customer peripherals (e.g. I/O boards). The IRC5 runs on a firmware called RobotWare and the PC boots an operating system called VxWorks, in which all the control and path planning is performed. There are three operating modes: manual, manual at full speed and automatic. The IRC5 controller can be accessed either by the FlexPendant or by a computer. Similarly to the UR's teach pendant, the FlexPendant allows the control of basic actions like jogging the robot, producing, editing and running programs and visualizing inputs/outputs through a visual programming language, making it easier to work with for users without a programming background.

Figure 2.11: The IRC5 and the FlexPendant[39]

RobotStudio is a versatile software developed by ABB for modelling, simulating and offline programming of robot cells. It can run on a laptop, allowing the user to program the robot in a virtual world, since it emulates the RobotWare installed in the IRC5 through a virtual controller. Once the application is created and tested, it can be uploaded to the IRC5. RobotStudio allows the creation of CAD cells, which can be imported from a CAD software in order to approximate the simulation world to the real working environment. Paths can be created graphically and converted to the high-level programming language RAPID, or can be written directly in the RAPID language.

RAPID is a high-level programming language developed by ABB for the programming of robot controllers and manipulators of the same brand [40]. It runs on a hierarchical and modular program structure consisting of routines. The routines can be functions or procedures constructed with expressions and instructions, which can be arithmetic and logic operations, conditions, data handling, read/write, motion functions (which will be addressed further on), waiting and interrupting. Like the URScript language, RAPID also contains flow of control statements, like WHILE, IF or FOR loops. Multiple tasks can be executed resorting to an option called Multitasking [41].

2.2.3 ROS

ROS (Robot Operating System) is an open-source project consisting of a set of tools, libraries and drivers with the purpose of providing a unified environment for the development of robotic applications across many robotic platforms. Regarding the field of industrial robotics, ROS is associated with some of the main robot manufacturers (Universal Robots, ABB, Fanuc and KUKA, among others), which provide compatible ROS drivers for their own robots as well as robot models, simulation tools, tutorials and other support [42].

Nodes, topics, services and actions

To understand the ROS architecture and its way of working, a brief explanation of the concepts of packages, nodes, messages, topics and services will be provided.

Packages are the top level element in the ROS software organization, they contain nodes, libraries,
datasets, configuration files or anything else that may be essential to build a module without making it
too heavy, so other users can integrate it in their own work smoothly[43].

Nodes are processes that perform computation and communicate with each other, creating a message graph in which the edges represent topics. A message is a simple data structure comprising typed fields. It can be published on a topic by a node, and that topic can be subscribed to by another node, over a network maintained by the ROS master [28].

Figure 2.12: Basic publisher/subscriber interaction, taken from www.mathworks.com

Request/reply types of interactions are accomplished resorting to services, using a specific pair of messages, a request and a reply. A ROS node provides the service, which must always be active and can be called by a client (by sending a request), who then waits for the reply.

If it is desired to call a service while monitoring information regarding the operation and performing some other tasks in the meanwhile, ROS actions should be used. ROS actions have a specific client-to-server communication protocol comprising five message types: the action client can send goals to the server, as well as cancel them, and it can receive messages corresponding to the server status, the goal state, the goal feedback and the information about the success of the goal result.

Gazebo

Gazebo is a three-dimensional physics simulator whose development has been driven by the increasing use of robots and the need for a comprehensive tool to aid the development of control mechanisms for interacting with three-dimensional space. It has the capability of modeling the physics of complex environments, attributing parameters such as mass, inertia, velocity and friction to a simulated object in order to accurately emulate its behaviour when subjected to an input or a disturbance [44]. Simulated objects can have sensors, cameras or any interfaces associated to facilitate data flow. Gazebo reads URDF (Unified Robot Description Format) and/or SRDF (Semantic Robot Description Format) files, which are XML-format files, and builds the simulated world according to the files' content. All rigid bodies are declared as links and, for every link, a joint has to be declared in order to connect it to the world (or to another link). When declaring a link, information should be written regarding its position, orientation, mass, inertia and geometry. When declaring a joint, indications about the respective links' identities and their orientation with respect to one another should be provided; it is also necessary to declare whether the joint is fixed, rotational or planar. All this information makes the dynamic, kinematic and collision models possible to obtain. Motors can be declared in the joint's physical space, making it possible for force to be applied to generate motion due to friction between two bodies [28, 43].

Figure 2.13: Client to server relationship, taken from mathworks.com

TF Trees

ROS has a package named TF, which is used to keep track of multiple coordinate frames over time, maintaining the relationship between them in a tree structure and letting the user transform points or vectors between any two coordinate frames [43]. This operation is, basically, a homogeneous transformation. For a given manipulator, a TF tree can be interpreted as the robot's kinematic chain.

ros_control

ros_control is a set of packages which takes as input the joint states from the robot's actuators' encoders and uses a feedback control loop (typically a PID controller) to compute the output and send it to the actuators. The packages in ros_control include [45]:

• Controller interfaces Responsible for processing the data and sending commands to the hardware interface. It has a list of controller plugins, such as effort_controllers, to control force/torque, and position_controllers, velocity_controllers, joint_trajectory_controllers and joint_state_controller to perform kinematic control.

• Hardware interfaces It guarantees the communication between the hardware and the ROS con-
trollers.

• Transmissions This element is responsible for transforming effort/flow variables. Its implementations map input effort/flow variables to output effort/flow variables while preserving power, e.g. the product of the input force and velocity is equal to the product of the output force and velocity. A diagram of the ROS control architecture can be consulted in A.6.

gazebo_ros_control

When working on a Gazebo simulation, the robot's controller can be simulated using ros_control and gazebo_ros_control. The message flow between simulation, hardware, controllers and transmissions is depicted in A.7. In order to make proper usage of gazebo_ros_control, it is essential to [46]:

• Add transmission elements In order to link actuators to joints, the transmission element has to be added to the URDF. In it, the joint names have to be declared, as well as the type of transmission (currently, only transmission_interface/SimpleTransmission is implemented) and which hardware interface simulation should be loaded.

• Add the gazebo_ros_control plugin Adding the plugin to the URDF parses the transmission tags and loads the simulated hardware interface and controller manager.

Figure 2.14: TF tree for a 6 DOF manipulator; each edge represents the transform from the parent coordinate frame to the child coordinate frame

MoveIt!

MoveIt! is a motion planning ROS framework for manipulator arms. It provides integration of motion planning algorithms, 3D sensors, kinematics and robot control interfacing [28]. MoveIt!'s central node is called move_group and its services and actions can be accessed by users in one of three ways [47]:

• C++ API using the move_group_interface package, primarily meant for advanced users and useful when creating higher-level capabilities.

• Python API using the moveit_commander package. Meant for scripting demos and building applications (see the sketch after this list).

• GUI using the Motion Planning plugin to RViz (the ROS visualizer). Recommended for visualization, introduction to MoveIt! and for quick demonstrations.
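A minimal sketch of the Python API mentioned above (the planning group name "manipulator" and the frame name "base_link" are assumptions that depend on the specific MoveIt! configuration):

import sys
import rospy
import moveit_commander

moveit_commander.roscpp_initialize(sys.argv)
rospy.init_node("moveit_sketch")
group = moveit_commander.MoveGroupCommander("manipulator")  # assumed group name
group.set_pose_reference_frame("base_link")                  # assumed frame name
group.set_position_target([0.4, 0.1, 0.3])                   # x, y, z in metres
plan = group.plan()        # an OMPL planner computes a collision-free path
group.go(wait=True)        # execute on the (real or simulated) robot
group.stop()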

MoveIt! builds on several component technologies, such as the following:

Collision checking Using the FCL (Flexible Collision Library) package, MoveIt! can integrate any collision checker. The collision checking capabilities are implemented using a plugin architecture, and FCL provides the ability to perform the checking continuously. Since collision checking often represents the most expensive part of motion planning, it is possible for the user to specify which pairs of bodies do not need to be checked against each other through the allowed collision matrix, which is configured by the user when building his MoveIt! package [47].

Kinematics MoveIt! solves inverse kinematics using a plugin-based architecture and provides a native implementation for forward kinematics. By default it uses a numerical inverse kinematics solver that works for any robot, but users can add their own solvers, since analytic solvers are much faster than numerical ones. There are plugin-based solvers with analytical approaches available, such as IKFast [47].

Motion planning The motion planners work with MoveIt! through a plugin interface, with the aim of opening the possibility of using different motion planners from multiple libraries, making MoveIt! easily extensible. The interface is assured by a ROS action or service. MoveIt! integrates directly with OMPL and uses its motion planners as default planners. Since the planners in OMPL are not necessarily targeted at robotics (i.e. they are abstract), MoveIt! configures the library and provides the back-end for it to work with robotics problems [47].

Planning scene The representation of the robot and its surroundings is assured by the planning scene, which also stores the state of the robot itself. It is maintained by the planning scene monitor inside the move_group node. The planning scene monitor listens to robot state information on the joint states topic and to transform information from the ROS TF transform tree, to sensor information through a world geometry monitor that integrates 3D occupancy information and other object information, and to world geometry information coming from user input or other sources. The planning scene interface allows the users to modify the state of the world in an intuitive way [47].

Through a ROS interface, MoveIt! can connect directly to a robot (either a physical robot or a simulation in Gazebo). There are some requirements for everything to work properly:

• Joint states and transform information Information regarding the joint states, at least the position of each joint, should be published on the joint states topic at a reasonable rate (typically 100 Hz). The joint state information is used for MoveIt! to maintain its own TF tree internally.

• Trajectory Action Controller A trajectory action controller is required, supporting a ROS action interface using the FollowJointTrajectory action in the control_msgs package. This can be created through a .yaml file (when simulating in Gazebo) or can be provided either by the community or by the manufacturers. It has the responsibility of translating the high-level "ROS language" to the low-level robot language.

Figure 2.15: Moveit! high-level architecture [47]

In fig. 2.15 the MoveIt! high-level architecture is depicted. In the top block, ROS Param Server refers to the ROS parameter server, where the robot/world description is loaded according to the several URDF or SRDF files that describe it. That information is sent to the main node, move_group, which is responsible for interacting with the robot. The gray blocks represent the different interfaces by which the user can access MoveIt!. The blue blocks represent all the actions related with actuators and sensors.

2.3 Trajectory Planning

2.3.1 Trajectory Planning

In this section, methods for trajectory planning will be approached in the following order:

• Trajectory planning for the UR5, resorting to the built-in functions present in the URScript language, succinctly describing each one of them as well as their syntax.

• Trajectory planning for the ABB IRB 2400, resorting to the built-in functions present in the RAPID language, succinctly describing them and their syntax.

• Trajectory planning resorting to sampling-based motion planning algorithms present in the Open Motion Planning Library, integrated in the MoveIt! platform.

Having addressed the trajectory planning problem for a specific target, the problem of how to order the targets arises. This is an important matter and it will be covered by the end of this section.

UR5

The Move instructions are built-in functions present in the URScript programming language. There are Move instructions that work in joint space and Move instructions that work in operational space; in the latter case, the controller performs the inverse kinematics to actuate the robot's joints. The main Move instructions for the UR5 are the following:

• moveJ The right choice if it is desired for the robot arm to move fast between waypoints; moveJ calculates the movements in the joint space of the robot arm. The computation is made so that all the joints reach the desired location at the same time. The input arguments for this type of command are the maximum joint speed and joint acceleration; it is important to note that the path of the TCP between the waypoints is disregarded.

• moveL Plans the path between two waypoints with the main constraint of keeping the end effector on a straight-line path; this means that each joint performs a more complicated motion so that the constraint is not violated. The input arguments for moveL are the TCP speed, the acceleration and a feature. The feature can be a reference frame in which the positions are represented. This brings the advantage of planning on multiple reference frames in the same chain of instructions without having to compute transforms to a general reference frame.

• moveP Similar to the moveL instruction, moveP also moves the end effector linearly with constant speed, with the addition of inserting a circular blend (blend radius) to smooth the trajectory when there is a change in direction. This is very important, since an abrupt change in the end effector's path direction means a discontinuity in velocity and, therefore, a mandatory TCP stop.

The general syntax is [48]:

movei(pose, a, v, t, r),  i ∈ {J, L, P}

pose Target pose (in operational space), which can be specified in joint space for moveJ. The units for position are m, Euler angles for the orientation, and rad for the joint space. The reference frame (which is previously defined and stored in memory) for the pose should also be indicated.

a Acceleration of the TCP, in m/s², or joint acceleration of the leading axis, in rad/s², if moveJ is used.

v Velocity of the TCP, in m/s, or velocity of the leading axis, in rad/s, in the case of moveJ.

t Time, in s. This argument is only available for moveJ and moveL.

r Blend radius, expressed in m.

Figure 2.16: Blend radius for the MoveP instruction [37]

IRB2400

Similarly to the UR5, the ABB IRB2400 also has Move instructions to work in joint space and operational space:

• moveJ Similar to the UR5's moveJ.

• moveL This command performs in the same fashion as the UR5's moveP; the inputs are the target pose, the speed and the zone, which is the equivalent of the UR5's blend radius, i.e. the radius of the semi-curve joining two path segments.

• moveC Used to move the end effector circularly to a given destination; moveC performs the motion without changing the orientation. The input arguments are the same as for the previous instruction.

• moveLDO Works similarly to the moveL instruction, with the difference of setting/resetting a specified output in the middle of the corner path, i.e. at the zone's middle point.

The general syntax is:

Movei [ToPoint] [Speed] [Zone] [\WObj] [Signal] [Value],  i ∈ {J, L, C}

Figure 2.17: Zone for the MoveL/MoveLDO instruction[41]

ToPoint Data type: robtarget. It is defined as a named position and it describes the target pose. The units for the position are mm, and unit quaternions are used for the orientation. The robot configuration can be chosen by declaring a value between 0 and 7, corresponding to the 8 different inverse kinematics solutions.

Speed Data type: speeddata. It defines the velocity at which the end effector moves, in mm/s.

Zone Data type: zonedata. It describes the size of the generated corner path. In other words, its value specifies how a position is to be terminated, i.e. how close to the programmed position the axes must be before moving towards the next position. If the target is to be reached exactly, the string fine should be written in the argument's position.

WObj Data type: wobjdata. The coordinate system to which the robot position in the instruction is related. This argument is optional; if omitted, the position is related to the world coordinate system.

Signal Data type: signaldo. Used in the moveLDO instruction, indicates the name of the digital signal to be changed.

Value Data type: dionum. Indicates the desired value of the signal (0 or 1).
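As a usage illustration, such an instruction can also be generated textually, which is how the main software module described in chapter 3 produces its RAPID correction files. The Python sketch below (the speed, zone, tool and work object names are hypothetical defaults) builds one MoveL line:

def rapid_moveL(x, y, z, q, speed="v100", zone="z1", wobj="wobj1"):
    # Builds one MoveL line; position in mm, orientation q as a unit quaternion,
    # configuration [0,0,0,0] and external axes left at defaults (hypothetical choices)
    robtarget = "[[{:.2f},{:.2f},{:.2f}],[{},{},{},{}],[0,0,0,0],[9E9,9E9,9E9,9E9,9E9,9E9]]".format(
        x, y, z, *q)
    return "MoveL {}, {}, {}, tool0 \\WObj:={};".format(robtarget, speed, zone, wobj)

print(rapid_moveL(500.0, 100.0, 300.0, (1, 0, 0, 0)))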

Figure 2.18: MoveL/MoveP and MoveJ [41]

2.3.2 MoveIt! and the OMPL

The state space can be defined as the set of all translations and rotations of a free-flying rigid body. This definition can be extended to n dimensions when considering an n-DOF manipulator. It is also the union of the set of configurations the robot can assume where it does not contact any obstacle (valid states) and the set of configurations where the robot is in collision (non-valid states). Depending on the constraints' definition, collision may not be the only criterion to define whether a state is valid or non-valid.

Sampling-based motion planning consists of the approximation of the connectivity of the state space by a graph structure. The state space is sampled (discretized) and the randomly selected samples (which must represent valid states) form the vertices of a graph; the selected sampled states are then connected through edges that represent paths [23]. At this point, it may be important to introduce the notion of completeness: an algorithm is complete if it finds a solution whenever one exists. An algorithm is "resolution-complete" if it is able to find a solution, if one exists, when the discretization is fine enough to capture the relevant information. Finally, an algorithm is "probabilistically complete" if the probability of finding a solution, if one exists, converges to 1 as planning time increases. As mentioned in 1.2.4, most sampling-based algorithms are probabilistically complete [25]. Sampling-based motion planning algorithms can be divided in two main categories:

• Single-query planners Solve a query specified by a start state and a goal region by growing a tree
of configurations between them.

• Multi-query planners Build a roadmap of the entire environment to solve multiple queries.

As mentioned in 1.2.4, OMPL stands for Open Motion Planning Library, a library consisting of many state-of-the-art sampling-based motion planning algorithms. It is integrated by the ROS package MoveIt!, and the paths produced by OMPL are translated by MoveIt! into dynamically feasible trajectories, making it possible to solve collision avoidance problems while optimizing trajectories. The concept of optimality, in terms of path planning in robotics, can vary. An algorithm is optimal with respect to a certain criterion if it is able to find a path that reaches the goal while optimizing such criterion, which can be energy consumption, length, execution time, mechanical effort, among others. An algorithm can also be "resolution-optimal" or "probabilistically optimal"; these are weaker notions of optimality, similar to the weaker notions of completeness. OMPL can use these algorithms for geometric planning, which should be used when only geometric constraints are taken into account, and control-based planning, which should be used when the system under consideration is subject to differential constraints, such as maximum velocities or accelerations [23].

2.3.3 Ordering the Targets

Since this work aims to propose a more advantageous solution than the ones existing on the market, and considering the room for improvement mentioned in section 1.2.1 regarding the Quiss RTVision, it becomes necessary to present an alternative to repeating the path of the whole bead to perform the correction. The task executed by the robot, when it approaches the gaps, can be seen as an analogy to the symmetrical TSP (Travelling Salesman Problem). In this problem, a salesman has to visit a certain number of cities travelling the shortest route without missing or repeating a city. Like the salesman, the robot has to visit every single gap (city) travelling the shortest route without missing or repeating a gap. Note that this is a symmetrical TSP, since the distance from gap A to gap B is the same as the distance from gap B to gap A, assuming that there are no obstacles. The challenge associated with this problem comes from the fact that it is an NP-complete optimization problem (Nondeterministic Polynomial-time complete) and there is no known algorithm that solves it in polynomial time. As a result, the execution time complexity of the solving algorithm will be exponential in the size of the input [49, 50]. There are several approaches to solve the problem: [50] tests several algorithms and concludes that the Discrete Cuckoo Search algorithm produces the best solutions in terms of tour length. According to [49], for an accuracy of 100% (meaning that the solution found is equal to the best solution known), the best computation time is achieved with a population size equal to 5, a probability coefficient of 0 and a number of evolutions equal to 6. [51] advocates the use of Dijkstra's algorithm to find a solution for this problem. Despite knowing that a comparison with the TSP might be exaggerated, since the location and number of gaps that can occur on a bead will hardly reach the complexity of a typical TSP, the comparison is more reasonable than an underestimation of the problem. The target ordering problem is, in fact, analogous to the TSP with a reduced complexity; therefore, an approach that works for the TSP will work for it as well. Taking this fact into account, it was decided to test and compare the approaches suggested by the literature, despite knowing at first that the meta-heuristic might be unsuitable for the number of possible solutions in question.

Chapter 3

Implementation

The implementation can be divided in the following steps:

• A general and simplified case, in a non-industrial environment, with the main goal of serving as a demonstrator and proving the concept, using the UR5.

• A use case, where the inspection and correction are performed post-process, in an industrial environment, using the IRB 2400.

• A second use case, where the inspection and correction are performed in-process, in an industrial environment, using the IRB 2400.

• Adapting the algorithms from the previous implementations to the robot framework ROS MoveIt!. The main goal is to integrate the OMPL planners, in order to apply the concept to parts with different geometries. This will build the basis for future work to be developed by the research department of Introsys, where a collision model is constantly being updated through a vision system, and the planners are able to recalculate collision-free trajectories in a scenario that is frequently changing.

3.1 Demonstrator-UR5 on Post Process Inspection

3.1.1 Scene

The scene consists of a single part resting on a pallet, a conveyor, an inspection station with a translucent dome (for better lighting control) surrounding a vision system, a station for applying/correcting composed of the UR5 with a bead dispenser mounted on the TCP, one position sensor for each station and a start button. By activating the start button, the pallet transports the part to the applying station, the robot draws a bead and, when the bead is drawn, the pallet transports the part to the inspection station. If the inspection reveals no gaps, the process restarts and a new bead is drawn on a new part.

Figure 3.1: Scene for the demonstrator

If the inspection reveals gap defects, the pallet moves the part to the correction station, where the bead is corrected. The part is inspected once again and, if it still presents gaps, the correction/inspection cycle repeats until they are eliminated or a stop command is actuated. Otherwise, a new part is inserted and the main process restarts.

Flowchart

Figure 3.2: Flowchart for the demonstrator

3.1.2 Designing the generic main software module

The programming language chosen for the implementation of the main module was Python. The main reasons for this choice were the fact that MoveIt! has a Python API and Python's simplicity of implementation in comparison to C++. The main goal is to create a Python module that processes the data coming from the inspection, transforms it into robot targets, organizes them in the most efficient way and issues the order for the robot to execute the trajectories. This module should be as transversal to the various implementations as possible.

The general trajectory for the correction segments is depicted below. Points 1 and 4 are approximation points, while the correction itself happens on the trajectory from point 2 to point 3, where the adhesive is being dispensed.

Figure 3.3: General trajectory for gap correction

Communication with the Vision System, Data processing and target ordering

The communication is performed via TCP/IP, through the IP 192.168.0.50 and the port 2000. This IP corresponds to a server created to ensure communication between the vision system, the robot and the main module, where the vision system and the robot are clients. The server is implemented in a submodule called listener, resorting to the Python socket library. To simulate the communication with any external devices, a program called SocketTest was used. SocketTest can create both TCP and UDP clients or servers and it can be used for testing any client or server that uses those communication protocols.

Figure 3.4: SocketTest

The main module runs a thread which is always waiting to receive data from the client, structured as an array of coordinates. The structure for the data is $\{[x_{start}, y_{start}, z_{start}, x_{end}, y_{end}, z_{end}]_{gap\,1} \ldots [x_{start}, y_{start}, z_{start}, x_{end}, y_{end}, z_{end}]_{gap\,n}\}$; therefore, the coordinates are sent by the vision system in multiples of six. The array is split and organized into an array of points by a submodule called writer. Each point is a target for the robot consisting of a list of 3 floats, $(x, y, z)$; the orientation is added later, so that the TCP is perpendicular to the robot's base frame at every target. The middle point of each gap is calculated in order to know the distances between the gaps, so that an algorithm for ordering them in the most efficient way can be implemented.
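A minimal sketch of this parsing step (illustrative only; the actual writer submodule also attaches the orientation to each target):

def parse_gaps(coords):
    # coords: flat list [x_s, y_s, z_s, x_e, y_e, z_e] repeated once per gap
    assert len(coords) % 6 == 0
    gaps = []
    for i in range(0, len(coords), 6):
        start, end = coords[i:i+3], coords[i+3:i+6]
        middle = [(s + e) / 2.0 for s, e in zip(start, end)]
        gaps.append({"start": start, "end": end, "middle": middle})
    return gaps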

An array with the coordinates of the middle points is sent to a submodule called optimizer, which applies an optimization algorithm to reorganize the points. On a first approach, a switch case was implemented for three different algorithms:

• Dijkstra Algorithm Suggested as an approach for solving the TSP in [51].

• Cuckoo Search Algorithm Suggested as an approach for solving the TSP in [49].

• Default Order Algorithm In this case, the targets are organized in the order they emerged. This option has the purpose of proving the benefit of using one of the other two algorithms.

The cuckoo search algorithm's parameters were tuned according to the results obtained in [49]. The algorithms were adapted from [52] and [51]. A comparative study on the length of the path generated by these three algorithms, as well as their computation time, is presented in section 4.4; the study concluded that Dijkstra's algorithm is the most suitable.

Once the middle points are reordered, the main module constructs an array with the targets according to the order dictated by the optimizer.
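For reference, even a simple greedy heuristic illustrates the ordering task solved by the optimizer. The sketch below (a nearest-neighbour baseline for comparison only, not the Dijkstra or cuckoo search implementations actually used) visits, at each step, the closest unvisited middle point:

import math

def nearest_neighbour_order(middles, start=(0.0, 0.0, 0.0)):
    # Greedy tour: repeatedly visit the closest unvisited gap middle point
    remaining = list(range(len(middles)))
    order, current = [], start
    while remaining:
        i = min(remaining, key=lambda j: math.dist(current, middles[j]))
        remaining.remove(i)
        order.append(i)
        current = middles[i]
    return order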

Sending the Script to the controller

The next step comprises the communication with the robot itself. At this stage, to communicate with the UR5 controller, the URScript route was chosen, simply because there is more support available in comparison to the C-API [53]. As mentioned before, the instructions for the ABB have to be written in RAPID. In light of these facts, the targets have to be encapsulated in Move instructions for both robots. On a first approach, coordinates were sent through TCP/IP directly to a program written in PolyScope. This approach proved to be problematic, since a variable had to be declared in the PolyScope program for every point, to later be encapsulated in a Move instruction. This demands knowing how many gaps the bead has before the initialization of the program. Before thinking about a workaround, and since the problem would probably persist when communicating with the ABB, another approach emerged which proved to be simple and effective: writing a script file in URScript containing the Move instructions encapsulating the targets and setting/resetting the output to dispense the adhesive, within the writer submodule, and then sending it to the robot controller. The file is sent through SFTP (Secure File Transfer Protocol) resorting to the pysftp library in a submodule called sender. It is crucial that the TCP performs a smooth, known path with as little speed variation as possible while dispensing adhesive; otherwise it would not be possible to maintain a bead with constant width or to control where the bead is dispensed. Taking into account the options presented in section 2.3.1, the instruction MoveP was chosen to encapsulate the targets, with a blend radius of 0.001 m. An example of the code from a correction file is shown in A.3.
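A condensed sketch of the writer/sender steps described above (the credentials, digital output number and fixed tool orientation are assumptions; the real module builds the full approach/retract sequence of fig. 3.3):

import pysftp

# Hypothetical credentials for the UR controller's SFTP server
HOST, USER, PASSWORD = "192.168.0.100", "username", "password"

def write_and_send(gaps, local="correction.script", remote="/programs/correction.script"):
    with open(local, "w") as f:
        f.write("def correction():\n")
        for g in gaps:
            # move to the gap start, dispense until the gap end (cf. fig. 3.3)
            f.write("  movep(p[{0[0]},{0[1]},{0[2]},0,3.14,0], a=1.2, v=0.25, r=0.001)\n".format(g["start"]))
            f.write("  set_digital_out(0, True)\n")   # assumed dispenser output
            f.write("  movep(p[{0[0]},{0[1]},{0[2]},0,3.14,0], a=1.2, v=0.01, r=0.001)\n".format(g["end"]))
            f.write("  set_digital_out(0, False)\n")
        f.write("end\n")
    with pysftp.Connection(HOST, username=USER, password=PASSWORD) as sftp:
        sftp.put(local, remote)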

Interaction with the main software module

Two programs were created in PolyScope. The main program executes the "default" routine, where the bead is drawn, the part is inspected and, if there are no defects, the cycle repeats. The correction program consists of a correction followed by an inspection; it is called in case of defects, repeating in a loop until the inspection reveals no defects or a stop command is actuated.

The coordination between these two programs is performed through the main module. In this particular implementation there is a thread which is always running, containing a succession of conditional statements. In this thread, a variable is assigned to a string received from the robot through port 29998. When the main program starts, it sends the string "dont correct", which puts the thread on waiting mode while the bead is drawn, the inspection is performed and the data is collected, processed and sent to the controller. When the inspection is concluded, the main program sends the string "inspected". If the inspection reveals no gaps, no action is performed and the main program repeats for a new part. If the inspection reveals gaps, a string is sent through port 29999 ordering the robot to stop the main program, then load and play the correction program. The robot corrects the bead, resorting to the script that was sent to the controller and, after a second inspection, the correction program sends the string "corrected". If the inspection reveals gaps, the correction program repeats; if it does not, a string is sent through port 29999 ordering the robot to stop the correction program, then load and play the main program.

Algorithm 1 Interaction with the main software module


procedure M ANAGING THE P OLY S COPE PROGRAMS
string ← string received from PolyScope
n_gaps ← number of gaps detected
Main ← Main PolyScope program
Correction ← Correction PolyScope program
top:
while string = "dont correct" do
wait
if string = "inspected" and n_gaps = 0 then
goto top
if string = "inspected" and n_gaps > 0 then
stop Main
load Correction
play Correction
goto top
if string = "corrected" and n_gaps > 0 then
goto top
if string = "corrected" and n_gaps = 0 then
stop Correction
load Main
play Main
goto top
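The stop/load/play orders mentioned above go through the dashboard server. A minimal sketch of this interaction (the IP address and program name are hypothetical):

import socket

def dashboard(command, ip="192.168.0.100"):
    # Port 29999 (dashboard server) accepts one textual command per line,
    # e.g. "stop", "load correction.urp", "play"; it replies with a status string
    with socket.create_connection((ip, 29999), timeout=2.0) as s:
        s.recv(1024)                       # welcome banner
        s.sendall((command + "\n").encode())
        return s.recv(1024).decode().strip()

dashboard("stop")
dashboard("load correction.urp")
dashboard("play")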

3.1.3 Choosing the technology for Guidance and Inspection

Although the concepts of NDT and robot guidance are different and usually serve different purposes, in the scope of this work the inspection and point cloud acquisition are performed in the same step, since the information regarding the existence of a defect should come with its coordinates for it to be corrected. Therefore, the choice of the right NDT technique should be in agreement with the choice of the right point cloud acquisition technique. The analysis performed in section 1.2.3 leads to the conclusion that the two most suitable candidates for the task are stereo vision and laser triangulation. In fact, this conclusion is coherent with the solutions that are available on the market, presented in section 1.2.1. To generate a 3D model of the bead using laser triangulation, either the part or the sensor has to be in motion [12]. This drawback entails limitations, particularly in post-process inspections.

3.1.4 Hardware

To implement the demonstrator, the following devices were used:

• Black acetic silicone - simulates the industrial adhesive.

• One IDS UI-5240-RE camera - performs the inspection.

• Nordson EFD ValveMate 7100/gun - controls the adhesive system and dispenses the bead.

• Two external PCs - PC1 runs the main software module and PC2 runs the inspection algorithm.

Table 3.1: Hardware for the demonstrator (photos): IDS UI-5240-RE camera, Nordson EFD ValveMate 7100 controller, Nordson EFD dispensing gun

Table 3.2: Technical characteristics of the external computers

PC   CPU                     RAM     Hard drives   CPU cores   Operating System
1    Intel® Core™ i5-3317U   6 GB    1             2           Ubuntu 16.04
2    Intel® Core™ i7-6600U   32 GB   1             2           Windows 10

3.1.5 Architecture

Figure 3.5: Architecture for the demonstrator

Since PolyScope allows the set/reset of I/Os, the conveyor, sensors, camera and lights are all controlled through PolyScope.

3.1.6 Work Object and Tool Speed

Work Object The work object's reference frame (called plane in PolyScope) is created from three points. Guiding the TCP manually, using the teach pendant, to a given point and marking the first point defines the reference frame's origin. The other two points to be marked define the directions of the Y and X axes, respectively. As mentioned before, when a Move instruction is written, there must be an indication of a plane; the target pose is then considered in relation to that plane.

Figure 3.6: Work Object frame (demonstrator)

Velocity The speed calibration was performed empirically. The main goal is to have a TCP velocity that produces a bead with compliant width. For a pressure in the dispenser of 6 bar, and after several iterations, the TCP velocity, for the segments where the bead is dispensed, was set to 0.01 m/s. At this point it is important to say that the robot's workspace is restricted in such a way that none of the configurations described in 2.1.3 are close to being reached; therefore, singularities are not a problem.

3.2 Industrial Cell-ABB IRB2400 on Post Process Inspection

3.2.1 Simulation

Scene

This implementation was performed on a cell containing six work stations: two idle stations, one for loading, one for unloading, one for inspection and one for applying/correcting. There is one sensor to detect the presence of a pallet and one sensor to detect the presence of a part (in figure 3.7 they are represented as a single sensor for practicality) for each station, as well as one clamp and two elevators for the applying/correcting and inspection stations. Similarly to the previous case, there is a manipulator (ABB IRB2400) in the applying/correcting station with a bead dispenser mounted on the TCP, and a dome surrounding the vision system in the inspection station. There are four conveyors to transport five pallets from station to station and, since there are six stations, one of them is always empty in order to facilitate the work flow. After a part has been loaded in the first station, it is transported to the applying/correcting station. The transport from station to station is assured by the corresponding conveyor(s) and the temporary retraction of the station clamp. Arriving at the applying/correcting station, the pallet is elevated, the robot draws the bead, the elevator descends and the part is transported to the inspection station, where it is elevated once again and inspected. If the part has no gaps, it is transported to the unloading station and unloaded by a worker. In this case, the pallet can only leave the unloading station if the sensor detects no part. It then passes through both idle stations and arrives at the loading station, thus repeating the cycle. If the inspection detects a gap, the part passes through the remaining stations and is transported to the applying/correcting station, where the bead is corrected. After a new inspection, the part is either transported again for correction, until it presents no defects, or unloaded at the next station, if the correction is successful or the worker stops the cycle.

Figure 3.7: Diagram of the workcell on post process inspection

Flowchart

Figure 3.8: Flowchart for the workcell on post process inspection

Interaction with the main software module

PC 1 is running the main software module and SocketTest, which simulates the messages sent from the vision system, while PC 2 is running RobotStudio. A server was created with the IP 172.16.7.4 for communication between the three elements. A CAD model of the workcell was created in SolidWorks and migrated to RobotStudio in order to build a collision model. If a move instruction whose respective path includes a collision configuration is sent to the robot, that instruction is not executed.

Figure 3.9: RobotStudio environment with the CAD of the workcell

As in the previous case, the main software module receives data from the vision system through TCP/IP, which is also simulated using SocketTest. Since there are 5 parts on the conveyor and a defective part is only corrected on a second lap, i.e. after the other 4 parts are drawn and inspected, every correction file needs to be stored in the virtual controller. Taking this fact into account, the array with the coordinates of the defects of each part comes with an extra element containing the part's ID, which is unique for every part and is written on a QR code. The main software module writes and saves the scripts with the move instructions, concatenating the ID into each file's name. When the conveyor concludes the first lap, a string from SocketTest is sent through TCP/IP to a main RAPID program running in RobotStudio, containing the information for the robot to execute the correction, as well as the ID of the part to be corrected (which is read by a camera mounted on the applying/correcting station). Consequently, the main RAPID program loads and executes the corresponding file. After this task is performed, the controller returns to the execution of the main RAPID program, waiting for further instructions. If the inspection of that part reveals no gaps, the worker unloads the part from the pallet. A new part, with a new ID, is loaded on that pallet; when the new ID is read, it is sent to the main RAPID program, ordering it to load and execute a script with instructions to draw a new bead. After the execution, the virtual controller returns to the waiting mode in the main program. The correction files are sent/deleted through FTP, using the ftplib library, because RobotStudio does not support SFTP. Since every part has a unique identification number, the main software module has to ensure the correction files' management. This means that, for every inspected part that reveals no defects, its correction file (if it already exists) should be deleted from the controller. This is performed through the main software module, also using the ftplib library. For the same reasons enunciated in the previous case, and taking into account section 2.3.1, the instruction MoveL was chosen to encapsulate the targets.

Algorithm 2 Interaction with the main software module
procedure M ANAGING THE RAPID SCRIPTS IN THE VIRTUAL CONTROLLER THROUGH THE MAIN
MODULE
path ← Path to the virtual controller
n_gaps ← number of gaps detected
ID ← Part’s ID received from the vision system
top:
while ID = f alse do
wait
file ← "Seeq" + ID + ".prg"
if n_gaps > 0 then
send file
goto top
if n_gaps = 0 then
if file exists in path then
erase file
goto top
else
goto top
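A minimal sketch of this file management via ftplib (the controller address, directory and anonymous login are assumptions):

from ftplib import FTP

HOST, SCRIPT_DIR = "172.16.7.2", "/hd0a/scripts"   # hypothetical address and directory

def send_correction(part_id, local_file):
    with FTP(HOST) as ftp:
        ftp.login()                    # anonymous login assumed here
        ftp.cwd(SCRIPT_DIR)
        with open(local_file, "rb") as f:
            ftp.storbinary("STOR Seeq" + part_id + ".prg", f)

def delete_correction(part_id):
    # Called for every inspected part that reveals no defects
    with FTP(HOST) as ftp:
        ftp.login()
        ftp.cwd(SCRIPT_DIR)
        if "Seeq" + part_id + ".prg" in ftp.nlst():
            ftp.delete("Seeq" + part_id + ".prg")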

In this simulation, no movement of the conveyors, elevators or clamps is regarded. The actual drawing of the bead is not present in the simulation; therefore, the move instructions do not take into account the setting of an output to activate the dispenser, and the zone and speed arguments are to be tuned in the real environment, taking into account their influence on the bead. The main goal of the simulation was to test the communication between the main software module and RobotStudio/the virtual controller, proving the concept of storing and managing several correction script files in the virtual controller, calling and executing them from a main RAPID program with the correct syntax.

Architecture

Figure 3.10: Control architecture of the simulation of the industrial workcell on post process inspection

3.2.2 Real environment

The main difference between the simulation scene and the implementation in the real environment lies in the decision power that the PLC has over the other elements of the control architecture. The PLC controls all the actuators in the workcell, namely the clamps, conveyors and elevators; it also controls the decision of which task the robot should perform (drawing a new bead or correcting). This decision was previously made based on the existence/non-existence of a correction file in the controller. The adhesive dispenser is the only element that is not controlled by the PLC, despite being monitored by it. New Move instructions were built to actuate the adhesive dispenser and a new approach for the correction segments' trajectory was developed. Everything else is performed as in the simulation, except that the virtual controller is now replaced by the IRC5.

New trajectory

At this point, it was decided to take a more robust approach to the way the correction trajectory
is performed. A trajectory like the one described in fig. 3.3 proves the concept but, if the
gap is large enough and coincident with a bead curve, this correction method will produce a bead
segment that contains a defect of type 4. For this reason, it was decided to discretize the correction
segment into several line segments, as depicted in fig. 3.11:

Figure 3.11: New trajectory for the correction segment

Here the red segment, which goes from p1 to p2, represents the trajectory performed resorting
to the previous algorithm, and the purple and blue segments, which pass through p1.1 and p1.2,
represent the new algorithm. The segment discretization is made resorting to an interpolation,
performed by the vision system, that takes into account the bead model. The point density increases
with the segment's curvature.

Hardware

For this implementation, the following devices were used:

• SikaPower-492 - industrial adhesive

• Two Genie nano M1930-NIR cameras - perform the inspection

• One Genie nano M1930-NIR camera - reads the ID on the drawing/correcting station

• Nordson VersaPail bulk melter - controls the adhesive system

• Nordson AG-930s dispensing gun - dispenses the bead

• Siemens Simatic S7-400 - controls the actuators and sensors on the workstation and manages the
job decisions

• Two external PCs - PC1 runs the main software module and PC2 performs the point cloud processing

Table 3.3: Hardware for the Industrial Cell

Genie nano M1930-NIR Siemens Simatic S7-400

Nordson VersaPail bulk melter Nordson AG-930s dispensing gun

Controlling the adhesive system

Despite being monitored by the PLC, the adhesive system is controlled through the IRC5. There is
a series of I/Os mapped on the adhesive controller, each one attributed to a specific functionality. Their
purpose is for the robot to read states such as pressure and temperature, to check if the system is ready
and to send commands to enable motors and dispense the adhesive. Firstly, to initialize the system, a
RAPID module was created to safely set up the adhesive controller. The module contains, in order, the
following sequence of procedures:

• Initialize heater: sets the heater for the adhesive to achieve the temperature of 50 °C and enables
the motor.

• Initialize temperature: gives the controller the information that temperature should be monitored in
Celsius.

• Initialize pressure: gives the controller the information that pressure should be monitored in bar.

• Start glue system: checks if the system is ready, sets the motor and checks its state.

In every procedure there is a wait for an answer from the adhesive controller, confirming that the
command was executed. The execution of the next procedure only happens after that answer is
received.
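
The handshake pattern can be sketched in Python as follows; the MockIO class and the signal names are hypothetical stand-ins for the I/Os mapped on the adhesive controller:

import time

class MockIO:
    """Simulated I/O table; the real signals are mapped on the adhesive controller."""
    def __init__(self):
        self.inputs = {}
    def set_output(self, name, value):
        print("OUT %s = %s" % (name, value))
        if value:
            self.inputs[name + "_ack"] = True   # the mock acknowledges immediately
    def wait_input(self, name, timeout):
        deadline = time.time() + timeout
        while time.time() < deadline:
            if self.inputs.get(name):
                return True
            time.sleep(0.01)
        return False

def initialize_adhesive_system(io):
    # Same order as the RAPID module: heater, temperature units, pressure
    # units, glue system start. Each command blocks until it is acknowledged.
    for cmd in ["init_heater", "init_temperature", "init_pressure", "start_glue_system"]:
        io.set_output(cmd, True)
        if not io.wait_input(cmd + "_ack", timeout=5.0):
            raise RuntimeError("adhesive controller did not confirm " + cmd)
        io.set_output(cmd, False)

initialize_adhesive_system(MockIO())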

Two procedures, called MoveGL_Start and MoveGL_Finish, were created to be used as functions that
can be called with the same arguments as the MoveL instruction. The procedures call a MoveLDO
instruction. MoveGL_Start sets the output to dispense the adhesive at the beginning of the trajectory;
MoveGL_Finish resets it when the target is reached. Another procedure, called ReadGlueData, is then
executed, to monitor the temperatures in the adhesive system. For gap correction, MoveGL_Start was
used at the start of each gap segment, MoveL for the interpolation points contained in the segment and
MoveGL_Finish for the end of the segment. For gaps that don't contain interpolation points, i.e. seg-
ments composed of a starting and an ending point only, an instruction called MoveGL was created. This
instruction sets the output at the beginning of the segment, resets it at the end and can be called using
MoveL arguments, similarly to the previous two. The value for the zone argument was set to z0, which
means a radius of 0.3 mm, on the MoveL instructions where the output is set, i.e. on the interpola-
tion points. This value guarantees smooth trajectories, reducing stops, without compromising position
tolerances. On the instructions that involve the beginning or the ending of a gap segment, the zone
argument was set to fine.

Architecture

Figure 3.12: Control architecture for the workcell on post-process inspection

Work Object, TCP and Tool Speed

The work object frame was defined according to fig. 3.13, with the Z axis pointing upwards. The
procedure was similar to the one used when defining the work object for the UR5: the origin was
defined and 2 points were marked, defining the x and y directions. The work object's orientation is the
same as the robot's base frame orientation. Similarly to the previous case, the speed was calibrated
empirically to produce a bead with a compliant width measurement. After several iterations, for a
pressure of 90 bar, the speed argument on the move instructions was set to vmax and the robot's
overall speed was set to 20%. The low speed value allowed a more detailed monitorization of the
following tests. Similarly to the demonstrator's case, the workspace is restricted so there's no risk of
crossing a singularity.

Figure 3.13: Work Object Frame (ABB IRB2400)

Programming the PLC

Managing the Workcell

The PLC was programmed through the software Simatic S7, using grafcet and ladder languages. A
ladder program for each station was designed, guaranteeing the coordination between the conveyors,
the clamps and the signals coming from the sensors. When station n+1 is free and a pallet/part
is detected on station n, the conveyor and the clamp of station n are temporarily activated and
retracted, respectively. This allows the transport to the next station in safe conditions. For stations 2
and 3, which are responsible for the applying/correcting and inspecting, respectively, grafcets were
designed to manage the procedures. When the pallet and part sensors of station 2 are set, the elevator
is actuated and the PLC receives the part's ID, which is read by the camera mounted on station 2. This
action corresponds to "Job 1". A decision of which job should be executed is made by the PLC (this
process will be explained next) and a "Job X Enabled" flag is sent to the robot. Job 2 is the drawing of a
new bead and Job 3 is the correction of a defective one. When the robot finishes executing the assigned
job, a "Job X Ready" flag is sent from the robot to the PLC. The elevator is reset and the grafcet returns
to the initial block. The procedure is similar for station 3. When the part arrives at the station, the
elevator is set, and a "Job 4 Enabled" flag is sent to the vision system. After performing the inspection,
a "Job 4 Ready" flag is sent from the vision system to the PLC, along with the results of the inspection.
The elevator resets and the grafcet returns to the initial block. The master grafcet for station 2, as well as
the ladder diagram for moving the pallet from the station, are depicted in A.4.

Deciding which job the robot performs

As stated before, the PLC is responsible for managing all the decisions and the coordination between the
devices, except for the adhesive dispenser. This includes deciding which job the robot should perform
and ordering it to do so. For that, 3 variables for each pallet, summing up to a total of 15 variables,
were declared. The variables are presented in the following table:

Table 3.4: Variables attributed to each slot on the PLC

Pallete_x Variable    Type
Pallete_x.ID          Array of chars
Pallete_x.drawn       BOOL
Pallete_x.ok          BOOL

where:

• Pallete_x.ID is an input received from the camera mounted on station 2, representing the ID of the
part that lies on Pallete_x at that time.

• Pallete_x.drawn is a boolean, set to false by default and set to true by the robot after Job 2 is
completed on that part.

• Pallete_x.ok is a boolean set to false by default and set to true by the vision system when the part
presents no defects.

The following table represents the logic applied to manage the jobs using these variables.

Table 3.5: PLC logic for job enabling

Pallete_x.drawn   Pallete_x.ok   Action
0                 0              "Job 2 enable" flag sent to the robot; the robot draws a new bead
1                 0              "Job 3 enable" flag sent to the robot; the robot corrects the gaps using
                                 the correction file from the ID in Pallete_x.ID
1                 1              Part unloaded on the unloading station
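
For clarity, the job-enabling logic of Table 3.5 can be sketched in Python (the actual logic lives in the PLC; the record below mirrors the three variables declared per pallet):

from dataclasses import dataclass

@dataclass
class Pallete:
    ID: str = ""
    drawn: bool = False   # set by the robot after Job 2 is completed
    ok: bool = False      # set by the vision system when no defects remain

def decide_job(p: Pallete) -> str:
    if not p.drawn:
        return "Job 2 enable"    # draw a new bead
    if not p.ok:
        return "Job 3 enable"    # correct gaps using the file for p.ID
    return "unload"              # part is OK, unload at the unloading station

print(decide_job(Pallete(ID="A123", drawn=True, ok=False)))  # Job 3 enable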

3.3 Industrial Cell (ABB IRB2400 on in-process inspection)

The scene for this implementation is similar to the previous one, except for the fact that there is no
need for an inspection station. The drawing, inspection and correction of the bead are performed on
the same station, with an inspection ring attached to the TCP using 3 RGB cameras, similar to the ones
presented in table 1.2 and fig. 1.3. The main goal was to inspect while drawing the bead, correcting the
gaps immediately after, in the optimal target order. An adaptation of the main software module to
this scene becomes trivial at this point, since it is a simpler version of the previous implementation.
There is no need to store and manage correction scripts, since a part only passes through the same
station once. The adaptation was performed and tested in simulation using RobotStudio. Due to delays
from the suppliers, the hardware wasn't available in time for the implementation to be concluded and
documented in this work.

3.4 ROS MoveIt!

Creating the URDF

The URDF file describing the workcell was generated through the sw2urdf plugin, which converts a
SolidWorks assembly file to a URDF file. The links out of the robot's workspace were omitted for the
sake of making the simulation easier to run. The IRB2400 was defined as a kinematic chain having the
workcell as parent link, linked through a fixed joint that connects the workcell to the robot's base link.
Since the inspection ring wasn't built in time for the conclusion of this work and there is no specific part
to work on, the simulation includes only the robot, the conveyor, the PLC and an object to be inserted
later, in order to force trajectory replanning.

MoveIt! Setup Assistant

As mentioned before, MoveIt! has a physical notion of the robot through its URDF files, but these
files don't provide the information about which components to control and which components should
only be regarded as collision objects. The MoveIt! Setup Assistant GUI allows the creation of a package
with all this information. Through the URDF file, a collision matrix with all the possible joint
configurations was generated, in order to detect collisions amongst them; a trajectory planning group
named arm was created, composed of the IRB2400's joints; and a standard home pose was defined for
the manipulator.

Figure 3.14: MoveIt! setup assistant

Preparing the gazebo world and connecting it with MoveIt!

A main launch file was created, called gazebo launch. This launch file includes another launch file,
to initiate an empty gazebo world, and a main xacro file. This main xacro file includes three other
xacro files: the irb2400_12_155_macro, which is the robot's URDF, with all the needed kinematic and
dynamic information about the robot and the world, and the common_gazebo and irb2400_transmission
files, which add the transmission elements and the gazebo_ros_control plugin, as mentioned in
section 2.2.3. The URDF description of the robot itself is provided by the ABB package created by
Shawn Edwards [43].

The main launch file also launches four nodes. The spawn_model node is responsible for loading the
robot parameters contained in the irb2400_12_155_macro into the ROS parameter server and spawning
the robot into the empty world; the robot_state_publisher publishes the robot state to tf, allowing
all the system components using tf to have access to this information. The joint_state_controller is
the controller interface, responsible for reading data from the hardware interface, processing it and
sending commands, as mentioned in section 2.2.3. Finally, the arm_controller spawner launches a
trajectory action controller, calling a controller configuration file, which links the gazebo robot to
MoveIt!, as described in section 2.2.3.

Figure 3.15: Flowchart of files to launch the gazebo world

To launch the MoveIt! package, a main launch file launches two nodes: the robot_state_publisher
and the joint_state_publisher. Their job is to publish on the joint_states topic and on tf, for MoveIt! to
maintain its own tf tree internally, as described in section 2.2.3. It also launches a configuration file,
joint_names, responsible for naming the robot's joints accordingly to the gazebo robot's joints.

The main file also includes three launch files:

• planning_context launches a URDF containing the description of the robot and the world, for
the ROS parameter server to load and send to the move_group, as mentioned in section 3.4.
It also launches the kinematics and joint_limits configuration files, to choose the kinematic solver (the
TRAC_IKKinematicsPlugin was chosen) and to define the joint limits in terms of position, velocity and
acceleration.

• move_group launches a series of xml files that, among loading other parameters, launch the config-
uration file controllers. This file declares the name of the trajectory action controller, its action
namespace, the type of controller and the joints to be controlled. Essentially, it's a configuration file
whose content has to be equal to the one called by the arm_controller spawner node, in order to
connect MoveIt! and gazebo. The syntax of this file can be consulted in A.5. The move_group file also
launches the ompl_planning_pipeline, which provides the Sampling-Based motion planners in OMPL to
be used in the platform.

• moveit_rviz launches the rviz GUI.

Figure 3.16: Flowchart of files to launch the MoveIt! package, connected to gazebo

Figure 3.17: Graph representing the action, arm_controller, responsible for connecting MoveIt! and
gazebo

Interfacing MoveIt! with the python module

To adapt the main software module created in the previous implementations and interface it with
MoveIt!, an action client and an action server were created. The server launches the
trajectory1_action_server node, which waits to receive target poses to perform the correction segments,
as described in 3.2.2. The client is an adaptation of the writer submodule used in the previous
implementations, and operates in a similar fashion in terms of data processing and communication with
the listener and the vision system. The main differences are the launching of the
trajectory1_action_client node, and the sending of the goals, in groups of target poses, to the server,
instead of encapsulating them in move instructions to write a script file. A python script was written to
introduce an object into the robot's workspace, to force recalculation of the trajectories. The script is
launched after the client sends the goal to the server, to simulate the unpredicted entrance of an
obstacle or a differently shaped part. Orientation constraints are defined on the correction segments,
guaranteeing, with a 0.1 rad tolerance for the 3 axes, that the TCP stays perpendicular to the work
object. Tests were performed using the RRT family algorithms, performing a simple trajectory involving
2 gaps, with and without an obstacle.
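
The client/server pair itself is not reproduced here, but a minimal sketch of an equivalent constrained motion request through the moveit_commander Python interface, assuming the planning group arm created in the Setup Assistant and reusing the gap 1 endpoints from table 4.17, could look like this:

import sys
import rospy
import moveit_commander
from moveit_msgs.msg import Constraints, OrientationConstraint
from geometry_msgs.msg import Pose

moveit_commander.roscpp_initialize(sys.argv)
rospy.init_node("correction_client_sketch")
group = moveit_commander.MoveGroupCommander("arm")  # planning group from the Setup Assistant

# Keep the TCP perpendicular to the work object, with the 0.1 rad tolerance
# on all three axes; the identity orientation is an assumed "tool down" pose.
oc = OrientationConstraint()
oc.header.frame_id = group.get_planning_frame()
oc.link_name = group.get_end_effector_link()
oc.orientation.w = 1.0
oc.absolute_x_axis_tolerance = 0.1
oc.absolute_y_axis_tolerance = 0.1
oc.absolute_z_axis_tolerance = 0.1
oc.weight = 1.0
constraints = Constraints()
constraints.orientation_constraints.append(oc)
group.set_path_constraints(constraints)

def go_to(x, y, z):
    # Plan with the configured OMPL planner (e.g. RRTConnect) and execute.
    pose = Pose()
    pose.orientation.w = 1.0
    pose.position.x, pose.position.y, pose.position.z = x, y, z
    group.set_pose_target(pose)
    ok = group.go(wait=True)
    group.stop()
    group.clear_pose_targets()
    return ok

# One correction segment: start and end of gap 1 (coordinates from table 4.17).
go_to(-0.315, 0.938, 1.0)
go_to(-0.265, 0.938, 1.0)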

Figure 3.18: Interaction between ros action server, MoveIt! and the main software module

3.5 User Interface

The graphical user interface (GUI) was developed using QtDesigner. The aesthetic part of the GUI was
designed interactively; QtDesigner generates a .ui file, which is converted into a python file where
functionalities and code are attributed to each button and label. When the connection with the vision
system is established, the main window, depicted in fig. 3.19, pops up. On the left side of the window
there's a row of slots; each slot represents a pallet and has an indication of the ID of the part that is
currently on that pallet, as well as the number of gaps. Each slot number is clickable and, when clicked,
opens a new window, depicted in fig. 3.20, showing the details of the defects, namely their starting,
ending and center coordinates.

Figure 3.19: Graphical user interface

Figure 3.20: Graphical user interface (child window)
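
A minimal sketch of this mechanism with PyQt5 follows; for brevity it loads the .ui files dynamically instead of converting them with pyuic, and the file and widget names are illustrative, not the actual ones:

from PyQt5 import QtWidgets, uic

class MainWindow(QtWidgets.QMainWindow):
    def __init__(self):
        super().__init__()
        uic.loadUi("main_window.ui", self)   # layout drawn in QtDesigner
        # each clickable slot number opens the defect-details child window
        self.slot1_button.clicked.connect(lambda: self.show_details(1))

    def show_details(self, slot):
        # child window listing start, end and center coordinates of each gap
        dialog = QtWidgets.QDialog(self)
        uic.loadUi("details_window.ui", dialog)
        dialog.setWindowTitle("Pallet %d defects" % slot)
        dialog.exec_()

app = QtWidgets.QApplication([])
window = MainWindow()
window.show()
app.exec_()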

Chapter 4

Results

For the real environment implementations, tests consist of drawing a bead on a part and generating
gaps by interrupting the compressed air flow, therefore interrupting the bead. The results are exposed
by classifying the vision system's performance on each inspection, since the correction module's
performance is highly dependent on it. This classification follows the notation:

Table 4.1: Classes for the vision system's results classification

Acronym   Class            Vision System    Manual inspection
TP        True Positive    Detects a gap    Gap exists
FP        False Positive   Detects a gap    No gap
FN        False Negative   Detects no gap   Gap exists

The overall performance is classified taking into account only the final result for the correction of each
defective part (regardless of the number of corrections/inspections performed). This classification
follows the notation:

Table 4.2: Overall test classes

Acronym   Class            Vision System   Manual inspection
TP        True Positive    OK              OK
TN        True Negative    NOK             NOK
FP        False Positive   OK              NOK
FN        False Negative   NOK             OK

The confusion matrix was generated from this classification. Finally, the Correction Success Rate
is calculated by dividing the number of successful tests (where the beads are successfully corrected)
by the total number of tests. Note that, in this chapter, defects other than gaps are disregarded. A
bead with no gaps has an OK classification regardless of having other defects, unless these are caused
by deficient corrections.

4.1 UR5 Demonstrator

Table 4.3: Results for the demonstrator: Test 1

Test 1 Inspection 1 Inspection 2 Inspection 3


TP 2 0 0
FP 0 1 0
FN 0 1 1
Gaps 2 1 1

Overall test class = FP

Table 4.4: Results for the demonstrator: Test 1, Inspection 3

Manual Inspection Vision System

As can be seen in table 4.4, the vision system is pointing out a type 2 defect when, in fact, it is a gap.
This is considered a false negative and can be explained by the fact that the lighting conditions
in the laboratory aren't totally controllable. The inspection is performed on a translucent dome; when
the external light increases, shadows emerge upon the part, caused by the external light inciding on the
bead itself, originating false negatives. The external light may also be reflected on the bead, originating
false positives, as can be seen in the following tests.

Table 4.5: Results for the demonstrator: Test 2

Test 2 Inspection 1 Inspection 2


TP 3 0
FP 1 1
FN 0 0
Gaps 3 0

Overall test class = FN

Table 4.6: Results for the demonstrator: Test 2, Inspection 2

Manual Inspection Vision System

Figure 4.1: Test 2, Inspection 2

As mentioned previously, in table 4.6 there's a false positive caused by poor lighting conditions. Fig. 4.1
illustrates that, at the beginning and end of each segment of corrected bead (points 1 to 4), the bead
is dangerously close to not being in compliance with the standards. To solve this problem, a delay and
an advance of 0.3 s in the reset and the set of the output, respectively, were implemented. This way, the
bead starts being dispensed 0.3 s before the robot performs the trajectory and stops being dispensed
0.3 s after the trajectory reaches its end.

Table 4.7: Results for the demonstrator: Test 3

Test 3 Inspection 1 Inspection 2


TP 3 0
FP 0 0
FN 0 0
Gaps 3 0

Overall test class = TP

Table 4.8: Results for the demonstrator: Test 4

Test 4 Inspection 1 Inspection 2


TP 4 0
FP 1 0
FN 0 0
Gaps 4 0

Overall test class = TP

Table 4.9: Results for the demonstrator: Test 4

Manual Inspection Vision System

Once again, on the first inspection, a false positive emerged due to poor lighting conditions. The silicone
used to simulate the industrial adhesive often dried inside the dispensing gun, forming silicone nuggets.
One nugget caused an abrupt increase in the flow, which was responsible for the existence of the type
3 defect depicted in table 4.9.

Table 4.10: Results for the demonstrator: Test 5

Test 5 Inspection 1 Inspection 2


TP 3 0
FP 0 0
FN 0 0
Gaps 3 0

Overall test class = TP

Table 4.11: Confusion matrix for the demonstrator

n=5            OK from vision system   NOK from vision system   Total
OK (manual)    TP=3                    FN=1                     4
NOK (manual)   FP=1                    TN=0                     1
Total          4                       1                        5

$\mathrm{accuracy} = \frac{TP+TN}{n} = 0.6$, $\mathrm{sensitivity} = \frac{TP}{TP+FN} = 0.75$, $\mathrm{precision} = \frac{TP}{TP+FP} = 0.75$

$\mathrm{Correction\ Success\ Rate} = \frac{\text{Number of successfully corrected parts}}{n} = \frac{4}{5} = 0.8$

4.2 Industrial Cell (ABB IRB2400 on post-process inspection)

Table 4.12: Test results for the industrial cell (ABB IRB2400 on post-process inspection)

Test Gaps Number of corrections Final Status TP FP FN Overall Class Cycle time
1 4 1 NOK 4 0 0 TN —-
2 4 1 NOK 4 1 0 TN —-
3 5 1 OK 5 0 0 FN 19.02s
4 4 1 OK 4 0 0 TP 18.09s
5 4 1 OK 4 0 0 FN 16.97s
6* 5/2 2 OK 5/2 0 0 FN —-
7 4/1 2 OK 4/1 0 0 TP —-
8 5 1 OK 5 0 0 TP 18.78s
9 5 1 OK 5 0 0 TP 19.29s
10 4 1 OK 4 0 0 TP 17.01s
11 4 1 OK 4 0 0 FN 22.08s
12 4 1 OK 4 0 0 TP 19.73s
13 6 1 OK 6 0 0 TP 21.04s
14 6 1 OK 6 0 0 TP 21.46s
15 5 1 OK 5 0 0 TP 20.30s
16 5 1 OK 5 0 0 TP 20.57s
17 4 1 OK 4 0 0 TP 18.04s
18 4 1 OK 4 0 0 TP 17.93s
19 4 1 OK 4 0 0 TP 17.12s
20 5 1 OK 5 0 0 TP 21.79s
total 94 22 18 OK, 2 NOK 94 1 0 —- —-

As can be seen in table 4.12, the first two tests weren't successful and there wasn't a second attempt
at correction. It wasn't possible to fasten the robot's base to the ground by the time the tests were
performed, which led to an unexpected decalibration of the work object. Through table 4.14 it is possible
to see that the vision system detects the gaps in the correct location, but the correction is not performed
on those coordinates with enough accuracy; this fact can be noticed more evidently on the gap furthest
to the left. Between tests 5 and 6 there was also a work object decalibration but, since it was less
significant than the first one, it was possible to achieve OK status with a second correction. This
decalibration was only detected after the seventh test, when the work object was recalibrated. Similarly
to the UR5 implementation, one false positive on the vision system occurred due to light reflections
(test 2), and there were no false negatives. In terms of overall class, false negatives occurred as a
consequence of the vision system detecting gaps in corrected segments with inflection points. This
phenomenon can be observed in table 4.15, where the abrupt change of direction on the left edge of the
bead, due to a slight offset on the correction segments, is responsible for the detection of a gap. Although
the offset is not significant enough to cause a type 4 defect, it is worth mentioning that it happens due to
difficulties in accurately calibrating the vision system with the work object, particularly in areas where
the part's geometry height is irregular. On the top left gap a bubble appeared, which can also be seen on
the bottom left gap. Both bubbles correspond to the first interpolation point of a gap and both emerged
because the zone parameter on the MoveGL_Start was mistakenly set to fine, making the TCP stop
before the first interpolation point's MoveL instruction. It was then set to z0, thereby eliminating the
problem. The cycle time for drawing a new bead is 27.73 s and the longest cycle time for a correction
represents 79% of this value. Note that the robot's speed is at 20%.

Table 4.13: Confusion matrix for the industrial cell (ABB IRB2400 on post-process inspection)

n=20           OK from vision system   NOK from vision system   Total
OK (manual)    TP=14                   FN=4                     18
NOK (manual)   FP=0                    TN=2                     2
Total          14                      6                        20

Table 4.14: Results for the industrial cell (ABB IRB2400 on post-process inspection): Test 1

Manual Inspection Vision System

Table 4.15: Results for the industrial cell (ABB IRB2400 on post-process inspection): Test 11

Manual Inspection Vision System

Table 4.16: Results for the industrial cell (ABB IRB2400 on post-process inspection): Test 15

Manual Inspection Vision System

$\mathrm{accuracy} = \frac{TP+TN}{n} = 0.8$, $\mathrm{sensitivity} = \frac{TP}{TP+FN} = 0.7$, $\mathrm{precision} = \frac{TP}{TP+FP} = 1$

$\mathrm{Correction\ Success\ Rate} = \frac{\text{Number of successfully corrected parts}}{n} = \frac{18}{20} = 0.9$

4.3 ROS MoveIt!

The tests for this chapter consist of executing a four-segment trajectory with two gaps, comparing the
planning time, planning attempts and generated path length of three RRT family algorithms. Note that
the trajectories involving the approximation points are omitted for the sake of brevity. An object in
collision with the shortest route between gap 1 and gap 2 is inserted in the scene, in order to monitor
how the different planners recalculate an alternative trajectory. The results in table 4.19 and table 4.18
are the worst of five repetitions. The planners' parameters are set to default. The velocities are set to
the values written in joint_limits.yaml, which correspond to the limits in table 2.3. There are no further
iterations, since the purpose of this chapter is to prove that the developed software can be integrated
with Sampling-Based motion planning algorithms; optimizing the results is out of the scope of this
work.

Table 4.17: Trajectory segments used for testing

Trajectory Segment   From (m)   To (m)   Distance


1 Home(0, 0.855, 1.455) Gap1start (−0.315, 0.938, 1) 1.16m
1.1 Gap1start (−0.315, 0.938, 1) Gap1end (−0.265, 0.938, 1) 0.05m
2 Gap1end (−0.265, 0.938, 1) Gap2start (0.235, 1.338, 1) 0.64m
2.1 Gap2start (0.235, 1.338, 1) Gap2end (0.285, 1.338, 1) 0.05m

Table 4.18: Trajectory without obstacle

Trajectory Segment   Planner      States created     Planning attempts   Planning Time   Execution Time   Status
1                    RRT          17                 1                   0.0522s         2.7994s          Succeeded
1.1                  RRT          15                 1                   0.2149s         0.4103s          Succeeded
2                    RRT          17                 1                   0.2998s         1.9320s          Succeeded
2.1                  RRT          15                 1                   0.2123s         0.4120s          Succeeded
1                    RRTstar      1171               1                   3.0004s         2.7932s          Succeeded
1.1                  RRTstar      286                1                   5.0132s         0.3572s          Succeeded
2                    RRTstar      148                1                   3.004251s       1.9282s          Succeeded
2.1                  RRTstar      289                1                   5.0023s         0.3580s          Succeeded
1                    RRTConnect   2 start + 3 goal   1                   0.025164s       2.7960s          Succeeded
1.1                  RRTConnect   2 start + 2 goal   1                   0.0352s         0.3623s          Succeeded
2                    RRTConnect   5 start + 3 goal   1                   0.144431s       1.9284s          Succeeded
2.1                  RRTConnect   2 start + 2 goal   1                   0.0361s         0.3670s          Succeeded

Table 4.19: Trajectory with obstacle

Trajectory Segment   Planner      States created     Planning attempts   Planning Time   Execution Time   Status
1                    RRT          8                  1                   0.016154s       2.7989s          Succeeded
1.1                  RRT          12                 1                   0.2353s         0.4102s          Succeeded
2                    RRT          140                2                   1.978731s       9.7648s          Succeeded
2.1                  RRT          15                 1                   0.2454s         0.4124s          Succeeded
1                    RRTstar      1762               1                   5.006075s       2.7988s          Succeeded
1.1                  RRTstar      302                1                   5.0001s         0.3576s          Succeeded
2                    RRTstar      295                1                   5.020442s       3.8322s          Succeeded
2.1                  RRTstar      289                1                   5.0032s         0.3580s          Succeeded
1                    RRTConnect   2 start + 2 goal   1                   0.024848s       2.7994s          Succeeded
1.1                  RRTConnect   2 start + 2 goal   1                   0.0302s         0.3763s          Succeeded
2                    RRTConnect   3 start + 2 goal   1                   0.116733s       3.854s           Succeeded
2.1                  RRTConnect   2 start + 2 goal   1                   0.0326s         0.363s           Succeeded

Since the TCP speed is the same for every test, the execution time can be interpreted as a measure
of the path length, in order to compare the results. The RRTstar produces shorter paths than the other
two algorithms, but its planning time is significantly higher, making it unsuitable for this context. The
RRT produces the worst path lengths, particularly in the trajectory segment with the obstacle, where it
failed on the first planning attempt. The RRTConnect presents planning times that won't affect the cycle
time significantly, and path lengths very close to the ones produced by the RRTstar.

Table 4.20: 4 segment trajectory without obstacle, implementation through MoveIt! using RRT

RViz Gazebo

Table 4.21: 4 segment trajectory with obstacle, implementation through MoveIt! using RRT

RViz Gazebo

4.4 Algorithm for ordering targets

Since it is unfeasible to generate enough gaps to compare the algorithms' performance in terms of
computation time and path length under the conditions presented in chapter 3, a hypothetical bead
was created. Gaps were produced, at first, with specific locations, so that their ordering wouldn't be
trivial. After proving the advantage of using an ordering algorithm other than the DO to produce a
smaller path, random gaps were generated to compare the algorithms' computation times.

Figure 4.2: Simulating several defects to test target ordering algorithms

Table 4.22: Path length and computation time t, in dm and ms, respectively, obtained using different
algorithms for target ordering

Number of Gaps   Default Order (DO)   t   Discrete Cuckoo Search (DCS)   t   Dijkstra   t


3 4.6347 0.667 4.6347 4.301 4.6347 1.624
7 20.5629 0.742 20.5629 6.727 20.5629 2.249
9 26.9071 0.857 25.6969 7.113 25.6969 3.126
10 29.4530 0.923 28.2430 7.740 28.2430 3.559
12 38.1323 0.996 36.9223 7.811 36.9223 2.620
30* —- — 38.9277 12.243 38.9277 13.923
60* —- — 42.0126 57.430 42.0126 48.705
90* —- — 55.9037 127.292 55.9037 81.39
110* —- — 69.1656 306.202 69.1656 354.233

Figure 4.3: Path length/computation time of the different algorithms for target ordering

In terms of path length, the three algorithms produce the same results until the number of gaps
exceeds 7. Beyond that, discrete cuckoo search produces the best results, as does Dijkstra's algorithm.
The reason for this is the fact that both of these algorithms choose the route {6, 8, 7, 9}, which is shorter
than {6, 7, 8, 9}; this difference proves the advantage of using an optimization algorithm for this task. The
gaps for the last four cases were generated randomly, in order to compare the computation times; they
have no geometrical meaning. All the computation times presented are an average of 10 iterations.
Although the order of magnitude of the computation times presented isn't significant enough to delay the
cycle time, Dijkstra's algorithm presents the best results up to around 100 gaps, which is an unrealistically
high number. In fact, a number of gaps equal to or above 30 on a single bead is already unrealistic.

Chapter 5

Conclusions

5.1 Achievements

A software module was created whose core is transversal to several robot programming languages and
robot frameworks. This module operates independently from the inspection; therefore, it can be adapted
to other vision systems, since it receives and deals with coordinate vectors. The module was able to
correct defective beads with a success rate of 80% on the demonstrator and 90% on the industrial cell.

In the case of the demonstrator, the reason for the 20% of unsuccessful corrections is the lack of control
of the lighting conditions, which also justifies the results for accuracy, sensitivity and precision. Since the
demonstrator's purpose was just to prove the concept, these classifications were made to tackle potential
problems that could emerge on the industrial cell and no further tests were performed. In the case of
the industrial cell, the fact that the success rate didn't reach 100% is justified by a difficulty in
maintaining the work object's calibration. This was due to the fact that the robot's base link couldn't be
fastened to the ground in the test warehouse.

The tests for the target ordering algorithms reveal that Dijkstra's algorithm is the most suitable in
terms of results and computation time, which confirms what was stated in section 2.3.3. The implemen-
tation of the algorithm brings an advantage, through a decrease in cycle time, relative to the available
solution mentioned in section 1.3.

The software module was successfully integrated in the ROS MoveIt! platform. In simulation, the
targets are received from the inspection and successfully reached by the robot, while avoiding obstacles
in the workspace. The RRTConnect algorithm proved to be the best choice from the RRT family, in
terms of path length and planning time.

5.2 Future Work

One of the most prominent subjects for future work is the conclusion of the industrial cell on in-
process inspection. Despite the fact that the module is prepared to be used in that scene, the imple-
mentation in a real environment would be the ultimate proof that the developed solution can compete on
equal ground with the state-of-the-art solutions. Aside from that, there are several improvements to be
made:

• The inspection algorithm needs improvement, to solve the problem of detecting false negatives on the
inflection points.

• An approach to calibrate the vision system with the work object accurately, regardless of the geometric
irregularities of the part, could enhance the solution and prevent defects of type 4 originated by future
corrections.

The other prominent subject for future work is the adaptation of the integration with ROS MoveIt! to
a real environment. Firstly, a point cloud acquisition and processing system has to be implemented, to
correctly monitor the robot's workspace and feed the information to the planners. Secondly, the commu-
nication with the PLC has to be assured, as well as the integration of the handshake between the robot
and the adhesive controller/dispenser, in order to control/adjust the outputs and velocities to produce
beads compliant with the standards. Finally, since the planners no longer guarantee the restriction of
the workspace that existed previously, singularities have to be taken into account. Measures should be
taken to prevent the robot from reaching one of the configurations described in section 2.1.3.

Bibliography

[1] M. Alfano, C. Morano, F. Moroni, F. Musiari, G. D. Spennacchio, and D. Di Lonardo. Fracture


toughness of structural adhesives for the automotive industry. Procedia Structural Integrity, 8:561–
565, 2018.

[2] J. Schlett. Machine vision helps adhesive trend stick in auto industry. https://www.photonics.com/Articles/Machine_Vision_Helps_Adhesive_Trend_Stick_in_Auto/a61129, 2016.

[3] F. Miranda and M. Guedes. Projecto see-q,definição de requisitos, 2018. Versão 1.0.

[4] American Chemistry Council, Economics & Statistics Department. Plastics and polymer composites in light vehicles. Plastics-car.com, 2015.

[5] BMW. A revolution in car making: Bmw i3 production. BMW Media Information, 2013.

[6] European Factories of the Future Research Association (EFFRA). Factories of the future: multi-annual roadmap for the contractual PPP under Horizon 2020, 2013.

[7] The concept of zero defects in quality management. https://www.simplilearn.com/concept-of-zero-defects-quality-management-article. Accessed: 2019-06-15.

[8] S. Kumar and D. Mahto. Recent trends in industrial and other engineering applications of non
destructive testing: a review. 2013.

[9] H. Shen, S. Li, D. Gu, and H. Chang. Bearing defect inspection based on machine vision. Mea-
surement, 45(4):719–733, 2012.

[10] BEADMASTER 3D-Product Info. ISRA VISION.

[11] P3D INTRO-Handout. Coherix.

[12] L. Pérez, Í. Rodríguez, N. Rodríguez, R. Usamentiaga, and D. García. Robot guidance using
machine vision techniques in industrial environments: A comparative review. Sensors, 16(3):335,
2016.

[13] Halcon. Solution Guide III-C. 2018.

[14] Choosing the best machine vision technique for robot guidance. https://www.roboticstomorrow.com/article/2018/09/choosing-the-best-machine-vision-technique-for-robot-guidance/12547. Accessed: 2019-06-15.

[15] A. Valsaraj, A. Barik, P. Vishak, and K. Midhun. Stereo vision system implemented on fpga. Proce-
dia Technology, 24:1105–1112, 2016.

[16] J. Sturm, K. Konolige, C. Stachniss, and W. Burgard. 3d pose estimation, tracking and model
learning of articulated objects from dense depth video using projected texture stereo. In RGB-D:
Advanced Reasoning with Depth Cameras Workshop, RSS, 2010.

[17] J.-K. Oh, S. Lee, and C.-H. Lee. Stereo vision based automation for a bin-picking solution. Interna-
tional Journal of Control, Automation and Systems, 10(2):362–373, 2012.

[18] L. Li. Time-of-flight camera–an introduction. Technical white paper, (SLOA190B), 2014.

[19] M. Hansard, S. Lee, O. Choi, and R. P. Horaud. Time-of-flight cameras: principles, methods and
applications. Springer Science & Business Media, 2012.

[20] K. Claes and H. Bruyninckx. Robot positioning using structured light patterns suitable for self
calibration and 3d tracking. In Proceedings of the 2007 International Conference on Advanced
Robotics, Jeju, Korea, 2007.

[21] K. Khoshelham and S. O. Elberink. Accuracy and resolution of kinect depth data for indoor mapping
applications. Sensors, 12(2):1437–1454, 2012.

[22] L. Susperregi, B. Sierra, M. Castrillón, J. Lorenzo, J. M. Martínez-Otzeta, and E. Lazkano. On the


use of a low-cost thermal sensor to improve kinect people detection in a mobile robot. Sensors, 13
(11):14687–14713, 2013.

[23] I. A. Sucan, M. Moll, and L. E. Kavraki. The open motion planning library. IEEE Robotics &
Automation Magazine, 19(4):72–82, 2012.

[24] E. Plaku, L. E. Kavraki, and M. Y. Vardi. Hybrid systems: from verification to falsification by com-
bining motion planning and discrete search. Formal Methods in System Design, 34(2):157–182,
2009.

[25] D. Youakim and P. Ridao. Motion planning survey for autonomous mobile manipulators underwater
manipulator case study. Robotics and Autonomous Systems, 107:20–44, 2018.

[26] L. Larsen, J. Kim, M. Kupke, and A. Schuster. Automatic path planning of industrial robots compar-
ing sampling-based and computational intelligence methods. Procedia Manufacturing, 11:241–248,
2017.

[27] S. Karaman and E. Frazzoli. Sampling-based algorithms for optimal motion planning. The interna-
tional journal of robotics research, 30(7):846–894, 2011.

[28] R. Ragel, I. Maza, F. Caballero, and A. Ollero. Comparison of motion planning techniques for
a multi-rotor uas equipped with a multi-joint manipulator arm. In 2015 Workshop on Research,
Education and Development of Unmanned Aerial Systems (RED-UAS), pages 133–141. IEEE,
2015.

[29] J. N. Gomes de Brito. Manipulador robótico para poda automática (projeto romovi). Master’s thesis,
FEUP, 2018.

[30] B. Siciliano, L. Sciavicco, L. Villani, and G. Oriolo. Robotics: modelling, planning and control.
Springer Science & Business Media, 2010.

[31] P. M. Kebria, S. Al-Wais, H. Abdi, and S. Nahavandi. Kinematic and dynamic modelling of ur5
manipulator. In 2016 IEEE International Conference on Systems, Man, and Cybernetics (SMC),
pages 004229–004234. IEEE, 2016.

[32] S. Brand and N. Nilsson. Calibration of robot kinematics using a double ball-bar with embedded
sensing. 2016.

[33] Product specification IRB 2400/10, IRB 2400/16. ABB Robotics. Rev. N.

[34] M. Brandstötter, A. Angerer, and M. Hofbaur. An analytical solution of the inverse kinematics prob-
lem of industrial serial manipulators with an ortho-parallel basis and a spherical wrist. In Proceed-
ings of the Austrian Robotics Workshop, pages 7–11, 2014.

[35] T. M. Salgueiro. Robot assisted needle interventions for brain surgery. Master’s thesis, IST, 2014.

[36] S. Chiaverini, B. Siciliano, and O. Egeland. Experimental results on controlling a 6-dof robot ma-
nipulator in the neighborhood of kinematic singularities. In Experimental Robotics III, pages 1–13.
Springer, 1994.

[37] User Manual UR5/CB3. Universal Robots. Version 3.3.3.

[38] M. J. Laastad. Robotic rehabilitation of upper-limb after stroke-implementation of rehabilitation


control strategy on robotic manipulator. Master’s thesis, NTNU, 2017.

[39] E. Wernholt. On multivariable and nonlinear identification of industrial robots. Division of Automatic
Control and Communication Systems, Department of . . . , 2004.

[40] I. A. Mendoza. Rapid programming of an assembly cell of lego cars and modelling the systems
using sysml. 2017.

[41] Technical reference manual - RAPID Instructions, Functions and Data types. ABB Robotics. Rev. J.

[42] V. Tlach, I. Kuric, D. Kumičáková, and A. Rengevič. Possibilities of a robotic end of arm tooling
control within the software platform ros. Procedia engineering, 192:875–880, 2017.

[43] ROS website. http://wiki.ros.org/. Accessed: 2019-06-30.

[44] N. Koenig and A. Howard. Design and use paradigms for gazebo, an open-source multi-robot simu-
lator. In 2004 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)(IEEE
Cat. No. 04CH37566), volume 3, pages 2149–2154. IEEE, 2004.

[45] S. Chitta, E. Marder-Eppstein, W. Meeussen, V. Pradeep, A. Rodríguez Tsouroukdissian, J. Bohren, D. Coleman, B. Magyar, G. Raiola, M. Lüdtke, and E. Fernández Perdomo. ros_control: A generic and simple control framework for ros. The Journal of Open Source Software, 2017. URL http://www.theoj.org/joss-papers/joss.00456/10.21105.joss.00456.pdf.

[46] Tutorial: ROS control. http://gazebosim.org/tutorials/?tut=ros_control. Accessed: 2019-08-15.

[47] S. Chitta. Moveit!: an introduction. In Robot Operating System (ROS), pages 3–27. Springer, 2016.

[48] The URScript Programming Language. Universal Robots. Version 1.8.

[49] G. K. Jati, H. M. Manurung, et al. Discrete cuckoo search for traveling salesman problem. In
2012 7th International Conference on Computing and Convergence Technology (ICCCT), pages
993–997. IEEE, 2012.

[50] D. Gupta. Solving tsp using various meta-heuristic algorithms. International Journal of Recent
Contributions from Engineering, Science & IT (iJES), 1(2):22–26, 2013.

[51] M. F. A. Syahputra, R. N. Devita, S. A. Siregar, and K. C. Kirana. Implementation of traveling


salesman problem (tsp) based on dijkstra’s algorithm in the information system of delivery service.
JAVA Journal of Electrical and Electronics Engineering, 14(1), 2016.

[52] Cuckoo search. https://github.com/Ashwin-Surana/cuckoo-search. Accessed: 2019-07-30.

[53] N. J. P. M. Rodrigues. Ensinamento por demonstração com controlo de força e inspeção por visão
em células de polimento robotizadas. 2016.

[54] A. C. G. d. S. Januário. Object re-grasping with dexterous robotic hands. Master’s thesis, IST, 2015.

[55] OMPL available planners. https://ompl.kavrakilab.org/planners.html. Accessed: 2019-09-30.

[56] Robotic path planning: RRT and RRT*. https://medium.com/@theclassytim/robotic-path-planning-rrt-and-rrt-212319121378. Accessed: 2019-09-30.

[57] I. A. Şucan and L. E. Kavraki. Kinodynamic motion planning by interior-exterior cell exploration. In
Algorithmic Foundation of Robotics VIII, pages 449–464. Springer, 2009.

Appendix A

Algorithms

A.1 Dijkstra

Dijkstra’s algorithm is known as a single source shortest path algorithm, i.e.it computes the shortest
path tree making the starting point as its root. The first step of the algorithm concerns the initialization
of the graph(G) with a list of vertices(V) and edges(E). It assignes the distance for every node, seting 0
to the current node and ∞ to the unvisited nodes proceeding to the calculation of the distances between
the current node and its neighbours. The current node is marked as visited and the closest unmarked
node is set as current. The relaxation updates the costs of all the vertices, if its possible to improve the
best estimate of the shortest path[51].

Algorithm 3 Pseudocode of Dijkstra's algorithm [51]

procedure INITIALISE_SINGLE_SOURCE(Graph g, Node s)
    for each vertex v in Vertices(g) do
        g.d[v] := infinity
        g.pi[v] := null
    g.d[s] := 0
procedure RELAX(Node u, Node v, double w[][])
    if d[v] > d[u] + w[u, v] then
        d[v] := d[u] + w[u, v]
        pi[v] := u
procedure SHORTEST_PATHS(Graph g, Node s)
    initialise_single_source(g, s)
    S := 0            /* make S empty */
    Q := Vertices(g)  /* put the vertices in a PQ */
    while not Empty(Q) do
        u := ExtractCheapest(Q)
        AddNode(S, u)  /* add u to S */
        for each vertex v in Adjacent(u) do
            relax(u, v, w)
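
A compact Python rendering of Algorithm 3, using a binary heap as the priority queue (the adjacency-dictionary graph format is an assumption made for the example):

import heapq

def dijkstra(graph, source):
    d = {v: float("inf") for v in graph}   # g.d[v] := infinity
    pi = {v: None for v in graph}          # g.pi[v] := null
    d[source] = 0
    pq = [(0, source)]                     # priority queue of (cost, node)
    visited = set()
    while pq:
        cost, u = heapq.heappop(pq)        # ExtractCheapest(Q)
        if u in visited:
            continue
        visited.add(u)                     # AddNode(S, u)
        for v, w in graph[u].items():      # relax every neighbour of u
            if d[v] > d[u] + w:
                d[v] = d[u] + w
                pi[v] = u
                heapq.heappush(pq, (d[v], v))
    return d, pi

# Example: shortest distances from node "a" in a small weighted graph.
g = {"a": {"b": 1, "c": 4}, "b": {"c": 2}, "c": {}}
print(dijkstra(g, "a")[0])  # {'a': 0, 'b': 1, 'c': 3}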

A.2 Sampling-based motion algorithms

PRM - Probabilistic Roadmap is a planner capable of computing collision-free paths for any kind of
robot that moves in environments with fixed obstacles. It works by executing, synchronously, two main
tasks: the construction of a roadmap and the search for its shortest path. The roadmap is built through
the random selection of robot configurations, taking into account whether these are collision-free or not.
When a collision-free configuration is found, it is marked as a node and it is connected to the remaining
roadmap nodes according to criteria defined by the user. Regarding the second task, the shortest path
is found through an A* or Dijkstra algorithm [29].

Figure A.1: Recursive iterations on the roadmap constructions according to the PRM algorithm [29]

RRT - Rapidly-exploring Random Tree builds a tree where all nodes correspond to collision-free
states, the first node being the initial configuration of the robot [54]. The algorithm is depicted below,
where:

x_init is the initial state;

K_iterations is the maximum number of iterations;

Δt is the duration of each movement;

x_rand, x_near and x_new are the new sample of the state space, the closest node of the tree to x_rand,
and the state reached from x_near with a movement u applied during Δt, respectively.

RANDOM_STATE() returns a random state, which will be added to the tree after finding its nearest
neighbour through NEAREST_NEIGHBOUR(x_rand, Tree).

NEW_STATE(x_near, u, Δt) creates the state which goes from x_near towards x_rand with movement
u for Δt time.

Algorithm 4 RRT algorithm [54]
procedure GENERATE_RRT(x_init, K_iterations, Δt)
    Tree.init()
    for i = 1 to K_iterations do
        x_rand ← RANDOM_STATE()
        x_near ← NEAREST_NEIGHBOUR(x_rand, Tree)
        x_new ← NEW_STATE(x_near, u, Δt)
        Tree.ADD_VERTEX(x_new)
        Tree.ADD_EDGE(x_near, x_new, u)
    RETURN Tree

Figure A.2: Iterations on the tree construction using RRT [29]

The RRTConnect and RRTstar are variations of the RRT algorithm. The RRTConnect is bidirec-
tional (i.e. it grows two trees, one from the start state and another from the goal state). The RRTstar is
an asymptotically optimal version of RRT; it converges on the optimal path as a function of time [55]. The
RRTstar records the distance each vertex has traveled relative to its parent vertex; this distance is called
cost. After the closest node is found in the graph, a neighborhood of vertices within a certain area is
examined. If a node with a cheaper cost is found, this node replaces the closest node. This eliminates
the cubic structure of the RRT depicted in A.1, creating a fan-shaped structure [56]. RRTstar also adds
the rewiring of the tree: after a vertex has been connected to the cheapest neighbor, the neighbors
are examined once again, checking whether rewiring them to the new vertex would decrease their cost.
If so, the neighbor is rewired to the newly added vertex [56].

Table A.1: RRTstar[56]

Fan shaped structure for RRTstar Rewired RRTstar tree

KPIECE - Kinodynamic motion Planning by Interior-Exterior Cell Exploration discretizes the state
space into uniform cells of the same fixed size, which can be classified as interior or exterior. An interior
cell has 2n non-diagonal neighbour cells and an exterior cell has fewer than 2n non-diagonal neighbour
cells. A tree is initialized in the initial state of the robot and, in a first phase, the tree growth criterion is
the preference for exterior cells over interior cells. Once all neighbour cells are interior or exterior, the
new criterion is the importance of each cell, which is computed resorting to the following equation:

$$\mathrm{Importance}(p) = \frac{\log(I)\cdot \mathrm{score}}{S \cdot N \cdot C} \quad (A.1)$$

where p is the cell to evaluate, I is the iteration number, S is the number of times p was selected for
expansion, N is the number of neighbour cells instantiated at the same level of discretization and C is
the sum of the durations of all the motions in the tree until the considered iteration. The algorithm ends
when a stopping condition is satisfied or no solution is found [54].

Figure A.3: Example of KPIECE discretization using 3 levels, taken from [57]

A.3 RAPID and URScript correction files

Algorithm 5 URScript code


sleep(2)
sleep(0.3)
movep(pose_trans(Plane_22, p[0.1, 0.11, -0.01, 0, 0, 0]), a=0.1, v=0.07, r=0.001)
sleep(0.3)
movep(pose_trans(Plane_22, p[0.1, 0.11, 0.0, 0, 0, 0]), a=0.1, v=0.07, r=0.001)
set_digital_out(4, True)
sleep(0.01)
set_digital_out(4, False)
sleep(0.3)
movep(pose_trans(Plane_22, p[0.12, 0.13, 0.0, 0, 0, 0]), a=0.1, v=0.01, r=0.001)
sleep(0.3)
set_digital_out(4, True)
sleep(0.01)
set_digital_out(4, False)
sleep(0.3)
movep(pose_trans(Plane_22, p[0.12, 0.13, -0.01, 0, 0, 0]), a=0.1, v=0.07, r=0.001)
sleep(0.3)
movep(pose_trans(Plane_22, p[0.14, 0.15, -0.01, 0, 0, 0]), a=0.1, v=0.07, r=0.001)
sleep(0.3)
movep(pose_trans(Plane_22, p[0.14, 0.15, 0.0, 0, 0, 0]), a=0.1, v=0.07, r=0.001)
set_digital_out(4, True)
sleep(0.01)
set_digital_out(4, False)
sleep(0.3)
movep(pose_trans(Plane_22, p[0.16, 0.17, 0.0, 0, 0, 0]), a=0.1, v=0.01, r=0.001)
sleep(0.3)
set_digital_out(4, True)
sleep(0.01)
set_digital_out(4, False)
sleep(0.3)
movep(pose_trans(Plane_22, p[0.16, 0.17, -0.01, 0, 0, 0]), a=0.1, v=0.07, r=0.001)
sleep(0.01)
movep(pose_trans(Plane_22,p[0.1,0.1,-0.2,0,0,0]),a=0.1,v=0.07,r=0.001)

Algorithm 6 RAPID Code
MODULE correct
CONST robtarget point_approx0:=[[0.0,0.0,6],[0.02453, 0.56944, 0.82132, 0.02406]
,[0, -1, 1, 0],
[9000000000.0, 9000000000.0, 9000000000.0, 9000000000.0, 9000000000.0, 9000000000.0]];
CONST robtarget point00:=[[0.0,0.0,1],[0.02453, 0.56944, 0.82132, 0.02406]
,[0, -1, 1, 0],
[9000000000.0, 9000000000.0, 9000000000.0, 9000000000.0, 9000000000.0, 9000000000.0]];
CONST robtarget point01:=[[100.0,100.0,1],[0.02453, 0.56944, 0.82132, 0.02406]
,[0, -1, 1, 0],
[9000000000.0, 9000000000.0, 9000000000.0, 9000000000.0, 9000000000.0, 9000000000.0]];
CONST robtarget point02:=[[200.0,200.0,1],[0.02453, 0.56944, 0.82132, 0.02406]
,[0, -1, 1, 0],
[9000000000.0, 9000000000.0, 9000000000.0, 9000000000.0, 9000000000.0, 9000000000.0]];
CONST robtarget point03:=[[300.0,300.0,1],[0.02453, 0.56944, 0.82132, 0.02406]
,[0, -1, 1, 0],
[9000000000.0, 9000000000.0, 9000000000.0, 9000000000.0, 9000000000.0, 9000000000.0]];
CONST robtarget point04:=[[400.0,400.0,1],[0.02453, 0.56944, 0.82132, 0.02406]
,[0, -1, 1, 0],
[9000000000.0, 9000000000.0, 9000000000.0, 9000000000.0, 9000000000.0, 9000000000.0]];
CONST robtarget point_remov0:=[[400.0,400.0,6],[0.02453, 0.56944, 0.82132, 0.02406],
[0, -1, 1, 0],
[9000000000.0, 9000000000.0, 9000000000.0, 9000000000.0, 9000000000.0, 9000000000.0]];
CONST robtarget point_approx1:=[[900.0,900.0,6],[0.02453, 0.56944, 0.82132, 0.02406],
[0, -1, 1, 0],
[9000000000.0, 9000000000.0, 9000000000.0, 9000000000.0, 9000000000.0, 9000000000.0]];
CONST robtarget point10:=[[900.0,900.0,1],[0.02453, 0.56944, 0.82132, 0.02406]
,[0, -1, 1, 0],
[9000000000.0, 9000000000.0, 9000000000.0, 9000000000.0, 9000000000.0, 9000000000.0]];
CONST robtarget point11:=[[1000.0,1000.0,1],[0.02453, 0.56944, 0.82132, 0.02406]
,[0, -1, 1, 0],
[9000000000.0, 9000000000.0, 9000000000.0, 9000000000.0, 9000000000.0, 9000000000.0]];
CONST robtarget point_remov1:=[[1000.0,1000.0,6],[0.02453, 0.56944,0.82132,0.02406],
[0, -1, 1, 0],
[9000000000.0, 9000000000.0, 9000000000.0, 9000000000.0, 9000000000.0, 9000000000.0]];
PROC GO()
MoveL point_approx0,vmax,z50, t_gun1\WObj:=st_01;
MoveGL_Start point00,vmax,z0, t_gun1\WObj:=st_01;
MoveL point01,vmax,z0, t_gun1\WObj:=st_01;
MoveL point02,vmax,z0, t_gun1\WObj:=st_01;
MoveL point03,vmax,z0, t_gun1\WObj:=st_01;
MoveGL_Finish point04,vmax,fine, t_gun1\WObj:=st_01;
MoveL point_remov0,vmax,z0, t_gun1\WObj:=st_01;
MoveL point_approx1,vmax,z0, t_gun1\WObj:=st_01;
MoveL point10,vmax,fine, t_gun1\WObj:=st_01;
MoveGL point11,vmax,fine, t_gun1\WObj:=st_01;
MoveL point_remov1,vmax,z50, t_gun1\WObj:=st_01;
ENDPROC
ENDMODULE

A.4 PLC

Figure A.4: Master grafcet for station 2


Figure A.5: Ladder diagram for station 2

A.5 ROS

Figure A.6: ROS control architecture[45]

Figure A.7: Gazebo+ROS control architecture[46]

Figure A.8: ROS Graph representing the active nodes and topics in the MoveIt! implementation
Figure A.9: TF graph for the MoveIt! implementation

controller_list.yaml

controller_list:
  - name: arm_controller
    action_ns: follow_joint_trajectory
    type: FollowJointTrajectory
    joints: [joint_1, joint_2, joint_3, joint_4, joint_5, joint_6]

joint_limits.yaml

joint_limits:
  joint_1:
    has_velocity_limits: true
    max_velocity: 2.618
    has_acceleration_limits: false
    max_acceleration: 0
  joint_2:
    has_velocity_limits: true
    max_velocity: 2.618
    has_acceleration_limits: false
    max_acceleration: 0
  joint_3:
    has_velocity_limits: true
    max_velocity: 2.618
    has_acceleration_limits: false
    max_acceleration: 0
  joint_4:
    has_velocity_limits: true
    max_velocity: 6.2832
    has_acceleration_limits: false
    max_acceleration: 0
  joint_5:
    has_velocity_limits: true
    max_velocity: 6.2832
    has_acceleration_limits: false
    max_acceleration: 0
  joint_6:
    has_velocity_limits: true
    max_velocity: 7.854
    has_acceleration_limits: false
    max_acceleration: 0

Appendix B

Technical Datasheets

B.1 Adhesive Properties

Figure B.1: SikaPower-492 technical data

