
Performance Evaluation of the Various

Training Algorithms and Network


Topologies in a Neural-network-based
Inverse Kinematics Solution for Robots

Regular Paper



Yavuz Sarı 1,*

1 Sakarya University Hendek Vocational High School, Electronics and Automation Department, Sakarya, Turkey
* Corresponding author E-mail: sari@sakarya.edu.tr

Received 21 May 2013; Accepted 02 Apr 2014

DOI: 10.5772/58562

© 2014 The Author(s). Licensee InTech. This is an open access article distributed under the terms of the Creative
Commons Attribution License (http://creativecommons.org/licenses/by/3.0), which permits unrestricted use,
distribution, and reproduction in any medium, provided the original work is properly cited.


Abstract Recently, artificial neural networks have been
used to solve the inverse kinematics problem of
redundant robotic manipulators, where traditional
solutions are inadequate. The training algorithm and
network topology affect the performance of the neural
network. There are several training algorithms used in
the training of neural networks. In this study, the effect of
various learning algorithms on the learning performance
of the neural networks on the inverse kinematics model
learning of a seven-joint redundant robotic manipulator
is investigated. After the implementation of various
training algorithms, the Levenberg-Marquardt (LM)
algorithm is found to be significantly more efficient
compared to other training algorithms. The effect of the
various network types, activation functions and number
of neurons in the hidden layer on the learning
performance of the neural network is then investigated
using the LM algorithm. Among different network
topologies, the best results are obtained for the
feedforward network model with logistic sigmoid activation function (logsig) and 41 neurons in the hidden
layer. The results are presented with graphics and tables.

Keywords Robotics, Neural Networks, Training Algorithms,
Machine Learning, Inverse Kinematics Solution

1. Introduction

The inverse kinematics problem is one of the most
important problems in robotics. Fundamentally, it
consists in finding the set of joint variables to reach a
desired configuration of the tool frame. Computer-based robots usually act in the joint space, whereas
objects are usually expressed in the Cartesian coordinate
system. In order to control the position of the end-effector
of a robotic manipulator, an inverse kinematics solution
should be established to make the necessary conversions.
This problem usually involves a set of nonlinear, coupled algebraic equations. The inverse kinematics problem is
especially complex and time-consuming for redundant
types of robotic manipulators [1,2].

In the literature, inverse kinematics solutions for robotic
manipulators are drawn from various traditional
methods such as algebraic methods, geometric methods
and numerical methods. Neural networks have also
become popular [3-6].

Intelligent techniques have been a popular subject in robotics in recent years, as a way to make control systems act more intelligently and with a high degree of autonomy.
widely applied in robotics for the extreme flexibility that
comes from their learning ability and function-
approximation capability in nonlinear systems [2]. Many
papers have been published about the neural-network-
based inverse kinematics solution for robotic
manipulators [7-14]. Tejomurtula and Kak presented a
study based on the solution of the inverse kinematics
problem for a three-joint robotic manipulator using a
structural neural network which can be trained quickly to
reduce training time and to increase accuracy [7]. Xia et
al. presented a paper about formulating the inverse
kinematics problem as a time-varying quadratic
optimization problem. For this purpose they suggested a
new recurrent neural network. According to their studies,
their suggested network structure is capable of
asymptotic tracking for the motion control of redundant
robotic manipulators [8]. Zhang et al. used Radial Basis Function (RBF) networks for the inverse kinematics solution of a MOTOMAN six-joint robotic manipulator. They used this solution to avoid the complicated traditional procedures required to derive and program the equations [9].
using neural networks to control the motion of a six-joint
robot was presented by Hasan et al. Their study was
implemented without explicitly specifying the kinematic
configuration or the working-space configuration [10].
Rezzoug and Gorce studied the prediction of finger
posture by using artificial neural networks based on an
error backpropagation algorithm. They obtained lower
prediction errors compared to the other studies in the
literature [11]. Chiddarwar et al. published a paper based
on the comparison of radial basis function and
multilayer neural networks for the solution of the inverse
kinematics problem for a six-joint serial robot model,
using a fusion approach [12]. Zhang et al. presented a
paper about the kinematic analysis of a novel 3-DOF
actuation-redundant parallel manipulator using neural
networks. They applied different intelligent techniques
such as multilayer-perceptron neural network, Radial Basis Function neural network and Support Vector
Machine to investigate the forward kinematic problem of the robot. They found that SVM gave better forward
kinematic results than other applied methods [13]. Köker
et al. presented a study based on the inverse kinematics
solution of a Hitachi M6100 robot based on committee-
machine neural networks. They showed that using a
committee-machine neural network instead of a unique
neural network increased the performance of the
solution; in other words, the error was decreased [14].

A neural network's working principle is based on learning from previously obtained data, known as a learning or training set, and then checking the system's success using test data. The learning algorithm also
affects the success of the neural network implementation
significantly. In this paper, the effect of various learning
algorithms, network types, activation functions and
numbers of neurons in the hidden layer has been
examined for the inverse kinematics solution of a seven-
joint robotic manipulator. The results show that the
Levenberg-Marquardt training algorithm, feedforward
networks, the logsig activation function and a hidden layer of 41 neurons gave the best results for the inverse kinematics
solution of the used robot model.

In the paper, section 2 provides a kinematics analysis of a
Schunk LWA3 robot [15], section 3 outlines the neural-
network-based inverse kinematics solution, section 4
describes training and testing, section 5 gives results and
a discussion, and section 6 provides conclusions.

2. Kinematic analysis of Schunk LWA3 robot
A manipulator is composed of serial links that are
connected to each other with revolute or prismatic joints
from the base frame through the end-effector. The
calculation of the position and orientation of the end-
effector in terms of the joint variables is known as
forward kinematics. In order to obtain forward
kinematics of a robot mechanism in a systematic manner,
one should use a convenient kinematics model. The
Denavit-Hartenberg method, which uses four parameters,
is the most common method to describe robot kinematics
[16-18]. These four parameters can be described as $a_{i-1}$, $\alpha_{i-1}$, $d_i$ and $\theta_i$, which are the link length, link twist, link offset and joint angle, respectively. A coordinate frame is attached to each joint for the determination of the D-H parameters, and the $z$ axis of the coordinate frame points along the rotary or sliding direction of the joint [19,20].

A seven-DOF Schunk LWA3 robot model has been used
in this study, as shown in Figure 1. A 3-D view of the
Schunk LWA3 robot is given in Figure 2. By using the
above notations and Figure 1, the D-H parameters are
obtained for this robot model as given in Table 1.


i   a (mm)   α (°)   d (mm)   θ
1   0   -90   300     θ1
2   0    90     0     θ2
3   0   -90   328     θ3
4   0    90     0     θ4
5   0   -90   276.5   θ5
6   0    90     0     θ6
7   0   -90   171.7   θ7
Table 1. D-H parameters of the robot


Figure 1. The kinematic structure of Schunk LWA3 robot [15,21]


Figure 2. 3-D view of Schunk LWA3 robot [15,21]

The general transformation matrix ${}^{i-1}T_i$ for a single link can be obtained as follows:

$$
{}^{i-1}T_i = \mathrm{Rot}(x,\alpha_{i-1})\,\mathrm{Trans}(x,a_{i-1})\,\mathrm{Rot}(z,\theta_i)\,\mathrm{Trans}(z,d_i) \quad (1)
$$

$$
{}^{i-1}T_i =
\begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & c\alpha_{i-1} & -s\alpha_{i-1} & 0 \\ 0 & s\alpha_{i-1} & c\alpha_{i-1} & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}
\begin{bmatrix} 1 & 0 & 0 & a_{i-1} \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}
\begin{bmatrix} c\theta_i & -s\theta_i & 0 & 0 \\ s\theta_i & c\theta_i & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}
\begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & d_i \\ 0 & 0 & 0 & 1 \end{bmatrix} \quad (2)
$$

$$
{}^{i-1}T_i =
\begin{bmatrix}
c\theta_i & -s\theta_i & 0 & a_{i-1} \\
s\theta_i c\alpha_{i-1} & c\theta_i c\alpha_{i-1} & -s\alpha_{i-1} & -s\alpha_{i-1} d_i \\
s\theta_i s\alpha_{i-1} & c\theta_i s\alpha_{i-1} & c\alpha_{i-1} & c\alpha_{i-1} d_i \\
0 & 0 & 0 & 1
\end{bmatrix}. \quad (3)
$$

In (1), (2) and (3), $\mathrm{Rot}(x,\alpha_{i-1})$ and $\mathrm{Rot}(z,\theta_i)$ present rotation, $\mathrm{Trans}(x,a_{i-1})$ and $\mathrm{Trans}(z,d_i)$ denote translation, and $c\theta_i$ and $s\theta_i$ are the short-hands for $\cos\theta_i$ and $\sin\theta_i$, respectively.

For the Schunk LWA3 robot, it is straightforward to
compute each of the link transformation matrices using
(1), as follows.


$$
{}^{0}T_1 = \begin{bmatrix} c\theta_1 & -s\theta_1 & 0 & 0 \\ 0 & 0 & 1 & 300 \\ -s\theta_1 & -c\theta_1 & 0 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}, \quad (4)
$$

$$
{}^{1}T_2 = \begin{bmatrix} c\theta_2 & -s\theta_2 & 0 & 0 \\ 0 & 0 & -1 & 0 \\ s\theta_2 & c\theta_2 & 0 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}, \quad (5)
$$

$$
{}^{2}T_3 = \begin{bmatrix} c\theta_3 & -s\theta_3 & 0 & 0 \\ 0 & 0 & 1 & 328 \\ -s\theta_3 & -c\theta_3 & 0 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}, \quad (6)
$$

$$
{}^{3}T_4 = \begin{bmatrix} c\theta_4 & -s\theta_4 & 0 & 0 \\ 0 & 0 & -1 & 0 \\ s\theta_4 & c\theta_4 & 0 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}, \quad (7)
$$

$$
{}^{4}T_5 = \begin{bmatrix} c\theta_5 & -s\theta_5 & 0 & 0 \\ 0 & 0 & 1 & 276.5 \\ -s\theta_5 & -c\theta_5 & 0 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}, \quad (8)
$$

$$
{}^{5}T_6 = \begin{bmatrix} c\theta_6 & -s\theta_6 & 0 & 0 \\ 0 & 0 & -1 & 0 \\ s\theta_6 & c\theta_6 & 0 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}, \quad (9)
$$

$$
{}^{6}T_7 = \begin{bmatrix} c\theta_7 & -s\theta_7 & 0 & 0 \\ 0 & 0 & 1 & 171.7 \\ -s\theta_7 & -c\theta_7 & 0 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}. \quad (10)
$$
The forward kinematics of the end-effector with respect to the base frame are determined by multiplying all of the ${}^{i-1}T_i$ matrices as follows:

$$
{}^{0}T_7 = {}^{0}T_1\,{}^{1}T_2\,{}^{2}T_3\,{}^{3}T_4\,{}^{4}T_5\,{}^{5}T_6\,{}^{6}T_7. \quad (11)
$$

An alternative representation of ${}^{0}T_7$ can be written thus:

$$
{}^{0}T_7 = \begin{bmatrix} n_x & s_x & a_x & p_x \\ n_y & s_y & a_y & p_y \\ n_z & s_z & a_z & p_z \\ 0 & 0 & 0 & 1 \end{bmatrix}. \quad (12)
$$

In (12), $n_x, n_y, n_z, s_x, s_y, s_z, a_x, a_y, a_z$ refer to the rotational elements of the transformation matrix. $p_x$, $p_y$ and $p_z$ show the elements of the position vector.



For a seven-jointed manipulator, the position and orientation of the end-effector with respect to the base is given in (13):

$$
{}^{0}T_7 = {}^{0}T_1(\theta_1)\,{}^{1}T_2(\theta_2)\,{}^{2}T_3(\theta_3)\,{}^{3}T_4(\theta_4)\,{}^{4}T_5(\theta_5)\,{}^{5}T_6(\theta_6)\,{}^{6}T_7(\theta_7). \quad (13)
$$

All notations obtained from the calculations based on (13) are given in the Appendix.

By using (15)–(26) (see Appendix), the training, validation
and test sets are prepared for the inverse kinematics
model learning. All of these algorithms are implemented
in MATLAB.
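As an illustration of how these kinematic equations can be evaluated, the following minimal MATLAB sketch (the function and variable names are our own, not taken from the original implementation) chains the link transformations of (4)-(10) into the forward kinematics of (11):

% D-H parameters of the Schunk LWA3 from Table 1: [a (mm), alpha (deg), d (mm)]
DH = [0 -90 300; 0 90 0; 0 -90 328; 0 90 0; 0 -90 276.5; 0 90 0; 0 -90 171.7];
theta = [10 78.4 0 20 5 82 0];   % a sample joint configuration (first row of Table 3)
T07 = eye(4);
for i = 1:7
    T07 = T07 * linkTransform(DH(i,1), DH(i,2), DH(i,3), theta(i));
end
% The columns of T07(1:3,1:3) are the n, s and a vectors of (12);
% T07(1:3,4) is the position vector p.

function T = linkTransform(a, alpha, d, theta)
% Modified D-H link transformation of (3); angles in degrees
T = [ cosd(theta)             -sind(theta)             0            a;
      sind(theta)*cosd(alpha)  cosd(theta)*cosd(alpha) -sind(alpha) -sind(alpha)*d;
      sind(theta)*sind(alpha)  cosd(theta)*sind(alpha)  cosd(alpha)  cosd(alpha)*d;
      0                        0                        0            1 ];
end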

3. Neural-network-based inverse kinematics solution

Neural networks are generally used in the modelling of
nonlinear processes. An artificial neural network is a
parallel-distributed information processing system. It
stores the samples with distributed coding, thus forming a
trainable nonlinear system. Training of a neural network
can be expressed as a mapping between any given input
and output data set. Neural networks have some
advantages, such as adaptation, learning and generalization.
Implementation of a neural-network model requires us to
decide the structure of the model, the type of activation
function and the learning algorithm [22, 23].

In Figure 3, the schematic representation of a neural-
network-based inverse kinematics solution is given. The
solution system is based on training a neural network to
solve an inverse kinematics problem based on the
prepared training data set using direct kinematics
equations. In Figure 3, e refers to error the neural
network results will be an approximation, and there will
be an acceptable error in the solution.

The designed neural-network topology is given in Figure
4. A feed-forward multilayer neural-network structure
was designed including 12 inputs and seven outputs.
Only one hidden layer was used during the studies.

The trained neural network can give the inverse kinematics
solution quickly for any given Cartesian coordinate in a
closed system such as a vision-based robot control system.
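For concreteness, a network of this kind can be assembled with MATLAB's Neural Network Toolbox roughly as follows (a minimal sketch; X is the 12-by-N matrix of Cartesian inputs and T the 7-by-N matrix of joint-angle targets, both assumed to be prepared as described in section 4):

net = feedforwardnet(41, 'trainlm');      % one hidden layer, LM training
net.layers{1}.transferFcn = 'logsig';     % logistic sigmoid activation
net.divideParam.trainRatio = 2380/7000;   % data split used in this study
net.divideParam.valRatio   = 2310/7000;
net.divideParam.testRatio  = 2310/7000;
[net, tr] = train(net, X, T);             % tr records the MSE histories
thetaHat  = net(X(:,1));                  % approximate joint angles for one pose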


Figure 3. Neural-network-based inverse kinematics solution
system [14]


Figure 4. The neural network topology used in this study

4. Training and testing

In this study, a fifth-order polynomial trajectory planning
algorithm has been used to prepare data for training, testing
and validation sets of the neural network. The equation for
fifth-order polynomial trajectory planning is given in (14):


$$
\theta_i(t) = \theta_{i0} + \left(\theta_{if} - \theta_{i0}\right)\left[10\left(\frac{t}{t_f}\right)^{3} - 15\left(\frac{t}{t_f}\right)^{4} + 6\left(\frac{t}{t_f}\right)^{5}\right], \qquad i = 1,\dots,m. \quad (14)
$$

where $\theta_i(t)$ denotes the angular position at time $t$, $\theta_{if}$ is the final position of the $i$th joint, $\theta_{i0}$ is the initial position of the $i$th joint, $m$ is the number of joints, and $t_f$ is the arrival time from the initial position to the target [24-28].

Some starting and final angular positions are defined to
produce data in the work volume of the robotic
manipulator. A sample trajectory is given in Figure 5
between 0 and 90 degrees for a joint.
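The sample trajectory of Figure 5 can be reproduced with a few lines of MATLAB (a sketch assuming an arrival time of tf = 1 s, which matches the time axis of the figure):

theta0 = 0; thetaf = 90; tf = 1;          % initial/final position (deg), arrival time (s)
t   = linspace(0, tf, 200);
tau = t ./ tf;
theta = theta0 + (thetaf - theta0) .* (10*tau.^3 - 15*tau.^4 + 6*tau.^5);  % eq. (14)
plot(t, theta); xlabel('Time (s)'); ylabel('Position (deg)');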

Twelve different training algorithms have been used in
this study. The fundamentals of the 12 training
algorithms are summarized in Table 2.

For the training, 7000 data values corresponding to the (θ1, θ2, θ3, θ4, θ5, θ6, θ7) joint angles according to the different (nx, ny, nz, sx, sy, sz, ox, oy, oz, px, py, pz) Cartesian coordinate parameters were generated by using the fifth-order polynomial trajectory planning given in (14), based on the kinematic equations (15) to (26) given in the Appendix.
A sample data set produced for the training of neural
networks is given in Table 3. Here, the trajectory-planning
algorithm has been used just to produce data between any
given starting angular position and the final position, as
mentioned above. Data preparation was done in a predefined area (robot working area). We tried to obtain
well-structured learning sets to make the learning process
successful and easy. These values were recorded in the files
to form the training, validation and testing sets of the
networks. Of these data values, 2380 were used in the training of the neural networks, 2310 were used in the validation of each neural network to see its success on the same data set, and the remaining 2310 data values were used for the testing process.
was completed when the error reached the possible
minimum. The training and test results are given with
details in the following section.


Figure 5. A sample fifth-order polynomial trajectory
Algorithm Purpose Description
Levenberg-Marquardt
Trainlm
(LM)
Levenberg-Marquardt
backpropagation
trainlm is a network-training function that updates weight and bias values
according to Levenberg-Marquardt optimization.
Conjugate gradient descent
Traincgb
(CGB)
Conjugate gradient backpropagation
with Powell-Beale restarts
traincgb is a network-training function that updates weight and bias values
according to the conjugate gradient backpropagation with Powell-Beale restarts.
Traincgf
(CGF)
Conjugate gradient backpropagation
with Fletcher-Reeves updates
traincgf is a network-training function that updates weight and bias values
according to conjugate gradient backpropagation with Fletcher-Reeves updates.
Trainscg
(SCG)
Scaled conjugate gradient
backpropagation
trainscg is a network-training function that updates weight and bias values
according to the scaled conjugate gradient method.
Traincgp
(CGP)
Conjugate gradient backpropagation
with Polak-Ribière updates
traincgp is a network-training function that updates weight and bias values
according to conjugate gradient backpropagation with Polak-Ribière updates.
Quasi-Newton algorithm
Trainoss
(OSS)
One-step secant backpropagation trainoss is a network-training function that updates weight and bias values
according to the one-step secant method.
Trainbfg
(BFG)
BFGS quasi-Newton
backpropagation
trainbfg is a network-training function that updates weight and bias values
according to the BFGS quasi-Newton method.
Resilient backpropagation
Trainrp
(RP)
Resilient backpropagation trainrp is a network-training function that updates weight and bias values
according to the resilient backpropagation algorithm (Rprop).
Gradient descent with variable learning rate
Traingd
(GD)
Gradient descent backpropagation traingd is a network-training function that updates weight and bias values
according to gradient descent.
Traingda
(GDA)
Gradient descent with adaptive
learning rate backpropagation
traingda is a network-training function that updates weight and bias values
according to gradient descent with adaptive learning rate.
Traingdm
(GDM)
Gradient descent with momentum
backpropagation
traingdm is a network-training function that updates weight and bias
values according to gradient descent with momentum.
Traingdx
(GDX)
Gradient descent with momentum
and adaptive learning rate
backpropagation
traingdx is a network-training function that updates weight and bias values
according to gradient descent momentum and an adaptive learning rate.
Table 2. Different training algorithms used in this study

Inputs: nx, ny, nz, sx, sy, sz, ox, oy, oz, px (mm), py (mm), pz (mm). Outputs: θ1, θ2, θ3, θ4, θ5, θ6, θ7 (°).
-0.987 -0.001 0.161 -0.160 -0.086 -0.983 0.014 -0.996 0.085 -583.312 154.689 117.887 10 78.4 0 20 5 82 0
-0.872 -0.001 0.489 -0.487 -0.086 -0.869 0.043 -0.996 0.075 -507.814 154.689 310.282 30 78.4 0 20 5 82 0
-0.991 -0.004 0.133 -0.130 -0.169 -0.977 0.026 -0.986 0.167 -570.188 156.511 129.943 10 73.8 0 30 10 76 0
-0.958 -0.005 0.288 -0.283 -0.164 -0.945 0.052 -0.986 0.156 -524.624 156.377 220.915 20 69.2 0 40 10 71 0
-0.900 -0.006 0.436 -0.430 -0.158 -0.889 0.074 -0.987 0.140 -461.612 156.207 297.819 30 64.6 0 50 10 65 0
-0.950 -0.012 0.312 -0.279 -0.417 -0.865 0.141 -0.909 0.393 -531.876 170.088 278.655 20 78.4 5 20 20 82 0
-0.955 -0.050 0.293 -0.264 -0.308 -0.914 0.136 -0.950 0.281 -451.005 162.612 322.051 30 64.6 0 50 20 65 5
-0.991 -0.053 0.123 -0.080 -0.505 -0.859 0.108 -0.861 0.496 -516.006 178.544 246.109 15 66.9 5 45 28 68 0
-0.998 -0.050 0.033 -0.010 -0.415 -0.910 0.059 -0.908 0.414 -575.608 169.795 174.391 10 78.4 0 20 25 82 5
-0.917 -0.061 0.393 -0.316 -0.491 -0.812 0.242 -0.869 0.431 -469.332 176.847 378.486 30 78.4 5 20 25 82 5
Table 3. A sample data set produced for the training of neural networks

5. Results and discussions

In this study, 12 training algorithms were used in the
inverse kinematics model learning. The results of these
algorithms are presented in Table 4. According to the
table, the last five training methods (BFG, GDA, GDM, GD and GDX) are unsatisfactory. These are therefore
neglected and not used in the graphical comparisons. The
remaining seven training algorithms are compared.
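The comparison can be scripted as a loop over MATLAB training functions, sketched below (our own illustrative code, with X and T as in section 4; the fitnet topology with 40 hidden neurons is suggested by the matching LM rows of Tables 4 and 5):

trainFcns = {'trainlm','traincgb','traincgf','trainscg','traincgp','trainoss','trainrp'};
results = zeros(numel(trainFcns), 3);
for k = 1:numel(trainFcns)
    net = fitnet(40, trainFcns{k});       % identical topology for every algorithm
    net.trainParam.epochs = 1000;
    [net, tr] = train(net, X, T);
    results(k,:) = [tr.best_perf, tr.best_vperf, tr.best_tperf];  % train/val/test MSE
end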

Firstly, the comparisons are made by the number of
epochs versus the mean squared error (MSE) values of
the training, validation and testing. The graphical
representations are given in Figures 6a, 6b and 6c,
respectively. It is obvious from Figures 6a-c that the LM
algorithm is the best one. The comparisons are then
carried out by training time versus MSE values. These
graphical comparisons are given for training, validation
and test values in Figures 6d, 6e and 6f, respectively. The
meanings of the lines used in these graphics are given in
Figure 6g. Although 5433.6 seconds elapse for 1000 epochs during the training using LM, it is clearly seen in the three comparisons that the MSE value for the LM training algorithm drops below the MSE values of the other algorithms at around 65 seconds. According to these graphical representations, it is clear that the LM algorithm is a good and robust training algorithm for the
inverse kinematics model learning of a robot. The MSE values of training, validation and testing according to the number of epochs for the LM algorithm are given in the same graphic in Figure 7.

Since the LM algorithm is found to be significantly
efficient, it is chosen for the analysis of the effect of
various network types, activation functions and number
of neurons in the hidden layer.

Six different network types, each with 40 neurons in the hidden layer, were designed and examined by using the LM training algorithm and the hyperbolic tangent sigmoid activation function (tansig). The best results were obtained from the feedforward network model, as seen in Table 5. After this, 13 different activation functions were examined in the feedforward neural network model with 40 neurons in the hidden layer, using the LM training algorithm. According to the results presented in Table 6, the logsig activation function was better than the others. Finally, the effect of different numbers of neurons in the hidden layer was analysed by using the feedforward neural network model with the logsig activation function and the LM training algorithm. As is evident in Table 7, 41 neurons in the hidden layer gave the best result.
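The hidden-layer sweep reported in Table 7 can be scripted similarly (again an illustrative sketch with our own variable names):

for h = [10 15 20 25 30 35 36 37 38 39 40 41 42 43 44 45 50]
    net = feedforwardnet(h, 'trainlm');
    net.layers{1}.transferFcn = 'logsig';
    [net, tr] = train(net, X, T);
    fprintf('%2d neurons: validation MSE = %.6f deg^2\n', h, tr.best_vperf);
end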

Algorithm   Training time (sec.)   Number of epochs   Best-performance MSE (deg²): Training, Validation, Test
LM 5433.6 1000 0.0016 0.0030 0.0026
CGB 217.723 967 0.1816 0.1855 0.1894
CGF 215.354 1000 0.1981 0.2243 0.2207
SCG 139.891 756 0.2296 0.2537 0.2551
CGP 182.368 827 0.2514 0.2587 0.2692
OSS 147.725 538 0.9826 1.0354 1.0529
RP 95.707 1000 1.444 1.5025 1.5352
BFG 351.21 1000 13.1157 14.1383 13.2108
GDA 14.65 150 34.7802 35.1373 36.7285
GDM 0.755 6 1265 1317.9 1294.8
GD 0.76 6 1569.7 1591.3 1584.8
GDX 1.288 6 6002.4 5983.3 6008.4
Table 4. Best performances for training, validation and testing, training times and number of epochs for different training algorithms

Figure 6a. Comparison of training performances versus number
of epochs for different training algorithms
Figure 6b. Comparison of validation performances versus
number of epochs for different training algorithms


Figure 6c. Comparison of testing performances versus number of
epochs for different training algorithms

Figure 6d. Comparison of training performances versus time for
different training algorithms


Figure 6e. Comparison of validation performances versus time
for different training algorithms


Figure 6f. Comparison of testing performances versus time for
different training algorithms

Figure 6g. The meanings of the line styles in Figures 6a-f

Singularities and uncertainties in the robotic arm
configurations are among the essential problems in kinematic robot control resulting from applying a robot
model. A solution based on using artificial neural
networks was proposed by Hasan et al. [29]. The main
idea is based on using neural networks to learn the
characteristics of the robot system rather than having to
specify an accurate robotic system model. This means the
inputs of the neural network will also include linear velocity along with the Cartesian position and orientation information, to overcome singularities and uncertainties. Since the
main focus of this paper is to investigate the performance
of the various training algorithms in the neural-network-based inverse kinematics solution, however, this solution
is not applied here.


Figure 7. Training, validation and testing values for LM algorithm


Network Type   Training time (sec.)   Number of epochs   Best-performance MSE (deg²): Training, Validation, Test
Feedforward 6112.4 1000 0.00090432 0.0013 0.0013
Fitnet 5433.6 1000 0.0016 0.003 0.0026
Cascade 6423.8 1000 0.0027 0.004 0.0039
Elman 2194.3 1000 0.0127 0.0134 0.0132
LVQnet 104.685 2 162.8645 171.5717 166.5339
Patternnet 1538 270 248.0215 245.6518 252.9521
Table 5. Best performances for training, validation, and testing, training times and number of epochs for different network types


Activation Function   Training time (sec.)   Number of epochs   Best-performance MSE (deg²): Training, Validation, Test
Logsig 4074.5 733 0.00084905 0.0012 0.0013
Tansig 6112.4 1000 0.00090432 0.0013 0.0013
Softmax 6897.2 801 0.0011 0.0015 0.0064
Radbas 2584.4 442 0.0061 0.0085 0.0087
Poslin 317.406 57 0.0228 0.0339 0.0337
Satlin 192.848 33 0.0324 0.0523 0.0445
Satlins 265.246 46 0.0425 0.0564 0.0567
Tribas 402.489 73 0.0735 0.1229 0.1097
Purelin 28.154 4 1.6975 1.6901 1.7922
Compet 10.513 1 42.4409 41.6729 42.0576
Hardlim 10.386 1 169.4213 163.6717 167.204
Hardlims 11.272 1 161.8098 168.5041 170.5411
Netinv 95.723 17 144.2543 301.7994 4093.2
Table 6. Best performances for training, validation, and testing, training times and number of epochs for different activation functions





Hidden Layer Size   Training time (sec.)   Number of epochs   Best-performance MSE (deg²): Training, Validation, Test
10 1200.6 1000 0.0209 0.0212 0.0217
15 1363 756 0.0051 0.0063 0.0064
20 2583.6 1000 0.0028 0.0031 0.0033
25 3300.9 1000 0.0029 0.0036 0.004
30 1146 290 0.0104 0.0134 0.0132
35 5038.6 1000 0.0013 0.0018 0.0019
36 4850.9 1000 0.0011 0.0015 0.0016
37 3053.3 617 0.0026 0.0041 0.0039
38 5268.5 1000 0.0011 0.0014 0.0015
39 5472.7 1000 0.001 0.0014 0.0027
40 4074.5 733 0.00084905 0.0012 0.0013
41 5907.7 1000 0.00067996 0.0008654 0.001
42 6138.3 1000 0.0011 0.0016 0.0016
43 6372 1000 0.00090911 0.0014 0.0106
44 6381.7 1000 0.0012 0.0017 0.0017
45 6317.8 1000 0.0014 0.0023 0.007
50 7102.1 1000 0.001 0.0015 0.0016
Table 7. Best performances for training, validation and testing, training times and number of epochs for different numbers of neurons in the hidden layer

6. Conclusions

This study has presented a performance evaluation of the
various training algorithms and network topologies in a
neural-network-based inverse kinematics solution for a
seven-DOF robot. Twelve different training algorithms
were analysed for their performances in the inverse
kinematics solution for robotic manipulators. The LM
training algorithm was found to be significantly the most
efficient. It was evident that the LM algorithm was the
fastest-converging algorithm and performed with high
accuracy compared to the other examined training
algorithms. Additionally, 13 different activation functions
were examined during the study. The LM training
algorithm was the fastest-converging one since it reached
the lowest MSE value, around 65 seconds. In conclusion,
the feedforward neural network model consisting of 41
neurons in the hidden layer with logsig activation
function using the LM algorithm is successful for the
inverse kinematics solution for a seven-DOF robot. As a
future study, these training algorithms with a neural
network could be examined on a real robotic
manipulator, including hardware and software
implementation of the solution scheme. Linear velocity
could also be used as an input of the neural network
together with Cartesian position and orientation
information, to overcome singularities and uncertainties.

7. References

[1] Chapelle F., Bidaud P. (2004) Closed form solutions
for inverse kinematics approximation of general 6R
manipulators. Mechanism and Machine Theory 39,
pp. 323–338.
[2] Hasan A.T., Al-Assadi H.M.A.A., Ahmad A.M.I.
(2011) Neural Networks Based Inverse Kinematics
Solution for Serial Robot Manipulators Passing
Through Singularities, Artificial Neural Networks.
Industrial and Control Engineering Applications,
Prof. Kenji Suzuki (Ed.), ISBN: 978-953-307-220-3,
InTech, DOI: 10.5772/14977.
[3] Bingul Z., Ertunc H.M., Oysu C. (2005) Applying
neural network to inverse kinematics problem for 6R
robot manipulator with offset wrist. In: Proceedings,
7th International Conference on Adaptive and
Natural Computing Algorithms, Coimbra, Portugal,
pp. 112-115.
[4] Daunicht W. (1991) Approximation of the inverse
kinematics of an industrial robot by DEFAnet. In:
Proceedings, IEEE International Joint Conference on
Neural Networks, Singapore, pp. 531–538.
[5] Mayorga R.V., Sanongboon P. (2005) Inverse
kinematics and geometrically bounded singularities
prevention of redundant manipulators: an artificial
neural network approach. Robotics and Autonomous
Systems 53, pp. 164–176.
[6] Martin J.A., Lope J.D., Santos M. (2009) A method to
learn the inverse kinematics of multi-link robots by
evolving neuro-controllers. Neurocomputing 72, pp.
2806–2814.
[7] Tejomurtula S., Kak S. (1999) Inverse kinematics in
robotics using neural networks. Inf. Sci. 116, pp. 147–164.
[8] Xia Y., Wang J. (2001) Neural Networks for
Kinematic Control of Redundant Robot
Manipulators. IEEE Trans. Syst. Man Cybern., Part B,
Cybern. 31 (1), pp. 147–154.
[9] Zhang P.Y., Lü T.S., Song L.B. (2005) RBF Networks-Based Inverse Kinematics of 6R Manipulator. Int. J. Adv. Manuf. Technol. 26, pp. 144–147.
[10] Hasan A.T., Hamouda A.M.S., Ismail N., Al-Assadi
H.M.A.A. (2006) An adaptive-learning algorithm to
solve the inverse kinematics problem of a 6 D.O.F.
serial robot manipulator. Advances in Engineering Software 37, pp. 432–438.
[11] Rezzoug N., Gorce P. (2008) Prediction of Fingers
Posture Using Artificial Neural Networks. Journal of
Biomechanics 41, pp. 2743–2749.
[12] Chiddarwar S.S., Babu N.R. (2010) Comparison of
RBF and MLP Neural Networks to Solve Inverse
Kinematic Problem for 6R Serial Robot by a Fusion
Approach. Engineering Applications of Artificial
Intelligence 23, pp. 1083–1092.
[13] Zhang D., Lei J. (2011) Kinematic Analysis of a Novel
3-DOF Actuation Redundant Parallel Manipulator
Using Artificial Intelligence Approach. Robotics and
Computer-Integrated Manufacturing 27, pp. 157–163.
[14] Köker R., Çakar T., Sarı Y. (2013) A neural-network
committee machine approach to the inverse
kinematics problem solution of robotic manipulators.
Engineering with Computers, Springer-Verlag
London, DOI 10.1007/s00366-013-0313-2
[15] Schunk GmbH. [Online] http://www.schunk.com
[16] Kelly A. (1994) Essential Kinematics for Autonomous
Vehicles. Carnegie Mellon University, The Robotics
Institute.
[17] Fu K.S., Gonzalez R.C., Lee C.S.G. (1987) Robotics:
Control, Sensing, Vision, and Intelligence. McGraw-
Hill, New York.
[18] Lee G.C.S. (1982) Robot arm kinematics, dynamics,
and control. IEEE Computer 15 (12), pp. 62–79.
[19] Kucuk S., Bingul Z. (2006) Industrial Robotics:
Theory, Modelling and Control. Sam Cubero (Ed.),
ISBN: 3-86611-285-8, InTech.
[20] Craig J.J. (1989) Introduction to Robotics Mechanics
and Control. Addison-Wesley Publishing Company,
USA.
[21] Pluzhnikov S. (2012) Motion Planning and Control of
Robot Manipulators. Master's thesis, Norwegian
University of Science and Technology, Department of
Engineering Cybernetics.
[22] Haykin S. (2009) Neural Networks and Learning
Machines. Third ed., Pearson, New Jersey, USA.
[23] Kheirkhah A., Azadeh A., Saberi M., Azaron A.,
Shakouri H. (2012) Improved estimation of electricity
demand function by using of artificial neural
network, principal component analysis and data
envelopment analysis. Computers & Industrial
Engineering 64, pp. 425–441.
[24] Gupta A. and Kamal D. (1998) Practical Motion
Planning in Robotics: Current Approaches and
Future, John Wiley & Sons, Inc. New York, NY,
USA.
[25] Ata A.A. (2007) Optimal Trajectory Planning of
Manipulators: A Review. Journal of Engineering
Science and Technology 2 (1), pp. 32–54.
[26] Spong M.S., Vidyasagar M. (1989) Robot Dynamics
and Control. John Wiley and Sons, Inc, New York.
[27] Biagiotti L., Melchiorri C. (2008) Trajectory Planning
for Automatic Machines and Robots, Springer, Berlin.
[28] Tonbul T.S., Saritas M. (2003) Inverse Kinematics
Calculations and Trajectory Planning of a 5 DOF
Edubot Robot Manipulator. Journal of Faculty of
Engineering and Architecture, Gazi University, 18
(1), pp. 145–167.
[29] Hasan A.T., Ismail N., Hamouda A.M.S., Aris I.,
Marhaban M.H., Al-Assadi H.M.A.A. (2010)
Artificial Neural-Network-Based Kinematics
Jacobian Solution For Serial Manipulator Passing
Through Singular Configurations. Advances in Engineering Software 41, pp. 359–367.

8. Appendix

nx=c1c2c3c4c5c6c7-s1s3c4c5c6c7-c1s2s4c5c6c7-c1c2s3s5c6c7-
s1c3s5c6c7-c1c2c3s4s6c7+s1s3s4s6c7-c1s2c4s6c7-c1c2c3c4s5s7+
s1s3c4s5s7+c1s2s4s5s7-c1c2s3c5s7-s1c3c5s7
(15)
ny=s2c3c4c5c6c7+c2s4c5c6c7-s2s3s5c6c7-s2c3s4s6c7+c2c4s6c7-
s2c3c4s5s7-c2s4s5s7-s2s3c5s7
(16)
nz=-s1c2c3c4c5c6c7-c1s3c4c5c6c7+s1s2s4c5c6c7+s1c2s3s5c6c7-
c1c3s5c6c7+s1c2c3s4s6c7+c1s3s4s6c7+s1s2c4s6c7+s1c2c3c4s5s7+
c1s3c4s5s7-s1s2s4s5s7+s1c2s3c5s7-c1c3c5s7
(17)
sx=-c1c2c3c4c5c6s7+s1s3c4c5c6s7+c1s2s4c5c6s7+c1c2s3s5c6s7+
s1c3s5c6s7+c1c2c3s4s6s7-s1s3s4s6s7+c1s2c4s6s7-c1c2c3c4s5c7+
s1s3c4s5c7+c1s2s4s5c7-c1c2s3c5c7-s1c3c5c7
(18)
sy=-s2c3c4c5c6s7-c2s4c5c6s7+s2s3s5c6s7+s2c3s4s6s7-c2c4s6s7-
s2c3c4s5c7-c2s4s5c7-s2s3c5c7
(19)
sz=s1c2c3c4c5c6s7+c1s3c4c5c6s7-s1s2s4c5c6s7-s1c2s3s5c6s7+
c1c3s5c6s7-s1c2c3s4s6s7-c1s3s4s6s7-s1s2c4s6s7+s1c2c3c4s5c7+
c1s3c4s5c7-s1s2s4s5c7+s1c2s3c5c7-c1c3c5c7
(20)
ax=-c1c2c3c4c5s6+s1s3c4c5s6+c1s2s4c5s6+c1c2s3s5s6+s1c3s5s6-
c1c2c3s4c6+s1s3s4c6-c1s2c4c6
(21)
ay=-s2c3c4c5s6-c2s4c5s6+s2s3s5s6-s2c3s4c6+c2c4c6 (22)
az=s1c2c3c4c5s6+c1s3c4c5s6-s1s2s4c5s6-s1c2s3s5s6+c1c3s5s6+
s1c2c3s4c6+c1s3s4c6+s1s2c4c6
(23)
px=-171.7c1c2c3c4c5s6+171.7s1s3c4c5s6+171.7c1s2s4c5s6+
171.7c1c2s3s5s6+171.7s1c3s5s6-171.7c1c2c3s4c6+
171.7s1s3s4c6-171.7c1s2c4c6-276.5c1c2c3s4+276.5s1s3s4-
276.5c1s2c4-328c1s2
(24)
py=-171.7s2c3c4c5s6-171.7c2s4c5s6+171.7s2s3s5s6-
171.7s2c3s4c6+171.7c2c4c6-276.5s2c3s4+276.5c2c4+
328c2+300
(25)
pz=171.7s1c2c3c4c5s6+171.7c1s3c4c5s6-171.7s1s2s4c5s6-
171.7s1c2s3s5s6+171.7c1c3s5s6+171.7s1c2c3s4c6+
171.7c1s3s4c6+171.7s1s2c4c6+276.5s1c2c3s4+276.5c1s3s4+
276.5s1s2c4+328s1s2
(26)
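As a consistency check (our own sketch, not part of the original paper), the closed-form px of (24) can be evaluated in MATLAB for a sample configuration and compared against the matrix product of (11):

th = [10 78.4 0 20 5 82 0];               % sample joint angles from Table 3
c = cosd(th); s = sind(th);
px = -171.7*c(1)*c(2)*c(3)*c(4)*c(5)*s(6) + 171.7*s(1)*s(3)*c(4)*c(5)*s(6) ...
   + 171.7*c(1)*s(2)*s(4)*c(5)*s(6) + 171.7*c(1)*c(2)*s(3)*s(5)*s(6) ...
   + 171.7*s(1)*c(3)*s(5)*s(6) - 171.7*c(1)*c(2)*c(3)*s(4)*c(6) ...
   + 171.7*s(1)*s(3)*s(4)*c(6) - 171.7*c(1)*s(2)*c(4)*c(6) ...
   - 276.5*c(1)*c(2)*c(3)*s(4) + 276.5*s(1)*s(3)*s(4) ...
   - 276.5*c(1)*s(2)*c(4) - 328*c(1)*s(2);
% px should agree with T07(1,4) from the forward-kinematics sketch in section 2
% (about -583.312 mm for this configuration, per the first row of Table 3)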


