
TUM: Roboterpraktikum st2006 - Assignment 5 Group 10

From Openrobotino

[Figure: Screenshot of the object recognition and robot triangulation application]
[Figure: Another screenshot of the triangulation application showing the robot's position on the soccer field]

Localization on a Soccer Field by Means of a Camera

Stefan Edbauer, Jürgen Treml

The task was to make the robot localize itself on a soccer field using its on-board camera. The field had two
goals on opposite sides, one painted yellow, the other one painted blue. Each of the four corners of the field was marked by
a pole consisting of three cubes painted blue and yellow, alternating. The poles next to the yellow goal
were blue-yellow-blue and the ones next to the blue goal yellow-blue-yellow. In addition, there were white
lines on the ground, marking the field's boundaries. Which of these elements to use for localizing the robot was
pretty much up to us.

Contents
1 Basic Idea
2 Techniques
2.1 Libraries & Utilities
2.2 Implementation
2.2.1 Version 1 (First Implementation)
2.2.2 Version 2 (Improved)
3 Mathematics
4 Code Review
5 Bugs
6 Improvements
6.1 Recent Improvements
6.2 Ideas for further Improvements
7 Links
Basic Idea
Assuming the robot is standing somewhere on the soccer field, it starts turning until it finds a (first) pole.
Having found a first pole, it measures the time until it finds a second one. Knowing the speed at which it is rotating, you
can easily calculate the angle between the two detected poles. All of this is repeated for a third pole.

Knowing the two angles between the three poles as seen from the robot's position, and given the geometry of the soccer
field as well as the locations of the poles (which we can distinguish by their painting), we can calculate the robot's
position. The details of how to do this are described a little further down this article.
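As a minimal sketch of the angle measurement (the names are illustrative, not the project's actual code), the angle swept between two pole sightings is simply the rotation speed multiplied by the elapsed time:

#include <chrono>

// The robot turns at a constant, known speed (15 deg/s in this project).
const double ROTATION_SPEED_DEG_S = 15.0;

// Angle (in degrees) swept between two pole sightings.
double angleBetweenPoles(std::chrono::steady_clock::time_point first,
                         std::chrono::steady_clock::time_point second)
{
    std::chrono::duration<double> elapsed = second - first;
    return ROTATION_SPEED_DEG_S * elapsed.count();
}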

Techniques
Libraries & Utilities

Qt for the application's GUI
OpenCV for image processing and object recognition
    histograms and backprojection
    contour detection
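As a rough sketch of how backprojection and contour detection fit together, here is the equivalent with today's OpenCV C++ API (the project itself used the C API available in 2006; hueHist is assumed to be a precomputed hue histogram of the target color):

#include <opencv2/opencv.hpp>
#include <vector>

std::vector<std::vector<cv::Point>> findColorBlobs(const cv::Mat& frame,
                                                   const cv::Mat& hueHist)
{
    cv::Mat hsv;
    cv::cvtColor(frame, hsv, cv::COLOR_BGR2HSV);

    // Backproject the histogram: each pixel becomes the likelihood
    // that it belongs to the target color.
    float hueRange[] = {0, 180};
    const float* ranges[] = {hueRange};
    int channels[] = {0};
    cv::Mat backProj;
    cv::calcBackProject(&hsv, 1, channels, hueHist, backProj, ranges);

    // Threshold the likelihood image and extract the blob contours.
    cv::threshold(backProj, backProj, 50, 255, cv::THRESH_BINARY);
    std::vector<std::vector<cv::Point>> contours;
    cv::findContours(backProj, contours, cv::RETR_EXTERNAL,
                     cv::CHAIN_APPROX_SIMPLE);
    return contours;
}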

Implementation

Version 1 (First Implementation)

In an endless loop, do the following:

Acquire an image from the on-board camera
Create backprojections of this image according to the given histograms for the objects on the field
Find contours in those backprojections
Find the surrounding rectangle for each contour
Filter contours by size to get rid of very small objects created by other objects in the room
Try to fit yellow and blue objects together and thus recognize the poles and, in particular, distinguish them from the goals
Turn as long as we could not detect three poles
While doing this, measure the time between the occurrence of two poles and calculate the angle between them from this
As soon as we have found a third pole, calculate the robot's position according to the algorithm described further down this article (see the loop sketch below)
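The following sketch shows how these steps could fit together in one loop. The helper routines are hypothetical stand-ins for the project's actual code; calcPos() is the function shown in the Code Review section below:

#include <opencv2/opencv.hpp>
#include <cmath>

// Hypothetical stand-ins for the project's routines.
cv::Mat grabFrame();                     // image from the on-board camera
bool detectPole(const cv::Mat& frame);   // backprojection + contour fitting
double currentTimeSeconds();
void keepTurning();                      // rotate at constant speed
void calcPos(double alpha, double beta, int* plI, int* pbI);

const double ROT_SPEED = 15.0 * M_PI / 180.0;   // rotation speed in rad/s

void localizationLoop()
{
    int polesSeen = 0;
    double angles[2] = {0.0, 0.0};
    double lastPoleTime = 0.0;

    while (true) {
        keepTurning();
        // (the real code must also avoid counting the same pole twice
        // across consecutive frames)
        if (!detectPole(grabFrame()))
            continue;

        double now = currentTimeSeconds();
        if (polesSeen >= 1)   // angle swept since the previous pole
            angles[polesSeen - 1] = ROT_SPEED * (now - lastPoleTime);
        lastPoleTime = now;

        if (++polesSeen == 3) {   // three poles seen: triangulate
            int lI, bI;
            calcPos(angles[0], angles[1], &lI, &bI);
            break;
        }
    }
}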

Version 2 (Improved)

In an endless loop, just do as above but:

Turn around until we have found 5 poles.

Thus we have performed a full turnaround and know that the sum of all measured angles should be 360 degrees.
Now we can use the difference between the measured and the expected value to auto-scale the angles.
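A minimal sketch of this auto-scaling step (assuming the four angles between the five sightings are collected in a vector, in degrees):

#include <numeric>
#include <vector>

// After a full turn the measured angles should sum to 360 degrees;
// scale them so that they actually do.
void autoScaleAngles(std::vector<double>& angles)
{
    double sum = std::accumulate(angles.begin(), angles.end(), 0.0);
    double scale = 360.0 / sum;
    for (double& a : angles)
        a *= scale;
}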

Besides:

Keep a list of the four most recently found poles

Thus, after having performed a localization, we don't have to wait for another five poles but only one until we can
perform a localization again. This way, after the first full turnaround, we can do four localizations per turn.

Keep a list of the most recently calculated robot positions

For each new localization, the resulting position is added to the list and an average position of all the positions in
the list is shown on the soccer field.
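A sketch of such a rolling average (the window size of four matches the four pole combinations; the names are illustrative):

#include <cstddef>
#include <deque>

struct Position { double x, y; };

std::deque<Position> recentPositions;

// Add the newest fix and return the average over the last few fixes.
Position addAndAverage(Position latest, std::size_t window = 4)
{
    recentPositions.push_back(latest);
    if (recentPositions.size() > window)
        recentPositions.pop_front();

    Position avg{0.0, 0.0};
    for (const Position& p : recentPositions) {
        avg.x += p.x;
        avg.y += p.y;
    }
    avg.x /= recentPositions.size();
    avg.y /= recentPositions.size();
    return avg;
}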

With all these improvements, the accuracy of the robot localization has improved greatly and is now around 30 to 40 cm
instead of 1 to 1.5 m.

Mathematics

[Figure: Triangles and quads used for triangulation]
[Figure: Triangulation of the robot position]

Calculating the robot position is basically done by making use of the sum of angles in triangles and quads as well as
some sine and cosine laws.

The second drawing on the right shows the one and only quad we are referring to throughout our calculation (blue
line) as well as the two main triangles we are making use of (painted in green and orange). The first drawing is a
detailed drawing with all angles and vertices named exactly as they are named in the program code.

Before we start calculating, there are a few assumptions we can make:

We know $l$ and $b$, which are the length and the width of the soccer field.
We know that the angle between those two sides, at the middle pole, is $90^\circ$.
Angles $\alpha$ and $\beta$ are also given, as they are measured by the robot as it constantly turns and recognizes the poles.

1. Calculate one of the unknown angles in one of the two main triangles

The law of sines for arbitrary triangles leads to

$\frac{l}{\sin \alpha} = \frac{a}{\sin \gamma_{II}}$ (1)

and

$\frac{b}{\sin \beta} = \frac{a}{\sin \epsilon_{II}}$ (2).

For convenience we define

$y = \frac{l}{\sin \alpha}$ and $z = \frac{b}{\sin \beta}$.

With $l$ and $b$ being the length and width of the soccer field and $\alpha$ and $\beta$ being the two angles between the three
poles measured by the robot, $y$ and $z$ can be considered given.
Furthermore, given the sum of angles in a quad, we find for the blue quad mentioned above that

$\gamma_{II} + \epsilon_{II} = 360^\circ - 90^\circ - \alpha - \beta$

where $\alpha$ and $\beta$ again are the angles between the three poles as stated above.
Therefore, we now have a direct relation between $\gamma_{II}$ and $\epsilon_{II}$.
Again, for convenience let

$x = 360^\circ - 90^\circ - \alpha - \beta$.

Now we find that

$\epsilon_{II} = x - \gamma_{II}$ (3).

Now isolating $a$ in both equations (1) and (2), then using (3) to replace $\epsilon_{II}$ in (2), we get

$a = y \sin \gamma_{II}$

and

$a = z \sin (x - \gamma_{II})$

which states that

$y \sin \gamma_{II} = z \sin (x - \gamma_{II})$.

After a few basic transformations and applying the identity

$\sin (x - \gamma_{II}) = \sin x \cos \gamma_{II} - \cos x \sin \gamma_{II}$

we find that

$\tan \gamma_{II} = \frac{\sin x}{\frac{y}{z} + \cos x}$.

Now we have

$\gamma_{II} = \arctan \left( \frac{\sin x}{\frac{y}{z} + \cos x} \right)$

and are able to calculate $\gamma_{II}$.

2. Calculate the one remaining angle in the triangle containing $\gamma_{II}$

This is quite easy. Given the sum of all angles in a triangle as $180^\circ$, we have

$\delta = 180^\circ - \alpha - \gamma_{II}$.

3. Calculate the diagonal $a$, which splits the quad formed by $l$, $b$ and the two lines of sight into the two main triangles

For the triangle containing $\alpha$ and $\gamma_{II}$, the law of sines states that

$\frac{a}{\sin \gamma_{II}} = \frac{l}{\sin \alpha}$

and we find that

$a = \frac{l}{\sin \alpha} \sin \gamma_{II}$.

4. Calculate $l_I$ and $b_I$, which are the robot's offsets from the center pole, i.e. the robot's position

Both of these sides form a right triangle together with the diagonal $a$ as its hypotenuse. For this triangle, the definitions of sine and cosine state that

$\cos \delta = \frac{l_I}{a}$ and $\sin \delta = \frac{b_I}{a}$

and we now have

$l_I = a \cos \delta$

and

$b_I = a \sin \delta$

which is our robot's position relative to the central pole!

Code Review
Calculation of the robot's offset to the second of three poles given the two angles between the three poles:

void calcPos(double alpha, double beta, int * plI, int * pbI)
{
    // l and b are the field's length and width (globals);
    // RAD() converts degrees to radians.

    // y and z as defined above, equations (1) and (2)
    double y = l / sin(alpha);
    double z = b / sin(beta);

    // Sum of angles in the quad: x = gammaII + epsilonII
    double x = RAD(360.0) - RAD(90.0) - alpha - beta;

    // Unknown angle at the first pole, via the arctan formula above
    double gammaII = atan(sin(x) / (y / z + cos(x)));

    // Remaining quad angle at the third pole
    // (computed for completeness; not needed for the result)
    double epsilonII = RAD(360.0) - RAD(90.0) - alpha - beta - gammaII;

    // Third angle of the first main triangle
    double delta = RAD(180.0) - alpha - gammaII;

    // Diagonal: distance from the robot to the center pole
    double a = l / sin(alpha) * sin(gammaII);

    // Decompose the diagonal into the offsets along the two field edges
    *plI = int(cos(delta) * a);
    *pbI = int(sin(delta) * a);
}
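A small usage sketch (the field dimensions and the measured angles are made-up values; RAD() is assumed to convert degrees to radians, matching its use above):

#include <cmath>
#include <cstdio>

#define RAD(deg) ((deg) * M_PI / 180.0)

double l = 200.0;   // field length in cm (made-up value)
double b = 150.0;   // field width in cm (made-up value)

void calcPos(double alpha, double beta, int * plI, int * pbI);  // as above

int main()
{
    int lI, bI;
    calcPos(RAD(90.0), RAD(90.0), &lI, &bI);   // made-up measured angles
    std::printf("offset from center pole: %d cm, %d cm\n", lI, bI);
    // For these values the result is approximately (72, 96).
    return 0;
}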

Bugs
Inaccuracy due to a small logical mistake which is to be corrected soon ;-)

Unfortunately, the inaccuracy was not related to this logical mistake and therefore was not solved by its
correction. Nevertheless, averaging more than just one position as well as auto-scaling the angles has improved
accuracy a lot.

Every four measurements, there is one measurement which gives a completely wrong position. It seems that this
happens every time the measurement is based on the three most distant poles, but this bug still needs to be
identified exactly. Anyway, there's a small chance that it is related to the bug mentioned above and thus will be
solved automatically along with that one.

Fortunately, the latter turned out to be the case, and correcting the logical mistake mentioned above has completely
solved this problem.
Due to a lack of time (a term abroad lies right ahead), the code is spaghetti code, and the user
interface as well as the thread synchronization are a mess. All of this should be cleaned up some day.

Improvements
Recent Improvements

Auto scale angle values

Perform a full turn before doing the calculation. Thus, by summing up all the measured angles, you can
determine how close to 360 degrees you are and scale all angles by a certain factor so that they sum up to exactly 360
degrees. This makes the angles much more accurate, as they no longer depend on the actual motor speed mapping
(see the sketch in the Version 2 section above).

Calculate robot position as the average position of more than just one measurement

Currently the robot position is calculated by triangulating three poles. As there are not just three, but four poles on the
field, the robot starts with a different pole for each triangulation. This provides four different combinations of poles for
triangulation. Nevertheless, the current implementation doesn't make any use of the results of triangulations done
before the current one. Averaging the position from the last four triangulations (for the four different combinations of
poles) should improve accuracy again.

Ideas for further Improvements

Use real motor speed values for calculation of angles

Currently, angles between poles are calculated by multiplying the set rotation speed with the time elapsed
between the currently detected pole and the previously detected pole. Querying the robot for the actual motor speed values
and using those to calculate the angle, instead of using the set values, might improve accuracy.


Tweak image processing and object recognition

The camera could use some calibration. Besides, object recognition leaves quite some room for improvement. Pole
recognition could be adjusted by checking the width/height ratio of the rectangles making up a pole, to minimize the
chance of other objects (such as humans with colored clothing on the field) being detected as a pole. A check for
minimum size is already implemented to get rid of very small objects detected by the camera. A check for maximum size
might bring some improvement.
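A sketch of such a ratio check (a pole segment is roughly one square cube face, so its bounding box should be about as wide as it is tall; the thresholds are made-up values):

#include <opencv2/opencv.hpp>

bool looksLikePoleSegment(const cv::Rect& box)
{
    // Reject boxes that are much wider than tall or vice versa.
    double ratio = static_cast<double>(box.width) / box.height;
    return ratio > 0.5 && ratio < 2.0;
}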


Make use of other objects on the field

In addition to just triangulating the four possible combinations of poles, one could also make use of the two goals and
maybe the lines on the field to acquire more position values. These can be used for averaging as well, but also to
detect and sort out extremely inaccurate or wrong values.
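A sketch of such an outlier rejection (the distance threshold is a made-up value):

#include <cmath>
#include <vector>

struct Fix { double x, y; };

// Drop every fix that lies too far from the average of all fixes.
std::vector<Fix> rejectOutliers(const std::vector<Fix>& fixes,
                                double maxDistCm = 50.0)
{
    if (fixes.empty())
        return {};

    double mx = 0.0, my = 0.0;
    for (const Fix& f : fixes) { mx += f.x; my += f.y; }
    mx /= fixes.size();
    my /= fixes.size();

    std::vector<Fix> kept;
    for (const Fix& f : fixes)
        if (std::hypot(f.x - mx, f.y - my) <= maxDistCm)
            kept.push_back(f);
    return kept;
}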
Make it faster

CPU load is far from its maximum, which means image processing and object detection should be able to keep up
with a faster rotation speed of the robot. If you're not the cosy kind of person, you might be happy with the robot turning
faster than the current 15 deg/s, especially as the robot can go a lot faster than that.

Links
Personal homepage with information, links and a video of the robot running the localization program [1]
(http://www.juergentreml.de/joomla/index.php?option=com_content&task=view&id=16&
Itemid=37#Screenshots)

Retrieved from "http://old.openrobotino.org/index.php/TUM:_Roboterpraktikum_st2006_-_Assignment_5_group_10"

