
Development of autonomous manoeuvres in a quadcopter

Jara Forcadell Ortiz


Marc Sureda Codina

Industrial engineering, Universitat de Girona

Mentor: Yvan Petillot


Heriot-Watt University

February 2013


Table of Contents
1. Introduction ........................................................................................................................................ 4
1.1. Background and context ............................................................................................................................ 4
1.2. Purpose ............................................................................................................................................................. 4
1.3. Specifications and scope ............................................................................................................................ 5
2. Hardware ............................................................................................................................................. 6
2.1. The quadcopter: Parrot AR. Drone 2.0 ................................................................................................ 6
2.2. Joypad .............................................................................................................................................................. 10
2.3. Network layout ............................................................................................................................................ 10
3. External software ............................................................................................................................ 12
3.1. Ubuntu ............................................................................................................................................................. 12
3.2. Robot Operating System (ROS) ............................................................................................................ 13
3.2.1. Basic tools and concepts ................................................................................................................. 13
3.2.2. ROS build tool: CMake ...................................................................................................................... 15
3.2.3. Types of messages ............................................................................................................................. 16
3.3. Ardrone driver ............................................................................................................................................. 18
3.3.1. Introduction ......................................................................................................................................... 18
3.3.2. Structure ................................................................................................................................................ 18
3.3.3. Custom messages ............................................................................................................................... 20
4. Custom software structure .......................................................................................................... 22
4.1. Introduction .................................................................................................................................................. 22
4.2. Joypad .............................................................................................................................................................. 22
4.3. Spooler ............................................................................................................................................................ 24
4.4. First controller ............................................................................................................................................. 26
5. Position estimation ......................................................................................................................... 28
5.1. Position estimation based on image ................................................................................................... 28
5.1.1. Position estimation based on features ...................................................................................... 28
5.1.2. Position estimation based on phase correlation .................................................................. 36
5.2. Position estimation based on odometry ........................................................................................... 39



6. Visual servoing ................................................................................................................................. 40
6.1. The controller node: landing_control ................................................................................................ 43
6.2. The target detection node: tag_detection_server ......................................................................... 44
6.3. The target tracking node: tracker_server ........................................................................................ 50
6.3.1. Visual Servoing Platform (ViSP) .................................................................................................. 50
6.3.2. Ellipse tracker ...................................................................................................................................... 50
6.3.3. Dot tracker ............................................................................................................................................ 52
6.3.4. Calibration of the bottom camera ............................................................................................... 55
7. Conclusions ....................................................................................................................................... 57
8. References ......................................................................................................................................... 59
ANNEX A: CODES ...................................................................................................................................... 61
A.1 Joystick node ..................................................................................................................................................... 61
A.1.1 joystick.cpp ............................................................................................................................................... 61
A.1.2 joystick.h .................................................................................................................................................... 65
A.2 Spooler node ..................................................................................................................................................... 67
A.2.1 spooler.cpp ................................................................................................................................................ 67
A.2.2 spooler.h ..................................................................................................................................................... 69
A.3 First controller ................................................................................................................................................. 71
A.3.1 first_controller.cpp ................................................................................................................................ 71
A.3.2 first_controller.h ..................................................................................................................................... 72
A.4 Pose control based on features (ptam) ................................................................................................. 73
A.4.1 pose_control_ptam.cpp ........................................................................................................................ 73
A.4.2 pose_control_ptam ................................................................................................................................. 79
A.5.Pose control based on phase correlation .............................................................................................. 81
A.5.1 phase_correlation_two_images.cpp ................................................................................................ 81
A.5.2 phase_correlation_camera.cpp ......................................................................................................... 82
A.5.3 phase_correlation_camera.h .............................................................................................................. 85
A.6 Pose control based on odometry .............................................................................................................. 86
A.6.1 pose_control_odom.cpp ....................................................................................................................... 86
A.6.2. pose_control_odom.h ........................................................................................................................... 88
A.7 Visual servoing node ..................................................................................................................................... 90
A.7.1 landing_control.cpp ............................................................................................................................... 90
A.7.2 landing_control.h .................................................................................................................................... 94
A.7.3 tag_detection_server.cpp ..................................................................................................................... 95



A.7.4 target_detection_server.h ................................................................................................................. 100
A.7.5 tracker_server.cpp .............................................................................................................................. 101
A.7.6 tracker_server.h ................................................................................................................................... 104
A.8 Ardrone messages ....................................................................................................................................... 106
A.8.1 Pose.msg .................................................................................................................................................. 106
A.8.2 Priority.msg ........................................................................................................................................... 106
A.8.3 Vector3.msg ........................................................................................................................................... 106
A.8.4 Vel.msg ..................................................................................................................................................... 106


1. Introduction
1.1. Background and context
Robotics is a branch of technology that has not stopped growing in recent years. Robots have become an important tool for human beings, not only in the military domain but also in civilian environments. Some of the tasks assigned to robots and remotely operated vehicles are the observation and exploration of environments that can be dangerous, expensive to access or simply inaccessible to humans.
One kind of robot or remotely commanded vehicle whose use and study have increased are flying vehicles, in particular small flying vehicles (miniature aerial vehicles, or MAVs). Quadcopters are one type of these flying vehicles. A quadcopter (also called a quadrotor or quadrotor helicopter) is a multicopter propelled by four rotors. Quadcopters have almost all the advantages (in terms of flight) of helicopters, such as staying still in one position while flying, moving and turning fast, moving in any direction without turning, turning without moving, and taking off and landing vertically. Quadcopters have become very popular in unmanned aerial vehicle (UAV) research in robotics and computer vision.
Researchers frequently choose quadcopters because they can perform tasks accurately and efficiently, and it is quite easy to implement tasks and run trials with them. In addition, there is a wide range of prices for quadcopters and for the sensors, motors and cameras needed to build them, and they can be used for many purposes and fitted with many different sensors, cameras, etc. Much research has been done with quadcopters, such as autonomous navigation, autonomous surveillance, mapping, searching and detecting, following, collaborative manoeuvres, lifting, and landing on a target, among others.
An example of this research is the sFly project, which ran from 2009 until 2011 and aimed at the development of several small and safe helicopters that can fly autonomously in city-like environments and be used to assist humans in tasks like rescue and monitoring. Another example is the Flying Machine Arena (ETH Zurich), a space built for flying machines, especially quadcopters.
This project has been developed in the Ocean Systems Laboratory (OSL) of Heriot-Watt University, Edinburgh (Scotland), a research centre specialized in autonomous systems, sensor modelling and processing, and underwater acoustic system theory and design. To date the OSL has done projects with Autonomous Underwater Vehicles (AUVs) but never with an Unmanned Aerial Vehicle (UAV).
For the interaction with its robots the OSL uses the Robot Operating System (ROS). This platform provides libraries and tools to help software developers create robot applications. ROS drivers for the UAV used in this project have already been developed.
1.2. Purpose
The purpose of this project is to use ROS to interact with an existing UAV and to develop applications in C++ in order to command it. The goals of this project are to develop applications, such as a simple controller, to get the UAV first to follow simple commands, then to estimate its position using several methods, and finally to detect a target and perform an autonomous landing on it.



1.3. Specifications and scope
All the programs developed are built upon an already existing driver for the robot; therefore the applications made in this project have to adapt to that driver. The scope of this project is to get the quadcopter to perform some complex manoeuvres autonomously, after developing and testing them on a real UAV.
The specifications for this project are:
The manoeuvres implemented have to be autonomous: the robot takes its own decisions based on the information received from its sensors and the criteria established in the code.
External libraries and existing packages will be used, but most of the packages in the project have been created, and their code written, by the authors of this project.


2. Hardware
The devices used in the present project are a quadcopter and a joypad coupled with a ground-based laptop. The quadcopter is the main piece of hardware used in the project, and the joypad is used to control it remotely when needed. All the programs run on the laptop, which sends all the commands to the quadcopter.
2.1. The quadcopter: Parrot AR. Drone 2.0

Fig. 1. AR. Drone 2.0

The quadcopter used is the Parrot AR. Drone 2.0 (Fig. 1), a ready-to-fly quadcopter suitable for operation indoors or outdoors. The Parrot AR. Drone 2.0 is the second version of a quadcopter sold by the company Parrot as a sophisticated flying toy for ages 14 and up. Besides the quadcopter, the company also sells multimedia devices, such as headphones or speakers, and hands-free kits for cars. This quadcopter can fly indoors or outdoors (without much wind, up to about 15 mph). It is designed to be controlled over Wi-Fi using an application for smartphones or tablets (Fig. 2).

Fig. 2. AR. Drone 2.0 commanded with a smartphone

This quadcopter is used in several universities and research institutions for research projects in fields such as robotics, artificial intelligence and computer vision. It has been the quadcopter used in this project because it is not expensive (306.39 with a spare battery included) and it is very easy to use, since drivers developed by universities make it possible to connect to the drone directly and send it, for example, velocity commands.
Although this quadcopter is used in research, it is not the most suitable platform, because it is a toy: its motors are not very powerful and it has not been designed to lift extra weight.



It features a front-facing high-definition camera and a low-definition camera that points straight down. The forward-looking high-definition camera is an HD camera recording 720p at 30 fps, with a wide-angle lens (92° diagonal fisheye) and H.264 baseline-profile encoding. The drone also has a 3-axis gyroscope, a 3-axis accelerometer, a 3-axis magnetometer, a pressure sensor, an ultrasound sensor for ground altitude measurement, a lithium battery and a charger. It is controlled through its four rotors by adjusting the speed of the individual motors. All the motors contribute to lifting and keeping the quadcopter in the air, but two of them rotate clockwise (front left and rear right rotors) and the other two counterclockwise (front right and rear left rotors) to cancel out their respective torques. To move the drone forwards it is only necessary to increase the velocity of the two rear motors and decrease the velocity of the two front ones; the turning movements are achieved by increasing the velocity of the two motors that rotate clockwise and decreasing the velocity of the other two, or vice versa. The dimensions of the quadcopter (Fig. 3) are 517 mm × 517 mm with the outdoor hull, and 451 mm × 451 mm with the indoor hull.

Fig. 3. AR. Drone 2.0 sizes

Fig. 4. Altitude sensor test

A test of the altitude sensor of the new drone was carried out to check its proper operation (Fig. 4). The conclusion is that it is quite accurate when the drone is still, but the measurements are somewhat off when it rises or descends rapidly.



Motors
The quadcopter has four brushless inrunner motors. Each motor has a power rating of 15 W and rotates at 28,000 rpm when hovering, which corresponds to 3,300 rpm at the propeller when driven through the AR. Drone's reduction gears. The motors' range begins at 10,350 rpm and goes up to a maximum speed of 41,400 rpm. Each motor is attached to its own electronic controller, which has also been specifically developed for the AR. Drone. An 8-bit low-power microcontroller and a 10-bit ADC control the speed of each motor.
Main Board
The AR. Drone main board embeds the vertical camera, the Wi-Fi chip and the Parrot P6 central CPU. The P6 chip includes a 468 MIPS ARM926EJ RISC CPU and a video accelerator. The main board runs Linux along with the AR. Drone software, and includes a 256 Mb, 200 MHz, 32-bit MDDR RAM and 32 Mb of NAND flash. It also includes a Wi-Fi chipset from Atheros and a USB port for direct flashing and future extensions.
Navigation Board
The navigation board includes the sensors and a 40 MIPS microcontroller with a fast 12-bit ADC. It has one ultrasonic transceiver and one ultrasonic receiver to measure the height of the drone up to 6 metres. The 3-axis accelerometer is positioned at the centre of gravity of the AR. Drone; it is used in a +/- 2 g range and digitized by an on-chip 10-bit ADC, and its data are sent to the microcontroller. A 2-axis MEMS gyroscope and a precision piezoelectric gyroscope are used for yaw measurement and heading control.
Structure
It has two different hulls made of tough, flexible expanded polypropylene foam with a rigid, impact-resistant, high-gloss outer shell. One hull is for indoor flights, with foam around the blades to protect them, and the other is for outdoor flights, without the blade protection (Fig. 5). The hulls are attached to the AR. Drone using two magnets, one positioned on the hull and another located on the body. The body is made of tough expanded polypropylene. It carries the battery holder, which is mounted on the main body of the drone with foam that absorbs vibrations from the motors. The battery is positioned slightly towards the rear of the device to keep the AR. Drone's centre of gravity correct. The total weight is 380 g (with the outdoor hull) or 420 g (with the indoor hull) thanks to its carbon fibre tube structure. Spare parts for the quadcopter are available on the Internet.

Fig. 5. AR. Drone with the outdoor hull



Battery
The official AR. Drone 2.0 high-grade lithium polymer battery has 3 cells, a capacity of 1000 mAh at 11.1 V and a discharge rating of 10C (Fig. 6). It includes a protection circuit module that protects the battery from overcharging, over-discharge and short circuits. The battery has a rigid casing and complies with the UL2054 safety standard. The AR. Drone 2.0 Li-Po has two connections: a discharge plug that provides current to the drone, and an integral connector for balanced charging while attached directly to the charger. For this project two batteries have been used, because a battery lasts only about 10 to 15 minutes while the quadcopter is flying.

Fig. 6. Battery of the AR. Drone 2.0

Most of the parts of the drone can be replaced with spare parts that can be bought from Parrot itself.
A modification was made to the drone when the project was already well advanced, because it offered the possibility of getting better results. It basically consists of turning the front-looking camera into a down-looking camera: since this is a flying robot, a good down-looking camera with a wide-angle lens (92° diagonal fisheye) is much more useful than a good front camera, and the default bottom camera is of very poor quality. The foam at the front of the drone was removed in order to turn the camera. The changes can be seen in Fig. 7. This change was made at the end of January 2013, during the project development.

Fig. 7. Modification of the front HD camera position



This change was made because of the characteristics of the cameras. To perform the landing manoeuvres it is necessary to use a down-looking camera to detect and track the target and, as said before, the default bottom camera is a low-definition camera. The results obtained with that camera were not very good, mainly because most of the time the target was lost during tracking. So it was decided to change the angle of the front camera and make it look downwards (Fig. 7). This was done because of its high definition, but mainly because it has a wide-angle lens, so the field of view is tall and wide. The results obtained with the turned-down camera are much better, because the target is not lost as often as with the default bottom camera.
2.2. Joypad
The joypad used in this project is the Logitech Cordless Precision Controller for PlayStation 3 (Fig. 8). It is a wireless device, so it uses a USB receiver for data transmission. The controller requires two AA batteries that provide close to 50 hours of usage. It features a four-direction D-pad, four action buttons (triangle, circle, cross and square), four shoulder buttons (R1, R2, L1 and L2), four central buttons (start, select, D/mode and PS) and two clickable analog sticks.

Fig. 8. Play Station 3 joypad

2.3. Network layout


To run the system we use a ground-based laptop connected to the quadcopter via Wi-Fi. The drone has its own on-board computer for stabilization, motion control and acquisition of the camera images. It cannot process both camera streams at the same time: it has to work either with the bottom camera or with the front camera, although the active camera can be changed during a flight. Each AR. Drone 2.0 contains its own wireless access point, to which the smartphones, tablets or laptops that control it connect. The drone acts as a wireless server and assigns itself (via its DHCP server) the fixed IP address 192.168.1.1. All the nodes (programs) run on the laptop, as does the AR. Drone driver software that converts the programs' messages into the drone's messages (Fig. 9).


Fig. 9. Operational structure of the system (the nodes running on the laptop exchange velocity commands with the AR. Drone driver)

All processing of the data acquired from the drone's sensors and cameras, and all the programs that control it, run on the ground-based laptop, and only velocity commands are sent to the drone via Wi-Fi. So almost all the computation takes place on the laptop, and there is no on-board computation on the drone except for stabilization and image acquisition. The joypad is also connected to the laptop, via its USB receiver.
There were problems with the Wi-Fi communication between the laptop and the drone during some trials, because its default Wi-Fi channel is channel 6 and many other wireless networks were using it. This was a problem because the project is carried out in a laboratory at Heriot-Watt University, Edinburgh, where there are several active wireless networks. The solution was to change the drone's Wi-Fi network to channel 1, where there are fewer networks than on channel 6; since then the communications have worked properly.


3. External software
3.1. Ubuntu
The operating system used for the development of this project is Ubuntu. Ubuntu is a Debian-based Linux distribution and is free and open-source software. The reason for using this operating system is that it supports the different software used in this project, which will be explained below. Ubuntu, just like commercial operating systems such as Windows or Mac OS, has a graphical interface. Among other things it has a desktop, folder organization (Fig. 10), office applications, a web browser and a terminal (Fig. 12). There are many applications available for free download and installation, such as image editors, games, drawing tools, sound and video players and more.

Fig. 10. Ubuntu desktop and folder system

Fig. 11. Ubuntu software centre

In this project, as will be explained in the next sections, the terminal has been an essential tool (Fig. 12).


Fig. 12. Ubuntu terminals

3.2. Robot Operating System (ROS)


The system used to interact with the quadcopter is the Robot Operating System (ROS) (Fig. 13). It provides hardware abstraction, device drivers, libraries, visualizers, message passing and package management. It does not have a graphical interface, so it has to be operated from a terminal using written commands. ROS is licensed under an open-source BSD license. To understand the next chapters it is necessary to introduce the basic tools and concepts of ROS.

Fig. 13. Robot Operating System (ROS)

3.2.1. Basic tools and concepts


Node: an executable that uses ROS to communicate with other nodes.
Launch: a .launch file makes it possible to start several nodes at the same time using only one terminal and one command.
Package: the lowest level of the ROS software organization. It can contain executables, tools or libraries.
Stack: a collection of packages.
Roscpp: the ROS client library for C++. It enables the programmer to interface with all the ROS tools.
Message: a piece of information that can be exchanged with other nodes. There are different types of messages depending on the data contained in the message, and anyone can develop a new type customized for their own application.



Roscore (Fig. 14): it is at the same time the master, the rosout node and the parameter server. When using ROS it has to be running in a terminal.

Fig. 14. Roscore running on a terminal

Topic: nodes can publish messages to a topic as well as subscribe to a topic to receive messages. It is the medium of communication between nodes. A clear example would be the constant feedback on the position of the robot: the node that obtains this position would be the publisher and the node containing the PID the subscriber.
Parameter: parameters can be set or accessed from a node, or from a terminal by a user. They are a useful way to pass occasional information. One example could be the gain of a PID implemented in a node: if it takes its value from a parameter, the user can change this value while the program is running and it will be updated.
Service: when a node calls a service it makes a request on another node and then receives a response. As with messages, there are different types of services depending on the type of the data.
Rxgraph (Fig. 15): a tool that creates a graph showing all the nodes that are currently running and the topics that join them. An arrow, coming out of the publisher and into the subscriber, represents the topic. Each node can be a publisher and a subscriber to different topics at the same time.

Fig. 15. Rxgraph showing ROS nodes and topics

To summarize, in order to communicate, nodes can send messages through topics, call services, or set and get parameters.
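As an illustration of these concepts, the following is a minimal sketch of a roscpp node that reads a parameter, subscribes to one topic and publishes to another. The topic and parameter names are only examples and are not part of the project code.

#include <ros/ros.h>
#include <std_msgs/Float64.h>

ros::Publisher pub;
double gain = 1.0;   // value taken from the parameter server

// Callback executed every time a message arrives on the input topic
void inputCallback(const std_msgs::Float64::ConstPtr& msg)
{
  std_msgs::Float64 out;
  out.data = gain * msg->data;   // trivial processing
  pub.publish(out);              // publish the result on the output topic
}

int main(int argc, char** argv)
{
  ros::init(argc, argv, "example_node");   // register the node with the master
  ros::NodeHandle nh;

  nh.param("gain", gain, 1.0);             // read a parameter, default 1.0
  pub = nh.advertise<std_msgs::Float64>("output_topic", 1);
  ros::Subscriber sub = nh.subscribe("input_topic", 1, inputCallback);

  ros::spin();                             // process callbacks until shutdown
  return 0;
}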



3.2.2. ROS build tool: CMake
The tool ROS uses to compile a package is CMake, an open-source build system that controls the software compilation using simple configuration files. These files are CMakeLists.txt and manifest.xml, and every package has to contain them in order to be compiled. The CMakeLists.txt file (Fig. 16) is where it is specified which .cpp files have to be compiled and whether there are messages or services to be built in the package. Each file has to be in the proper folder: the .cpp files that create nodes go in src, the message definitions in msg and the service definitions in srv.

Fig. 16. CMakeLists.txt
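As an illustration, a minimal rosbuild-style CMakeLists.txt for a package with one node, custom messages and services could look like the following sketch (the package layout and file names are placeholders):

cmake_minimum_required(VERSION 2.4.6)
include($ENV{ROS_ROOT}/core/rosbuild/rosbuild.cmake)

rosbuild_init()

# Build the custom message and service definitions found in msg/ and srv/
rosbuild_genmsg()
rosbuild_gensrv()

# Compile src/example_node.cpp into an executable called example_node
rosbuild_add_executable(example_node src/example_node.cpp)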



In the manifest.xml file (Fig. 17) the dependencies of the package have to be declared, that is, the other packages with which this one communicates or from which it takes information.

Fig. 17. manifest.xml


3.2.3. Types of messages
There are many types of messages, but only the ones of interest to the project will be explained here. Messages, like nodes, have to be created in a ROS package. The name of a message type contains the name of the package it comes from followed by the name of the specific type. For instance, the package std_msgs contains different standard messages such as std_msgs/String and std_msgs/Bool. The type from this package that has been used in this project is std_msgs/Empty, which, as the name implies, does not contain any fields.
To work with image broadcasting the sensor_msgs package has been used. In particular, the sensor_msgs types used are sensor_msgs/Image and sensor_msgs/CameraInfo.
The sensor_msgs/Image contains the following fields:
Header header

uint32 seq

time stamp

string frame_id
uint32 height
uint32 width
string encoding
uint8 is_bigendian
uint32 step
uint8[] data



And the sensor_msgs/CameraInfo contains:
Header header

uint32 seq

time stamp

string frame_id
uint32 height
uint32 width
string distortion_model
float64[] D
float64[9] K
float64[9] R
float64[12] P
uint32 binning_x
uint32 binning_y
sensor_msgs/RegionOfInterest roi

uint32 x_offset
uint32 y_offset

uint32 height

uint32 width

bool do_rectify

As can be observed, it is possible to have a message type inside another one. This makes it simpler to design message types, because one can combine existing ones and add more fields if necessary. The package geometry_msgs provides messages for common geometric primitives such as points, vectors and poses.
The type geometry_msgs/Point32 is used to send a 3D point:
float32 x
float32 y
float32 z

geometry_msgs/Vector3 is used for a 3D vector:


float64 x
float64 y
float64 z

The type geometry_msgs/Twist can have different uses, such as sending a velocity command or a displacement, expressed in 3D coordinates with linear and angular fields:
geometry_msgs/Vector3 linear
float64 x
float64 y
float64 z



geometry_msgs/Vector3 angular

float64 x
float64 y
float64 z




To use a type it is necessary to have the package containing it installed and compiled, and inside the code there must be a header include for each type of message used.
For example:
#include <geometry_msgs/Twist.h>
For many applications it is necessary to create custom message types. Anyone can create as many types as needed. For the purposes of this project some new types were designed; they will be explained further on.
3.3. Ardrone driver
3.3.1. Introduction
A driver for the Parrot AR. Drone was already available on the ROS website when this project started. Since its features were suitable for the purpose of the project, it was decided to use it instead of developing a new one. The driver was developed in the Autonomy Lab of Simon Fraser University (Canada) by Mani Monajjemi. It is a ROS package called ardrone_autonomy, which contains the node ardrone_driver, the custom messages created for this node and the servers for the services offered by the driver.
3.3.2. Structure
The different topics that interact with this driver can be observed in Fig. 18. Those whose arrows point to ardrone_autonomy are the ones the node is subscribed to; this means the node is listening to the messages sent through these topics. To command the drone, a node has to publish to these topics.
In order to command the AR. Drone to take off, land or reset, an empty message has to be sent to the corresponding topic. The type of these messages has to be std_msgs::Empty. Once the quadcopter has taken off and is hovering, it can be controlled by sending velocity commands to the topic cmd_vel. The type of these messages has to be geometry_msgs::Twist.
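For example, a minimal sketch of commanding a take-off and then sending a constant forward velocity could look like the code below (the publishing rate and velocity value are arbitrary choices, not the ones used in the project):

#include <ros/ros.h>
#include <std_msgs/Empty.h>
#include <geometry_msgs/Twist.h>

int main(int argc, char** argv)
{
  ros::init(argc, argv, "takeoff_and_move");
  ros::NodeHandle nh;

  ros::Publisher takeoff_pub = nh.advertise<std_msgs::Empty>("ardrone/takeoff", 1);
  ros::Publisher vel_pub = nh.advertise<geometry_msgs::Twist>("cmd_vel", 1);

  ros::Duration(1.0).sleep();               // give the publishers time to connect
  takeoff_pub.publish(std_msgs::Empty());   // an empty message triggers the take-off

  geometry_msgs::Twist vel;
  vel.linear.x = 0.1;                       // move forwards slowly
  ros::Rate rate(10);                       // velocity commands are sent periodically
  while (ros::ok())
  {
    vel_pub.publish(vel);
    ros::spinOnce();
    rate.sleep();
  }
  return 0;
}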
On the other hand, the node publishes information on the topics pointed to by the arrows. The node can send messages through any of these topics, and any other node subscribed to a topic will be able to listen to them. If a node wants to control or command the AR. Drone, it is recommended that it listens to some of the topics the driver publishes, in order to know what the quadcopter is doing and which state it is in.
The navdata topic contains general information on the navigation of the quadcopter received from the different sensors. The fields relevant to this project are the battery percentage, the state of the drone, the orientation, the linear velocity and the linear acceleration on the three axes, and the time stamp of the message. The state of the drone can be: 0: Unknown, 1: Inited, 2: Landed, 3 and 7: Flying, 4: Hovering, 5: Test, 6: Taking off and 8: Landing. The type of this message is explained in the next section.
The topics /ardrone/bottom/image_raw and /ardrone/front/image_raw contain the images from the bottom and front camera respectively. The AR. Drone only allows receiving images from one camera at a time, so while one of these topics is broadcasting the other will not be sending anything. The topic /ardrone/image_raw contains the images of the camera that is broadcasting at that moment. The messages on these topics are of type sensor_msgs/Image. The linear acceleration, angular velocity and orientation from the navdata are also published to the topic /ardrone/imu. The message type is sensor_msgs/Imu and the units are metric.
Each camera also has a topic where the camera information, such as the dimensions of the image, is broadcast. The type of its messages is sensor_msgs/CameraInfo. Only one of the cameras is broadcasting at a time. To choose which camera is active there are two services (a sketch of calling them from a node is shown below):
/ardrone/togglecam does not require any parameter and switches from the current camera feed to the other one.
/ardrone/setcamchannel changes to the desired camera channel: a parameter of 0 selects the front camera and 1 the bottom camera.
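As an illustration, the two services could be called from a node as sketched below. The std_srvs/Empty type is standard; the type assumed here for setcamchannel (ardrone_autonomy::CamSelect, with a single channel field in the request) should be checked against the installed version of the driver.

#include <ros/ros.h>
#include <std_srvs/Empty.h>
#include <ardrone_autonomy/CamSelect.h>   // assumed service definition from the driver package

int main(int argc, char** argv)
{
  ros::init(argc, argv, "camera_switch_example");
  ros::NodeHandle nh;

  // Toggle between the two camera feeds (no arguments needed)
  std_srvs::Empty toggle_srv;
  ros::service::call("/ardrone/togglecam", toggle_srv);

  // Explicitly select the bottom camera (channel 1)
  ardrone_autonomy::CamSelect select_srv;
  select_srv.request.channel = 1;
  ros::service::call("/ardrone/setcamchannel", select_srv);

  return 0;
}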

Fig. 18. Topics to which the /ardrone_driver is publishing or subscribed




The update frequency of the navdata topic can be chosen using the parameters navdata_demo and realtime_navdata. The first determines the frequency of the drone's data transmission: 15 Hz when the parameter is set to 1 and 200 Hz when it is set to 0. The second parameter affects the driver's update frequency: if it is set to true, the driver publishes the received information instantly; otherwise the driver caches the most recent received data and publishes it at a fixed rate, configured by another parameter called looprate. The default configuration is realtime_navdata set to false and looprate set to 50.
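These parameters are typically set in a .launch file; a possible sketch, assuming the node and package names described in this chapter, is:

<launch>
  <node pkg="ardrone_autonomy" type="ardrone_driver" name="ardrone_driver" output="screen">
    <!-- 0 = full navdata at 200 Hz, 1 = reduced navdata at 15 Hz -->
    <param name="navdata_demo" value="0" />
    <!-- publish cached data at a fixed rate instead of instantly -->
    <param name="realtime_navdata" value="false" />
    <param name="looprate" value="50" />
  </node>
</launch>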
A clarification must be made in this chapter: some of the features explained here were not available at the beginning of this project. They were added in a major update of the package roughly halfway through the development period. This will be mentioned in the following chapters where those features would have been useful and were applied afterwards.
3.3.3. Custom messages
For the topic ardrone/navdata the developers of the driver created a specific message type called ardrone/Navdata; its fields are explained below.

header: ROS message header

batteryPercent: The remaining charge of the drone's battery (%)

state: The Drone's current state: 0: Unknown, 1: Inited, 2: Landed, 3 and 7: Flying, 4:
Hovering, 5: Test, 6: Taking off and 8: Landing.

rotX: Left/right tilt in degrees (rotation about the X axis)

rotY: Forward/backward tilt in degrees (rotation about the Y axis)

rotZ: Orientation in degrees (rotation about the Z axis)

magX, magY, magZ: Magnetometer readings (AR. Drone 2.0 Only) (TBA: Convention)

pressure: Pressure sensed by Drone's barometer (AR. Drone 2.0 Only) (TBA: Unit)

temp : Temperature sensed by Drone's sensor (AR. Drone 2.0 Only) (TBA: Unit)

wind_speed: Estimated wind speed (AR. Drone 2.0 Only) (TBA: Unit)

wind_angle: Estimated wind angle (AR. Drone 2.0 Only) (TBA: Unit)

wind_comp_angle: Estimated wind angle compensation (AR. Drone 2.0 Only) (TBA: Unit)

altd: Estimated altitude (mm)

vx, vy, vz: Linear velocity (mm/s) [TBA: Convention]

ax, ay, az: Linear acceleration (g) [TBA: Convention]

tm: Timestamp of the data, returned as the number of microseconds elapsed since the drone's boot-up.

All the other topics use standard message types.
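A node can read this information by subscribing to the navdata topic; a minimal sketch, assuming the message header is provided by the driver package as ardrone_autonomy/Navdata.h, is shown below.

#include <ros/ros.h>
#include <ardrone_autonomy/Navdata.h>   // message header generated by the driver package

void navdataCallback(const ardrone_autonomy::Navdata::ConstPtr& msg)
{
  // Print the remaining battery and the current state (e.g. 2 = Landed, 4 = Hovering)
  ROS_INFO("Battery: %.1f %%  State: %u", msg->batteryPercent, msg->state);
}

int main(int argc, char** argv)
{
  ros::init(argc, argv, "navdata_listener");
  ros::NodeHandle nh;
  ros::Subscriber sub = nh.subscribe("ardrone/navdata", 10, navdataCallback);
  ros::spin();
  return 0;
}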



Services
The ardrone_autonomy package contains different services. In this project the services ardrone/togglecam, which toggles the camera feed, and /ardrone/setcamchannel, which changes to the desired camera channel and expects one parameter (0 for the front camera, 1 for the bottom), have been used.
A service for executing flight animations can also be called: /ardrone/setflightanimation. It expects two parameters, type and duration (a sketch of calling it follows the list below). The duration has to be a uint16; if it is set to 0, the animation lasts its default duration. The type has to be a uint8 that determines which animation to perform from the following list:
1. ARDRONE_ANIM_PHI_M30_DEG
2. ARDRONE_ANIM_PHI_30_DEG
3. ARDRONE_ANIM_THETA_M30_DEG
4. ARDRONE_ANIM_THETA_30_DEG
5. ARDRONE_ANIM_THETA_20DEG_YAW_200DEG
6. ARDRONE_ANIM_THETA_20DEG_YAW_M200DEG
7. ARDRONE_ANIM_TURNAROUND
8. ARDRONE_ANIM_TURNAROUND_GODOWN
9. ARDRONE_ANIM_YAW_SHAKE
10. ARDRONE_ANIM_YAW_DANCE
11. ARDRONE_ANIM_PHI_DANCE
12. ARDRONE_ANIM_THETA_DANCE
13. ARDRONE_ANIM_VZ_DANCE
14. ARDRONE_ANIM_WAVE
15. ARDRONE_ANIM_PHI_THETA_MIXED
16. ARDRONE_ANIM_DOUBLE_PHI_THETA_MIXED
17. ARDRONE_ANIM_FLIP_AHEAD
18. ARDRONE_ANIM_FLIP_BEHIND
19. ARDRONE_ANIM_FLIP_LEFT
20. ARDRONE_ANIM_FLIP_RIGHT
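Calling this service from a node could look like the sketch below; the service type assumed here (ardrone_autonomy::FlightAnim, with request fields type and duration) should be checked against the installed driver.

#include <ros/ros.h>
#include <ardrone_autonomy/FlightAnim.h>   // assumed service definition from the driver

int main(int argc, char** argv)
{
  ros::init(argc, argv, "flight_anim_example");
  ros::NodeHandle nh;

  ros::ServiceClient client =
      nh.serviceClient<ardrone_autonomy::FlightAnim>("/ardrone/setflightanimation");

  ardrone_autonomy::FlightAnim srv;
  srv.request.type = 3;        // animation index from the list above
  srv.request.duration = 0;    // 0 = use the animation's default duration
  if (!client.call(srv))
    ROS_WARN("setflightanimation call failed");
  return 0;
}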

4. Custom software structure


4.1. Introduction

The general structure of the software is shown in Fig. 19. There is a node called spooler that decides which messages arrive at the AR. Drone driver. Publishing to this spooler node there is the joypad node, used to control the quadcopter manually, and the controller node, which can be any node designed to control the AR. Drone.
What has been developed in this project are the different controller nodes, all guided by the AR. Drone cameras and sensors. Over the next chapters these controllers will be explained in detail.

Fig. 19. General structure of the software (the joypad node and the controller nodes send velocity commands to the spooler, which forwards them to the AR. Drone driver; the driver exchanges velocity commands, navigation data and camera images with the AR. Drone motors, cameras and sensors)

4.2. Joypad
The aim of the joypad node is to be able to control the AR. Drone manually. This is especially useful when doing trials of any of the autonomous manoeuvres, in case it is necessary to regain control of the robot because the program is not working as planned or needs a small manual readjustment to keep working. This can help prevent injury to the users or viewers, or damage to the robot.
Existing code written to control another robot has been adapted to develop this node. This code (ANNEX A.1 Joystick node) was for a ground robot called TurtleBot and was developed by Josep Bosch from the Ocean Systems Lab.
All the topics published by this node are listened to by the spooler node, which is explained below. All the messages broadcast from the joypad node have the priority field set to true, so that its commands are always executed regardless of the commands coming from other nodes.


Fig. 20. Topics published by the ardrone_joy node

The commands that can be sent from the joypad are the following:

Velocity commands: forwards, backwards, to the left, to the right, up and down. The velocities range from 0 to 0.3 m/s, which is the AR. Drone 2.0 maximum velocity.

Take off

Land

Reset

Execute a flip to the left, to the right, backwards or forwards.

Custom and cmd-aux do not have a specific use and can be configured differently for each node. In this project they have only been used in the pose_estimation_ptam node, as will be explained further on in this report.

The functions of the buttons used are shown in Fig. 21.
Fig. 21. Functions of the joypad buttons (take off, land, reset, end take over, flips with R1/R2/L1/L2, move up/down, turn, move forwards/backwards, move right/left)
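As an illustration of how such a node is typically structured, the sketch below maps joystick input to the project's custom velocity message. It assumes the standard ROS joy package (sensor_msgs/Joy messages on the joy topic) and uses placeholder axis and button indices rather than the actual mapping of the project; the spooler topic names follow the description in section 4.4.

#include <ros/ros.h>
#include <sensor_msgs/Joy.h>
#include <std_msgs/Empty.h>
#include <ardrone_msgs/Vel.h>   // custom velocity message with a priority field

ros::Publisher vel_pub, takeoff_pub;

void joyCallback(const sensor_msgs::Joy::ConstPtr& joy)
{
  // Placeholder mapping: button 3 -> take off, left stick -> forward/sideways velocity
  if (joy->buttons[3])
    takeoff_pub.publish(std_msgs::Empty());

  ardrone_msgs::Vel vel;
  vel.linear.x = 0.3 * joy->axes[1];   // scale to the 0.3 m/s maximum
  vel.linear.y = 0.3 * joy->axes[0];
  vel.priority = true;                 // joypad commands always take precedence
  vel_pub.publish(vel);
}

int main(int argc, char** argv)
{
  ros::init(argc, argv, "ardrone_joy_example");
  ros::NodeHandle nh;
  vel_pub = nh.advertise<ardrone_msgs::Vel>("cmd_vel", 1);
  takeoff_pub = nh.advertise<std_msgs::Empty>("take_off", 1);
  ros::Subscriber sub = nh.subscribe("joy", 10, joyCallback);
  ros::spin();
  return 0;
}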



4.3. Spooler
The aim of the spooler node is to manage the messages sent to the AR. Drone 2.0 from the different nodes, in order to prevent the driver from receiving contradictory messages at the same time.
All the nodes that want to send commands to the quadcopter must send them to the spooler instead; this node evaluates the priority of the messages and decides which ones should be passed to the robot.

Fig. 22. AR. Drone Spooler topics



Functioning
By default the spooler only passes the commands from the joypad; only when it receives the 'end take over' command from the joypad does it start passing messages from a controller node. Once this happens and the robot is receiving the commands from the controller node, regaining manual control of the quadcopter only requires sending a command from the joypad node. As soon as the spooler receives any message from the joypad, it ceases to listen to the controller and is again in manual control mode until the 'end take over' command is received. The code (ANNEX A.2 Spooler node) checks the priority field of the messages, which is explained below.

Fig. 23. Topics to which the ardrone_spooler is publishing and subscribed to

Custom ardrone_msgs
In order to know where the messages come from, it was necessary to design new message types that carry a priority. These custom messages are defined in the ardrone_msgs package.

ardrone_msgs/Priority has only one field, called priority, containing a bool:

bool priority

ardrone_msgs/Vel has the same fields as a geometry_msgs/Twist plus the priority field, which is a bool:
ardrone_msgs/Vector3 linear

float64 x

float64 y

float64 z
ardrone_msgs/Vector3 angular

float64 x

float64 y

float64 z
bool priority

The priority field is set to true when the message comes from the joystick node, and to false if it comes from any other node. This way the spooler node checks this field to know where the message comes from.
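A minimal sketch of this priority check is shown below; the internal flag, the 'end take over' topic name and the output topic are invented for illustration.

#include <ros/ros.h>
#include <std_msgs/Empty.h>
#include <geometry_msgs/Twist.h>
#include <ardrone_msgs/Vel.h>

ros::Publisher driver_vel_pub;   // forwards velocity commands to the driver
bool manual_mode = true;         // true: only joypad (priority) commands are forwarded

// The "end take over" command from the joypad hands control to the controller node
void endTakeOverCallback(const std_msgs::Empty::ConstPtr&)
{
  manual_mode = false;
}

void velCallback(const ardrone_msgs::Vel::ConstPtr& msg)
{
  if (msg->priority)               // any joypad message returns the spooler to manual mode
    manual_mode = true;
  if (msg->priority != manual_mode)
    return;                        // drop commands from the currently inactive source

  geometry_msgs::Twist out;        // strip the priority field before it reaches the driver
  out.linear.x = msg->linear.x;
  out.linear.y = msg->linear.y;
  out.linear.z = msg->linear.z;
  out.angular.z = msg->angular.z;
  driver_vel_pub.publish(out);
}

int main(int argc, char** argv)
{
  ros::init(argc, argv, "ardrone_spooler_example");
  ros::NodeHandle nh;
  driver_vel_pub = nh.advertise<geometry_msgs::Twist>("ardrone/cmd_vel", 1);
  ros::Subscriber vel_sub = nh.subscribe("cmd_vel", 10, velCallback);
  ros::Subscriber eto_sub = nh.subscribe("end_take_over", 1, endTakeOverCallback);
  ros::spin();
  return 0;
}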



As explained previously, to create new message types it is necessary to create a package; in this case the package created was called ardrone_msgs. The definition of each message type has to be in a file named after the type, with a .msg extension, saved in a folder called msg. In this file the fields of the message must be written.

Fig. 25. Folder msg

Fig. 24. Example of code to generate a message

4.4. First controller


This first and very simple controller (ANNEX A.3 First controller) was developed to be able to try the spooler and joypad nodes. It was also useful as a first approach to commanding the AR. Drone and to the roscpp library.
This node listens to the navdata topic to know the drone's state, then publishes commands to the spooler topics cmd_vel, land and take_off. The messages sent from this node have the priority field set to false, so at any moment during the execution of the program the joystick node can take over the quadcopter.


Fig. 26. Controller node rxgraph



The controller node first commands the AR. Drone to take off; once it is hovering it makes it rotate for two seconds, then stop and wait, hovering, for two more seconds, and finally land. The code functions as a state machine switching between the states (Table 1): LANDED, TAKING_OFF, MOVING, STOPPING, WAITING, START_LANDING, LANDING and FINISHED. There are no branching paths, so each state simply executes its job and gives way to the next one on the list.

STATE: FUNCTION

LANDED: Makes sure the drone is in the landed state and sends the take_off message.

TAKING_OFF: Waits until the take-off is accomplished and the drone is hovering. In order to know that, it checks the state field of the Navdata.

MOVING: Moves forwards for two seconds. Sets a timer and, when the timer finishes, sets the next state.

STOPPING: Sets all the velocities to zero and changes to WAITING.

WAITING: Starts a timer, waits for two seconds until it finishes and sets the next state.

START_LANDING: Gives the command to land and sets the next state.

LANDING: This state does nothing; it just waits until the drone has landed, then it sets the state to FINISHED.

FINISHED: Last state. Just prints a sentence saying the mission has been accomplished successfully.

Table 1. Code states
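A simplified sketch of such a state machine, driven by the navdata state and a timer (some states are merged and the names of topics and headers are assumptions based on the previous chapters), could be:

#include <ros/ros.h>
#include <std_msgs/Empty.h>
#include <ardrone_msgs/Vel.h>
#include <ardrone_autonomy/Navdata.h>   // assumed header for the driver's Navdata message

enum State { LANDED, TAKING_OFF, MOVING, STOPPING, START_LANDING, FINISHED };
State state = LANDED;
unsigned int drone_state = 0;           // Navdata state: 2 = Landed, 4 = Hovering, ...

void navdataCallback(const ardrone_autonomy::Navdata::ConstPtr& msg)
{
  drone_state = msg->state;
}

int main(int argc, char** argv)
{
  ros::init(argc, argv, "first_controller_example");
  ros::NodeHandle nh;
  ros::Publisher takeoff_pub = nh.advertise<std_msgs::Empty>("take_off", 1);
  ros::Publisher land_pub = nh.advertise<std_msgs::Empty>("land", 1);
  ros::Publisher vel_pub = nh.advertise<ardrone_msgs::Vel>("cmd_vel", 1);
  ros::Subscriber sub = nh.subscribe("ardrone/navdata", 10, navdataCallback);

  ros::Time phase_start;
  ros::Rate rate(10);
  while (ros::ok() && state != FINISHED)
  {
    ros::spinOnce();
    ardrone_msgs::Vel vel;              // priority defaults to false: the joypad can take over
    switch (state)
    {
      case LANDED:
        takeoff_pub.publish(std_msgs::Empty());
        state = TAKING_OFF;
        break;
      case TAKING_OFF:                  // wait until the Navdata reports hovering
        if (drone_state == 4) { phase_start = ros::Time::now(); state = MOVING; }
        break;
      case MOVING:                      // move forwards for two seconds
        vel.linear.x = 0.1;
        vel_pub.publish(vel);
        if (ros::Time::now() - phase_start > ros::Duration(2.0)) state = STOPPING;
        break;
      case STOPPING:                    // zero velocities, then land
        vel_pub.publish(vel);
        state = START_LANDING;
        break;
      case START_LANDING:
        land_pub.publish(std_msgs::Empty());
        state = FINISHED;
        break;
      default:
        break;
    }
    rate.sleep();
  }
  ROS_INFO("Mission accomplished");
  return 0;
}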


5. Position estimation
The issue of identifying the location of a mobile robot (x, y, z and orientation) at any given time is one of the most fundamental problems in robotics. Localization is a version of on-line temporal state estimation, where a mobile robot seeks to estimate its position in a global coordinate frame. The pose estimation problem can be solved in different ways depending on the chosen method. Most methods use relative positioning techniques, in which the robot position is initialised to zero at start-up and any subsequent motion is measured with respect to this location.
In this project three methods are developed for the position estimation task: the first two are based on the images obtained from the drone's cameras, and the third is based on the robot's odometry.
5.1. Position estimation based on image
The following two methods use the images from the robot's cameras to estimate its position. The first uses the front HD camera looking forwards and is based on features. The second is designed for a down-looking camera and is based on phase correlation between consecutive images. This part of the project was carried out before the front HD camera was turned, so during its development the quadcopter had the front HD camera facing forwards and the default bottom camera.
5.1.1. Position estimation based on features
Two methods have been tested to do the position estimation based on features: a) the mono_odometer node from the viso2 package, and b) the ptam node from the ptam package in the ethzasl_ptam stack.
a) Viso2
The Systems, Robotics and Vision group of the University of the Balearic Islands (Spain) maintains this package. It contains two nodes that use the libviso2¹ library: mono_odometer and stereo_odometer. Both estimate camera motion based on incoming rectified images from a calibrated camera.
In our project we tried the mono_odometer node from the viso2 package because we had only one camera looking down. To estimate the position this algorithm needs to calculate the robot motion, and to be able to compute the camera motion the transformation from the camera frame to the robot frame has to be known. To estimate the scale of the motion, the node needs the altitude and the pitch of the robot. However, this node has some limitations that make it unsuitable for this purpose in this project. It is not recommended when there are rotations: if the movement is linear it calculates the position quite well, but when there are rotations it does not work well. Since a quadcopter very rarely flies with purely linear velocity and no turning, this package was discarded for the motion estimation. Also, the monocular scale is estimated by assuming a fixed camera height over the ground, which is very unusual in a quadcopter flight.

1 LIBVISO2 (Library for Visual Odometry 2) is a very fast cross-platform (Linux, Windows) C++ library with MATLAB wrappers for computing the 6-DOF motion of a moving mono/stereo camera.



b) Ethzasl_ptam
Ethzasl_ptam is a ROS stack that builds on the monocular SLAM framework PTAM² presented by Klein and Murray in their paper at ISMAR'07 [1]. This stack contains two packages: the ptam package, which has the main nodes to perform the monocular SLAM, and the ptam_com package, which contains the header files for communication with the ptam package and custom message, service and action definitions. The main node, also called ptam, is inside the ptam package.
In this project two nodes are used to test this method: one to get the position and another to control the first node and the drone by sending velocity commands. The node used to get the position is the ptam node from the ptam package.
Ptam node
This package is maintained by Stephan Weiss, Markus Achtelik and Simon Lynen. This software has been chosen for this project because it is a SLAM system, a technique that does not need any prior information about the world to build a map, and particularly because it has been used before to estimate the position of quadcopter robots thanks to its ability not to lose the map when the camera moves fast. This software splits tracking and mapping: the tracking procedure is not slaved to the map-making procedure, and different tracking methods can be used (PTAM uses a coarse-to-fine approach with a robust estimator). This node gives the pose of the camera: x, y and z, and the angular position as a quaternion.
The first idea was to use the bottom camera, but it was discarded because of its bad resolution and the difficulty of finding features on a homogeneous floor. The camera model used in PTAM is made for wide-angle lenses (>90°), so it is definitely not made to work with cameras like the default bottom camera of the AR. Drone 2.0. We nevertheless tried it with the low-definition down-looking camera, but it did not work properly, so we decided to use the HD front-looking camera. It is much easier to find features with the front camera, not only because of its higher definition, but also because the scenes it sees are less homogeneous.
Since only monochrome capture is needed for the PTAM input images, we use a package to convert the input image from the drone to a black and white image. This package is called image_proc.
Image_proc package
This package is made by Patrick Mihelich, Kurt Konolige and Jeremy Leibs. It removes the camera distortion from the raw image stream, converts it to a monochrome image, or both. Since the front HD camera has a wide-angle lens (92° diagonal fisheye), the image appears rounded, and for some applications it should be rectified. This is not the case for PTAM, since it works fine with non-rectified images, but it does use monochrome images for tracking the features, so in this system the package is used to provide PTAM with black and white images.
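A typical way to run it is inside the camera's namespace, for example with a launch entry like the one sketched below (the namespace ardrone/front is assumed from the driver's topic layout):

<launch>
  <!-- image_proc expects image_raw and camera_info in its namespace and
       publishes, among others, image_mono and image_rect -->
  <node ns="ardrone/front" pkg="image_proc" type="image_proc" name="image_proc" />
</launch>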

2 PTAM is software developed by the Active Vision Laboratory of the Department of Engineering Science, University of Oxford, by Georg Klein and David Murray. PTAM (short for Parallel Tracking and Mapping) is a piece of computer vision software that can track the 3D position of a moving camera in real time. It is a SLAM system for augmented reality. It requires no markers, pre-made maps, known templates or inertial sensors, but instead works out the structure of the world as it goes. PTAM is useful primarily for augmented reality, although it has been applied to other tasks (robot guidance) as well.



On the left-hand side of Fig. 27 there is a non-rectified frame, where it can be seen that the table does not look flat, while in the right frame, where the same image has been rectified with the image_proc package, the table appears flat.

Fig. 27. Non-rectified and rectified images from the HD camera

Ptam camera calibration


Camera calibration, often referred to as camera resectioning, is a way of examining an image or a video and deducing what the camera situation was at the time the image was captured. Camera calibration is used primarily in robotic applications and when modelling scenes virtually based on real input. Traditionally, camera calibration was a difficult and tedious process, but modern software applications make it quite easy to achieve, even for home users. This is the case for the ptam node. Inside the PTAM stack, ethzasl_ptam, there is a node called camera_calibrator. It is a tool to calibrate the intrinsic parameters of a camera with a wide-angle lens. The calibration is done with a black-and-white grid on a homogeneous background. For the calibration procedure it is necessary to grab a few frames (around 10 shots) of the grid viewed from different angles (Fig. 28). After that, the node calculates five numbers that are used to set five parameters in a PTAM source file, which is used to run PTAM.

Fig. 28. Shots to calibrate the camera

Initialisation of PTAM
To use PTAM it is necessary to start by capturing the features to track from two different points of view, in order to be able to triangulate and find the relative distance between them and the camera. It takes a main plane using the initial features, and all the positions are based on the distance of the camera to the centre of coordinates of that plane. For this reason an initialisation is necessary before using it.


Fig. 29. Initialization of PTAM

To get the two points of view of the initial features, a translation of the camera has to be performed. It must be a pure translation, not a rotation, and it has to move approximately 10 cm (Fig. 29). When this has been done, PTAM is able to start building a (non-scaled) map, track the initial features and find new features to track in the camera frame.
If PTAM is used on its own, the commands that control it (start a map, restart a map, capture the two points of view of the initial features, etc.) are sent with the keyboard. For example, the two initial points of view are captured by pressing the space bar, and the map is restarted by pressing the 'r' key.
The controller node: pose_control_ptam
The functions developed in this node (ANNEX A.4 Pose control based on features (ptam)) can be split into two groups: the ones that control the ptam node, and the ones that control the drone. The functions that control the ptam node are creating a map, scaling the map and restarting the map. The functions that control the drone from the estimated position are position keeping and moving in any x-y direction by sending distance commands. The position keeping function has been written to test the pose estimation, and the moving function to test the scaling function. It should be pointed out that the driver of the drone has its own position keeping function, which works very well at holding the yaw angle but not so well at holding the linear (x, y) position.
The functions that control the ptam node have been implemented by publishing the same message that a keyboard would send (space bar, letter 'r', etc.) to the ptam node, for example msg_spacebar.data = "Space" for a space bar press. The map is restarted in the same way. To perform the initialisation and obtain the two initial points of view, an action has been implemented. This action translates the camera without rotating it: the quadcopter is lowered or raised, and the frames before and after that movement are captured.
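As an illustration, a minimal sketch of this mechanism is shown below. It assumes the ptam node listens for key presses on a std_msgs/String topic; the topic name vslam/key_pressed is an assumption and may differ from the actual configuration.

#include <ros/ros.h>
#include <std_msgs/String.h>

int main(int argc, char **argv)
{
    ros::init(argc, argv, "ptam_key_sender");
    ros::NodeHandle nh;
    // assumed topic name; the real one depends on the ptam node configuration
    ros::Publisher key_pub = nh.advertise<std_msgs::String>("vslam/key_pressed", 1);
    ros::Duration(1.0).sleep();      // give the connection time to be established

    std_msgs::String msg_spacebar;
    msg_spacebar.data = "Space";     // same string a space bar press would send
    key_pub.publish(msg_spacebar);   // grab one of the two initialisation views

    std_msgs::String msg_reset;
    msg_reset.data = "r";            // restart the map
    key_pub.publish(msg_reset);

    ros::spinOnce();
    return 0;
}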
The other PTAM-related function is the scaling function: monocular vision cannot measure metric scale, so the maps are built with an undetermined scale. To do the scaling, one real measurement in any of the three axes must be known, for example a known width or height of an object or, as done in this project, moving the camera again (only translating, not rotating) while knowing how far it has moved. This method requires the altitude sensor. By raising the quadcopter again about 10 cm, the scale can be computed: the altitude the drone has risen (altitude sensor) is compared with the displacement in the z-axis of



PTAM, and the scale factor is obtained with a simple division.
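A minimal sketch of this scaling step (the variable names and values are illustrative; the altitudes come from /navdata and the z values from the PTAM pose, sampled before and after raising the drone about 10 cm):

double altitude_before = 0.80, altitude_after = 0.90;   // metres, from the ultrasound sensor
double ptam_z_before   = 0.12, ptam_z_after   = 0.31;   // PTAM z, in its own (unknown) scale

// metres per PTAM unit
double scale = (altitude_after - altitude_before) / (ptam_z_after - ptam_z_before);

// any later PTAM displacement can then be converted to metres:
// metric_displacement = ptam_displacement * scale;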
The functions that control the drone are the position keeping function and moving the drone by sending distance commands. In the position keeping function the drone tries to stay still in its position, so when it is moved away it comes back to its initial position. The goal of the position keeping function is to test the accuracy of the position estimated with PTAM. The controller software is structured by states (Fig. 30 and Table 2), which are: TAKEOFF, START_INIT, MOVING_TO_INIT, END_INIT, RE_INIT, POSITION_KEEPING and LOST.
[Figure: state diagram of the controller, showing the transitions between the TAKEOFF, START_INIT, MOVING_TO_INIT, END_INIT, RE_INIT, POSITION_KEEPING and LOST states]

Fig. 30. Structure of the states in the code

TAKEOFF: the drone takes off. Next state: START_INIT.
START_INIT: starts the initialisation; takes the initial features in the first frame and raises the drone. Next state: MOVING_TO_INIT.
MOVING_TO_INIT: the drone is rising. Next state: END_INIT.
END_INIT: ends the initialisation; stops the drone and takes the position of the initial features in the frame. Next state: POSITION_KEEPING if the map works, RE_INIT if the map does not work.
RE_INIT: prepares the restart of the initialisation by descending. Next state: START_INIT.
POSITION_KEEPING: does the scaling (every time there is a new map), memorises the initial position and keeps this position. Next state: LOST if the map is lost.
LOST: stops the drone and turns to find the lost map. Next state: POSITION_KEEPING if the map is found, START_INIT if the map is not found after a timer.

Table 2. States of the code


Functions in each state
In the START_INIT state, the frame from the front camera is used as the first frame of the PTAM initialisation. In the MOVING_TO_INIT state, the drone moves vertically to get a second frame. In Fig. 32 the features from the first frame are being tracked.
When the drone has moved enough to get a different frame with the same features, the ptam node is able to start building the map from the features (Fig. 31). This map is built only with the features and it gives the camera position. The map can also be seen without the camera image and in a 3D view (Fig. 33).

Fig. 32. Features tracked

Fig. 31. PTAM map


Fig. 33. 3D PTAM map

To implement the function of moving the drone using distance commands, a new use of some joypad buttons has been added. These buttons modify the initial position memorised in the POSITION_KEEPING state. Each time the arrow buttons of the joypad are pressed, the position is shifted by one metre in the corresponding x-y direction. This can only be done after a scaling, because without the scaling procedure the map has no measurements. The distance commands allow us to test the scaling procedure implemented in this node, and the result obtained is that this method is not very accurate. In addition, the scale is not always the same because of the inaccuracy of the altitude sensor and the small random movements of the quadcopter while it flies. However, it is an initial approach to scaling with a single-camera quadcopter, and its error can be disregarded in some applications. Another button (the Select button of the joypad) has been added in this node to start a new map. With this button the map can be built wherever we drive the drone.
To operate the entire package, a launch file has been created to start the nodes ardrone_driver, ardrone_spooler, ardrone_joy and image_proc. Once all these nodes are started, the ptam node is started. Finally, the node that controls the whole system is started: the pose_control_ptam node.
In conclusion, the position-based visual servoing node works quite well if there are enough features in the image frame. To use it, it is necessary to have an environment with many objects around the flying zone from which features can be extracted. If there are too few features to build a map, or the drone loses the map, this node is able to restart a new map with new features. As said before, the scaling method is not very accurate, but it is an initial approach. An improvement to the code would be to orientate the PTAM axes to match the drone's axes; in this way, the precision of the scaling and of the movements would increase significantly, because the grid would always be in the same place with respect to the drone. This orientation could not be achieved in this project.




PTAM with the HD camera turned
This part of the position estimation (5.1.1. Position estimation based on features) was developed during the first part of the project, when the HD camera had not been turned down yet; it was still a forward-looking camera.
Since the front camera was turned to look down, the ptam node no longer works as explained in this section. For this reason we have tried to make it work with the down-looking camera. In theory there is not much difference between the two camera orientations; the only change is in the PTAM map initialisation. With the forward-looking camera, the map initialisation was done by raising the drone about 10 cm to get two different frames containing the features. With the down-looking camera, this initialisation is done by moving the drone forwards.
However, using PTAM with the down-looking camera of a quadcopter flying at low altitude, the system does not work as well as with the camera looking forwards. Since the map is built by moving the drone forwards instead of raising it, the drone leans forward, and the map cannot be built with pure translation, or at least it is not as easy as when raising it. The initialisation must be done by translating the camera without rotating it, and with this motion pure translation is almost impossible to achieve. Moreover, it is more difficult than with the forward-looking camera to maintain the map, because if the drone moves a few metres in any direction (x or y) the frame can be completely different, and since the drone tilts to move, the map is easily lost.
So, the results obtained with the ptam node and the HD down-looking camera are not very good. It would probably work better if the drone flew at a higher altitude, because the map would have more features over a larger area, so the features would not be lost so easily when the drone moves.
Tum_ardrone package
By the end of November, when the pose_control_ptam node was almost finished and running on the drone, a package appeared that does a very similar position estimation with PTAM. This package is called tum_ardrone and is the implementation of the papers [6] and [7] presented by J. Engel, J. Sturm and D. Cremers. The package consists of three components: a monocular SLAM system, an extended Kalman filter for data fusion and state estimation, and a PID controller to generate steering commands. This package demonstrates that with this system (position estimation with PTAM and a front-looking camera) the drone is able to navigate in previously unknown environments at absolute scale, without requiring artificial markers or external sensors. Another significant thing this package can do, and which we could not do with ours, is that the PTAM axes are orientated according to the initial position of the drone, with the grid always on the floor or on a plane parallel to the ground.
The package contains three nodes: drone_stateestimation, which includes PTAM and the visualisation; drone_autopilot, which is the drone controller that sends the velocity commands to the driver (ardrone_driver) and requires the drone_stateestimation node; and drone_gui, which is the GUI (Graphical User Interface) for controlling the drone (with a joystick or a keyboard) and for controlling the drone_autopilot and drone_stateestimation nodes.
The package was first released on November 21st and has been updated often since then.



5.1.2. Position estimation based on phase correlation
This node estimates the displacements of the quadcopter by applying the phase correlation algorithm to the images obtained from the bottom camera. The theory is based on the paper [2], where this method was used for sonar imaging.
Phase correlation is a method for determining the offset between two similar images using a fast frequency-domain approach. The algorithm uses the Fourier shift property, which states that a shift between two functions is transformed in the Fourier domain into a linear phase shift. If the two functions are images, applying the Fast Fourier Transform (FFT) to each, computing their normalised cross-power spectrum and then the inverse FFT, the shift between the two can be obtained.
In Fig. 34 two images with a translation applied between them can be observed. The third image is the result of the algorithm; the white dot is the maximum value of the result matrix and its location corresponds to the translation.

Fig. 34. Example of phase correlation
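A minimal sketch of this translation-only case using OpenCV's built-in phase correlation (the image file names are placeholders):

#include <iostream>
#include <opencv2/core/core.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <opencv2/highgui/highgui.hpp>

int main()
{
    cv::Mat a = cv::imread("frame_a.png", CV_LOAD_IMAGE_GRAYSCALE);
    cv::Mat b = cv::imread("frame_b.png", CV_LOAD_IMAGE_GRAYSCALE);

    // phaseCorrelate expects single-channel floating point images
    cv::Mat a32, b32;
    a.convertTo(a32, CV_32FC1);
    b.convertTo(b32, CV_32FC1);

    cv::Point2d shift = cv::phaseCorrelate(a32, b32);   // (dx, dy) in pixels
    std::cout << "Translation: " << shift.x << ", " << shift.y << std::endl;
    return 0;
}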

This, however, only works for images that have been translated but not scaled or rotated. To use this method, the second image therefore has to be de-rotated and de-scaled before applying the algorithm. In order to do that, another property of the Fourier transform can be used: the differences in rotation and scale between the two images are represented as linear shifts in log-polar coordinates.
In view of these properties, the phase correlation process between two images f and g would be the following. First apply the FFT to both images, then apply a log-polar transform to each result. Taking only the magnitudes of the log-polar transforms, calculate the normalised cross-power spectrum and then its inverse FFT. With this, the rotation and scale are known and the second image can be de-scaled and de-rotated. Finally, the phase correlation algorithm can be applied and the translation obtained. The diagram in Fig. 35 shows this process more clearly.



[Figure: block diagram of the method: FFT of both images, log-polar transform of the magnitudes, cross-power spectrum and inverse FFT to obtain the rotation and scale, de-rotation and de-scaling of the second image, and a final FFT, cross-power spectrum and inverse FFT to obtain the translation]

Fig. 35. Phase correlation method

Since this was implemented from the mathematical explanation, it was decided to go step by step. The first code developed (ANNEX A.5.1 phase_correlation_two_images.cpp) only performed phase correlation between two non-rotated and non-scaled images and gave the translation in pixels. Once this code worked, the next step was to implement the same for a camera feed and obtain the translation in millimetres. This second code (ANNEX A.5.2 phase_correlation_camera.cpp) works but is really unstable, because the slightest rotation of the camera or change of distance to the scene makes the output meaningless.
The time allotted to the project ran out before this node was finished, so only the version without rotation has been achieved.
For the implementation of this algorithm the OpenCV library and the cv_bridge package have been used.



OpenCV and cv_bridge package
OpenCV (Open Source Computer Vision) is a library of programming functions for real-time computer vision. It is free for academic and commercial use and is released under an open-source BSD licence. It was officially launched in 1999, and the first alpha version of OpenCV was released to the public in 2000. OpenCV is written in C++ and has C++, C, Python and Java interfaces running on Windows, Linux, Android and Mac. The library has more than 2,500 optimised algorithms and is used in many applications such as image segmentation, transforms, detection, recognition, tracking, camera calibration, etc. Its uses range from interactive art to mine inspection, stitching maps on the web and advanced robotics.

Fig. 36. OpenCV

The OpenCV version used in this project is 2.4, released on March the 19th of 2012.
Also, to be able to use the OpenCV library, the images from the camera feed, which arrive through a ROS topic, have to be converted from the Image message to the OpenCV image format. For these codes the cv_bridge package is needed. As its name suggests, this package acts as a bridge between OpenCV and other formats. It was created by Patrick Mihelich and James Bowman and has a BSD licence.

Fig. 37. cv_bridge structure
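A minimal sketch of the conversion used in these nodes: an image callback that turns the incoming sensor_msgs/Image into an OpenCV cv::Mat (MONO8 is chosen here because the processing works on monochrome images).

#include <ros/ros.h>
#include <cv_bridge/cv_bridge.h>
#include <sensor_msgs/Image.h>
#include <sensor_msgs/image_encodings.h>
#include <opencv2/core/core.hpp>

void imageCallback(const sensor_msgs::ImageConstPtr& msg)
{
    cv_bridge::CvImagePtr cv_ptr;
    try
    {
        cv_ptr = cv_bridge::toCvCopy(msg, sensor_msgs::image_encodings::MONO8);
    }
    catch (cv_bridge::Exception& e)
    {
        ROS_ERROR("cv_bridge exception: %s", e.what());
        return;
    }
    cv::Mat image = cv_ptr->image;   // ready for any OpenCV processing
}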

As will be seen in the next chapters, this library and the cv_bridge package have been used in several of the nodes developed in this project.



5.2. Position estimation based on odometry
To obtain the position of the robot from odometry, the only inputs needed are the acceleration, velocity and angles of the drone. All this data can be obtained from the drone's driver, which publishes a ROS message called /navdata (navigation data) containing, among other things, the accelerations, velocities, altitude measured by the ultrasound altitude sensor, orientation and the time between two consecutive messages.
In this project a node has been developed to keep the quadcopter position in order to test the accuracy of the position estimation based on odometry. Some calculations are necessary to obtain the pose of the drone in every loop, but not for the altitude nor the orientation (yaw), because both are obtained directly from the /navdata message, where the altitude measurement comes from the drone's altitude sensor. The kinematic equation (1) has been used to calculate the position in the x and y axes:

X = X0 + V0 · t      (1)

A PID controller has been used to calculate the velocity sent to the robot depending on the displacement from the starting point in all axes and in yaw. The PID controller is part of a ROS package called control_toolbox, which contains several C++ classes useful for writing robot controllers.
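A minimal sketch of this idea is shown below. It assumes the ardrone_autonomy Navdata message (vx/vy in mm/s, tm in microseconds) and its default topic names; the gains are illustrative and the sign convention of updatePid may need checking for the control_toolbox version in use.

#include <ros/ros.h>
#include <ardrone_autonomy/Navdata.h>
#include <geometry_msgs/Twist.h>
#include <control_toolbox/pid.h>

double x = 0.0, y = 0.0;      // integrated position [m]
double last_tm = -1.0;        // previous navdata timestamp [us]
control_toolbox::Pid pid_x, pid_y;
ros::Publisher cmd_pub;

void navdataCallback(const ardrone_autonomy::Navdata& nav)
{
    if (last_tm < 0.0) { last_tm = nav.tm; return; }
    double dt = (nav.tm - last_tm) * 1e-6;     // microseconds -> seconds
    last_tm = nav.tm;

    x += (nav.vx * 1e-3) * dt;                 // X = X0 + V0 * t, axis by axis
    y += (nav.vy * 1e-3) * dt;

    geometry_msgs::Twist cmd;                  // try to hold the starting point (0, 0)
    cmd.linear.x = pid_x.updatePid(x, ros::Duration(dt));
    cmd.linear.y = pid_y.updatePid(y, ros::Duration(dt));
    cmd_pub.publish(cmd);
}

int main(int argc, char** argv)
{
    ros::init(argc, argv, "odometry_keeping_sketch");
    ros::NodeHandle nh;
    pid_x.initPid(0.5, 0.0, 0.1, 0.3, -0.3);   // illustrative gains
    pid_y.initPid(0.5, 0.0, 0.1, 0.3, -0.3);
    cmd_pub = nh.advertise<geometry_msgs::Twist>("cmd_vel", 1);
    ros::Subscriber sub = nh.subscribe("ardrone/navdata", 10, navdataCallback);
    ros::spin();
    return 0;
}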
The results obtained using this node (ANNEX A.6 Pose control based on odometry) are not the desired ones, because the robot is not able to keep its position nor to come back to the initial position when it is displaced from it. When the drone is keeping the pose with this node, it constantly drifts forwards or backwards and sideways. This method is not appropriate for position estimation. This is possibly because the /navdata message is received at a limited frequency while the acceleration is continuously changing. The method could only be used if the navigation data from the robot were received very often, almost continuously.
When we wrote this node, the ardrone driver published the /navdata message at 15 Hz, but a few weeks after finishing the node the driver was updated with the possibility of increasing the frequency of the /navdata message up to 200 Hz. We have tried the node with this new frequency and the results are a little better, but not enough to obtain the pose of the drone with this node.


6. Visual servoing
Visual servoing, or vision-based robot control, is a technique that uses the visual feedback information extracted from one or more cameras to control the motion of a robot. Visual servoing techniques can be classified into three main types: IBVS (Image-Based Visual Servoing), PBVS (Position-Based Visual Servoing) and a hybrid approach:

IBVS is based on the error between the desired and the current features in the camera frame. 2D image measurements are used directly to estimate the desired movement of the robot.

PBVS is based on an estimation of the position with respect to the camera, from which the command is issued to the robot controller. These systems retrieve 3D information about the scene, where a known camera model (usually in conjunction with a geometric model of the target) is used to estimate the pose (position and orientation) of the target.

The hybrid approach is 2-1/2-D (two-and-a-half-dimensional) visual servoing. It is a combination of the two previous approaches. It does not need any geometric 3D model of the object, and it consists in combining visual features obtained directly from the image with position-based features.

In this project two types of visual servoing have been developed: IBVS and 2-1/2-D visual servoing. The 2-1/2-D visual servoing has been explained in section 5.1. Position estimation based on image, inside the position estimation part (chapter 5. Position estimation). It was explained within that section because it is one of the two methods for estimating the position that have been tested (one based on images and the other based on odometry). Since the position estimation part (chapter 5) was developed first, the 2-1/2-D visual servoing is not explained again in this section (chapter 6. Visual servoing). The IBVS is explained further ahead in this chapter.
The applications of visually guided systems are many and increasing, from intelligent homes to the automobile industry and, of course, robotics. Vision-guided robots have been one of the major research areas in robotics during the last decades. Along with this growth, software and applications for working with images in order to do visual servoing have appeared in recent years. An example is the OpenCV library, which has been explained before.
Another distinction has to be made among visual servoing methods: depending on where the camera is placed with respect to the part of the robot that has to move to the desired position, there are eye-to-hand and eye-in-hand methods. If the camera is mounted on the part that has to move, it is called eye-in-hand, because in the case of a robotic arm the camera would be located on the hand that has to move to the desired point. If the camera is located elsewhere, and therefore a transformation has to be applied to the velocities commanded to the robot, it is called eye-to-hand. The visual servoing developed in this project is eye-in-hand because the camera is on the quadcopter.



Land over a target
One of the goals of this project is to use visual servoing to command the drone to land over a known target, first a static target and then a moving one. At the beginning of the development of this function the down-looking camera was used, despite its poor resolution; later on, the HD camera is used to perform this function. To perform the landing, the target first has to be detected to obtain its position; then, the commands to land over it are sent to the drone.
Because the detection of the target involves a lot of computation in every frame and sometimes the target can be wrongly detected, the manoeuvre is split into a detection function and a tracking function in two separate nodes, with a third node that sends the velocity commands to the driver. This third node is the controller (landing_control), which is the main node and is always running. The system is designed to work in the following way: first, the controller node asks the detecting node (tag_detection_server) to find the target. When the target is detected, the detection function is stopped and the tracking of the target is started with the tracking node (tracker_server). While the target is being tracked, the velocity commands needed to achieve the landing are sent to the drone's driver. To perform all these actions there is the main node, the controller, which decides the task at every moment. The actionlib package for ROS is used to coordinate the system. This package allows a node to send requests to other nodes to perform some task, and also to receive replies to those requests. There are two kinds of nodes when using this package: clients and servers. The clients are the ones that send requests to the server nodes, which perform some specific task. In this system, the controller is the client node, and the target detection node and the tracking node are the server nodes.
Actionlib package
The actionlib package is a ROS package created by Eitan Marder-Eppstein and Vijay Pradeep. It provides a standardized interface for interfacing with preemptible tasks, such as moving the base to a target location, performing a laser scan and returning the resulting point cloud, detecting the handle of a door, etc. In a ROS-based system there are cases where one would like to send a request to a node to perform some task and also receive a reply to the request. This can currently be achieved via ROS services. But sometimes, if the service takes a long time to execute, the main node or the user might want to cancel the request during execution or get feedback while it runs. This package provides the necessary tools to create servers that perform long-running goals that can be preempted, and clients that can send requests to server nodes. It is necessary to define a few messages with which the client and server nodes communicate. These messages define the Goal, the Feedback and the Result of each task:

Goal: The goal is the objective of the task. It is sent to the server (ActionServer) from the
client (ActionClient). This can be, for example, a PoseStamped message that contains
information about where the robot should move to in the world, in case of a mobile robot.

Feedback: The feedback provides the client with information about how the task is being carried out. For the example of the moving robot, the feedback could be the position of the robot in the world at every moment.

Result: The result is the outcome of the performed task. It is sent from the server to the
client only once, when the task is completed. For the example of the moving robot, the
result could be the final pose of the robot, although in that case the result is not very
important.
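A minimal sketch of the client side of this pattern, as used conceptually by the landing controller. The action type DetectTargetAction, its package quad_actions and the action name "tag_detection" are hypothetical placeholders, not the project's real definitions.

#include <ros/ros.h>
#include <actionlib/client/simple_action_client.h>
#include <quad_actions/DetectTargetAction.h>   // hypothetical action definition

typedef actionlib::SimpleActionClient<quad_actions::DetectTargetAction> DetectClient;

void feedbackCb(const quad_actions::DetectTargetFeedbackConstPtr& feedback)
{
    // e.g. read the current target position estimate from the feedback
}

int main(int argc, char** argv)
{
    ros::init(argc, argv, "landing_control_sketch");
    DetectClient client("tag_detection", true);   // true: spin a thread for this client
    client.waitForServer();

    quad_actions::DetectTargetGoal goal;          // the request sent to the server
    client.sendGoal(goal,
                    DetectClient::SimpleDoneCallback(),
                    DetectClient::SimpleActiveCallback(),
                    &feedbackCb);

    // the goal can be preempted at any time, e.g. when the detection should stop
    // and the tracking server should take over:
    // client.cancelGoal();

    ros::spin();
    return 0;
}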


Structure of the landing procedure
Taking advantage of the modularity that ROS provides and using the actionlib package, the landing procedure system comprises three nodes that communicate with each other (Fig. 38).

[Figure: diagram of the system: the Landing Control node (actionlib client) exchanges start/preempt requests and feedback/result messages with the Target Detection and Target Tracking server nodes; the velocity commands from these nodes and from the joypad node go through the Spooler to the AR.Drone driver, which in turn provides the navigation data and the camera images]

Fig. 38. Structure of the landing control

In Fig. 38, the nodes on the right are the ones that perform the autonomous landing procedure. There is the master node, called Landing Control, which is the client that asks the Target Detection and Target Tracking nodes to perform their respective tasks. The Target Detection and Target Tracking nodes are server nodes, which send feedback to the master node. The orange arrows represent the messages created by the actionlib library in order to establish the communication between clients and servers. The target detection feedback message contains the target position; the tracking feedback message contains the velocity to send to the drone, the position of the target with respect to the centre of the image in centimetres and a variable indicating whether the tracking is working. The result messages of the Target Detection server and the Tracking server are empty, because there is no information to send to the master other than the completion of the task.
Two launch files have been created in order to run the system. The first launch file starts the AR.Drone driver, the Spooler node, the Joypad node and the image_proc node (explained before: to have the camera images rectified). The second launch file starts the three visual servoing nodes: the landing control, the target detection and the tracking nodes. There are two different launch files instead of one because the driver takes a while to establish communication with the quadcopter, and the visual servoing nodes need the driver to be running before they start. Another thing needed to run the system is the template image file used by the detection part. The path of that image must be specified in the code of the detection node (ANNEX A.7.3 tag_detection_server.cpp), and the image must have the appropriate proportions.



6.1. The controller node: landing_control
The controller node is the master node that controls all the manoeuvres and the steps of the system (take off, search for the target, approach it, restart the search if the target is lost and land over it). This node is also the one that commands the other two nodes, the detection and tracking nodes, to start their processes. The communication between these nodes is performed by making them client/server nodes with the actionlib package, as explained before.
About the code
It is structured by states (called status in the code, ANNEX A.7.1 landing_control.cpp), which are TAKEOFF, SEARCHING, TRACKING, LOST and LANDING (Table 3).
TAKEOFF: the drone takes off. Next state: SEARCHING.
SEARCHING: starts the Target Detection node. Next state: TRACKING if the target is found.
TRACKING: starts the Tracking node. Next state: LANDING if the approach to the target has been correct, LOST if the target has been lost.
LOST: tries to find the lost target; if it cannot find it, starts searching again. Next state: TRACKING if the target has been found, SEARCHING if the target has not been found within 3 seconds.
LANDING: lands the drone. Next state: none.

Table 3. States of the code

In the SEARCHING state the Target Detection node is started, and in every loop a feedback message from this node is received; first the drone is placed at an optimum altitude to find the target (between 800 mm and 1300 mm) and then it is moved forwards to find it. In the TRACKING state the Tracking node is started, and every time a feedback message is received a velocity command is sent to the drone. During the tracking, the velocity commands are received from the tracking node, which computes the velocities for the visual servoing task as explained later. If the target is centred in the frame, a downward velocity is sent. When the target is lost, the LOST state is entered to try to find the target again, and if it is not found, the system changes to the SEARCHING state again. When the LANDING state is set, a forward velocity and the landing command are sent to the drone to make it land centred over the target.
This node has a function that sets the system to the LANDING state to land the drone when the battery is almost empty. Another function of this node is to print on the screen information that can be useful during the operation of the system, such as the current state of the system, the drone's battery, the altitude or the velocity commands, among others.
The Target Detection node and the Tracking node are the main reason for the modification of the drone, the change of the front camera angle mentioned before in this project. Since the camera used has a wide-angle lens (92° diagonal fisheye), its image appears rounded, so a processing step that corrects the distortion of the lens is needed. This processing has been done with the image_proc ROS package, explained before in 5.1.1. Position estimation based on features.



6.2. The target detection node: tag_detection_server
The target detection node (ANNEX A.7.3 tag_detection_server.cpp) is the node responsible for detecting the target where the drone has to land. Taking into account that the landing place will be a known target and that nothing but that target has to be recognised, two strategies have been considered for this purpose. An important thing to consider when choosing the strategy is that most of the time the target will not have the same orientation, size or illumination (colour), so the code has to be able to find the target when it is rotated, changes size or appears under different light conditions.
Strategy: Colour detection
Pros: the size of the target does not matter; the orientation of the target does not matter; it is fast enough for the task.
Cons: it is not robust when the light conditions change; other things of the same colour besides the target can be detected.

Strategy: Template matching
Pros: it is a very robust function; it is robust when the light conditions change; it is fast enough for the target detection; wrong targets can be discarded.
Cons: the target must have the size and orientation of the template.

Table 4. Detecting strategies

To use the colour strategy, more processing, such as shape comparison, would be needed to increase the robustness of the detection. In addition, the lack of robustness caused by changing light conditions is very difficult to solve.
Template matching can be a fast and effective way of monitoring the position of an object within the image, but it needs additional processing to make it work when the target has different sizes (seen from different heights) and orientations.
Finally, the strategy chosen to perform the detection is the template matching function of OpenCV, because it is easier to add the processing that handles changes of orientation and size than to make the colour detection robust to changing light conditions.









Functions of the code
For this node a template image is needed, against which the images in the frame are compared. The first template chosen was a white H inside a black circle (Fig. 40). With this template, the circle was the element tracked by the tracking node, and the hand-over of the circle position in the frame worked pretty well. The hand-over was made by sending five points of the circle from the detection node to the tracking node.

Fig. 40. First target tried

But after the change of the tracking function, for reasons explained further on, the target with the circle and the H inside would not work anymore. With the new tracker the element tracked is an area in the centre of the target, so the circle around the H was no longer necessary. There was also a problem with the hand-over of the position of the tracking area between the detector and the tracker: the detector has to send the position of a point inside the area to track, and since the H has narrow white areas, by the time the tracking node takes that central position the drone may have moved a little, enough for the point to end up outside the H. For that reason it was decided to make a target with a bigger H. Several targets were tried, but the final target (Fig. 39) was one with a bigger area in the centre, in order to reduce the possibility of a mistake when passing the point from the detector to the tracker while flying. This new H is black because, since the tracking only needs a white area surrounded by black or vice versa, it works as well as the white H.

Fig. 39. Final target

The template has to measure half of an A4 sheet, because that is the size at which the detector searches for it. So, the final target is a sheet of paper with the template inside (Fig. 41).



Fig. 41. Target in a sheet of paper

To detect the target, the node has to detect the sheet, compare it to the template and decide whether it is the target or not. The detection also has to work when the target is rotated or has a different size from the template. That is why the detection node is structured in different functions:
The first function, called findCountours in the code (ANNEX A.7.3 tag_detection_server.cpp), finds square shapes, keeping only polygons with four edges. To do this, some OpenCV functions are used. First, the Canny function is applied to the frame to get the edges; to find polygons from these edges, the findContours function is called. Then the approxPolyDP function is used to approximate the polygons with quadrangles. Finally, the polygons with four edges and angles similar to those of a rectangle are selected.
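A minimal sketch of this square-finding step (the Canny thresholds and the minimum area are illustrative values):

#include <cmath>
#include <vector>
#include <opencv2/imgproc/imgproc.hpp>

void findSquares(const cv::Mat& gray, std::vector<std::vector<cv::Point> >& squares)
{
    cv::Mat edges;
    cv::Canny(gray, edges, 50, 150);        // edge map
    cv::dilate(edges, edges, cv::Mat());    // close small gaps between edge segments

    std::vector<std::vector<cv::Point> > contours;
    cv::findContours(edges, contours, CV_RETR_LIST, CV_CHAIN_APPROX_SIMPLE);

    for (size_t i = 0; i < contours.size(); i++)
    {
        std::vector<cv::Point> approx;
        cv::approxPolyDP(contours[i], approx,
                         cv::arcLength(contours[i], true) * 0.02, true);

        // keep convex quadrangles with a reasonable area
        if (approx.size() == 4 &&
            std::fabs(cv::contourArea(approx)) > 1000 &&
            cv::isContourConvex(approx))
            squares.push_back(approx);
    }
}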

Fig. 42. Squares detected

A drawSquares function is implemented, which allows us to see the detected squares (Fig. 42). In the final code this function is not executed because it is not necessary for the main objective. For this detection function, the sheet must be surrounded by a dark area so that the white square is detected easily.



The detected squares are passed to a function that rectifies the square shapes, called rectifiedSquares in the code. Within this function, the template is loaded. The template should not be the drawn template directly, because it would look different in the camera frame. For this reason the template loaded is an image of the template previously taken from the camera (Fig. 43).

Fig. 43. Template obtained from a camera image

This is the template searched for in the rectified square, which is obtained from the four corners of each detected square. Using the four corners, the square is reoriented and resized to the correct size to be compared with the template (Fig. 44).

Fig. 44. Rectified square with the target

When the squares are resized, the MatchingMethod function is carried out. This is the main function of the code, where the detected squares are compared with the template. Several methods can be chosen in this function, and after testing all of them, the one called CV_TM_SQDIFF proved to be the most effective for this purpose. The most important part of this function is the OpenCV function matchTemplate, which slides the template image over the camera frame and stores the comparison results in the result matrix (Fig. 45). This matrix is a grey-scale image where the value of each pixel corresponds to how close the match is. There is an OpenCV function that finds the pixel with the most likely value and therefore the likely target position within the image.
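A minimal sketch of that comparison (the acceptance threshold is an assumption; with CV_TM_SQDIFF the best match is the minimum of the result matrix):

#include <opencv2/core/core.hpp>
#include <opencv2/imgproc/imgproc.hpp>

bool matchesTemplate(const cv::Mat& rectifiedSquare, const cv::Mat& templ,
                     double acceptThreshold, cv::Point& bestLoc)
{
    cv::Mat result;
    cv::matchTemplate(rectifiedSquare, templ, result, CV_TM_SQDIFF);

    double minVal, maxVal;
    cv::Point minLoc, maxLoc;
    cv::minMaxLoc(result, &minVal, &maxVal, &minLoc, &maxLoc);

    bestLoc = minLoc;                    // most likely position of the template
    return minVal < acceptThreshold;     // decide whether it is really the target
}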



Fig. 45. Result matrix

When the result matrix looks like Fig. 45, it means the template has been found in the detected square, and if it is detected for 5 consecutive frames, a green square is drawn around the target (Fig. 46).

Fig. 46. Target detected and verified

Initially there was another function implemented in this node. It was used to find a circle within the target once it was found, and then take five points on the perimeter of that circle to pass them to the tracking node (Fig. 47), but since the tracking strategy was changed, this function was no longer useful, so we decided to remove it.


Fig. 47. Circle and points found in the first target tried

The Target Detection node sends the result to the master node when the target is found and verified, i.e. it has been detected for 5 consecutive frames. The result message is an empty message, because there is no important information to communicate other than the completion of the task. The feedback message contains the position of the target and two variables indicating whether the target has been found. As explained before, to start the tracking procedure it is necessary to pass a point (x, y) to the tracking node. This point is the centre of the target, the black area of the H, which is a big area so as not to be mistaken. It is sent to the tracking node as two parameters (the x and y positions in the frame).
This node was begun when the front camera (the HD, wide-angle camera) had not been turned yet. In Fig. 49 and Fig. 48 the difference of definition between both images can be seen, but the most important thing is that with the HD camera more surface is covered, so the target is detected faster. This is very important in the tracking node as well.

Fig. 49. Low-resolution camera frame

Fig. 48. High-resolution camera frame



6.3. The target tracking node: tracker_server
The objective of this node is to track a target that has already been detected by the tag_detection_server node, compute the visual servoing task to obtain the velocities, and then pass them on to the master node through the feedback messages. To compute the visual servoing part we use an existing platform called ViSP.
6.3.1. Visual Servoing Platform (ViSP)
For the development of this node the Visual Servoing Platform (ViSP) has been used. This library contains different functionalities for implementing visual servoing applications, ranging from image grabbing, image processing and tracking algorithms to the control of robots. The library has a modular architecture, meaning that the different functions are independent from each other. This allows the user to implement only the features needed for each specific application. It must be mentioned that this library is mainly addressed to industrial environments, where all the conditions, such as the lighting, can be controlled, and where there are mainly robotic arms, which have steady movement and easily controllable velocities. Obviously the subject of this project does not fit these conditions, and this limitation will be further analysed in the conclusions.
The functionalities used in this project are the image display, the ellipse tracker, the pose estimation algorithm, the dot tracker and the visual servoing task builder. The ellipse tracking was implemented first, but it was not robust enough to track under flying conditions. Then, after trying the different tracking methods available in this library, it was determined that the one offering the most robust results for this project was the dot tracker, so this was the method used in the definitive version of the tracking node.
When implementing the ellipse tracker, the approach used was to estimate the pose of the ellipse and pass this pose to the master, which would then use a PID to determine the velocities. In the second version of this node, besides changing the feature to track, it was decided to implement a visual servoing task using the specific tools available in ViSP. Both methods are explained in this chapter, as well as the results obtained with each one.
6.3.2. Ellipse tracker
Introduction
The ellipse tracker is based on the moving edges tracker. This tracker is based on matching through convolution masks and has three derived trackers: for ellipses, for lines and for NURBS curves. The method is as follows: the edge is sampled, and at each iteration the samples are tracked along the normal to the edge. The equation of the edge is computed at each iteration, which makes it possible to find the normal to the edge at each sample and to detect outliers (Fig. 50).


Fig. 50. Ellipse tracker

Functioning
To initialize the tracking, the algorithm needs to know five points that lie on the contour of the ellipse to be tracked. On the first iteration the program initializes the tracking and the pose estimation and starts to track. From the second iteration on, it only keeps tracking and updates the estimated position. The input image has to be processed so that the tracker is able to work. The processing includes a Gaussian blur to reduce the noise and a Canny edge detector to obtain an image with only edges.
Tracking parameters
The tracking parameters are very sensitive to the resolution of the camera, the frames per
second to be processed and the illumination conditions. They are declared within the code
and have to be tuned for every application.

Threshold: The higher it is, the sharper the transition must be to accept a point. Using a
low value facilitates the detection of an edge in a low contrast environment, however it
may introduce outliers in the minimisation.

mu1 and mu2: These values are used to reject a new candidate position for the sample point if the new convolution is too small (for mu1) or too high (for mu2) compared to the previous one. For example, if mu1 = mu2 = 0.5, the sample point is accepted only if the value of the new convolution is between 50% and 150% of the previous convolution. This procedure allows rejecting points with a different transition (black-to-white instead of the previous white-to-black, for example).

Range: Corresponds to the distance in pixels along the normal to the curve where the new edge is searched for. This value depends on the expected displacement of the edge between two consecutive images. The higher the value, the slower the algorithm and the more outliers there will be.

Sample step: Corresponds to the distance between two consecutive sample points on a curve. If the object is very simple and the environment is not too noisy, a high value can be sufficient.



Structure of the code
Like previous nodes, this code makes use of the OpenCV library and the cv_bridge package. Furthermore, in this case it has been necessary to make an additional transformation of the image to convert it from the OpenCV image format to the ViSP image format. This has been done with the visp_bridge package. Very similar to cv_bridge, this package converts OpenCV structures to ViSP and vice versa.
The structure of the code is divided between an initialization of all the functions and parameters, which has to be done only once at the beginning, and the update of the functions, which is iterative and runs every time an image from the camera is received. A scheme of this structure can be observed in Fig. 51; the initialization is contained inside the condition of being the first time the callback is called.


Fig. 51. Structure of the circle tracker

6.3.3. Dot tracker


ViSP provides algorithms to track white dots on a black background, or black dots on a white background. These algorithms are the ones that have been used for this node because they are much more stable than the circle tracker for this task. In the ViSP algorithms, a dot is a part of the image where the connected pixels have the same level (black or white). There are two dot tracker classes in the ViSP libraries, vpDot and vpDot2. The second is the one used in this node because it is more efficient and robust. A dot in ViSP is defined by its grey level, its centre of gravity, its size and its moments (the surface or area m00, the first-order moments along i and j, m01 and m10, the second-order moments along i and j, m02 and m20, and m11).
The tracking procedure consists in defining the dot to track and then tracking it. To define the dot to track it is not necessary to set all the parameters explained before; the tracker can be initialised just by knowing a point inside the dot. To provide that point, the detection node sends a point at the centre of the target, as parameters, to the tracking node, and the area that contains this point becomes the dot (central area) to track.
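A minimal sketch of that usage (the seed point coordinates are placeholders; in the real node they come from the detection server):

#include <visp/vpImage.h>
#include <visp/vpImagePoint.h>
#include <visp/vpDot2.h>
#include <visp/vpException.h>

// returns false when the tracking is lost, so the node can report it in the feedback
bool trackDot(vpDot2& dot, const vpImage<unsigned char>& I,
              bool first_frame, double seed_row, double seed_col)
{
    try
    {
        if (first_frame)
        {
            dot.setGraphics(true);                                  // draw the dot contour
            dot.initTracking(I, vpImagePoint(seed_row, seed_col));  // point inside the H
        }
        else
        {
            dot.track(I);                                           // update the centre of gravity
        }
        vpImagePoint cog = dot.getCog();   // fed to the visual servoing task
        (void)cog;
        return true;
    }
    catch (const vpException&)
    {
        return false;
    }
}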
Before doing the tracking it is necessary to convert the images from the camera into grey scale and then into the ViSP vpImage class, so that they can also be displayed with ViSP. The ViSP algorithm



performs the tracking in the following way: from the previous position of the centre of gravity, it moves right to detect the boundary, then follows the boundary in order to compute the Freeman chain and find the dot characteristics (centre of gravity, size and moments). If a dot is found, it checks whether it looks like the previous dot (size and moments); if no dot is detected or it is not similar, it checks whether the dot is in a surrounding part of the image.
The dot tracker has some advantages over other types of tracker. For example, it is very robust and there is almost no tracking error if noise and specularity are not too strong. It has automatic dot detection for initialization if a point inside the dot is given, and if the dot is lost, it can search for a similar dot in a larger region and find the lost dot again.
In the code of the tracking node (ANNEX A.7.5 tracker_server.cpp), first of all the central point of the target is received from the detection node. Then the tracking and the visual servoing task are initialized. In each frame, this node tries to track the dot, and if it is found, the velocity is calculated, the dot and its centre are displayed (Fig. 52) and feedback (with a boolean indicating whether the tracking is working) is sent to the master. If the tracking does not work, the feedback is sent to the master as well, but with the boolean saying that tracking is not working.

Fig. 52. Tracking the H as a dot

The Target Tracking node sends the position of the target, the calculated velocity and a variable indicating whether the node is tracking correctly or has lost the target. The result message of this node is an empty message, as said before, because there is no information to transmit other than the accomplishment of the task. The master can preempt this task if the tracking has been lost for more than 3 seconds; the empty result message is then sent as well, with no consequences.



Structure of the code
The structure is almost the same as in the ellipse tracker: on the first iteration of the callback the functions and parameters are initialized, and then the functions are updated every time an image is received. The functions to update are the tracker and the visual servoing task. The velocities are also sent to the controller through the feedback for every frame computed.

Fig. 53. Structure of the dot tracker

Visual servoing task


As said before, it is possible to build the visual servoing task with ViSP. This introduction gives a very general overview of visual servoing and then goes into more detail about the image-based method.
The aim of visual servoing is to reduce an error defined by equation (2):

e(t) = s(m(t), a) − s*      (2)

where s is a vector of visual features, m(t) a set of image measurements (usually the image coordinates of interest points or the centroid of an object) and a a set of parameters that represent potential additional knowledge about the system. s* contains the desired values of the features.
In the case of image-based control, m(t) are the image-plane coordinates, in pixels, of the interest points, and a are the camera intrinsic parameters. The control law is the one shown in equation (3):

vc = −λ Le+ e      (3)

where vc is the velocity sent to the robot, Le+ is an approximation of the pseudo-inverse of the interaction matrix and λ is a positive gain that makes the error decrease.
The interaction matrix must be approximated. There are several choices for constructing the approximation Le+, but the most commonly used are the following.


Current: the approximation used is Le+, the pseudo-inverse of the interaction matrix computed at the current position. For this method the current depth of each point must be available. (4)

Desired: the approximation used is Le*+, the pseudo-inverse of the interaction matrix computed at the desired position. This way only the desired depth of each point has to be set. (5)

Mean: the approximation used is 1/2 (Le + Le*)+. For this method it is also necessary to have the current depth of each point. (6)

According to the results obtained by François Chaumette and Seth Hutchinson [4], the best method is the mean of the two approximations, so initially it was the method chosen for this project. Since the target is located on the floor, the current depths of the points correspond to the altitude of the quadcopter, which can be obtained from the altitude sensor. However, during the trials the method that proved to work best was the approximation using the desired position, and this is therefore the method implemented in the last version of the code.
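A minimal sketch of such a task built with ViSP (the gain, the desired depth Zd and the camera parameters are assumptions for illustration; computeControlLaw returns the 6-dof camera velocity vc):

#include <visp/vpServo.h>
#include <visp/vpFeaturePoint.h>
#include <visp/vpFeatureBuilder.h>
#include <visp/vpDot2.h>
#include <visp/vpCameraParameters.h>
#include <visp/vpColVector.h>

vpColVector servoStep(vpServo& task, vpFeaturePoint& s, vpFeaturePoint& sd,
                      const vpCameraParameters& cam, const vpDot2& dot,
                      bool first_time, double Zd)
{
    if (first_time)
    {
        task.setServo(vpServo::EYEINHAND_CAMERA);          // camera mounted on the drone
        task.setInteractionMatrixType(vpServo::DESIRED);   // the approximation that worked best
        task.setLambda(0.3);                               // illustrative gain
        sd.buildFrom(0.0, 0.0, Zd);                        // desired: target centred, depth Zd
        vpFeatureBuilder::create(s, cam, dot);             // current feature from the tracked dot
        task.addFeature(s, sd);
    }
    else
    {
        vpFeatureBuilder::create(s, cam, dot);             // update the current feature
    }
    return task.computeControlLaw();                       // camera velocity vc
}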
Problem solving
This library is aimed mostly at robotic arms, which makes the visual servo control difficult to implement in a robot such as a quadcopter. In the case of a robotic arm the commanded velocities are executed more accurately; in particular, when the control law asks for a decrease in velocity, it is really difficult for the drone to slow down in time and not overshoot the target.
To solve that problem a maximum velocity has been imposed: if the velocity resulting from the control law is larger than the limit, the velocity sent to the drone is the maximum value. Flying slower, it is easier for the quadcopter to stop or change direction without overshooting the target.
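A minimal sketch of this saturation (the 0.2 m/s limit is an illustrative value):

// clamp one velocity component to the allowed range before sending it to the drone
double clampVelocity(double v, double v_max = 0.2)
{
    if (v >  v_max) return  v_max;
    if (v < -v_max) return -v_max;
    return v;
}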
6.3.4. Calibration of the bottom camera
ViSP has a package for calibrating the camera. The user has to edit the launch file to add the camera path. The package launches a console and a visual interface. The user has to point the camera at the calibration pattern from different positions and angles. Each time a new position is held, the user has to click on the display and then click on the four points indicated in the pattern in the proper order. After that, the next image is chosen and the process is repeated. When enough images have been taken, the user has to run a command in a new terminal calling the calibrator, which calculates the calibration parameters and writes them to a .calib file.

Fig. 54. VISP calibration pattern


Fig. 55. Calibration process for VISP


7. Conclusions
The Parrot AR.Drone 2.0 is a very versatile quadcopter; as can be seen in this project, it is possible to develop many robotic applications with it. It is quite resistant to small crashes and blows, which makes it a good choice for learning to program robots. Another advantage is that it has Wi-Fi, which makes it possible to run the programs on a computer while receiving all the data from the sensors and sending all the commands.
On the downside, it is not very accurate at following the commanded velocities. Many factors, like a cluttered environment or air currents, can make the drone somewhat unstable, even enough to make it move when the command is to stay still and hover. Also, when some of these conditions are met, the quadcopter may find it more difficult to progress in some directions than in others, depending on the direction in which the environment is pushing it. From the trials it has also been observed that the battery level can affect the flight: under the same velocity commands the drone moves more slowly when the battery level is below approximately 30%, making it less powerful against the external forces mentioned above.
Moreover, the movement of the quadcopter is not smooth but fluctuating, which makes the video feed from the cameras highly changeable and therefore makes the image processing more difficult to implement.
With the position estimation based on features it has been demonstrated that, with the system proposed in this project (position estimation with PTAM and a front-looking camera), it is possible to estimate the position of the drone very accurately and to navigate using a front camera without previous knowledge of the environment. One thing we could not achieve in this project is to get the PTAM axes orientated according to the initial position of the drone, with the grid always on the floor or on a plane parallel to the ground.
The position estimation based on phase correlation was not achieved in this project due to a lack of time and knowledge. Further work on the subject would be necessary to successfully implement this node. As can be expected, in this case it is impossible to assess the performance of the node.
The odometry accuracy of the AR.Drone 2.0 has been tested by doing position keeping, and it has been shown that the navigation data acquired from the drone is not accurate enough to perform navigation.
With this first approach to the autonomous landing procedure (the visual servoing part) developed in this project, it has been demonstrated that a landing is a manoeuvre that can be achieved with very few sensors and a camera. One thing that has not been done is a target searching strategy, because we had no navigation when developing the visual servoing part, since the front camera (used to estimate the position with PTAM) had been turned down.






Proposed future work
Here are proposed some extensions of this project and some improvements that, in retrospect, could be made to some of the nodes developed.

Rosify the quadcopter. Make a ROS driver for the quadcopter that can control the robot
more deeply, for example that controls each motor separately.

Improve the communication between the nodes and the spooler, especially the joystick node, to reduce the number of topics. Instead of one topic per command with empty messages, use only a few topics with messages containing different commands. In this project this was done by emulating the system used in the ardrone_autonomy package, but in the end there are too many topics.

Get the PTAM axes orientated according to the initial position of the drone, with the grid
always on the floor or on a parallel plane to the ground.

Try to make the PTAM work in a quadcopter with a down looking camera flying at a
higher altitude.

Finish the phase correlation node to be able to obtain the rotation and the scale from the
camera feed.

Complement the autonomous landing with a searching strategy that permits starting the procedure from a greater distance. To achieve that, navigation is needed, because by only sending velocity messages, without navigation, it is impossible to follow a path. Since in this project the front HD camera was turned, there was no navigation.

Land upon a moving target.


8. References
[1] Quigley, M.; Gerkey, B.; Conley, K.; Faust, J.; Foote, T.; Leibs, J.; Berger, E.; Wheeler, R.; Ng,
A.; , "ROS: an open-source Robot Operating System," in Proc. Open-Source Software
workshop of the International Conference on Robotics and Automation (ICRA), 2009
[2] Hurtós, N.; Cufí, X.; Petillot, Y.; Salvi, J.; , "Fourier-based registrations for two-dimensional
forward-looking sonar image mosaicing," Intelligent Robots and Systems (IROS), 2012
IEEE/RSJ International Conference on , vol., no., pp.5298-5305, 7-12 Oct. 2012
[3] Klein, G.; Murray, D.; , Parallel Tracking and Mapping for Small AR Workspaces, Mixed
and Augmented Reality, 2007. ISMAR 2007. 6th IEEE and ACM International Symposium on ,
vol., no., pp.225-234, 13-16 Nov. 2007
[4] Chaumette, F.; Hutchinson, S.; , Visual servo control. I. Basic approaches, Robotics &
Automation Magazine, IEEE , vol.13, no.4, pp.82-90, Dec. 2006
[5] Reddy, B.S.; Chatterji, B.N.; , "An FFT-based technique for translation, rotation, and scale-
invariant image registration," Image Processing, IEEE Transactions on , vol.5, no.8,
pp.1266-1271, Aug 1996
[6] Engel, J.; Sturm, J.; Cremers, D.; , Camera-based navigation of a low-cost quadcopter,
Intelligent Robots and Systems (IROS), 2012 IEEE/RSJ International Conference on , vol.,
no., pp.2815-2821, 7-12 Oct. 2012
[7] Engel, J.; Sturm J.; Cremers D.; , Accurate Figure Flying with a Quadcopter Using Onboard
Visual and Inertial Sensing, In Proc. of the Workshop on Visual Control of Mobile Robots
(ViCoMoR) at the IEEE/RJS International Conference on Intelligent Robot Systems (IROS),
2012
[8] Chaumette, F.; Marchand, E.; Novotny, F.; Saunier, A.; Spindler, F.; Tallonneau, R.; ,
Building a visual servoing task visp tutorial, Lagadic project,
http://www.irisa.fr/lagadic, October 2011
[9] Chaumette, F.; Marchand, E.; Spindler, F.; Tallonneau, R.; Yol, A.; , Computer vision
algorithms visp tutorial, Lagadic project, http://www.irisa.fr/lagadic, July 2012
[10] Bradski, G.; Kaehler, A.; , Learning OpenCV: Computer Vision with the OpenCV Library,
O'Reilly Media, September 2008
[11] Soulié, J.; , C++ Language Tutorial, cplusplus.com, 2008
[12] http://www.ros.org/wiki/
[13] https://github.com/AutonomyLab/ardrone_autonomy



[14] http://opencv.org/
[15] http://docs.opencv.org/2.4.3/index.html
[16] http://www.irisa.fr/lagadic/visp/visp.html
[17] http://www.robots.ox.ac.uk/~gk/PTAM/
[18] http://ewokrampage.wordpress.com


ANNEX A: CODES
A.1 Joystick node
A.1.1 joystick.cpp

// Define DEBUG_OUTPUT to enable PRINTLN_DEBUG output when not using ROS.
// ROS debug level output is toggled at runtime using rxconsole.
//#define DEBUG_OUTPUT

// ======================================= includes
#include <iostream>
using namespace std;
#include <errno.h>
#include <fcntl.h>
#include <unistd.h>
#include <stdint.h>
#include <stdio.h>
#include <sys/stat.h>
#include <sys/types.h>
#include <linux/joystick.h>
#include "Joystick.h"

// ========================================= defines
#define MAX_ANALOG_VALUE 32767
#define LINEAR_LIMIT 0.3
#define ANGULAR_LIMIT 1
#define VELOCITY_SCALE_X 1
#define VELOCITY_SCALE_Y 1
#define VELOCITY_SCALE_Z 1
#define VELOCITY_SCALE_YAW 1

// ====================================== methods
// ---------------------------------------------------------------------------
Joystick::Joystick(int argc, char ** argv)
{

if(argc==3){

if (strcmp(argv[1], "-p") == 0)



{




ROS_INFO_STREAM("Using joystick device: " << argv[2]);




deviceName = "/dev/input/" + string(argv[2]);



}


}


if(deviceName.length() == 0)

{


deviceName = DEFAULT_DEVICE_PATH;


ROS_INFO_STREAM("Using DEFAULT Device: " << deviceName);

}
}
Joystick::~Joystick()
{
}
// ---------------------------------------------------------------------------
int Joystick::main(){


ROS_INFO_STREAM("init entered");



// And create a timer for periodic calls to read Joy.

doWorkTimer_ = nh_.createTimer(ros::Duration(0.1), boost::bind(&Joystick::doWork ,this));


memset(&pad, 0, sizeof(pad));

pad.fd = open(deviceName.c_str(), O_RDONLY);

if (pad.fd <= 0)

{


close(pad.fd);


cerr << "Failed to open joystick - check it is attached." << endl;





return false;

}


cout << "Joystick opened ok." << endl;


// Get pad info ...

ioctl(pad.fd, JSIOCGAXES, &pad.axisCount);

ioctl(pad.fd, JSIOCGBUTTONS, &pad.buttonCount);

ioctl(pad.fd, JSIOCGVERSION, &pad.version);

ioctl(pad.fd, JSIOCGNAME(80), &pad.devName);

fcntl(pad.fd, F_SETFL, O_NONBLOCK);


cout << "axis : " << (int)pad.axisCount << endl;

cout << "buttons : " << (int)pad.buttonCount << endl;

cout << "version : " << pad.version << endl;

cout << "name : " << pad.devName << endl;


// set default values

pad.changed = false;

for (int i=0;i<pad.axisCount;i++) pad.aPos[i]=0;

for (int i=0;i<pad.buttonCount;i++) pad.bPos[i]=0;


lastValuesNonZero = true;


pub_move = nh_.advertise<ardrone_msgs::Vel>("spooler/cmd_vel", 1);

pub_aux = nh_.advertise<ardrone_msgs::Vel>("spooler/cmd_aux", 1);

pub_takeoff = nh_.advertise<ardrone_msgs::Priority>("spooler/takeoff", 1000);

pub_land = nh_.advertise<ardrone_msgs::Priority>("spooler/land", 1000);

pub_reset = nh_.advertise<ardrone_msgs::Priority>("spooler/reset", 1000);

pub_endtakeover=nh_.advertise<ardrone_msgs::Priority>("spooler/endtakeover",1000);

pub_custom = nh_.advertise<ardrone_msgs::Priority>("spooler/custom", 1000);

pub_left_flip = nh_.advertise<ardrone_msgs::Priority>("spooler/left_flip", 1000);

pub_right_flip = nh_.advertise<ardrone_msgs::Priority>("spooler/right_flip", 1000);

pub_forward_flip = nh_.advertise<ardrone_msgs::Priority>("spooler/forward_flip", 1000);

pub_back_flip = nh_.advertise<ardrone_msgs::Priority>("spooler/back_flip", 1000);


ROS_INFO_STREAM("initialisation complete");


while(ros::ok()){


ros::spin();


}



close(pad.fd);

return true;

}
void Joystick::doWork()
{

readLatestState();

printPadState();

float x, y, z, yaw, x_aux, y_aux;

bool takeoff, land, reset, endtakeover, custom, left_flip, right_flip, back_flip, forward_flip;

x = scaleStick(-pad.aPos[3], LINEAR_LIMIT);

y = scaleStick(-pad.aPos[2], LINEAR_LIMIT);

z = scaleStick(-pad.aPos[1], LINEAR_LIMIT);

yaw = scaleStick(-pad.aPos[0], ANGULAR_LIMIT);

takeoff = pad.bPos[3];

land = pad.bPos[1];

reset = pad.bPos[9];

endtakeover = pad.bPos[8];

custom = pad.bPos[12];

x_aux = scaleStick(-pad.aPos[5], LINEAR_LIMIT);

y_aux = scaleStick(-pad.aPos[4], LINEAR_LIMIT);

left_flip = (pad.bPos[2] && pad.bPos[4]);

right_flip = (pad.bPos[2] && pad.bPos[5]);

forward_flip = (pad.bPos[2] && pad.bPos[7]);

back_flip = (pad.bPos[2] && pad.bPos[6]);


bool sending = true;


if (x != 0 || y != 0 || z != 0 || yaw != 0 || takeoff != 0 || land != 0 || reset != 0 || left_flip!=0 || right_flip!=0 || back_flip!=0 || forward_flip!=0
)

{


ROS_INFO_STREAM(-pad.aPos[3] << " " << -pad.aPos[2] << " " << -pad.aPos[1] << " " << -pad.aPos[0] << " ");


ROS_INFO_STREAM("Joystick forces non-zero, sending msg.");


lastValuesNonZero = true;

}




else if (lastValuesNonZero)

{


ROS_INFO_STREAM("Sending last zero msg, then stopping.");


lastValuesNonZero = false;

}

else {


//ROS_INFO_STREAM("Joystick forces zero. Not sending");


sending = false;

}



if (endtakeover != 0){


ROS_INFO_STREAM("Sending end of take over msg.");


sendVelocityMsg(x, y, z, yaw, takeoff, land, reset, endtakeover, custom, x_aux, y_aux, left_flip, right_flip, back_flip, forward_flip);


sending = false;

}


if (custom != 0 or x_aux!=0 or y_aux!=0){


sendVelocityMsg(x, y, z, yaw, takeoff, land, reset, endtakeover, custom, x_aux, y_aux, left_flip, right_flip, back_flip, forward_flip);


sending = false;

}

if (left_flip != 0){


sendVelocityMsg(x, y, z, yaw, takeoff, land, reset, endtakeover, custom, x_aux, y_aux, left_flip, right_flip, back_flip, forward_flip);


sending = false;

}



if(sending){


sendVelocityMsg(x, y, z, yaw, takeoff, land, reset, endtakeover, custom, x_aux, y_aux, left_flip, right_flip, back_flip, forward_flip);

}
}

// ---------------------------------------------------------------------------
float Joystick::scaleStick(int value, float limit)
{
if(abs(value) < 2000) value=0;
//cout << "Analog value: " << value << " Limit value: " << limit << " Max analog value: " << MAX_ANALOG_VALUE << endl;

return value * limit / MAX_ANALOG_VALUE;
}

// ---------------------------------------------------------------------------
bool Joystick::readLatestState()
{

pad.changed = false;

int result;

while ((result = read(pad.fd, &pad.ev, sizeof(pad.ev))) > 0)

{


switch (pad.ev.type)


{


case JS_EVENT_INIT:


case JS_EVENT_INIT | JS_EVENT_AXIS:


case JS_EVENT_INIT | JS_EVENT_BUTTON:


break;


case JS_EVENT_AXIS:


pad.aPos[pad.ev.number] = pad.ev.value;


pad.changed = true;


break;


case JS_EVENT_BUTTON:


pad.bPos[pad.ev.number] = pad.ev.value;


pad.changed = true;


break;


default:


printf("Other event ? %d\nnumber=%d\nvalue=%d\n",


pad.ev.type,pad.ev.number, pad.ev.value);


break;


}

}


if (errno != EAGAIN) {


cerr << "Aborting on joystick read error: " << errno << endl;

//
requestRestart();

}


return pad.changed;
}




// ---------------------------------------------------------------------------
void Joystick::printPadState()
{

cout << "----------------------------------------------" << endl;

cout << "pad.changed: " << pad.changed << endl;

cout << "axis : " << pad.axisCount << endl;

cout << "buttons : " << pad.buttonCount << endl;

cout << "version : " << pad.version << endl;

cout << "name : " << pad.devName << endl;

cout << "----------------------------------------------" << endl;

cout << "last ev time : " << pad.ev.time << endl;


for (int i=0;i<pad.axisCount;i++) printf(" Axis %2d |",i);

printf("\n");

for (int i=0;i<pad.axisCount;i++) printf(" %7d |",pad.aPos[i]);

printf("\n\n");

for (int i=0;i<pad.buttonCount;i++) printf(" Btn.%2d |",i);

printf("\n");

for (int i=0;i<pad.buttonCount;i++) printf(" %2d |",pad.bPos[i]);

printf("\n");
}

// ---------------------------------------------------------------------------
void Joystick::sendVelocityMsg(float x, float y, float z, float yaw, bool takeoff, bool land, bool reset, bool endtakeover, bool custom, float x_aux,
float y_aux, bool left_flip, bool right_flip, bool back_flip, bool forward_flip)
{

msg_move.linear.x = VELOCITY_SCALE_X*x;

msg_move.linear.y = VELOCITY_SCALE_Y*y;

msg_move.linear.z = VELOCITY_SCALE_Z*z;

msg_move.angular.z = VELOCITY_SCALE_YAW*yaw;

msg_move.priority=true;


pub_move.publish(msg_move);

ROS_INFO_STREAM(msg_move);



if(x_aux != 0 || y_aux != 0){


msg_aux.priority=true;


msg_aux.linear.x = VELOCITY_SCALE_X*x_aux;

msg_aux.linear.y = VELOCITY_SCALE_Y*y_aux;


pub_aux.publish(msg_aux);


msg_endtakeover.priority=true;


pub_endtakeover.publish(msg_endtakeover);


ROS_INFO_STREAM("SENDING AUX " << msg_aux << std::endl);

}


if (takeoff == 1){


msg_takeoff.priority=true;


pub_takeoff.publish(msg_takeoff);


ROS_INFO_STREAM("taking off...");

}


if (land == 1){


msg_land.priority=true;


pub_land.publish(msg_land);


ROS_INFO_STREAM("landing...");

}


if (reset == 1){


msg_reset.priority=true;


pub_reset.publish(msg_reset);


ROS_INFO_STREAM("RESET");

}


if (endtakeover == 1){


msg_endtakeover.priority=true;


pub_endtakeover.publish(msg_endtakeover);


ROS_INFO_STREAM("END OF JOYSTICK TAKEOVER");

}


if (custom == 1){


msg_endtakeover.priority=true;


pub_endtakeover.publish(msg_endtakeover);


msg_custom.priority=true;


pub_custom.publish(msg_custom);


ROS_INFO_STREAM("END OF JOYSTICK TAKEOVER - CUSTOM");

}




if (left_flip == 1){


msg_left_flip.priority=true;


pub_left_flip.publish(msg_left_flip);


ROS_INFO_STREAM("LEFT FLIP!");

}


if (right_flip == 1){


msg_right_flip.priority=true;


pub_right_flip.publish(msg_right_flip);


ROS_INFO_STREAM("RIGHT FLIP!");

}


if (back_flip == 1){


msg_back_flip.priority=true;


pub_back_flip.publish(msg_back_flip);


ROS_INFO_STREAM("BACK FLIP!");

}


if (forward_flip == 1){


msg_forward_flip.priority=true;


pub_forward_flip.publish(msg_forward_flip);


ROS_INFO_STREAM("FORWARD FLIP!");

}
}

// ========================================================================
int main (int argc, char * argv[])
{


ros::init(argc, argv, "ardrone_joy");


Joystick ardrone_joy(argc, argv);


return ardrone_joy.main();

}

A.1.2 joystick.h

#ifndef _JOYSTICK_H_
#define _JOYSTICK_H_

// ========================================================== includes
#include <ros/ros.h>
#include <ardrone_msgs/Vel.h>
#include <ardrone_msgs/Priority.h>


// ====================================================== external defines
/*
* BUTTON MAPPING INFORMATION
* ~~~~~~~~~~~~~~~~~~~~~~~~~~
* Buttons appear to be indexed from 0, but otherwise keep to the same order
* as written on the joypad itself. So button 1 is pad.bPos[0], etc.
*/

#define MAX_AXIS 16
#define MAX_BUTTON 16
#define DEFAULT_DEVICE_PATH "/dev/input/js0"

typedef struct {

unsigned char axisCount;

unsigned char buttonCount;

int fd;

int version;

char devName[80];

int aPos[MAX_AXIS];

int bPos[MAX_BUTTON];

bool changed;

js_event ev;
} pad_data_t;

// =========================================================== class
class Joystick {




public:


Joystick(int argc, char ** argv);



~Joystick();



int main();

private:


void doWork();

float scaleStick(int value, float limit);

bool readLatestState();

void printPadState();

void sendVelocityMsg(float x, float y, float z, float yaw, bool takeoff, bool land, bool reset, bool takeover, bool custom, float x_aux, float
y_aux, bool left_flip, bool right_flip, bool back_flip, bool forward_flip);

ros::NodeHandle nh_;

ros::Timer doWorkTimer_;


ros::Publisher pub_move;

ros::Publisher pub_aux;

ros::Publisher pub_takeoff;

ros::Publisher pub_land;

ros::Publisher pub_reset;

ros::Publisher pub_endtakeover;

ros::Publisher pub_custom;

ros::Publisher pub_left_flip;

ros::Publisher pub_right_flip;

ros::Publisher pub_back_flip;

ros::Publisher pub_forward_flip;

ardrone_msgs::Vel msg_move;

ardrone_msgs::Vel msg_aux;

ardrone_msgs::Priority msg_takeoff;

ardrone_msgs::Priority msg_land;

ardrone_msgs::Priority msg_reset;

ardrone_msgs::Priority msg_endtakeover;

ardrone_msgs::Priority msg_custom;

ardrone_msgs::Priority msg_left_flip;

ardrone_msgs::Priority msg_right_flip;

ardrone_msgs::Priority msg_back_flip;

ardrone_msgs::Priority msg_forward_flip;

std::string deviceName;


pad_data_t pad;

bool lastValuesNonZero;

};

#endif



A.2 Spooler node
A.2.1 spooler.cpp

#include "spooler.h"

//-----------------------------------------------------------------------------------

void spooler::SendVelocityMsg(const ardrone_msgs::Vel::ConstPtr& msg_vel)
{

float linear_x= msg_vel -> linear.x;

float linear_y= msg_vel -> linear.y;

float linear_z= msg_vel -> linear.z;

float angular_z= msg_vel -> angular.z;

priority = msg_vel -> priority;


//If previous priority is false: means we are following commands from the controller, so we can keep listening to messages without priority.


if (priority==true || (priority==false && previous_priority==false) ){





msg_move.linear.x= linear_x;


msg_move.linear.y= linear_y;


msg_move.linear.z= linear_z;


msg_move.angular.z= angular_z;


pub_move.publish(msg_move);


previous_priority = priority;


//If the message was from a controller, priority is set to false, and next message with false priority will be passed





ROS_INFO_STREAM("Moving:");


ROS_INFO_STREAM("Vel x = " << linear_x);


ROS_INFO_STREAM("Vel y = " << linear_y);


ROS_INFO_STREAM("Vel z = " << linear_z);


ROS_INFO_STREAM("Ang z = " << angular_z);


ROS_INFO_STREAM("Priority: " << priority << std::endl);

}
}

//------------------------------------------------------------------------------------------

void spooler::SendTakeoffMsg(const ardrone_msgs::Priority::ConstPtr& msg)
{

priority = msg -> priority;


if(priority==true || (priority==false && previous_priority==false) ){


std_msgs::Empty msg_takeoff;


pub_takeoff.publish(msg_takeoff);


previous_priority=priority;



ROS_INFO_STREAM("Taking off.. - priority: " << priority << std::endl);

}
}

//------------------------------------------------------------------------------------------

void spooler::SendLandMsg(const ardrone_msgs::Priority::ConstPtr& msg)
{

priority = msg -> priority;


if(priority==true || (priority==false && previous_priority==false) ){


std_msgs::Empty msg_land;


pub_land.publish(msg_land);


previous_priority=priority;


ROS_INFO_STREAM("Landing.. - priority:" << priority << std::endl);

}
}

//------------------------------------------------------------------------------------------

void spooler::SendResetMsg(const ardrone_msgs::Priority::ConstPtr& msg){

priority = msg -> priority;


if(priority==true || (priority==false && previous_priority==false) ){





std_msgs::Empty msg_reset;


pub_reset.publish(msg_reset);


previous_priority=priority;


ROS_INFO_STREAM("RESET!! - priority:" << priority << std::endl);


}
}

//------------------------------------------------------------------------------------------

void spooler::EndOfTakeover(const ardrone_msgs::Priority::ConstPtr& msg){

priority = msg -> priority;

previous_priority = false;

ROS_INFO_STREAM("end of joystick takeover");

ROS_INFO_STREAM("previous priority" << previous_priority);

ROS_INFO_STREAM("priority" << priority << std::endl);

// Now the messages with false priority will be sent.
}

//-----------------------------------------------------------------------------------

void spooler::LeftFlip(const ardrone_msgs::Priority::ConstPtr& msg){

priority = msg -> priority;

if(!doing_flip){


ROS_INFO_STREAM("Left flip" << std::endl);


srv.request.type = 18;


srv.request.duration = 0;


client.call(srv);


doing_flip=true;


timer_flip.start();

}
}

//-------------------------------------------------------------------------------------

void spooler::RightFlip(const ardrone_msgs::Priority::ConstPtr& msg){

priority = msg -> priority;

if(!doing_flip){


ROS_INFO_STREAM("Right flip" << std::endl);


srv.request.type = 19;


srv.request.duration = 0;


client.call(srv);


doing_flip=true;


timer_flip.start();

}
}

//-----------------------------------------------------------------------------------

void spooler::BackFlip(const ardrone_msgs::Priority::ConstPtr& msg){

priority = msg -> priority;

if(!doing_flip){


ROS_INFO_STREAM("Back flip" << std::endl);


srv.request.type = 17;


srv.request.duration = 0;


client.call(srv);


doing_flip=true;


timer_flip.start();

}

}

//-----------------------------------------------------------------------------------

void spooler::ForwardFlip(const ardrone_msgs::Priority::ConstPtr& msg){

priority = msg -> priority;

if(!doing_flip){


ROS_INFO_STREAM("Forward flip" << std::endl);


srv.request.type = 16;


srv.request.duration = 0;


client.call(srv);


doing_flip=true;


timer_flip.start();

}
}

//------------------------------------------------------------------------------------------




void spooler::TimerCallback(const ros::TimerEvent& event){

doing_flip=false;

timer_flip.stop();
}

//--------------- CONSTRUCTOR ------------------------------------------------------------
spooler::spooler() {

sub_move = n.subscribe("spooler/cmd_vel", 1000, &spooler::SendVelocityMsg,this);

sub_takeoff = n.subscribe("spooler/takeoff", 1000, &spooler::SendTakeoffMsg,this);

sub_land = n.subscribe("spooler/land", 1000, &spooler::SendLandMsg,this);

sub_reset = n.subscribe("spooler/reset", 1000, &spooler::SendResetMsg,this);

sub_endtakeover = n.subscribe("spooler/endtakeover", 1000, &spooler::EndOfTakeover,this);

sub_left_flip = n.subscribe("spooler/left_flip", 1000, &spooler::LeftFlip,this);

sub_right_flip = n.subscribe("spooler/right_flip", 1000, &spooler::RightFlip,this);

sub_back_flip = n.subscribe("spooler/back_flip", 1000, &spooler::BackFlip,this);

sub_forward_flip = n.subscribe("spooler/forward_flip", 1000, &spooler::ForwardFlip,this);


client = n.serviceClient<ardrone_autonomy::FlightAnim>("ardrone/setflightanimation");


pub_move = n.advertise<geometry_msgs::Twist>("cmd_vel", 1000);

pub_takeoff = n.advertise<std_msgs::Empty>("ardrone/takeoff", 1000);

pub_land = n.advertise<std_msgs::Empty>("ardrone/land", 1000);

pub_reset = n.advertise<std_msgs::Empty>("ardrone/reset", 1000);


msg_move.linear.x=0;

msg_move.linear.y=0;

msg_move.linear.z=0;

msg_move.angular.z=0;

previous_priority=false;

doing_flip=false;

timer_flip = n.createTimer(ros::Duration(5), &spooler::TimerCallback, this);

timer_flip.stop();

}


//----------------------- Main ----------------------------------------------------------------

int main (int argc, char * argv[]){


ros::init(argc, argv, "ardrone_spooler");

spooler Spooler;

ros::spin();

return 0;
}

A.2.2 spooler.h

#include <ros/ros.h>
#include <ardrone_msgs/Vel.h>
#include <ardrone_msgs/Priority.h>
#include <geometry_msgs/Twist.h>
#include <std_msgs/Empty.h>
#include "ardrone_autonomy/FlightAnim.h"

class spooler {

public:

spooler();

private:

void SendVelocityMsg(const ardrone_msgs::Vel::ConstPtr& msg);

void SendTakeoffMsg(const ardrone_msgs::Priority::ConstPtr& msg);

void SendLandMsg(const ardrone_msgs::Priority::ConstPtr& msg);

void SendResetMsg(const ardrone_msgs::Priority::ConstPtr& msg);

void EndOfTakeover(const ardrone_msgs::Priority::ConstPtr& msg);

void LeftFlip(const ardrone_msgs::Priority::ConstPtr& msg);

void RightFlip(const ardrone_msgs::Priority::ConstPtr& msg);

void BackFlip(const ardrone_msgs::Priority::ConstPtr& msg);

void ForwardFlip(const ardrone_msgs::Priority::ConstPtr& msg);

void TimerCallback(const ros::TimerEvent& event);


ros::NodeHandle n;

//publishers to topics where ardrone reads
ros::Publisher pub_move;
ros::Publisher pub_takeoff;
ros::Publisher pub_land;
ros::Publisher pub_reset;

//subscribers where both controller and joystick nodes publish
ros::Subscriber sub_move;
ros::Subscriber sub_takeoff;
ros::Subscriber sub_land;
ros::Subscriber sub_reset;
ros::Subscriber sub_endtakeover;
ros::Subscriber sub_left_flip;
ros::Subscriber sub_right_flip;
ros::Subscriber sub_back_flip;
ros::Subscriber sub_forward_flip;

ros::ServiceClient client;
ardrone_autonomy::FlightAnim srv;
ros::Timer timer_flip;
geometry_msgs::Twist msg_move;

bool priority;
bool previous_priority;
bool doing_flip;
};


A.3 First controller
A.3.1 first_controller.cpp

#include "ros/ros.h"
#include "first_controller.h"

//----- Navdata Callback -----------------------------------------------------------
void controller::controlCallback(const ardrone_autonomy::Navdata::ConstPtr& msg){

//state indicated by the Navdata messages from the quadcopter
int drone_state = msg -> state;

if (state == LANDED){

ardrone_msgs::Priority msg_takeoff;

msg_takeoff.priority = false;

pub_takeoff.publish(msg_takeoff);

ROS_INFO_STREAM("State: LANDED");

state=TAKING_OFF;
}
else if (state == TAKING_OFF){

ROS_INFO_STREAM(drone_state);

ROS_INFO_STREAM("State: TAKING_OFF");



if(drone_state != 6 && drone_state != 2){ //state 6 is taking off and state 2 is landed


timer_move.start();


state=MOVING;

}
}
else if (state == MOVING){

msg_move.linear.x=0.08; //Moves forwards to a velocity of 0.08 m/s

msg_move.priority=false;

pub_move.publish(msg_move);

ROS_INFO_STREAM("State: MOVING - vel_x = " << msg -> vx);
}
else if (state == STOPPING){

msg_move.angular.z=0;

msg_move.linear.x=0;

msg_move.priority=false;

pub_move.publish(msg_move);

ROS_INFO_STREAM("State: STOPPING - vel_x = " << msg -> vx);

state=WAITING;
}
else if (state == WAITING){

ROS_INFO_STREAM("State: WAITING");
}
else if (state == START_LANDING){

timer_wait.stop();

ardrone_msgs::Priority msg_land;

msg_land.priority = false;

pub_land.publish(msg_land);

ROS_INFO_STREAM("State: START_LANDING);

state=LANDING;
}
else if (state == LANDING){

ROS_INFO_STREAM("State: LANDING");

if(drone_state == 2) // = landed


state=FINISHED;
}
else if (state == FINISHED){

ROS_INFO_STREAM_THROTTLE(5,"State: FINISHED. - Mission finished successfully");
}
}

void controller::WaitingCallback(const ros::TimerEvent& event){

state=START_LANDING;
}

void controller::MovingCallback(const ros::TimerEvent& event){

ROS_INFO_STREAM("Timer zero");

state=STOPPING;

timer_move.stop();

timer_wait.start();
}




//---------- CONSTRUCTOR -----------------------------------------------------------
controller::controller(){

sub_data = n.subscribe("ardrone/navdata", 1000, &controller::controlCallback,this);

pub_takeoff = n.advertise<ardrone_msgs::Priority>("spooler/takeoff", 1000);

pub_land = n.advertise<ardrone_msgs::Priority>("spooler/land", 1000);

pub_move = n.advertise<ardrone_msgs::Vel>("spooler/cmd_vel", 1000);

timer_wait = n.createTimer(ros::Duration(2), &controller::WaitingCallback, this);

timer_wait.stop();


waiting=false;

timer_move = n.createTimer(ros::Duration(2), &controller::MovingCallback, this);

timer_move.stop();

moving=false;

state=LANDED;

msg_move.linear.x=0;

msg_move.linear.y=0;

msg_move.linear.z=0;

msg_move.angular.z=0;

msg_move.priority=false;
}

//------------- MAIN ------------------------------------------------------------------
int main(int argc, char **argv){

ros::init(argc, argv, "controller");

std::cout << "Waiting for any key to start the controller";
std::cin.ignore(); // It doesn't take off until the user presses a key
std::cin.get();

controller NewController;
ros::spin();

return 0;
}

A.3.2 first_controller.h

#include "ros/ros.h"
#include "ardrone_autonomy/Navdata.h"
#include <ardrone_msgs/Vel.h>
#include <ardrone_msgs/Priority.h>


class controller
{

public:

controller();

void controlCallback(const ardrone_autonomy::Navdata::ConstPtr& msg);

void WaitingCallback(const ros::TimerEvent& event);

void MovingCallback(const ros::TimerEvent& event);


private:

ros::Publisher pub_takeoff;

ros::Publisher pub_land;

ros::Publisher pub_move;

ros::Subscriber sub_data;

ros::NodeHandle n;



ardrone_msgs::Vel msg_move;


ros::Timer timer_wait;

ros::Timer timer_move;

bool waiting, moving;

int seq;

enum {LANDED = 0, TAKING_OFF = 1, MOVING = 2, STOPPING = 3, WAITING = 4, START_LANDING = 5, LANDING = 6, FINISHED = 7}
state;
};



A.4 Pose control based on features (ptam)
A.4.1 pose_control_ptam.cpp

#include "pose_control_ptam.h"

//--------- Pose callback -------------------------------------------------------------------------------------
//active every time it receives the pose from PTAM
void pose_controller::poseCallback(const geometry_msgs::PoseWithCovarianceStamped::ConstPtr& msg){


if (state == LOST){

//To stop the lost timer and stop the drone if the ardrone finds the map while turning


info = "Ardrone has found the lost map - Stopping timer_lost";



//Set all velocities to zero


msg_move.linear.x = 0;


msg_move.linear.y = 0;



msg_move.linear.z = 0;


msg_move.angular.z = 0;


msg_move.priority = false;


pub_move.publish(msg_move);



//Stop the lost timer


timer_lost.stop();

}


//Since the pose from PTAM is received, the map is working, and the state is POSITION KEEPING

map_works_ = true;

current_state = POSITION_KEEPING;

info = "Doing position keeping";

time_stamp_last_msg_ = msg->header.stamp;


//Start scaling the map

if (scale == INIT_SCALING){


current_scale = "INIT_SCALING";


//Get the value of "y" of the PTAM and the altitude from the altitude sensor


z0 = msg->pose.pose.position.y;


alt0 = alt;


msg_move.linear.z = 0.3;


msg_move.priority = false;


pub_move.publish(msg_move);



timer_scale.start();


scale = MOVING;

}

else if (scale == MOVING){


//Going up


current_scale = "MOVING";

}

else if (scale == MOVED){


current_scale = "MOVED";


//Get the value of "y" of the PTAM and the altitude from the altitude sensor


z1 = msg->pose.pose.position.y;


alt1 = alt;



scale = END_SCALING;

}

else if (scale == END_SCALING){


current_scale = "END_SCALING";


alt_diff = alt1 - alt0;


alt_diff_ptam = z1 - z0;


//Calculate scale (in meters) with the two difference of altitude


sc = (alt_diff/alt_diff_ptam)/1000;


msg_move.linear.z = -0.3;


msg_move.priority = false;


pub_move.publish(msg_move);



timer_scale.start();


scale = GO_DOWN;

}

else if (scale == GO_DOWN){


//Going down to the initial position


current_scale = "GO_DOWN";

}


//Scaling the map ended


//Memorizes the initial position
if (first == true && scale == SCALED){

goal_x = -sc * msg->pose.pose.position.z;

goal_y = sc * msg->pose.pose.position.x;

goal_z = sc * msg->pose.pose.position.y;

//Quaternion to Euler

tf::quaternionMsgToTF(msg->pose.pose.orientation, q);

btMatrix3x3(q).getRPY(pitch, yaw, roll);

goal_yaw = yaw;

first = false;
}
//---------------------------------------------------------------------------------------
//Position keeping controller
margin = 0.1; //Margin of the goal position
big_margin = 2 * margin; //Big margin (to increase the speed when is far from the goal)
if (scale == SCALED && first == false){

//Angle error calculation and correction:

//Quaternion to Euler

tf::quaternionMsgToTF(msg->pose.pose.orientation, q);

btMatrix3x3(q).getRPY(pitch, yaw, roll);

error_yaw = yaw - goal_yaw;



if (fabs(error_yaw) > margin){


msg_move.angular.z = -error_yaw*0.9;

}

else {msg_move.angular.z = 0;}



if(!n.getParam("/gain", gain)){


gain = 0.07;


n.setParam("/gain", gain);

}

//X position error calculation and correction:


x = -sc * msg->pose.pose.position.z;
error_x = x - goal_x;
if (fabs(error_x) > sc*margin && fabs(error_yaw) < margin){

if (fabs(error_x) > sc*big_margin)


{msg_move.linear.x = -error_x * gain * 2;}

else


{msg_move.linear.x = -error_x * gain;}
}
else {msg_move.linear.x = 0;}
//Y position error calculation and correction:
y = sc * msg->pose.pose.position.x;
error_y = y - goal_y;
if ( fabs(error_y) > sc*margin && fabs(error_yaw) < margin){

if (fabs(error_y) > sc*big_margin)


{msg_move.linear.y = -error_y * gain * 2;}

else


{msg_move.linear.y = -error_y * gain;}
}
else {msg_move.linear.y = 0;}

//Altitude error calculation and correction:
z = sc * msg->pose.pose.position.y;
error_z = z - goal_z;
ROS_INFO_STREAM("Error_z:" << error_z);
if (fabs(error_z) > sc*margin){

if (fabs(error_z) > sc*big_margin)


{msg_move.linear.z = -error_z * gain * 2;}

else


{msg_move.linear.z = -error_z * gain;}
}
else {msg_move.linear.z = 0;}
//Publish velocity messages:
msg_move.priority = false;
pub_move.publish(msg_move);
}
}



// ------- Control Callback (active whenever the drone is running) --------------------------------------
//To control the states of the drone, to control the drone when the map from PTAM is not active and to get the altitude from the navdata
void pose_controller::controlCallback(const ardrone_autonomy::Navdata::ConstPtr& msg){

//Reading the altitude and the drone_state

alt = msg -> altd;

drone_state = msg -> state;


if (drone_state == 0){


drone = "Unknown";

} else if (drone_state == 1){


drone = "Initied";

} else if (drone_state == 2){


drone = "Landed";

} else if (drone_state == 3){


drone = "Flying";

} else if (drone_state == 4){


drone = "Hovering";

} else if (drone_state == 5){


drone = "Test";

} else if (drone_state == 6){


drone = "Taking off";

} else if (drone_state == 7){


drone = "Flying";

} else if (drone_state == 8){


drone = "Landing";

}


//To check if the ardrone is mapping

//Wait one second to get the next map message, if not received in less than a second, set map_works_ as false

if(map_works_){


ros::Duration time_diff = ros::Time::now()-time_stamp_last_msg_;


ros::Duration duration(1);


if (time_diff > duration){



info = "Map doesn't work. More than a second since the las map message";



map_works_ = false;


}

}


//---------------------------------------------------------------------------------------

//Start the procedure to create a map

if (state == TAKEOFF){


current_state = "TAKEOFF";


msg_takeoff.priority = false;


pub_takeoff.publish(msg_takeoff);



if(drone_state == 4){



state = START_INIT;


}

}

if (state == START_INIT ){ // start map init


current_state = "START_INIT";



//Set a spacebar to take the features in PTAM node


std_msgs::String msg_spacebar;


msg_spacebar.data = "Space";


pub_map.publish(msg_spacebar);



//Go up


msg_move.linear.x = 0;


msg_move.linear.y = 0;



msg_move.linear.z = 0.5;


msg_move.angular.z = 0;


msg_move.priority = false;


pub_move.publish(msg_move);



//Timer will wait 0.5 seconds and then set state END_INIT


timer_move.start();


state = MOVING_TO_INIT;

}


if(state == MOVING_TO_INIT){


current_state = "MOVING_TO_INIT";

}


if (state == END_INIT){


current_state = "END_INIT";






//Set a spacebar to take the features in PTAM node


std_msgs::String msg_spacebar;


msg_spacebar.data = "Space";


pub_map.publish(msg_spacebar);



//Stop the drone


msg_move.linear.x = 0;


msg_move.linear.y = 0;


msg_move.linear.z = 0;


msg_move.angular.z = 0;


msg_move.priority = false;


pub_move.publish(msg_move);



timer_wait.start();

}


if (state == RE_INIT){


current_state = "RE_INIT";



//Go down to the initial position to start the map again


msg_move.linear.x = 0;


msg_move.linear.y = 0;



msg_move.linear.z = -0.5;


msg_move.angular.z = 0;


msg_move.priority = false;


pub_move.publish(msg_move);



timer_move.start();

}


if (state == POSITION_KEEPING && map_works_ == false){


//If the map is lost


state = LOST;


timer_lost.start();

}


if(state==LOST){


current_state = "LOST";


info = "Ardrone is lost - Searching map";



//Rotate to find the previous map


msg_move.angular.z = (fabs(error_yaw)/(error_yaw))*-0.3;


//Set all other velocities to zero


msg_move.linear.x = 0;


msg_move.linear.y = 0;


msg_move.linear.z = 0;


msg_move.priority = false;


pub_move.publish(msg_move);


}


print();
}


// ------- Timer move -------------------------------------------------
void pose_controller::MovingCallback(const ros::TimerEvent& event) {

timer_move.stop();


if (state == RE_INIT){


state = START_INIT;

}

else if (state == MOVING_TO_INIT){


state = END_INIT;

}
}


// -------- Timer lost ------------------------------------------------
void pose_controller::LostCallback(const ros::TimerEvent& event){

timer_lost.stop();


state = START_INIT;


//Stop the rotation of the ardrone

msg_move.linear.x = 0;

msg_move.linear.y = 0;

msg_move.linear.z = 0;

msg_move.angular.z = 0;




msg_move.priority = false;

pub_move.publish(msg_move);


//And reset the map, so when it goes back to state START_INIT it will start a new map

std_msgs::String msg_reset;

msg_reset.data = "r";

pub_map.publish(msg_reset);


//Now it will start a new map

first = true;

scale = INIT_SCALING;
}


// --------- Timer wait -----------------------------------------------
void pose_controller::WaitCallback(const ros::TimerEvent& event) {

timer_wait.stop();


if (map_works_ == false){



state=RE_INIT;

}
}

// --------- Timer scale -----------------------------------------------
void pose_controller::ScaleCallback(const ros::TimerEvent& event) {

timer_scale.stop();


msg_move.linear.x = 0;

msg_move.linear.y = 0;

msg_move.linear.z = 0;

msg_move.angular.z = 0;

msg_move.priority = false;

pub_move.publish(msg_move);


if (scale == MOVING){

scale = MOVED;

}

else if (scale == GO_DOWN){

scale = SCALED;

current_scale = "SCALED";

}
}


// ---------- Init callback ----------------------------------------------
//To start the map from a desired position: it commands the drone to a desired position, starts a map and does the position keeping there
void pose_controller::initCallback(const ardrone_msgs::Priority::ConstPtr& msg){

info = "Reset map";


//Set all the initial values

timer_move.stop();

timer_lost.stop();

timer_scale.stop();

timer_wait.stop();

timer_x.stop();

timer_y.stop();


std_msgs::String msg_reset;

msg_reset.data = "r";

pub_map.publish(msg_reset);


map_works_ = false;

first = true;

time_x = false;

time_y = false;

state = START_INIT;

scale = INIT_SCALING;
}


// ----------- Waypoint callback ------------------------------------------------------
//This callback allows us to add one meter in the x or y goal position
void pose_controller::waypointCallback(const ardrone_msgs::Vel::ConstPtr& msg) {

//move the goal position forwards

if ( msg->linear.x > 0 && time_x == false){


goal_x = goal_x + 1;


info = "Goal moved 1m forwards";





timer_x.start();


time_x = true;

}

//move the goal position backwards

if ( msg->linear.x < 0 && time_x == false){


goal_x = goal_x - 1;


info = "Goal moved 1m backwards";


timer_x.start();


time_x = true;

}

//move the goal position to the right

if ( msg->linear.y > 0 && time_y == false){


goal_y = goal_y + 1;


info = "Goal moved 1m to the right";


timer_y.start();


time_y = true;

}

//move the goal position to the left

if ( msg->linear.y < 0 && time_y == false){


info = "Goal moved 1m to the left";


goal_y = goal_y - 1;


timer_y.start();


time_y = true;

}
}

//Timer "x" and timer "y" are made to accept no more than one waypoint every second
// ----- Timer "x" ----------------------------------------------
void pose_controller::xCallback(const ros::TimerEvent& event){

time_x = false;

timer_x.stop();
}

// ----- Timer "y" ----------------------------------------------
void pose_controller::yCallback(const ros::TimerEvent& event){

time_y = false;

timer_y.stop();
}


//------- Print function ----------------------------------------------
void pose_controller::print(){

ROS_INFO_STREAM("--------------POSE CONTROL--------------");
/*
ROS_INFO_STREAM("   Position      Goal position");
ROS_INFO_STREAM("x: " << x << "   " << goal_x);
ROS_INFO_STREAM("y: " << y << "   " << goal_y);
ROS_INFO_STREAM("z: " << z << "   " << goal_z << std::endl);
*/

ROS_INFO_STREAM("Drone state: " << drone);

ROS_INFO_STREAM("Code state: " << current_state);

ROS_INFO_STREAM("Scale state: " << current_scale);

ROS_INFO_STREAM("Scale: " << sc);


ROS_INFO_STREAM("Info: " << info);


ROS_INFO_STREAM("------------CONTROL COMMANDS------------");

ROS_INFO_STREAM(" Error");

ROS_INFO_STREAM("x: " << error_x);

ROS_INFO_STREAM("y: " << error_y);

ROS_INFO_STREAM("z: " << error_z);

ROS_INFO_STREAM(" Velocity commands");

ROS_INFO_STREAM("Velocity x: " << msg_move.linear.x);

ROS_INFO_STREAM("Velocity y: " << msg_move.linear.y);

ROS_INFO_STREAM("Velocity z: " << msg_move.linear.z);

ROS_INFO_STREAM("yaw:
" << error_yaw << "
" << msg_move.angular.z << std::endl);


ROS_INFO_STREAM(std::endl);
}


// ----------- Main --------------------------------------------------------
int main(int argc, char **argv){
ros::init(argc, argv, "pose_controller_with_initialitzation");

pose_controller PoseController;

ros::spin();



return 0;
}


// ------------- Constructor -------------------------------------------------------------------
pose_controller::pose_controller(){

sub_ardrone = n.subscribe("ardrone/navdata", 1000, &pose_controller::controlCallback,this);

sub_pose = n.subscribe("vslam/pose", 1000, &pose_controller::poseCallback,this);

sub_init = n.subscribe("spooler/custom", 1000, &pose_controller::initCallback,this);

sub_waypoint = n.subscribe("spooler/cmd_aux", 1000, &pose_controller::waypointCallback,this);

pub_move = n.advertise<ardrone_msgs::Vel>("spooler/cmd_vel", 1000);

pub_map = n.advertise<std_msgs::String>("vslam/key_pressed", 1000);

pub_takeoff = n.advertise<ardrone_msgs::Priority>("spooler/takeoff", 1000);


ros::Duration time_stamp_last_msg_(0.0);


timer_move = n.createTimer(ros::Duration(1), &pose_controller::MovingCallback, this);

timer_lost = n.createTimer(ros::Duration(5), &pose_controller::LostCallback, this);

timer_scale = n.createTimer(ros::Duration(2), &pose_controller::ScaleCallback, this);

timer_wait = n.createTimer(ros::Duration(1), &pose_controller::WaitCallback, this);

timer_x = n.createTimer(ros::Duration(1), &pose_controller::xCallback, this);

timer_y = n.createTimer(ros::Duration(1), &pose_controller::yCallback, this);


//Initialisation of variables

timer_move.stop();

timer_lost.stop();

timer_scale.stop();

timer_wait.stop();

timer_x.stop();

timer_y.stop();


msg_move.linear.x = 0;

msg_move.linear.y = 0;

msg_move.linear.z = 0;

msg_move.angular.z = 0;


map_works_ = false;

first = true;

time_x = false;

time_y = false;

state = TAKEOFF;

scale = INIT_SCALING;
}

A.4.2 pose_control_ptam.h

#include <ros/ros.h>
#include <std_msgs/String.h>
#include <std_msgs/Empty.h>
#include <tf/transform_datatypes.h>
#include <geometry_msgs/PoseWithCovarianceStamped.h>
#include <ardrone_msgs/Vel.h>
#include <ardrone_msgs/Priority.h>
#include <ardrone_autonomy/Navdata.h>

class pose_controller{

public:

pose_controller();


//Functions

void poseCallback(const geometry_msgs::PoseWithCovarianceStamped::ConstPtr& msg);

void controlCallback(const ardrone_autonomy::Navdata::ConstPtr& msg);

void initCallback(const ardrone_msgs::Priority::ConstPtr& msg);

void waypointCallback(const ardrone_msgs::Vel::ConstPtr& msg);

void MovingCallback(const ros::TimerEvent& event);

void LostCallback(const ros::TimerEvent& event);

void ScaleCallback(const ros::TimerEvent& event);

void WaitCallback(const ros::TimerEvent& event);

void xCallback(const ros::TimerEvent& event);

void yCallback(const ros::TimerEvent& event);

void print();

private:




ros::NodeHandle n;


//Subscribers and publishers

ros::Publisher pub_move;

ros::Publisher pub_map;

ros::Publisher pub_takeoff;

ros::Subscriber sub_pose;

ros::Subscriber sub_ardrone;

ros::Subscriber sub_init;

ros::Subscriber sub_waypoint;

ros::Timer timer_lost;

ros::Timer timer_move;

ros::Timer timer_scale;

ros::Timer timer_wait;

ros::Timer timer_x;

ros::Timer timer_y;

ardrone_msgs::Vel msg_move;

ardrone_msgs::Priority msg_takeoff;


//Node variables

ros::Time time_stamp_last_msg_;

bool first, map_works_, time_x, time_y;

float alt, alt0, alt1, alt_diff, z0, z1, alt_diff_ptam, sc;

double gain;


float x, y, z, goal_x, goal_y, goal_z, goal_yaw;

float error_x, error_y, error_z, error_yaw;

float margin, big_margin;


int drone_state;


btScalar roll, pitch, yaw;

btQuaternion q;


std::string info, current_scale, current_state, drone;



enum {TAKEOFF = 0, START_INIT = 1, MOVING_TO_INIT = 2, END_INIT = 3, WAITING = 4, RE_INIT = 5, POSITION_KEEPING = 6, LOST
= 7} state;

enum {INIT_SCALING = 0, MOVING = 1, MOVED = 2, END_SCALING = 3, GO_DOWN = 4, SCALED = 5} scale;
};



A.5 Pose control based on phase correlation
A.5.1 phase_correlation_two_images.cpp

#include <ros/ros.h>
#include "opencv2/core/core.hpp"
#include "opencv2/highgui/highgui.hpp"
#include "opencv2/imgproc/imgproc.hpp"
#include <stdio.h>
#include <stdlib.h>
#include "sensor_msgs/Image.h"
#include <image_transport/image_transport.h>
#include <cv_bridge/cv_bridge.h>
#include <sensor_msgs/image_encodings.h>

using namespace cv;
using namespace std;

int main(int argc, char ** argv){


Mat image1_in = imread("im_1.jpg", CV_LOAD_IMAGE_GRAYSCALE);


Mat image2_in = imread("im_2.jpg", CV_LOAD_IMAGE_GRAYSCALE);


imshow("image1_in", image1_in);

waitKey(100);

imshow("image2_in", image2_in);

waitKey(100);




//Image1, preparation and dft

Mat padded1; //expand input image to optimal size

int m1 = getOptimalDFTSize( image1_in.rows );

int n1 = getOptimalDFTSize( image1_in.cols ); // on the border add zero values

copyMakeBorder(image1_in, padded1, 0, m1 - image1_in.rows, 0, n1 - image1_in.cols, BORDER_CONSTANT, Scalar::all(0));


Mat planes1[] = {Mat_<float>(padded1), Mat::zeros(padded1.size(), CV_32F)};

Mat image1;

// Add to the expanded another plane with zeros. We obtain a double channel array.

// planes is the source array, image1 the destination array, 2 num. of source arrays

merge(planes1, 2, image1);


dft(image1, image1);


//Image2, preparation and dft

Mat padded2; //expand input image to optimal size

int m2 = getOptimalDFTSize( image2_in.rows );

int n2 = getOptimalDFTSize( image2_in.cols ); // on the border add zero values

copyMakeBorder(image2_in, padded2, 0, m2 - image2_in.rows, 0, n2 - image2_in.cols, BORDER_CONSTANT, Scalar::all(0));


Mat planes2[] = {Mat_<float>(padded2), Mat::zeros(padded2.size(), CV_32F)};

Mat image2;

merge(planes2, 2, image2); // Add to the expanded another plane with zeros


dft(image2, image2);



//obtain the cross power spectrum c(u,v)=F(u,v)G*(u,v)/abs(F(u,v)G*(u,v))

Mat divident, divisor, cross_power_spec, trans_mat;

mulSpectrums(image1, image2, divident, 0, true);
// divident = F(u,v)G*(u,v)
// conj=true, so the second argument will be the complex conjugate of image2

divisor = abs(divident);

divide(divident, divisor, cross_power_spec, 1);


//inverse dft of the cross_power_spec

dft(cross_power_spec, trans_mat, DFT_INVERSE);


//Normalize the trans_mat so that all the values are between 0 and 1

normalize(trans_mat, trans_mat, NORM_INF);


//Split trans_mat in it's real and imaginary parts

vector<Mat> trans_mat_vector;

split(trans_mat, trans_mat_vector);

Mat trans_mat_real = trans_mat_vector.at(0);


Mat trans_mat_im = trans_mat_vector.at(1);


imshow("trans_real", trans_mat_real);
waitKey(1000);
//Look for the maximum value and it's location on the trans_mat_real matrix
double* max_value;
Point* max_location;
double max_val;
Point max_loc;
max_value = &max_val;
max_location = &max_loc;
double* min_value;
Point* min_location;
double min_val;
Point min_loc;
min_value = &min_val;
min_location = &min_loc;
minMaxLoc(trans_mat_real, min_value, max_value, min_location, max_location);
ROS_INFO_STREAM("Image dimensions in pixels: width: " << image1.cols << " , " << "height: " << image1.rows);
ROS_INFO_STREAM("max_value: " << max_val << " - max_location: " << max_loc);

int pixel_x, pixel_y;
//Depending on the quadrant the point is located:
if(max_loc.x < (image2.cols/2) && max_loc.y < (image2.rows/2)){ // top-left quadrant

ROS_INFO_STREAM(" top - left quadrant");

pixel_x = max_loc.x;

pixel_y = - max_loc.y;
}
if(max_loc.x > (image2.cols/2) && max_loc.y > (image2.rows/2)){ // lower-right quadrant

ROS_INFO_STREAM(" lower - right quadrant");

pixel_x = - image2.cols + max_loc.x;

pixel_y = + image2.rows - max_loc.y;
}
if(max_loc.x > (image2.cols/2) && max_loc.y < (image2.rows/2)){ // top-right quadrant

ROS_INFO_STREAM(" top - right quadrant");

pixel_x = - image2.cols + max_loc.x;

pixel_y = - max_loc.y;
}
if(max_loc.x < (image2.cols/2) && max_loc.y > (image2.rows/2)){ // lower-left quadrant

ROS_INFO_STREAM(" lower - left quadrant");

pixel_x = max_loc.x;

pixel_y = image2.rows - max_loc.y;
}
ROS_INFO_STREAM("pixel_x " << pixel_x << " - pixel_y: " << pixel_y);
waitKey(0); //waits for a key to close all the windows

return 0;
}

A.5.2 phase_correlation_camera.cpp

#include "phase_correlation_camera.h"

//--------CALLBACK CONTAINING THE IMAGE----------------------

void PhaseCorrelation::controller(const sensor_msgs::ImageConstPtr& msg) {

//Transform the image to opencv format
cv_bridge::CvImagePtr cv_ptr;
try
{
cv_ptr = cv_bridge::toCvCopy(msg, enc::BGR8);
}
catch (cv_bridge::Exception& e)
{
ROS_ERROR("cv_bridge exception: %s", e.what());
return;
}

Mat image = cv_ptr->image;

imshow("image", image);
waitKey(1);
//Transform image to grayscale
cvtColor(image, image2_in, CV_RGB2GRAY); //Image2 is the newest image, the one we get from this callback
//image1 is the image2 from the last callback (we use the one that already has the dft applied.
Mat image1_dft = last_image_dft;
//Image2, preparation and dft
Mat padded2; //expand input image to optimal size
int m2 = getOptimalDFTSize( image2_in.rows );
int n2 = getOptimalDFTSize( image2_in.cols ); // on the border add zero values
copyMakeBorder(image2_in, padded2, 0, m2 - image2_in.rows, 0, n2 - image2_in.cols, BORDER_CONSTANT, Scalar::all(0));
Mat planes2[] = {Mat_<float>(padded2), Mat::zeros(padded2.size(), CV_32F)};
Mat image2,image2_dft;
merge(planes2, 2, image2); // Add to the expanded another plane with zeros
dft(image2, image2_dft);

//Image2 will be image1 on next iteration
last_image_dft = image2_dft;

//On the first frame we only compute the focal length from the camera matrix
//(values from the visp calibration) and store the image; the phase correlation
//starts from the second frame onwards.
if (first){

	Mat cameraMatrix(3, 3, CV_32F);
	cameraMatrix.at<float>(0,0) = 672.03175;
	cameraMatrix.at<float>(0,1) = 0.00000;
	cameraMatrix.at<float>(0,2) = 309.39349;
	cameraMatrix.at<float>(1,0) = 0.00000;
	cameraMatrix.at<float>(1,1) = 673.05595;
	cameraMatrix.at<float>(1,2) = 166.52006;
	cameraMatrix.at<float>(2,0) = 0.00000;
	cameraMatrix.at<float>(2,1) = 0.00000;
	cameraMatrix.at<float>(2,2) = 1.00000;

	double apertureWidth, apertureHeight, fovx, fovy, focalLength, aspectRatio;
	Point2d principalPoint;

	calibrationMatrixValues(cameraMatrix, image.size(), apertureWidth, apertureHeight, fovx, fovy, focalLength, principalPoint, aspectRatio);

	ROS_INFO_STREAM("focalLength: " << focalLength);

	fl = focalLength;

	first = false;

} else {

//obtain the cross power spectrum c(u,v)=F(u,v)G*(u,v)/abs(F(u,v)G*(u,v))

Mat divident, divisor, cross_power_spec, trans_mat;

mulSpectrums(image1_dft, image2_dft, divident, 0, true);
//divident = F(u,v)G*(u,v) --> multiply the results of the two dft's
//flags=0, conj=true, because we want image2 to be the complex conjugate

divisor = abs(divident);

divide(divident, divisor, cross_power_spec, 1);

//dft of the cross_power_spec


dft(cross_power_spec, trans_mat, DFT_INVERSE);

//Normalize the trans_mat so that all the values are between 0 and 1
normalize(trans_mat, trans_mat, NORM_INF);

//Split trans_mat in it's real and imaginary parts


vector<Mat> trans_mat_vector;
split(trans_mat, trans_mat_vector);
Mat trans_mat_real = trans_mat_vector.at(0);
Mat trans_mat_im = trans_mat_vector.at(1);

imshow("trans_mat_real", trans_mat_real);
waitKey(1);

//Look for maximum value and it's location on the trans_mat_real matrix
double* max_value;

Point* max_location;

double max_val;
Point max_loc;
max_value = &max_val;
max_location = &max_loc;

ROS_INFO_STREAM("max_value: " << max_val << " - " << "max_location: " << max_loc);

int pixel_x, pixel_y;

if(max_loc.x < (image2.cols/2) && max_loc.y < (image2.rows/2)){ // top-left quadrant



ROS_INFO_STREAM(" top - left quadrant");

pixel_x = max_loc.x;

pixel_y = - max_loc.y;
}
if(max_loc.x > (image2.cols/2) && max_loc.y > (image2.rows/2)){ // lower-right quadrant

ROS_INFO_STREAM(" lower - right quadrant");

pixel_x = - image2.cols + max_loc.x;

pixel_y = + image2.rows - max_loc.y;


}


if(max_loc.x > (image2.cols/2) && max_loc.y < (image2.rows/2)){ // top-right quadrant



ROS_INFO_STREAM(" top - right quadrant");



pixel_x = - image2.cols + max_loc.x;



pixel_y = - max_loc.y;


}


if(max_loc.x < (image2.cols/2) && max_loc.y > (image2.rows/2)){ // lower-left quadrant



ROS_INFO_STREAM(" lower - left quadrant");



pixel_x = max_loc.x;



pixel_y = image2.rows - max_loc.y;


}




//Add the new displacement to the accumulated


pixels_x = pixels_x + pixel_x;


pixels_y = pixels_y + pixel_y;


ROS_INFO_STREAM("pixels_x: " << pixels_x << " - " << "pixel_y: " << pixels_y);





//------ transform pixels to mm ---------
//(the focal length fl was already obtained from the camera matrix on the first frame)





float mov_x = pixel_x/fl*h;


float mov_y = pixel_y/fl*h;


mov_x_acc =
mov_x_acc + mov_x;


mov_y_acc = mov_y_acc + mov_y;



ROS_INFO_STREAM("mov_x: " << mov_x << " - mov_y: " << mov_y);


ROS_INFO_STREAM("mov_x_acc: " << mov_x_acc << " - mov_y_acc: " << mov_y_acc);


}

}


//-----Navdata Callback---------------------------------------

void PhaseCorrelation::navdata(const ardrone_autonomy::Navdata::ConstPtr& msg){

//h = msg -> altd;
//This can only be used when the quadcopter is flying. For hand-held trials:
h = 1000;

}

//---------------MAIN-----------------------------------------

int main(int argc, char** argv){


ros::init(argc, argv, "phase_corr_camera");




PhaseCorrelation ic;



ros::spin();


return 0;
}

//-------CONSTRUCTOR-----------
PhaseCorrelation::PhaseCorrelation() : it_(nh_) {

image_sub_ = it_.subscribe("ardrone/bottom/image_raw", 1, &PhaseCorrelation::controller, this);

navdata_sub_ = nh_.subscribe("ardrone/navdata", 1000, &PhaseCorrelation::navdata, this);

first = true;

pixels_x = 0;

pixels_y = 0;

mov_x_acc = 0;

mov_y_acc = 0;
}

A.5.3 phase_correlation_camera.h

#include <ros/ros.h>
#include <opencv2/core/core.hpp>
#include <image_transport/image_transport.h>
#include <cv_bridge/cv_bridge.h>
#include <sensor_msgs/image_encodings.h>
#include <opencv2/imgproc/imgproc.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/calib3d/calib3d.hpp>
#include "sensor_msgs/Image.h"
#include <iostream>
#include <stdio.h>
#include <stdlib.h>
#include <ardrone_autonomy/Navdata.h>

using namespace std;
using namespace cv;
namespace enc = sensor_msgs::image_encodings;

class PhaseCorrelation
{

public:

ros::NodeHandle nh_;

image_transport::ImageTransport it_;

image_transport::Subscriber image_sub_;

ros::Subscriber navdata_sub_;


PhaseCorrelation();

void controller(const sensor_msgs::ImageConstPtr& msg);

void navdata(const ardrone_autonomy::Navdata::ConstPtr& msg);


bool first;

int pixels_x, pixels_y;

float mov_x, mov_y;

float mov_x_acc, mov_y_acc;

int h;

double fl;


Mat image2_in;

Mat last_image_dft,image1_dft;
};



A.6 Pose control based on odometry
A.6.1 pose_control_odom.cpp

#include "pose_control_odometry.h"

//------ Control callback ----------------------------------------------------------------
//To start the tracking when it is required by the master
void pose_controller::controlCallback(const ardrone_autonomy::Navdata::ConstPtr& msg){

//Get the ardrone state, the battery percent and the altitude

drone_state = msg -> state;

battery = msg -> batteryPercent;

altd = msg -> altd;


//Check the battery

if (battery <= 6){


state = LANDING;


info = "BATTERY LOW";

}


if (drone_state == 0){


drone = "Unknown";

} else if (drone_state == 1){


drone = "Initied";

} else if (drone_state == 2){


drone = "Landed";

} else if (drone_state == 3){


drone = "Flying";

} else if (drone_state == 4){


drone = "Hovering";

} else if (drone_state == 5){


drone = "Test";

} else if (drone_state == 6){


drone = "Taking off";

} else if (drone_state == 7){


drone = "Flying";

} else if (drone_state == 8){


drone = "Landing";

}



if (state == TAKEOFF){


status = "TAKE OFF";


info = "Taking off";


msg_takeoff.priority = false;


pub_takeoff.publish(msg_takeoff);


if(drone_state == 4){



//Initialise zero position



x = 0;



y = 0;



z_init = msg->altd;



yaw = 0;



yaw_init = msg->rotZ;




//Change state



state = POSITION_KEEPING;



last_time = msg->header.stamp;


}

} else if (state == POSITION_KEEPING){


status = "POSITION KEEPING";


info = "Doing position keeping";


//Now update navigation information based on received data from ardrone odometry


time = msg->header.stamp;


delta_t = ((time.sec + time.nsec/10e9) - (last_time.sec + last_time.nsec/10e9));


yaw = msg->rotZ - yaw_init;


yaw_rad = yaw/180*M_PI;


x = x + msg->vx * (delta_t/1000);


y = y + msg->vy * (delta_t/1000);


z = msg->altd - z_init;


z = z / 1000;



margin = 0.1;


margin_yaw = 0.4;

//Yaw control


if (fabs(yaw_rad) > margin_yaw){



yaw_vel = pid_yaw.updatePid(yaw, time - last_time);



msg_move.angular.z = yaw_vel;



info = "Correcting position";


} else {



msg_move.angular.z = 0;


}



//X control


if (fabs(x) > margin && fabs(yaw_rad) < margin_yaw){



x_vel = pid_x.updatePid(x, time - last_time);



msg_move.linear.x = x_vel;



info = "Correcting position";


} else {



msg_move.linear.x = 0;


}



//Y control


if (fabs(y) > margin && fabs(yaw_rad) < margin_yaw){



y_vel = pid_y.updatePid(y, time - last_time);



msg_move.linear.y = y_vel;



info = "Correcting position";


} else {



msg_move.linear.y = 0;


}



//Altitude control


if (fabs(z) > margin){



z_vel = pid_z.updatePid(z, time - last_time);



msg_move.linear.z = z_vel;



info = "Correcting position";


} else {



msg_move.linear.z = 0;


}



//Publish velocity messages:


msg_move.priority = false;


pub_move.publish(msg_move);



last_time = msg->header.stamp;


} else if (state == LANDING){


status = "LANDING";


msg_land.priority = true;


pub_land.publish(msg_land);

}


print();
}


// --------- Print function -------------------------------------------------------------
void pose_controller::print(){


float sec = 0.12;

ROS_INFO_STREAM_THROTTLE(sec, "-----------ODOMETRY CONTROL--------------");
ROS_INFO_STREAM_THROTTLE(sec, "Code state: " << status);
ROS_INFO_STREAM_THROTTLE(sec, "Battery: " << battery << "%");
ROS_INFO_STREAM_THROTTLE(sec, "Drone state: " << drone);
ROS_INFO_STREAM_THROTTLE(sec, "Altitude: " << altd);
ROS_INFO_STREAM_THROTTLE(sec, "Info: " << info);
ROS_INFO_STREAM_THROTTLE(sec, "------------------CONTROL------------------");
ROS_INFO_STREAM_THROTTLE(sec, "Velocities");
ROS_INFO_STREAM_THROTTLE(sec, "----------");
ROS_INFO_STREAM_THROTTLE(sec, "Velocity x: " << msg_move.linear.x);
ROS_INFO_STREAM_THROTTLE(sec, "Velocity y: " << msg_move.linear.y);
ROS_INFO_STREAM_THROTTLE(sec, "Velocity z: " << msg_move.linear.z);
ROS_INFO_STREAM_THROTTLE(sec, "Yaw: " << msg_move.angular.z);
ROS_INFO_STREAM_THROTTLE(sec, "------------------------------------" << std::endl);
ROS_INFO_STREAM_THROTTLE(sec, std::endl);
}


// -------- Main --------------------------------------------------------

int main(int argc, char **argv){
ros::init(argc, argv, "pose_controller_with_initialitzation");

pose_controller PoseController;

ros::spin();
return 0;
}


// ------- Constructor ------------------------------------------------------------------------
pose_controller::pose_controller(){

sub_ardrone = n.subscribe("ardrone/navdata", 1000, &pose_controller::controlCallback,this);

pub_move = n.advertise<ardrone_msgs::Vel>("spooler/cmd_vel", 1000);

pub_takeoff = n.advertise<ardrone_msgs::Priority>("spooler/takeoff", 1000);

ros::Duration time_stamp_last_msg_(0.0);


//Initialisation of variables

//Initialisation of the PIDs: initPid(P, I, D, integral upper limit, integral lower limit)
pid_x.initPid(0.2, 0.1, 0.1, 0.1, -0.1);
pid_y.initPid(0.2, 0.1, 0.1, 0.1, -0.1);
pid_z.initPid(0.2, 0.1, 0.1, 0.1, -0.1);
pid_yaw.initPid(0.2, 0.1, 0.1, 0.1, -0.1);


//Set all velocities to 0

msg_move.linear.x = 0;

msg_move.linear.y = 0;

msg_move.linear.z = 0;

msg_move.angular.z = 0;



state = TAKEOFF;
}

A.6.2. pose_control_odom.h

#include <ros/ros.h>
#include <ardrone_msgs/Vel.h>
#include <ardrone_msgs/Priority.h>
#include <ardrone_autonomy/Navdata.h>
#include <std_msgs/String.h>
#include <tf/transform_datatypes.h>
#include <control_toolbox/pid.h>

class pose_controller{
public:

pose_controller();


//Functions

void controlCallback(const ardrone_autonomy::Navdata::ConstPtr& msg);

void print();

private:

ros::NodeHandle n;


//Subscribers and publishers

ros::Publisher pub_move;

ros::Publisher pub_takeoff;

ros::Publisher pub_land;

ros::Subscriber sub_ardrone;


ardrone_msgs::Vel msg_move;

ardrone_msgs::Priority msg_takeoff;

ardrone_msgs::Priority msg_land;


//Node variables

enum {TAKEOFF = 0, POSITION_KEEPING = 1, LANDING = 2} state;

double x, y, z, z_init, yaw_init, yaw, yaw_rad;

float delta_t;

double margin, margin_yaw;

double x_vel, y_vel, z_vel, yaw_vel;


control_toolbox::Pid pid_x, pid_y, pid_z, pid_yaw;

ros::Time last_time, time;

float battery;
int drone_state, altd;
std::string status, drone, info;
};

A.7 Visual servoing node
A.7.1 landing_control.cpp

#include "landing_control.h"

//--- Navdata Callback -------------------------------------------------------
void land::NavdataCallback(const ardrone_autonomy::Navdata::ConstPtr& msg){

drone_state = msg -> state;

battery = msg -> batteryPercent;

altd = msg -> altd;


if (drone_state == 0){


drone = "Unknown";

} else if (drone_state == 1){


drone = "Initied";

} else if (drone_state == 2){


drone = "Landed";

} else if (drone_state == 3){


drone = "Flying";

} else if (drone_state == 4){


drone = "Hovering";

} else if (drone_state == 5){


drone = "Test";

} else if (drone_state == 6){


drone = "Taking off";

} else if (drone_state == 7){


drone = "Flying";

} else if (drone_state == 8){


drone = "Landing";

}


//Check the battery

if (battery <= 6){


battery_low = true;


status = LANDING;


info = "BATTERY LOW";

}


if (status == TAKEOFF){


status_info = "TAKE OFF";

info = "----";


if (drone_state != 4){



//Take off



msg_takeoff.priority = false;



pub_takeoff.publish(msg_takeoff);


}



if (drone_state == 4)



status = SEARCHING;


} else if (status == SEARCHING){


status_info = "SEARCHING";


if (!init_detector){



//Wait for the target detection server to start



detection.waitForServer();



info = "Target searcher initialised";




//Send a goal to the action



visual_servoing::Tag_poseGoal detecting_goal;



detection.sendGoal(detecting_goal,
		boost::bind(&land::detection_doneCb, this, _1),
		detect_client::SimpleActiveCallback(),
		boost::bind(&land::detection_feedbackCb, this, _1));



init_detector = true;


}


} else if (status == TRACKING){


status_info = "TRACKING";


if (!init_tracker){



//Wait for the tracking server to start



tracking.waitForServer();



info = "tracking initialised";




//Send a goal to the action



visual_servoing::trackingGoal tracking_goal;

tracking.sendGoal(tracking_goal, boost::bind(&land::tracking_doneCb, this, _1), track_client::SimpleActiveCallback(),
boost::bind(&land::tracking_feedbackCb, this, _1));



init_tracker = true;


}



if (!tracking_working){


info = "Target lost!";



status = LOST;


}


} else if (status == LOST){


status_info = "LOST";


//Wait for a while: if the target is found again, go back to the TRACKING status; otherwise the lost-timer callback fires and the search is restarted


if (!tracking_working){



timer_lost.start();


} else{



timer_lost.stop();



status = TRACKING;


}



} else if (status == LANDING){


status_info = "LANDING";


//Land


//Move the drone forwards to land over the target


msg_move.linear.x = 0.3;


msg_move.linear.y = 0;


msg_move.angular.z = 0;


msg_move.priority = false;


pub_move.publish(msg_move);



if (ai){



if (battery_low == true){




msg_land.priority = true;



} else{




msg_land.priority = false;



}



pub_land.publish(msg_land);




ai = false;


}

}


//Call the printing function to show the control in the screen

print();
}


// ------ Detection feedback callback -------------------------------------------------------
void land::detection_feedbackCb(const visual_servoing::Tag_poseFeedbackConstPtr& feedback){

found = feedback->found;

foundSure = feedback->foundSure;

tag_x = feedback->tag_x;

tag_y = feedback->tag_y;

//Search for the target between 800 mm and 1300 mm (the detection works well between these altitudes)

if (altd > 1300){


msg_move.linear.z = -0.08;


msg_move.linear.x = 0;


msg_move.linear.y = 0;


msg_move.angular.z = 0;

} else if (altd < 800){


msg_move.linear.z = 0.08;


msg_move.linear.x = 0;


msg_move.linear.y = 0;


msg_move.angular.z = 0;

} else{


if (found == false){



info = "Searching the target";



msg_move.linear.x = 0.015;



msg_move.linear.y = 0;



msg_move.linear.z = 0;



msg_move.angular.z = 0;


} else{



if(foundSure == false){




//When the target is found, wait for the verification




info = "Target found. Verifying";




msg_move.linear.x = 0;

msg_move.linear.y = 0;




msg_move.linear.z = 0;




msg_move.angular.z = 0;



} else{




info = "Target verified";



}


}

}

msg_move.priority = false;

pub_move.publish(msg_move);
}


// ------ Detection done callback --------------------------------------------
void land::detection_doneCb(const actionlib::SimpleClientGoalState& state){

//Stop the drone

msg_move.linear.x = 0;

msg_move.linear.y = 0;

msg_move.linear.z = 0;

msg_move.angular.z = 0;

msg_move.priority = false;

pub_move.publish(msg_move);


//Change to tracking status

status = TRACKING;
}


// ------- Tracking feedback callback ------------------------------------------------------
void land::tracking_feedbackCb(const visual_servoing::trackingFeedbackConstPtr& feedback){

tracking_working = feedback->working;

vel_x = feedback->vel_x;

vel_y = feedback->vel_y;

tag_x = feedback->centre_x;

tag_y = feedback->centre_y;


if (tracking_working){


//Centre the target and descend


info = "Tracking working. Approaching";


msg_move.linear.x = vel_x;


msg_move.linear.y = vel_y;



//If the target is in the centre (22cm square) descend


if (abs(tag_x) > 22 || abs(tag_y) > 22){



msg_move.linear.z = 0;


} else{



msg_move.linear.z = -0.1;


}



msg_move.angular.z = 0;


msg_move.priority = false;


pub_move.publish(msg_move);


} else{


info = "TRACKING NOT WORKING!";

}
}


// ----- Tracking done callback ---------------------------------------------
void land::tracking_doneCb(const actionlib::SimpleClientGoalState& state){

if (strcmp(state.toString().c_str(),"SUCCEEDED") == 0){


//Move the drone forwards to land over the target


msg_move.linear.x = 0.3;


msg_move.linear.y = 0;


msg_move.angular.z = 0;


msg_move.priority = false;


pub_move.publish(msg_move);



for(int i = 0; i < 5; i++ ){}


//Change to landing status


status = LANDING;


info = "Mission finished?";


} else{
//The state is not SUCCEEDED, that means that the goal has been preempted, so search the target again


status = SEARCHING;

}
}


// ------ Timer lost ------------------------------------------
void land::LostCallback(const ros::TimerEvent& event){

timer_lost.stop();

tracking_working = false;

init_tracker = false;

init_detector = false;

tracking.cancelGoal();
}


// ------ Print function ------------------------------------------------------------
void land::print(){


float sec = 0.05;

ROS_INFO_STREAM_THROTTLE(sec, "--------------LANDING CONTROL--------------");
ROS_INFO_STREAM_THROTTLE(sec, "Code status: " << status_info);
ROS_INFO_STREAM_THROTTLE(sec, "Battery: " << battery << "%");
ROS_INFO_STREAM_THROTTLE(sec, "Drone state: " << drone);
ROS_INFO_STREAM_THROTTLE(sec, "Altitude: " << altd);
ROS_INFO_STREAM_THROTTLE(sec, "INFO: " << info);
ROS_INFO_STREAM_THROTTLE(sec, "Centre of the target: " << tag_x << ", " << tag_y);
ROS_INFO_STREAM_THROTTLE(sec, "------------------CONTROL------------------");
ROS_INFO_STREAM_THROTTLE(sec, "Velocities");
ROS_INFO_STREAM_THROTTLE(sec, "----------");
ROS_INFO_STREAM_THROTTLE(sec, "Velocity x: " << msg_move.linear.x);
ROS_INFO_STREAM_THROTTLE(sec, "Velocity y: " << msg_move.linear.y);
ROS_INFO_STREAM_THROTTLE(sec, "Velocity z: " << msg_move.linear.z);
ROS_INFO_STREAM_THROTTLE(sec, "Yaw: " << msg_move.angular.z);
ROS_INFO_STREAM_THROTTLE(sec, "-----------------------------------" << std::endl);
ROS_INFO_STREAM_THROTTLE(sec, std::endl);
}


// -------- Main ---------------------------------------------------------
int main(int argc, char **argv){

ros::init(argc, argv, "landing_controller");


land landing_controller;


ros::spin();

return 0;
}


// -------- Constructor -------------------------------------------------------------------
land::land():
//true causes the client to spin its own thread
detection("target_detection", true),
tracking("tracking", true)
{

//Set to hd camera

client = nh_.serviceClient<ardrone_autonomy::CamSelect>("ardrone/setcamchannel");

srv.request.channel = 0;

client.call(srv);


sub_ardrone = nh_.subscribe("ardrone/navdata", 1000, &land::NavdataCallback,this);

pub_takeoff = nh_.advertise<ardrone_msgs::Priority>("spooler/takeoff", 1000);

pub_move = nh_.advertise<ardrone_msgs::Vel>("spooler/cmd_vel", 1000);

pub_land = nh_.advertise<ardrone_msgs::Priority>("spooler/land",1000);


//Initialisation of variables

//Set all velocities to 0

msg_move.linear.x=0;

msg_move.linear.y=0;

msg_move.linear.z=0;

msg_move.angular.z=0;


timer_lost = nh_.createTimer(ros::Duration(3), &land::LostCallback, this);

timer_lost.stop();


status = TAKEOFF;

tracking_working = false;

init_takeoff = false;
init_tracker = false;
init_detector = false;
battery_low = false;
ai = true;
altd = 0;
tag_x = 0;
tag_y = 0;
drone_state = 0;

info = "Landing control node started";
print();
}

A.7.2 landing_control.h

#include <ros/ros.h>
#include <iostream>
#include <string>
#include <boost/thread.hpp>

#include <actionlib/client/simple_action_client.h>
#include <actionlib/client/terminal_state.h>
#include <visual_servoing/Tag_poseAction.h>
#include <visual_servoing/trackingAction.h>

#include <ardrone_msgs/Vel.h>
#include <ardrone_msgs/Priority.h>
#include <ardrone_autonomy/Navdata.h>
#include <ardrone_autonomy/CamSelect.h>

typedef actionlib::SimpleActionClient<visual_servoing::Tag_poseAction> detect_client;
typedef actionlib::SimpleActionClient<visual_servoing::trackingAction> track_client;

class land{

public:

land();
~land(void){}


//Functions

void NavdataCallback(const ardrone_autonomy::Navdata::ConstPtr& msg);

void detection_feedbackCb(const visual_servoing::Tag_poseFeedbackConstPtr& feedback);

void detection_doneCb(const actionlib::SimpleClientGoalState& state);

void tracking_feedbackCb(const visual_servoing::trackingFeedbackConstPtr& feedback);

void tracking_doneCb(const actionlib::SimpleClientGoalState& state);

void pid(float gain, float error_x, float error_y);

void LostCallback(const ros::TimerEvent& event);

void print();

private:

ros::NodeHandle nh_;


//Create the action clients

detect_client detection;

track_client tracking;



//Subscribers and publishers

ros::Subscriber sub_ardrone;

ros::Subscriber sub_tag_feedback;

ros::Publisher pub_takeoff;

ros::Publisher pub_move;

ros::Publisher pub_land;


ardrone_msgs::Priority msg_takeoff;

ardrone_msgs::Vel msg_move;

ardrone_msgs::Priority msg_land;


//To set the bottom camera

ros::ServiceClient client;

ardrone_autonomy::CamSelect srv;


//Node variables

enum {TAKEOFF = 0, SEARCHING = 1, TRACKING = 2, LOST = 3, LANDING = 4} status;

int drone_state, t;
float battery;
bool searching, target_found, landing, battery_low, ai;
bool found, foundSure, tracking_working;
bool init_takeoff, init_detector, init_tracker;
int altd, tag_x, tag_y;
float vel_x, vel_y;
float velo_x, velo_y, velo_z;
ros::Time last_time, time;
ros::Timer timer_lost;
std::string info, status_info, drone;
};

A.7.3 tag_detection_server.cpp

#include "tag_detection_server.h"

namespace enc = sensor_msgs::image_encodings;

// ---------- Goal callback ----------------------------------------------------------------------
//To start the target searching when it is required by the master
void TagDetection::goalCB(){

start = true;


Server_.acceptNewGoal();
//Make sure the goal has not been cancelled between being received and being accepted, since the preempt callback is not triggered during that period

if(Server_.isPreemptRequested()){


preemptCB();

}
}

// ------------- Preempt callback ----------------------------------------
//to cancel the goal when required by the master
void TagDetection::preemptCB(){

Server_.setPreempted();

restart = false;
}


// --------- Camera image callback ---------------------------------------------------------------
//contains the images
void TagDetection::image_cam(const sensor_msgs::ImageConstPtr& msg){

if (start){



//If a restart of the search is required, set all the variables back to their initial values


if (restart){



start = false;



try_next = 1;



trying = 5;



found = false;



found_sure = 0;



foundSure = false;



start = false;



templ_loaded = false;



restart = false;




} else{



//Get a frame



cv_bridge::CvImagePtr cv_ptr;



try{




cv_ptr = cv_bridge::toCvCopy(msg, enc::BGR8);



}



catch (cv_bridge::Exception& e){




ROS_ERROR("tag_detection_server: cv_bridge exception: %s", e.what());




return;



}



frame = cv_ptr->image;




//Call the function to find the squares



findSquares(frame, squares);




//Show the detected squares


/*fr_squares = frame.clone();



drawSquares(fr_squares, squares);



imshow("Squares detection", fr_squares);*/




//Call the function to compare the template with the found squares



findTarget(frame, squares, rotated, templ);




//Show me what you got



imshow("Target detection", frame);



//imshow("Square normalized", rotated);



//imshow("Result", result);




if (!found){




found_sure = 0;




foundSure = false;



}




//Send the feedback to the master



feedback_.found = found;



feedback_.foundSure = foundSure;



feedback_.tag_x = tag_x - frame.size().width/2;



feedback_.tag_y = tag_y - frame.size().height/2;



Server_.publishFeedback(feedback_);




//To publish the video



cv_bridge::CvImagePtr cv_ptr_out(new cv_bridge::CvImage);



cv_ptr_out->image = frame;



cv_ptr_out->encoding = "bgr8";



sensor_msgs::Image message;



cv_ptr_out->toImageMsg(message);



image_pub_.publish(message);




waitKey(3);


}

}
}


// ---------- findSquares function -----------------------------------------------------------------------------
//Returns a sequence of squares detected on the image. The sequence is stored in the specified memory storage
void TagDetection::findSquares(const Mat& image, vector<vector<Point> >& squares){

squares.clear();


Mat pyr, timg, gray0(image.size(), CV_8U), gray;

vector<vector<Point> > contours;


//Down-scale and upscale the image to filter out the noise

pyrDown(image, pyr, Size(image.cols/2, image.rows/2));

pyrUp(pyr, timg, image.size());


//Find squares in only one color plane of the image

int ch[] = {1, 0};

mixChannels(&timg, 1, &gray0, 1, ch, 1);


//Apply Canny. Take the upper threshold 50 and set the lower to 0 (which forces edges merging)

Canny(gray0, gray, 0, 50, 5);

//Dilate canny output to remove potential holes between edge segments

dilate(gray, gray, Mat(), Point(-1,-1));


//Find contours and store them all as a list

findContours(gray, contours, CV_RETR_LIST, CV_CHAIN_APPROX_SIMPLE);


vector<Point> approx;


//Test each contour

for(size_t i = 0; i < contours.size(); i++){


//Approximate contour with accuracy proportional to the contour perimeter


approxPolyDP(Mat(contours[i]), approx, arcLength(Mat(contours[i]), true)*0.02, true);

//Square contours should have 4 vertices after approximation, a relatively large area (to filter out noisy contours), be convex, and be smaller than 80% of the frame area
//Note: the absolute value of the area is used because the area may be positive or negative, in accordance with the contour orientation

if( approx.size() == 4 && fabs(contourArea(Mat(approx))) > 1000 &&
    contourArea(Mat(approx)) < image.size().width*image.size().height*0.8 && isContourConvex(Mat(approx)) ){




double maxCosine = 0;

for( int j = 2; j < 5; j++ ){




//Find the maximum cosine of the angle between joint edges

double cosine = fabs(angle(approx[j%4], approx[j-2], approx[j-1]));

maxCosine = MAX(maxCosine, cosine);
}
//If the cosines of all angles are small (all angles are ~90 degrees) then write the quadrangle vertices to the resultant sequence
if(maxCosine < 0.1){ squares.push_back(approx); }









}

}
}


// ----- Angle function --------------------------------------------------------------------
//Helper for findSquares: returns the cosine of the angle between the edges (pt1, pt0) and (pt2, pt0) of a contour
double TagDetection::angle(Point pt1, Point pt2, Point pt0){

double dx1 = pt1.x - pt0.x;

double dy1 = pt1.y - pt0.y;

double dx2 = pt2.x - pt0.x;

double dy2 = pt2.y - pt0.y;


return (dx1*dx2 + dy1*dy2)/sqrt((dx1*dx1 + dy1*dy1)*(dx2*dx2 + dy2*dy2) + 1e-10);
}

//------- drawSquares function ---------------------------------------------------------------------
//Draws all the detected squares in the image
void TagDetection::drawSquares( Mat& image, const vector<vector<Point> >& squares ){

for(size_t i = 0; i < squares.size(); i++){


const Point* p = &squares[i][0];


int n = (int)squares[i].size();


polylines(image, &p, &n, 1, true, Scalar(0,255,0), 1, CV_AA);

}
}

//------- findTarget function ---------------------------------------------------------------------
//Compare the squares with the template
void TagDetection::findTarget(Mat& image, const vector<vector<Point> >& square, Mat& rotated, Mat& templ){

//Load the template

if (templ_loaded == false){


templ = imread("/home/marc/electric_workspace/visual_servoing/templ_H.jpg", 1);


if (templ.empty()){



cout << "tag_detection_server: Cannot open the target image" << endl;


}


resize(templ, templ, Size(), 1, 1, INTER_NEAREST);


templ_loaded = true;

}


if (square.empty()){


found = false;

} else{


//Normalize the squares detected


for(size_t i = 0; i < square.size(); i++){




//Get the four points of the square



vector<Point> orig_square = square[i];



vector<Point> not_a_rect_shape;



not_a_rect_shape.push_back(Point(orig_square[0]));



not_a_rect_shape.push_back(Point(orig_square[1]));



not_a_rect_shape.push_back(Point(orig_square[2]));



not_a_rect_shape.push_back(Point(orig_square[3]));



RotatedRect box = minAreaRect(Mat(not_a_rect_shape));



box.points(pts);




if (try_next == 1){




src_vertices[0] = pts[2];




src_vertices[1] = pts[3];




src_vertices[2] = pts[1];




src_vertices[3] = pts[0];



} else if (try_next == 2){




src_vertices[0] = pts[1];




src_vertices[1] = pts[2];




src_vertices[2] = pts[0];




src_vertices[3] = pts[3];



} else if (try_next == 3){




src_vertices[0] = pts[0];

src_vertices[1] = pts[1];




src_vertices[2] = pts[3];




src_vertices[3] = pts[2];



} else if (try_next == 4){




src_vertices[0] = pts[3];




src_vertices[1] = pts[0];




src_vertices[2] = pts[2];




src_vertices[3] = pts[1];



}




Point2f dst_vertices[4];



dst_vertices[0] = Point(0, 0);



dst_vertices[1] = Point(box.boundingRect().width-1, 0);



dst_vertices[2] = Point(0, box.boundingRect().height-1);



dst_vertices[3] = Point(box.boundingRect().width-1, box.boundingRect().height-1);




//Normalise the detected square: warp it to a fronto-parallel view so it can be compared with the template



warpAffineMatrix = getAffineTransform(src_vertices, dst_vertices);



Size size(box.boundingRect().width, box.boundingRect().height);



warpAffine(image, rotated, warpAffineMatrix, size, INTER_LINEAR, BORDER_CONSTANT);




resize(rotated, rotated, templ.size(), 1, 1, INTER_NEAREST);



resize(rotated, rotated, Size(), 2.2, 2.2, INTER_NEAREST);




//Call the MatchingMetod to search the template in the resized square



MatchingMethod(0,0);




if(found == true){ break; }


}

}
}


// ---------- function MatchingMethod -------------------------------------------------
void TagDetection::MatchingMethod(int, void*){

if (rotated.empty()){


ROS_INFO_THROTTLE(2, "tag_detection_server: No possible template in frame");


found = false;


} else{


img_display = frame.clone();



//Create the result matrix


int result_cols = rotated.cols - templ.cols + 1;


int result_rows = rotated.rows - templ.rows + 1;



result.create( result_rows, result_cols, CV_32FC1 );



//Do the Matching and Normalize. Search the template in the rotated frame


matchTemplate(rotated, templ, result, CV_TM_SQDIFF);


normalize(result, result, 0, 1, NORM_MINMAX, -1, Mat());



//Localizing the best match with minMaxLoc


double minVal, maxVal;


Point minLoc, maxLoc, matchLoc;



minMaxLoc(result, &minVal, &maxVal, &minLoc, &maxLoc, Mat());


matchLoc = minLoc;



int x = matchLoc.x;


int y = matchLoc.y;



found = false;

//Check whether the template could be inside the detected square: if the template is there, the matchLoc point should lie near the centre of the result window


if (x > result.cols/3 && x < result.cols*2/3 && y > result.rows/5 && y < result.rows*4/5){



rectangle(result, Point(result.cols*2/5, result.rows*3/7), Point(result.cols*3/5, result.rows*4/7), Scalar(0,0,0), 2, 8, 0);



//rectangle(result, Point(result.cols/3, result.rows/5), Point(result.cols*2/3, result.rows*4/5), Scalar(0,0,0), 2, 8, 0);



possible = true;



//Possible target!




if (x > result.cols*2/5 && x < result.cols*3/5 && y > result.rows*3/7 && y < result.rows*4/7){




found = true;




//Target found!!



}

//If it is found, wait a few frames and draw a green rectangle on the target; if not, try to rotate the candidate square if there is a possible target


if (found == true){



//The target has to be found in 5 consecutive frames to set foundSure to true



found_sure++;



if (found_sure >= 5 || (trying < 5 && trying > 0)){




foundSure = true;



} else{




foundSure = false;



}




//Draw a rectangle in the target



if (foundSure == false){




//Show me what you got




rectangle(rotated, matchLoc, Point(x + templ.cols, y + templ.rows), Scalar(0,150,255), 2, 8, 0);





//Draw an orange rectangle in the target




line(frame, src_vertices[0], src_vertices[1], Scalar(0,150,255), 5, 8, 0);




line(frame, src_vertices[1], src_vertices[3], Scalar(0,150,255), 5, 8, 0);




line(frame, src_vertices[3], src_vertices[2], Scalar(0,150,255), 5, 8, 0);




line(frame, src_vertices[2], src_vertices[0], Scalar(0,150,255), 5, 8, 0);




} else{




//Show me what you got




rectangle(rotated, matchLoc, Point(x + templ.cols, y + templ.rows), Scalar(0,255,0), 2, 8, 0);





//Draw a green rectangle in the target




line(frame, src_vertices[0], src_vertices[1], Scalar(0,255,0), 5, 8, 0);




line(frame, src_vertices[1], src_vertices[3], Scalar(0,255,0), 5, 8, 0);




line(frame, src_vertices[3], src_vertices[2], Scalar(0,255,0), 5, 8, 0);




line(frame, src_vertices[2], src_vertices[0], Scalar(0,255,0), 5, 8, 0);





//Find the centre of the target




if (src_vertices[0].x < src_vertices[3].x){





d_x = src_vertices[3].x - src_vertices[0].x;





tag_x = src_vertices[0].x + d_x/2;




} else{





d_x = src_vertices[0].x-src_vertices[3].x;





tag_x = src_vertices[3].x + d_x/2;




}





if (src_vertices[0].y < src_vertices[3].y){





d_y = src_vertices[3].y-src_vertices[0].y;





tag_y = src_vertices[0].y + d_y/2;




} else{





d_y = src_vertices[0].y-src_vertices[3].y;





tag_y = src_vertices[3].y + d_y/2;




}




//Set the params to send the centre of the target to the tracking node



nh_.setParam("/target/x", tag_x);



nh_.setParam("/target/y", tag_y);




//Set the action state to succeeded



Server_.setSucceeded(result_);



restart = true;



}



//If found is false but possible is true


} else if(possible == true){



//If there is no matching target, try to rotate the image



if (try_next == 4){ try_next = 1; } else { try_next = try_next + 1; }

trying++;


}
}

return;

}
}


// --------------- Main -------------------------------------------------
int main(int argc, char** argv){

ros::init(argc, argv, "visual_servoing");

TagDetection Target_detection;


ros::spin();

return 0;
}


// --------------- Constructor -----------------------------------------------------------------------
TagDetection::TagDetection():
it_(nh_),
Server_(nh_,"target_detection", false)
{

//Register the Goal and the Preept Callbacks

Server_.registerGoalCallback(boost::bind(&TagDetection::goalCB,this));

Server_.registerPreemptCallback(boost::bind(&TagDetection::preemptCB,this));


image_sub_ = it_.subscribe("ardrone/front/image_rect_color", 1, &TagDetection::image_cam, this);

image_pub_ = it_.advertise("detecting/image", 1);


//Start the server

Server_.start();


//Initialisation of variables

try_next = 1;

trying = 5;

found = false;

foundSure = false;

start = false;

templ_loaded = false;

restart = false;
}

A.7.4 target_detection_server.h

#include <ros/ros.h>
#include <stdio.h>
#include <image_transport/image_transport.h>
#include <cv_bridge/cv_bridge.h>
#include <sensor_msgs/image_encodings.h>
#include "sensor_msgs/Image.h"

#include <actionlib/server/simple_action_server.h>
#include <visual_servoing/Tag_poseAction.h>

#include <opencv2/core/core.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <opencv2/highgui/highgui.hpp>
#include "opencv2/objdetect/objdetect.hpp"
#include "opencv2/video/tracking.hpp"
#include "opencv2/calib3d/calib3d.hpp"

using namespace std;
using namespace cv;

namespace enc = sensor_msgs::image_encodings;
typedef actionlib::SimpleActionServer<visual_servoing::Tag_poseAction> TagServer;

class TagDetection{

public:

TagDetection();
~TagDetection(void){}


//Functions

void goalCB();

void preemptCB();

void image_cam(const sensor_msgs::ImageConstPtr& msg);

double angle( Point pt1, Point pt2, Point pt0 );

void findSquares( const Mat& image, vector<vector<Point> >& squares );

void drawSquares( Mat& image, const vector<vector<Point> >& squares );

void findTarget( Mat& image, const vector<vector<Point> >& square, Mat& rotated, Mat& templ );

void MatchingMethod( int, void* );
protected:

ros::NodeHandle nh_;

//Subscribers and publishers

image_transport::ImageTransport it_;

image_transport::Subscriber image_sub_;

image_transport::Publisher image_pub_;

//Create the server and messages that are used to published feedback/result

TagServer Server_;

visual_servoing::Tag_poseFeedback feedback_;

visual_servoing::Tag_poseResult result_;
private:

//Node variables

bool start, restart, templ_loaded, possible, found, foundSure;

int try_next, found_sure, trying;

double d_x, d_y, tag_x, tag_y;


vector<vector<Point> > squares;

Mat frame, fr_squares, rotated, templ, img_display, result, gray_frame, warpAffineMatrix;

Point2f src_vertices[4], pts[4];
};

A.7.5 tracker_server.cpp

#include "tracker_server.h"

// ----- Goal callback ------------------------------------------------------------
//To start the tracking when it is required by the master
void Tracking::goalCB(){

start = true;


Server_.acceptNewGoal();
//Make sure the goal has not been cancelled between being received and being accepted, since the preempt callback is not triggered during that period

if(Server_.isPreemptRequested()){


preemptCB();

}
}

// ------ Preempt callback -------------------------------------
//to cancel the goal when required by the master
void Tracking::preemptCB(){

Server_.setPreempted();

start = false;

first = true;

working = false;

taskPtr->kill();

delete taskPtr;
}


// ------ Navdata callback ------------------------------------------------------
//To get the altitude and the battery percentage
void Tracking::NavdataCallback(const ardrone_autonomy::Navdata::ConstPtr& msg){

altd = msg -> altd;

battery = msg -> batteryPercent;
}


// ------ Camera info callback --------------------------------------------------
//to get the camera parameters
void Tracking::camerainfo(const sensor_msgs::CameraInfo& msg){

sensor_msgs::CameraInfo cam_info=msg;

cam = visp_bridge::toVispCameraParameters(cam_info);
}


// ------ Controller callback ----------------------------------------------------
//contains the images
void Tracking::controller(const sensor_msgs::ImageConstPtr& msg){

if (start){


cv_bridge::CvImagePtr cv_ptr;


try{



cv_ptr = cv_bridge::toCvCopy(msg, enc::BGR8);


}


catch (cv_bridge::Exception& e){



ROS_ERROR("cv_bridge exception: %s", e.what());

return;
}
frame = cv_ptr->image;

//Convert image to gray and to VISP vpImage class


cvtColor(frame, frame, CV_RGB2GRAY);
vpImageConvert::convert(frame, I);

//If first is true, do the initialisation


if(first){

//Display initialisation

d.init(I, 0, 0, "Tracking");

vpDisplay::display(I);
vpDisplay::flush(I);

//Initialise the tracking


dot.setGrayLevelPrecision(0.9);
//To allow the tracking of a non-ellipsoid shape
dot.setEllipsoidShapePrecision(0);
//Get the points of the centre of the target
nh_.getParam("/target/x", target_x);
nh_.getParam("/target/y", target_y);
ip.set_i(target_y);
ip.set_j(target_x);

dot.initTracking(I, ip);
vpDisplay::flush(I);

//Initialize visual servoing task


cog = dot.getCog();
tag.setWorldCoordinates(I.getHeight()/2, I.getWidth()/2, 1);
vpFeatureBuilder::create(sd,tag);

taskPtr = new vpServo();


taskPtr->setServo(vpServo::EYEINHAND_CAMERA);
taskPtr->setInteractionMatrixType(vpServo::DESIRED);
taskPtr->addFeature(s,sd);

first = false;

//Set the lambda of the visual servoing depending on the battery


if (altd >= 700){

if (battery > 70){


lambda = 0.33;

} else if(battery > 50){


lambda = 0.35;

} else{


lambda = 0.32;

}
} else{

if (battery > 70){


lambda = 0.25;

} else if(battery > 50){


lambda = 0.22;

} else{


lambda = 0.18;

}
}
taskPtr->setLambda(lambda);
}

vpDisplay::display(I);
//Try to do the tracking of the dot, if it fails, set working to false
try{

//Track the dot

dot.track(I);

dot.display(I, vpColor::cyan, 3);

vpFeatureBuilder::create(s, cam, dot);

v = taskPtr->computeControlLaw();

working = true;
}
catch (...){

working = false;

SendPose();
}

//Get the centre of gravity of the dot (which is the centre of the target) if it is tracked


if (working){



cog = dot.getCog();



centre_x = -(cog.get_i() - frame.size().height/2);



centre_y = cog.get_j() - frame.size().width/2;




//Transformation the target centre into cm



k = (-0.03632) * altd + (32.802867);



centre_x = centre_x / k;



centre_y = centre_y / k;


}



//Set the velocities to send to the drone calculated with the visual servoing task


vel_x = -v[1];


vel_y = -v[0];



//Display the centre of the dot and the goal position of it in the image


vpDisplay::displayCross(I, vpImagePoint(I.getHeight()/2, I.getWidth()/2), 20, vpColor::red, 2);



vpDisplay::flush(I);



//To publish the video


vpDisplay::getImage(I,Ivideo);


vpImageConvert::convert(Ivideo, video);


cv_bridge::CvImagePtr cv_ptr_out(new cv_bridge::CvImage);


cv_ptr_out->image = video;


cv_ptr_out->encoding = "bgr8";


sensor_msgs::Image message;


cv_ptr_out->toImageMsg(message);


image_pub_.publish(message);



//If the tracking is working, the altitude is low enough and the target is centred, finish the approach


if (working && altd < 275 && abs(centre_x) < 30 && abs(centre_y) < 30){


//If it is difficult to land, remove the condition that the target must be centred



if (sure){




//Move forwards to land over the target




vel_x = 0.3;




SendPose();





//Set the action state to succeeded




Server_.setSucceeded(result_);




start = false;



}



sure = true;


}



//Call the function to send the feedback to the master


SendPose();

}
}


// ------ SendPose function ---------------------------
//To send the feedback to the master
void Tracking::SendPose(){

feedback_.working = working;

feedback_.vel_x = vel_x;

feedback_.vel_y = vel_y;

feedback_.centre_x = centre_x;

feedback_.centre_y = centre_y;

Server_.publishFeedback(feedback_);
}

// -------- Main ----------------------------------------
int main(int argc, char** argv){

ros::init(argc, argv, "tracker_pub");


Tracking ic;


ros::spin();

return 0;
}

//---------- Destructor --------------------------------------

Tracking::~Tracking(){

taskPtr->kill();

delete taskPtr;
}

// --------- Constructor ---------------------------------------------------------------------------
Tracking::Tracking():
it_(nh_),
Server_(nh_,"tracking", false)
{

//Register the Goal and the Preept Callbacks

Server_.registerGoalCallback(boost::bind(&Tracking::goalCB,this));

Server_.registerPreemptCallback(boost::bind(&Tracking::preemptCB,this));


sub_ardrone = nh_.subscribe("ardrone/navdata", 1000, &Tracking::NavdataCallback,this);

image_sub_ = it_.subscribe("ardrone/front/image_rect_color", 1, &Tracking::controller, this);

camera_info_sub_ =nh_.subscribe("ardrone/front/camera_info", 1, &Tracking::camerainfo, this);

image_pub_ = it_.advertise("tracking/image", 1);


//Start the server

Server_.start();


//Initialisation of variables

start = false;

first = true;

working = true;

sure = false;
}

A.7.6 tracker_server.h

#include <ros/ros.h>
#include <iostream>
#include <stdio.h>
#include <stdlib.h>
#include <image_transport/image_transport.h>
#include <cv_bridge/cv_bridge.h>
#include <sensor_msgs/image_encodings.h>
#include <opencv2/core/core.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <opencv2/highgui/highgui.hpp>
#include "sensor_msgs/Image.h"
#include <ardrone_autonomy/Navdata.h>
#include <conversions/camera.h>

#include <actionlib/server/simple_action_server.h>
#include <visual_servoing/trackingAction.h>

#include <visp/vpConfig.h>
#include <visp/vpImage.h>
#include <visp/vpDot2.h>
#include <visp/vpOpenCVGrabber.h>
#include <visp/vpDisplayOpenCV.h>
#include <visp/vpDisplayGDI.h>
#include <visp/vpDisplayX.h>
#include <visp/vpPoint.h>
#include <visp/vpServo.h>
#include <visp/vpFeatureBuilder.h>
#include <visp/vpServoDisplay.h>
#include <visp/vpConfig.h>
#include <visp/vpVideoWriter.h>

using namespace std;
using namespace cv;
namespace enc = sensor_msgs::image_encodings;

typedef actionlib::SimpleActionServer<visual_servoing::trackingAction> TrackServer;

class Tracking{

public:

Tracking();

~Tracking();


//Functions

void goalCB();

void preemptCB();

void controller(const sensor_msgs::ImageConstPtr& msg);

void NavdataCallback(const ardrone_autonomy::Navdata::ConstPtr& msg);

void camerainfo(const sensor_msgs::CameraInfo& msg);

void SendPose();

protected:

ros::NodeHandle nh_;


//Subscribers and publishers

image_transport::ImageTransport it_;

image_transport::Subscriber image_sub_;

ros::Subscriber sub_ardrone;

ros::Subscriber camera_info_sub_;

image_transport::Publisher image_pub_;


//Create the server and messages that are used to published feedback/result

TrackServer Server_;

visual_servoing::trackingFeedback feedback_;

visual_servoing::trackingResult result_;


//Node variables

//Visual servoing variables

vpImage<unsigned char> I;

vpDisplayOpenCV d;

vpServo* taskPtr;

vpDot2 dot;

vpPoint tag;

vpImagePoint ip, cog;

vpCameraParameters cam;

vpFeaturePoint s, sd;

vpColVector v;

vpImage<vpRGBa> Ivideo;

//Other node variables

Mat frame, video;

bool start, first, working, sure;

double target_x, target_y;

double lambda;

float vel_x, vel_y, k;

int centre_x, centre_y, altd;

float battery;
};

A.8 Ardrone messages
A.8.1 Pose.msg

Vector3 real

Vector3 pixels

bool priority

A.8.2 Priority.msg

bool priority

A.8.3 Vector3.msg

# This represents a vector in free space.

float64 x
float64 y
float64 z

A.8.4 Vel.msg

Vector3 linear

Vector3 angular

bool priority
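As a usage reference only (this small node is not part of the project code, and its name and velocity values are arbitrary), the sketch below shows how the custom ardrone_msgs messages defined above can be filled in and published to the spooler topics used by the control nodes in this appendix. In the real nodes the publishers are created once in the constructor and the messages are sent from callbacks, as in the listings above.

//Illustrative sketch only: publishing the custom ardrone_msgs messages
#include <ros/ros.h>
#include <ardrone_msgs/Vel.h>
#include <ardrone_msgs/Priority.h>

int main(int argc, char** argv){

	ros::init(argc, argv, "ardrone_msgs_example");
	ros::NodeHandle n;

	ros::Publisher pub_move = n.advertise<ardrone_msgs::Vel>("spooler/cmd_vel", 1000);
	ros::Publisher pub_land = n.advertise<ardrone_msgs::Priority>("spooler/land", 1000);

	//Velocity command: move slowly forwards, with no lateral, vertical or yaw motion
	ardrone_msgs::Vel msg_move;
	msg_move.linear.x = 0.1;
	msg_move.linear.y = 0;
	msg_move.linear.z = 0;
	msg_move.angular.z = 0;
	msg_move.priority = false;	//Normal (non-priority) command
	pub_move.publish(msg_move);

	//Priority land command (used, for example, when the battery is low)
	ardrone_msgs::Priority msg_land;
	msg_land.priority = true;
	pub_land.publish(msg_land);

	ros::spinOnce();
	return 0;
}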
