
AQUATIC ROBOTICS

TE (Mech), SSBT's COET Bambhori, Jalgaon

Dinesh Bhosale
dinu0017@gmail.com
Mob. 9975310521

Piyushkumar Chaudhari
pbc143@gmail.com
Mob. 9595792120

ABSTRACT
A robot adds a new dimension to human ability, one with the potential to challenge nature itself. When we think of a robot, we picture a heavy metal body with plenty of wires around it that keep it running. But in today's world, advanced technologies such as nanotechnology, chip design, polymers and plastics are progressing at very high speed. Building a robot from such technologies that can float in the water and provide a captivating window into the underwater world has long been a dream. That dream is coming into existence through the concept of the aquatic robot. An aquatic robot may employ a visual servoing system, which allows the robot to move through the water and supports a large class of significant applications.

1. Introduction
1.1 What is an Aqua robot?
This section describes the visually based servo control of a swimming robot. Recent work has shown that the aquatic robot uses legged motion to swim, and its underwater navigation technique depends entirely on vision. These robots can sense a targeted object visually. Sonar is the sensing system most commonly used by underwater vehicles. Vision can be similarly effective, but its use is far more complicated. For this reason, a simple navigation task is accomplished here using a visual feedback system: visual feedback modifies the robot's motion, forcing the robot to follow an artificial target handled by a diver. Doing all of this under water is genuinely difficult, both because of the visibility and lighting of the water in which the robot works and because of the motion of the vehicle from which it is handled.
The influence of underwater robotics has been rapidly increasing because computer vision has matured over the last few decades. Vision has largely been neglected under water compared with its use on land. In fact, vision can be as valuable a sensing medium underwater, and perhaps even more significant a sensing medium underwater, than on land. Simple inspection of marine fauna demonstrates the ubiquity of eyes, and other optical sensors, in the marine environment and thus suggests vision's potential utility.
In our particular application we are interested in tracking a diver as she swims either along the surface or under water. This requires a tracking technology that imposes a very limited cognitive load on the diver, operates despite variations in lighting (due to refractive effects and/or waves), is immune to nearby wave action, and works over a moderate range of distances.
In our applications, vision has the advantage of being a passive sensing medium: it is both non-intrusive and energy efficient. These are important considerations (in contrast to sonar) across a range of applications, from environmental assays to security surveillance. Alternative sensing media such as sonar also suffer from several deficiencies which make them difficult to use for tracking moving targets at close range in potentially turbulent water.

1.2 Overview
The AQUA robot is designed as an aquatic swimming robot capable of operating both on land and under water. A direct descendant of the RHex hexapod robot, AQUA was built with underwater applications in mind, one of which is the monitoring of marine life (e.g. coral reefs, fish populations). The robot has a waterproof aluminium shell inside which the electronics and sensors are housed. Three cameras are currently mounted on the robot: two at the front and one at the back. One of the front cameras provides digital output and has been used for the servoing task described in this paper.

Fig. 1. The AQUA robot following a diver with a yellow ball as a target
The robot can swim, walk, maintain station and crawl along the sea bottom using its six paddles or flippers. Using these six flippers the robot can directly control five of the six degrees of freedom it is capable of. The flippers also act as control surfaces under water. Lights at the front of the robot aid visual navigation in low-lighting conditions.
1.3 Applications
Underwater environments represent a substantial area in which robotics can make a natural contribution. Even mundane activities under water, including simple inspection in moderately shallow water, pose problems for humans in terms of logistics, cost, efficiency and safety, so a range of applications can be identified in which a robot would prove useful. These include underwater search and rescue, coral health monitoring, monitoring of underwater installations (e.g. oil pipelines, communication cables) and many more. Specifically, we are interested in environmental assessment tasks in which visual measurements of a marine ecosystem must be taken on a regular basis. While automatically selecting regions of interest is beyond the scope of present technologies, once a biologist identifies areas of interest we believe a robot may be capable of collecting supplementary data or even independently executing inspection tours. It is in this context that the present work is framed.
2. Approach to visual servoing
In general, visual servoing refers to the task of controlling the pose of a robot, or of a robotic end-effector mounted on the robot, using visual information. This commonly closed-loop system is used with eye-in-hand camera configurations in manufacturing industries, but there have been quite a few applications of vehicle control with a fixed camera mounted on the robot, for car steering and aircraft landing among others. Although a number of different approaches to visual servoing exist, the process fundamentally works by tracking an object or set of objects in successive image frames and using the tracking error to drive the robot's actuators.

Fig. 2. The AQUA Servoing Architecture


2.1 System Architecture
As discussed in the previous section, visual servoing systems are commonly made up of two
components: a visual processor or tracker, and a controller that takes visual information as
input and produces as output commands to the actuator. The servoing mechanism for AQUA
follows this pattern. The hardware architecture for servoing AQUA is presented in the
following two subsections. Details about the tracker and controller are explained in the next
section.
2.1.1 Mechanical specification
As mentioned before, propulsion for AQUA is generated by six paddles or flippers, three on each side of the robot. These flippers provide thrust and act as control surfaces for turning (yaw), diving (pitch) and rolling about its axis (roll). Driving and navigation of the robot are controlled by a computer running the QNX real-time operating system. The robot control software is written using the RoboDevel library and implements closed-loop control of the leg actuators. Each leg executes a periodic motion and is controlled by three individual free parameters: period, amplitude and offset, which results in an eighteen-dimensional space over all six legs. For a given gait, however, the parameters of a set of legs are coupled to each other, reducing the overall dimensionality of the parameter space for leg control once a gait is selected. To change gaits while swimming, a phase offset is applied to the individual legs. An amplitude offset is used to control the pitch and roll of the robot. For yaw, a difference in the amplitude scale factor is applied between the sets of legs on the two sides; i.e. if the amplitude scale factor is higher on the left set of legs, the robot will yaw right. An infinite-impulse-response (IIR) filter is used to smooth the sine-parameter changes when controlling the robot, to avoid abrupt changes in leg positions. The filter coefficients are currently adjusted manually, without any tuning or learning.
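The parameter smoothing described above can be sketched as a first-order IIR (exponential) low-pass filter. The coefficient value and variable names below are illustrative assumptions, not AQUA's actual RoboDevel implementation.

```python
def make_iir_smoother(alpha):
    """First-order IIR low-pass: y[n] = alpha*x[n] + (1 - alpha)*y[n-1]."""
    state = {"y": None}

    def step(x):
        if state["y"] is None:
            state["y"] = x  # initialize on the first sample
        else:
            state["y"] = alpha * x + (1.0 - alpha) * state["y"]
        return state["y"]

    return step

# A commanded step in leg amplitude (0 -> 1) is eased in gradually,
# avoiding an abrupt change in leg position.
smooth_amplitude = make_iir_smoother(alpha=0.5)
vals = [smooth_amplitude(x) for x in [0.0, 1.0, 1.0, 1.0]]
# vals approaches 1.0 asymptotically: 0.0, 0.5, 0.75, 0.875
```

A small `alpha` gives heavier smoothing at the cost of a slower response to commanded changes, which is the trade-off being tuned manually in the current system.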
2.1.2 Software and electronics
For visual servoing, high-speed image processing is of key importance. To ensure this, a dedicated vision processor is installed for sensor data interpretation. It uses a Pentium M processor running at 1.1 GHz with a gigabyte of RAM and conforms to the PC/104-Plus form factor. The vision stack is based on Linux and runs off a 512-megabyte CompactFlash card. For image acquisition, an IEEE 1394 (FireWire) digital color camera with a resolution of 640-by-480 pixels is used with a PC/104-Plus IEEE 1394 adapter card; the vision processor interfaces with the camera through this capture card. The tracker is written in C++ using an open-source vision library called VXL (the Vision-something-Libraries) [2]. VXL includes libraries for computer vision, mathematics, file I/O, image processing and other useful vision-processing functionality. Communication between the control and vision stacks takes place over the network using the UDP protocol.
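The UDP link between the two stacks can be illustrated as below. The message layout (two floats carrying the yaw and pitch error) is an assumption made for this sketch, not AQUA's actual wire format.

```python
import socket
import struct

def send_error(sock, addr, yaw_err, pitch_err):
    # Pack two 32-bit floats in network byte order; UDP is fire-and-forget,
    # which suits a high-rate sensor stream where a stale sample is useless.
    sock.sendto(struct.pack("!ff", yaw_err, pitch_err), addr)

def recv_error(sock):
    data, _ = sock.recvfrom(8)  # one datagram = one (yaw, pitch) sample
    return struct.unpack("!ff", data)

# Loopback demonstration of vision stack -> control stack.
rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
rx.bind(("127.0.0.1", 0))  # OS-assigned port
rx.settimeout(2.0)
tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
send_error(tx, rx.getsockname(), 12.5, -3.25)
yaw, pitch = recv_error(rx)
```

Using a datagram protocol here means a dropped packet simply skips one control update rather than stalling the loop, which is a common choice for real-time telemetry.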

3. Approach to servo control


3.1 Tracker
The tracker is built on the principle of a region-of-interest operator. For our purposes we use a color-blob tracker, which segments out a particular region based on the color parameters given as input to the tracker. For simplicity, the region-of-interest computations are carried out in RGB color space. After a frame is acquired, the RGB values are normalized by dividing each pixel's red, green and blue values by their sum, accomplishing a transform to hue space, albeit over-represented as a redundant RGB triple. In addition, a threshold is applied to the absolute (unnormalized) value, to prevent the darker areas of the image from contributing to the region of interest. The parameters for finding the region of interest were calibrated from underwater images of the target (Fig. 3).

Fig. 3. View of the target ball under water from the FireWire camera. Images like this were used to calibrate the system.
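The tracker described above can be sketched as follows: normalize each pixel's RGB by its sum, reject dark pixels with a brightness threshold, and take the centroid of the pixels near the target chromaticity. The threshold values and the target color below are illustrative assumptions, not the calibrated parameters.

```python
def track_blob(pixels, target, chroma_tol=0.1, min_brightness=60):
    """pixels: dict (x, y) -> (r, g, b); target: normalized (r, g, b).
    Returns the centroid of matching pixels, or None if no pixel matches."""
    xs, ys = [], []
    for (x, y), (r, g, b) in pixels.items():
        s = r + g + b
        if s < min_brightness:  # too dark to contribute to the region
            continue
        nr, ng, nb = r / s, g / s, b / s  # chromaticity transform
        if (abs(nr - target[0]) < chroma_tol and
                abs(ng - target[1]) < chroma_tol and
                abs(nb - target[2]) < chroma_tol):
            xs.append(x)
            ys.append(y)
    if not xs:
        return None
    return (sum(xs) / len(xs), sum(ys) / len(ys))

# A yellow target (high R and G, low B) against dark water.
image = {(10, 10): (200, 190, 30),   # yellow
         (12, 10): (210, 200, 40),   # yellow
         (50, 50): (10, 10, 20)}     # dark background, rejected
centroid = track_blob(image, target=(0.47, 0.45, 0.08))
```

The centroid of the matching region is what feeds the controller as the error signal relative to the image center.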
3.2 Controller
The controller is a proportional-derivative (PD) controller that takes the error signals from the tracker and generates pitch and yaw commands for the gait controller. To keep the target in the center of the camera's field of view, two feedback loops are necessary: yaw commands correct error along the image's x-axis, and pitch commands correct error along the y-axis. Note that in the current experiment the roll axis was left uncorrected; correcting it would have required either some form of shape recognition with an asymmetric target, or establishing the direction of the vertical axis with an Inertial Measurement Unit. Provisions have been made to integrate such a device in upcoming experiments. However, since no robot response data (such as a step response) were available, the transfer function and frequency response of the robot for each axis were unknown. It was therefore not possible to fully tune the PD controller beforehand, which limited our ability to find optimal parameters for the controllers.
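The two feedback loops can be sketched as a pair of identical PD control laws acting on the pixel error. The gain values below are illustrative; as noted above, the actual gains could not be fully tuned beforehand.

```python
class PDController:
    """PD control law: u = Kp*e + Kd*de/dt."""

    def __init__(self, kp, kd):
        self.kp, self.kd = kp, kd
        self.prev_error = 0.0

    def step(self, error, dt):
        derivative = (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.kd * derivative

yaw_pd = PDController(kp=0.5, kd=0.1)    # corrects image x-axis error
pitch_pd = PDController(kp=0.5, kd=0.1)  # corrects image y-axis error

# Target is 20 px right of and 10 px below the image center; one control tick.
yaw_cmd = yaw_pd.step(20.0, dt=0.1)      # 0.5*20 + 0.1*((20-0)/0.1) = 30.0
pitch_cmd = pitch_pd.step(-10.0, dt=0.1)
```

The derivative term damps the oscillation that a pure proportional law would show against the periodic disturbance of the swimming gait; its gain interacts with the low-pass filtering of the gait parameters.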

4. Future Work
The visual servoing system described here has been proven to work in the real world, but there is room for many enhancements and new provisions. Improvements can be made to the tracker as well as to the overall control system, to build a more robust and stable visual servoing mechanism. To date, the tracker only looks for an object of a certain color, without looking for an object of a predefined shape; we plan to integrate shape and pattern matching into the tracker in the near future. Also, the speed of the robot could be controlled from the on-screen size of the tracked object, so that the robot can catch up with a moving target when it is about to lose track. Neither the tracker nor the controller incorporates any learning scheme at present.

Fig. 4. The yaw relative position over a trial of visual servoing. The dashed line represents the average position.
Fig. 5. The pitch relative position over a trial of visual servoing.
All gains and parameters are tuned manually with the aid of data from previous trials. A
probabilistic learning scheme incorporated with the servoing could greatly increase the
robustness as well as provide for automatic object recognition and training of parameters and
gains. An Inertial Measurement Unit (IMU) can be used as a stability augmentation system,
making for a semi-dynamic look-and-move servoing architecture. Currently, the servo system
has no control over the roll command of the robot. Using feedforward control, we can also
compensate for the coupling between the axis controls.

5. Conclusions
Here we have described a visual servoing system for an aquatic swimming robot called AQUA. The approach to servo control is based on simple color tracking coupled with a control loop whose low-pass properties are tuned to eliminate the natural undulations caused by the robot's swimming gait. We discussed the underlying hardware and software components of the system and presented the data collected during a sea trial. The system is inherently simple and enables AQUA to achieve some degree of autonomy in navigating under water. The recent sea trials of the system have proven very successful and present exciting new directions for future work. It appears that a more flexible learning-based scheme for target acquisition and tracking would permit the system to operate more robustly. While we have not experienced serious tracking failures in which illumination prevents the target from being acquired, one might expect this to occur in the absence of on-line auto-calibration. More importantly, it appears that the tracking system can be fooled by distracting objects whose coloration matches the target of interest. While supplementary shape-based cues would be a natural improvement to the tracker, the computing overhead, particularly given the robot's small form factor, makes this a challenge. In summary, the servo system consists of a tracker and a feedback controller, and with some minor modifications these robots can even work on the ground as well.

6. References
1. K. Arbter, J. Langwald, G. Hirzinger, G. Wei and P. Wunsch, "Proven techniques for robust visual servo control".
2. VXL: http://vxl.sourceforge.net
3. S. Hutchinson, G. D. Hager and P. I. Corke, "A Tutorial on Visual Servo Control", IEEE Transactions on Robotics and Automation.
