
An Interactive Module-Based Software Architecture for Experimental Autonomous Robots


Albert J.N. van Breemen, Ko Crucq
Philips Research, The Netherlands

Abstract
A good software architecture is one of the major factors that determines the success of realizing a robot
application. The software architecture should implement the requirements that have been formulated for
different problem areas indicated by the robot application designer. We are currently working on an
experimental robot project to create an autonomous
domestic scout that will interact with its user by
means of emotional feedback. Three problem areas
have been identified by the robot application designer:
1) reusing and integrating existing software code with software code that is still to be written, 2) having an interactive development framework in which the designer
can easily test different software configurations at
runtime, and 3) implementing the robot application
using a heterogeneous distributed computing environment. In this paper we present a global overview of a
software architecture that we have developed to deal
with these problem areas.

1 Introduction

We are working on a project to build a robot that should function as a mediator between a user and its
increasingly complex home environment. This
robot is called Lino. The focus of the overall project
is on the natural interaction of the robot with people
by means of speech and emotional feedback through facial
expressions, gestures and body language [1, 2]. It is
expected that the emotional feedback will greatly enhance the natural interaction, and thus the mediator
role of the robot.
The human interaction with the robot will result in:
(1) voice-controlled navigation of the robot with autonomous obstacle avoidance, (2) remote voice control of all kinds of home appliances and services (light level, heating, TV, radio, phone, email, internet, etc.), (3) fun, games and conversations.
Corresponding author: Philips Research, Prof. Holstlaan 4, 5656 AA Eindhoven, The Netherlands, albert.van.breemen@philips.com

Our approach is to integrate existing software for the various capabilities of the robot. The robot
will be equipped with: (1) acoustic speaker localization for fast eye-contact, (2) speaker identification,
(3) speech recognition & synthesis for command and
control dialogues, (4) people & face detection and
recognition, (5) an emotion engine based on the OCC model [3].
The robot project is embedded in the European
ITEA Ambience project [4]. Software contributions
from three other parties have to be integrated too.
The IAS group of the University of Amsterdam contributes localization & world modelling, PMA of the University of Leuven contributes navigation & obstacle avoidance, and the company Epictoid contributes context-sensitive emotion generation.
Regarding the hardware we currently use a Pioneer2
DXe platform [5] with a wireless notebook and focus
on building a 3-D mechanical head for the emotional
feedback. Figure 1 illustrates our prototype head design. Different emotional expressions can be realized
by changing the posture of the eyebrows, eyes and
lips. Additional CPU power is provided by a number of connected desktop computers. The Universal Serial Bus (USB) is used to connect all peripherals (servomotors, sensors) except video, for which we will use IEEE 1394.
The remainder of this article discusses our software
architecture to support the integration of the mentioned software parts. Section 2 discusses the requirements for the software architecture. Then, in
section 3, the main concepts of the software architecture are presented. Experiences with our architecture are described in section 4. In section 5 we
present further research and development directions.
Finally, section 6 presents our conclusions thus far.

2 Requirements

Developing robot applications requires more effort than just developing a bunch of individual algorithms. In fact, developing these algorithms is often not the most difficult part of a project; integrating them into a coherent robot application is. This matter was already studied by one of the authors in previous work on mechatronic control applications [6, 7]. It was then realized that in order to tackle this problem a dedicated software architecture is needed. The result was a software architecture based on modular software constructs, called controller-agents, that supports the integration of individual control algorithms in a systematic manner. Partly based on the experiences of that work, we now describe a software architecture to support the development of complex control software for autonomous robots; that is, to support the development of robot applications.

Figure 1: Facial expressions of Lino: (a) happy, (b) bored, (c) sad, (d) surprised.

Traditionally, much attention is paid to the planning aspects of a robot application in a project. Several planning architectures have been proposed, such as logic-based [8], behavior-based [9, 10, 11] and hierarchical system architectures [12]. The first focus of our project is not on one of these planning architectures; instead, we focus on developing a software architecture that allows us to build rapid prototypes in order to experiment with different techniques and algorithms. We have identified three main problem areas that will complicate the development of our robot application. The first area is the problem of reusing and integrating software parts developed by different parties. We have therefore formulated the requirement of modularity to solve this problem:

Modularity Each functional software part should be packed and deployed as a reusable software module independently from other modules. Furthermore, modules should have an interface that allows them to be connected to other modules in order to easily create compositional robot applications.

The second area is the problem of having an interactive development environment; that is, the designer should have the ability to interact with the robot application at runtime. To achieve this, the following requirements have been formulated:

Runtime flexibility It must be possible to change the algorithm of modules at runtime, to extend the robot application with new modules at runtime, and to probe ingoing and outgoing data of modules at runtime.

Runtime robustness Stopping (or crashing) one or more modules of a running robot application should not result in the whole robot application stopping (or crashing).
Runtime configurability It must be possible to
define at runtime the configuration of modules
(that is, the connections between the modules)
that make up the robot application.
The third problem area is related to the implementation of a robot application. The following two requirements were formulated:
Distribution It must be possible to distribute the
robot application over several host computers in
order to enlarge the computational resources.
Portability There should be support for the most
popular programming languages (C, C++,
Java) and operating systems (Windows, Linux).

3 A module-based software architecture

3.1 Overview

There are several ways to integrate the functional software parts of a robot application. For instance, one can use ad-hoc methods and hack everything together in one large source file, or one can define software components with interfaces specified in terms of method calls. However, these specific software engineering ways of integrating different software functionalities do not correspond well with the methods and concepts of the other disciplines involved in our robot project, such as electrical, mechanical and control engineering. Therefore, we want a software architecture which conceptually resembles the techniques used by the other disciplines. The block diagram language, as used by the Simulink tool [13], seems to serve this purpose.
In a block diagram, functional blocks are connected to each other by means of ports. In our software architecture we adopt the same concepts; however, a block is called a module. Modules have ports that can be connected to each other in order to exchange data between the modules (see Figure 2).
In the following sections a more in-depth description of modules, ports and connections is given.
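Before going into detail, the short sketch below illustrates the block-diagram style of composition the architecture aims for: modules expose ports, and an application is built by connecting matching ports. The class and function names (OutputPort, InputPort, connect) and the sonar/avoidance modules are invented for illustration; they are not the actual Dynamic Module Library interface, which the paper does not list.

```cpp
#include <iostream>
#include <string>

// Illustrative (not actual) port types; each port carries one data type.
template <typename T>
struct OutputPort { std::string name; };

template <typename T>
struct InputPort  { std::string name; };

// Two hypothetical modules from a navigation application.
struct SonarModule {
    OutputPort<float> range{"range"};          // produces distance readings
};

struct AvoidObstaclesModule {
    InputPort<float>  range{"range"};          // consumes distance readings
    OutputPort<float> velocity{"velocity"};    // produces a speed setpoint
};

// Connecting two ports of matching data type is what composes the application.
template <typename T>
void connect(OutputPort<T>& out, InputPort<T>& in) {
    std::cout << "connect " << out.name << " -> " << in.name << "\n";
}

int main() {
    SonarModule sonar;
    AvoidObstaclesModule avoid;
    connect(sonar.range, avoid.range);         // sonar feeds the avoidance behavior
    return 0;
}
```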

Figure 2: Modules, ports and connections.

3.2 Modules

Modules are building blocks used to implement separate pieces of functionality. For instance, modules typically implement functionalities such as sensors, actuators, vision algorithms, control behaviors, state estimators, etc. In our framework, a robot application is defined in terms of modules and connections only.
More technically, a module is an independent software process that operates asynchronously with respect to the other modules in a robot application. The resources needed by a module, such as processor time and memory, are provided by the operating system. Each module is started as a separate executable in the operating system. Furthermore, modules can be implemented independently from other modules.
Modules have input and output ports to share data (see also section 3.3) and they contain an algorithm to process this data. Furthermore, modules are characterized by two attributes. The first attribute is the type of execution model, which describes how the algorithm of a module is executed. The second attribute is the type of task model, which describes when the algorithm is executed. These two attributes are described in more detail below.

Execution models. We use two types of execution models, based on formalisms described in [14]:

Event-based In an event-based module, the algorithm is executed at particular internal and external events. External events occur when an input port receives data or when an output port receives a request to send new data (see the trigger attribute of ports, section 3.3). An internal event occurs when some condition inside the module has been met, such as a time-out.

Time-based In a time-based module, the algorithm is executed at external events as well as at periodic time instances. This type is a special case of the event-based model.

Which model to use depends on the particular functionality one wants to implement.
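As a rough illustration, the fragment below contrasts the two execution models: the event-based module only runs its algorithm when an event handler is invoked, while the time-based variant additionally runs it at a fixed period. The handler names and the 100 ms period are assumptions made for this sketch, not values from the paper.

```cpp
#include <chrono>
#include <thread>

// Hypothetical skeletons of the two execution models.
struct EventBasedModule {
    // Invoked only on external events (data on an input port, a request on an
    // output port) or internal events (e.g. a time-out inside the module).
    void onEvent(int data) { process(data); }
    void process(int /*data*/) { /* algorithm */ }
};

struct TimeBasedModule {
    // Invoked on external events, like the event-based model ...
    void onEvent(int data) { process(data); }

    // ... and additionally at periodic time instances.
    void runPeriodically(int iterations) {
        using namespace std::chrono_literals;
        for (int i = 0; i < iterations; ++i) {
            process(0);                          // periodic execution of the algorithm
            std::this_thread::sleep_for(100ms);  // assumed fixed period (10 Hz)
        }
    }
    void process(int /*data*/) { /* algorithm */ }
};

int main() {
    TimeBasedModule controller;
    controller.onEvent(1);          // external event
    controller.runPeriodically(5);  // periodic execution
    return 0;
}
```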

Task models. In robot applications it often occurs that one wants to realize particular task sequences. In order to realize this, one needs the ability to start and stop the execution of a module's algorithm from a remote module. Although this behavior can be realized using the ports of a module (just add a start/stop input port and let the algorithm handle the incoming requests), this is a very laborious way to realize it. We want to spare the designer from implementing such control structures. Modules therefore have a built-in task model which allows the execution of one particular module's algorithm to be controlled by other modules. The following task models have been implemented:
Continuous A module with a continuous task
model starts executing its algorithm directly
when the module is created and stops executing when the module is killed. Example: most
sensor and actuator modules.
Start/stop A module with a start/stop task model starts executing its algorithm when it receives a start signal from another module, and stops when it receives a stop signal. Example: a face recognizer algorithm which is only turned on when a face has been detected (in order to save resources), or an explorer behavior.

One shot A module with a one shot task model starts executing its algorithm when it receives a start signal from another module, and stops when it has finished its task. Example: a motion behavior that spins the robot one time or that puts the robot into a particular position.
In our current implementation no distinction is made between the modules which send the start and stop signals. For instance, module A can start module B, while a third module C can stop module B again.
We are currently working on an extended task model which offers the designer more possibilities to build in restrictions and security checks.
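A minimal sketch of the start/stop and one-shot task models is given below: another module drives the execution of the algorithm by sending start and stop signals. The TaskModel enum, the signal handler names and the face-recognizer scenario are assumptions for illustration, not the library's actual interface.

```cpp
#include <atomic>
#include <chrono>
#include <thread>

enum class TaskModel { Continuous, StartStop, OneShot };

// Hypothetical module whose algorithm only runs between a start and a stop
// signal (start/stop task model), e.g. a face recognizer that is switched on
// by a face detector in order to save resources.
struct FaceRecognizerModule {
    TaskModel model = TaskModel::StartStop;
    std::atomic<bool> running{false};

    void onStartSignal() { running = true;  }   // sent by another module
    void onStopSignal()  { running = false; }   // possibly by a third module

    void step() {                               // one iteration of the module loop
        if (!running) return;
        recognizeOneFrame();
        if (model == TaskModel::OneShot)        // one-shot: stop when the task is done
            running = false;
    }
    void recognizeOneFrame() { /* expensive vision code */ }
};

int main() {
    FaceRecognizerModule face;
    std::thread worker([&] {                    // the module's own process/loop
        for (int i = 0; i < 100; ++i) {
            face.step();
            std::this_thread::sleep_for(std::chrono::milliseconds(10));
        }
    });
    face.onStartSignal();                       // e.g. the face detector saw a face
    std::this_thread::sleep_for(std::chrono::milliseconds(300));
    face.onStopSignal();                        // e.g. a planner turns it off again
    worker.join();
    return 0;
}
```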
3.3 Ports

Because modules are independent processes in the operating system, data sharing between modules is based on message passing. However, message passing requires the designer to build in different kinds of checks to see who sent a message and what data type the message contains. Therefore, our architecture uses the concept of a port, which is common in block-diagram-oriented tools.
Our ports provide an advanced way to share data
between modules. Depending on the attributes of the
ports, modules can share data by connecting their
ports at runtime. Below, the attributes of ports are
discussed.
Port type. There are three different types of ports. The first type is the input port, which is used to read data from other modules. The second type is the output port, which is used to
write data to other modules. Reading and writing
from these ports is done asynchronously. That is,
if one module writes data to another module, it will
not wait until the data has been received, but instead
it will continue executing its algorithm. In order to
have the ability to use synchronous communication,
we have created a third type of port. This is the bidirectional port, from which one can read and write.
This type of port allows one to easily build in protocols to realize synchronous communication.
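The sketch below shows the kind of synchronous protocol one can build on top of a bidirectional connection: one side writes a request and then blocks until the reply arrives. The Mailbox class is a simplification introduced purely for illustration; it is not the library's port implementation.

```cpp
#include <condition_variable>
#include <iostream>
#include <mutex>
#include <queue>
#include <string>
#include <thread>

// Hypothetical one-direction channel; two of these form a bidirectional port.
template <typename T>
class Mailbox {
    std::queue<T> q;
    std::mutex m;
    std::condition_variable cv;
public:
    void put(T v) {
        { std::lock_guard<std::mutex> l(m); q.push(std::move(v)); }
        cv.notify_one();
    }
    T take() {                                 // blocks until a message arrives
        std::unique_lock<std::mutex> l(m);
        cv.wait(l, [&] { return !q.empty(); });
        T v = std::move(q.front()); q.pop();
        return v;
    }
};

int main() {
    Mailbox<std::string> requests, replies;    // the two directions of one bidirectional port

    std::thread server([&] {                   // module B: services one request
        std::string req = requests.take();
        replies.put("pose of " + req);
    });

    requests.put("gripper");                   // module A: write the request ...
    std::cout << replies.take() << "\n";       // ... and block until the reply arrives
    server.join();
    return 0;
}
```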
Data type. Each port supports one data type only. Ports that share the same data type can be connected to each other. Different data types, such as strings, bytes, integers and floats, or arrays of these types, are currently supported in our software architecture.
Buffer type. Because input and output ports operate asynchronously, we must deal with the situation in which multiple data items have been sent to an output port but have not yet been read. Therefore one must define the buffer type of the port. We have built in three types of buffer mechanisms. First, we have the keep last mechanism, which stores a newly arrived data item and throws away all the old data. Secondly, we have the keep first mechanism, which keeps the first unread data item and throws away all newly arrived ones. And finally, we have the keep all mechanism, which simply stores all received and unread data items (in the order in which they arrived).
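The three buffer policies can be captured in a few lines; the sketch below is an assumed simplification (single-threaded, std::deque-based), not the library's implementation.

```cpp
#include <deque>

enum class BufferPolicy { KeepLast, KeepFirst, KeepAll };

// Minimal single-threaded sketch of the three port buffer mechanisms.
template <typename T>
class PortBuffer {
    std::deque<T> items;
    BufferPolicy policy;
public:
    explicit PortBuffer(BufferPolicy p) : policy(p) {}

    void push(const T& item) {
        switch (policy) {
        case BufferPolicy::KeepLast:                   // newest item wins
            items.clear(); items.push_back(item); break;
        case BufferPolicy::KeepFirst:                  // first unread item wins
            if (items.empty()) items.push_back(item); break;
        case BufferPolicy::KeepAll:                    // queue everything, FIFO
            items.push_back(item); break;
        }
    }

    bool pop(T& out) {
        if (items.empty()) return false;
        out = items.front(); items.pop_front();
        return true;
    }
};
```

With keep last, a slow reader always sees the most recent sensor value; keep all preserves every item, at the cost of unbounded memory if the reader cannot keep up.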
Trigger type. There are two types of mechanisms to trigger the transfer of data between ports. The first mechanism is the push mechanism. A push-output port directly transfers data to the push-input ports connected to it when the algorithm of the module performs a write action. This generates an external event at the input ports of the connected modules. The second mechanism is the pull mechanism. A pull-input port sends a request for new data to the pull-output port it is connected to when the algorithm of the module performs a read action. This generates an external event at the output port of the connected module. All combinations between push/pull-input ports and push/pull-output ports are allowed except for one: a push-input port connected to a pull-output port. In this combination there is no port which takes the initiative to transfer data. Notice that if a pull-input port / push-output port combination is used, then one needs to define a strategy for the output port to decide which data item to send.
3.4 Connections

Each port can be connected to multiple other ports (if the data type and input/output combination matches). For instance, if an output port is connected to multiple input ports, then all input ports will receive a copy of the data being transferred.
There are several ways to create connections between ports. One method consists of explicitly connecting a port by means of a programming statement in the algorithm of the module. Another method is to connect ports externally. That is, we have built in a mechanism to connect ports of running modules at runtime by sending special messages to the ports.
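The sketch below illustrates the two connection methods side by side; the message layout and function names are assumptions for illustration, since the paper does not give the actual wire format used by the Dynamic Module Library.

```cpp
#include <iostream>
#include <string>
#include <utility>
#include <vector>

struct PortRef { std::string module, port; };           // e.g. {"sonar", "range"}

struct ConnectMessage { PortRef from, to; };             // special control message

std::vector<std::pair<PortRef, PortRef>> connections;    // application wiring

// Method 1: a module connects one of its own ports from within its algorithm.
void connectFromAlgorithm(const PortRef& myPort, const PortRef& remotePort) {
    connections.push_back({myPort, remotePort});
}

// Method 2: an external tool (e.g. a console) sends a connect message to a
// running module's port; the port then establishes the connection itself.
void onConnectMessage(const ConnectMessage& msg) {
    connections.push_back({msg.from, msg.to});
}

int main() {
    connectFromAlgorithm({"sonar", "range"}, {"avoid", "range"});
    onConnectMessage({{"avoid", "velocity"}, {"motors", "setpoint"}});
    for (const auto& c : connections)
        std::cout << c.first.module << "." << c.first.port << " -> "
                  << c.second.module << "." << c.second.port << "\n";
    return 0;
}
```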
3.5 Registry

As mentioned previously, modules are independent processes. Therefore, in order to connect modules to each other, modules and ports must be able to find each other. This is accomplished by giving modules and ports a unique name and using a registry.
In a robot application only one registry is running. A registry is a process which maintains a list of the names of all running modules. When a module is created, it uses a procedure called discovery to find the registry in the robot application. Once it has found the registry, it sends its name and additional information to the registry in order to get registered. Once a module is registered, it can search for other modules in the robot application by name. The function of the registry can therefore be compared with the lookup service found in Jini [15].
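Conceptually the registry is a name-to-address table; the minimal sketch below assumes the address is a PVM task id. The class and method names are illustrative only, as the actual registry protocol is not described in the paper.

```cpp
#include <iostream>
#include <map>
#include <optional>
#include <string>

// Hypothetical registry mapping module names to a task id.
class Registry {
    std::map<std::string, int> modules;   // module name -> task id
public:
    // Called by a module after it has discovered the registry.
    void registerModule(const std::string& name, int taskId) {
        modules[name] = taskId;
    }
    // Lookup by name, comparable to the Jini lookup service.
    std::optional<int> lookup(const std::string& name) const {
        auto it = modules.find(name);
        if (it == modules.end()) return std::nullopt;
        return it->second;
    }
    void unregisterModule(const std::string& name) { modules.erase(name); }
};

int main() {
    Registry registry;
    registry.registerModule("sonar", 262145);          // illustrative task id
    if (auto tid = registry.lookup("sonar"))
        std::cout << "sonar found, task id " << *tid << "\n";
    return 0;
}
```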
3.6 Handling errors / robustness

In order to give our software architecture the desired runtime robustness, we distinguish between different levels of errors. At each level we deal with the problems that arise when something goes wrong at that particular level. The following error levels have been introduced:

Module level errors - This error level deals with the problem of how to proceed with the execution of an algorithm if connections of the module are lost (for example, if connected remote modules have crashed).

System level errors - This error level deals with the problem of how to proceed with the overall robot application if some modules are stopped or have crashed.

The first type of errors must be solved by the algorithm of a module. If a connected module crashes or is stopped, then the remaining module will receive a notification message. It is up to the algorithm how to react to this. The second type of errors typically must be solved by the registry. We have not yet defined general error handling mechanisms for this type of errors. Currently, the remaining application simply continues if one of its modules crashes or is stopped.
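A sketch of module-level error handling is shown below: when a connected module disappears, the remaining module receives a notification and its algorithm decides how to react. The notification type and the fallback behavior are assumptions made for illustration.

```cpp
#include <iostream>
#include <string>

// Hypothetical notification delivered when a connected module crashes or stops.
struct ConnectionLostNotification {
    std::string peerModule;    // name of the module that disappeared
    std::string localPort;     // the port whose connection was lost
};

class AvoidObstaclesModule {
    bool useSonar = true;
public:
    void onConnectionLost(const ConnectionLostNotification& n) {
        std::cout << "lost connection to " << n.peerModule
                  << " on port " << n.localPort << "\n";
        if (n.peerModule == "sonar") {
            // Degrade gracefully instead of crashing: stop the robot and wait
            // until the sonar module is restarted and reconnected at runtime.
            useSonar = false;
            stopRobot();
        }
    }
    void stopRobot() { std::cout << "robot stopped (safe fallback)\n"; }
};

int main() {
    AvoidObstaclesModule avoid;
    avoid.onConnectionLost({"sonar", "range"});
    return 0;
}
```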
3.7 Implementation

In section 2, two requirements related to the implementation of our robot application were mentioned. These are the ability to distribute the robot application over several hosts, and the ability to port the robot application to different programming languages and operating systems. Ideally, one wants to abstract the level of parallel running modules of the robot application from the underlying physical architecture. Fortunately, we do not have to implement this part of the software architecture ourselves, as we can use an existing software package: the Parallel Virtual Machine (PVM) [16].
The Parallel Virtual Machine is a set of software tools and libraries to build heterogeneous computing networks. It has been compiled for different computers (e.g. PCs and CRAYs), and it has been ported to different operating systems (e.g. Windows, Linux and UNIX). Originally, PVM was implemented in C and Fortran, but several extensions to other programming languages, such as C++, Java and Python, are available nowadays (see the official website [17]).
Our software architecture is implemented as a library in C++ and has been built upon the C version of PVM. Our library is called the Dynamic Module Library. The main reason to use this version of PVM is that several performance tests [18, 19] show that the original C version of PVM still outperforms alternative implementations of PVM.
Figure 3 illustrates the overall system architecture of our robot system. The bottom layer is the hardware layer. We use a computing network of Pentium-based computers. The mobile robot platform we are using (a Pioneer2 DXe [5]) is equipped with a notebook. This notebook is connected by wireless LAN (802.11b) to a desktop computer. The second layer is the operating system layer; on both computers Windows 2000 is installed. The third layer consists of the mentioned PVM package (latest version 3.4, available at [17]). The fourth layer is our Dynamic Module Library. This layer provides the module, port and connection constructs. It also contains one instance of the registry (see section 3.5). Finally, the top-most layer is the robot application.

Figure 3: Software layers.
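For reference, the fragment below shows the style of message passing that the C version of PVM provides and on which the Dynamic Module Library is layered. The PVM calls used (pvm_mytid, pvm_initsend, pvm_pkint, pvm_send, pvm_recv, pvm_upkint, pvm_exit) are real PVM 3 functions; the message tag value and the helper function names are our own illustrative assumptions.

```cpp
#include <pvm3.h>

// Sketch: exchange one integer between two PVM tasks, the transport that
// sits underneath the module/port abstraction. Error handling is omitted.
const int PORT_DATA_TAG = 42;   // illustrative message tag, not from the paper

void sendIntToPeer(int peerTid, int value) {
    pvm_initsend(PvmDataDefault);     // start a new send buffer (default encoding)
    pvm_pkint(&value, 1, 1);          // pack one int, stride 1
    pvm_send(peerTid, PORT_DATA_TAG); // asynchronous send to the peer task
}

int receiveIntFromPeer(int peerTid) {
    int value = 0;
    pvm_recv(peerTid, PORT_DATA_TAG); // blocking receive with matching tag
    pvm_upkint(&value, 1, 1);         // unpack the int from the receive buffer
    return value;
}

int main() {
    int mytid = pvm_mytid();          // enroll this process in the virtual machine
    (void)mytid;
    // ... spawn or look up a peer task, exchange data via the helpers above ...
    pvm_exit();                       // leave the virtual machine
    return 0;
}
```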

4 Experiences

We have used the software architecture presented here to realize several different prototype robot applications. During the development of these prototypes we were able to build up a library of modules that could be reused for different prototypes. This library now consists of modules that realize sensor, actuator and motion behavior functionalities. During one experiment, we exchanged the robot hardware platform for another. The only thing we had to do was to replace the module that interfaced with that particular robot hardware platform with a module that could interface with the new hardware platform.
Now that the major part of the software architecture has been implemented, we are concentrating on the robot application level; that is, we are realizing the autonomous planning behavior of Lino. The runtime interactivity of our software architecture helps us realize prototypes more easily and do experiments in less conventional ways. Due to the runtime robustness we are able to add and remove modules at runtime. This saves us much time that we would otherwise spend on restarting the whole robot application.

5 Future work

The first version of our software library is currently being tested. From these tests we have identified future research and extension directions. First, we want to extend our architecture with mechanisms to handle system-level errors. For instance, we are thinking about building in a mechanism that starts up a safeguard module which takes over control of the robot platform if particular crucial modules of a robot application crash. Secondly, we want to do performance tests in order to get more insight into the real-time behavior of our software architecture. In particular, the modules are currently scheduled by the operating system (Windows 2000) and we want to investigate the benefits of a real-time operating system. And finally, we want to build in the ability for modules to search for other modules, not only by name, but also by the particular functionality a module provides.

6 Conclusions

We have presented an interactive module-based software architecture to support the development of our
experimental autonomous mobile robot Lino. Three
problem areas were mentioned that had to be dealt
with by the software architecture: code integration,
interactivity and implementation. The first problem area is solved within the software architecture
by using the concepts of modules and ports. Every
particular piece of software code that realizes some
functionality within our robot application is packed
and deployed as an independent module. The second problem area is solved within the software architecture by letting modules become independent processes in the operating system and registering them with a registry. Furthermore, different levels of error handling were introduced to give the software architecture the desired robustness. The third problem area is solved by using the Parallel Virtual Machine as the supporting software layer.
Acknowledgments
We want to thank Dennis Taapken for his effort in implementing the presented framework. We also want to thank Maarten Buijs and Jaap van der Heijden for making resources available for this project.

References
[1] C. Breazeal, Sociable Machines: Expressive Social Exchange Between Humans and Robots, Sc.D. dissertation, Department of Electrical Engineering and Computer Science, MIT, 2000.
[2] Personal Robot PaPeRo, http://www.incx.nec.co.jp/robot/PaPeRo/english/p index.html.
[3] A. Ortony, G. Clore, and A. Collins, The Cognitive Structure of Emotions, Cambridge University Press, Cambridge, England, 1988.
[4] ITEA Ambience project, http://www.extra.research.philips.com/euprojects/ambience/.
[5] Pioneer2 DXe, http://www.activrobots.com/ROBOTS/p2dx.html.
[6] A.J.N. van Breemen, Design and implementation of a room thermostat using an agent-based approach, Control Engineering Practice, 9(3), pp. 233-248, 2001.
[7] A.J.N. van Breemen, Agent-Based Multi-Controller Systems - A design framework for complex control problems, PhD Thesis, Twente University Press (http://www.tup.utwente.nl), Enschede, The Netherlands, 2001, ISBN 9036515955.
[8] S. Russell and P. Norvig, Artificial Intelligence - A Modern Approach, Prentice Hall, Englewood Cliffs, New Jersey, 1995.
[9] R.A. Brooks, A Robust Layered Control System for a Mobile Robot, IEEE Journal of Robotics and Automation, RA-2, pp. 14-23, 1986.
[10] R.C. Arkin, Behavior-Based Robotics, The MIT Press, Cambridge, Massachusetts, second printing, 1999.
[11] R. Pfeifer and C. Scheier, Understanding Intelligence, The MIT Press, Cambridge, Massachusetts, 1999.
[12] J.S. Albus, Outline for a Theory of Intelligence, IEEE Transactions on Systems, Man, and Cybernetics, vol. 21, no. 3, pp. 473-509, 1991.
[13] The MathWorks, developers of MATLAB and Simulink, www.mathworks.com, 2000.
[14] B.P. Zeigler, H. Praehofer and T.G. Kim, Theory of Modeling and Simulation, Second Edition, Academic Press, 2000.
[15] K. Arnold, B. O'Sullivan, et al., The Jini Specification, Addison-Wesley, 1999, ISBN 0-201-61634-3.
[16] A. Geist, A. Beguelin, et al., PVM: Parallel Virtual Machine - A Users' Guide and Tutorial for Networked Parallel Computing, The MIT Press, ISBN 0-262-57108-0.
[17] The official PVM website, http://www.epm.ornl.gov/pvm/.
[18] Bu-Sung Lee, Yan Gu, Wentong Cai and Alfred Heng, Performance Evaluation of JPVM, Parallel Processing Letters, vol. 9, no. 3, pp. 401-410, 1999.
[19] N. Yalamanchilli and W. Cohen, Communication Performance of Java-based Parallel Virtual Machine, in Proceedings of the ACM Workshop on Java for High-Performance Network Computing, Feb. 1998.
