
© 2003 Evolution Robotics, Inc. All rights reserved.

Evolution Robotics and the Evolution Robotics logo are trademarks of Evolution Robotics, Inc. All other trademarks are the property of their respective owners. Evolution Robotics Software Platform™ is a trademark of Evolution Robotics, Inc. Microsoft Windows is a trademark of Microsoft Corporation Inc. IBM ViaVoice™ is a trademark of International Business Machines Corporation. WinVoice™ is a trademark of Microsoft Corporation Inc. Java™ Runtime Environment version 1.4 is a trademark of Sun Microsystems, Inc. This product includes software developed by the Apache Software Foundation (http://www.apache.org/).

Part number MC6100. Last revised 6/18/03.

Table of Contents
Chapter 1 Introduction
Manual Overview  1-1
Introducing ERSP  1-2
Why Use ERSP?  1-2
Who Should Use ERSP?  1-3
ERSP Structure and Organization  1-4
Evolution Robotics Software Architecture (ERSA)  1-5
ER Vision  1-8
  Object Recognition  1-8
  Motion Flow  1-8
  Color Segmentation  1-8
ER Navigation  1-9
  Target Following  1-9
  Obstacle Avoidance  1-9
  Hazard Avoidance  1-9
  Teleoperation  1-9
ER Human-Robot Interaction  1-10
  Speech Recognition and Text to Speech  1-10
  Robot Emotions and Personality  1-10
  Person Detection and Head Gestures  1-10
Core Libraries  1-11
What's Next  1-12

Chapter 2 Installing ERSP


Recommended Skills  2-1
Requirements  2-1


Customer Support  2-2
Hardware Compatibility in Linux  2-2
Before You Install ERSP  2-2
Typographic Conventions  2-3
Installing ERSP for Linux  2-3
Installing ERSP for Windows  2-4
Sample Code Installation  2-4
Installation File Structure  2-6
Diagnostics  2-6
  The Drive Test  2-6
  The Camera Test  2-7
  Camera Troubleshooting  2-8
  The IR Sensor Test  2-8

Chapter 3 ERSP Basics


API Documentation  3-1
Conventions  3-1
  About X, Y Coordinates  3-1
  Camera Coordinates  3-3
  Units  3-3
Setting Up Your Resource Configuration File  3-4
Schema Files  3-7
Behave Command  3-8
Configuring Your IR Sensors  3-8
Configuring Speech Recognition and Text-to-Speech  3-9
  In Windows  3-9
  In Linux  3-9
  ViaVoice Setup  3-9
  ViaVoice ASR Environment Variables Setup  3-10
  About Text to Speech  3-10
  Grammars  3-10

Chapter 4 Tutorials
Getting Started with Visual C++ Projects  4-1
  Compiling and Building Existing Sample Code Projects  4-1
  Compiling and Building New Applications  4-1
Getting Started with Linux Projects  4-2
Before You Start  4-2


Task Tutorials  4-3
  01-simple  4-3
  02-parallel  4-6
  03-custom-task  4-8
  04-event  4-13
Python Tutorials  4-17
  01-simple  4-17
  02-parallel  4-19
Behavior Tutorials  4-21
  01-network  4-22
  02-custom-behavior  4-25
  03-teleop  4-31
  04-primitive  4-36
Resource Tutorials  4-41
  01-config-camera  4-41
  02-config-ir  4-42
  03-camera  4-46
  04-drive-system  4-50
  05-custom-driver  4-56

Chapter 5 Sample Code


Directory Layout  5-1
  Hardware Layer  5-1
  Behavior Layer  5-2
  Task Layer  5-2
  Vision SDK  5-2


Chapter 1

Introduction

Manual Overview
The following is an overview of the chapters in this Getting Started Guide. For a more detailed description of the ERSP software, see the ERSP User Guide and the Doxygen documents described in the API Documentation section of the ERSP Basics chapter.
Introduction - This chapter introduces the ERSP software and some basic concepts that are needed to use it.
Installing ERSP - Walks you through installing the software and testing the installation.
ERSP Basics - Covers the basic concepts and skills needed to use ERSP effectively.
Tutorials - Step-by-step instructions lead you through each of the software layers and how to use those layers to create robotic applications.
Sample Code - Gives an overview of the sample code available with ERSP.


Introducing ERSP
This introductory chapter is intended to provide the reader with an overview of ERSP's functionality and how it can be used to prototype and develop software for a wide range of robotic systems. This introduction also walks you through related resources that will enhance your ability to use ERSP and its Application Programmer's Interfaces (APIs) to maximum advantage.
ERSP is a software development kit for programming robots. At the lowest level, ERSP consists of several hundred thousand lines of C++ code, which gives application developers a big head start with their robotics projects. The code is organized as a number of core libraries that define the basis of application programs. The ERSP libraries consist of a large set of functions that are useful for a wide variety of robotic applications. The infrastructure can be partitioned into four major components:
Software Control Architecture
Computer Vision
Robot Navigation
Human-Robot Interaction (HRI)
Associated with each major component are tools that provide configuration management, programming languages, or graphical user interfaces.

Why Use ERSP?


ERSP enables developers to build powerful, rich robotics applications quickly and easily. ERSP supports this objective in several ways.
First, it provides tools for efficient software/hardware integration. Interfacing the software with sensors, actuators, and user interface components (LCDs, buttons, etc.) can be a tedious, time-consuming, and costly task. ERSP provides a powerful paradigm for software/hardware integration, making these tasks easier. By taking advantage of the object-oriented mechanisms of C++, ERSP provides powerful tools for easily extending a user's robotic system to support new hardware components without the need to rebuild code from scratch. See the HAL chapter of the ERSP User Guide for more information.
Second, ERSP provides a system architecture which contains a rich set of mechanisms and algorithms for controlling the activities of a robot. This architecture consists of several layers that deal with control issues ranging from as simple as turning a single motor to complex issues such as making a robot follow a person while avoiding obstacles. The system architecture is modular, with well-defined interfaces between its layers and interacting software modules. A developer can choose to use one or more layers of the architecture in a target system, allowing scalability of computational requirements. This makes the target application more computationally efficient. For maximum flexibility, ERSP provides easily accessible Application Programmer's Interfaces (APIs) so that developers can easily extend and modify them to fit the requirements of their target systems. The open APIs also make it very easy to integrate third-party software into ERSP. For instance, a company could use these APIs to integrate a proprietary face recognition technology into ERSP.


Third, ERSP puts a number of unique and very powerful technologies into the developer's hands. A partial list includes:
Vision
Object Recognition
Voice Recognition
Text-to-speech
Emotion
Navigation
And more
In the area of computer vision, ERSP provides a very powerful object recognition system that can be trained to recognize an almost unlimited number of objects in its environment. Recognition can be used for many applications such as reading books to children, locating a charging station and docking into it, or localization and mapping. ERSP's voice recognition and text-to-speech modules can be used for enhanced voice interactivity between the user and the robot. A model of emotion is used to emulate and express robot emotions, which enhances the user interface for applications such as entertainment robots. In the area of navigation, ERSP provides modules for controlling the movement of the robot relative to its environment. For instance, a target following module can be used to track and follow a given target while at the same time obstacle avoidance can be used to assure safe movement around obstacles. These modules define a set of high-level components upon which an application can be developed.

Who Should Use ERSP?


ERSP is for companies, organizations, developers, and researchers who are working on robotic products or projects. Most robotic projects require a large subset of the modules and technologies that ERSP provides. Often, companies with robotics initiatives need to develop an entire system from the ground up, from drivers to common components to the final complex robot application. Evolution Robotics, with ERSP, provides companies with these common, yet critical, software components necessary to develop systems for any robotics application. These applications could be anything from cleaning, delivery, or factory automation tasks to entertainment. ERSP frees companies from the mundane and resource-consuming task of developing common subsystems such as vision and navigation. With ERSP, companies can focus entirely on the value-added functionality of their particular robot applications. One of the additional benefits of this approach is that robotics applications developed using ERSP can be made portable to a wide range of hardware, enabling companies to stretch valuable engineering resources further. Using ERSP, customers can build robot applications faster, cheaper, and at lower risk.
The value that ERSP has for an organization depends on the company's existing software infrastructure. Companies with a new initiative in robotics often find ERSP valuable because it gives them a head start, whereas starting from scratch would require months or years of development time and cost. Companies that have had robotics initiatives for many years will have some legacy infrastructure. These companies typically find specific modules within ERSP, such as the visual object recognition, voice recognition, and obstacle avoidance, useful for integration with their own products. Some mature companies with several robotics initiatives may find that their existing software infrastructure is not being leveraged across projects; they end up building the same functions many times over, or find that these functions from different projects do not talk to each other. These companies find ERSP valuable because it provides a cross-platform standard that encourages cross-project fertilization.

ERSP Structure and Organization


The collection of ERSP libraries provides APIs that can be divided into several important functional categories (see the figure below):
ER Software Architecture: The software architecture provides a set of APIs for integration of all the software components with each other and with the robot hardware. The infrastructure consists of APIs for dealing with the hardware, for building task-achieving modules that can make decisions and control the robot, for orchestrating the coordination and execution of these modules, and for controlling access to system resources.
ER Vision: The Vision APIs provide access to very powerful computer vision algorithms for analyzing camera images and extracting information that can be used for various tasks such as recognizing an object, detecting motion, or detecting skin (for detection of people).

[Figure: ERSP functional categories - Vision, Human-Robot Interaction, and Navigation, built on ERSA (TEL, BEL, HAL)]

ER Navigation: The Navigation APIs provide mechanisms for controlling movement of the robot. These APIs provide access to modules for teleoperation control, obstacle avoidance, and target following.
ER Human-Robot Interaction (HRI): The Human-Robot APIs support building user interfaces for applications with graphical user interfaces, voice recognition, and speech synthesis. Additionally, the HRI components include modules for robot emulation of emotions and personality to enhance the user's experience and improve human-robot interaction. Also, these APIs support modules for recognition of gestures that can be used to interact with the robot.
The software platform also provides developer tools, which consist of well-defined application programmer's interfaces in Python, C++, an XML scripting language, and visual programming tools. These tools provide a flexible environment for developing software for application programs without the need for in-depth knowledge of the intimate details of ERSP.

Evolution Robotics Software Architecture (ERSA)


ERSA consists of three main layers, where each of the layers provides infrastructure for dealing with a different aspect of application development.
The Hardware Abstraction Layer (HAL) provides abstraction of the hardware devices and low-level operating system (OS) dependencies. This assures portability of the architecture and application programs to other robots and computing environments. At the lowest level, the HAL interfaces with device drivers, which communicate with the hardware devices through a communication bus. The description of the resources, devices, busses, their specifications, and the corresponding drivers are managed through a number of configuration files. Configuration files employ a user-specified XML framework and syntax. The advantage of managing the resource specifications through configuration files is that it provides a high degree of flexibility. If you have two robots with significantly different devices, sensors, and motors, you only need to create a single resource configuration file for each. That file describes the mapping between the software modules and the hardware for each robot. HAL reads the specifications from the configuration file and reconfigures the software to work transparently, without modifications, with the application software. The XML configuration files typically contain information about the geometry of the robot, the sensors, sensor placements, interfaces to hardware devices, and parameters for hardware devices.
The second layer, the Behavior Execution Layer (BEL), provides infrastructure for development of modular robot competencies, known as behaviors, for achieving tasks with a tight feedback loop such as finding a target, following a person, avoiding an object, etc. The behaviors become the basic building blocks on which software applications are built. The BEL also provides powerful techniques for coordination of the activities of behaviors for conflict resolution and resource scheduling. Each group of behaviors is typically organized in a behavior network which executes at a fixed rate. Behaviors are executed synchronously with an execution rate that can be set by the developer. The BEL also allows running several behavior networks simultaneously, with each executing at a different execution rate. The communication ports and protocols between behaviors can be defined and implemented by the user. The BEL defines a common and uniform interface for all behaviors and the protocols for interaction among the behaviors. In each cycle, a Behavior Manager executes all sensor behaviors to acquire fresh sensory data, then executes the network of behaviors to control the robot. The coordination of behaviors is transparent to the user.
An XML interface enables behaviors to interact with scripts written in XML. The XML interface provides a convenient and powerful approach to building application programs using XML scripts. XML files (known as schemas) can be used to define the characteristics of a behavior module, such as parameters, input/output interface, etc. Schemas for behaviors are similar to classes in C++, whereas specific behaviors correspond to objects which are instances of classes. A behavior network can be specified in an XML file that instantiates behaviors using the schema files, specifies values for optional parameters, and specifies the interconnections between behavior ports. A behavior network written in XML can then be executed using the behave command (see the Behave Command section of the ERSP Basics chapter of this Guide for details). The advantage of using XML for developing behavior networks is that it is very flexible and does not require recompilation of the code each time the tiniest change has been made to the network.
Setting up the connections between behaviors using the C++ APIs can be a tedious task. Therefore, to further improve the process of developing behavior networks, ERSP provides the Behavior Composer, a graphical user interface. Typically, behavior networks are more conveniently developed using the Behavior Composer because it can be used to build application programs visually. With the Behavior Composer, you can use a mouse and keyboard to drag-and-drop behaviors and connect them together to form an application. This visual program is converted to an XML script that can then be executed by ERSA. The figure below is a graphical representation of how the different layers of the software interact with each other and the input XML files.

[Figure: How the software layers interact with the input XML files - Python and the TEL (Tasks, Primitive Tasks); the BEL (Behavior Networks, Behaviors) with Behavior Network XML Files and the Behavior Composer; the HAL (Resources, Drivers) with Hardware Configuration XML Files]


The Task Execution Layer (TEL) provides infrastructure for developing goal-oriented tasks along with mechanisms for coordination of complex execution of tasks. Tasks can run in sequence or in parallel. Execution of tasks is triggered by user-defined events. (Events are conditions or predicates defined on values of variables within the Behavior Execution Layer or the Task Execution Layer.) Complex events can be defined by logical expressions of basic events. While behaviors are highly reactive, and are appropriate for creating robust control loops, tasks are a way to express higher-level execution knowledge and coordinate the actions of behaviors. Tasks run asynchronously as events are triggered. Time-critical modules such as obstacle avoidance are typically implemented in the BEL, while tasks implement behaviors that are not required to run at a fixed execution rate.
Tasks are developed hierarchically, starting with the primitive tasks, which are wrappers of behavior networks. At invocation, a primitive task loads and starts the execution of a behavior network. Tasks can monitor the execution of behavior networks and values of the data flow between behaviors to define certain events. Tasks can manipulate the behavior networks to cause desired outcomes. For example, tasks can inject values into the behavior network to cause a desired outcome. To change the context of execution based on the goals of the robot, the TEL can cause termination of one behavior network and loading and execution of another. Asynchronous events provide a flexible mechanism for inter-task communication as well as communication between the BEL and the TEL.
Tight feedback loops for controlling the actions of the robot according to perceptual stimuli (presence of obstacles, detection of a person, etc.) are typically implemented in the Behavior Execution Layer. Behaviors tend to be synchronous and highly data driven. The Task Execution Layer is more appropriate for dealing with complex control flow which depends on context and certain conditions that can arise asynchronously. Tasks tend to be asynchronous and highly event driven.
The TEL provides an interface to Python, an interpreted scripting language. Prototyping in Python is convenient because it is a programming language at a higher abstraction layer than C++, and it is interpreted. The design of the TEL makes it easy to interface it to other programming or scripting languages.
ERSA has been engineered to be highly flexible and reconfigurable to meet the requirements of numerous application programs. Any subset of the ERSA layers can be combined to embody a range of architectures with radically different characteristics. The possible embodiments of the architecture could consist of using any of the layers in isolation, any two of the layers in combination, or all three layers. For example, applications with limited requirements for high-level functionality may require only HAL, or HAL and BEL. The advantage of restricting the use to HAL would be in saving computational resources (memory, CPU power, etc.). If hardware abstraction is not a concern to a project or product, then BEL can be used in isolation. Or, if only high-level, event-driven control flow is required, then the TEL may be used.


ER Vision
ERSP provides powerful vision algorithms for object recognition, motion flow estimation, and color segmentation.

Object Recognition
The object recognition system is a vision-based module that can be trained to recognize objects using a single, low-cost camera. The main strengths of the object recognition module lie in its robustness in providing reliable recognition in realistic environments where, for example, lighting can change dramatically. Object recognition provides a fundamental building block for many useful tasks and applications for consumer robotic products, including object identification, visual servoing and navigation, docking, and hand-eye coordination. Other useful and interesting applications include entertainment and education.
The object recognition module is implemented in the objrec library (in the Core Libraries). The Behavior and Task libraries implement several useful behaviors and tasks that use object recognition for tracking and following an object.
To train the software, you need to capture one or more images of the object of interest, name them using a text string, and load them into a database known as the model set (using file extension .mdl). The software then analyzes the object's image and finds up to 1,000 unique and local features to build an internal model of the object. ERSP provides graphical and command-line tools that help in creating and manipulating object model sets. (See the Vision chapter of the ERSP User Guide.)
To use object recognition, the user employs the APIs to load a model set and executes the object recognition algorithm (using the library APIs, the behaviors, or tasks). Once the object is seen in the robot camera's field of view, it will be recognized. The recognition returns the name of the object, the pixel coordinates of where in the video image it was recognized, and a distance to the object. The object recognition can be trained on hundreds of objects and can recognize more than one simultaneously.

Motion Flow
While object recognition provides a key technology for building fundamental robot capabilities, it does not process movement in objects such as people and other robots. Motion Flow analyzes an image sequence rather than a single image at a time, making it possible to discern motion in the field of view. This fundamental capability can be used for a number of tasks, ranging from detection of motion at a gross scale (moving people) to analysis of motion at a very fine scale (moving pixels). The optical flow algorithm provides a robust analysis of motion in the field of view. This algorithm correlates blocks of pixels between two consecutive frames of a video to determine how much they have moved from one frame to the next.

Color Segmentation
Color segmentation can be useful for finding objects of a specific color. For instance, looking for an object using color can be used for a number of human-robot interaction components. This algorithm can also be used to detect people by searching for skin color under various lighting conditions.


The color segmentation algorithm provides reliable color segmentation based on a probabilistic model of the desired color. Using a mixture of Gaussian distributions, it can be trained to classify pixels into the desired color or background, and it allows for significant variation in pixel color caused by lighting changes or diversity of the object population. The color segmentation module builds models for a desired color based on a training set that contains a population of objects with the desired color. Once the model is learned by the module, it is able to classify objects based on the model.

ER Navigation
ERSP provides modules for safe navigation in realistic environments. The navigation modules consist of behaviors for following targets and for obstacle and hazard avoidance. In addition, ERSP provides facilities for teleoperation of robots remotely.

Target Following
Target following modules are available in the BEL as well as the TEL. These modules track and follow the position of a target. The input to these modules comes from a target detection module, which can be based on visual detection or detection using odometry information.

Obstacle Avoidance
Using the obstacle avoidance algorithm, the robot generates corrective movements to avoid obstacles. The robot continuously detects obstacles using its sensors and rapidly controls its speed and heading to avoid them. Our obstacle avoidance algorithm uses a description of the robot's mechanics and sensor characteristics in order to generate optimally safe control commands. The description of the robot's mechanics and sensors is done in a generic configuration description language defined in XML, so that the obstacle avoidance algorithm can easily be integrated onto different types of robots. Porting obstacle avoidance (and other modules, for that matter) to a new robot with different hardware just requires describing the new hardware in the configuration description language.

Hazard Avoidance
The hazard avoidance mechanisms provide a reflexive response to a hazardous situation in order to ensure the robot's safety and guarantee that it does not cause any damage to itself or the environment. Mechanisms for hazard avoidance include collision detection (using not one but a set of sensors and techniques). Collision detection provides a last resort for negotiating around obstacles in case obstacle avoidance fails to do so, which can happen because of moving objects or software or hardware failures. Stairs and other drop-off areas are handled by a cliff avoidance module. Cliff avoidance uses a set of redundant sensors to detect the hazard and ensures the robot's safety in the case of faulty sensors. The robot immediately stops and moves away from a drop-off.

Teleoperation
ERSA provides infrastructure for cross-network operation of the robot. Applications of this capability include multi-robot systems, off-board processing, and teleoperation. For more information on the networking infrastructure, see the 03-teleop and 04-primitive tutorials, and the Doxygen documents pertaining to, for example, MalleableBehavior.

ER Human-Robot Interaction
Evolution Robotics provides a variety of component technologies for developing rich interfaces for engaging interactions between humans and robots. These components support a number of interfaces for command and control of a robot and allow the robot to provide feedback about its internal status. Furthermore, these components enable the robot to interact with a user in interesting and even entertaining ways. The core technologies provided for developing human-robot interfaces (HRIs) consist of:
Speech recognition and text-to-speech (TTS) for verbal interaction
Robot emotions and personality to create interesting and entertaining life-like robot characters
Person detection and recognition of simple gestures

Speech Recognition and Text to Speech
Two speech engines are available for use in user applications: one for input that converts a speech waveform into text (Automatic Speech Recognition or ASR) and one for output that converts text into audio (Text-to-Speech or TTS). Both engines are third-party applications that are included in ERSP. The speech engines are resources available in HAL, similar to resources for interacting with sensors and actuators such as IRs and motors. The speech modules can be integrated into behaviors, tasks, or both.

Robot Emotions and Personality
The robot emotion behaviors are used to describe the robot's internal and emotional states. For example, the emotional state defines whether the robot is sad or happy, angry or surprised. The emotion behaviors can also describe personality traits. For example, an optimistic robot would tend toward a happy state, whereas a pessimistic robot would tend toward a sad state. A graphical robot face is also available in ERSP. This face is capable of expressing emotion and having the appearance of forming words. This functionality allows the user to create a wide variety of emotions and responses triggered by user-specified stimuli. This greatly enhances the human-robot experience. See the Behaviors Library chapter of the ERSP User Guide and the Doxygen documents for details.

Person Detection and Head Gestures
Person detection and tracking can enable very diverse human-robot interaction. For instance, being able to detect, approach, and follow a person can be very useful primitives for HRI. Evolution Robotics, Inc. has a reliable person-tracking technology using vision, combining some of our technologies for object recognition, optical flow, and skin segmentation.


Gesture recognition provides another powerful technology for enhanced human-robot interfaces. Using gestures for interacting with a robot provides a natural and powerful interface for commanding a robot to perform tasks such as pick-and-place. Using our vision component technologies for motion analysis and skin segmentation (using color segmentation), ERSP can detect gestures including head nodding and head shaking. This is done by tracking the motion of the user's head and hands, which are segmented using skin segmentation. These modules can be used to extend the system to recognize other gestures such as waving and pointing.

Core Libraries
The Core Libraries implement the basic functionality of ERSP upon which all other infrastructure is built. The core libraries can also be said to implement standards for later software modules. An application can build directly on any subset of the core libraries.
The Driver Libraries implement interfaces for specific hardware components such as controller boards, drive systems, positioning systems, graphics engines, sensors, audio devices, etc. These drivers build on the infrastructure implemented in the core libraries. Specific drivers, such as the Robot Control Module driver, are implemented as a C++ class that is derived from a driver class in the resource library. This modular scheme assures, for example, that all derived driver classes for motor controllers provide a uniform interface defined by the driver class in the resource library. Thus, one controller can easily be replaced with another without propagating the change throughout the modules/classes that use the driver for the controller.
The core libraries named Resource, Behavior, and Task implement the three layers of the software control architecture of ERSA. While the core libraries implement the core functions of ERSA, the Behavior Libraries and Task Libraries provide higher-level functionality that builds on the core. For example, the navigation library in the Behavior Libraries provides modules for obstacle avoidance. A user can easily use this behavior without being concerned about how it is implemented using the core libraries. Finally, the core libraries implement basic and powerful functionality for object recognition and other vision algorithms. These modules become basic building blocks for building higher-level modules in the BEL and TEL.
ERSP consists of the following set of libraries, which implement its core functionality. The libraries can be found in the Install_dir\lib directory.
Core Libraries
Driver Libraries (Hardware Abstraction Layer)
Behavior Libraries (Behavior Execution Layer)
Task Libraries (Task Execution Layer)
For details on these libraries, see the Core Libraries, Hardware Abstraction Layer, Behavior Execution Layer, and Task Execution Layer chapters of the ERSP User Guide.


What's Next
Now that you have an overview of ERSP, it's time to get started. The next chapter, Installing ERSP, will walk you through installing and testing the software.


Chapter 2

Installing ERSP

Recommended Skills
The following skills are strongly recommended:
Familiarity with object-oriented programming, specifically C++ and, optionally, Python
Depending on which ERSP version you're using, proficiency in Linux or Microsoft Windows command-line setup, file manipulation, and execution
For Windows: familiarity with Microsoft Visual Studio .NET or Microsoft Visual Studio .NET Professional
For Linux: proficiency in g++ 3.0 and building programs using make from the command line

Requirements
You must supply a computer with at least the following specifications:
Pentium III, 800 MHz or faster (needed for development; requirements for target applications will vary widely depending on the application)


500 MB hard disk space
128 MB RAM (256 MB RAM recommended)
USB port
802.11b wireless network adaptor (recommended)
Microsoft Windows 2000, Microsoft Windows XP, or Red Hat Linux 7.3
Full-duplex sound card

Customer Support
Evolution Robotics customer support is available by email at customerservice@evolution.com or by filling out the form at www.evolution.com/support/. Customer Service representatives are available by calling toll free at 866-ROBO4ME or, for international customers, 626-229-3198, Monday through Friday, 9 A.M. to 5 P.M. Pacific Time.

Hardware Compatibility in Linux


The Evolution peripherals (e.g., Gripper, IR) are compatible with the more common UHCI (universal host controller interface) controller for USB. The Evolution ER1 peripherals are not supported with the OHCI (open host controller interface) controller.

How to Identify your Controller


If you want to see which type of controller your computer has, watch the display during boot-up. There should be a line about loading USB UHCI or OHCI controllers.

Before You Install ERSP


Before you start the installation process, do the following:

For Windows
ERSP is compatible with Microsoft Windows 2000 and XP.
Install Microsoft Visual C++ or Visual Studio .NET, Version 7.
Install Python 2.2.2. To get this version of Python, go to www.python.org. Download and follow the installation instructions there.
If you have an installation of the ER1 Python SDK, uninstall it before installing the ERSP SDK. Note that the functionality from the ER1 Python SDK has been included in the ERSP SDK.
Make sure to back up your system before installing this software.


For Linux
The Linux version must be RedHat 7.3 with GCC 3.0. RedHat 8.0 and GCC 3.2 are not supported. You must have kernel 2.4.18-24.7.x.
Install Python 2.2.2. To get this version of Python, go to www.python.org. Download and follow the installation instructions there.
If you have an installation of the ER1 Python SDK, uninstall it before installing the ERSP SDK. Note that the functionality from the ER1 Python SDK has been included in the ERSP SDK.
Make sure to back up your system before installing this software.

Typographic Conventions
There are various typographic conventions used in both this Guide and the ERSP User Guide. The following describes these conventions:
Italics are used to denote variables that are specific to your system. The most common use of this convention is Install_dir, which stands for your ERSP installation directory.
Courier is used to denote paths, filenames, function names, executables, words to type on the command line, and output from ERSP. You will see an example of this in the Installing ERSP for Linux section of the Installing ERSP chapter.
Bold is used for Graphical User Interface (GUI) parameters and button names. You can find examples of this in the Tutorials chapter of this Guide.
Blue is used in the PDF file of the Getting Started Guide and the ERSP User Guide to indicate hyperlinks. You will find examples of this in the Table of Contents, the Index, and interspersed throughout the text of this Guide and the ERSP User Guide.
A backslash \ at the end of a line of code is an editorial convention that indicates that the next line of code should be typed on the same line as the first.

Installing ERSP for Linux


1. Log in as root.
2. Place the installation CD in the CD-ROM drive and mount the disk by typing:
cd /mnt/cdrom/ERSP
3. Run the install script:
./install.sh
Important Note: Make sure you are root when running this script.
4. You will be prompted "Do you want to continue?". Type yes. You will be asked a series of questions. Respond appropriately.


Important Note: The ERSP installation directory will be referred to as Install_dir for the rest of this Guide.
5. The software is now installed. To ensure that your installation was performed properly, run the tests found in the Diagnostics section of this chapter.

Installing ERSP for Windows


After you download and install the products listed in the Before You Install ERSP section of this chapter, you are ready to install ERSP.
1. Put the CD into the CD-ROM drive.
2. Open the installation CD directory in Windows Explorer. Click on the setup.exe file to start the InstallShield Wizard. This will walk you through the installation process. You see the following messages: Preparing to Install and Welcome to InstallShield Wizard for ERSP.
3. Do the following:
Click on Next.
Read the License Agreement carefully and then click on Yes.
Select a destination folder. The default is C:\Program Files\ERSP. Click on Next.
Important Note: The installation directory will be referred to as Install_dir for the rest of this Guide.
The installation process starts. Cancel the installation process at any time by clicking on the Cancel button.
4. You will see a prompt for installing Java Runtime Environment 1.3 or later. Click Yes. Java will also display a license agreement. Read this agreement and then click Yes. Then, you must select a destination folder. Finally, select a default browser.
5. When the installation process is complete, you see the message "Setup has finished installing ERSP on your computer."
6. Click on the Finish button.
7. ERSP is now ready to use. To ensure that your installation was performed properly, run the tests in the Diagnostics section of this chapter.

Sample Code Installation


To follow platform conventions, the installation of the sample code differs a bit between Linux and Windows.
In Windows, as with most SDKs, the sample code is installed with the rest of ERSP, directly under the root ERSP directory. Thus, the default location for the sample code is Install_dir\sample_code. The Windows sample code includes standard Visual Studio .NET project and solution files, with a separate solution file for each main section of the sample code.
Though installation varies by platform, the directory structure of the sample code is identical. For the remainder of this chapter, the top sample code directory will be referred to as Samp_code_dir, and other subdirectories will be specified relative to this path.
In Linux, the sample code is distributed as a standard tarball (a tar archive compressed by gzip) on the CD in the sample_code directory. To use the sample code, extract it in the usual way:
$ cd <directory_to_install_in>
$ tar zxvf \
  <path_to_cd>/sample_code/evolution_robotics-sample_code-W.X.Y-Z.tar.gz

This allows multiple users to have their own copies of the sample code, without needing write access to the ERSP installation. The sample code is now located in the Samp_code_dir directory. The structure of the sample code is as follows:
behavior - Examples of behavior networks (C++ and XML).
config - Examples of Schema XML configuration files.
driver - Examples of drivers.
objrec - Examples of applications that use the object recognition library.
python - Examples of task programs written in Python.
task - Examples of task programs written in C++.
Linux_Project_Template - Templates for starting Linux projects. (Linux only)
VC_Project_Template - Templates for starting Windows projects. (Windows only)
tutorial - The tutorials found in the Tutorials chapter.
viavoice - ViaVoice tutorials. (Linux only)
Compile the C++ examples using either Microsoft Visual C++ version 7.0 for Windows or g++ 3.0 and make for Linux. The Linux sample code uses the GNU build tools; you simply configure and make the code:
$ cd evolution_robotics-sample_code-W.X.Y-Z
$ ./configure
$ make

Solution files are provided for compilation of the C++ examples. For example, in the behavior directory, the behavior.sln file can be opened with Microsoft Visual C++ in Windows. Select the Build Solution option of the Build menu to compile the examples. Binary files are generated in each corresponding directory.
For example, go into the Install_dir/behavior/emotion directory. In Windows, double-click on emotion_example.exe. In Linux, type emotion_test on the command line. A command window appears, showing run-time messages, along with a window containing an animated face that displays different expressions.

Installation File Structure


When you are done with the installation, the software will be located in the Install_dir directory for Windows and Linux. You should have the following directories:
bin - Executables
config - Configuration files
data - Application data
doc - Documentation
external - External libraries used by ERSP (Windows only)
include - Header files
java - Java applications
lib - ERSP library files
licenses - Licenses (Windows only)
python - Python libraries
sample_code - Sample code (in Windows; in Linux, the location of this directory is user-determined)

Diagnostics
After you install and configure the ERSP software on the laptop, you should run the following tests to verify that the installation was successful. The tests can be found in the following directory:
Install_dir/bin (for Linux and Windows)
In Linux, it is recommended that you add this directory to your path, like this:
$ PATH="$PATH:/opt/evolution_robotics/bin"
$ export PATH

In this chapter, it is assumed that the tests are in your PATH. Important Note: For fast online help, all these tests support the --help option.

The Drive Test


After setting up the robot, it is a good idea to run test_drive_system to make sure that things are working correctly. The drive test exercises the robot's drive system. The robot should move forward, then backward. After that, it should move forward ten centimeters, then back ten centimeters. It should then turn left, then to the right, and then re-center itself by heading back to the left. The correct output on the screen includes no error or warning messages.


Important Note: Before you run this test, make sure that you have a 4' clearance all around the robot. This test doesn't make use of the robot's vision or bump sensors, so if something is in the robot's path, the robot will bang into it.
1. On the command line, type the command:
test_drive_system

2. The test takes a few seconds to initiate, then the robot starts to move. 3. Here's what you see on the screen:
*** test_drive_system ***
Obtained drive system: drive
Forward 1 second: passed
Checking velocities: passed
Forward stop: passed
Moving backward 1 second: passed
Backward stop: passed
Forward 10 cm: passed
Backward 10 cm: passed
Turning left 90 degrees: passed
Turning right 180 degrees: passed
Turning left 90 degrees: passed

4. The robot moves through its paces. At the end of the test, the robot stops in its original location.

The Camera Test


The test_camera program tests the robot's camera.
1. The usage for the camera command is below:
test_camera --help
Usage: test_camera [OPTIONS] [<camera1_id> [..<cameraN_id>]]
OPTIONS:
  --frames <count>          Frames to output (default = 5).
  --quality <quality>       Quality from [0-1] (default = 0.8).
  --pause-time <duration>   Duration to pause between readings, in seconds (default = 0.1 = 100ms).

2. By default, the camera will output images from all available cameras.
3. Run the camera command with your chosen option(s). For example:
test_camera --frames 5

4. You see a display similar to the following:


*** test_camera ***
Obtained cameras: camera0
Frame count: 5. Pause time: 0.5 sec.
Writing file camera0_001.jpg
Writing file camera0_002.jpg
Writing file camera0_003.jpg
Writing file camera0_004.jpg
Writing file camera0_005.jpg


5. In this case, the robot is saving multiple .jpg snapshots from the camera. The .jpg files are written to the directory in which the camera command is run. For example:
$ ls -l *.jpg
-rw-r--r--  1 user user  6361 Mar 18 15:43 camera0_001.jpg
-rw-r--r--  1 user user 12700 Mar 18 15:43 camera0_002.jpg
-rw-r--r--  1 user user 12687 Mar 18 15:43 camera0_003.jpg
-rw-r--r--  1 user user 12712 Mar 18 15:43 camera0_004.jpg
-rw-r--r--  1 root root 12730 Mar 18 15:43 camera0_005.jpg

6. You can open the .jpg files to view the snapshots and assess the camera operation.

Camera Troubleshooting
If you see the following message:
Initializing...Failure opening /dev/video0 - check permissions

1. Check to see if all the connections are seated correctly.
2. On the KritterCam, if the light is not brightly lit, the video driver may not be running.
3. First, su to root, then unload the video driver. For KritterCam and Hawking cameras using the evolution_ov511 package, use the following command:
$ modprobe -r ov511

For the Logitech Pro 3000/4000:


$ modprobe -r pwcx-i386 pwc

4. Then load the video driver. For KritterCam and Hawking cameras:
$ modprobe ov511

For the Logitech Pro 3000/4000:


$ modprobe pwc
$ insmod -f <path_to_driver>/pwcx-i386.o

5. Remember to exit out of root before running the tests. This should fix the problem.

The IR Sensor Test


1. The test_range_sensor diagnostic checks the range sensors (e.g. IRs) present on your robot. The usage of this command is as follows:
$ test_range_sensor --help
Usage: test_range_sensor [OPTIONS] [<sensor1_id> [..<sensorN_id>]]
OPTIONS:
  --read-count <COUNT>      Number of sensor readings to perform.
  --pause-time <DURATION>   Duration to pause between readings, in seconds (default = 0.1 = 100ms).


2. You may specify one or more range sensors to check, or, if none are specified, all present are polled:
$ test_range_sensor --read-count 5
*** test_range_sensor ***
Obtained range sensors: IR_tn, IR_tne, IR_tnw
IR_tn: distance = 49.48  raw = 34  time = 0
IR_tne: distance = 44.9  raw = 164  time = 0
IR_tnw: distance = 1.798e+308  raw = 0  time = 0
IR_tn: distance = 61.69  raw = 3  time = 0
IR_tne: distance = 45.94  raw = 165  time = 0
IR_tnw: distance = 1.798e+308  raw = 0  time = 0
IR_tn: distance = 68.05  raw = 0  time = 0
IR_tne: distance = 41.73  raw = 178  time = 0
IR_tnw: distance = 1.798e+308  raw = 0  time = 0
IR_tn: distance = 69.27  raw = 22  time = 0
IR_tne: distance = 42.38  raw = 173  time = 0
IR_tnw: distance = 1.798e+308  raw = 3  time = 0
IR_tn: distance = 66.85  raw = 29  time = 0
IR_tne: distance = 48.57  raw = 188  time = 0
IR_tnw: distance = 67.25  raw = 19  time = 0


Chapter 3

ERSP Basics

API Documentation
All of the C++ APIs are documented in detail in the Doxygen documents that are included in the installation. These files can be found in the Install_dir/doc/ERSP-API/html directory for Linux and Windows. To find something in the Doxygen documents, open the index.html file in your Internet browser. Click on the Compound List hyperlink. Use your browser's Find function to find the behavior or task that you are looking for. The name of the behavior or task is a hyperlink to the detailed information you will need to create programs and scripts with ERSP.

Conventions
About X, Y Coordinates
This coordinate system, with the positive X axis pointing forward and the positive Y axis pointing toward the left, is the ordinary x, y coordinate system (positive X axis pointing to the right, positive Y axis pointing forward, +X, +Y values in the forward-right quadrant), but rotated 90 degrees counter-clockwise. Also note that the Z axis points straight up. The reason for the rotation is that the 0 degree mark (i.e. the positive X axis) needs to point forward. This coordinate system, with the X axis pointing forward, is the standard in all of robotics, and that is why ERSP uses it. The robotics coordinate system is always used in the resource configuration file. The following figure gives a visual representation of this coordinate system.

The next figure shows you how to use the robotic coordinate system while piloting your robot.

1. Robot starting position (0, 0) with the front of the robot pointing along the +X axis.
2. Robot path to new relative position of x=10, y=20.
3. Robot position after first relative move of x=10, y=20. Axes are redrawn so that the robot is again at the position (0, 0), with the front of the robot pointing along the +X axis.
4. Robot path to new relative position of x=10, y=-30.


5. Robot position after relative move of x=10, y=-30. The robot is facing in the direction it would have been facing if it had traveled in a straight line to its new position.

Camera Coordinates
The camera coordinate system is different from the X, Y coordinate system used for navigation. In the camera coordinate system, the Z axis points forward (the direction that the camera is facing), the positive X axis points to the right, and the Y axis points down, at 90 degrees to the Z, X plane.


The camera coordinate system is used for activities related to vision algorithms and camera calibration. An example of a function that uses this coordinate system is PointAndGo, described in the Existing Behaviors chapter of the ERSP User Guide.

Units
ERSP uses a certain set of default units for its functions. These are centimeters for forward and backward motion, radians for rotation, and seconds for time. These units are used at the resource and behavior levels. However, at the task level, you may use other units such as inches, feet, or meters for distance, degrees for rotation, or minutes for time. For example, when using tasks in Python scripts, you can use the setDefaultUnits function to set the units or the getDefaultUnits function to find out how your units are set. Below are some examples of how to change the default units being used in Python.

setDefaultUnits
Usage
import ersp.task
ersp.task.setDefaultUnits(ersp.task.UNIT_type, unit)

Parameters
UNIT_type

This parameter specifies the UNIT_type: UNIT_DISTANCE, UNIT_ANGLE, and/or UNIT_TIME.


unit

This parameter sets the units to be used for each UNIT_type. These are:
DISTANCE - This parameter can be set to cm (centimeters), ft (feet), m (meters), or in (inches).
ANGLE - The ANGLE parameter can be set to rad (radians) or deg (degrees).
TIME - This can be set to sec (seconds) or min (minutes).

Returns
Nothing.

getDefaultUnits
Usage
import ersp.task
ersp.task.getDefaultUnits(UNIT_type)

Parameters
UNIT_type

This parameter can be set to UNIT_DISTANCE, UNIT_ANGLE, or UNIT_TIME.

Returns
This function returns the distance, angle and/or time setting requested.

Setting Up Your Resource Configuration File


The primary resource configuration file is named resource-config.xml. This file can be found in the Install_dir/config/ directory in the default installation. Important Note: This file is already configured for the standard configuration of Evolution's SDK Robot. To configure this file, uncomment any areas of the file that pertain to your robot. For example, if you have a Gripper, uncomment the Gripper section of the file. The standard resource config file should look like this:
<?xml version="1.0" encoding="ISO-8859-1"?>
<Resources>
  <Dimensions>
    <Shape id="body" type="rectangular">
      <Parameter name="lz" value="50"/>
      <Parameter name="ly" value="50"/>
      <Parameter name="lx" value="60"/>
      <Parameter name="link" value="origin"/>
      <Parameter name="x" value="-15"/>
      <Parameter name="y" value="0"/>
      <Parameter name="z" value="25"/>
      <Parameter name="roll" value="0"/>
      <Parameter name="pitch" value="0"/>
      <Parameter name="yaw" value="0"/>
    </Shape>
  </Dimensions>
  <Devices>
    <DeviceBus id="rcm_bus0" type="Evolution.RCMBus">
      <!-- Motors -->
      <Device id="Drive_left" type="Evolution.RCMMotor">
        <Parameter name="address" value="1:0"/>
        <Parameter name="power_output" value="45"/>
        <Parameter name="position_factor" value="-1711"/>
        <Parameter name="cycle_time" value="150E-6"/>
        <Parameter name="link" value="origin"/>
        <Parameter name="x" value="0"/>
        <Parameter name="y" value="19"/>
        <Parameter name="z" value="0"/>
      </Device>
      <Device id="Drive_right" type="Evolution.RCMMotor">
        <Parameter name="address" value="0:0"/>
        <Parameter name="power_output" value="45"/>
        <Parameter name="position_factor" value="1711"/>
        <Parameter name="cycle_time" value="150E-6"/>
        <Parameter name="link" value="origin"/>
        <Parameter name="x" value="0"/>
        <Parameter name="y" value="-19"/>
        <Parameter name="z" value="0"/>
      </Device>
      <!-- Battery -->
      <Device id="Battery" type="Evolution.RCMBattery">
        <Parameter name="address" value="1:7" />
        <Parameter name="max_voltage" value="12.4" />
        <Parameter name="voltage_floor" value="10.5" />
        <Parameter name="voltage_multiplier" value="3.12766" />
        <Parameter name="voltage_output_eps" value="0" />
      </Device>
    </DeviceBus>
    <!-- Gripper -->
    <!-- Uncomment to use the ER1 Gripper accessory. -->
    <DeviceBus id="HID1" type="Evolution.USB">
      <Device id="gripper" type="Evolution.USBGripper">
        <Parameter name="address" value="0"/>
      </Device>
    </DeviceBus>
    <!-- IR sensors -->
    <!-- Uncomment to use the ER1 IR sensor accessory. -->
    <DeviceBus id="UsbHID" type="Evolution.USB">
      <Parameter name="device_index" value="0"/>
      <!-- Front right -->
      <Device id="IR_tne" type="Evolution.USBIrSensor">
        <Parameter name="address" value="0"/>
        <Parameter name="link" value="origin"/>
        <Parameter name="x" value="-10"/>
        <Parameter name="y" value="-20"/>
        <Parameter name="z" value="22"/>
        <Parameter name="roll" value="0"/>
        <Parameter name="pitch" value="0"/>
        <Parameter name="yaw" value="-pi/4"/>
      </Device>
      <!-- Front -->
      <Device id="IR_tn" type="Evolution.USBIrSensor">
        <Parameter name="address" value="1"/>
        <Parameter name="link" value="origin"/>
        <Parameter name="x" value="-15"/>
        <Parameter name="y" value="0"/>
        <Parameter name="z" value="40"/>
        <Parameter name="roll" value="0"/>
        <Parameter name="pitch" value="0"/>
        <Parameter name="yaw" value="0"/>
      </Device>
      <!-- Front left -->
      <Device id="IR_tnw" type="Evolution.USBIrSensor">
        <Parameter name="address" value="2"/>
        <Parameter name="link" value="origin"/>
        <Parameter name="x" value="-10"/>
        <Parameter name="y" value="20"/>
        <Parameter name="z" value="22"/>
        <Parameter name="roll" value="0"/>
        <Parameter name="pitch" value="0"/>
        <Parameter name="yaw" value="pi/4"/>
      </Device>
    </DeviceBus>
    <!-- Uncomment these IRs if you have second set. -->
    <!--
    <DeviceBus id="UsbHID2" type="Evolution.USB">
      <Parameter name="device_index" value="1"/>
      <Device id="IR_tse" type="Evolution.USBIrSensor">
        <Parameter name="address" value="1"/>
        <Parameter name="link" value="origin"/>
        <Parameter name="x" value="-30"/>
        <Parameter name="y" value="0"/>
        <Parameter name="z" value="22"/>
        <Parameter name="roll" value="0"/>
        <Parameter name="pitch" value="0"/>
        <Parameter name="yaw" value="-5*pi/6"/>
      </Device>
      <Device id="IR_tn2" type="Evolution.USBIrSensor">
        <Parameter name="address" value="0"/>
        <Parameter name="link" value="origin"/>
        <Parameter name="x" value="-10"/>
        <Parameter name="y" value="0"/>
        <Parameter name="z" value="10"/>
        <Parameter name="roll" value="0"/>
        <Parameter name="pitch" value="0"/>
        <Parameter name="yaw" value="0"/>
      </Device>
      <Device id="IR_tsw" type="Evolution.USBIrSensor">
        <Parameter name="address" value="2"/>
        <Parameter name="link" value="origin"/>
        <Parameter name="x" value="-30"/>
        <Parameter name="y" value="0"/>
        <Parameter name="z" value="22"/>
        <Parameter name="roll" value="0"/>
        <Parameter name="pitch" value="0"/>
        <Parameter name="yaw" value="5*pi/6"/>
      </Device>
    </DeviceBus>
    -->
    <!-- Cameras -->
    <DeviceBus id="Usb0" type="Evolution.USB">
      <Device id="camera0" type="Evolution.DirectXCamera"/>
      <!-- Uncomment to enable a second camera -->
      <!-- <Device id="camera1" type="Evolution.DirectXCamera"/> -->
      <Device id="joystick" type="Evolution.JoystickWin32DirectInput"/>
    </DeviceBus>
    <!-- Face -->
    <DeviceBus id="FGD0" type="Evolution.FGD">
      <Device id="robohead" type="Evolution.MorphedFace"/>
    </DeviceBus>
    <!-- Speech recognition -->
    <DeviceBus id="ASR0" type="Evolution.ASR">
      <Device id="winvoice_asr" type="Evolution.WinVoiceRecognizer"/>
      <Device id="winvoice_audio_level" type="Evolution.WinVoiceAudioLevel"/>
      <!-- Uncomment to use IBM Speech Recognition. -->
      <!-- <Device id="viavoice_asr" type="Evolution.ViaVoiceRecognizer"/> -->
    </DeviceBus>
    <!-- Text-to-speech -->
    <DeviceBus id="TTS0" type="Evolution.TTS">
      <Device id="win_tts" type="Evolution.WinTTS"/>
      <!-- Uncomment to use IBM Text-to-Speech. -->
      <!-- <Device id="eloquence_tts" type="Evolution.EloquenceTTS"/> -->
    </DeviceBus>
    <!-- Audio -->
    <DeviceBus id="Audio" type="Evolution.Audio">
      <Device id="audio_out" type="Evolution.WmmeAudioPlayback"/>
      <Device id="audio_in" type="Evolution.WmmeAudioRecord"/>
      <Device id="wav_file_audio_play" type="Evolution.WmmeFileAudioPlay"/>
      <Device id="wav_file_audio_record" type="Evolution.WmmeFileAudioRecord"/>
      <Device id="mp3_file_audio_play" type="Evolution.DirectXMP3FileAudioPlay"/>
    </DeviceBus>
  </Devices>
  <Groups>
    <Group id="drive" type="Evolution.Diff2Drive">
      <Member member_id = "Drive_left"/>
      <Member member_id = "Drive_right"/>
    </Group>
    <Group id="odometry" type="Evolution.Diff2Odometry">
      <Parameter name="polling_interval" value="50"/>
      <Member member_id = "Drive_left"/>
      <Member member_id = "Drive_right"/>
    </Group>
  </Groups>
</Resources>

For details on the XML tags used in the resource config file, see the Resource Configuration section of the Hardware Abstraction Layer chapter in the ERSP User Guide.

Schema Files
Behaviors require an .xml schema file that defines how they interface with each other. The default location for these is Install_dir/config/behavior/Evolution/<BehaviorName>.xml. In general, they belong in <some path>/behavior/<NameSpace>/<BehaviorName>.xml. For example, the PrintBehavior (from Samp_code_dir/behavior/tutorial/) has a schema file located in Samp_code_dir/config/behavior/Examples/PrintBehavior.xml. The system needs to be told where to look for these schema files, or the user will get errors when trying to run behave on a network that uses a behavior with a missing schema. To use the example behaviors, modify the following line of $HOME/.bash_profile:
export EVOLUTION_CONFIG_PATH=/opt/evolution_robotics/config

to read:

export EVOLUTION_CONFIG_PATH=/opt/evolution_robotics/config:/opt/\
evolution_robotics/sample_code/config

so that the examples know where to find their schema files.

Behave Command
The behave command is used to execute behaviors; its usage is shown below. For more information on behaviors, see the Behavior Execution Layer and Behavior Libraries chapters of the ERSP User Guide.

Usage
behave [Options] <behavior_path>

Parameters
--help                               Print this usage.
--debug[=<category>]                 Debugging category.
--duration=<seconds>                 Duration in seconds (at least 0.01s).
--invocation-count=<invocations>     Number of invocations.
--invocation-interval=<seconds>      Interval between invocations in seconds (at least 0.01s).
--without-resources                  Do not load hardware resources.
--load-all-resources                 Load all resources at start (default loads only as needed).

Configuring Your IR Sensors


First, a few things you need to know: If you are facing the robot, left is east, the front of the robot is north, right is west, and the back of the robot is south. In order to know which IR sensor corresponds to which physical sensor, you need to use the IR sensor test program named test_range_sensor. (This is the same test you used in the Installing ERSP chapter.) Waving your hand in front of a sensor will change the corresponding sensor reading.
1. In Windows, on the DOS command line, type:
cd Install_dir\bin
test_range_sensor.exe

In Linux, type:
cd Install_dir/evolution_robotics/bin
$ ./test_range_sensor

You should see something like:


*** test_range_sensor ***
Obtained range sensors: IR_tn, IR_tne, IR_tnw
IR_tn:  distance = 59.06  raw = 118  time = 9.621e+004
IR_tne: distance = 45.94  raw = 165  time = 9.621e+004
IR_tnw: distance = 50.1   raw = 152  time = 9.621e+004
IR_tn:  distance = 56.84  raw = 141  time = 9.621e+004
IR_tne: distance = 42.38  raw = 171  time = 9.621e+004
IR_tnw: distance = 46.94  raw = 162  time = 9.621e+004
IR_tn:  distance = 51.92  raw = 142  time = 9.621e+004
IR_tne: distance = 51.45  raw = 147  time = 9.621e+004
IR_tnw: distance = 56.14  raw = 129  time = 9.621e+004

The information for the sensor with your hand in front of it will change. Use this information to place the sensor in the proper location on the robot.

Configuring Speech Recognition and Text-to-Speech


If you would like your robot to recognize your speech, or to speak written text, you must configure your system to process this data.

In Windows
Microsoft's speech recognition program, WinVoice, works better after it has been trained on the user's voice. To train the speech recognition software, open the Microsoft Speech applet in the Control Panel and click the Train button on the Speech Recognition tab. The speech software will prompt you from there.

In Linux
ViaVoice Setup


In order to use speech recognition, you need to have a ViaVoice directory in your home directory. This directory contains all the user-related ViaVoice speech parameters, and it is used by ViaVoice to dump running logs and other data. There are two ways in which you can set up the ViaVoice directory in your user directory.


1. Read the README file located in /usr/doc/ViaVoice/sdk.readme.txt, and then run vvstartuserguru.
2. Make a symbolic link to the ViaVoice directory in the sample_code directory by typing:
$ cd $HOME
$ ln -s $SAMPLE_CODE_INSTALL_DIR/viavoice viavoice

ViaVoice ASR Environment Variables Setup


The ASR engine uses a variety of environment variables to know where its resources are located. The setup for these variables can be performed using the vvsetenv script provided by ViaVoice. This script needs to be loaded before running the ASR. There are two ways of loading it:
1. Type:
source vvsetenv

2. Add a line to your .bash_profile or your .bashrc that says:


source vvsetenv

About Text to Speech
The speech synthesis engine (TTS) uses the E-sound daemon to send the utterances to the speakers. Therefore, the daemon MUST be running before any TTS-enabled program is run. In order to activate the E-sound daemon, you must execute the command esd&.

Grammars
Both WinVoice and ViaVoice support the use of grammar files. Grammar files are used to increase the accuracy and speed of ERSP's voice recognition. Each grammar file contains a list of words and phrases that you would like your robot to understand. Any words and phrases that are not specified in this file will be ignored. For information on file formatting, see Appendix A, Grammar Information, of the ERSP User Guide.


Chapter 4

Tutorials

Getting Started with Visual C++ Projects


To simplify programming with our APIs in Microsoft Windows, we have created several general-purpose Microsoft Visual C++ .NET projects. These projects can be used to compile existing sample code or to build new applications/libraries from scratch.

Compiling and Building Existing Sample Code Projects


Open the *.sln file associated with the project. For this example, you will use behavior.sln (located in the Samp_code_dir\behavior directory). Double-click the behavior.sln file and wait for Microsoft Visual C++ .NET to launch the project. You should see a tree-view representation of the project; this view contains all of the projects that comprise the behavior solution. To compile and link the code, either press F7 or select Build\Build Solution from the menu bar. You should now be able to execute the generated code.

Compiling and Building New Applications


We have provided you with six Microsoft Visual C++ .NET quasi-project templates. These projects are quasi-templates because, at this time, they are not fully integrated with Visual C++ .NET's project wizard facility and will require some copying/pasting and renaming on your part. Now let's walk through the steps necessary to build a simple Hello World project.
1. Make a new directory and name it anything you like.
2. Copy the contents of Samp_code_dir\VC_Project_Template\Empty_Console_App into this directory.
3. Launch Microsoft Visual C++ .NET by double-clicking on Empty_Console_App.sln.
4. From the File menu, select Add New Item. From the dialog, select the C++ file. Enter SimpleTest in the name field of the dialog. Select Open. Visual C++ will add this new file into the project.
5. Insert the following text into the new file:
#include <iostream>

#include <evolution/core/base/Platform.hpp>

int main(int argc, char *argv[])
{
    std::cout << "Hello World" << std::endl;

    Evolution::Platform::millisecond_sleep(6000);

    return (0);
}

6. Now generate the executable by selecting Build/Build Solution (F7) from the menu. Select Debug/Run (F5) to execute the program.
7. You should now see a console window with the text Hello World in it. The window will be displayed for approximately 6 seconds.

This is a very simple example that demonstrates the use of one ERSP call (Evolution::Platform::millisecond_sleep(6000)). What makes these projects different from their standard Microsoft Visual C++ .NET project wizard equivalents is that they are set up with the correct attributes required by ERSP (paths to headers/libs, preprocessor directives, the correct Microsoft C++ runtime libraries, and compiler/linker flags).

Getting Started with Linux Projects


There are several files that have been created as Linux project example templates. These are located in the Linux_Project_Template directory of the ERSP installation. Begin with the README file, which gives an overview of the examples.

Before You Start


Several important directories are required for the operation of the tutorials. The first is the directory where ERSP is installed on the system. This directory is referred to in the tutorials as Install_dir. The other important directory is where the sample code containing the tutorials is installed. This directory is referred to as Samp_code_dir. In Linux, in the Samp_code_dir directory, be sure to run the command:
./configure -with-evolution-config=Install_dir/bin/evolution-config

to generate the proper make files for the tutorials and other sample code.

The Samp_code_dir/tutorial directory contains a number of tutorials designed to take the user through various features of ERSP. Tutorials are provided for the Hardware Abstraction Layer, Behavior Execution Layer, Task Execution Layer, and Python scripting, and are grouped into the following subdirectories of Samp_code_dir/tutorial: resource (HAL), behavior (BEL), task (TEL), and python (Python scripting). Each tutorial is in turn contained in its own subdirectory, labeled with a number and a descriptive name. Examples are the subdirectories 01-config-camera and 02-config-ir of Samp_code_dir/tutorial/resource. The numbers indicate the order in which the tutorials should be performed, because later tutorials often build on the skills learned in earlier tutorials.

Most tutorials require command line execution of programs. Linux developers should work on the tutorials in command line shells with the active directory changed to the directory containing the tutorial. In Linux, the bin directory, Install_dir/bin, should be part of the system path, so that ERSP tools and programs can be invoked on the command line without typing the full path. The EVOLUTION_CONFIG_PATH variable should contain the Samp_code_dir/config path, so that ERSP can find the various configuration and schema files used by the sample code in the tutorials. The CXX and CC environment variables should also be properly set for the GNU C and C++ compilers installed on your system. See the INSTALL file in the sample code directory for more details.

The robot that you are using with ERSP should be connected to the computer used for the tutorials. Additional peripherals like cameras and sensors might need to be connected to the robot for particular tutorials. The tutorial prerequisites will indicate which additional peripherals are required.

Task Tutorials
01-simple

Purpose
Tasks are useful for scripting a sequence of actions to be taken by the robot. This tutorial demonstrates how to sequence two simple tasks. This example will show you how to use simple tasks, including setting default units, setting up task context and arguments, and receiving task values.


Prerequisites
An ERSP-supported camera must be connected to the robot. (See http://www.evolution.com/support/recommended.masn#hub for a listing of approved cameras.)
The Install_dir/bin directory must be in the system executable path.
The ERSP sample code package should be installed as described in the Installing ERSP chapter.
The active directory should be Samp_code_dir/tutorial/task/01-simple.
The robot must have one meter of clearance in front of it.

Task
This tutorial will walk you through the process of sequencing two simple tasks. The tasks will move the robot forward 20 inches and then take a picture with the robots camera. The source file used in this tutorial is the simple.cpp file in Samp_code_dir/tutorial/task/01-simple. Note that the TEL supports the use of units other than the centimeter/radian/second used by the Behavior and Hardware layers. The file begins by specifying some default units for the three unit categories: distance, angle, and time with the following code:
Units::set_default_units(UNIT_DISTANCE, "inches");
Units::set_default_units(UNIT_ANGLE,    "degrees");
Units::set_default_units(UNIT_TIME,     "seconds");

All values of distance, angle, and time, as well as all derived values, such as velocity (distance / time), will be assumed to be in the specified units. On to the first task: moving forward 20 inches. For this task, use Evolution.DriveMoveDelta. It commands the drive system to move a specified delta distance from the current position. To find the task, look for it in the task registry by name. If the task is found, the task registry will return a pointer to the desired task's functor (an object that wraps a single function call). Task functors wrap the run method, which executes the task. Here is the code to get the DriveMoveDelta task functor from the task registry:
TaskFunctor* drive_move_delta = TaskRegistry::find_task("Evolution.DriveMoveDelta");

Most tasks require that some arguments be specified to determine how the task should perform. DriveMoveDelta requires you to specify how far the drive system should move, how fast it should move, and how fast it should accelerate while moving. Task arguments are specified in a TaskArg object and are then stored in a TaskContext. Say you want to move the robot forward 20 inches at a velocity of 5 inches/second and an acceleration of 20 inches/second^2. The code below constructs the above task arguments in a TaskArg object and then creates a TaskContext object to hold the task arguments:
TaskArg args[] = { 20, 5, 20 };

// Arguments to the task have to be specified in a task context.
TaskContextPtr context(TaskContext::task_args(3, args));


The TaskContext::task_args method creates a task context holding the TaskArgs object with the arguments. The first parameter specifies the number of arguments. The second parameter is the TaskArg object. The method returns a pointer to a heap-allocated TaskContext object. To prevent you from having to delete the TaskContext object after using it, the above code uses the smart pointer type TaskContextPtr to keep track of the TaskContext pointer returned by the task_args call. The TaskContext object will automatically be cleaned up when the smart pointer TaskContextPtr context object goes out of scope. You are now ready to run the DriveMoveDelta task and will do so by calling the run method:
drive_move_delta->run(context.get());

The run method takes the TaskContext pointer containing the arguments for the task. Recall that the context object is actually the smart pointer TaskContextPtr. Calling its get method returns the raw TaskContext pointer that the run method takes as its sole parameter.

After executing the move task, it's time to take a picture. You can do this using the Evolution.GetImage task. This task needs to be set up with arguments and context, just like the previous task. The code for all of this is in the simple.cpp file and follows the same pattern as for Evolution.DriveMoveDelta, so it won't be discussed here. The one difference regarding this second task is that you are interested in the task's return value. The run method of all task functors returns a pointer to a TaskValue type, a variant type which can contain one of many types used in ERSP. The return value from the GetImage task is the image obtained by the task from the camera. The following code line executes the GetImage task functor while preserving its return value:
TaskValuePtr result (get_image->run (context1.get()));

The run method is once again called to execute the task, and the returned pointer to the TaskValue type is wrapped in the smart pointer TaskValuePtr result object. Again, this is done to keep the user from having to manually delete the returned TaskValue pointer after using it. After a call to run, a pointer to the task is available by calling the get_task method of the task context, from which the task's execution status can be obtained with the get_status call. The following code checks to make sure that the GetImage task was executed successfully and, if so, obtains the image from the TaskValue and saves it to a file:
if (context1->get_task ()->get_status () == TASK_SUCCESS) {
    // Obtain the image from the task result.
    Image* incoming_image = result->get_image();
    incoming_image->write_file ("image.jpg", .9);
}

Build the simple.cpp file by typing make on Linux or build the Visual Studio project in Windows and run the tutorial program. The robot should move forward 20 inches, then take a picture and save it to a file named image.jpg.
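For reference, here is the whole sequence gathered into one listing. This is a minimal sketch assembled from the snippets above, not a verbatim copy of simple.cpp; the include paths and the assumption that these types live in the Evolution namespace are guesses, and error handling is omitted.

// Minimal sketch of the 01-simple sequence (not the actual simple.cpp).
// The include paths below are assumptions and may differ in your installation.
#include <evolution/core/task/TaskRegistry.hpp>
#include <evolution/core/task/TaskContext.hpp>

using namespace Evolution;  // assumption: the TEL types live in this namespace

int main()
{
    // Use inches, degrees, and seconds instead of the defaults.
    Units::set_default_units(UNIT_DISTANCE, "inches");
    Units::set_default_units(UNIT_ANGLE,    "degrees");
    Units::set_default_units(UNIT_TIME,     "seconds");

    // Move forward 20 inches at 5 in/s with 20 in/s^2 acceleration.
    TaskFunctor* drive_move_delta =
        TaskRegistry::find_task("Evolution.DriveMoveDelta");
    TaskArg move_args[] = { 20, 5, 20 };
    TaskContextPtr context(TaskContext::task_args(3, move_args));
    drive_move_delta->run(context.get());

    // Take a picture with camera0 (the GetImage argument shown in the
    // 02-parallel tutorial) and save it if the task succeeded.
    TaskFunctor* get_image = TaskRegistry::find_task("Evolution.GetImage");
    TaskArg image_args[] = { "camera0" };
    TaskContextPtr context1(TaskContext::task_args(1, image_args));
    TaskValuePtr result(get_image->run(context1.get()));

    if (context1->get_task()->get_status() == TASK_SUCCESS)
    {
        Image* incoming_image = result->get_image();
        incoming_image->write_file("image.jpg", .9);
    }
    return 0;
}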

Summary
This tutorial illustrates the basic steps to using a task. First, the default units are specified if units other than the default set of centimeters, radians, and seconds will be used. Next, pointers to functors of the tasks to be used are obtained from the task registry. Arguments to the task are then created in a TaskArg object and assigned to a task context with the TaskContext::task_args method. The smart pointer type TaskContextPtr can be used to automatically clean up the task context object. The task can now be executed with a call to the run method, passing in the task context pointer as the sole parameter. The task execution may return a value, as in the case of GetImage. The returned value can be wrapped in a TaskValuePtr smart pointer object for ease of maintenance and used appropriately.

02-parallel

Purpose
In the previous tutorial you saw how to find and run tasks in sequence. This tutorial will show how tasks can be run in parallel. Multiple tasks can be set to run in parallel until one of the tasks completes or until all of the tasks complete.

Prerequisites
A supported camera must be connected to the robot. See the Evolution website at http://www.evolution.com/support/recommended.masn#hub for a list of approved cameras.
The Install_dir/bin directory must be in the system executable path.
The ERSP sample code package must be extracted, and the active directory should be Samp_code_dir/tutorial/task/02-parallel.
There should be one meter of clearance in front of the robot.

Task
The file parallel.cpp in the Samp_code_dir/tutorial/task/02-parallel directory contains the source code for this tutorial. The source code starts by setting units and creating a context, which should be familiar after the previous tutorial. Next a Parallel object is constructed. This manages the parallel execution of multiple tasks and takes a TaskContext pointer as the sole parameter to its constructor:
Parallel parallel(context);

The next step is to create the tasks you want to run in parallel and add them to the Parallel object. You will be using DriveMoveDelta and GetImage again. The add_task method of Parallel takes three parameters: the task functor pointer, the number of arguments, and the TaskArg object containing the arguments. Adding a task involves using the TaskRegistry to locate the task functor, creating the TaskArg object with the arguments, and calling the add_task method, as shown here for DriveMoveDelta and GetImage:
// Get a task functor for DriveMoveDelta.
TaskFunctor* drive_move_delta =
    TaskRegistry::find_task("Evolution.DriveMoveDelta");

// Specify the arguments to the DriveMoveDelta tasks.
TaskArg args[] = { 20, 5, 20 };

// Add the DriveMoveDelta task to the Parallel object.
Task* task1 = parallel.add_task(drive_move_delta, 3, args);

// Get a task functor for GetImage.
TaskFunctor* get_image = TaskRegistry::find_task("Evolution.GetImage");

// Specify the arguments to the GetImage tasks.
TaskArg args1[] = { "camera0" };

// Add the GetImage task to the Parallel object.
Task* task2 = parallel.add_task(get_image, 1, args1);

Note that the add_task method returns a pointer to the task added to the Parallel object. This pointer can be used to retrieve the task's execution status and return value after the parallel execution. You are now ready to run the tasks in parallel with the following code:
// Execute both tasks and wait until both tasks are done.
parallel.wait_for_all_complete_tasks();

The above call to the wait_for_all_complete_tasks method simultaneously starts all tasks that have been added to the Parallel object and waits until all those tasks are done. There is also a wait_for_first_complete_task method that terminates all remaining tasks when one task is done. Once the tasks have executed, task2 (the task pointer returned by the add_task call that added the GetImage task to the Parallel object) can be used to see if the GetImage task completed successfully and, if so, to obtain the image from the task result. This is done by the following code:
// Verify success of the GetImage task.
if (task2->get_status () == TASK_SUCCESS) {
    TaskValue result = task2->get_result();

    // Obtain the image from the task result.
    Image* incoming_image = result.get_image();
    if (incoming_image == 0) {
        std::cerr << "Invalid image returned.\n";
    } else {
        incoming_image->write_file ("image.jpg", .9);
    }
} else {
    std::cerr << "Error getting image.\n";
}
Build the parallel.cpp file by typing make on Linux or build the Visual Studio project in Windows, and run the tutorial program. The robot should move forward 20 inches as before. However, if the robot starts out at the same place as in the first tutorial, the saved image should be different: because the move and the image capture start at the same time and run in parallel, the picture is taken at the beginning of the move rather than at the end.
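The tutorial only checks the GetImage task. If you also want to confirm that the move itself completed, the task1 pointer returned by add_task for DriveMoveDelta can be checked in the same way; the fragment below is a sketch and is not part of parallel.cpp:

// Sketch (not in parallel.cpp): verify the DriveMoveDelta task as well,
// using the task1 pointer returned by parallel.add_task() above.
if (task1->get_status () == TASK_SUCCESS) {
    std::cout << "DriveMoveDelta completed successfully." << std::endl;
} else {
    std::cerr << "DriveMoveDelta did not complete successfully." << std::endl;
}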

Summary
This tutorial shows how to use the Parallel object to start multiple tasks in parallel. Task functors are obtained from the task registry and added to the Parallel object along with their arguments using Parallel's add_task method. This method returns a pointer to the task, which can be used to obtain the task's success status and return value after the task is run. To start the added tasks in parallel, use the wait_for_all_complete_tasks method to start all added tasks and wait until all tasks are complete. The wait_for_first_complete_task method can also be used to start all tasks, but it stops after one task is done and terminates the rest.

03-custom-task

Purpose
This tutorial will demonstrate how to create a reusable custom task contained in its own library. The steps required to create a task will be discussed in the context of creating a task that uses the camera to repeatedly take photos. The tutorial will highlight a number of issues specific to task creation, such as unit conversion, parameter handling, and returning task results. A test program that makes use of the task will also be provided.

Prerequisites
A supported camera must be connected to the robot.
The Install_dir/bin directory must be in the system executable path.
The ERSP sample code package should also be extracted, and the active directory should be Samp_code_dir/tutorial/task/03-custom-task.
There should be one meter of clearance in front of the robot.

Task
Suppose that you want to take photos at regular intervals while moving. This cannot be done by the GetImage task used in the last couple of tutorials. A new custom task will need to be created to do this, and this tutorial will show how to create such a custom task. The custom task will be called PhotoShoot and will be stored in its own library, so that it can be easily reused. The source file for this task is PhotoShoot.cpp in the Samp_code_dir/tutorial/task/03-custom-task directory. In the same directory, there is also the ExampleTypes.hpp file. As with other sample code, the PhotoShoot task class is implemented in the Examples namespace. The ExampleTypes.hpp file contains a number of typedefs and macros that declare Evolution types in the Examples namespace so that the tutorial code is more concise and readable. There is also a test_shoot.cpp file in the same directory, which provides an example of how to use the PhotoShoot task and will be discussed later in this tutorial.

Open up the PhotoShoot.cpp file in a text editor for reference throughout the tutorial. There are some comments immediately inside the Examples namespace describing the functionality and arguments of this task. Briefly, this task will take photos from the camera at regular intervals. The task takes two arguments: a delay argument to specify the time interval between successive photos, and an optional stop_count argument to specify that the task should stop after taking a certain number of photos. If this optional argument is not specified, the task will continue indefinitely. When done, it will return the number of photos taken.

Now let's proceed to look at the source code for PhotoShoot. The code starts with the following macro:
ERSP_DECLARE_TASK_LINKAGE (PhotoShoot, "Examples.PhotoShoot", EVOLUTION_EXPORT);

This macro declares the new PhotoShoot task. The first parameter is the C++ class name of the task. The second parameter is the new task's string ID. The third parameter is an export macro that contains a platform-specific linkage directive. The macro EVOLUTION_EXPORT should be properly defined for the current platform and should be used for this third parameter. In Linux, the EVOLUTION_EXPORT macro should be defined as nothing. In Windows, when using Visual Studio, the following definition should be used:
#define EVOLUTION_EXPORT __declspec(dllexport)

Next in the source code is the following macro:


ERSP_IMPLEMENT_TASK (PhotoShoot)

This macro does pretty much what it claims by implementing the task in a single function body. All code to perform the task will be contained in this single block after this macro. This code block begins by defining a number of useful variables, including the two that will contain the argument values: delay and stop_count. The last variable defined, image_count, will be used to keep track of how many images have been taken. This value will be returned when the task is done.

Unit conversion follows the variable declarations. As mentioned previously, the TEL fully supports the use of a variety of units. The job of making the proper conversion falls to the task implementation. The values of the arguments passed in are assumed to be in default units, so the task must convert these values into the specific units used by the task internally. The one argument of PhotoShoot that needs to be converted is delay, which is a time value. Later, you will be using the millisecond_sleep method to specify the interval between successive photos, so internally the time unit used by the task is milliseconds. ERSP provides the Units::convert_to_specified_units method to return a scale factor between the default units and the specified units. PhotoShoot calls this method to obtain the factor between milliseconds and the default unit as follows:
// Unit conversion.
double time_factor;
Evolution::Units::convert_to_specified_units (
    Evolution::UNIT_TIME, "millisecond", 1, &time_factor);

The third parameter in the above method call is the amount to convert from default units to specified units. Because the time factor is the conversion value of 1 default unit in specified units, this parameter is 1. The factor is now stored in the time_factor variable. To convert time-unit argument values to milliseconds, you need only multiply the argument value by time_factor. Now it is time to obtain the arguments, which is done with the following code:
// Retrieve task arguments.
const TaskArgVector& args = *(context->get_arguments());

// Check number of arguments.
unsigned int argnum = 1;
if (args.size() < argnum)
{
    // Not enough arguments. At least 1 argument (delay) is required.
    ERSP_LOG_WARN ("At least %d arguments are required.", argnum);
    context->set_failed();
    return (NULL);
}
if (args[0].get_data_type() != TaskArg::TYPE_DOUBLE)
{
    // Wrong argument type: delay must be a double.
    ERSP_LOG_WARN ("delay argument must be a double.");
    context->set_failed();
    return (NULL);
}
// Assign delay value converted to milliseconds.
delay = (unsigned long) (args[0].get_double() * time_factor);

With tasks, it is a good idea to verify that the proper arguments have been sent by the user. The above code does this. First, it checks to see if the minimum number of arguments (at least 1) was sent. It then checks the data type of the first argument to make sure that it is a double, the expected type for the time delay argument. When both of these checks pass, the argument value is multiplied by time_factor to convert it to milliseconds before it is assigned to the delay variable. This variable is of type unsigned long because that is the type expected by the millisecond_sleep call. The next few lines of code check to see if the optional stop_count argument is given by checking if there are more than the minimum number of arguments present. If the optional argument is there, the code checks if the type of the argument is a double, as expected. A further check is made to make sure the value is a positive number, because a negative or zero stop count does not make any sense. Here is the described code:
// Check to see if optional stop_count argument is present.
if ((args.size() > argnum) &&
    args[argnum].get_data_type() == TaskArg::TYPE_DOUBLE)
{
    stop_count = (int) args[argnum].get_double();

    // Verify that stop_count is a positive number.
    if (stop_count <= 0)
    {
        ERSP_LOG_WARN ("stop_count argument must be positive.");
        context->set_failed();
        return (NULL);
    }
}


With the task arguments parsed and verified, the next step is to get ready for taking photos. For simplicity's sake, this task will try to automatically detect the presence of a camera and use the first camera it finds. It does so with the help of the TaskUtils::search_for_resource method, as shown by the following code:
// Autodetect camera.
result = Evolution::TaskUtils::search_for_resource(*context, resource_name,
                                                   interface_name, &resource);
if (result != RESULT_SUCCESS)
{
    ERSP_LOG_WARN ("Camera autodetect failed");
    context->set_failed();
    return(NULL);
}

If a camera is present, its resource identifier will be returned in the resource variable, which is passed as the last parameter to the search_for_resource call. The resource identifier can then be used in an IResourceContainer::obtain_interface call to retrieve the ICamera resource interface. The procedure for obtaining resource interfaces has been discussed previously. However, there is one line of code worth discussing:
result = context->get_task_manager()->get_resource_container (&container);

The above line of code shows how an IResourceContainer interface pointer can be obtained from the task context pointer. This method is useful whenever the IResourceContainer interface pointer is required for task development. With an ICamera resource interface pointer in hand, you are ready to take the photos. This is done by the following loop:
// Loop to shoot photos as long as no stop count is specified,
// or the image count has not yet reached the stop count.
while (!context->termination_requested() &&
       (stop_count == 0 || image_count < stop_count))
{
    // Take a photo with the camera.
    result = camera_iface->get_image_copy(0, &image);
    if (result != RESULT_SUCCESS)
    {
        ERSP_LOG_WARN ("Failed to get image from camera %s", resource.c_str());
        context->set_failed ();
        goto ReleaseInterface;
    }

    // Increment image count.
    image_count++;

    ERSP_LOG_DEBUG ("Got image -- height: %d width: %d",
                    image.get_height(), image.get_width());

    // Write out photo to numbered file.
    char file_name[20];
    sprintf(file_name, "image%03d.jpg", image_count);
    image.write_file(file_name, 0.9);

    // Time delay until the next photo.
    millisecond_sleep(delay);
}

The loop runs as long as the task has not been terminated and the stop_count either is not specified (i.e., == 0) or is specified but still greater than the number of images taken (image_count). During each loop iteration, a photo is taken with the call to ICamera::get_image_copy. Assuming this is successful, image_count is incremented, a numbered file name is constructed with the sprintf call, and the image is written out to this file name with the write_file call. Finally, millisecond_sleep is called to wait for the specified interval between photos. When the task is done, the following code returns the number of images taken as the task return value:
// Package our output_values into TaskValue.
retval = new TaskValue();
retval->set_double((double) image_count);

After this there is just some cleanup code that calls IResourceContainer::release_interface to release the camera interface object used in this tutorial. To build the new library containing the PhotoShoot task, just type make in Linux or build the Visual Studio project file libCustomTask.vcproj.

The test_shoot.cpp program in the same directory gives an example of how to use the PhotoShoot task. This program is pretty much the same as parallel.cpp in the 02-parallel tutorial, except that it uses PhotoShoot instead of GetImage and wait_for_first_complete_task instead of wait_for_all_complete_tasks. In test_shoot.cpp, PhotoShoot is used without the optional stop_count parameter, so on its own it would run forever. This ensures that the DriveMoveDelta task will complete first and terminate the PhotoShoot task when it's done. The net effect is that the robot moves as specified by DriveMoveDelta, taking photos at regular intervals along the way. When done, the program prints out PhotoShoot's return value, which is the number of photos taken.

To build the test application on Linux, type make in the 03-custom-task directory. On Windows, build the Visual Studio project file test_shoot.vcproj. Run the test_shoot program in the 03-custom-task directory to see the PhotoShoot task in action. The robot should move forward and write out several numbered JPEG image files.
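To make the structure of the test program concrete, the fragment below is a condensed sketch patterned on parallel.cpp from the previous tutorial. It is not the actual test_shoot.cpp source: the 1.0 delay value is an arbitrary illustration, error handling is omitted, and the get_double accessor on TaskValue is assumed as the counterpart of the set_double call used in PhotoShoot.

// Sketch only -- patterned on parallel.cpp, not the actual test_shoot.cpp.
// context is a task context created as in parallel.cpp.
Parallel parallel(context);

// DriveMoveDelta: move 20 units forward at velocity 5 and acceleration 20.
TaskFunctor* drive_move_delta =
    TaskRegistry::find_task("Evolution.DriveMoveDelta");
TaskArg move_args[] = { 20, 5, 20 };
parallel.add_task(drive_move_delta, 3, move_args);

// PhotoShoot: the 1.0 delay (in default time units) is an assumed value;
// no stop_count is given, so the task keeps shooting until terminated.
TaskFunctor* photo_shoot = TaskRegistry::find_task("Examples.PhotoShoot");
TaskArg photo_args[] = { 1.0 };
Task* photo_task = parallel.add_task(photo_shoot, 1, photo_args);

// Start both tasks; when DriveMoveDelta finishes, PhotoShoot is terminated.
parallel.wait_for_first_complete_task();

// PhotoShoot stores the number of photos taken as a double (set with
// set_double in PhotoShoot.cpp); get_double() is assumed to be the accessor.
TaskValue photos = photo_task->get_result();
std::cout << photos.get_double() << " photos taken." << std::endl;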

Summary
This tutorial described the procedures for creating a custom task. Macros used to declare and implement a task were discussed, along with the unit conversions that task implementations must perform on task arguments. Examples were also given of how task arguments should be validated and processed. The tutorial code also demonstrated how resource interfaces can be obtained from task code if necessary. An implementation of the PhotoShoot task was given. Finally, the tutorial code showed how task return values need to be set up.


04-event

Purpose
This tutorial shows how task events can be used to communicate between tasks that are running in parallel. Events are inter-task messages identified by a string name. Events raised in one task can be received by other tasks that are waiting for events. Tasks can specify the events they are interested in by giving a wild card pattern instead of a specific name. This tutorial will take the reader through the process of raising and waiting for events, enabling event queuing, and other issues related to the use of events.

Prerequisites
A supported camera must be connected to the robot.
The Install_dir/bin directory must be in the system executable path.
The ERSP sample code package must be extracted, and the active directory should be Samp_code_dir/tutorial/task/04-event.

Task
In this tutorial you will expand on the PhotoShoot task from the previous tutorial. This tutorial adds the ability for the task to start and stop shooting photos on demand through the use of task events, a mechanism for parallel tasks to communicate with each other. This new task is called PhotoEvent, and the source code for the task is in Samp_code_dir/tutorial/task/04-event/PhotoEvent.cpp. There is also a test_event.cpp file in the same directory. This file runs PhotoEvent along with a test task in parallel. The test task sends events to PhotoEvent to command it to start taking photos, stop taking photos, and eventually to end.

You will first look at the PhotoEvent.cpp file. The first apparent change in the code is the task description comment. Instead of parameter descriptions, the comments now describe incoming and outgoing events. The incoming events are Examples.PhotoEvent.Shoot, which starts a new photo shoot according to three familiar properties (delay, count, and name); Examples.PhotoEvent.StopShoot, which stops any photo shoot in progress; and Examples.PhotoEvent.End, which tells PhotoEvent to end. The outgoing events are Examples.PhotoEvent.Done, which indicates that a photo shoot is complete, and Examples.PhotoEvent.Error, which indicates some error condition in the task. The event name strings are assigned to the variables EV_SHOOT, EV_STOP, and EV_DONE. The first thing that the task code does is retrieve the task manager and task pointers from the context, as shown here:
// *New* TaskManager pointer.
TaskManager* manager = context->get_task_manager();

// *New* Task pointer.
Task* task = context->get_task();


Raising an event requires a call to the task manager's raise_event method, while waiting for events requires a call to the task's wait_for_event method. Therefore, these pointers are very useful when using events. The next thing that is done is a call to the task's enable_event method with the event name pattern that the task is interested in:
task->enable_event("Examples.PhotoEvent.*");

Without the enable_event call, any event that is raised when the task is not explicitly waiting for events is lost. The enable_event call allows raised events to be queued for processing by the next wait_for_event call. Each enable_event call requires a corresponding disable_event call when events no longer need to be queued. An asterisk can be used as a wild card to enable a range of events, as in the above code.

After the above comments, changes to PhotoEvent.cpp from PhotoShoot.cpp are preceded by the *New* indicator. Searching for successive occurrences of this string will skip to the PhotoEvent-specific code. The first changes in PhotoEvent are some new variables:

// *New* The image count of the current photo shoot.
int current_count = 0;

// *New* The file name to write images out.
String image_name;

// *New* A flag to indicate if the task is idle (i.e.
// no pending command) or is taking photos.
bool idle = true;

PhotoEvent can shoot different series of photos, each with a different file name prefix. The variable current_count keeps track of the image count in each series, while image_name stores the file name prefix. The idle flag indicates whether PhotoEvent is idle with no pending commands or is in the process of shooting photos from a previous command.

The next *New* comment indicates that the argument processing code has been taken out of PhotoEvent because the task takes no arguments. All arguments to PhotoEvent will come through task events as event properties. The next change is the while loop, which now has no condition because it will be terminated by task events, not by variable values as in PhotoShoot. Just inside the loop is the first indication of event handling with the following code:
// *New* If commanded to take some photo, then wait
// the specified delay. Otherwise, just wait until
// the next command is received (i.e. wait_delay is
// ERSP_INFINITE).
unsigned long wait_delay = idle ? ERSP_INFINITE : delay;
Event e = wait_for_event("Examples.PhotoEvent.*", wait_delay);

This code decides how long to wait for the next event. If the value of idle is true, then wait_delay will be ERSP_INFINITE, meaning the code will wait as long as it takes for the next event, because there is nothing else to do. If idle is false, however, there is a photo shoot in progress, so the wait should only be as long as the specified delay between photos, which is the value of the variable delay. Once this delay is up, the task has to stop waiting and proceed to take the next photo. An asterisk can be used for wild card matching in the event name parameter of the wait_for_event call. In the above code, the string Examples.PhotoEvent.* indicates that this task is only interested in events that start with Examples.PhotoEvent. Next, the events, if any, need to be processed, which is done by the following code:
// *New* Event processing.
const char* type = e.get_type();
if (strcmp(type, EV_SHOOT) == 0)
{
    // Error event just in case.
    Event error("Examples.PhotoEvent.Error");
    error.set_property("error", "Invalid property type.");

    bool properties_ok = true;

    // Verify property types.
    const TaskValue& name_prop = e.get_property("name");
    if (name_prop.get_data_type() != TaskArg::TYPE_STRING)
    {
        properties_ok = false;
        manager->raise_event(error);
    }

    ...

}
else if (strcmp(type, EV_STOP) == 0)
{
    // Got stop event, so just go idle.
    idle = true;
}
else if (strcmp(type, EV_DONE) == 0)
{
    // You're done, so break out of while loop.
    break;
}

Here, the incoming event type is compared with the three events that PhotoEvent supports. For the EV_SHOOT event, there are three properties. Next, the types of the three event properties are verified. All three properties must be of the correct type before the event is accepted. For the EV_STOP event, you just set idle to true. For the EV_DONE event, you break out of the loop and end the task, returning the total count of images taken as in PhotoShoot before. After this, the photo taking itself is performed, but only if idle is false. The following code (after writing out the image) checks to see if the necessary number of images has been taken and, if so, raises an Examples.PhotoEvent.Done event to indicate this fact:
// Check to see if you're done.
if (current_count >= stop_count)
{
    idle = true;

    // Raise done event.
    Event done("Examples.PhotoEvent.Done");
    done.set_property("name", image_name.c_str());
    manager->raise_event(done);
}


Before returning the image count and ending the task, a call to disable_event is made to indicate events should no longer be queued for this task:
task->disable_event("Examples.PhotoEvent.*");

Now for a look at the test_event.cpp source file to see an example of how events are used. A custom task named Examples.PhotoEventTest is constructed in test_event.cpp. After the routine acquisition of task manager and task pointers, the task calls enable_event for Examples.PhotoEvent.* to enable event queuing. Then the test task constructs an Examples.PhotoEvent.Shoot event to take 3 photos with names a_xxx.jpg at 1 second intervals with the following code:
// Create an event to shoot 3 photos named a_xxx.jpg
Event shoot("Examples.PhotoEvent.Shoot");
shoot.set_property("name", "a_");
shoot.set_property("count", "3.0");
shoot.set_property("delay", "1.0");
std::cerr << "test_event - raising event: " << shoot << "\n";
manager->raise_event(shoot);

After raising the shoot event, it waits for the done event that the PhotoEvent task will send back. After receiving the done event, the test task raises an Examples.PhotoEvent.End event to tell the PhotoEvent task to end, and then calls disable_event before returning. The main method sets up the PhotoEvent and PhotoEventTest tasks for running in parallel. It then calls the wait_for_all_complete_tasks method to start the tasks and wait until they both complete.

Build both PhotoEvent.cpp and test_event.cpp by typing make on Linux or by building the Visual Studio project files on Windows. Running the test_event executable should produce three image files named a_00x.jpg, where x is 1, 2, or 3, and the following output:
test_event - raising event: {Event Examples.PhotoEvent.Shoot [time 0] (count: 3.000000) (delay: 1.000000) (name: "a_")}
test_event - received event: {Event Examples.PhotoEvent.Done [time 1.05513e+09] (name: "a_")}
test_event - raising event: {Event Examples.PhotoEvent.End [time 0]}
3 photos taken.

Summary
This tutorial goes over the process of using events to communicate among parallel tasks. The key methods for using events are the TaskManager::raise_event and the Task::wait_for_event methods. Events can contain additional information in properties, which are name-value pairs where the value is a variant that can contain a variety of data types. Because property values are variants, their types should be checked prior to use. Events can be lost if no task is waiting for that event when it is raised. Queuing of events can prevent this kind of loss and is enabled with the Task::enable_event call. When events should no longer be queued, such as when a task is about to exit, the Task::disable_event call should be used to stop the queuing. When specifying events, an asterisk can be used as a wild card to indicate patterns of events.
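Stripped of the tutorial details, the raise/wait pattern comes down to a handful of calls. The fragment below is a sketch under the assumptions that the manager and task pointers have been obtained from the task context as in the tutorial, that Examples.MyEvent.Ping is just an illustrative event name, and that the 5000 millisecond timeout is an arbitrary value:

// Sketch of the raise/wait pattern; names and timeout are illustrative only.

// In the receiving task (manager/task obtained from its own task context):
task->enable_event("Examples.MyEvent.*");                 // queue matching events
Event received = task->wait_for_event("Examples.MyEvent.*", 5000);
// ... inspect received.get_type() and its properties ...
task->disable_event("Examples.MyEvent.*");                // stop queuing when done

// In the sending task:
Event ping("Examples.MyEvent.Ping");
ping.set_property("payload", "hello");
manager->raise_event(ping);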


Python Tutorials
01-simple

Purpose
The goal of this tutorial is to demonstrate the basics of using the ERSP TEL from Python. It shows how to call tasks supplied in the ERSP task library directly from Python.

Prerequisites
A robot is needed for this tutorial. The program also uses text-to-speech, so it's good if you have that configured and working, but it's not a critical aspect of the tutorial. This tutorial requires basic knowledge of the Python programming language. The O'Reilly book Learning Python can be a helpful resource. The Python website at http://www.python.org/ has a lot of documentation, including a Python tutorial at http://www.python.org/doc/current/tut/tut.html

Exercise
The Python program for this tutorial is triangles.py. When executed, it causes the robot to move in a triangular pattern, that is, move forward, turn left 120 degrees, move forward, turn left 120 degrees, etc. The robot will complete 3 triangles. To execute the program, invoke the Python interpreter with triangles.py as an argument. From the command line, this is usually done by something like the following:
$ python triangles.py

The robot will begin moving, and will speak (and print to the console) messages about its progress:
Triangle number 1.
Beginning the triangle.
One third complete.
Two thirds complete.
100% complete!
Triangle number 2.
Beginning the triangle.
[etc.]

The first step when using ERSP from Python is to import the ersp.task module.
import ersp.task

The ersp.task module contains all the functions and classes that let you use ERSP from Python. In this tutorial you use existing tasks that are provided as part of ERSP. The ERSP task library is divided into several functional categories:
Navigation
Networking
Resources
Speech
Utility
Vision
Each sublibrary has a corresponding Python module that can be imported to give access to the tasks in the library:
ersp.task.navigation
ersp.task.net
ersp.task.resource
ersp.task.speech
ersp.task.util
ersp.task.vision

The program uses the MoveRelative and TurnRelative tasks, which are in the navigation library, to move forward and turn the robot. It also uses the Speak task from the speech library to speak its progress aloud. You must import the appropriate modules to use those tasks:
from ersp.task import navigation
from ersp.task import speech

Note that the from <module> import <submodule> syntax of Python is used, so that instead of having to refer to a task like ersp.task.navigation.MoveRelative, you can just say navigation.MoveRelative. Once you've imported the modules you need, and defined a few constants (like the velocity at which the robot should move and the length of a triangle side), triangles.py defines a function called triangle, which actually moves the robot. Before moving the robot however, it first speaks the text "Beginning the triangle." aloud:
speech.Speak("Beginning the triangle.")

You can see that calling a task directly from Python looks identical to calling a normal function. The truth is that the object named by speech.Speak is not a function, but an instance of a class that represents tasks.
>>> print speech.Speak
<ersp.task.TaskFunctorPtr instance at 0x81d4a2c>

When you call a task using the normal function call syntax, however, the task acts just like a function--it takes arguments, it returns only once the task is complete, and it can return a value when it's done. The code below first causes the robot to rotate to the left, then to move forward for one side of a triangle:
navigation.TurnRelative(math.pi / 3.0)
navigation.MoveRelative(0.0, [length, 0, velocity])

Note that the call to MoveRelative passes it two arguments, 0.0 and a list of length 3 that specifies the distance, the angular velocity and the linear velocity. If you look at the documentation for the MoveRelative task you'll see that the second argument must be a DoubleArray, which is just a C++ class that holds a sequence of floating point numbers.


When you call a task from Python, the ERSP Python interface tries to automatically convert the arguments into the correct form that the task is expecting. Anywhere it sees a Python tuple, e.g., (1, 2, 3), or a list, e.g., [1, 2, 3], it will convert the argument to a DoubleArray. When a task returns a DoubleArray, it will automatically be converted to a Python tuple. Most of these conversions are handled automatically, but not all. Because Python (as of this writing) does not have a distinct boolean datatype, when the documentation for a task says it takes a boolean argument you will need to supply either ersp.task.TRUE or ersp.task.FALSE. Once you've successfully run the triangles.py program, you may want to try a few modifications to increase your understanding of and familiarity with ERSP. You could try making the robot move in other patterns. You could take a look at the documentation of the ERSP task library and try using some of the other tasks supplied, whether they're different navigation tasks or something else entirely. The last thing the program does is shut down the task manager:
ersp.task.getTaskManager().shutdown()

This is a crucially important step that all programs using ERSP tasks must perform so that proper cleanup occurs.
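Putting the pieces above together, a condensed sketch of the whole program looks roughly like the following. The structure mirrors triangles.py but is assembled here from the calls shown in this tutorial; the constant values and the exact progress messages are illustrative, not copied from the shipped file.

import math
import ersp.task
from ersp.task import navigation
from ersp.task import speech

VELOCITY = 0.2       # linear velocity (illustrative value)
LENGTH = 0.5         # length of one triangle side (illustrative value)

def triangle():
    # Announce the start of a triangle, then drive its three sides.
    speech.Speak("Beginning the triangle.")
    for side in range(3):
        # The list argument is converted to a DoubleArray:
        # (distance, angular velocity, linear velocity).
        navigation.MoveRelative(0.0, [LENGTH, 0, VELOCITY])
        navigation.TurnRelative(math.pi / 3.0)
        speech.Speak("Side %d complete." % (side + 1))

for n in range(3):
    speech.Speak("Triangle number %d." % (n + 1))
    triangle()

# Required cleanup for any program that uses ERSP tasks.
ersp.task.getTaskManager().shutdown()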

Summary
This tutorial demonstrates how to use the ERSP Python interface to call tasks directly from Python programs as though they were normal Python functions.

02-parallel

Purpose
The goal of this tutorial is to demonstrate how to run multiple tasks in parallel using the ERSP Python interface.

Prerequisites
A robot is not strictly needed for this tutorial. The program uses only text-to-speech (TTS), so you should make sure you have that configured and working. This tutorial requires basic knowledge of the Python programming language. The O'Reilly book Learning Python can be a helpful resource. The Python website at <http://www.python.org/> has a lot of documentation, including a Python tutorial at <http://www.python.org/doc/current/tut/tut.html>. You should also have read and understood the first Python/ERSP tutorial, 01-simple, which shows how to use the ERSP Python interface and how to use tasks in a sequential, non-parallel manner.


Task
Echo.py is the Python program for this tutorial. When executed, it waits for you to type a line of text, then speaks the text back to you. The catch is that if it takes longer than one second to speak the text, the task will be interrupted and will return before all the text is spoken. To execute the program, invoke the Python interpreter with echo.py as an argument. From the command line, this is usually done by something like the following:
$ python echo.py

The program will then prompt for input:


The text-to-speech echo system is ready.
Type a line of text and it will be spoken.
Press ENTER on an empty line to quit.
Text to speak:

At this point you can type a line of text and the program will speak it back to you using TTS. If you type a short phrase, the entire phrase will be spoken and then you will see the "Text to speak:" prompt again. If you type a lengthier phrase that takes longer than one second for the speech synthesizer to speak, you will see the following text printed before the synthesizer finishes:
(Oops, that took too long!)
Text to speak:

The 01-simple tutorial showed you how to run tasks synchronously, one after another. The important player in running tasks in parallel, asynchronously, is the ersp.task.Parallel class. To run tasks in Parallel, one must first instantiate a Parallel object. You do this in the echo function:
p = ersp.task.Parallel()

One can now add tasks to the Parallel object with its addTask method. Adding a task and its arguments does not cause the task to begin executing right away.
speakingTask = p.addTask(speech.Speak, [text])
p.addTask(util.Wait, [TIMEOUT])

The first line adds the Speak task, which is in the ERSP task library, to the Parallel object. The second argument to addTask is a sequence (list or tuple) containing the arguments to the task. If no task arguments are being supplied, the second argument to addTask may be omitted. addTask returns a new instance of the task that was just added, which here you store in the speakingTask variable (this will be explained in a moment). The second line adds a Wait task, again from the ERSP task library, whose only argument is the number of milliseconds to wait. At this point there are two tasks added to the Parallel construct, neither of which has begun executing.


There are two ways that you can begin executing tasks in the Parallel construct. They are the same in that with both all tasks begin executing immediately; they vary only in when the Parallel is considered to be complete. Parallel's waitForFirstTask method begins executing the tasks, each in its own thread, and returns as soon as any one of the tasks has finished. Tasks that haven't finished yet are terminated, and then waitForFirstTask returns the task that finished first. The waitForAllTasks method, on the other hand, begins executing the tasks but doesn't return until all tasks have completed. In this program you want to speak the supplied text aloud, but only wait a maximum of one second for the TTS to complete. Use waitForFirstTask:
firstTask = p.waitForFirstTask()

If the Speak task finishes in less than one second, then waitForFirstTask will terminate the Wait task and return the Speak task instance. But if the Speak task takes longer than one second, then the Wait task will finish first, and the Parallel construct will terminate the Speak task and return the Wait task instance. (Note that the supplied TTS drivers aren't interruptible, so terminating the Speak task will not interrupt the speech.) Since you want echo to return true if the TTS finished speaking in one second, you compare the task instance returned by waitForFirstTask to the one you got when you added the Speak task to the Parallel construct. If they are the same, then Speak completed within the time limit:
return speakingTask == firstTask
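Putting these pieces together, the echo function and a minimal surrounding program look roughly like this. This is a sketch assembled from the snippets above; the TIMEOUT value and the input loop are illustrative rather than copied from the shipped echo.py.

import ersp.task
from ersp.task import speech
from ersp.task import util

TIMEOUT = 1000   # milliseconds to wait before giving up on the speech

def echo(text):
    p = ersp.task.Parallel()
    # Neither task starts until waitForFirstTask() is called.
    speakingTask = p.addTask(speech.Speak, [text])
    p.addTask(util.Wait, [TIMEOUT])
    firstTask = p.waitForFirstTask()
    # True if Speak finished before the Wait task, i.e. within the timeout.
    return speakingTask == firstTask

line = raw_input("Text to speak: ")
while line:
    if not echo(line):
        print "(Oops, that took too long!)"
    line = raw_input("Text to speak: ")

ersp.task.getTaskManager().shutdown()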

Once you have successfully run this tutorial program, you may want to try a few modifications to increase your understanding of and familiarity with the ERSP. One possible change would be to investigate the behavior of Parallel's waitForAllTasks method. Another would be to try the TaskManager's installTask method, which also runs tasks asynchronously, and see how it differs from using Parallel.

Summary
This tutorial demonstrates how to use the ERSP Python interface to run tasks asynchronously, in parallel.

Behavior Tutorials
Behaviors are biology-inspired modules that receive inputs and react by generating outputs. A task is to a behavior as the conscious mind is to the subconscious. An example of a conscious task for a person would be to drive to the store. However, along the way, the person always stops at red lights and goes on green lights without thinking much about it. The automatic reaction to red and green lights is a behavior. ERSP tasks and behaviors play the same roles for a robot. An example of a behavior is turning to avoid an obstacle. The input to this behavior would be a sensor reading. The output would be an angular velocity to turn away.


Each behavior is focused on a very specific functionality or reaction. However, complex autonomous behaviors can be created by connecting multiple behaviors together in a behavior network. Unlike tasks, which typically have a clear beginning and ending, behavior networks run constantly in cycles, so the behaviors in the network are always active and reacting, processing inputs into outputs, just like our subconscious.

01-network

Purpose
This tutorial will demonstrate the process of creating behavior networks using the Behavior Composer tool. The tutorial will take you through the process of laying out behaviors, setting behavior parameters, and connecting behaviors together to form a behavior network.

Prerequisites
ERSP must be installed and working. The sample code package must be installed. The active directory should be Samp_code_dir/tutorial/behavior/01-network. The Install_dir/bin directory must be in the executable path.

Task
Open the Behavior Composer by running composer.bat script (Windows) or evolution_java.sh composer (Linux) in the Install_dir/java directory. It should look similar to this:
[Screenshot of the Behavior Composer main window, with callouts labeling the Toolbar (Load Network, Save Network, and Wiring Mode buttons), the Behavior Palette, the Network Editor, the Property Panel, and the Status Bar.]


If the Network Editor window is not open, open one now. To do this, go to the Viewers menu and select BehaviorNetwork. Make sure you are in Edit mode by clicking on the Edit mode button in the Network Editor window. From the Drivers tab of the Behavior Palette, drag a Camera block into the Network Editor. If the new Camera block is not selected (indicated by a yellow border), select it now. You need to set the camera_id parameter to the appropriate name of the camera you want to use. This is found in the Install_dir/config/resource-config.xml file. Note that whenever a new value is entered into a parameter field, the Enter key must be pressed for the change to take effect.

In the GUI tab, drag an Image Display block into the Network Editor. Click on the Wiring mode button in the Network Editor to enter wiring mode. Connect the output port named raw_image of the Camera block to the input port named image of the Image Display block. To find the names of the ports, you can right-click on the behavior block to open the XML Tree Editor.

Now is a good time to save your behavior network. To save this behavior network, click on the Save network button (with the floppy disk icon) to bring up the Save file dialog box. Save the behavior network as objrec_network.xml.

At this point, you can also test the behavior. Make sure the robot's USB cable is plugged into the computer and the robot's battery is on. Also make sure that the Install_dir/bin directory is in your path for this command line shell. At the console, type behave objrec_network.xml. A window should pop up and show the current camera view. To stop the behavior, you will either have to type Ctrl-C, or use the --duration switch: behave objrec_network.xml --duration=5.

Important Note: If you get a number of "Missed desired interval...." errors when you run your behavior, you can change the interval at which the network is run. Make sure you are in Edit mode in the Network Editor, and click in an empty area of the Network Editor to make sure you do not have any blocks selected. Change the invocation_interval parameter to a higher number (the units are seconds).

Go back into the Behavior Composer, and drag an ObjRecRecognizer block from the vision tab. Connect the raw_image camera output to the input of the ObjRecRecognizer block. You need to set the modelset parameter for the ObjRecRecognizer block; default.mdl is an example of a modelset that is ready for use.

From the util tab in the Behavior Palette, drag an Input Collector block into the network. This is useful for printing debugging output to the screen. You will need to set the print_input parameter of the Input Collector to true. For now, connect the model_name and distance ObjRecRecognizer output ports to the input port of the Input Collector. This will cause the name(s) of currently recognized objects and their estimated distance to be printed.

This is another good opportunity to save and check the network. Find an object that is in the modelset you are using (e.g., a one dollar bill if you use default.mdl) and run the behavior. Hold your object in front of the camera. As your object is recognized, its name and its estimated distance will be printed to the screen.


After a few seconds, type Ctrl-C to stop the program. The program's output should look similar to the following:
Initializing...ok
INFO - Loading network file. [behave.cpp:181]
INFO - Running network with an interval of 0.10 sec. [behave.cpp:272]
Input behavior4  string: dollar_front.jpg
Input behavior4  double: 7.62887
Input behavior4  string: dollar_front.jpg
Input behavior4  double: 7.52191

The output indicates that the behavior network was successfully loaded and executes at an interval of 0.10 seconds, or 10 times a second. Each time the network executes, the image from the camera is sent to the object recognizer block. The name of the recognized image (dollar_front.jpg above) and the estimated distance are sent to the InputCollector and printed to the screen.

For these final steps, we will add to the network to cause the robot to attempt to maintain a specified distance from the recognized object. Note: it is recommended that you use only one object that the robot will recognize. Having multiple recognizable objects in view will cause it to find different distances at each timestep (because of the different objects it recognizes). This will result in extremely jerky robot motion.

From the util tab, insert a Constant block into the network. Set the parsed_value parameter of the Constant block to the distance you want the robot to maintain from the object, for example 5.0. Remember that you have to press the Enter key for the parameter change to take effect.

From the operators tab, insert a Subtraction block. The Subtraction block has 2 input ports. The sum of the inputs on the input_negative (bottom) port is subtracted from the sum of the inputs on the input_positive (top) port. Connect the output of the Constant block to the input_negative port of the Subtraction block. Also connect the distance output of the ObjRecRecognizer block to the input_positive port of the Subtraction block. The output of the Subtraction block will be the linear velocity that we send to the motors. If the robot is closer than the distance specified in the Constant block, the velocity will be negative and the robot will drive away from the object. If it is further than the specified distance, the velocity will be positive and the robot will drive towards the object.

From the drivers tab, insert a DriveSystem block. As with the camera, you will have to set the drive_id parameter to the ID of the Evolution.Diff2Drive drive system, which can be found in the resource-config.xml file. Connect the output of the Subtraction block to the linear velocity input port of the DriveSystem block.

Again, run the behavior network. Hold your object in front of the camera, and depending on the estimated distance from your object to the camera, the robot will move towards or away from your object. If things aren't working correctly, a final version of the network is located in Samp_code_dir/tutorial/behavior/01-network/objrec_tut_final.xml.

You have just created a simple behavior network to maintain a specified distance from an object. The network also prints out the name of the recognized object, as well as the estimated distance to the object. In ERSP, behaviors can vary from the simple, like the Constant and Subtraction behaviors used in this example, to the more complex behaviors that perform obstacle avoidance, visual localization, and other more intricate tasks. A number of behaviors working together in a behavior network can perform diverse and complex tasks. Numerous other behavior networks are shipped with the ERSP sample code, and can be found in the subdirectories under Samp_code_dir/behavior. Feel free to open these up in the Behavior Composer to see how they are put together, and run them to see what they do. Behavior networks are stored as XML files. These files can be edited directly by users who are comfortable with how behavior networks are represented in XML.

Summary
In this tutorial you've constructed a simple behavior network using the Behavior Composer. Behaviors are organized into different categories in the composer. The categories are represented as tabs in the Behavior Palette window of the Behavior Composer. You've used the Constant, InputCollector, and ImageDisplay behaviors from the Util category, and the Subtraction behavior from the Operator category. You've also used the Camera and Drive System from the drivers category, and the ObjRecRecognizer block from the vision category. You've edited behavior parameters in the Property Panel window, used the XML Tree Editor to find the names of input and output ports, and you've laid out behaviors and connected them together in the Network Editor window. Finally, you've saved out the behavior network files and executed the behavior network using ERSP's behave tool.

02-custom-behavior

Purpose
The purpose of this tutorial is to give broad coverage of how to write a custom behavior, including details that novice behavior writers often overlook. You create a custom behavior that uses IR sensors to stop the robot before it crashes into an obstacle.

Prerequisites
ERSP must be installed and working. The ERSP sample code package must be extracted, and the active directory should be Samp_code_dir/tutorial/behavior/02-custom-behavior. The Samp_code_dir/config directory needs to be specified in the EVOLUTION_CONFIG_PATH environment variable. Finally, you must have at least one working range sensor.

Exercise
To create a new behavior, start with a template skeleton behavior class which contains the standard methods that all behaviors must have. The skeleton behavior class is contained in the SkeletonBehavior.hpp and SkeletonBehavior.cpp files. These files are simple, consisting of a constructor, a destructor, a compute_output virtual method, a DECLARE_BEHAVIOR macro in SkeletonBehavior.hpp, and an IMPLEMENT_BEHAVIOR macro in SkeletonBehavior.cpp. These macros are necessary to properly initialize and register the behavior, and every behavior needs to have them.

Copy the SkeletonBehavior.hpp file to StopCrashBehavior.hpp and SkeletonBehavior.cpp to StopCrashBehavior.cpp. Edit the newly created StopCrashBehavior.hpp and .cpp files and replace all occurrences of Skeleton with StopCrash. In StopCrashBehavior.hpp, also replace SKELETON in the header macro with STOP_CRASH.

Now we are ready to customize the new behavior. There are three steps to building a new custom behavior from the skeleton behavior code: define the input and output port counts, add support for behavior parameters, and implement the compute_output method.

We need to determine the number of input and output ports the new behavior will have. Our behavior works as a velocity filter. It takes velocity and sensor readings as input. It outputs either the input velocity or 0 if the sensors detect an obstacle within some specified distance. Therefore, our behavior has two inputs: one for the input velocity and one for the sensor readings. It also has one output, for the resulting velocity. You now have enough information to write the schema file for the behavior, which is already provided in the file StopCrashBehavior.xml. In a public declaration section in StopCrashBehavior.hpp, add the following lines:
// Input ports.
static const PortId INPUT_FORWARD_VELOCITY = 0;
static const PortId INPUT_RANGE_SENSOR = 1;

// Output ports.
static const PortId OUTPUT_FORWARD_VELOCITY = 0;

Port numbers start at 0 and go up. You are implementing the schema definition: input port 0 is the input velocity port, input port 1 is the range sensor input port, and output port 0 is the output velocity port. Specify the correct port counts for StopCrashBehavior. As mentioned before, there are two input ports and one output port. The port counts are defined in the following lines of StopCrashBehavior.cpp:
// Port counts
const Evolution::IBehavior::PortId StopCrashBehavior::INPUT_PORT_COUNT = 0;
const Evolution::IBehavior::PortId StopCrashBehavior::OUTPUT_PORT_COUNT = 0;

Change these lines to:


// Port counts
const Evolution::IBehavior::PortId StopCrashBehavior::INPUT_PORT_COUNT = 2;
const Evolution::IBehavior::PortId StopCrashBehavior::OUTPUT_PORT_COUNT = 1;

Look at the constructor of the StopCrashBehavior. The only thing it does is call the constructor of the BehaviorImpl parent. This is vital, as it sets up the internals of BehaviorImpl to function with the network. The most important thing to pass is the number of input and output ports. The BehaviorImpl constructor takes four parameters: the security ticket, the container reference, the number of input ports, and the number of output ports. In the code above, you are specifying that StopCrashBehavior will have two input ports and one output port.

Also in the StopCrashBehavior code is the DECLARE_BEHAVIOR macro. Each behavior needs to have a DECLARE_BEHAVIOR macro in the behavior class declaration and an IMPLEMENT_BEHAVIOR macro in the behavior class implementation. If, as in this example, the behavior class contains both the declaration and the implementation in the same file, then it contains both macros. If the behavior class is declared in a separate header file, the DECLARE_BEHAVIOR macro goes there.

Now, we need to add support for behavior parameters. Our behavior has one parameter, the stop limit parameter, which defines how close the robot can get to an obstacle before stopping. In StopCrashBehavior.hpp, add the following declaration section:
public:
    // Parameters
    static const char* const PARAM_STOP_LIMIT = "stop_limit";

    /**
     * Sets the stop limit.
     */
    Result set_stop_limit (TicketId ticket, const char* stop_limit);

    DECLARE_BEHAVIOR_PARAMS;

In this example, the PARAM_STOP_LIMIT variable stores the parameter name for later use. The set_stop_limit method will be called by the behavior infrastructure or behavior user to set the stop limit value. The DECLARE_BEHAVIOR_PARAMS macro defines generic parameter access methods and must be present if the behavior has one or more parameters. Finally, add a protected variable to store the specified stop limit value by adding the following section of code to StopCrashBehavior.hpp:
protected: double _stop_limit;

This parameter is implemented by using the BEHAVIOR_PARAM macro enclosed in a BEGIN_BEHAVIOR_PARAMS and END_BEHAVIOR_PARAMS macro block in the implementation. The begin and end calls are straightforward. BEHAVIOR_PARAM takes the name of the class, the name of the parameter, and the name to expand into a function call when the parameter is encountered. The behavior always calls set_<3rd argument> when it sees a given parameter. In this case, it calls set_stop_limit. The parameter parsing function should do the appropriate thing; usually it just parses a value from a string and stores it. For StopCrashBehavior, there is a set_stop_limit method to parse the value of the stop_limit parameter into the member data element _stop_limit for later use by the behavior. In the StopCrashBehavior.cpp file, add the following code section:
////////////////////////////////////////////////////////
// Parameters
////////////////////////////////////////////////////////

BEGIN_BEHAVIOR_PARAMS(StopCrashBehavior, BehaviorImpl);
BEHAVIOR_PARAM(StopCrashBehavior, StopCrashBehavior::PARAM_STOP_LIMIT,
               stop_limit);
END_BEHAVIOR_PARAMS(StopCrashBehavior);

/**
 * @brief
 * Set the stop limit threshold before suppressing forward
 * velocity.
 */
Result StopCrashBehavior::set_stop_limit(TicketId ticket,
                                         const char* stop_limit)
{
    ASSERT(stop_limit);
    _stop_limit = std::strtod(stop_limit, NULL);
    return RESULT_SUCCESS;
}

The first three macros define the generic set_parameter() call to call set_stop_limit() if the specified parameter name is PARAM_STOP_LIMIT, previously defined as "stop_limit" in StopCrashBehavior.hpp. After the macros, the set_stop_limit() function is defined to convert the input value to a double and store it in _stop_limit.

After initialization, each behavior, on each cycle of the network, will have compute_output() called by the behavior network. compute_output() should do the bulk of the work of the behavior. Behavior networks run by initializing all their behaviors and then cycling through them, in dependency order, calling each behavior's compute_output() method. This method processes the behavior's inputs in the manner specific to the behavior and generates the resulting outputs each behavior network cycle.

Next, you need to implement the compute_output() method. This method is the heart of the behavior because it defines the behavior's functionality. This behavior looks at the sensor inputs to see if there is an obstacle closer to the robot than the stop limit value. If there is, the behavior outputs 0 as the velocity to prevent any forward or backward motion. If not, it passes the input velocity through to the output velocity to allow normal movement. The code to do this looks like this:
// This is the velocity we will be outputting.
double output_velocity;

// Obtain velocity from velocity input.
PortInputTable& fv_table = get_port_input(INPUT_FORWARD_VELOCITY);
PortInputIterator fv_input_iter = fv_table.begin();
BehaviorData* fv_data = fv_input_iter.get_data();
if (fv_data == 0)
{
    // No input velocity, so set output velocity to 0.
    output_velocity = 0;
}
else
{
    // Set output velocity to input velocity, for now.
    output_velocity = fv_data->get_double();
}

// Check to see if there's any sensor reading within the
// specified stop limit.
PortInputTable& rs_table = get_port_input(INPUT_RANGE_SENSOR);
for (PortInputIterator input_iter = rs_table.begin ();
     input_iter != rs_table.end (); ++input_iter)
{
    BehaviorData* rs_data = input_iter.get_data();
    if (rs_data == 0)
    {
        // Ignore null range sensor readings.
        continue;
    }
    if (rs_data->get_double() < _stop_limit)
    {
        // If we get a reading within the stop limit,
        // set output_velocity to 0.
        output_velocity = 0;
        break;
    }
}

// Set the output velocity, which is either the input velocity or
// 0 if there's a sensor reading within the stop limit.
BehaviorDataWriter* output_data = get_port_output(OUTPUT_FORWARD_VELOCITY);
output_data->set_double(output_velocity);

return Evolution::RESULT_SUCCESS;

You can type the above code into the body of StopCrashBehavior's compute_output() function or copy it over from StopCrashBehavior_FINAL.cpp. The code first checks the input velocity port to see if there is a velocity input. If there is, it stores the input velocity in the output_velocity variable. If there is no velocity input, it sets output_velocity to 0. It only expects a single velocity input, so it just looks at the first input by setting fv_input_iter to fv_table.begin(). Next, the code checks all sensor inputs by iterating over the entire sensor input table for a reading that is less than the stop limit parameter value, stored in _stop_limit. If it finds one, it sets output_velocity to 0.

Now you can build the StopCrashBehavior class into a behavior library by running the command make -f Makefile.linux in the working directory. To test the newly created behavior, we will run the check_stopcrash.xml behavior network. But first, you need to make the new behavior accessible to the Behavior Composer. Open the StopCrashBehavior.xml file and make sure that the library attribute contains the correct location of the newly compiled libStopCrashBehavior.so. Also note the category attribute; this is the name of the tab that the new behavior will appear under in the Behavior Composer. You can make the behavior schema file accessible to the Behavior Composer by copying the StopCrashBehavior.xml file into the Samp_code_dir/sample_code/config/behavior/ directory. You will need to set up the EVOLUTION_CONFIG_PATH by typing:
export EVOLUTION_CONFIG_PATH="$EVOLUTION_CONFIG_PATH:$SAMPLE_ROOT/sample_code/config"


Open up the check_stopcrash.xml behavior in the Behavior Composer. There are 5 blocks: Constant, Range Sensor, Input Collector, Stop Crash, and Drive System. You need to check and modify the following:

Constant: The output of the Constant block (set in the parsed_value parameter) will be the linear velocity of the robot. Set this to something reasonable, like 3 for example.

Range Sensor: You need to set the sensor_id parameter of this block to refer to a range sensor you have working on your robot. You probably want to pick a sensor that is forward looking, so the robot will actually be able to "see" in the direction it is driving.

Input Collector: Set the print_input parameter to true so it will print out the sensor value to the console.

Stop Crash: You need to set the stop_limit parameter. This parameter controls the sensor range at which the output velocity of the Stop Crash block is set to zero. The smaller this value is, the closer the robot will come to an obstacle before it stops. Set this to something large, like 60, to start out.

Drive System: Set the drive_id parameter to refer to the Diff2Drive.

Once you have made the above changes, save the behavior network. Verify that the robot is turned on and the USB cable is plugged into the PC. You may want to place the robot on something so the wheels are lifted off the ground; that way, if something goes wrong, the robot won't run into anything. Finally, run the behavior by typing:
behave check_stopcrash.xml

Stick an obstacle, such as your hand, in front of the sensor. As the sensor reading changes from being above the stop_limit to below it, the motors should stop. As an advanced exercise, you could add to StopCrashBehavior the capability to discern between front and rear obstacles and selectively disable forward motion for the former and backward motion for the latter.
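One rough, unverified sketch of that advanced exercise is shown below. It assumes you add two hypothetical range-sensor input ports (INPUT_FRONT_RANGE_SENSOR and INPUT_REAR_RANGE_SENSOR) to the schema and class; the any_reading_below helper is also hypothetical and is not part of the shipped tutorial code.

// Hypothetical helper: returns true if any reading on the given input
// table is closer than the limit.
static bool any_reading_below (PortInputTable& table, double limit)
{
    for (PortInputIterator it = table.begin (); it != table.end (); ++it)
    {
        BehaviorData* data = it.get_data ();
        if (data != 0 && data->get_double () < limit)
        {
            return true;
        }
    }
    return false;
}

// Inside compute_output(), after output_velocity has been obtained from
// the velocity input exactly as in the original code:
bool front_blocked =
    any_reading_below (get_port_input (INPUT_FRONT_RANGE_SENSOR), _stop_limit);
bool rear_blocked =
    any_reading_below (get_port_input (INPUT_REAR_RANGE_SENSOR), _stop_limit);

// Suppress only the motion that would drive into the obstacle: forward
// motion (positive velocity) for front obstacles, backward motion
// (negative velocity) for rear obstacles.
if ((output_velocity > 0 && front_blocked) ||
    (output_velocity < 0 && rear_blocked))
{
    output_velocity = 0;
}

BehaviorDataWriter* output_data = get_port_output (OUTPUT_FORWARD_VELOCITY);
output_data->set_double (output_velocity);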

Summary
This tutorial takes you through the process of writing a custom behavior. Custom behaviors typically derive from the base class BehaviorImpl, which provides basic default implementations of the IBehavior interface. The custom behavior overrides these implementations as necessary. The one method that will always be overridden is compute_output, because this method performs all the work specific to the custom behavior. The DECLARE_BEHAVIOR and IMPLEMENT_BEHAVIOR macros need to be specified to identify the custom behavior to ERSP. Behavior parameters must be declared through the use of the DECLARE_BEHAVIOR_PARAMS macro. Each parameter must also have a BEHAVIOR_PARAM macro inside a BEGIN_BEHAVIOR_PARAMS/END_BEHAVIOR_PARAMS macro block and a set_xxx method, where xxx is the third parameter of the corresponding BEHAVIOR_PARAM macro. The compute_output method must be implemented to handle behavior inputs and generate the appropriate outputs. Finally, a schema file must be created for the custom behavior and placed in a behavior configuration directory.


03-teleop

Purpose
This tutorial demonstrates the use of the MalleableBehavior, which allows inter-process and inter-system communication to and from behavior networks. This behavior allows behavior networks to send and receive communication over TCP/IP sockets. The primary use for this functionality is teleoperation, or the remote operation of the robot. However, this feature can be used to facilitate any kind of communication between behavior networks across process boundaries.

Prerequisites
ERSP must be installed and working. The ERSP sample code package must be extracted, and the active directory should be Samp_code_dir/tutorial/behavior/03-teleop. The Install_dir/bin directory should be in the executable path. The Samp_code_dir/config directory should be added to the EVOLUTION_CONFIG_PATH environment variable.

Exercise
This tutorial demonstrates how data can be passed to and from behavior networks across process and system boundaries. In the Samp_code_dir/tutorial/behavior/03-teleop directory are two XML files, client.xml and server.xml.

Open the client.xml file in the Behavior Composer to have a look at the client behavior network. The client.xml file has two Constant behaviors connected to a TutorialClient behavior, whose output is in turn connected to an InputCollector. The TutorialClient behavior is a client that connects to a server over TCP/IP. Inputs to this behavior, in this case the "text from client" string and the double array (1, 2, 3) from the Constant behaviors, are sent over the TCP/IP socket connection to the server. Data received from the server is sent out the output port into the InputCollector.

Now open up the server.xml file in the Behavior Composer to see what this network looks like. The TutorialServer behavior acts as a server that clients can connect to over TCP/IP via sockets. Like the TutorialClient behavior, inputs to the TutorialServer behavior, in this case the number 5 from the Constant behavior, are broadcast out to all connected clients. The two output ports send out data received from clients to the two InputCollectors.

For a demonstration of how all this works, open up two command line shells on the same computer with ERSP installed and change to the Samp_code_dir/tutorial/behavior/03-teleop directory in both. Make sure that the Samp_code_dir/config directory has been added to EVOLUTION_CONFIG_PATH in both shells. In one command line shell, type the command behave server.xml to run the server.xml behavior network. Then, in the other command line shell, type the command behave client.xml to run the client.xml behavior network. In the shell where the server network was run, the InputCollector should print out the string "text from client" sent over from the client network, similar to the following output:
Initializing...ok
INFO - Loading network file. [behave.cpp:181]
INFO - Running network with an interval of 0.10 sec. [behave.cpp:272]
Input text_received   string: "text from client"
Input array_received  DoubleArray.
  0: 1
  1: 2
  2: 3
Input text_received   string: "text from client"
Input array_received  DoubleArray.
  0: 1
  1: 2
  2: 3
...

In the shell where the client was run, the output from the input collector should be the number 5 sent over from the server network, similar to the following output:
Initializing...ok
INFO - Loading network file. [behave.cpp:181]
INFO - Running network with an interval of 0.10 sec. [behave.cpp:272]
Input client_received  double: 5
Input client_received  double: 5
...

While this communication between the two behavior networks was run on the same computer, the same communication can happen between any two computers with a TCP/IP connection. Open up the client.xml behavior network in the Behavior Composer and select the TutorialClient behavior to display its parameters in the Property Panel. Note that the value of the tcp_host parameter is 127.0.0.1, which is the IP address for the local computer. Changing this value to another valid IP address will cause the client behavior to try to connect to a server on the specified computer. The tcp_port parameter specifies the port to which the client should connect. Note that its value is 25000. The remaining tag_name parameter will be discussed a little later. For now, note that its value is tutorial.

Now open up the server.xml file in the Behavior Composer and right-click on the TutorialServer behavior. Right-clicking on a behavior in the composer is an easy way of opening up the behavior's schema. The XML Tree Editor window that pops up with TutorialServer's schema shows a tcp_port parameter, whose value is set to 25000. This value specifies which TCP port the TutorialServer should listen on for connections. Note that the port values match up between the client and server behavior networks. The tag_name parameter for the TutorialServer behavior is also tutorial. There can be multiple client and server behaviors in communicating behavior networks. Multiple client and server behaviors can operate over the same TCP port (e.g., 25000). The tag_name parameter helps match up which client behavior should communicate with which server behavior.


Let's take a look at the output from the server again. The server has two output ports. The fact that the text message from the client went to the InputCollector labeled text_received and the double array from the client went to the InputCollector labeled array_received is no accident. Right-click on TutorialServer again to bring up its schema. Note that the names of the two output ports are text and array. Open up the client.xml behavior network in the composer. Right-click on the TutorialClient behavior to bring up its schema. Note that the input ports of this behavior are also named text and array. Similarly, the output from the TutorialClient and the input to the TutorialServer also share the same name, number. These behaviors automatically match up inputs and outputs of the same name. To see how these behaviors work and how users can create their own client and server behaviors, you need to look at the schema files for the TutorialClient and TutorialServer behaviors. The schema files are in Samp_code_dir/config/behavior/Examples, but are reproduced below for reference:
TutorialClient.xml:

<BehaviorSchema type="Examples.TutorialClient"
                driver="Evolution.MalleableBehavior"
                display_name="Tutorial Client"
                library="libevobeutil"
                category="sample_code">
    <Parameter name="tcp_host" type="string" default="localhost" />
    <Parameter name="tcp_port" type="number" default="25000" />
    <Parameter name="tag_name" type="string" default="tutorial" />
    <Inputs>
        <Port name="text" datatype="string"
              description="Text to send to server"
              semantic_type="Evolution.Generic"/>
        <Port name="array" datatype="doublearray"
              description="Double array to send to server"
              semantic_type="Evolution.Generic"/>
    </Inputs>
    <Outputs>
        <Port name="number" datatype="double"
              description="Number received from the server"
              semantic_type="Evolution.Generic"/>
    </Outputs>
</BehaviorSchema>

TutorialServer.xml:

<BehaviorSchema type="Examples.TutorialServer"
                driver="Evolution.MalleableBehavior"
                display_name="Tutorial Server"
                library="libevobeutil"
                category="sample_code">
    <Parameter name="tcp_port" type="number" default="25000" />
    <Parameter name="tag_name" type="string" default="tutorial" />
    <Inputs>
        <Port name="number" datatype="double"
              description="Number to send to client."
              semantic_type="Evolution.Generic"/>
    </Inputs>
    <Outputs>
        <Port name="text" datatype="string"
              description="Text received from client"
              semantic_type="Evolution.Generic"/>
        <Port name="array" datatype="doublearray"
              description="Double array received from client"
              semantic_type="Evolution.Generic"/>
    </Outputs>
</BehaviorSchema>

The type attribute of the BehaviorSchema tag identifies the behavior. The next thing to notice is the driver attribute, which is Evolution.MalleableBehavior for both the client and the server behavior. MalleableBehavior is a special behavior that handles TCP/IP communication. It is called malleable because its behavior is completely defined by its schema. MalleableBehavior itself is never used directly. New behaviors, like TutorialClient and TutorialServer, are created that are driven by MalleableBehavior to perform TCP/IP communication. These new behaviors define the data that they receive and send through the port definitions, like the text and array ports of the above behaviors. As mentioned before, input ports take in data to be sent over TCP/IP, and output ports send data received over TCP/IP to other behaviors in the behavior network. For example, if an array of numbers needs to be sent from a behavior network to a computer at IP address 192.168.1.21, a behavior defined by the following schema would do the trick:
<BehaviorSchema type="MyBehaviors.SendNumbers"
                driver="Evolution.MalleableBehavior"
                display_name="Send Some Numbers"
                library="libevobeutil"
                category="sample_code">
    <Parameter name="tcp_host" type="string" default="192.168.1.21" />
    <Parameter name="tcp_port" type="number" default="25000" />
    <Parameter name="tag_name" type="string" default="some_numbers" />
    <Inputs>
        <Port name="numbers" datatype="doublearray"
              description="Numbers to send."
              semantic_type="Evolution.Generic"/>
    </Inputs>
</BehaviorSchema>

If instead the external client will be connecting to the behavior network (i.e. the behavior network will act as a server), then just remove the tcp_host parameter. If this parameter is present, the behavior acts as a client and connects to the specified host. If this parameter is not present, the behavior acts as a server and listens on the specified port for connections. If the behavior needs to receive some data to send out to other behaviors, say a velocity to send to the drive system, just add an output port to the schema. Below is an example:
<Outputs>
    <Port name="velocity" datatype="double"
          description="Incoming velocity."
          semantic_type="Evolution.LinearVelocity"/>
</Outputs>

There is no limit to the number of input and output ports a Malleable-driven behavior can have. As long as the tag_name and port names match up, one Malleable-driven behavior can send and receive data from another Malleable-driven behavior. Can Malleable-driven behaviors send and receive data from clients and servers that are not Malleable-driven behaviors? The answer is yes, but the non-Malleable clients and servers need to conform to the protocol used by MalleableBehavior. This protocol is simply XML snippets terminated by a newline character, of the form:

<tutorial><text count="1">...</text></tutorial>\n

The presence of the count="1" attribute indicates that the receiving behavior should output the data received only once to the behavior network. Without this attribute, the receiving behavior will repeatedly output the data until it receives new data over TCP/IP.

The test_client.cpp file in the 03-teleop directory is the source for a custom test client that sends and receives the raw XML snippets to and from the server.xml network. This file uses a class called TestConnection derived from the TCPConnection class provided by ERSP. The TestConnection class overrides two methods of TCPConnection. The first override is the create_protocol method. This method returns an ISocketProtocol object to be used by the connection, which in this case is the StringTerminatedProtocol with \n as the terminator. The second override is for the on_read method. This method is called with any data received through the connection. The implementation of this method makes a copy of the data received, null-terminates the copy, and prints it out to the console. It then constructs an XML snippet as a reply and sends it back to the server.xml network for output on the text port of TutorialServer. Finally, it sets the g_exit flag to exit the program.

In the main method of test_client.cpp, the program starts with the creation of a TestConnection object. The init() method of this object is called to establish a connection to the server. The program then just waits in a loop until the TestConnection object receives a message from the server and sets the exit flag to end the program, as described above.

To try this program out, type make for Linux or build the Visual Studio project in Windows to create the test_client executable. Run the server first in one shell by typing behave server.xml on the command line. Then run the test_client program in another. It should print out the XML it receives from the server, similar to below:
Received: <tutorial><number count="1">5.000000</number></tutorial>

The server should print out the reply string it receives from the test_client program, like the following:
Initializing...ok
INFO - Loading network file. [behave.cpp:181]
INFO - Running network with an interval of 0.10 sec. [behave.cpp:272]
Input text_received   string: reply from client

Therefore, custom clients and servers can communicate with Malleable-driven behaviors in behavior networks by sending and receiving the XML snippets used by the Malleable-driven behaviors.

Summary
Behavior networks can communicate across process and system boundaries via TCP/IP with other behavior networks or foreign clients and servers using Malleable-driven behaviors. These behaviors have the driver="Evolution.MalleableBehavior" attribute in their behavior schema. The input ports of these behaviors receive data to be sent out over TCP/IP. The output ports of these behaviors send data received over TCP/IP to other behaviors in the network. This tutorial provides a pair of client and server behaviors, TutorialClient and TutorialServer, to illustrate how Malleable-driven behaviors work. Malleable-driven behaviors communicate by constructing XML snippets from the tag_name parameter and the input and output port names. These XML snippets are terminated by the newline character \n. Custom clients and servers can communicate with behavior networks by constructing and parsing these XML snippets. A sample C++ client is provided to illustrate how this can be done.


04-primitive

Purpose
This tutorial shows how to create and use behavior networks in a task context as task primitives. A task primitive encapsulates a behavior network for use as a task. This allows, for example, the parallel execution of multiple behavior networks and tasks, with the possible communication among all of them through the use of task events. This tutorial will take the user through the creation of a task primitive and its use in parallel with another task. The tutorial will also implement communication between the two via task events and show how behavior inputs and outputs can be manipulated in a task primitive.

Prerequisites
A supported camera must be connected to the robot. The Install_dir/bin directory must be in the system executable path. The ERSP sample code package should also be extracted, and the active directory should be Samp_code_dir/tutorial/behavior/04-primitive.

Task
The content of the Samp_code_dir/tutorial/behavior/04-primitive directory should look very similar to the content of the previous tutorial, 04-event. Everything is the same except for three new files: EventServer.hpp, EventServer.cpp, and test_client.cpp. The EventServer.[ch]pp files implement the task primitive for this tutorial. The test_client.cpp file is used to demonstrate how the EventServer task primitive works and will be discussed later.

The EventServer.hpp file contains the declaration for the new task primitive class, EventServerPrim. The declaration begins with the now familiar ERSP_DECLARE_TASK_LINKAGE macro. The one notable thing when using this macro for primitives is that the second parameter, the task ID, also has to be the ID of the primitive's behavior network file. In this case, the task ID is Examples.EventServer. This means that the primitive's behavior network file must be the CONFIG_DIR/primitive/Examples/EventServer.xml file. This file is already provided as Samp_code_dir/config/primitive/Examples/EventServer.xml. If Samp_code_dir/config has already been added to the EVOLUTION_CONFIG_PATH as a configuration directory, then all should be well. Next is the declaration of the EventServerPrim class:
class EVOLUTION_EXPORT EventServerPrim
    : public Evolution::Primitive,
      public Evolution::CallbackBehavior::Callback

This class derives from both the Primitive class and the CallbackBehavior::Callback class. The Primitive class is the standard base class for all task primitives. The CallbackBehavior::Callback class provides the behavior input/output manipulation functionality for those task primitives that require it. How Callback facilitates the input/output manipulation is discussed later.


The constructor is next:


EventServerPrim (Evolution::TaskContext* context)
    : Primitive (context, ERSP_TASK_ID (EventServer)),
      _task (0)
{}

The first parameter of the constructor is the always-useful task context. The second parameter should be the behavior network ID. The ID Examples.EventServer could have been used here, but using the ERSP_TASK_ID macro does the same thing in a way that allows the compiler to validate the value. Misspell EventServer in the macro and the compiler should complain. The _task data member will hold a pointer to the task. For now it is initialized to 0. The rest of the class declaration consists of overrides for three methods from the Primitive class and the handle_input method of the CallbackBehavior::Callback class. The details of these methods are in the EventServer.cpp file.

Before looking at the implementation of these methods in EventServer.cpp, let's discuss the purpose of the EventServer task primitive. Open up the Samp_code_dir/config/primitive/Examples/EventServer.xml file in the Behavior Composer or an editor, and you should see that it is a very simple network containing a single behavior, also named EventServer. This behavior contains a single input port named event_id and a single output port named event. The EventServer behavior is a Malleable-driven behavior in server mode (no tcp_host parameter), as can easily be verified by looking up its schema file, Samp_code_dir/config/behavior/Examples/EventServer.xml. Recall from the behavior tutorial named 03-teleop that Malleable-driven behaviors send their inputs out to TCP/IP connections and send the data they receive from TCP/IP connections out of their outputs. The EventServer behavior is essentially a bridge that converts task events into messages for sending to remote clients, and converts messages it receives over TCP/IP into task events. Wrapped in a task primitive, the EventServer behavior network extends the reach of task events across system and process boundaries via TCP/IP sockets.

This task primitive is implemented in the four methods declared in EventServer.hpp, three derived from Primitive and handle_input from CallbackBehavior::Callback. The first of the three Primitive-derived methods is the start method, which is called whenever a task primitive is run. Any run-time initialization of the task primitive should be done in this method. In EventServer.cpp, the start method starts with the retrieval of the task pointer from the context and a call to enable_event to enable event queuing. EventServer is interested in all events, so the wildcard asterisk is passed as the parameter to enable_event. The last initialization step is the registration of the event output port for callback handling. The CallbackBehavior essentially passes data from registered behavior connections to a CallbackBehavior::Callback class. By deriving from CallbackBehavior::Callback and implementing handle_input, EventServerPrim is also acting as the callback class to handle data from registered connections. The add_prim_value_callback method of TaskManager is used to register a port. Here is the code that registers the EventServer behavior's event output port for callback handling:
result = _manager->add_prim_value_callback(_network_id, "server", "event", this);


The first parameter is the network ID, which you already have from the Primitive base class's _network_id member. The second parameter is the ID of the behavior in the primitive network, which is server. The third parameter is the output port to register, which is event. The last parameter is the Callback class to handle outputs from this port, which is the EventServerPrim class itself, so this is passed as the last parameter. There is no limit to the number of ports that can be registered for callback handling. In this example only one port needs to be registered, so you're done.

The second of the Primitive-derived methods is compute. This method is executed each cycle that the task primitive's behavior network is run. In this method, the primitive briefly waits for an event, and if one is received, its type is retrieved and put into the EventServer behavior's input with the following code:
// Construct event string and put it into the input
// port of EventServer for sending back to clients.
BehaviorDataWriter input;
input.set_string(type);
_manager->set_prim_input_value(_network_id, "server", "event_id", &input);

The parameter values to the set_prim_input_value method are the network ID, stored in the _network_id member from the Primitive base class, the ID of the behavior containing the input, the input port ID, and the input data itself, as a BehaviorDataWriter variant type. Because EventServer is a Malleable-driven behavior, the string containing the event ID that you put into its event_id input port will be sent to all client connections.

The third Primitive-derived method is the finish method, which is called once when the task primitive completes. In this task primitive, the only thing that needs to be done is a call to disable_event with * to stop event queuing for this task.

Data from registered ports is passed to the handle_input method of the Callback-derived class for handling. EventServerPrim's implementation of the handle_input method is the last method left in this class. Each time a monitored port outputs data, handle_input is called with the behavior containing the port, the port's numeric ID, and the data as a BehaviorData variant. The first thing that handle_input does is perform various data validation checks:
// Check input.
if (strcmp (behavior->get_id(), "server") == 0)
{
    if (data == NULL)
    {
        // Don't worry about it.
        return;
    }
    if (port == 0)
    {
        if (data->get_type() != BehaviorData::TYPE_STRING)
        {
            ERSP_LOG_WARN ("Wrong data type.");
            return;
        }

The first test checks the behavior ID to see if the data comes from the EventServer behavior (ID server). Because our network only has a single behavior, this check is not really necessary. However, in task primitives where ports from multiple behaviors are monitored, checking the behavior ID of port data is essential for correct processing. Next, you make sure that you have valid data of the string type that you're looking for. Once the data is verified, handle_input parses the data into an event and raises it. The event string encapsulates event information in the following format:
[event id]?[property_name]=[property_value]&[property_name]=[property_value]&...

So, sending an event to tell PhotoEvent to take 5 photographs named b_00x.jpg at 1 second intervals would correspond to the following string:
"Examples.PhotoEvent.Shoot?count=5&name=b_&delay=1"

The rest of handle_input parses strings of this form into an event and raises the event. It also checks to see if the event ID contains End, which is the primitive's signal to exit, as shown in the following code:
// Check for done message to exit.
if (strstr(event_id, "End") != 0)
{
    // End task with success status.
    set_status(TASK_SUCCESS);
}

A task primitive exits by calling set_status with either TASK_SUCCESS or TASK_FAILURE as the parameter. Finally, at the end of EventServer.cpp, there is an ERSP_IMPLEMENT_PRIMITIVE macro, just like in any other task. And that is it for EventServer.cpp.

The test_event.cpp source file sets up the task primitive and the PhotoEvent tasks to run in parallel and waits for both to complete. The test_client.cpp program creates a client connection to the EventServer task primitive and sends it a shoot event string wrapped in Malleable XML with the following code:
// Create and initialize the client connection.
// The default IP address is the local address.
TestConnection connection("127.0.0.1", 25000);
connection.init();

// Send an event string to tell PhotoEvent to shoot 5 photos.
char* msg = "<event_server><event count=\"1\">Examples.PhotoEvent.Shoot"
            "?count=5&amp;delay=1&amp;name=b_</event></event_server>";
connection.send_message(msg, strlen(msg));

Note that because the event string is an XML snippet, the & in the string has to be converted to the XML representation &amp;. This message should reach the EventServer primitive, get converted into an Examples.PhotoEvent.Shoot task event, get raised, and trigger the PhotoEvent task to take five photos. The PhotoEvent class will then raise the Examples.PhotoEvent.Done event, which will be received in EventServer's wait_for_event call in its compute method. The event ID of this event gets pushed into the EventServer behavior's input port, which means it gets packed into Malleable XML and sent back to the client. The TestConnection::on_read method of test_client.cpp checks incoming messages for the Done string. Once it receives a Done event, it sends an Examples.PhotoEvent.End message to end the PhotoEvent task. Recall that there's also a little hack in the EventServerPrim task primitive to end when End is received from the client. Therefore, both tasks end and test_event exits. The on_read method also sets the g_exit flag in test_client.cpp to true, so the test_client program also exits.
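The on_read implementation itself is not reproduced in this tutorial. The following self-contained snippet is only a sketch of the reply-checking logic just described; the function name handle_server_reply is hypothetical, and the actual TestConnection::on_read in test_client.cpp may be structured differently.

#include <iostream>
#include <string>

// Hypothetical stand-in for the reply handling in TestConnection::on_read:
// given the XML text received from EventServer, decide whether the Done
// event has arrived and, if so, produce the End event message to send back.
static bool handle_server_reply(const std::string& reply, std::string* next_msg)
{
    if (reply.find("Examples.PhotoEvent.Done") == std::string::npos)
    {
        return false;   // Not the Done event; keep waiting.
    }
    // Wrap the End event in Malleable XML, as was done for the Shoot event.
    *next_msg = "<event_server><event count=\"1\">"
                "Examples.PhotoEvent.End</event></event_server>";
    return true;        // Caller would send next_msg and set g_exit to true.
}

int main()
{
    std::string reply = "<event_server><event_id count=\"1\">&quot;"
                        "Examples.PhotoEvent.Done&quot;</event_id></event_server>";
    std::string end_msg;
    if (handle_server_reply(reply, &end_msg))
    {
        std::cout << "Would send: " << end_msg << "\n";
    }
    return 0;
}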

Type make on Linux, or build all of the Visual Studio project files on Windows, to build the tutorial. Open two command-line shells on the same computer and change to the Samp_code_dir/tutorial/behavior/04-primitive directory in both shells. In one shell, run the test_event program first to start the parallel tasks EventServer and PhotoEvent. In the second shell, run the test_client program. The first shell should output the following:
Received event string: Examples.PhotoEvent.Shoot?count=5&delay=1&name=b_
Raising event: {Event Examples.PhotoEvent.Shoot [time 0] (count: 5.000000) (delay: 1.000000) (name: "b_")}
Received event string: Examples.PhotoEvent.End
Raising event: {Event Examples.PhotoEvent.End [time 0]}
5 photos taken.

The above output shows that the EventServer task primitive received the shoot event and raised it. It then received the end event and also raised it. Finally, all tasks exit and the test_event program prints out the return value of PhotoEvent, indicating that 5 photos were taken. There should also be 5 JPEG images named b_00x.jpg in the same directory as the test_event program. The test_client program in the second shell should have the following output:
Received: <event_server><event_id count="1">&quot;\
Examples.PhotoEvent.Done&quot;</event_id></event_server>

This output shows that the done event raised by the PhotoEvent task was received by the EventServer and passed along to the client program.

Summary
A task primitive is a task that wraps a behavior network. This tutorial discusses the creation of a simple task primitive that forwards events over TCP/IP connections using a Malleable-driven behavior. The tutorial takes the user through the declaration of a task primitive and how the declaration relates to the primitive's behavior network file. The roles of the start, compute, and finish methods are also discussed, along with their implementations for the example primitive in the tutorial. One unique aspect of task primitives is their ability to generate behavior inputs and intercept behavior outputs through the use of the CallbackBehavior::Callback::handle_input method. The tutorial provides examples of how to perform these manipulations of behavior inputs and outputs. Finally, the tutorial provides the test_event program to start the PhotoEvent task and the EventServer task primitive in parallel, along with the test_client program to communicate with these two tasks over TCP/IP to demonstrate the task primitive's functionality.

Resource Tutorials
01-config-camera

Purpose
The resource definitions for a robotics platform are stored in the resource-config.xml file. The goal of this tutorial is to show the user how to change resource behavior by modifying parameters in the resource-config.xml file. The user will be changing the camera resolution in this tutorial.

Prerequisites
A supported camera must be connected to the robot. The bin subdirectory of the ERSP install directory, hereafter referred to as Install_dir, must be in the executable path. The ERSP sample code package should also be extracted, and the active directory should be Samp_code_dir/tutorial/resource/01-config-camera.

Task
Open up the resource-config.xml file in the Samp_code_dir/tutorial/resource/01-config-camera directory and note the configuration for the single camera device. The resource-config.xml file contains the resource definitions of the robotics platform. Resources can be hardware devices, such as motors and sensors, or software components, such as a speech recognizer or synthesizer. In this case, the resource is a camera device. The type attribute of the Device tag defines the ERSP resource driver appropriate to this device. On Linux, the type is Evolution.LinuxCamera. On Windows, the type should be Evolution.DirectXCamera.

In the 01-config-camera directory, there is the take_photo.sh script (Linux) or the take_photo.bat script (Windows), which runs the test_camera diagnostic tool to capture some images from the camera. While in the 01-config-camera directory, run the take_photo.[sh/bat] script. The test_camera diagnostic tool should write out five images to file. Open these images in an image editor. Note that the resolutions of the images are all 320 by 240. Note also that there is no indication of resolution in the resource-config.xml file.

When a resource parameter, like camera resolution, is not specified, a default value for the parameter is used. The default values are specified in resource specification files. These are located in the Install_dir/config/resource directory. To see which resource spec file is used for the camera, look in the Device tag for the camera in the resource-config.xml file. On Linux, the camera type is Evolution.LinuxCamera, so the spec file for the camera resource should be Install_dir/config/resource/Evolution/LinuxCamera.xml. On Windows, the camera type is Evolution.DirectXCamera, so the spec file for the camera resource should be Install_dir/config/resource/Evolution/DirectXCamera.xml. Opening up either of these spec files will show that the default camera resolution, as specified by the hres and vres parameters, is 320 by 240.

To change the camera resolution, do not edit the spec files. Instead, edit the resource-config.xml file and insert the hres and vres parameters as shown below (the two added Parameter lines are the additions to the file):
<Device id="camera0" type="Evolution.[platform]Camera"> <Parameter name="device" value="[platform-specific]"/> <Parameter name="hres" value="640"/> <Parameter name="vres" value="480"/> </Device>

Run the take_photo.[sh/bat] script again to generate five new images. Look at the new images in an image editor. The resolutions of the new images should be 640 by 480, as newly specified in the resource-config.xml file.

Summary
This tutorial shows how to modify resource parameters to change resource behavior. It also shows how to look for default parameter values in resource spec files.

02-config-ir

Purpose
Any ERSP software component that interacts with resources will make use of the information contained in resource-config.xml. This tutorial shows how changing the position and orientation configurations of resources in the resource-config.xml file can affect the operation of a behavior network.

Prerequisites
The USB IR sensors (the set of three sensors with blue lights attached to goosenecks) must be plugged into a USB port on the computer used to run this tutorial. The bin subdirectory of the ERSP install directory, hereafter referred to as Install_dir, must be in the executable path. The ERSP sample code package should also be extracted. The active directory must be Samp_code_dir/tutorial/resource/02-config-ir. All references to files in this tutorial will refer to files in the 02-config-ir directory.

Task
The config-ir.xml file in the Samp_code_dir/tutorial/resource/02-config-ir directory is a behavior network file. The test_ir.[sh/bat] script in this directory runs behave config-ir.xml to execute this behavior network for testing the sensor readouts. Open up this config-ir.xml file in the Behavior Composer.


This behavior network consists of a FuzzyRangeSensorRing behavior connected to a DoubleArraySplitter, which is in turn connected to four InputCollectors. The InputCollectors are labeled for the four sectors (front, left, back, right) to which sensors will be categorized. The FuzzyRangeSensorRing behavior outputs the shortest range sensor readings of each sector in a four-element double array. The DoubleArraySplitter behavior splits this double array into four different double values and feeds each of the four values into an InputCollector for printing to the console.

This tutorial will demonstrate how changing the orientation configuration of an IR sensor causes the FuzzyRangeSensorRing behavior to change its notion of sensor coverage. When the orientation configuration in the resource configuration indicates that the IR sensor is pointing forward, the software will believe that it has forward sensor coverage. When the resource configuration indicates that the IR sensor points to the left, the software will believe that it has sensor coverage to the left, and so on. While position is less important than orientation in determining where a sensor is pointed, it still plays a role. A primary point of this tutorial is to emphasize the importance of accurate position and orientation configuration for devices that require these parameters, like range sensors.

On Linux, the test_ir.sh script will be used for running the tutorial. On Windows, there should be a test_ir.bat script instead. Open up the resource-config.xml file in the directory containing this tutorial. It should have a dimension specification with a Shape tag and a single device configuration of a USB IR sensor, with ID Test_IR. The address parameter's value is 1, indicating that this device configuration refers to the sensor attached to the middle cord of the sensor set. Note that among the parameters in this resource configuration are position configurations (the x, y, and z parameters) and orientation configurations (yaw, pitch, and roll). The default value for all of these parameters is 0, except for the x value, which is 20, indicating that the sensor is placed at the front center of the robot and pointing straight forward, as shown in the figure. This probably does not correspond to where the actual middle sensor is physically placed on the robot, but disregard this discrepancy for now.

Put an object in front of the middle sensor in the sensor group (i.e., the sensor attached to the middle cord), so that this sensor's blue sensor light blinks. In the 02-config-ir directory, run the test_ir.[sh/bat] script and pipe the result to a file (or just hit CTRL-C soon after the script is run and scroll back up the shell or command line window to see the beginning of the program's output). The program should output something like the following:
Initializing...ok
INFO - Loading network file. [behave.cpp:181]
INFO - Running network with an interval of 0.10 sec. [behave.cpp:272]
Input back    double: 1.79769e+308
Input front   double: 23.0796
Input left    double: 1.79769e+308
Input right   double: 1.79769e+308
Input back    double: 1.79769e+308
Input front   double: 19.9482
Input left    double: 1.79769e+308
Input right   double: 1.79769e+308
Input back    double: 1.79769e+308

In the sensor readings that follow (i.e., the output lines starting with Input [front/back/left/right]), there is a valid reading for the front side, whereas for all the other sides there is only a very large number representing positive infinity, which means no valid range sensor readings were received on those sides.

Now edit the yaw parameter of the Test_IR device configuration to contain the value pi/4. Roll, pitch, and yaw values are specified in radians. The value pi/4 corresponds to 45 degrees. ERSP uses a right-handed coordinate system where the positive x-axis points forward, the positive y-axis points to the left, and the positive z-axis points straight up. In this coordinate system, positive yaw is to the left, so a yaw of pi/4 is 45 degrees to the left of straight ahead, as shown in the figure. Run the test_ir.[sh/bat] script again, and there is still a valid sensor reading for the front side, but no readings for the other three sides. This is because the sensor is still pointing in the front quadrant. The quadrant borders are shown by the diagonal lines in Figure 2.

Next, try adjusting the sensor's position. The original x, y, z parameter values are (20, 0, 0), indicating that the sensor is placed right along the center line of the robot. Change the y parameter's value to 10, which moves the sensor 10 cm to the left of center, as shown in Figure 3, below. Run the test_ir.[sh/bat] script again, and note that this time there are valid sensor readings for both front and left. This is because the sensor's direction now lies right on the border separating the front and left quadrants. This last run shows how the position and orientation configurations combine to determine how a resource is used. Feel free to experiment with other sensor positions and orientations to see what sensor coverage they produce.

In addition to sensor position and orientation, one other factor affects the sensor readings returned by FuzzyRangeSensorRing. The lx, ly, and lz parameters of the Shape section of the resource-config.xml file define a bounding box surrounding the robot. The distance reading returned by FuzzyRangeSensorRing is not relative to the sensor position but relative to the nearest edge of the robot's bounding box, as shown by the square around the robot in the diagram below:

4-44

Getting Started Guide

Resource Tutorials

[Figure: Robot with bounding box. The diagram labels an Object, the FuzzyRangeSensorRing Reading (measured from the edge of the bounding box), and the Actual Sensor Reading (measured from the sensor itself).]

This is because FuzzyRangeSensorRing is concerned with how far an object is from the robot, not from the sensor itself. To see this feature, put an obstacle in front of the middle USB IR sensor (i.e. attached to the middle cord), and in the Samp_code_dir/tutorial/resource/02-config-ir directory, run the test_ir.[sh/bat] script. Take note of the readings, which should be similar to the following:
Initializing...ok
INFO - Loading network file. [behave.cpp:181]
INFO - Running network with an interval of 0.10 sec. [behave.cpp:272]
Input back    double: 1.79769e+308
Input front   double: 33.396
Input left    double: 1.79769e+308
Input right   double: 1.79769e+308
Input back    double: 1.79769e+308
Input front   double: 32.8438
Input left    double: 1.79769e+308
Input right   double: 1.79769e+308
...

Now edit the resource-config.xml file to change the value of the lx parameter in the Dimensions section from 48 to 68 and rerun the test_ir.[sh/bat] script with the exact same sensor and obstacle setup. The sensor readings should be around 10 cm less, or similar to the following:
Initializing...ok
INFO - Loading network file. [behave.cpp:181]
INFO - Running network with an interval of 0.10 sec. [behave.cpp:272]
Input back    double: 1.79769e+308
Input front   double: 21.7395
Input left    double: 1.79769e+308
Input right   double: 1.79769e+308
Input back    double: 1.79769e+308
Input front   double: 22.2916
Input left    double: 1.79769e+308
Input right   double: 1.79769e+308
Input back    double: 1.79769e+308

There will be some variation in the difference because the USB IR sensor readings can fluctuate a bit. However, the effect of the modification of the lx parameter should be clear. The lx parameter defines the size of the bounding box along the x-axis. Recall that the positive x-axis points forward, so the configuration change of the lx value from 48 to 68 results in an elongation of the bounding box along the x-axis, as shown in Figure 2, below:
[Figure 2: Robot with elongated bounding box; an object sits in front of the robot, closer to the front edge of the bounding box than before.]

Note that by lengthening the bounding box by 20 cm from 48 cm to 68 cm, the distance from the front edge of the bounding box to the obstacle is now 10 cm shorter, as reported by FuzzyRangeSensorRing. In the resource-config.xml file, the x, y, and z parameters of the Dimensions section determine the offset from the center of the robot to the center of the bounding box. This could also affect the reading returned from FuzzyRangeSensorRing. For example, say the robot is towing a cart, so that the bounding box is as shown in figure 3:

[Figure 3: Robot towing a cart, with an object in front. The center of the bounding box is behind the center of the robot.]

To account for this, you will have to move the center of the bounding box back by specifying a negative x offset. In the resource-config.xml file, change the value of the x parameter in the Dimensions section from 0 to -10. The sensor readings should change by around 10 cm to reflect the longer distance between the obstacle and the leading edge of the bounding box.
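The tutorial does not reproduce the full resource-config.xml, and the exact markup of the Dimensions and Device sections may differ in your file. The following is only a hypothetical sketch of what the relevant entries might look like after all the changes made in this tutorial (yaw set to pi/4, the sensor moved 10 cm to the left, lx lengthened to 68, and the bounding box center shifted back by 10); the device type and the nesting of the Shape tag are assumptions.

<Dimensions>
    <!-- Bounding box size (lx, ly, lz) and offset (x, y, z) of its center
         from the center of the robot, in cm. -->
    <Shape>
        <Parameter name="lx" value="68"/>
        <Parameter name="x"  value="-10"/>
    </Shape>
</Dimensions>

<Device id="Test_IR" type="[USB-IR-driver]">
    <Parameter name="address" value="1"/>
    <!-- Sensor position relative to the robot center, in cm. -->
    <Parameter name="x" value="20"/>
    <Parameter name="y" value="10"/>
    <Parameter name="z" value="0"/>
    <!-- Sensor orientation in radians; positive yaw is to the left.
         The tutorial instructs entering pi/4; a numeric value such as
         0.7854 may be required if the parser does not accept expressions. -->
    <Parameter name="yaw"   value="pi/4"/>
    <Parameter name="pitch" value="0"/>
    <Parameter name="roll"  value="0"/>
</Device>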

Summary

This tutorial discusses how ERSP software components make use of, and are affected by, device configuration information from the resource-config.xml file. The tutorial uses as an example the FuzzyRangeSensorRing behavior, which maps IR sensors to four different sectors around the robot and adjusts sensor readings to take account of the bounding box around the robot. The effect of changing the sensor position and orientation configuration and the bounding box configuration on the sensor readings returned by FuzzyRangeSensorRing is illustrated with a couple of examples.

03-camera

Purpose
This tutorial demonstrates the use of an ERSP resource interface in C++.

Prerequisites
ERSP should be installed and working, and a camera should be connected to the computer where the tutorial will take place. The ERSP sample code package should also be extracted, and the active directory must be Samp_code_dir/tutorial/resource/03-camera. In Linux, the configure script in Samp_code_dir must be run so that the tutorial make files are properly generated.


Task
On Windows, open up the camera.vcproj project file in Visual Studio .Net. On Linux, simply type make. Build the camera project using the method appropriate for the platform to make sure that there are no errors. Open up the camera.cpp source file. This file contains the bare skeleton of a program that you will fill out during this tutorial. This file should contain the following code:
#include <evolution/Resource.hpp>

using namespace Evolution;

ResourceManager*    resource_manager;
IResourceContainer* resource_container;
ICamera*            camera;

void clean_up()
{
    std::cout << "Deactivating resources.\n";

    // Release allocated interfaces.
    if (resource_container != NULL && camera != NULL)
    {
        resource_container->release_interface (0, camera);
        camera = NULL;
    }

    // Clean up resource manager.  This will also release the
    // resource_container interface.
    delete resource_manager;
}

int main (int argc, char** argv)
{
    return (0);
}

Because you are only working with resources here, you only need to include the evolution/Resource.hpp composite header file. By specifying that you are using the Evolution namespace, you avoid having to prepend Evolution:: to the various ERSP type names in this example. Next you define three pointers: resource_manager, resource_container, and camera. These pointers will be set to the appropriate objects as you write out this program. After the pointer declarations, there is a clean_up method. This method takes care of cleaning up the above pointers after the program is done. In this tutorial you will be adding code to the main method.

The objective in this example is to make use of a resource interface, ICamera in this particular case, in C++ by calling its methods directly. In ERSP, resource interfaces are obtained via the obtain_interface method of the IResourceContainer interface. An implementation of this interface is available via the get_resource_container method of the ResourceManager class. Therefore, you begin by creating an instance of ResourceManager. In main, right before the return (0); statement, add the following code:


// Variable to hold result from function calls.
Result result;

// Create a resource manager.
resource_manager = new ResourceManager (NULL, result);
if (result != RESULT_SUCCESS || resource_manager == NULL)
{
    std::cout << "ERROR: Failed to load resource manager.\n";
    clean_up();
    return 1;
}

First you create a result variable to hold the result that all ERSP methods return. You then create a ResourceManager instance and assign the pointer value to the resource_manager variable. In this tutorial you will do some minimal error checking to alert you to potential problems. Next you check the value of result and the resource_manager pointer value to see if you've succeeded. If you failed, you just call clean_up and return an error value of 1 from main (i.e., exit from the program). Having created an instance of ResourceManager, you next get the IResourceContainer interface. This is done with the following code:
// Obtain the resource container interface from the
// resource manager.  Also check to make sure
// that the resource container is obtained.
if (resource_manager->get_resource_container (0, &resource_container) != RESULT_SUCCESS ||
    resource_container == NULL)
{
    std::cout << "ERROR: Failed to get resource container.\n";
    clean_up();
    return 1;
}

You call ResourceManager's get_resource_container to get a pointer to the IResourceContainer interface and do some error checking to make sure that you are successful. Once you have an IResourceContainer interface, you get an interface pointer to the camera that you are interested in by using the obtain_interface method, as shown below:
// Obtain the camera interface from the resource container.
const char* camera_id = "camera0";

if (resource_container->obtain_interface (0, camera_id,
                                          ICamera::INTERFACE_ID,
                                          (void**) &camera) != RESULT_SUCCESS ||
    camera == NULL)
{
    std::cout << "ERROR: Failed to obtain camera.\n";
    clean_up();
    return 1;
}


IResourceContainer's obtain_interface call takes four parameters. The first parameter is reserved for a security ticket, for which you can just pass 0 for now. The second parameter is the camera ID; this is the same ID given to the camera device in resource-config.xml. In most cases this would be camera0, indicating the first detected camera device. The third parameter is the ID of the interface that you would like to obtain. In this case you would like to obtain the ICamera interface, so you pass ICamera::INTERFACE_ID. In ERSP, every interface stores its ID in a publicly defined INTERFACE_ID variable, and ICamera is no exception. The last parameter is an ICamera* pointer variable to hold the ICamera interface pointer to be passed back by the obtain_interface call. Recall that ICamera* camera was defined statically at file scope near the beginning of the source file.

The ICamera interface pointer can now be used to grab an image and write it to disk as a JPEG. This is done with the following code:
// Obtain an image from the camera.
Image* image = NULL;
if (camera->get_image(0, (const Image**) &image) != RESULT_SUCCESS)
{
    std::cout << "Could not capture an image with the camera.\n";
    clean_up();
    return 1;
}

// Write out the captured image as a jpeg.
const char* filename = "camera_image.jpg";
double quality = .90;
if (image->write_file(filename, quality) != RESULT_SUCCESS)
{
    std::cout << "Failed to write camera image to file.\n";
    clean_up();
    return 1;
}
else
{
    std::cout << "Wrote image " << filename << ".\n";
}

clean_up();

The above code grabs an image with the get_image method of ICamera and writes the image file out as a JPEG with quality set to 0.90 using the write_file method. After this, you end the program with a call to clean_up, defined just above the main function, to clean up the various interface pointers you created and used in this program. In this tutorial, you've only used two methods of the ICamera interface. This interface is particularly rich. Please refer to the Doxygen documents in Install_dir/doc/ERSP-API/html/index.html for more information on this interface. ERSP supplies a variety of resource interfaces, and they all can be used from C++ in the same manner that the ICamera interface is obtained and used in this tutorial.
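To make the general pattern explicit, here is a minimal sketch of how any resource interface is obtained, used, and released; ISomeInterface, some_device0, and do_something are placeholders rather than real ERSP names, and only the obtain_interface and release_interface calls mirror the usage shown above.

// Generic obtain/use/release pattern (placeholder names, not a real interface).
ISomeInterface* iface = NULL;
if (resource_container->obtain_interface (0, "some_device0",
                                          ISomeInterface::INTERFACE_ID,
                                          (void**) &iface) == RESULT_SUCCESS &&
    iface != NULL)
{
    iface->do_something ();                            // Use the interface.
    resource_container->release_interface (0, iface);  // Release it when done.
    iface = NULL;
}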


Summary
In this tutorial, you've seen how to obtain and use an ERSP resource interface, ICamera. A resource interface must be obtained from an IResourceContainer interface, which is in turn obtained from an instance of ResourceManager. Any other ERSP interface can be obtained and used in the same way.

04-drive-system

Purpose
This tutorial shows how to move the robot using the IDriveSystem resource interface.

Prerequisites
ERSP should be installed and working, and the robot provided with ERSP should be plugged into the computer that runs the tutorial. Make sure that the robot has at least a meter square of clear space in front and behind it. The ERSP sample code package must also be extracted, and the active directory should be Samp_code_dir/tutorial/resource/04-drive-system. In Linux, the configure script in Samp_code_dir must be run so that the tutorial make files are properly generated.

Task
On Windows, open up the drive_system.vcproj project file in Visual Studio .Net. On Linux, simply type make. Build the drive_system project using the method appropriate for the platform to make sure that there are no errors. Open up the drive_system.cpp source file. This file contains the bare skeleton of a program that you will fill out during this tutorial. This source file is similar to the skeleton source file from the previous tutorial. However, the process of obtaining the drive system interface has already been done in the helper method initialize_interfaces. The main method begins with a call to initialize_interfaces to properly initialize the IDriveSystem interface pointer. Therefore, you can begin right away with using this interface.

The first method of this interface that you will use is set_idle_power, which determines how much power should be driven to the drive system even when the robot is idle. Having power to the drive system allows the robot to maintain position on sloped surfaces, but also drains the battery faster. Before doing this, try manually rotating the robot's wheels. They should turn easily because no resistive power is applied. Now, add the following lines of code right after the comment // Tutorial code starts here in the main method:
ResourceCallbackImpl callback;
int CB_ID = 0;

std::cerr << "Set the idle power.\n";
result = drive_system->set_idle_power(NO_TICKET, &callback, CB_ID, 60);
if (result != RESULT_SUCCESS ||
    callback.wait_for_result() != RESULT_SUCCESS)
{
    std::cout << "Failed to set idle power.\n";
    clean_up();
    return 1;
}
Platform::millisecond_sleep(5000);

Note the ResourceCallbackImpl callback declaration. Several methods of IDriveSystem take time to complete but are asynchronous, in the sense that they return right away even though the tasks these methods perform are still in progress. In order to know when these tasks are completed, ERSP provides a callback notification mechanism. The set_idle_power method is one of these. The signature of these methods is that they take a pointer to an IResourceCallback interface along with a callback ID. These parameters typically follow immediately after the TicketId parameter.

The above code demonstrates a typical use of this callback mechanism to turn the asynchronous methods into synchronous blocking calls, which halt program execution until the asynchronous task is complete. First, you have a call to set_idle_power to set the drive system power to 60% (the last parameter of the call) when the robot is idle. The method immediately returns a result, and if the call was successful, the wait_for_result method of the callback object is called to wait for the power setting to complete. This wait_for_result call effectively blocks until the new power setting is fully complete and the callback object is notified of that event. If the call to wait_for_result also succeeds, then the new power setting has been set successfully.

Right after this error checking is a call to Platform::millisecond_sleep(5000). The purpose of this call is to prevent the program from exiting right away after setting the new idle power, to give the user time to manually test the wheel. The parameter to this call is in milliseconds, as the name of the method suggests, and so the user has five seconds after running the program to try to manually rotate the wheel. Build the tutorial program with the above new code now and run the resulting executable. Try to manually rotate a wheel while the program is running. There should now be substantial resistance to moving the wheel at all due to the new idle power setting. The program should exit after five seconds.

Methods like millisecond_sleep that are static methods of the class Platform are part of ERSP's platform abstraction layer. Platform-specific versions of these methods are implemented for each platform that ERSP supports. ERSP users are encouraged to use these methods instead of the native versions to keep their code portable across the various ERSP-supported platforms.

Remove the line Platform::millisecond_sleep(5000); from the source file. You won't need this artificial delay any further in this tutorial. The next thing you will do with IDriveSystem is to set the move power, or the power the drive system will have available when actually moving the robot. Add the following lines of code to the source file right after the previous code block:


std::cerr << "Set the move power.\n; result = drive_system->set_move_power(NO_TICKET, &callback, CB_ID, 60); if (result != RESULT_SUCCESS || callback.wait_for_result() != RESULT_SUCCESS) { std::cout << "Failed to set move power.\n; clean_up(); return 1; }

Notice the same callback pattern in this method call, which sets the move power level to 60%. Now that you have power to the drive system, you can start to move the robot around. The first thing you will do is to move the robot forward for half a meter. Make sure that the robot has at least a meter square of free space directly in front of it. Then type in the following lines of code, right after the previous code block:
double move_delta = 50;           // cm.
double linear_velocity = 20;      // cm/sec.
double linear_acceleration = 40;  // cm/sec^2.

std::cerr << "Move forward half a meter. "; result = drive_system->move_delta(NO_TICKET, &callback, CB_ID, move_delta, linear_velocity, linear_acceleration); if (result != RESULT_SUCCESS || callback.wait_for_result() != RESULT_SUCCESS) { std::cout << "Failed to move forward.\n; clean_up(); return 1; } std::cerr << "ok\n;

First you set some variables to hold the motion parameters. The move_delta variable stores how far the robot should move. The linear_velocity variable determines how fast the robot should move, and the linear_acceleration variable determines how fast the robot should accelerate to its desired velocity. Note the comments after the variable assignments indicating the units of these parameters. At the resource level where you are working, all methods operate on a single set of default units, which are centimeters for linear distance, radians for angular distance, and seconds for time. At higher levels of abstraction in ERSP, such as the TEL, a rich set of unit conversion facilities allows the use of a variety of different units.

Now you are ready to call the move_delta method of IDriveSystem to move the robot forward. The move_delta method is another one that supports the callback pattern of completion notification. The use of std::cerr to print out an ok string after the completion of the move is used here to illustrate the blocking nature of the above code. That is, the ok is not printed out until after the robot has fully moved the specified 50 cm.


Build the program with this new code and run the program to verify that this is the program's behavior. Now add the following code to have the robot turn a specified angular distance:
double turn_delta = M_PI_2;            // 90 degrees.
double angular_velocity = M_PI/6;      // 30 deg/sec.
double angular_acceleration = M_PI/3;  // 60 deg/sec^2.

std::cerr << "Turn to the left. "; result = drive_system->turn_delta(NO_TICKET, &callback, CB_ID, turn_delta, angular_velocity, angular_acceleration); if (result != RESULT_SUCCESS || callback.wait_for_result() != RESULT_SUCCESS) { std::cout << "Failed to turn left.\n; clean_up(); return 1; } std::cerr << "ok\n;

This code turns the robot 90 degrees to the left using the turn_delta method of IDriveSystem. This code also executes in a blocking manner like the previous move. Build the program, but before running it, make sure there is enough room in front of the robot for forward motion. Code is added in this tutorial, not replaced, so each time the program is run, the new motion and all previously entered motions will execute. For example, when running the program with the above code, the robot will move forward 50 cm from the previous code block, then turn 90 degrees to the left from the current code block. If there is no room for all this motion, wheel the robot back to its original starting position after each program execution. Once the program exits, the drive system is deactivated and idle power is no longer applied to the drive system, so wheeling the robot around should be no problem. The next method of IDriveSystem that will be demonstrated is move_and_turn. Add the following code to the main method of drive_system.cpp after the previously entered code blocks:
std::cerr << "Free turning.\n; result = drive_system->move_and_turn(NO_TICKET, 0, linear_acceleration, -angular_velocity, angular_acceleration); if (result != RESULT_SUCCESS) { std::cout << "Failed to move and turn.\n; clean_up(); return 1; }

std::cerr << "Slight delay.\n; Platform::millisecond_sleep(3000);

std::cerr << "Stop smooth.\n;

Getting Started Guide

4-53

Chapter 4

result = drive_system->stop(NO_TICKET, &callback, CB_ID, IMotorCommand::STOP_SMOOTH); if (result != RESULT_SUCCESS || callback.wait_for_result() != RESULT_SUCCESS) { std::cout << "Failed to stop smoothly.\n; clean_up(); return 1; }

Note that the move_and_turn method does not take any callback parameter. This method simply specifies the linear velocity, linear acceleration, angular velocity, and angular acceleration to the drive system and returns. There is no target specified by this method, and so the specified motion parameters will continue to apply to the drive system until countermanded. This is done in the above code with the stop method after a wait of 3 seconds, made with the call to millisecond_sleep. The stop method is one of those that support the now-familiar callback paradigm. Its final parameter specifies how the drive system should stop; smoothly in this case, with quick but gradual deceleration. In this particular case, the linear velocity parameter is set to 0, so the move_and_turn method only turns. In this next block of code, you use move_and_turn to move linearly only, with the angular velocity set to 0, and you also use a new stop type:
std::cerr << "Free moving backward.\n; result = drive_system->move_and_turn(NO_TICKET, -linear_velocity, linear_acceleration, 0, angular_acceleration); if (result != RESULT_SUCCESS) { std::cout << "Failed to move and turn.\n;

clean_up(); return 1; }

std::cerr << "Delay for backward move.\n; Platform::millisecond_sleep(3000);

std::cerr << "Stop abrupt.\n; result = drive_system->stop(NO_TICKET, &callback, CB_ID, IMotorCommand::STOP_ABRUPT); if (result != RESULT_SUCCESS || callback.wait_for_result() != RESULT_SUCCESS) { std::cout << "Failed to stop abruptly.\n; clean_up(); return 1; }


Note that you use a negative linear velocity value in this call, so that the robot will move backward. Now instead of stopping smoothly, you will stop abruptly, using the STOP_ABRUPT value in the stop method call. Here is one more demonstration of move_and_turn, specifying non-zero values of linear and angular velocities to move in an arc:
std::cerr << "Moving and turning to the right at the same time.\n; result = drive_system->move_and_turn(NO_TICKET, linear_velocity, linear_acceleration, -angular_velocity, angular_acceleration); if (result != RESULT_SUCCESS) { std::cout << "Failed to move and turn.\n; clean_up(); return 1; }

Platform::millisecond_sleep(3000);

std::cerr << "Stop using move and turn.\n; result = drive_system->move_and_turn(NO_TICKET, 0, linear_acceleration, 0, angular_acceleration); if (result != RESULT_SUCCESS) { std::cout << "Failed to move and turn.\n; clean_up(); return 1; }

The second call to move_and_turn in the above code stops the robot by specifying zeroes for both linear and angular velocities. The acceleration parameters determine how quickly and smoothly the robot will stop in this case. In practice, ERSP uses move_and_turn almost exclusively to contour the robot's velocity based on current objectives and surrounding obstacles. Different values of linear and angular velocities are constantly fed to the drive system to react to obstacles, to follow a designated target, or just to execute a user directive. The move_delta/turn_delta methods are good for very accurate linear or angular motion. However, these commands very often have to be aborted if there's any obstacle in the way.
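To illustrate the kind of velocity contouring loop described above, here is a minimal sketch. It reuses the drive_system pointer and motion variables from this tutorial, but nearest_obstacle_distance() is a hypothetical helper standing in for whatever range sensing a real application would use; it is not part of ERSP.

// Hypothetical sketch: slow the robot down as it approaches an obstacle.
extern double nearest_obstacle_distance ();   // Placeholder helper, not ERSP.

const double max_velocity = 20;    // cm/sec, as used earlier.
const double stop_range   = 30;    // cm: stop when an obstacle is this close.
const double slow_range   = 100;   // cm: begin slowing down at this range.

for (int i = 0; i < 100; ++i)      // One contouring cycle every 100 ms.
{
    double range = nearest_obstacle_distance ();
    double velocity = 0;
    if (range > slow_range)
    {
        velocity = max_velocity;
    }
    else if (range > stop_range)
    {
        // Scale the velocity linearly between the slow and stop ranges.
        velocity = max_velocity * (range - stop_range) / (slow_range - stop_range);
    }
    drive_system->move_and_turn (NO_TICKET,
                                 velocity, linear_acceleration,
                                 0, angular_acceleration);
    Platform::millisecond_sleep (100);
}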

Summary
This tutorial illustrates the use of the various methods of the IDriveSystem interface. This interface is the primary means of robot locomotion in ERSP. Using this interface, the user can control the robot using simple motion commands with easy-to-understand parameters, such as velocity and acceleration, without the need to delve into the specifics of the underlying drive system.


05-custom-driver

Purpose
In the previous tutorials you have looked at the configuration and use of resource interfaces in detail. However, there will be cases where there is no existing ERSP interface that properly describes a new device or resource. In this case, a new resource interface must be created, along with at least one implementation. In this tutorial you will go over the development of a new ERSP resource interface and the implementation of that interface. There is quite a bit of code in this tutorial, so instead of having the user type in the code a snippet at a time, all the necessary source files have been provided in full. The tutorial reproduces certain parts of the source code for illustrative purposes as necessary.

Prerequisites
ERSP must be installed and working. The ERSP sample code package must be extracted, and the active directory should be Samp_code_dir/tutorial/resource/05-custom-driver.

Exercise
Resource interfaces are a key element of ERSP's Hardware Abstraction Layer. They provide a uniform, platform-neutral way for higher-level robotics code to interface with resources. For example, all code that interacts with cameras does so through the ICamera interface. Therefore, all calls to camera resources are the same regardless of the underlying platform or the type of camera being used. The implementation of the resource interface to use is specified by the type attribute of the Device tag in resource-config.xml.

In ERSP there are a variety of resource interfaces already defined, along with one or more implementations. However, there will inevitably be resources that haven't been covered by the defined interfaces. One of these is generic data streams. In this tutorial you will be creating a resource interface for reading and writing data streams. You will also create an implementation of this interface for file streams, to read and write data streams from and to files.

The resource interface that you will create will be called IStream. In the Samp_code_dir/tutorial/resource/05-custom-driver directory, this interface has already been defined in the IStream.hpp file. You will go over this file almost line by line to identify the various components of a resource interface definition and point out a few elements of ERSP coding conventions. The first elements in IStream.hpp, aside from the necessary includes, are the following lines:
#ifdef EVOLUTION_PLATFORM_WIN32
#ifdef DLL_BUILD
#define DLL_API __declspec(dllexport)
#else
#define DLL_API __declspec(dllimport)
#endif
#else
#define DLL_API
#endif

These lines define dynamic library import and export macros required by the Visual C++ compiler for the use of C++ classes in dynamic libraries. Resource interfaces are C++ abstract classes with one or more implementations done as C++ classes held in dynamic libraries, so these macros are necessary for the use of these classes on the Windows platform. Here is what comes next in IStream.hpp:
namespace Examples {

typedef Evolution::Result   Result;
typedef Evolution::String   String;

ERSP code makes use of C++ namespaces to prevent name collisions. All ERSP code belongs to the Evolution namespace. Because the IStream resource interface is part of the example code, the namespace that the IStream class belongs to is Examples. This would mean that in IStream code, the namespace qualifier Evolution:: would have to precede all ERSP types, such as Result or String. To save all this extra typing, a couple of typedefs are used to create versions of Result and String in the Examples namespace. Next up is the IStream class declaration, starting with:
class DLL_API IStream
{
public: // Types

    enum SeekOperation
    {
        SEEK_OP_ABSOLUTE,
        SEEK_OP_RELATIVE,
        SEEK_OP_END
    };

public: // Constants

    /// Callers should request the interface using this ID string.
    static const char* const INTERFACE_ID;

The IStream resource interface will have three different seek operations, as defined by the above enumeration. Like all resource interfaces, the IStream resource interface is identified by a string identifier. This identifier is stored in a static variable named INTERFACE_ID. In previous tutorials, this very INTERFACE_ID variable has been used to identify the desired resource interface in the call to IResourceContainer::obtain_interface.
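As noted later in this tutorial, IStream.cpp defines this constant to be Examples.IStream. The definition is not reproduced in the tutorial text, but it presumably looks something like the following (the exact formatting in IStream.cpp may differ):

// In IStream.cpp, inside the Examples namespace:
const char* const IStream::INTERFACE_ID = "Examples.IStream";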
IStream, like all resource interfaces, will be an abstract class. The interface it defines will consist of a number of pure virtual methods, declared equal to 0, with no implementation. It will be up to the underlying implementations to implement these methods. Therefore, its constructors and destructors need not do anything:


public: // Structors

    /// Empty constructor.
    IStream () {}

    /// Empty destructor.
    virtual ~IStream () {}

Finally, you come to the methods of the interface. These are defined as pure virtual methods for later implementation. They simply define the operations of the resource at this point:
public: // Commands

    /// Reads the specified number of bytes from the stream
    /// to the given buffer.
    virtual Result read (unsigned long* bytes_to_read, String* data) = 0;

    /// Writes the given string to the stream.
    virtual Result write (const char* data) = 0;

    /// Seeks to the specified location in the stream, if the
    /// stream supports it.
    virtual Result seek (SeekOperation op, long offset) = 0;

    /// Gets the current position in the stream.
    virtual Result get_position (unsigned long* offset) = 0;

}; // end class IStream

The four methods of this interface specify simple stream operations, read, write, and seek, along with obtaining the current stream position. In the IStream.cpp file, the variable INTERFACE_ID is defined to be Examples.IStream.

Having defined a basic interface to access streams, you can implement this interface for a variety of streams. In this tutorial, you will go over an implementation of this interface for files, to read and write data streams from and to files. The implementation is done in a platform-neutral way that should work on both Linux and Windows. This is possible here, but there will be many cases where a separate implementation is necessary for each platform. For example, ERSP ships with two different implementations of ICamera: one for Linux, called Evolution.LinuxCamera, based on the Video4Linux effort on Linux, and one for Windows, called Evolution.DirectXCamera, based on the DirectShow camera device abstractions on Windows.

The files FileStreamDriver.cpp and FileStreamDriver.hpp in Samp_code_dir/tutorial/resource/05-custom-driver contain the implementation of IStream for files. FileStreamDriver.hpp starts by enclosing everything in the Examples namespace, with a few typedefs to redefine some Evolution types in the Examples namespace:
namespace Examples {

// Import specific types to our namespace.  Doing a complete import
// of a namespace, e.g. "using namespace Evolution", is very bad
// in a header file, because it effectively nullifies the
// namespace for all other files that include the header file.
typedef Evolution::ResourceDriverImpl   ResourceDriverImpl;
typedef Evolution::TicketId             TicketId;
typedef Evolution::ResourceConfig       ResourceConfig;
typedef Evolution::IResourceContainer   IResourceContainer;
typedef Evolution::Result               Result;
typedef Evolution::IResource            IResource;

The FileStreamDriver class declaration immediately follows:


/**
 * @brief
 * This sample resource driver implements reading, writing, and
 * seeking in a file.
 *
 * It is intended to show the basics of writing a resource driver
 * correctly to ensure reliable performance.
 **/
class DLL_API FileStreamDriver : public ResourceDriverImpl,
                                 public IStream
{

Note that FileStreamDriver derives from both ResourceDriverImpl and IStream. In addition to implementing its particular resource interface (IStream in this case), a resource implementation must also implement the IResourceDriver interface. This interface is defined in the Install_dir/include/evolution/resource/IResourceDriver.hpp file. However, instead of inheriting directly from IResourceDriver, FileStreamDriver inherits from ResourceDriverImpl, which provides a basic implementation of IResourceDriver. For more information on ResourceDriverImpl, see Install_dir/include/evolution/resource/ResourceDriverImpl.hpp. The next parts of the FileStreamDriver class declaration are the declarations of the constructor and destructor. These are pretty standard, so you'll move on to the next part of the class declaration, which is the DECLARE_RESOURCE macro:
/**
 * The driver attribute in the XML specification file must be
 * equal to this string.
 **/
DECLARE_RESOURCE (FileStreamDriver, "Examples.FileStream");

This macro declares the existence of FileStreamDriver as a resource implementation to ERSP's resource manager and has to be present in the declaration of all resource implementations. The string Examples.FileStream uniquely identifies this resource implementation. To make use of this resource, the Examples.FileStream string must be specified as the value of the type attribute in the resource's Device tag in the resource-config.xml file. See the sample resource-config.xml file in this same directory for an example.

Next in the declaration are the IResourceDriver method declarations. IResourceDriver has four methods: activate, deactivate, is_active, and obtain_interface. The activate method is called to initialize the resource. The implementation of this function must do all that is necessary to get the resource ready for use in software. The deactivate method is just the opposite, and its implementation must clean up and shut down the resource. These methods should be implemented in a way that allows repeated calls to the activate/deactivate pair during the course of a program's execution. When a call to activate succeeds, the resource is considered active and ready for use, and the is_active method should return true. When a call to deactivate succeeds, the resource is considered not active and cannot be used, and the is_active method should return false. These methods should therefore properly set the resource's active state, so that the is_active method returns the correct value when called.

The obtain_interface call is used by external code to request resource interfaces implemented by this implementation. The key parameter passed to this method is an interface ID, which this method should check; it should return the pointer to the interface in the resource_interface output parameter if the interface is implemented, or an error code if it is not. A resource implementation class may implement more than one resource interface. These methods are followed by the IStream method declarations. You will review the implementations of these methods in FileStreamDriver.cpp shortly. The only thing left to note in FileStreamDriver.hpp at this point is the declaration of two data members: _file_path, to contain the path of the file to stream from/to, and _fp, to contain the handle to that file.

FileStreamDriver.cpp starts with the definitions of the constructor and destructor. The constructor just sets the _fp member handle to NULL. The destructor calls deactivate to make sure the resource is rendered inactive before the destruction of the resource instance. Right after these methods is the macro:
IMPLEMENT_RESOURCE2(FileStreamDriver);

This macro implements the various methods declared by the DECLARE_RESOURCE macro in the FileStreamDriver.hpp file to complete the work of identifying this resource implementation to the ERSP resource manager. This macro is then followed by the implementations of the IResourceDriver methods, starting with activate:
Result FileStreamDriver::activate ()
{
    if (is_active ())
    {
        return (Evolution::RESULT_SUCCESS);
    }

    _resource_config.get_parameter ("file_path", &_file_path);

    String should_truncate;
    _resource_config.get_parameter ("truncate", &should_truncate);

    _fp = std::fopen (_file_path.c_str (),
                      ((std::strcmp (should_truncate.c_str (), "true") == 0)
                       ? "w+" : "a+"));
    std::fseek (_fp, 0, SEEK_SET);

    return (is_active () ? Evolution::RESULT_SUCCESS
                         : Evolution::RESULT_FAILURE);
} // end activate()

This method starts by checking to see if the resource is already active, in which case it need not do anything and just returns a RESULT_SUCCESS value. If the resource driver is not yet active, it then makes calls to the get_parameter method of the _resource_config data member. _resource_config is a member of ResourceDriverImpl, which is why it was not declared in FileStreamDriver.hpp. This data member contains the configuration information defined for this resource in the resource-config.xml file. This resource expects two parameters, file_path and truncate, so it uses the get_parameter method to get the parameter values. Look in the resource-config.xml file in the 05-custom-driver directory to see an example of how these two parameters are defined. The parameter values are then used in opening the file for streaming with the standard C library's fopen call. If the file is successfully opened, the file handle is stored in the _fp member, and the resource is considered to be activated. The deactivate method is next and basically closes the _fp file handle with the fclose call:
Result FileStreamDriver::deactivate ()
{
    if (!is_active ())
    {
        return (Evolution::RESULT_SUCCESS);
    }

    std::fclose (_fp);
    _fp = NULL;

    return (!is_active () ? Evolution::RESULT_SUCCESS
                          : Evolution::RESULT_FAILURE);
} // end deactivate()

The is_active method returns true if _fp contains a valid, non-null file handle:
bool FileStreamDriver::is_active ()
{
    return (_fp != NULL);
} // end is_active()

The obtain_interface method is next:


Result FileStreamDriver::obtain_interface (TicketId owning_token,
                                           const char* interface_name,
                                           void** resource_interface,
                                           unsigned* reservation_count)
{
    if (std::strcmp (interface_name, IStream::INTERFACE_ID) == 0)
    {
        IStream* ptr = this;
        *resource_interface = (IResource*) ptr;
    }
    else
    {
        return Evolution::RESULT_NOT_IMPLEMENTED;
    }
    add_ref ();
    return Evolution::RESULT_SUCCESS;
} // end obtain_interface()

It compares the interface ID parameter with the interface ID for the IStream resource interface, and if there is a match, it returns a pointer to itself, because it implements the IStream methods directly. The returned IStream pointer can be used to call the IStream methods. These methods are implemented using the well-known buffered I/O calls from the standard C library.

The next thing you need to do is to create a resource spec file for the new resource driver. This file contains the basic information about the resource driver, such as which library contains the resource, along with default parameter values. Here is the spec file for the FileStreamDriver resource:
<ResourceSpec id="FileStream" library="./libFileStream.so" driver="Examples.FileStream"> <Parameter name="file_path" value="file_stream_output.txt"/> <Parameter name="truncate" value="false"/> </ResourceSpec>

The attributes in the ResourceSpec tag help ERSP to find the code for the resource driver implementation. The library attribute specifies the dynamic library containing the code, and the driver attribute specifies the ID of the resource driver implementation. The resource spec file also contains default values for the file_path and truncate parameters. The truncate parameter indicates whether the stream file should be truncated each time the resource implementation is run; it defaults to false, indicating that the file should be appended to with each use of this resource.

The name of the spec file, not including the .xml extension, should be the same as the last string after the . in the value specified to the DECLARE_RESOURCE macro. Recall that this value was Examples.FileStream; therefore, the resource spec file name should be FileStream.xml. The first part of the string, Examples, identifies the subdirectory under <config-path>/resource that contains the resource spec file. For example, in the Samp_code_dir/config/resource directory, there should already be an Examples directory which contains the FileStream.xml file. Adding the Samp_code_dir/config directory to EVOLUTION_CONFIG_PATH will cause ERSP to look in the Examples subdirectory under Samp_code_dir/config/resource for resource spec files, and should allow it to find the FileStream.xml file.

At this point the FileStream resource is ready to be used. Build both libFileStream and tutorial_test.cpp by typing make under Linux or building both Visual Studio projects on Windows. Run the tutorial_test program and it should create an output.txt file with the string Testing inside.
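The tutorial_test.cpp source is not reproduced in this tutorial. The following is only a sketch of how a client program might obtain and use the new IStream interface, following the same pattern as the 03-camera tutorial; the device ID file_stream0 and the file name configured in the sample resource-config.xml are assumptions, and the real tutorial_test.cpp may differ.

#include <evolution/Resource.hpp>
#include "IStream.hpp"

using namespace Evolution;

int main (int argc, char** argv)
{
    Result result;
    ResourceManager resource_manager (NULL, result);
    if (result != RESULT_SUCCESS)
    {
        return 1;
    }

    IResourceContainer* container = NULL;
    if (resource_manager.get_resource_container (0, &container) != RESULT_SUCCESS)
    {
        return 1;
    }

    // "file_stream0" is a placeholder for whatever ID the sample
    // resource-config.xml gives to the Examples.FileStream device.
    Examples::IStream* stream = NULL;
    if (container->obtain_interface (0, "file_stream0",
                                     Examples::IStream::INTERFACE_ID,
                                     (void**) &stream) != RESULT_SUCCESS ||
        stream == NULL)
    {
        return 1;
    }

    // Write a string through the new resource interface.
    stream->write ("Testing");

    container->release_interface (0, stream);
    return 0;
}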


Summary
In this tutorial you create a new resource interface and a new resource driver implementation. You create a resource spec file for the new resource driver, and you make use of the new resource driver in a test program. First you define the new resource interface, IStream. You then create an implementation of this interface for file streams, which inherits from ResourceDriverImpl and IStream. You establish to the Resource Manager that this class is a resource implementation with the DECLARE_RESOURCE and IMPLEMENT_RESOURCE2 macros. You then implement the IResourceDriver methods activate, deactivate, is_active, and obtain_interface and implement the methods specific to IStream. Finally, you create a resource specification file for the new resource driver. The specification file is placed in the Examples subdirectory of Samp_code_dir/config/resource, and the Samp_code_dir/config directory is included in the EVOLUTION_CONFIG_PATH environment variable to make sure that this directory is scanned by ERSP for configuration files.


Chapter 5

Sample Code

The ERSP ships with extensive sample code, showing how to use and extend its capabilities. This chapter describes the overall layout of the sample code.

Directory Layout
The sample code is organized to show the various components of ERSP: the Hardware Abstraction Layer (HAL), the Behavior Execution Layer (BEL), the Task Execution Layer (TEL), and the Vision SDK, along with the tutorial project files and source code for the Tutorials chapter. The sections that follow discuss the non-tutorial sample code.

Hardware Layer
Examples of using the Hardware Layer are under $SAMPLES_ROOT/driver. The driver sample code is organized in the following subdirectories:

base - Simple diagnostic programs to access various hardware components.
face - Using the morphed face driver.
speech - Using the speech recognition (ASR) and text-to-speech (TTS) drivers.


Behavior Layer
Examples of using the Behavior Layer are under $SAMPLES_ROOT/behavior. The behavior sample code is organized in the following subdirectories:

emotion - Using the morphed face and emotion models.
navigation - Using obstacle avoidance and tracking behaviors.
network - Using networking facilities, especially teleop.
resource - Using various hardware resources.
speech - Using speech facilities.
unit_test - Using miscellaneous standard behaviors.
vision - Using vision facilities.

Task Layer
Examples of using the Task Layer in C++ are under $SAMPLES_ROOT/task and, in Python, under $SAMPLES_ROOT/python. The task sample code under these directories demonstrates:

Using existing tasks in simple and complex programs, both in C++ and Python.
Writing custom tasks, including launching tasks in parallel and raising and receiving events.
Writing task primitives in C++, allowing the Task Layer to use the feedback control loops of the Behavior Layer.

Vision SDK
There is sample code demonstrating the use of the Object Recognition library in the $SAMPLES_ROOT/objrec directory. The example shows how to create and train modelsets, set model attributes, and then recognize against that modelset from a camera or image files.


Index
A
API documentation 3-1

C
Camera test 2-7
Camera troubleshooting 2-8
Customer support 2-2

D
Doxygen 3-1
Drive test 2-6

E
enable_event 4-14

G
getDefaultUnits 3-4

H
Hardware requirements 2-1
Hardware tests
    camera test 2-7
    drive test 2-6
Hazard avoidance 1-9

L
Linux version 2-3

O
Obstacle avoidance 1-9


R
Resource configuration file 3-4

S
setDefaultUnits 3-3
Speech recognition 1-10

T
Teleoperation 1-9
Text to speech 1-10
TTS 4-19

X
X, Y coordinates 3-1

