
Emotion in Behavioural Architectures

Ruth Aylett
Centre for Virtual Environments
University of Salford
One of the consequences of the application of behavioural architectures to robotics and the focus on
situated agents that accompanied it from the mid 1980s was a move away from the cognitivist approach to
agenthood in which reflection and internal complexity drive an agent. Behavioural architectures and the
situated approach emphasise interaction between agent and environment instead and argue that complexity
is an emergent behaviour. Agents need not be internally complex if complexity arises through the
interaction of individually simple components and the environment.
We argue that this change in focus has consequences for all aspects of agent architecture, among them the
role of emotion. In particular, it is interesting to explore how an interactionist view of emotion, which sees
it as a behaviour – as process rather than state – can be integrated into this view of agenthood. We should
note here that work in virtual agents has given a fresh impetus to the whole field of emotion in agents,
perhaps for two reasons. Firstly, an embodied virtual agent in a virtual environment provides many more
external channels for the representation of emotional state - gaze, facial expression, gesture and overall
body language - than was the case with disembodied intelligent agents, where language content was just
about the only means of expression. Secondly, many virtual agent domains are those in which the
expression of emotional state is essential to the application. Here, the use of avatars in distributed multi-
user environments has provided a driving pressure.
The emphasis of those working at the cognitive end of the agent spectrum is emotion as a cognitive state,
while for those working at the more physical end it is emotion as a bodily state. Note that by this we mean
the internal modelling of emotion, rather than its external expression. These two approaches reflect a long-
standing debate within psychology itself [Picard 97] and could be traced back as far as the separation of
body and mind by Descartes.
The longer-standing approach is cognitive modelling, from [Ortony 88, Frijda 87] onwards. Its
advantage is that the agent is always in one or more explicitly defined emotional states, giving a clear
relationship with the external manifestation of that emotion. This may be - and usually is - linked into
symbolic reasoning processes allowing an emotional state to be produced by complex chains of reasoning
about the state of the world or about the motivations of other agents. In turn the emotional state may then
influence the goals an agent sets for itself or the means by which it tries to achieve these goals.
Emotion as behaviour
The simplest - but rather crude - way of modelling emotion at a lower level than the cognitive is to equip
the agent with meters. These are then incremented or decremented according to its interaction with the
environment, with other virtual agents or with a human user [Aylett et al 99]. Thresholds can be set which
allow external behaviour switching of the type discussed below.
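As a minimal sketch of this meter approach (the names, values and thresholds below are illustrative assumptions, not taken from [Aylett et al 99]):

```python
class EmotionMeter:
    """A crude 'meter' model of emotion: a bounded level plus a threshold.
    Names and values are illustrative, not from the systems cited."""

    def __init__(self, name, threshold):
        self.name = name
        self.threshold = threshold
        self.level = 0.0  # kept clamped to [0, 1]

    def adjust(self, delta):
        # incremented or decremented by interaction with the environment,
        # other agents or a human user
        self.level = max(0.0, min(1.0, self.level + delta))

    def triggered(self):
        # crossing the threshold allows external behaviour switching
        return self.level >= self.threshold

fear = EmotionMeter("fear", threshold=0.7)
fear.adjust(+0.5)  # e.g. a threatening stimulus appears
fear.adjust(+0.3)
active_behaviour = "flee" if fear.triggered() else "wander"
```

The crudeness is visible in the code: the meter is pure state, with no process dynamics of its own.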
However, viewing emotion itself as behaviour, that is, as process, suggests a different view. Here,
emotion can be seen as internal behaviour within the agent. This internal behaviour then interacts with the
agent’s external behaviour. This may modify the external behaviour in a simple way - an agent may run
faster if it is frightened - or in a more complex way through the production of gesture or facial expression.
Thus emotional communication can itself be seen as an emergent property arising from the interaction of
internal and external behaviour rather than necessarily from a chain of reasoning based on a cognitive state.
In other words, the agent does not have to explicitly drive the external display of emotion since this
happens as a natural result of the interaction between the internal emotional process and all the other
behaviours active within the agent. This can all happen at a sub-symbolic level and does not require any
explicit manipulation of goals.
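One way to picture this, as a hedged sketch (the decay and gain constants are invented for illustration), is an internal fear process running alongside an external locomotion behaviour and modulating it, with no explicit goal manipulation:

```python
def fear_process(fear, stimulus, decay=0.9, gain=0.5):
    """Internal behaviour: fear is a process that rises with threatening
    stimuli and decays over time (constants are illustrative)."""
    return min(1.0, fear * decay + gain * stimulus)

def locomotion(base_speed, fear):
    """External behaviour: a frightened agent simply runs faster;
    the modulation happens by interaction, not by reasoning."""
    return base_speed * (1.0 + fear)

fear, speeds = 0.0, []
for stimulus in [0.0, 1.0, 1.0, 0.0]:  # a threat appears, then passes
    fear = fear_process(fear, stimulus)
    speeds.append(locomotion(1.0, fear))
```

The emotional display (running faster) emerges from the coupling of the two processes; neither function mentions a goal or an emotional state label.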
This view of emotion emphasises its functional character as an essential part of an agent’s internal
processes and of its coupling with its environment rather than as a state superimposed onto an essentially
rationalist architecture. Of course an agent can only have internal behaviours if it has internal structure
which supports them, just as external behaviours require appropriate sensors and actuators (leg-moving
behaviour requires legs). One of the difficulties in implementing this approach on a robot is that most
robots have very little internal structure in the animal sense and are little more than computers on wheels.
The biological analogy suggests the approach of modelling an endocrine system, as in Creatures [Grand
& Cliff 98], with chemical emitters and receptors [Canamero 97]. Emotion can then be manifested as part
of the overall interaction of the agent with its environment rather than being modelled as a cognitive state.
This is of course far easier to achieve with a virtual agent than with a real-world robot.
Apart from scientific interest, one might ask why it is worth creating this kind of process-based emotional
system within a behaviourally driven agent. One possible answer lies in the well-known problems created
by the undesirable or counter-productive interaction between behaviours seen in most behavioural
architectures. Behaviours can be thought of as interacting on two scales. Micro interaction occurs at any
given time as the emergent behaviour is produced by interaction between all active behaviour patterns in
the agent. Longer-term interaction occurs as radically different emergent behaviours are required from the
agent for successful adaptation to and activity within its environment.
At the micro level, behavioural interaction can be controlled by the inhibition/excitation hard-wiring of the
Brooksian subsumption architecture, or by utility functions which weight the output of behaviour patterns
synthesised together as in the Behavioural Synthesis Architecture (BSA) applied here at Salford [Barnes
96]. However, one can view the internal emotional processes of an agent as a way of modifying such local
connections between active behaviours. For example, one may implement micro interaction through a
neural net as in the case of Creatures. Emotional processes can then be modelled as a chemical
environment in which such a neural net sits with variable concentrations of particular substances
modifying the operation of the neural net’s system of weights. This provides a mechanism through which
local interactions can be modified by global information.
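A hedged sketch of this idea (of the general mechanism, not of the actual Creatures implementation): a single neural unit whose effective weights are scaled by the concentration of a hypothetical substance, so that global chemical state modifies local interactions:

```python
import math

def modulated_unit(inputs, weights, chemicals, sensitivity=2.0):
    """One neural unit sitting in a 'chemical environment': the summed
    concentration scales every weight multiplicatively, so global
    information modifies the net's local operation.
    Substance names and the sensitivity constant are illustrative."""
    modulation = 1.0 + sensitivity * sum(chemicals.values())
    activation = sum(w * x for w, x in zip(weights, inputs))
    return math.tanh(modulation * activation)

calm    = modulated_unit([1.0, 0.5], [0.4, 0.2], {"adrenaline": 0.0})
aroused = modulated_unit([1.0, 0.5], [0.4, 0.2], {"adrenaline": 0.8})
```

The same stimulus produces a stronger response under a high concentration, without any change to the local weights themselves.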
At the macro level, an agent needs the ability to sequence emergent behaviour over longer periods of time
in a coherent manner. For example, if a bird has a set of nesting behaviours and a set of hunting for food
behaviours, not only does it need a way of switching from one set of behaviours to the other, it needs to
undertake each group for reasonable periods of time. Without a certain amount of persistence in hunting,
the bird will starve to death. If it is constantly leaving the nest, its young may never hatch. Whether we call
the internal behaviours discussed here ‘emotions’ or ‘drives’, they appear to provide a mechanism for
producing this type of coherent sequencing. For example, a drop in the level of blood sugar can be
modelled as an agent uses up its food resources with this triggering the switching in of food-seeking
behaviours which will be maintained until enough food has been ingested to raise this level, when nesting
behaviour can be resumed.
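The blood-sugar example can be sketched as a switching rule with two thresholds, which yields exactly the persistence described above (the levels and rates are invented for illustration):

```python
def select_behaviour(blood_sugar, current, low=0.3, high=0.8):
    """Drive-based switching with persistence: below `low` switch to
    hunting, above `high` resume nesting, and in between stay with the
    current activity rather than dithering at a single boundary."""
    if blood_sugar < low:
        return "hunting"
    if blood_sugar > high:
        return "nesting"
    return current

# nesting burns blood sugar; hunting (and eating) restores it
sugar, behaviour, trace = 0.9, "nesting", []
for _ in range(12):
    behaviour = select_behaviour(sugar, behaviour)
    sugar += 0.2 if behaviour == "hunting" else -0.1
    trace.append(behaviour)
```

The trace shows coherent runs of each behaviour group rather than rapid alternation; with a single threshold the bird would flip between the two at the boundary.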
Future work
A short project has been carried out at Salford in creating 'Virtual Teletubbies' – the children's TV
characters [Aylett et al 99]. The BSA, which had previously been applied to co-operating mobile robots
[Aylett et al 97], was reapplied to these virtual agents. Unlike the robots, which were task-driven,
Teletubbies required a different mechanism for switching between groups of behaviours through the BSA
structure known as a behaviour script. To achieve this, small behaviour scripts – behaviour sub-scripts in
effect – were queued at the four conceptual levels within the architecture: individual-, environment-,
species- and universe-oriented behaviours. The drives of hunger, curiosity and fatigue were used to alter the
priority of stored sub-scripts in order to produce the sequencing of groups of behaviours discussed above.
These drives were however implemented as simple meters rather than as internal processes, and a more
principled reimplementation seems highly desirable.
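Even in the meter version, the priority-alteration mechanism can be sketched: each queued sub-script carries a base priority that the relevant drive level rescales (the names and numbers below are illustrative, not the Teletubbies implementation):

```python
def prioritise(sub_scripts, drives):
    """Reorder queued behaviour sub-scripts by drive-modulated priority,
    producing the sequencing of behaviour groups discussed earlier.
    Sub-script and drive names are illustrative."""
    return sorted(sub_scripts,
                  key=lambda s: s["priority"] * drives[s["drive"]],
                  reverse=True)

sub_scripts = [
    {"name": "find_food", "priority": 1.0, "drive": "hunger"},
    {"name": "explore",   "priority": 1.0, "drive": "curiosity"},
    {"name": "rest",      "priority": 1.0, "drive": "fatigue"},
]
drives = {"hunger": 0.9, "curiosity": 0.3, "fatigue": 0.1}
ordered = prioritise(sub_scripts, drives)
```

A process-based reimplementation would replace the static drive values with dynamics of the kind sketched for blood sugar above.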
A further issue is that the actual Teletubbies engage in social behaviour such as hugging which requires
some emotional architecture as motivation. In addition, it is interesting to explore how far the user is able
to interpret the emergent behaviour of a Teletubby in emotional terms, which is arguably of great
importance in creating empathy with the agents and a sense of their believability.
The BSA defines behaviour patterns as a pair of functions: one mapping sensor stimulus to actuator
response, and one mapping sensor stimulus to response utility. These behaviour patterns are then
synthesised together, with the actuator response of each pattern weighted by its corresponding utility. This
framework is of sufficient generality to allow the definition of internal behaviours in the same form. Future
work is required to establish what internal emotional behaviours should be incorporated into the BSA and
how these should interact with the externally directed behaviours just mentioned. For example, one
possibility is to allow emotional behaviours to interact with the utilities of externally directed patterns,
another is to allow them to affect actuator response more directly via the synthesis mechanism. The internal
structure required to support such emotional behaviours also requires analysis.
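The first possibility can be sketched as follows (an illustrative rendering of the scheme just described, not the actual BSA code): each pattern is a (response, utility) pair of functions of the stimulus, responses are synthesised as a utility-weighted combination, and an internal emotional behaviour rescales the utilities:

```python
def synthesise(stimulus, patterns, utility_mods=None):
    """BSA-style synthesis: each behaviour pattern maps stimulus -> response
    and stimulus -> utility; the actuator output is the utility-weighted
    combination of all responses. `utility_mods` stands in for an internal
    emotional behaviour rescaling utilities (an assumed interface)."""
    utility_mods = utility_mods or {}
    total, weighted = 0.0, 0.0
    for name, (response_fn, utility_fn) in patterns.items():
        u = utility_fn(stimulus) * utility_mods.get(name, 1.0)
        weighted += u * response_fn(stimulus)
        total += u
    return weighted / total if total else 0.0

# two hypothetical external patterns: steer away from, or towards, a stimulus
patterns = {
    "avoid": (lambda s: -1.0, lambda s: s["proximity"]),
    "seek":  (lambda s: +1.0, lambda s: 1.0 - s["proximity"]),
}
stimulus = {"proximity": 0.5}
neutral = synthesise(stimulus, patterns)
fearful = synthesise(stimulus, patterns, {"avoid": 3.0})  # fear boosts avoidance
```

Here fear tilts the synthesised output towards avoidance without touching the patterns themselves, which is what makes utility modulation an attractive point of interaction.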

References
Aylett, R.S.; Coddington, A.M.; Barnes, D.P. & Ghanea-Hercock, R.A. (1997) "Supervising multiple
cooperating mobile robots". Proceedings, 1st International Conference on Autonomous Agents, Marina
del Rey, Feb 1997.
Aylett, R.S.; Horrobin, A.; O'Hare, J.J.; Osman, A. & Polyak, M. (1999) "Virtual Teletubbies: reapplying a
robot architecture to virtual agents". Proceedings, 3rd International Conference on Autonomous Agents (to
appear).
Barnes, D.P. (1996) "A behaviour synthesis architecture for cooperant mobile robots". Advanced Robotics
and Intelligent Machines, eds J.O. Gray & D.G. Caldwell, IEE Control Engineering Series 51, pp. 295-314,
1996.
Canamero, D. (1997) "Modelling Motivations and Emotions as a Basis for Intelligent Behaviour".
Proceedings, 1st International Conference on Autonomous Agents, pp. 148-155, ACM Press, Feb 1997.
Frijda, N. (1987) "The Emotions". Cambridge University Press, 1987.
Grand, S. & Cliff, D. (1998) "Creatures: Entertainment software agents with artificial life". Autonomous
Agents and Multi-Agent Systems, 1(1):39-57, 1998.
Ortony, A.; Clore, G.L. & Collins, A. (1988) "The Cognitive Structure of Emotions". Cambridge
University Press, 1988.
Picard, R. (1997) "Affective Computing". MIT Press, 1997.
