Daniel Fishman
Historically, games have used Finite State Machines to handle much of dialogue and character behavior. Insight into this question can be found in Character Based Interactive Story Systems [15]. Goals
for game design are separated into two broad categories: artistic and technical. Artistic goals
seek to emulate in games the goals of great works of art and literature. Technical goals
are more quantitative, and offer direction into achieving the artistic goals.
The artistic goals are described as Joy, Rapture, and Enlightenment. Joy is the pleasure taken in the activity itself, while Rapture is a state of absorption that removes the participant from the real world. Enlightenment, finally, is a measure of improvement that the participant
gains from the activity. These same goals are traditionally achieved in the media of books, film,
and theatre.
More treatment is given to technical design. The first maxim is that the story should be
designed around characters, and the interaction between them. These interactions should take
the form of “directed improvisation,” whereby behaviors are improvised in real-time according to a
dynamic state, but the possible behaviors are bounded. Direction for behaviors may come from purely autonomous agent decisions, user input, an external director process, or any combination of agent, user, and external process.
The principles outlined are intended to guide the creation of character-based interactive
stories. Players, it is argued, should direct the main character and the interaction with other
characters along some plot arc. This interaction should be meaningful over the natural course
of events through that arc. The world should be densely populated with other semi-autonomous
characters, and the interactions with those characters should occasionally be initiated by those
other characters in addition to the player character. The overall story, it is hoped, should be
constructed around some underlying narrative arc, with the experience being monitored and
shaped by an adaptive story master, which would modulate the other characters.
In [9] we see a different approach to delineating the process of game design. The first discussion addresses user interfaces. For games, the interface is divided here into three subsets: game interface, game
mechanics, and game play. The methods of input and the on-screen display are collectively
addressed by the former. Game mechanics, then, would consist of the game rules, physics, and
other functionality. Finally, the game play constitutes the ways in which the game encourages a
player to strive to overcome challenges, and the tasks and subtasks involved in doing so.
Having considered the interface, the article then divides games into four genres. The first, Adventure games, revolves around navigating a world and solving the puzzles necessary to progress
further. Action games focus on motor skills and tactics necessary to move around and act in
a world, defeating antagonists along the way. Strategy games are defined by commanding or
controlling a virtual army, where strategic thinking is required for victory. A last category is left
for games that don’t fall cleanly into any of the above.
The remainder of the article revolves again around adages for crafting entertaining games.
The first piece of advice is to quickly establish an easily stated goal, so that the player knows what to strive for. A game should also appeal to a wider range of audience than just experts. Scalable difficulty, then, establishes a contract with a player of whatever level as to how much antagonism they wish to experience. These
first four are key in attracting players, but there is also advice on keeping players “hooked” on
the game. Spreading clues so that finding them is neither trivial nor infrequent is one method.
Applying pressure to the player can be another. Also useful are giving the player hints “(but not
answers),” and a non-linear structure so that the player can feel in control of their experience.
Many of these principles (and some of the vocabulary) are phrased in terms of usability.
[16] examines the commonalities between issues in games and human-computer interaction (HCI). The interface (defined here as the UI) for The Sims, for example, is un-cluttered and intuitive, which in turn allows players to concentrate on play itself. This example is an exception, however, as until recently in these heavily overlapping fields
there were few instances of games attending to HCI principles, or usability studies taking lessons
from game design. One such more recent advent is the rapid iterative testing and evaluation (RITE) method.
On the other hand, Game Design – Theory and Practice is given as an example where game
design had valuable contributions to HCI. Among the sixteen principles therein are descriptions of
players in concrete terms, by designers who have experience of design on both sides. Furthermore,
a study on interface features of modern games found four key areas that apply.
First, commercial games commonly have an “effortless community”, whereby an easy means
to form, join, and participate in communities revolving around the game are released as part
of or in close tandem with the game itself. Second, players of these games often learn how
to better play the game by watching higher level players. Interfaces in most games are highly
customizable, allowing players not only to modify but often to share their changes with others as
well. Finally, the interaction in these games is generally unobtrusive and does little to interfere with play itself.
In one instance, players were involved in a “playability” study, where they were asked their
opinions of interfaces in a variety of popular games. From these results was derived a game reference model that breaks down game features into entities, scenarios, and goals, with each examined in turn.
[17] offers yet another approach to designing games, one focusing on characters and ways to
influence them. Offered as a premise is the idea that, regardless of their rules, all games share a
similarity in revolving around action. More than objects, the focus should be on designing the
user’s experience of time and space. In this way, rather than “narration and description, we may be better off thinking about games in terms of narrative actions and exploration.” Live-action
role-playing games are offered as a model of character interaction. In these, action is largely
character-driven. “Intrigue” in the setting emerges from goals for characters, some of which are
mutually exclusive. In pursuing these goals, characters create both the action of the games and their drama. By providing goals for the character, the player is given a set of needs that then give her orientation in the environment. Needs that conflict with those of other actors are the basis of conflict, which in turn forms the drama of the game.
[17] also offers a treatment of genre, but not in terms of subdividing games. Instead, it is
suggested that games can take advantage of existing genre indicators to strengthen characters
in games. Genre can be used to create expectations, where the player implicitly anticipates the
character to act in a certain way out of convention. The theory of intermediality suggests that such expectations carry over from one medium to another.
On the other hand, incoherencies between expectation and what is experienced cause a player
frustration. Incoherencies can be in causality, where tools that seem to apply in a specific case
(as they may have applied in the real world and in similar cases in-game) do not. They may
also be in function, where cases that seem to be similar are handled differently. Finally, the
incoherencies may be a factor of space, where, for instance, what seems to be an exit cannot be travelled through.
Space is given further treatment in regards to creating a positive game experience. Spatial
archetypes (such as mountains or isolated buildings) may serve to draw the eye, and thus the
interest of the player. Space can also create for the player a sense of familiarity with the world, or, conversely, of strangeness.
The NICE system [13] features a variety of types of characters, personalities, and conversa-
tion. For conversation, the rudimentary groups are verbal and non-verbal. Verbal communication
has two channels, one from the speaker whereby information is delivered, and another from the
listener for feedback. Non-verbal communication is also broken down into phonemic, intona-
tional, and emotional channels. The phonemic channel communicates redundant information in
parallel with what is being spoken aloud. The intonational channel includes facial expressions
and nods, and can be used to facilitate smooth interaction. The emotional channel increases the believability of the interaction.
Personalities are modeled according to the OCEAN system of traits: Openness, conscien-
tiousness, extraversion, agreeableness, and neuroticism. The output behavior of characters has
been designed to match these traits. Characters are separated into feature and support. Feature
characters are those characters who serve a key function in the plots. Support characters may
have pieces of information needed to progress the plot, but don’t pursue objectives of their own.
For the NICE system, a further “helper” class has more of a key role than the support character,
but again lacks motivations of its own, guiding the user through the game instead.
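The trait-driven design can be illustrated with a small sketch: OCEAN traits as numeric values driving simple behavioral tendencies. The trait scale, the two rules, and the example values below are invented for illustration and are not taken from the NICE system itself.

```python
# Hypothetical sketch: OCEAN traits in [0, 1] mapped to behavioral tendencies.
from dataclasses import dataclass

@dataclass
class Personality:
    openness: float
    conscientiousness: float
    extraversion: float
    agreeableness: float
    neuroticism: float

    def small_talk_propensity(self):
        # invented rule: extraverted, agreeable characters chat more readily
        return 0.5 * (self.extraversion + self.agreeableness)

    def interrupts_user(self):
        # invented rule: high extraversion plus low agreeableness
        return self.extraversion > 0.7 and self.agreeableness < 0.3

helper = Personality(0.6, 0.8, 0.4, 0.9, 0.2)
print(round(helper.small_talk_propensity(), 2))   # 0.65
print(helper.interrupts_user())                   # False
```

A real system would map these tendencies onto the output behaviors described above, such as readiness for small talk or frequency of feedback.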
Characters are able to converse about the state of the game, their goals, and also make
small talk. They also have access to regulatory speech acts for use in any scene. These include
plan regulation such as consenting to a player problem and requesting help, and error handling
like asking for clarification from the player. Other conversational upkeep functions such as
turn management (who is accorded speaking privileges) and markers in response to unexpected utterances are also provided. The dialogue manager is separated into a kernel, which provides common functionality for the various modules, and scripting, which modifies dialogue behavior for specific characters.
[7] also features a character-based method, and uses it in conjunction with Unreal Tournament to emulate the sitcom genre, which features equal relevance between the story ending and
intermediate situations. They compare the character-based approach to a more traditional plot-based one, with their unified principle for story generation and interactivity. Characters are able to act upon
unexpected events observed throughout the constantly changing game state, as well as predict
future outcomes.
The latter is done via hierarchical task networks, which consist of various decompositions
of the task, any of which may be further decomposed into a series of sub-goals necessary to
successfully achieve the goal. Planning is done taking advantage of the total ordering of the
HTNs. Searches are executed depth-first, from left-to-right, with the ability to backtrack up
a level in the case of failure. The search method attaches heuristic values to subtasks as it proceeds.
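The total-order, depth-first, left-to-right decomposition with backtracking described above can be sketched as follows. The task names, methods, and primitive actions are invented for illustration and are not taken from [7].

```python
# Depth-first, left-to-right HTN decomposition with backtracking on failure.
def plan(task, methods, primitive):
    """Return a list of primitive actions achieving `task`, or None."""
    if task in primitive:                         # executable as-is
        return [task] if primitive[task]() else None
    for decomposition in methods.get(task, []):   # ordered alternatives
        result = []
        for subtask in decomposition:             # total order of subtasks
            sub = plan(subtask, methods, primitive)
            if sub is None:                       # failure: backtrack a level
                break
            result.extend(sub)
        else:
            return result                         # all subtasks succeeded
    return None                                   # no decomposition worked

methods = {"have_drink": [["buy_drink"], ["make_coffee"]],
           "make_coffee": [["boil_water", "pour"]]}
primitive = {"buy_drink": lambda: False,          # fails, forcing backtracking
             "boil_water": lambda: True, "pour": lambda: True}

print(plan("have_drink", methods, primitive))     # ['boil_water', 'pour']
```

The first decomposition fails, so the search backtracks and tries the next alternative to the right, as in the scheme described in the text.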
Interactions generated dynamically from random encounters, on the other hand, constitute a bottom-up approach, and necessitate situated reasoning in determining behaviors. This occurs due to discrepancies between actor expectations and action preconditions. The system is set up so as not to ignore situations of narrative relevance, even if they arise outside of planned interactions.
The other feature for handling unscripted events is action repair. When agent behavior is invalidated by changes in the game state, the affected action can be repaired at run-time.
The underlying plot is similar between instances in that the characters have the same goals. However, it varies by the positions of those characters, which are randomly determined. It
can further vary via player interaction, which consists either of acting upon the environment
directly with the mouse, or giving suggestions to the main character via speech recognition. The
speech recognition features a flexible grammar designed with habitability principles, or “syntactic and lexical variants” via templates, to remove the need for a user to memorize specific commands.
[8] addresses another challenge facing AI-generated dialogue, which is scalability. A previously
declared challenge involved creating a scene of ten minutes in length with at least one story-beat
per minute. To this end, a project building upon the sitcom model was designed to attempt this
challenge.
Situations are emergent from non-scripted interactions between the characters. They cor-
respond “metaphorically speaking [to the] ‘cross-product’ ” of character goals. Situations are
defined as a main character S, a performed action V, and a narrative object O upon which the
character takes action, and this S-V-O syntax is used for situation recognition. In the case of
action failure, the situation can also be the negative but displayed consequence of a character’s
actions.
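The S-V-O recognition idea can be illustrated with a minimal sketch, in which situations are (subject, verb, object) patterns matched against observed events. The idiom names, the wildcard convention, and the failure flag are assumptions for illustration, not details of [8].

```python
# Illustrative S-V-O situation recognition: match events against idiom patterns.
from typing import NamedTuple

class Event(NamedTuple):
    subject: str
    verb: str
    obj: str
    failed: bool = False      # failed actions can still be narratively relevant

# idioms: recognizable S-V-O patterns; "*" stands for any subject
idioms = {
    ("Ross", "hide", "diary"): "concealment",
    ("*", "read", "diary"): "discovery",
}

def recognize(event: Event):
    for (s, v, o), name in idioms.items():
        if v == event.verb and o == event.obj and s in ("*", event.subject):
            return name
    return None

print(recognize(Event("Rachel", "read", "diary")))   # discovery
```

Counting recognized idioms over time would then give the distribution used to track scene progress, as described below.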
To measure progress in the scene these S-V-O idioms were formalized so that they could be
linked with “elements of narrative value” that then served to control the camera. The progression
of the story can then be shown on a graph as a distribution of idioms over time. For the purposes of the challenge, this distribution serves as the measure of scene progress.
In an attempt to prolong the scene, a variety of techniques were applied, one of which was the
introduction of secondary characters. This led to an increased complexity, with more situations
created and thus a higher pace. In some instances it also corresponded to increased duration.
However, the practical number of secondary characters introduced to increase length was limited,
as too many caused interference (as primary character goals were sidetracked by an increasing number of secondary interactions).
Another technique was increasing the complexity of the planning. This was done both
by adding AND nodes either on the top level or inside sub-tasks, and by deepening OR nodes.
Increasing complexity in this fashion also increased idioms faster than increasing duration.
Where other articles on this subject frequently discuss interaction between two parties at a
time, [19] considers the case of dialogue among more than two people. In this case, assumptions that
are trivial in a two-party arrangement require further investigation. In a two-party dialogue the
role of speaker and hearer are implicit. In multi-party dialogue, instead the role of hearer breaks
down into who can receive the utterance, and to whom the utterance is addressed.
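This breakdown of the hearer role can be sketched with a simple data structure. The role names follow the distinction just described, while the function, field names, and example participants are invented for illustration.

```python
# Splitting everyone who can receive an utterance into hearer roles.
from dataclasses import dataclass, field

@dataclass
class Utterance:
    speaker: str
    text: str
    addressees: set = field(default_factory=set)  # to whom it is addressed

def hearer_roles(utterance, present):
    """Everyone present except the speaker can receive the utterance."""
    receivers = present - {utterance.speaker}
    return {
        "addressees": receivers & utterance.addressees,
        "side_participants": receivers - utterance.addressees,
    }

present = {"A", "B", "C"}
u = Utterance("A", "Can you help?", addressees={"B"})
print(hearer_roles(u, present))
```

An agent reasoning about multi-party dialogue would need at least this distinction to decide whether an utterance obliges it to respond.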
Roles may be further divided between a party responsible for the task (such as in a coordinating role) and a party performing
the task. Turn management becomes even more complicated with the number of additional
participants, necessitating verbal and non-verbal indicators of turn, which would need to be
modeled by an agent.
[5] uses task-driven dialogue, measuring communicative success by the performance of the task, to elucidate the processes that shape conversation. An early observation, and one that
colors much of the document, is that while fluent communications are more often studied, speech
is generally disfluent. In fact, disfluencies are arguably the natural form of speech, and are shown
to contain information.
Another observation is that referring expressions used in speech are used provisionally until
agreed upon by the partner in the exchange. Over the course of conversation (as they are
agreed upon) the expressions are clarified, and then shortened as mutual understanding occurs
progressively faster.
Another conversational phenomenon, grounding, was shown to occur differently across different media.
There are many activities that fall under this category, including (in brief) recognizing lack of
comprehension, initiating dialog repairs in this case, and determining the nature of a pause in
speech. While these expressions are intuitive when face-to-face, however, when denied visual
confirmation subjects took more time to engage in grounding before making a decision and
acting.
The implications of these results for a dialogue model were also discussed.
References used by agents should be provisional at the moment of introduction, and flexible
to accommodate mutual understanding with the user. These references should also be tracked
throughout the conversation for consistency. Fillers should be used to denote difficulty (such as delays in producing a response).
One issue facing human computer spoken interaction is the vocabulary problem. Namely,
that “people frequently have trouble discerning, remembering, or guessing the grammar and
vocabulary that a system expects and then limiting themselves to it.” [6] To this end a framework
for speech interaction is proposed, and a prototype implemented using Lisp and Apple PlainTalk speech technology.
For this framework grounding is defined as the coordination of “knowledge states moment
by moment, by systematically seeking and providing evidence about what has been said and
understood.” The parties must collaborate in making the effort to ground, with the amount of effort required depending on the grounding criterion.
In a state-based language system, it was determined that partners were either not attending
(speech had not yet begun), attending, hearing, parsing (while in the process of mapping an
utterance onto a plausible interpretation), interpreting, intending (to act on their interpretation), acting, or reporting on the result.
The framework used a system of adaptive feedback, where the grounding criterion begins
at a high level, with much time and effort spent on confirming a mutual understanding, until
multiple consecutive exchanges have been made successfully. At this point the grounding criterion continues to decrease unless a repair is necessary, at which point it may rise again.
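A minimal sketch of this adaptive feedback loop follows; the decay rate, floor, and numeric scale are chosen arbitrarily for illustration and are not taken from [6].

```python
# Adaptive grounding criterion: starts high, decays with success, resets on repair.
class GroundingManager:
    def __init__(self, level=1.0, decay=0.8, floor=0.2):
        self.level = level      # 1.0 = confirm everything explicitly
        self.decay = decay
        self.floor = floor

    def exchange_succeeded(self):
        # each successful exchange lowers how much confirmation is demanded
        self.level = max(self.floor, self.level * self.decay)

    def repair_needed(self):
        # a misunderstanding restores full confirmation
        self.level = 1.0

g = GroundingManager()
for _ in range(3):
    g.exchange_succeeded()
print(round(g.level, 3))        # 0.512
g.repair_needed()
print(g.level)                  # 1.0
```

The floor keeps the agent from ever abandoning confirmation entirely, mirroring the idea that some grounding effort always remains.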
Allwood [1] examines the need for such management techniques in spoken dialogue. It begins with a consideration of the properties of communication, including who the participants are, what they are doing, and where it occurs. This is not entirely dissimilar to the S-V-O model
discussed previously. In addition, the background of the communicators also plays a role.
Some of the management techniques are delineated as “own communication management”, where a sender or receiver of utterances is responsible for their own end. Senders are understood to have
obligations of sincerity (communicating honestly with the receiver), having a valid reason for the
speech act, and having an intention not counter to the receiver in speaking. Receivers, on the
other hand, are expected to give the sender’s speech due consideration and, if appropriate, to respond. A second set of techniques covers sequences, turn management, and feedback. Sequences occur when it is impractical for the entire act to be negotiated at one time, and the act is instead broken down into phases (initiation, maintenance, change,
and ending). Turn management is the alternation of which partner is acknowledged as having the
right to speak. Feedback, as discussed above, is important to the continuance of the conversation,
and can be negative (in the case of lack of comprehension), or positive (affirmation of agreement
or understanding).
It is proposed that management methods are necessary for two-way communication. Reasons for this include rational constraints, such as the inefficiency of simultaneous conversation, among others.
[11] also considers virtual agent dialogue, but from the perspective of tactical questioning
rather than in-game interaction. Tactical questioning “is when military personnel, usually on
patrol, hold conversations with individuals to receive information.” In this case some dialogue
models, being capable of negotiating complex actions, are unnecessary in a domain consisting of question-answer pairs. There are pairs, however, that are linked to varying compliance levels of the character being questioned.
One problem encountered with this system was intermediate coherence for obligations or
conditional obligations. For instance, if the character being questioned requested assurance, the
agent should be capable of emulating the human function of giving information without being
explicitly asked the same question again. Addressing this issue would also allow the agent to behave more consistently across a conversation.
The domain knowledge was constructed such that characters were entities that know about
objects. Objects in turn have attributes, which can take on values. This was the domain from
which the virtual character’s dialogues were designed and mapped to utterances. To make usage
easier on the user, a graphical integrated authoring environment was also created.
In addition to the domain knowledge tool, a dialogue manager was responsible for generating
dialogue content. It was built from acts, which included assertions, questions, emotional appeals,
deferred obligations, motivations, grounding, and other subdialogues. On the textual level the system utilized natural language modeling, with anaphora tracking via machine learning to follow changing referents.
[2] also addressed the issue of agent behaviors, with an emphasis on real-time embodied agents,
such as those seen in many commercial games. Their project, the Expressive Motion Engine
(or EMOTE) derives a combination of facial expressions (via a subsystem called FacEMOTE),
gestures, and gait from a world state filtered by the OCEAN personality types described above.
For the latter they employed Laban movement analysis, which is composed of body, space,
effort, shape, and relationship factors. Of these, EMOTE focuses on effort, involving the “‘dynamic’ qualities of the movement and the inner attitude using energy,” and shape, which consists of the changing forms the body makes in space.
Effort in the system is further decomposed into space, weight, time, and flow. Each factor is measured between indulging in and fighting against the element. The eight
extremes are indirect and direct, light and strong, sustained and sudden, and free and bound.
Shape is considered in terms of horizontal, vertical, and sagittal dimensions, which are associated
with length, width, and depth respectively. These factors are mapped onto a parameterized action representation.
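The effort factors can be sketched numerically; the mapping of each factor onto a [-1, 1] scale is an assumption for illustration, while the extreme labels follow those named above.

```python
# Laban effort factors: each ranges from indulging (-1) to fighting (+1).
EXTREMES = {
    "space":  ("indirect", "direct"),
    "weight": ("light", "strong"),
    "time":   ("sustained", "sudden"),
    "flow":   ("free", "bound"),
}

def describe_effort(space, weight, time, flow):
    """Label each effort factor by the extreme it leans toward."""
    values = {"space": space, "weight": weight, "time": time, "flow": flow}
    labels = {}
    for factor, v in values.items():
        assert -1.0 <= v <= 1.0
        indulging, fighting = EXTREMES[factor]
        labels[factor] = fighting if v > 0 else indulging
    return labels

# a "punch"-like movement: direct, strong, sudden, bound
print(describe_effort(space=0.9, weight=0.8, time=0.7, flow=0.5))
```

A system like EMOTE would feed such parameters into the animation layer rather than merely labeling them, but the four-axis parameterization is the same.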
Visual and dialogue modeling do a great deal to contribute to the immersive feel of good
video games. Immersion, then, is a good measure of the entertainment value of said games
[3]. Another contributing factor to the level of immersion experienced by the player is realistic
AI. Near-optimal AI has been seen as the best way to approximate realistic behavior of human
competitors.
One function useful to AI (and mentioned above) is difficulty scaling. The goal of scalable
difficulty is usually an “even game” where the computer is playing at approximately the same
skill level as human players. Adaptive AI is a way to approach such an even game.
Adaptive AI can be based on the current state of a dynamic world, but this requires either
high quality of utilized domain knowledge, or a large number of adaptation trials. These methods
are often computationally intensive, and are less useful as a result. Instead rapidly adaptive AI
can be used. In this case a large amount of domain knowledge is gathered automatically, and is
then immediately utilized (without the two drawbacks above) to evoke effective behavior.
The model proposed does this via a feedback loop. Therein an adaptation mechanism handles
offline processing of previously observed game strategies. Having done this, it initializes the game
AI according to the predicted strategy of the opponent. Finally, during the game strategies with
minimal fitness difference are checked for the most common action leading to a desired outcome.
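The selection step of this loop can be sketched as follows: among observed strategies whose fitness is closest to the current state, pick the most common action that led to a desired outcome. The observation data, tolerance, and action names are invented for illustration and are not taken from [3].

```python
# Rapidly adaptive action selection from offline-gathered observations.
from collections import Counter

# (fitness, action, led_to_win) triples gathered from previously observed games
observations = [
    (0.52, "expand", True), (0.48, "rush", False),
    (0.50, "expand", True), (0.49, "expand", False),
    (0.10, "turtle", True),
]

def choose_action(current_fitness, tolerance=0.05):
    """Most common winning action among strategies with similar fitness."""
    candidates = [a for f, a, won in observations
                  if won and abs(f - current_fitness) <= tolerance]
    if not candidates:
        return None
    return Counter(candidates).most_common(1)[0][0]

print(choose_action(0.5))   # expand
```

Because the domain knowledge is gathered automatically beforehand, the in-game step reduces to this cheap lookup, which is what makes the approach "rapid".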
This setup was tested in the SPRING game environment, for both adaptation and difficulty
scaling. It was found to be effective in the former against both the original (non-adaptive) AI
it improved, as well as random selection of AI. As far as difficulty scaling, the rapidly adaptive
AI was able to uphold a tie against both AIs for upwards of half an hour, which is considered
relatively long.
[14] also investigates game planning agents, with the constraint of using an anytime agent
for resource reasons similar to above. For intelligent agents in modern games a combination
of deliberative reasoning (the potential to formulate a plan for a future state) and a reactive
mechanism whereby the current, dynamic state, can be responded to are both necessary.
It is with respect to the former requirement that a planner was designed. Amongst the
benefits this offers, with an explicit representation of future actions comes the ability to provide
other parts of the agent with information about the expected future state. As mentioned above,
though, this process is often slow and resource intensive due to a combinatorial explosion.
Implementation as an anytime agent reduces this impact, although it also imposes additional
restrictions. An anytime agent is one that can be interrupted at any time and will still return
a usable result. Many planning algorithms are unable to meet this demand, as both forward-chaining and backward-chaining implementations could be interrupted with only a partial path computed.
One solution is a complete plan, modified iteratively by searching through the plan space.
The approach selected is a hierarchical task network, where abstraction is reduced iteratively.
Since the plan should be assumed to execute immediately, the possibility exists that partial plans
may waste resources. For this project a weighted A* search is used, where less abstraction is considered better. The search stores the best plan so far after each search cycle, which leaves the algorithm monotonic in time. This is essential for the agent to be able to accurately calculate the value of further planning.
The planner is designed such that it can be interrupted, for a variety of reasons. Criticality
interrupts could occur if the planner is calculating a goal when a new more critical goal is
proposed, or if the planner is executing such a goal when a more critical goal arrives. There
are also interrupts specific to a given goal that may arise, based on time limits or interrupt conditions attached to that goal.
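A minimal anytime skeleton in this spirit follows: the search can be stopped after any cycle and still returns the best (least abstract) plan found so far. The refinement model and toy domain are invented for illustration and do not reproduce the planner of [14].

```python
# Anytime plan refinement: interruptible per cycle, best plan only improves.
import heapq

def anytime_refine(initial_plan, refine, abstraction, budget):
    """Refine plans for at most `budget` cycles; always keep a usable best."""
    best = initial_plan
    frontier = [(abstraction(initial_plan), initial_plan)]
    for _ in range(budget):                 # each cycle is an interruption point
        if not frontier:
            break
        cost, plan = heapq.heappop(frontier)
        if abstraction(plan) < abstraction(best):
            best = plan                     # monotonic: best only improves
        for child in refine(plan):
            heapq.heappush(frontier, (abstraction(child), child))
    return best

# toy domain: a plan is a tuple; refinement expands one abstract "move" step
def refine(plan):
    for i, step in enumerate(plan):
        if step == "move":
            yield plan[:i] + ("step_left", "step_right") + plan[i + 1:]
            return

abstraction = lambda plan: plan.count("move")   # fewer abstract steps = better
print(anytime_refine(("move", "grab"), refine, abstraction, budget=3))
```

Since a complete (if abstract) plan exists from the first cycle, an interrupt at any point still yields something executable, which is the defining anytime property.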
[4] offers another model of implementing game AI with an expert system. Expert systems
consist of an inference engine, production memory, and working memory. In MYTHIC,
Drools was selected for use as the inference engine. The rules were defined as evaluation of an
agenda, which could result in an update of that agenda. Facts come in two varieties, standard and init facts. Both were populated at the beginning of a session; standard facts are updated (and deleted) at various points in the game, but can be repopulated from the set of init facts.
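A very small forward-chaining sketch of the pieces named above (production memory as rules, working memory as facts) follows. It is not Drools; the rules and fact names are invented to show the evaluate-and-update cycle.

```python
# Minimal forward-chaining rule engine: fire rules until nothing changes.
def run(rules, facts, max_cycles=10):
    """rules: (condition_set, consequence_set) pairs; facts: working memory."""
    facts = set(facts)                       # working memory
    for _ in range(max_cycles):
        fired = False
        for condition, consequence in rules: # production memory
            if condition <= facts and not consequence <= facts:
                facts |= consequence         # update working memory
                fired = True
        if not fired:
            break
    return facts

init_facts = {"spell_cast", "target_in_range"}
rules = [
    ({"spell_cast", "target_in_range"}, {"apply_effect"}),
    ({"apply_effect"}, {"update_agenda"}),
]
print(sorted(run(rules, init_facts)))
```

A production system like Drools adds conflict resolution and fact retraction on top of this basic match-fire loop, which is where the game's updates and deletions would occur.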
In the MYTHIC system, AIs were used in a variety of circumstances. They were used to
process spells and effects in game, where everything except the choice of effect and target is handled by the AI. AI was also used (as in other cases above) to handle dialogue with non-player characters.
In the game each entity had three AI sessions attached to it. During update phases the status check and main AI were used, with the status check session happening when no actions were allowed, and the main occurring when more were permitted. The activate session occurred when an entity was activated.
[20] also used an intelligent agent, but in tandem with the commercial game software Unreal Tournament (as in [7] and [8]). In Mimesis the tournament server provided low-level environment access, processing, management, and control, while the project handled high-level interaction.
This was integrated using a socket-based communications module which contacted the tourna-
ment server. It also featured a central controller process, and various intelligent support modules.
[10] is another example of dialogue emulation, but in the context of an NSF training application. The goal was a life-like embodiment of a real person, in this case Dr. Alex Schwarzkopf.
Accompanying animation was handled in real-time using the free graphics package OGRE.
The model incorporated the use of non-verbal cues (expressions) and user tracking to facilitate
real-world interaction, attempting to give users a life-like sense of immediacy. The motions were
gathered via motion tracking performed with Dr. Schwarzkopf. It also featured a knowledge-driven backend to respond intelligently to questions asked by users. This worked in combination
with a decision support system and speech recognition for audio-to-text conversion. The grammars were generated via a relational database. Finally, a context-based dialog manager was implemented using conversational goals as end states that the agent desires to reach. Output was delivered with consideration for realistic and effective language modeling.

Rule-based AI is a potential avenue
for dialogue in interactive visual applications, although the computational needs for games may
necessitate approaches such as anytime methods to be effective. Lifelike characters can be rep-
resented by mapping OCEAN personality traits onto visualizations. Hopefully these techniques
will pave the way for more open-ended speech interactions in future applications.
References
[1] Allwood, J. Reasons for management in dialog. In R Beun, M Baker, and M Reiner, editors,
Dialogue and Instruction, pages 241–50. Springer-Verlag, 1995. http://www.ling.gu.se/
~jens/publications/docs051-075/074.pdf.
[2] Badler, N, Allbeck, J, Zhao, L, and Byun, M. Representing and parameterizing agent be-
haviors. In Proceedings Computer Animation, pages 133–143. 2002. http://citeseerx.
ist.psu.edu/viewdoc/download?doi=10.1.1.19.5823&rep=rep1&type=pdf.
[3] Bakkes, S, Spronck, P, and van den Herik, J. Rapid Adaptation of Video Game AI. In
P Hingston and L Barone, editors, Proceedings of the IEEE 2008 Symposium on Computa-
tional Intelligence and Games (CIG’08), pages 79–86. http://sander.landofsand.com/
publications/CIG08Bakkes.pdf.
[4] Ballinger, CA, Turner, DA, and Concepcion, AI. Mythic: Artificial Intelligence Design in a
Multiplayer Online Role Playing Game. California State University San Bernardino, Dept.
of Computer Science and Engineering, 2010. http://www.csci.csusb.edu/turner/pubs/
ballinger_2010.pdf.
[5] Brennan, S. Processes that Shape Conversation and their Implications for Computational
Linguistics. In Proceedings, 38th Annual Meeting of the ACL, page 8. Hong Kong, 2000.
http://acl.ldc.upenn.edu/P/P00/P00-1001.pdf.
[6] Brennan, S and Hulteen, E. Interaction and feedback in a spoken language system: a
theoretical framework. Knowledge-Based Systems, 8:143–151, 1995. http://citeseerx.
ist.psu.edu/viewdoc/download?doi=10.1.1.3.3421&rep=rep1&type=pdf.
[7] Cavazza, M, Charles, F, and Mead, S. Character-based interactive storytelling. IEEE
Intelligent Systems Special issue on AI in Interactive Entertainment, pages 17–24, 2002.
http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=1024747.
[8] Charles, F and Cavazza, M. Exploring the scalability of character-based storytelling. In
Proceedings ACM Joint conference on autonomous agents and multi-agent systems, pages
870–877. New York, 2004. http://portal.acm.org/ft_gateway.cfm?id=1018839&type=
pdf&coll=GUIDE&dl=GUIDE&CFID=104652438&CFTOKEN=77550917.
[9] Clanton, C. An interpreted demonstration of computer game design. In Proceedings
of the conference on CHI 98 summary: human factors in computing systems, pages
1–2. http://portal.acm.org/ft_gateway.cfm?id=286499&type=pdf&coll=GUIDE&dl=
GUIDE&CFID=101949120&CFTOKEN=87884749.
[10] DeMara, R, Gonzalez, A, et al. Towards Interactive Training with an Avatar-based Human-
Computer Interface. In The Interservice/Industry Training, Simulation and Education Con-
ference, pages 1–10. 2008. http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.
1.1.150.4992&rep=rep1&type=pdf.
[11] Gandhe, S, DeVault, D, Roque, A, Martinovski, B, Artstein, R, Leuski, A, Gerten, J, and
Traum, D. From Domain Specification to Virtual Humans: An integrated approach to
authoring tactical questioning characters. In Proceedings of Interspeech. Brisbane, 2008.
http://people.ict.usc.edu/~traum/Papers/tacq08.pdf.
[12] Grosz, BJ and Sidner, C. Attention, Intentions, and the Structure of Discourse. Computational Linguistics, 12(3):175–204, 1986. http://acl.ldc.upenn.edu/J/J86/J86-3001.
pdf.
[13] Gustafson, J, Boye, J, Fredriksson, M, Johanneson, L, and Königsmann, J. Providing
Computer Game Characters with Conversational Abilities. In Proceedings of Intelligent
Virtual Agent (IVA05), pages 37–51. Kos, 2005. http://www.springerlink.com/content/
4nmvy3c5qa3r6709/.
[14] Hawes, N. An Anytime Planning Agent for Computer Game Worlds. In Proceedings of the
Workshop on Agents in Computer Games at the 3rd International Conference on Computers
and Games, pages 1–14. 2002. http://www.cs.bham.ac.uk/research/projects/cogaff/
nick.hawes.cg02.pdf.
[15] Hayes-Roth, B. Character-based Interactive Story Systems. IEEE Intelligent Systems and
Their Applications, 13(6):12–15, 1998. http://ieeexplore.ieee.org/stamp/stamp.jsp?
tp=&arnumber=735997.
[16] Jorgenson, AH. Marrying HCI/Usability and computer games: a preliminary look. In
Proceedings of NordiCHI ’04, pages 393–396. 2004. http://portal.acm.org/ft_gateway.
cfm?id=1028078&type=pdf&coll=GUIDE&dl=GUIDE&CFID=101949474&CFTOKEN=85886675.
[17] Lankoski, P and Heliö, S. Approaches to Computer Game Design – Characters
and Conflict. In CGDC Conference Proceedings, pages 311–321. Tampere Univer-
sity Press, 2002. http://share.auditory.ru/2006/Ivan.Ignatyev/DiGRA/Approaches%
20to%20Computer%20Game%20Design_Characters%20and%20Confl%20ict.pdf.
[18] Taylor, M, Gresty, D, and Baskett, M. Computer game-flow design. ACM Computers in En-
tertainment (CIE), 4(1), 2006. http://lsc.fie.umich.mx/~sadit/Clases/proyectospp/
p3a-taylor.pdf.
[19] Traum, D. Issues in multi-party dialogues. In F Dignum, editor, Advances in agent
communication, Lecture notes in artificial intelligence 2922, pages 201–211. Springer-Verlag,
2004. http://people.ict.usc.edu/~traum/Papers/multi.pdf.
[20] Young, R. An Overview of the Mimesis Architecture: Integrating Intelligent Narrative Control into an Existing Gaming Environment. In Working notes of the AAAI spring sym-
posium on Artificial intelligence and interactive entertainment. 2001. http://citeseerx.
ist.psu.edu/viewdoc/download?doi=10.1.1.16.5529&rep=rep1&type=pdf.