
Biologically Inspired Cognitive Architectures (2012) 1, 100–107

Available at www.sciencedirect.com

journal homepage: www.elsevier.com/locate/bica

INVITED ARTICLE

On a roadmap for the BICA Challenge


Alexei V. Samsonovich

Krasnow Institute for Advanced Study, George Mason University, 4400 University Drive MS 2A1, Fairfax, VA 22030-4444, USA

E-mail addresses: alexei@bicasociety.org, samsonovich@cox.net

Received 26 May 2012; accepted 29 May 2012

http://dx.doi.org/10.1016/j.bica.2012.05.002

KEYWORDS: Human-level AI; Cognitive architectures; Turing test; Newell list; Critical mass

Abstract: The BICA Challenge is the challenge to create a general-purpose, real-life computational equivalent of the human mind using an approach based on biologically inspired cognitive architectures (BICA). To solve it, we need to understand at a computational level how natural intelligent systems develop their cognitive, metacognitive and learning functions. The solution is expected to lead us to a breakthrough to intelligent agents integrated into the human society as its members. This outcome has the potential to solve many problems of the modern world. The article starts from the roadmap proposed by Dr. James Albus for a national program unifying artificial intelligence, neuroscience and cognitive science. The BICA Challenge is introduced in this context as a waypoint on the expanded roadmap. The gap between the state of the art and the challenge demands is analyzed. Specific problems and barriers are identified, an approach to overcoming them is proposed, and an ultimate practical criterion for success is formulated. It is estimated that the BICA Challenge can be solved within a decade.

© 2012 Elsevier B.V. All rights reserved.

Introduction: The BICA Challenge

The goal of the field of artificial intelligence at its onset was to create a general-purpose computational equivalent of human intelligence. The same challenge remains the greatest challenge today, while the criteria for reaching it continue to be analyzed, criticized and re-defined (Adams et al., 2011; Anderson & Lebiere, 2003; Korukonda, 2003; McCarthy, Minsky, Rochester, & Shannon, 1955; Newell, 1990; Turing, 1950). On the one hand, the computational power available today is approaching, or already at, the level of the human brain, if measured by the number of principal computing elements and operations per second. On the other hand, computers, no matter how big or how small, are still generally clueless and require human assistance at every step beyond programmed behavior. While many narrow superhuman capabilities have been achieved in artificial intelligence, the general compatibility of artificial agents with human society remains well below the human level.

A powerful new approach has emerged recently in the fields of cognitive modeling and artificial intelligence: biologically inspired cognitive architectures (BICA; e.g., Samsonovich, 2010a). A cognitive architecture is a computational framework for designing intelligent agents (Gray, 2007; Newell, 1990), and we call it biologically inspired when it aims to reproduce functional properties of the human mind (of course, cognitive architectures can be called biologically inspired for other reasons as well: e.g., when they mimic structures, functions and/or dynamics of the brain).
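As a rough illustration of the claim above that computing hardware is approaching brain scale, consider a back-of-envelope estimate. The sketch below is illustrative only: the figures are commonly cited order-of-magnitude estimates, not data from this article.

```python
# Back-of-envelope comparison of brain-scale processing with a 2012-era
# supercomputer. All numbers are commonly cited order-of-magnitude
# estimates (illustrative assumptions, not figures from this article).

NEURONS = 1e11              # ~10^11 neurons in the human brain
SYNAPSES_PER_NEURON = 1e4   # ~10^4 synapses per neuron
MEAN_FIRING_RATE_HZ = 10    # ~10 spikes/s average firing rate

# Treat each synaptic transmission event as one elementary operation.
synaptic_ops_per_second = NEURONS * SYNAPSES_PER_NEURON * MEAN_FIRING_RATE_HZ

SUPERCOMPUTER_FLOPS = 1e16  # ~10 petaFLOPS (e.g., the 2011-2012 K computer)

print(f"Brain:         ~{synaptic_ops_per_second:.0e} synaptic events/s")
print(f"Supercomputer: ~{SUPERCOMPUTER_FLOPS:.0e} FLOPS")
print(f"Ratio:          {SUPERCOMPUTER_FLOPS / synaptic_ops_per_second:.1f}x")
```

On these crude assumptions the two numbers coincide in order of magnitude, which is the sense in which computational power can be said to have reached the level of the brain.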

The new element that the concept of BICA adds to the original artificial intelligence challenge is the focus on internal mechanisms of the human mind, which extends the challenge beyond the field of artificial intelligence. It is also believed that BICA is the key to its solution, hence the name BICA Challenge (Chella, Lebiere, Noelle, & Samsonovich, 2011).

The BICA Challenge can be formulated as the challenge to create a general-purpose, real-life computational equivalent of the human mind. To solve it, we need to understand at a computational level how natural intelligent systems develop their cognitive, metacognitive and learning functions. A solution to the challenge would be an intelligent agent (a cognitive architecture) that implements a functional replica of the essential human mind, and therefore can be perceived by humans as a mind of a human kind, can learn from humans as a student, can become a useful member of a team as a collaborator, and, more generally, can be a member of the human society who enjoys human rights and takes on human responsibilities. With its links to social, legal, ethical, philosophical, engineering, technological, and other aspects, the BICA Challenge emerges as a new all-scientific mainstream challenge of our time that deserves a multi-national program, as explained below.

The Jim Albus legacy

On April 17, 2011, the BICA community lost its great leader: Dr. James S. Albus. His work on the computational theory of mind (Albus, 2008; Albus & Meystel, 2002) continues to guide us in our everyday scientific research. Among his many life accomplishments is the Decade of the Mind (DoM) initiative (Albus et al., 2007): a proposal for a multibillion-dollar national program with the potential to create human-level intelligent agents broadly integrated into the human society within a decade, altering the look and feel of human civilization as we know it.

Why do we need human-level intelligent agents? The very possibility may sound frightening (Kurzweil, 2005; Vinge, 2008), but what many people are missing is its positive value. Remarkably, the DoM initiative was proposed by Jim Albus at the time of a global economic crisis. In the historical moment we are living through, many countries are experiencing their biggest social and economic crisis since World War II. People are becoming poor and desperate; the number of suicides is growing every day; fights for basic rights, revolutionary movements and wars are breaking out in many nations. In this situation we cannot limit our attention to sci-fi and mythical threats associated with future artificial intelligence. Instead, we should take seriously the opportunity to create new artificial intelligence as a universal remedy against imminent threats and a radical solution to many problems of society. Jim Albus had a great social conscience regarding this point. He wrote about a future world where everybody enjoys prosperity and economic justice (Albus, 2011). He argued that human-level artificial intelligence can help us achieve this dream, and explained how this would happen. This is why we need the DoM and related initiatives, and this is why the overarching goal and a roadmap to it need to be studied and worked out. Those who find this argument a poor motivation are referred to the works of Jim Albus (2008, 2011). This article is not about motivation: it addresses the questions of what and how.

Jim Albus provided the ideological basis for many revolutionary DARPA programs, including the BICA Program (2005-2006). Later he became an active Founding Member of the BICA Society. In 2010, at the annual BICA Society meeting, he led a discussion on a roadmap to human-level artificial intelligence, following his keynote (the video is available at http://www.bicasociety.org/videos). The particular version of his draft of the roadmap to human-level artificial intelligence (Table 1) is extracted from his slides. The left column is the "Draft of the Plan" proposed by Jim Albus, and the right column expands this plan into specific goals.

The overarching goal put forward by Jim Albus is to make a breakthrough to human-level artificial intelligence within a decade, using an approach centered on the human mind. Based on Table 1, the five components of the plan proposed by Jim Albus can be characterized as: (i) reverse-engineering of the brain, (ii) a scientific theory of mind, (iii) a ladder of practical steps to the integration of human-level artificial intelligence into society, (iv) metrics and tests for intelligence and biological fidelity, and (v) related social, ethical, legal and philosophical issues.

Jim Albus saw the key to a solution in the development of a computational theory of mind (ii). This theory should allow for an implementation of the functionality of the human mind at a higher level of abstraction than the level of synapses, ion channels, or even neurons. Yet the implementation should be sufficiently detailed to capture the specifics of human cognition. This level of modeling corresponds to the level of BICA. Accordingly, the critical point of Table 1 is 1b: a minimal scientific theory of mind implemented as a BICA that would suffice for a breakthrough to human-level intelligent agents. Therefore, in this interpretation, the BICA Challenge is not the ultimate goal by itself, but a critical waypoint on the roadmap to human-level artificial intelligence.

Expanding the roadmap

The purpose of this section is to provide a further understanding of the roadmap, together with the BICA Challenge and the ultimate overarching goal in its context. A possible view of the essential part of the roadmap is represented in Fig. 1. Elements of Fig. 1 are listed alphabetically with explanations below (the list extends the figure caption and is not intended as a glossary of terms).

Accept as a partner: Human team members need to be ready to accept a virtual agent as an equal team member, which includes establishing relations of trust and mutual understanding, sharing responsibilities and relying on the agent's performance.

Agent side: Human-readiness requirements for the human-compatible BICA agent.

Attention, awareness, intentionality: In order to be believable in behavior, the agent needs to demonstrate sensible voluntary attention control, awareness of the situation, and self-consistent, intentional voluntary behavior.

Table 1 The roadmap proposed by Jim Albus (Plan items with their corresponding Goals).

1. Theory and fundamental science
(a) To develop theoretical models of the brain that describe the inputs and outputs of all of the major neural modules and systems of the brain, and specify the functional transformations that take place therein.
(b) To develop computational models of the mind that generate the functional equivalent of the phenomena of perception, cognition, intention, imagination, memory, learning, feeling, emotion, and behaviors of manipulation, locomotion, and language.

2. Experimental test environment
(a) To develop experimental models of the brain that mimic the inputs and outputs of functional modules in the brain, and mimic the functional transformations that take place therein.
(b) To demonstrate the performance of brain models in controlling systems applied to real-world tasks of locomotion, manipulation, imagination, reasoning, and natural language conversation.

3. Practical applications
To apply intelligent systems technology to social and economic problems such as:
(a) Manufacturing: autos, appliances, planes, drugs, textiles
(b) Construction: roads, bridges, homes, businesses, factories
(c) Transportation: trucks, cars, buses, planes, trains
(d) Agriculture: planting, harvesting, tending, aquaculture
(e) Mining and drilling: digging, hauling, undersea operations
(f) Recycling and environmental restoration
(g) Education and entertainment
(h) Aids to the handicapped and elderly
(i) Medical and nursing care

4. Performance metrics
(a) To develop metrics and methods for verifying, validating, and evaluating models of mind and brain.
(b) To develop metrics and methods for measuring the performance of intelligent machines and systems.

5. Social, ethical, and legal issues
(a) To confront the social, ethical, legal, and philosophical issues related to investigating the human mind, including the implications for mental health.
(b) To provide a forum for public debate of the potential costs, risks, and benefits of understanding the mind, including possible religious and civil liberties objections.
(c) To address issues of unemployment, economic growth, and environmental implications of intelligent machines.

Believable behavior: The agent needs to be believable in its behavior (behave like a conscious being), transparent and trustworthy. Specifically, for this purpose the agent should demonstrate voluntary control of attention and action, motivation, intentionality, goal-setting abilities, emotional intelligence, and self-regulation.

BICA Challenge: A human-level artificial team member implemented as a general-purpose computational equivalent of the human mind (e.g., as a virtual agent).

Co-presence: The agent should induce in human partners a sense of its co-presence, or third-person presence; in other words, an intuitive belief that the agent is somebody intelligent who is present here.

Creativity: The agent must be capable of design, generation and evaluation of new concepts, rules, strategies, hypotheses, behaviors, goals, etc.

Domain expert: Expert-level intelligence in a specific domain. The BICA agent needs to provide intelligent capabilities at an expert level in its domain of expertise.

Emotional intelligence: The agent needs to be able to recognize emotional motivations in human behavior, understand human emotions, reason in terms of emotions, and generate appropriate behavioral emotional reactions at a human level, involving basic and complex/social emotions.

Episodic memory: The agent needs to have advanced, human-level episodic memory, both retrospective (memory for past experiences) and prospective (memory for plans, goals, dreams, values, imagined possibilities, etc.).

General knowledge: The agent needs to have human-level general knowledge of facts about the world, including commonsense knowledge.

Goal setting and intentionality: The agent needs to possess teleological capabilities, so as to be able to develop and select new personal goals, to form a personal system of values, and to generate plans and intentions consistent with its goals and values.

Human side: Agent-readiness conditions that must be met in the human society for the BICA solution to be achieved.

Human-compatible: The BICA agent needs to be compatible with humans as a team member and as a society member.

Human-extending: The agents must be active carriers of the human mentality, human spirit, human values, human culture, and, in a remote future, human minds themselves.

Imagery: The agent must be capable of mental simulations of possible worlds and scenarios using multiple modalities.

Learner critical mass: The agent should possess the critical mass of a universal human-level learner (Samsonovich, 2007, 2010b), meaning that it should be able to learn what a human student can learn under equivalent conditions from a given new domain.

Fig. 1. A possible view of a partially expanded, extended roadmap. Elements are explained in the list in the text. Fat ovals show critical waypoints. Ovals filled with dots represent partially available capabilities. Arrows represent dependencies. The BICA Challenge is an intermediate critical waypoint on the roadmap.

Metacognitive abilities: The agent needs to possess human-level metacognitive abilities, including episodic memory, Theory of Mind, self-regulation, complex and social emotions, the ability to generate goals, etc.

Self-regulation: This term is understood here in the sense of educational science (Zimmerman, 2008), where it applies to self-regulated learning, self-regulated problem solving, etc. Elements of self-regulation include self-awareness, self-analysis, self-rewarding, self-monitoring, self-instructing, self-motivation, self-observation, self-control, self-judgment, self-reaction, self-imagery, self-experimentation, self-attribution of outcomes, self-adaptation, etc.

Self-sustainable: The new society component as a whole (including humans in the loop) should be able to take care of its own persistence and growth, including development, production, and market, in a way that ensures an open-ended scenario of its evolution and integration into the society.

Sensemaking: The agent should demonstrate commonsense awareness of the situation, which involves the ability to put together disparate, multimodal, possibly noisy and limited sensory inputs and general knowledge to form a clear understanding of the gist and goals of the current situation.

Social and communicational capabilities: The agent needs to be socially intelligent at a human level, friendly and cooperative with humans, and able to communicate freely using natural language and other means (e.g., gestures) at a level that would allow it to establish relationships of mutual understanding and trust.

Social basis: The society needs to provide a basis for the new hybrid teams (humans plus virtual agents), including the necessary policies for virtual agents as team members, and funds.

Society component: A part of the human society represented by artificial intelligent agents, understood as the ultimate overarching goal (discussed in the last section).

Theory of mind: The agent needs to possess an advanced, human-level Theory of Mind, i.e., be able to simulate human minds in various situations. This notion extends beyond the mere representation of states of beliefs, desires and goals of others.

The BICA Challenge (Fig. 1) is, in essence, interpretable as the challenge to have virtual agents accepted by society as its useful members and treated as equals with human members of society, initially within limited domains, teams and environments. At the point when the challenge is solved, large investments and rapid progress in the field can be expected, resulting in rapid development of the new component of the society.
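The dependency structure just described (and drawn in Fig. 1) can be made concrete as a small directed graph. The sketch below is purely illustrative: the capability names come from the glossary above, but the particular set of edges is an assumed simplification, not a reproduction of the figure.

```python
# A minimal sketch of the roadmap as a dependency graph. Capability names
# come from the Fig. 1 glossary in the text; the edges shown here are an
# assumed, simplified subset, not an exact reproduction of the figure.

ROADMAP = {
    # capability: capabilities it depends on
    "Episodic memory": [],
    "Theory of mind": [],
    "Emotional intelligence": [],
    "Metacognitive abilities": ["Episodic memory", "Theory of mind"],
    "Believable behavior": ["Metacognitive abilities", "Emotional intelligence"],
    "Learner critical mass": ["Metacognitive abilities"],
    "BICA Challenge": ["Believable behavior", "Learner critical mass"],
    "Society component": ["BICA Challenge"],
}

def buildable_order(graph):
    """Return capabilities in a dependency-compatible order (a topological
    sort), so that each capability is listed after everything it depends on."""
    order, done = [], set()
    def visit(node, stack=()):
        if node in done:
            return
        if node in stack:
            raise ValueError(f"cyclic dependency at {node}")
        for dep in graph[node]:
            visit(dep, stack + (node,))
        done.add(node)
        order.append(node)
    for node in graph:
        visit(node)
    return order

print(" -> ".join(buildable_order(ROADMAP)))
```

Read this way, the BICA Challenge is simply the last waypoint that must be reached before the Society Component becomes buildable, which is exactly its role on the roadmap.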

Examples of agent domains of expertise may include, in addition to Table 1 item 3, the following: personal computer assistant, computer administrator, IT specialist, web master, computer programmer, personal secretary, accountant, graphic designer, computer modeler, personal tutor, videogame partner, synthetic character (virtual actor or talk show host), storyteller, virtual babysitter, business manager, nightwatch, virtual correspondent, research assistant, research advisor, teaching assistant, grader, interviewer, virtual salesman, travel agent, receptionist, architect, artist, etc. In other words, virtual agents should be able to do almost everything that people do on computers, and more.

Elements of the challenge include, on the agent side, a minimal agency that is sufficient to provide a solution to the challenge. In other words, the agents must be human-ready, which implies achievement of the following conditions.

(a) Critical mass of a universal human-level learner: i.e., the ability to grow naturally to a human level of intelligence in virtually any practically interesting domain of human intellectual expertise. Subgoals include: (i) identifying the critical mass; (ii) designing a critical mass; and (iii) implementing a critical mass.

(b) A sense of co-presence associated with the agent, resulting from (i) human-like common sense; (ii) believable autonomous behavior; (iii) human-like metacognitive abilities, including episodic memory; (iv) a human-like value system and personality (not shown in Fig. 1).

(c) Human compatibility in collaborative work (the ability to work productively with human partners), which includes (a) and (b) and implies in particular that the agent (i) is capable of independent decision making, while being subordinate to humans; (ii) is able to communicate using natural language as well as other means; and (iii) can respond on the human time scale.

On the human side, the challenge includes reaching, e.g., the following conditions.

(a) The society takes the BICA Challenge seriously, as a top-priority goal, with an understanding of the value of solving the challenge. The society needs to create and fund a major (inter)national program to solve the challenge.

(b) Human team members are ready to accept new artifacts as teammates treated as equals with human team members: (i) overcoming social and psychological barriers; (ii) giving legal rights and responsibilities to new artifacts; (iii) creating new policies and initiatives (financial, infrastructural, administrative) for hybrid (human-artifact) units at all social levels, to support the emergence, development and proliferation of hybrid units in military, business, industrial, educational and research institutions, etc.

There are currently a number of research programs around the world that explicitly or implicitly address the BICA Challenge. Yet, despite impressive successes and growing interest in BICA, wide gaps continue to separate different approaches from each other and from solutions found in biology. Disjointed scientific communities may speak different languages and pursue independent, limited goals. In this situation, broadly advertised public discussions of the overarching BICA Challenge should play an integrating role. The BICA Society is planning to continue these discussions in Videopanels (http://www.bicasociety.org/vp) and at conference venues.

The list of recent and ongoing mainstream research projects and initiatives in computer science and technology, cognitive science and neuroscience that directly or indirectly address the BICA Challenge is quite long and includes the following examples:

• The Blue Brain Project
• The Human Brain Project
• US Army Research Lab Robotics Collaborative Technology Alliance
• DARPA SyNAPSE
• DARPA Bootstrapped Learning
• IARPA ICARUS
• EU FACETS
• EU POETICON
• EU Cognitive Systems and Robotics (ICT) Programme
• The "beyond traditional CMOS" computing challenge
• US DoM Initiative
• Convergent Science Network
• EUCOG
• Robot Companions for Citizens
• Brains in Silicon
• Neuro-Bio-Inspired Systems (NBIS)
• EU HUMANOBS

Many necessary lower-level capabilities are already partially available (ovals filled with dots in Fig. 1, although this interpretation may be ambiguous), and others should become available soon. Nevertheless, their current implementations may not satisfy the needs of the challenge. A few examples illustrating the gap between the state of the art and the challenge demands are considered in the next section.

Challenge criteria vs. the state of the art

A general problem with neuroscience and cognitive psychology in the context of the BICA Challenge consists in the limitations of their theoretical paradigms. E.g., system-level computational neuroscience has been stuck at the level of traditional connectionism for many decades. This level makes it very difficult to describe the sophisticated symbolic information processing, at a higher level of abstraction, that is typical of human cognition. While specific studies identify neurons that may represent higher abstract concepts (e.g., Rees, Kreiman, & Koch, 2002) and modeling approaches tackle separate elements, it has not been possible so far to build a computational model of a complete agent (e.g., a complete virtual rat) out of these elements described at the neuronal level. At the same time, existing research programs are currently pursuing this sort of goal. It is also unclear whether this solution would be necessary, or whether a much simpler solution can be found.

A similar connectionist paradigm has also dominated a large part of cognitive psychology for decades, resulting in limitations in the field.

As a consequence, experimental and modeling studies address isolated phenomena or limited aspects like cognitive biases, which together do not provide a complete picture of information processing in the brain. Cognitive modeling goes beyond this level in building complete cognitive architectures (Gray, 2007), yet in most cases their implementations are limited to basic cognitive functions and/or special cases and toy problems. Extensions of the mainstream cognitive architectures that include features like episodic memory, emotions, metacognition, self-awareness, etc. appear to be limited, as illustrated by the following examples.

Soar (Laird, Newell, & Rosenbloom, 1987) and ACT-R (Anderson & Lebiere, 1998) are the two most widely known and used cognitive architectures. There have recently been numerous works on the implementation and study of higher cognitive functions in Soar and ACT-R, including emotional cognition, episodic memory, imagery, metacognition, etc. It is relatively easy to implement some aspect of a higher cognitive feature in an architecture, but this does not necessarily solve the problem. For example, the recently extended version of Soar (Laird, 2008; Marinier, Laird, & Lewis, 2009) implements an appraisal theory of Scherer in its Appraisal Detector. A limitation here and in related works is that appraisal is evaluated as one global characteristic of the situation of the agent: the model does not support several active appraisals at the same time. In contrast, in human cognition, emotional characteristics are attributed to virtually all individual elements of awareness at all levels, and these are processed in parallel. Other limitations are the low level (missing complex emotions) and the limited usage (mostly for reinforcement learning) of the implemented appraisals. While these are impressive works, there is still a long way to go from here to the BICA Challenge.
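The contrast just drawn can be illustrated with a small sketch: instead of one global appraisal of the agent's situation, each element of awareness carries its own appraisal vector, and all of them remain active in parallel. The representation below is a hypothetical illustration (the dimension names loosely follow Scherer's appraisal checks), not the actual Soar design.

```python
# Illustrative sketch: every element of awareness carries its own appraisal
# vector, and all of them stay active in parallel. The dimension names loosely
# follow Scherer's appraisal checks; the data structure is an assumption made
# for illustration, not the Soar Appraisal Detector design.

from dataclasses import dataclass, field

@dataclass
class Appraisal:
    novelty: float = 0.0             # how unexpected the element is
    pleasantness: float = 0.0        # intrinsic (un)pleasantness
    goal_conduciveness: float = 0.0  # helps or hinders current goals
    coping_potential: float = 0.0    # how well the agent can deal with it

@dataclass
class AwarenessElement:
    label: str
    appraisal: Appraisal = field(default_factory=Appraisal)

# Several simultaneously active elements, each appraised separately:
awareness = [
    AwarenessElement("teammate's request", Appraisal(0.2, 0.5, 0.8, 0.9)),
    AwarenessElement("low battery warning", Appraisal(0.7, -0.8, -0.9, 0.4)),
    AwarenessElement("open door", Appraisal(0.1, 0.0, 0.3, 1.0)),
]

# A single global appraisal (the criticized scheme) collapses all of this
# into one summary value, e.g. a mean over elements, and loses the detail:
global_pleasantness = sum(e.appraisal.pleasantness for e in awareness) / len(awareness)
print(f"global pleasantness: {global_pleasantness:+.2f}  (detail lost)")
for e in awareness:
    print(f"{e.label}: {e.appraisal}")
```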
The same work (Laird, 2008) describes an implementation of episodic memory in Soar as time-stamped stored snapshots of working memory contents, which appears to be "easy" from the author's point of view. Episodic memory is not easy in biology, where it combines phenomena of (re)consolidation, multiple trace formation (Nadel & Moscovitch, 1997), prospective memory creation, traces of imagery, Theory-of-Mind thinking and the like (Nadel & Hardt, 2011). While the book of Tulving (1983) is cited, in Tulving's view episodic memory involves the notion of the self in the past, and its retrieval is compared to a mental time travel of the self. Unfortunately, there is no concept of self in Soar or in ACT-R.

One piece of evidence of episodic memory in humans is that in many cases a person would not say or do the same thing twice: e.g., would not take a medication 5 min after it was taken. Also, human episodic memory is by definition always created by one-time learning from a single episode of experience. In contrast, the usage of episodic memory in Soar (Gorski & Laird, 2011; Laird, 2008) and in related works occurs in paradigms in which a repetition of a previously successful action is considered a success of learning, and successful learning typically takes not one, but hundreds or thousands of experienced episodes. This usage blurs the boundary between episodic and semantic memory in computer science. At the same time, episodic memory in ACT-R is not available even at this level (Gorski & Laird, 2011). In general, episodic memory in computer science is frequently confused with reference memory, which in psychology belongs to the category of semantic memory (Nadel & Hardt, 2011): this is an example of communities speaking different languages.
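A toy sketch can make the distinction concrete. Below, episodic memory is modeled as time-stamped snapshots of working memory, stored by one-shot learning and queried before acting, so that the agent would not, e.g., take a medication twice. This is a deliberate simplification for illustration, not Soar's actual implementation.

```python
# A toy sketch of episodic memory as time-stamped snapshots of working
# memory, in the spirit of the description above (a simplification, not
# Soar's actual mechanism). It shows the two human properties the text
# emphasizes: storage is one-shot (a single episode suffices), and
# retrieval can prevent repeating an action, such as re-taking medication.

import time

class EpisodicMemory:
    def __init__(self):
        self.episodes = []  # list of (timestamp, snapshot) pairs

    def record(self, working_memory: dict):
        # One-shot learning: a single experienced episode is stored as-is.
        self.episodes.append((time.time(), dict(working_memory)))

    def last_occurrence(self, action: str):
        # Retrieve the most recent episode containing the given action.
        for timestamp, snapshot in reversed(self.episodes):
            if snapshot.get("action") == action:
                return timestamp
        return None

memory = EpisodicMemory()
memory.record({"action": "take medication", "place": "kitchen"})

# Five minutes later, the agent checks its own past before acting:
t = memory.last_occurrence("take medication")
if t is not None and time.time() - t < 3600:  # taken within the last hour?
    print("Already taken; refusing to repeat the action.")
```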
The big problem with mainstream cognitive architectures is, however, not in these details and limitations, which in principle one can fix. The problem is that it takes a huge amount of human labor to implement any new feature in these architectures, which ideally, at some point, should be done by the agents themselves. Typically, in cognitive modeling and in artificial intelligence, a system can do only the kind of learning for which it was designed, and it takes more and more sophisticated designs to increase learning capabilities. Learning architectures that once were claimed to be universal learners in fact have severe limitations (e.g., Huffman & Laird, 1995). In contrast, the challenge of the learner critical mass (Samsonovich, 2007, 2010b) requires the opposite: finding the simplest possible design that enables virtually unlimited learning without re-programming. It is possible to imagine that the nearest practical solution of this sort will be found as a humongous structure in Soar or ACT-R. More likely, however, alternative principles will be utilized to achieve general human-level learning capabilities with a relatively simple design.

Similarly, models of complex and social emotions that get implemented computationally are currently designed manually (e.g., Steunebrink, Dastani, & Meyer, 2007). It seems that a true explanatory theory of emotions should provide a natural mechanism for the emergence of complex emotions starting from a minimal emotional embryo, instead of expecting modelers to design representations of complex emotions manually in each special case, using their intuition or available phenomenological models. Here one simple and universal solution should be valued more than hundreds of labor-consuming, detailed case-specific model implementations.

The situation looks similar in the modeling of various forms of metacognition (Cox & Raja, 2011), as well as for many other elements of Fig. 1. The bottom line here is that there appears to be a general barrier in virtually every dimension of the roadmap when researchers attempt to move toward the BICA Challenge from more traditional research paradigms. Its nature was captured a long time ago by John von Neumann: "If you will tell me precisely what it is that a machine cannot do, then I can always make a machine which will do just that." An efficient traditional special-case solution will always be found for a given problem, and yet the problem is to find one general BICA solution for all cases. This problem will never be solved if, instead of trying to solve it, we continue looking for local minima: the most efficient traditional case-based solutions, added together step by step. Instead of incremental accumulation and integration of new capabilities, which has been a tradition in artificial intelligence for decades, a simple and universal model is needed that can enable virtually unlimited, quasi-autonomous cognitive growth: possibly with human instruction and scaffolding, but without the need for programmer intervention, and hopefully based on BICA rather than on genetic algorithms. It appears that this step can be accomplished if we take the logic of BICA to a meta-level and allow the architecture to construct itself (Samsonovich, 2009).

Concluding remarks: An ultimate criterion for success

We live in a very special time, when people can create what they could only dream of creating in the past (Nietzsche, 1885/1993, Zarathustra's Prologue) and what might be a useless exercise to re-create in the future. This is the possibility to create something similar to us, something that can extend the human mind and spirit beyond the human body and beyond the human imagination. And yet a problem is to understand precisely what it is that we want to create, before we may know how. Many definitions, tests, concepts and criteria have been proposed through the decades and continue to be designed today (Adams et al., 2011; Anderson & Lebiere, 2003; McCarthy et al., 1955; Newell, 1990; Turing, 1950). The oldest of them remains the most popular, despite a growing consensus that it is not good (Korukonda, 2003). Does the idea of a big breakthrough in artificial intelligence make sense at all? How long before it may happen? Here is one possible view.

The ultimate, overarching goal, to which the BICA Challenge is only a step, was described above as a situation in the world when an essential component of the human society will emerge, represented by human-like intelligent agents integrated into human teams. This will be not a component of the infrastructure of the human civilization, and not a new category of tools and technology, because these agents will have the rights and responsibilities of society members. At the same time, they should be a subordinate extension of the human society, not an alternative to us. This extension will help people to solve many problems of the human civilization, and in general will make our world a better place for us. Created initially as virtual intelligent agents, they will at some point acquire physical embodiment. This goal includes three main conditions. The intelligent agents must be: (1) human-compatible and useful as society members, (2) self-sustainable, and (3) human-extending. These conditions are explained again below.

(1) Human-compatible and useful as society members means, on the one hand, human-level generally intelligent, capable of communication and learning, and generally useful as experts and workers to humans, and on the other hand, believable, transparent and trustworthy, as judged by humans.

(2) Self-sustainable means that the new component as a whole (including humans in the loop) should be able to take care of its own persistence and growth, including development, production, and market, in a way that ensures an open-ended scenario of its evolution and integration into the society.

(3) Human-extending means that the agents must be active carriers of the human mentality, human spirit, human values, human culture, and possibly, in a remote future, human minds themselves.

It remains to say that these three conditions together can be taken, instead of the Turing test, as a practical criterion for achieving true human-level artificial intelligence. In other words, they constitute the ultimate goal for our imaginable future and the ultimate practical success criterion at the same time. Many science-fiction movies and related modern writings frame future artificial intelligence as a potential enemy of mankind. Based on the above, its anticipated role is quite the opposite: a savior, a remedy against imminent threats, and a radical solution to human problems.

How long would it take? My guess is that the step from the first BICA Challenge solution to the Society Component may be fast: it could take less than a year, and may indeed look like an explosion. What happens next depends on the organization of the process. The first steps through the roadmap (Fig. 1), starting from the bottom, are happening as we speak. Solving the BICA Challenge, even within a limited domain, is the hardest part of the path. Nevertheless, if taken seriously, it may take only about a decade, as estimated by Jim Albus. The bottom line: it can be done.

Meanwhile, the battle of man and machine is unfolding today: not as a battle of physical or virtual bodies, but as a battle of mentalities, ideas and paradigms. People become dependent in their thinking on smart electronic resources and intelligent artifacts, while those resources and artifacts acquire features of the human mind. What we need, and hopefully will find, in future artificial intelligence is not an invisible enemy that tacitly takes control over our minds, but a continuation of the human spirit, ideas and values. There should be no doubt that machines will take a larger and larger part in ordinary human life. What is uncertain is whether those machines will eventually become humans and our friends, or whether humans will turn into machine parts until they become obsolete. This is for us to decide in our lifetime.

Acknowledgements

I am grateful to my friends and colleagues, Directors of the BICA Society Drs. Kamilla R. Johannsdottir and Antonio Chella, for their useful comments on the manuscript that helped me to improve it. My greatest thanks go to Dr. James S. Albus, with whom I had interactions during the last years of his life, primarily in the context of the DARPA BICA Program, the Decade of the Mind initiative, and many initiatives of the BICA Society. I remember Jim as a visionary and a great teacher, whose ideas always inspired me to think about the future of the human civilization; and as a good friend, whose positive energy and creative thinking continuously empowered everybody who was in touch with him, and I was lucky to be among those.

References

Adams, S. S., Arel, I., Bach, J., Coop, R., Furlan, R., Goertzel, B., et al. (2011). Mapping the landscape of human-level artificial general intelligence. AI Magazine, 33(1), 25–42.
Albus, J. S. (2008). Toward a computational theory of mind. Journal of Mind Theory, 1(1), 1–38.
Albus, J. S. (2011). Path to a better world: A plan for prosperity, opportunity, and economic justice. Bloomington, IN: iUniverse. 178 p. ISBN 978-1-4620-3533-5.
Albus, J. S., & Meystel, A. M. (2002). Engineering of mind: An introduction to the science of intelligent systems. John Wiley and Sons.

Albus, J. S., Bekey, G. A., Holland, J. H., Kanwisher, N. G., Krichmar, J. L., Mishkin, M., et al. (2007). A proposal for a decade of the mind initiative. Science, 317(5843), 1321.
Anderson, J. R., & Lebiere, C. (1998). The atomic components of thought. Mahwah, NJ: Lawrence Erlbaum Associates.
Anderson, J. R., & Lebiere, C. (2003). The Newell test for a theory of cognition. Behavioral and Brain Sciences, 26, 587–640.
Chella, A., Lebiere, C., Noelle, D. C., & Samsonovich, A. V. (2011). In K. R. Johannsdottir & A. V. Samsonovich (Eds.), Biologically inspired cognitive architectures 2011: Proceedings of the second annual meeting of the BICA Society. Frontiers in Artificial Intelligence and Applications (Vol. 233, pp. 453–460). Amsterdam, The Netherlands: IOS Press.
Cox, M. T., & Raja, A. (Eds.). (2011). Metareasoning: Thinking about thinking. Cambridge, MA: The MIT Press.
Gorski, N. A., & Laird, J. E. (2011). Learning to use episodic memory. Cognitive Systems Research, 12(2), 144–153.
Gray, W. D. (Ed.). (2007). Integrated models of cognitive systems. Series on Cognitive Models and Architectures. Oxford, UK: Oxford University Press.
Huffman, S. B., & Laird, J. E. (1995). Flexibly instructable agents. Journal of Artificial Intelligence Research, 3, 271–324.
Korukonda, A. R. (2003). Taking stock of Turing test: A review, analysis, and appraisal of issues surrounding thinking machines. International Journal of Human-Computer Studies, 58, 240–257.
Kurzweil, R. (2005). The singularity is near. New York: Penguin Group.
Laird, J. E. (2008). Extending the Soar cognitive architecture. In P. Wang, B. Goertzel, & S. Franklin (Eds.), Artificial general intelligence 2008: Proceedings of the first AGI conference (pp. 224–235). Amsterdam, The Netherlands: IOS Press.
Laird, J. E., Newell, A., & Rosenbloom, P. S. (1987). SOAR: An architecture for general intelligence. Artificial Intelligence, 33, 1–64.
Marinier, R. P., Laird, J. E., & Lewis, R. L. (2009). A computational unification of cognitive behavior and emotion. Cognitive Systems Research, 10, 48–69.
McCarthy, J., Minsky, M. L., Rochester, N., & Shannon, C. E. (1955/2000). A proposal for the Dartmouth summer research project on AI. In R. Chrisley & S. Begeer (Eds.), Artificial intelligence: Critical concepts (Vol. 2, pp. 44–53). London: Routledge.
Nadel, L., & Hardt, O. (2011). Update on memory systems and processes. Neuropsychopharmacology, 36, 251–273.
Nadel, L., & Moscovitch, M. (1997). Memory consolidation, retrograde amnesia and the hippocampal complex. Current Opinion in Neurobiology, 7, 217–227.
Newell, A. (1990). Unified theories of cognition. Cambridge, MA: Harvard University Press.
Nietzsche, F. W. (1885/1993). Thus spake Zarathustra (Also sprach Zarathustra). Translated from German by Thomas Common, revised by H. James Birx. Amherst, NY: Prometheus Books.
Rees, G., Kreiman, G., & Koch, C. (2002). Neural correlates of consciousness in humans. Nature Reviews Neuroscience, 3, 261–270.
Samsonovich, A. V. (2007). Universal learner as an embryo of computational consciousness. In A. Chella & R. Manzotti (Eds.), AI and consciousness: Theoretical foundations and current approaches. Papers from the AAAI fall symposium. AAAI Technical Report FS-07-01 (pp. 129–134). Menlo Park, CA: AAAI Press.
Samsonovich, A. V. (2009). The constructor metacognitive architecture. In A. V. Samsonovich (Ed.), Biologically inspired cognitive architectures II: Papers from the AAAI fall symposium. AAAI Technical Report FS-09-01 (pp. 124–134). Menlo Park, CA: AAAI Press.
Samsonovich, A. V. (2010a). Toward a unified catalog of implemented cognitive architectures (review). In K. R. Johannsdottir, A. V. Samsonovich, B. Goertzel, & A. Chella (Eds.), Biologically inspired cognitive architectures 2010: Proceedings of the first annual meeting of the BICA Society. Frontiers in Artificial Intelligence and Applications (Vol. 221, pp. 195–244). Amsterdam, The Netherlands: IOS Press.
Samsonovich, A. V. (2010b). Toward a large-scale characterization of the learning chain reaction. In S. Ohlsson & R. Catrambone (Eds.), Proceedings of the 32nd annual conference of the Cognitive Science Society (pp. 2308–2313). Austin, TX: Cognitive Science Society.
Steunebrink, B. R., Dastani, M., & Meyer, J.-C. (2007). A logic of emotions for intelligent agents. In Proceedings of the 22nd conference on artificial intelligence (AAAI 2007) (pp. 142–147). Menlo Park, CA: AAAI Press.
Tulving, E. (1983). Elements of episodic memory. Oxford: Clarendon Press.
Turing, A. M. (1950). Computing machinery and intelligence. Mind, 59, 433–460.
Vinge, V. (2008). Signs of the singularity. IEEE Spectrum, 45(6), 76–82.
Zimmerman, B. J. (2008). Investigating self-regulation and motivation: Historical background, methodological developments, and future prospects. American Educational Research Journal, 45(1), 166–183.
