Contents

1 Introduction & Background
6 Sensitivity Analysis
6.1 Comparing Affective States with and without Learning Objectives
6.2 Parameter sensitivity
6.3 Structural sensitivity
8 Conclusion
Instructors face many kinds of uncertainty while preparing to teach a course: Will the students understand the materials presented? Which topics should be presented, and in what order? How will students’ individual backgrounds and interests affect the direction of the course? Can this diversity be leveraged to everyone’s benefit?
Following research on best teaching practices1 , classrooms are increasingly being transformed from places where students listen passively into places of activity and interaction. Teachers face a further challenge with the rise of social media: these tools are presented as a means to increase the degree of interaction among students, both inside and outside the classroom. But how?
Suppose ten students all have a different comment or question about an article posted by the teacher. The experience in this situation would arguably be different than in a class where only a single student shared a reaction to the same article. By studying the behaviour around the creation and re-use of Learning Objects (i.e. anything that a student can use for their learning), we can start to see how to use technology to scaffold the learning environment with supportive activities, suggestions, or chunks of knowledge at just the right time.

1 In [Bain, 2004] [2], an outstanding teacher is said to be one who has had a sustained, substantial, and positive influence on how their students think, act, and feel. Bain and colleagues performed a study to pinpoint techniques used by the best teachers to create a “natural critical learning environment”. One of many such techniques is to involve students in class or small-group discussions.
In this project, we developed a simulation model to begin to explore such questions. The model repre-
sents the materials in an online course, a social network of students, and some rules about an individual
student’s affective state and how this affects their actions: browsing, creating, and interacting with course
content. The goal of the model is to help instructors predict how students might navigate through the learn-
ing environment that the instructor is designing. The context of our work is within Artificial Intelligence in
Education (AIED) as we seek to build individualized, adaptable educational environments. Findings from
this model’s results could inform the design of future Intelligent Tutoring Systems.
We emphasize that the objective of our simulation model is not to control the behaviour of learner-agents, but rather to learn how best to scaffold their experiences for maximum significant learning, and to provide quantitative measures for a “natural critical learning environment” [Bain, 2004] [2]. For these measures, we present a series of measurements based on Fink’s Taxonomy of Significant Learning, discussed later in this report.
Our simulation model’s baseline scenario shows learning outcomes when students are given a “struc-
tureless” curriculum, that is, an empty set of learning objectives. Different intervention scenarios can provide
quantitative learning outcome measures for different assignments of learning objectives. Teachers might
ultimately decide to focus on a set of learning objectives in their class with maximum learning outcome
measures in the simulated model. We explore the balancing act between individual learning goals and
overall learning goals of the group. We also see that the knowledge artifacts created by learners can impact the learning experiences of their peers.
Although it was not possible to explore all of the questions uncovered in this modelling experience, we
can use the model structure that we have developed to give form to our future work. The model gives us
the ability to quantify a multitude of subtle questions that could not be measured in a real study, and might
be used to narrow down the important, specific questions that warrant a future real-world study.
In this report, we share the insights we gained during the model development process. Course planning can and should take into account the personal backgrounds and interests of each student, while providing group activities that allow each person to discover more than they could on their own. Our work has helped us ask how to plan the best group activities that have great meaning at the individual level. Although these questions are not new in themselves, asking them in a precise computational manner opens up exciting possibilities.
AIED researchers commonly strive to give their systems the ability to reason over the knowledge that a learner is trying to absorb while using the system. The term “task domain”, introduced in [VanLehn, 2006] [15], refers to the concepts that the learner is studying, and many different knowledge representation techniques have been used to encode task domain knowledge. In our model, the task domain ontology is only indirectly referenced through its association with Learning Objects. Our Learning Objects are considered agents, much like the “Application Agents” in [Vassileva et al., 2003] [16].
In order to simulate different teaching strategies in the future, we expect that we will need to have more ex-
plicit knowledge representation of the task domain ontology. Similarly, modelling human teaching strategies
gives insight into the different ways an Intelligent Tutoring System (ITS) and a learner can interact; du Boulay et al. present a survey in this area [du Boulay et al., 2001] [6]. Our model drastically abstracts the teaching process.
In contrast to the deep modelling of task domain ontology and specifics of human teaching, the Eco-
logical Approach suggests that meticulous metadata is not necessary, rather, to quote from the abstract in
[McCalla, 2004] [10], “In a phrase, the [Ecological] approach involves attaching models of learners to the
learning objects they interact with, and then mining these models for patterns that are useful for various
purposes.” It is not the metadata markup itself that is useful, but rather how the object ends up being used. Throughout this project, we have repeatedly asked ourselves whether and how we could leverage knowledge from educational psychology and instructional design to build effective eLearning environments.
Our model looks at the macro scale in terms of many learners, but work has gone into modelling a
Learner’s internal understanding of a subject, such as in [Conati et al., 2002] [4]. We ask whether understanding within an individual’s mind could impact decisions at the macro level, like how to structure group
activities over the course of a term. To guide future investigation about how agent-specific interactions
might inform more effective group learner activity, we note two techniques from AIED that show how a
system could interact with an individual learner over time. First, Model Tracing [8] pre-programs a rela-
tively specific course plan and encourages the learner to follow expected behaviour in order to complete
an activity. Second, the Constraint Based Tutoring Approach [Mitrovic et al., 2001] [12] matches a stu-
dent’s solution during a tutoring session to a set of constraints that describe a correct solution. Tailored
feedback can be given to the student based on the particular constraint violated in their answer. Student
answers that do not violate any constraints are correct solutions and thus do not require hints or corrections.
Our model does not have any representation for the individual steps in a particular problem, but in future
work we are interested in asking whether a student’s experience at a fine-grained level could or should
impact the system’s responsiveness to their personalized curriculum at a broader level. Additionally, we
note that research into the concept of curriculum in Intelligent Tutoring Systems prior to 1990 is surveyed in earlier literature.
This section describes the major components of the model: the network of Learners, the network of Learning Objects, the attributes of these two types of agents, and the data sets used to measure dimensions of significant learning.
The key structural elements of the Main part of our model are shown in Figure 1. Our model uses two envi-
ronments: The Learning Object environment houses a population of Learning Object agents, connected by
content relations such as is-a, part-of, prerequisite, etc., discussed further in [Wasson, 1991] [18]. Content
relations are important considerations when making pedagogical decisions about what options to recom-
mend to a learner given their past experience. The Learning Object Agents reside in a continuous environ-
ment using a Spring Mass layout. We selected this configuration because we wanted the system to show
the course content on the screen such that concepts that share epistemic relationships are shown close
together.
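As an illustration of the idea, one step of a spring-mass (force-directed) layout can be sketched as follows. This is not AnyLogic’s actual implementation, which is built in and opaque to us; it is a minimal Hooke’s-law embedder over the content graph, with made-up constants, showing why linked (epistemically related) concepts settle near each other:

```python
import math

def spring_layout_step(pos, edges, k=1.0, rest=1.0, dt=0.05):
    """One force-directed step over the content graph.
    pos maps node id -> (x, y); edges is a list of (a, b) pairs.
    Connected nodes are pulled toward a rest length, so related
    concepts end up displayed close together."""
    forces = {n: [0.0, 0.0] for n in pos}
    for a, b in edges:
        ax, ay = pos[a]
        bx, by = pos[b]
        dx, dy = bx - ax, by - ay
        dist = math.hypot(dx, dy) or 1e-9
        # Hooke's law: force proportional to stretch beyond the rest length
        f = k * (dist - rest)
        fx, fy = f * dx / dist, f * dy / dist
        forces[a][0] += fx; forces[a][1] += fy
        forces[b][0] -= fx; forces[b][1] -= fy
    return {n: (pos[n][0] + dt * forces[n][0], pos[n][1] + dt * forces[n][1])
            for n in pos}
```

Iterating this step until the forces are small yields a stable layout; AnyLogic performs an equivalent relaxation whenever the network changes.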
Figure 1: Main structural elements. The learnerAgents population is associated with the learners environment, and the learningObjectAgent population is associated with the objects environment. The goal of our model is to help explore the relationship between content, knowledge of teaching processes, and individual states like Focused, Stressed, and Confused. All of these are reflected in the creation and consumption of Learning Objects by Learners.

Like traditional classrooms, some eLearning classrooms have a synchronized curriculum where all students in the class are expected to be working on the same topic at roughly the same time. However, we
also wish to explore the pedagogical possibilities around a self-directed learning environment, where learn-
ers may come-and-go, starting the material at different times and working through it at different paces.
We discovered that the modelling process has helped to refine the questions around alternative curriculum scheduling, and to uncover questions about pedagogical techniques. A “time-synchronized” curriculum
might take the form of a series of Events in AnyLogic, where an event corresponds to an assignment of the
same Learning Objective to each student, perhaps with an associated deadline. The opposite situation is
when learners are not coordinated at all. Even at the individual level, [Vassileva et al., 1996] [17] describe
the “Locus of Control”, whether the Learner has most control over the direction of the lesson, vs. whether
the computer takes control. However, we are uncertain whether this has been explored in a quantitative
manner.
The second environment in our model represents relationships between Learners. In Figure 2, Learn-
ers are represented with circles and Learning Objects are represented with rectangles. We designed our
system to be able to represent relationships like “follows the research of”, “is a student of”, or “holds authority over”, although we have not yet leveraged this structure in our analyses. These types of relationships could factor into our algorithm for Learner behaviour that determines whether they choose to share material with a particular person, or whether they would follow the recommendation of a particular person.
A study was conducted by [Daniel et al., 2008] [5] that unveiled the circumstances under which students
share materials. For example, it was found that people are mostly unwilling to share knowledge with people
they hardly know, and that people would not share knowledge in an environment where competition instead
of cooperation is encouraged and one in which the notion of “knowledge is power” is maintained. Putting
social relationships into our model could also help the system cluster students together for group activities.
Figure 2: Two environments. When a Learner (circle) is “browsing” a Learning Object (rectangle), the Learner icon is shown on top of the Learning Object icon. As the simulation runs, you see circles appearing and disappearing
on top of content nodes. If a Learner creates a new artifact such as a blog entry or Twitter (www.twitter.com) message,
you will see the underlying Learning Object network grow a new node from the Learner’s current location.
Our model has two types of agents: Learner Agents and Learning Object Agents. The Learners represent
students taking a class (or, trying to learn roughly similar things in roughly similar space). It would have
been interesting to explore sub-typing of Learners, for example: those taking a specific class vs. “Lay
Person” learners who would share interactions and be influencing the environment themselves, but not be
subject to the curriculum goals set by a teacher. We acknowledge Sterman’s advice to model the problem, not the system [Sterman, 2000] [14] (p. 79). However, it remains to be explored how Lay Persons can indeed become an important part of the education process. Our model did not explore the concept of Teacher agents, either. Our model considers a single course; it may be interesting to consider a more macro scale, such as the planning of “The First Year Experience” at a particular institution, with different courses and instructors in play.
Each Learner Agent has two sets of learning objectives: those that are “imposed” by a teacher and
those that are self-selected by the Learner. In our final analysis, we only used the first category, but think
that further exploration of both is warranted because of the strong impact of internal vs. external motivators
in an educational setting.
Figure 3: Affective states. A Learner’s current affective state is relative to a particular Learning Object. Further
exploration is required around whether these should indeed be relative to a learning object, or should refer more
generally to the current learning objective, or perhaps both at the same time. For example, a person might feel confused
about the concept of subtraction, but be wildly enthusiastic about a particular YouTube video that illustrates subtraction.
This might inform the selection of the next tutoring action to be taken with the student, or, in a simulation model, could
yield a high score in a significant learning measure.
Emotion is an important part of learning, and researchers are finding ways to model emotion in computer tutors (examples: [Woolf et al., 2007] [19] and [Afzal et al., 2007] [1]). In our model, we based the Learner state chart on a selection from the emotion taxonomy of Baron-Cohen (as presented in [1]), as well as the states we judged most relevant to a learning context.
The emotional states are relative to the current topic of study in the environment. For example, when an individual learner is “Stressed”, this will be in relation to a particular topic. If we were living up to our aspiration of implementing the Ecological Approach, whenever we add a Learning Object to an individual’s history, we would also save a copy of the emotional state at the time. This would allow pattern-matching later on, for example, “Give me all of the Learner Agents who were feeling Angry-frustrated when they were visiting this Learning Object.”
At each Learning Object visit, each Learner could interact with the environment in three ways, depending
on their state:
1. browse: following links, traversing the landscape by content-connection. This is achieved with the browse() function.
2. warp to a Learning Object that corresponds to one of their goals (i.e. Learning Objectives).
3. create new learning object: remain stationary and produce a new learning object
The combination of activities above that a Learner takes at a given moment is impacted by their affective
state (Figure 3). For example, if a Learner is Attentive-Confident, then they will create a new learning object
and warp to the next objective. A Learner who is Bored-MentalFatigue will wander around without producing anything new.
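The mapping from affective state to actions can be sketched as below. This is a hedged reading of our state chart in plain Python rather than code from the model; in particular, the behaviours for Confused/Unsure and Frustrated/Angry are our assumptions here, not rules taken verbatim from the AnyLogic implementation:

```python
def choose_actions(state):
    """Map a Learner's affective state to the actions listed above.
    Returns the ordered list of actions taken on this visit."""
    if state == "Attentive/Confident":
        # confident learners produce an artifact, then warp to their next objective
        return ["create", "warp"]
    if state == "Bored/MentalFatigue":
        return ["browse"]  # wander the content graph without producing anything
    if state == "Confused/Unsure":
        return ["browse"]  # assumption: re-visit neighbouring material
    if state == "Frustrated/Angry":
        return []          # assumption: nothing in the system matches, so no action
    return ["browse"]      # default for any unmodelled state
```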
Attentive / Confident Default starting state. Further study is needed to more accurately state the conditions under which learners transition to Attentive / Confident, and we expect this would vary among individuals. Currently, we say that a Learner remains in this state so long as learning objects matching their objectives remain available.

Confused / Unsure When visiting a new Learning Object, we say that learners will become Confused / Unsure.
Frustrated / Angry Learners become Frustrated/Angry if there are no learning objects in the system that suit their learning objectives. This check is performed with the getLearningObjectTaskDomain() expression at each step.
Bored / MentalFatigue A Learner becomes Bored if they have completed all of their learning objectives.
We may say that a Bored person is more likely to follow the Learning Objects being created by their
friends than those created by the teacher. Learners transition to this state when the expression getNewLearningObjective()==null becomes true.
Stressed Not used - However, [Daniel et al., 2008] [5] have shown that Stressed Learners are highly
unlikely to share learning objects. In the future, if we introduce time limits on each learning objective,
this state might trigger if the Learner Agent doesn’t have enough time to complete all their objectives.
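The transition rules above can be summarized in a small sketch. The Learner and Repository structures below are simplifications of our AnyLogic agents, and the ordering of the checks is our own assumption about how the conditions should be prioritized:

```python
from dataclasses import dataclass

@dataclass
class Learner:
    objectives: set                 # topics not yet mastered
    visiting_new_object: bool = False

@dataclass
class Repository:
    topics: set                     # topics covered by existing Learning Objects

def next_state(current, learner, repo):
    """Return the Learner's next affective state per the rules above."""
    # All objectives completed -> Bored (the getNewLearningObjective()==null check)
    if not learner.objectives:
        return "Bored/MentalFatigue"
    # No Learning Object in the system suits any remaining objective -> Frustrated
    if learner.objectives.isdisjoint(repo.topics):
        return "Frustrated/Angry"
    # Visiting an unfamiliar Learning Object -> Confused
    if learner.visiting_new_object:
        return "Confused/Unsure"
    return current                  # otherwise remain in the current state
```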
At each Learning Object visit, there is a 75% probability that the Learner will “learn” the topic covered by that Learning Object. Future versions of this model could use a different probability per learner, per topic, or per Learning Object.
The second type of agent in our model, the Learning Object Agent, is used to represent anything that
a person can use, interact with, or create during their studies. Examples of Learning objects include: web
pages, RSS feeds, blog entries, assignments, discussions, or an intelligent tutoring system. We distinguish a Learning Object (an artifact, and an agent in our model) from a Learning Objective (a goal), which is an abstract target held by a Learner rather than an artifact in the environment.
When new Learning Object Agents are created, we do not know which previous Learning Object Agents
are related, and thus how to set up the Agent Connections in the system. Should the new learning object
inherit some parent attributes from the learning object that the learner is currently sitting on? These are
stochastic characteristics because we do not know what the learner is thinking or writing. However, we do have some context: we know the learner’s current goal and their browsing history, and have some sense of the input they are receiving from their friends and the teacher.
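One plausible inheritance rule could look like the sketch below. This is hypothetical rather than what the model currently does: inherit_prob is a made-up parameter, and the dictionary fields are illustrative names, not attributes from the AnyLogic agents:

```python
import random

def create_learning_object(parent, learner, inherit_prob=0.5, rng=random):
    """Create a new Learning Object while the learner sits on `parent`.
    Each parent topic tag is inherited with probability inherit_prob
    (a stand-in for the unknowable content of the learner's artifact),
    and the learner's current objective is always carried as a topic."""
    topics = {t for t in parent["topics"] if rng.random() < inherit_prob}
    if learner.get("current_objective"):
        topics.add(learner["current_objective"])
    return {"topics": topics,
            "parents": [parent["id"]],   # content link back to the creation site
            "creator": learner["id"]}
```

A teacher-facing version of the model could expose inherit_prob as one of the stochastic parameters to vary during sensitivity analysis.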
A practical challenge faced during this project was the visual synchronization of our two networks. When-
ever a Learner Agent created a new Learning Object Agent, the Spring Mass Layout for Learning Objects
would update, while the Learner Agents would not move. We got this working, but only if there are not too
many Agents moving around and when the populations do not grow too quickly. For example, if Sally were visiting “Introduction to Algebra”, the circle representing Sally would be displayed on top of the rectangle representing “Introduction to Algebra”. Suppose Sally creates a new learning object: say, she writes down the equation 3x = 9 and solves it. This chunk of math is a new Learning Object that gets added as a child node of “Introduction to Algebra”. AnyLogic recalculates the Spring Mass Layout, causing the “Introduction to Algebra” node to shift to one side, leaving Sally floating in the air. We had to decide whether to move Sally to sit on top of the “chunk of math” node, or to leave her on the “Introduction to Algebra” node. Theoretically, we wanted the former, but, practically, we could only get the latter to work!
Figure 4: Online course in iHelp. Our model can take any subset of the pages within an iHelp course using a
MySQL query. The structure of the overall course is shown in the left-hand column in the Figure above. Students can
click to any topic to view the details in the larger pane on the right of the screen. Every page in the course is represented
in our model as a Learning Object agent, discussed in Section 3.2.
Data from an online course in iHelp is used to load our model with Learning Objects. The navigational
links within the online course are used to create content relations between the objects (i.e. agent network
connections). In Section 3.1, we discussed that content could contain different types of relations like is-a
or part-of. However, in this first version of our model, we have only an uncategorized, directionless link be-
tween learning objects. [Breuker et al., 1999] [3] showed multiple dimensions of content links at increasing
levels of abstraction. These could help a system retrieve a highly relevant learning object; for example, a system may know that a particular student could benefit from being provided with an example of a “generalization” of a particular concept. Having content links that are labelled with these dimensions would enable the system to retrieve the relevant learning object for the particular student in a particular context.
Figure 5 shows the MySQL query we used to load Learning Objects into our model. Luckily for us, we could use a shortcut to map between any Learning Object and its Course Module by looking at the code for the chat window for that Module. Options include M1_EvolComp, M2_HwSw, M3_Internet, M4_BegXHTML, and others; we can add or remove chunks of Learning Objects by Module by modifying the subject codes being imported.
SELECT
DISTINCT(contentid), title, parent_contentid, chat_forumid
FROM course_content
WHERE courseid=3
AND chat_forumid IS NOT NULL
AND NOT title=’Exercises’
AND NOT title=’Quiz’
AND NOT title=’Conclusion’
AND NOT title=’References’
AND NOT title=’Assignment’
AND NOT title=’Videos’
AND (chat_forumid=’M1_EvolComp’
OR chat_forumid=’M3_Internet’
OR chat_forumid=’M6_ProgLang’
OR chat_forumid=’M9_JSConcepts’
OR chat_forumid=’M4_BegXHTML’
);
Figure 5: MySQL query to the iHelp database to obtain Learning Objects. Each record in the ResultSet corresponds to a Learning Object Agent in our AnyLogic model. The mapping is done in the function initializeAgentAttributes in AnyLogic’s Main. For cosmetic reasons, we excluded several learning objects with repetitive titles. If we were calibrating against actual student browsing behaviour, we would put these learning objects back in.
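A sketch of what initializeAgentAttributes does with these rows (in Python rather than the model’s Java; the field names are illustrative): each (contentid, title, parent_contentid, chat_forumid) row becomes an agent record, and parent_contentid yields the uncategorized, undirected content link discussed above:

```python
def build_learning_objects(rows):
    """Turn ResultSet-style tuples into Learning Object agent records.
    rows: iterable of (contentid, title, parent_contentid, chat_forumid)."""
    agents = {cid: {"id": cid, "title": title, "module": forum, "links": set()}
              for cid, title, parent, forum in rows}
    for cid, _, parent, _ in rows:
        if parent in agents:             # link child and parent when both were loaded
            agents[cid]["links"].add(parent)
            agents[parent]["links"].add(cid)
    return agents
```

Note that, as in the real model, a row whose parent was excluded by the query simply gets no link, which is exactly how the disconnected subgraphs discussed in Section 6 arise.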
4.2 Calibration
iHelp has valuable data about learner quiz scores, dwell times, enter/exit times on each learning object, and information about when learners posted questions in iHelp Discussions. Although our model did not make use of this rich data, calibration or initial parameterization of the model against it would be a fruitful direction for future work.
Since our goal is to help a teacher maximize the learning of their students, we had to find a way to measure these experiences. In our model, we used Fink’s Taxonomy of Significant Learning [7], which places kinds of learning into six categories. Below, we explain each category according to Fink’s definition, and then describe how we interpreted and applied that definition within our model as a quantitative measurement. We propose that the following significant learning experiences could be represented in a model, and that a teacher can arguably predict that their students will have a higher-quality learning experience when the corresponding measures score well.
We were only able to implement one measure in our final model: Foundational Knowledge. Within the
Learner Agent population, a Statistic calculates the average of each Learner Agent’s countFoundationalKnowledge() function, which counts the number of learning objects created by that agent that match
the topics covered in their learning objectives. In other words, we are saying that if a Learner creates
Learning Objects that match their learning objectives, then they are meeting the Foundational Knowledge
criteria. This is perhaps too large of an assumption because it may incorrectly give credit to a learner who,
for example, produces an article containing errors or mistaken information about the topic.
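A minimal sketch of this measure (in Python rather than the model’s Java; the dictionary fields are illustrative stand-ins for agent attributes):

```python
def count_foundational_knowledge(created, objectives):
    """Per-learner measure: how many of the artifacts this learner
    created cover a topic from their learning objectives."""
    return sum(1 for obj in created if obj["topic"] in objectives)

def average_foundational_knowledge(learners):
    """The population-level Statistic: the mean over all Learner Agents."""
    if not learners:
        return 0.0
    return sum(count_foundational_knowledge(l["created"], l["objectives"])
               for l in learners) / len(learners)
```

As the surrounding text notes, this counts a match regardless of the artifact’s quality, so an erroneous artifact on the right topic still scores.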
Learning How To Learn To measure whether a student is becoming a self-directed learner, or is mastering
a particular kind of inquiry (example: Scientific Method), we could use Messages in AnyLogic between
agents, and characterize types of messages (example: asking a question, stating a fact) and match
patterns. Does an experienced Learner exhibit different behaviour than a novice Learner?
Foundational Knowledge A student who understands and remembers facts will create new learning ob-
jects that embody those facts. Therefore, we can measure a simulated student’s degree of founda-
tional knowledge by counting the number of Learning Object Agents created by them that match the
given category.
Application Application learning allows new kinds of foundational knowledge to become useful. In the model, this could be reflected in the longevity of the Learning Objects created by the Learner, and by how often those objects are re-used by other Learners.
Integration When students make connections between ideas, integration has occurred. Within our simulation model, we might describe an abstract pattern of taking two ideas from a student’s past (i.e. looking at the history of Learning Objects visited by the student) and creating links between these parent objects and a new Learning Object created by that student. We might say that this particular pattern occurs under certain circumstances (for example, if a student has a certain level of experience with a topic).
Human dimension Is it possible to measure or predict a student’s learning about one’s self and others?
The Human Dimension is perhaps the most challenging category for a modeller. For a teacher using
this simulation model for course planning, perhaps it is enough to leave each student to ask these
questions for themselves. On the other hand, in this very young field of research, it is probably too
early to judge!
Caring When students show desire from within to pursue a particular topic beyond the requirements set
by a teacher, they are showing the “Caring” category of learning. This can be measured by counting the number of learning objectives in a Learner’s “subscribed” list of Learning Objectives (as opposed to those “prescribed” by the teacher).
We started to build in a measure for Integration, but did not finish. We say that a Learner has achieved
Integration if they have created a Learning Object that covers more than one topic. Unfortunately, at this
time we have not adequately explored how multiple ideas (i.e. “Concept” objects in our model) might come
to be attached to a particular learning object. This is a fruitful area for exploring the relationship between a learner’s past experiences and the new artifacts they create.
Since our model is primarily a course planning tool, we wanted to have a baseline scenario that would
represent the expected behaviour of people in a learning environment without any structure, course plan,
or any external motivations from a teacher. In our model, this means that each Learner Agent’s list of
“prescribed” learning objectives would start empty. The idea is to let a teacher input several different course
plans and observe the measures for Fink’s significant learning experiences. A teacher may decide to design
a course plan that maximizes learning outcome measures in the simulation model.
6 Sensitivity Analysis
Focusing on the number of learners, the addition and removal of learning objectives, and the starting
set of topics in the learning environment, we describe expected and unexpected results.
While keeping the number of Learners constant at 10 Learner Agents and using the same starting set of
Learning Objects, we compared the Affective States (i.e. the percentage of Learners who were Confused,
Attentive / Confident, Bored / Mental Fatigue, Frustrated / Angry) for when we put zero learning objectives
into the system as compared to when we did assign a starting set of learning objectives to each learner.
In both cases we observed, as expected, that at the beginning 100% of the learners started at the
Attentive / Confident state. However, in the case where there were zero learning objectives, the Learners
gradually all transitioned to Bored. Figure 6 shows the simulation when learners have zero learning objectives.
Figure 6: Snapshots through time with no Learning Objectives. This figure shows the same pie chart with
screenshots taken a few seconds apart. It shows that at the start of the baseline run, all learners started in an Attentive
state, and it did not take long for them all to transition to Bored. A Learner Agent transitions to Bored when they have
zero unaccomplished Learning Objectives left (or, in this case, none assigned at all).
In the case where the Learners had been assigned learning objectives, we observed that none of the
learners were ever Bored. Rather, the entire population was divided among Confused, Attentive / Confident, and Frustrated / Angry. The percentage of Learners in each category fluctuated considerably; for example, the Confused category would at times disappear altogether, with 40% Attentive / Confident and 60% Frustrated / Angry.
We also kept an eye on the new Learning Objects created by the Learner Agents, and which topics saw
the most growth. When there were Learning Objectives assigned, we observed that this could cause one
category to be ignored by the learners altogether. For example, in Figure 7, none of the learners are paying any attention to the chapter about Module 6 - Programming Languages. This is observable by the cluster of sand-coloured nodes shoved down in the lower right with no Learner Agents (i.e. circles) sitting on them.

Figure 7: Learners have Learning Objectives assigned. The pie chart on the left is the same as in Figure 6, from a different run; the only change made between runs is that here the Learner Agents have learning objectives assigned to them.
Given the current functionality of our model, other parameters we could vary in new experiments include:
The number of learners, the starting “Learning Object Visited” of each Learner, the starting affective state
of each learner, the starting set of learning objectives, the circumstances under which Learners create new
learning objects or browse the task domain. We might even consider issues of retention and withdrawal from the course.

Outputs we could measure include: the number of learning objects created, the navigation paths taken by each learner, and the “affective paths” of each learner (i.e. the order in which they transitioned across various affective states).
Although our model was built from scratch and is a very early and rough representation of the Course
Planning problem in eLearning, we did try to identify points of leverage in the system. During the first few
runs of our model, we noticed that the average foundational knowledge output (i.e. the topic coverage by
the average learner) is tied directly to the learning objectives imposed by the teacher. We searched for
different phases in the system where different feedback loops might dominate the system, and noted two
major phases: first, as Learners are fulfilling their learning objectives, and second, behaviour after Learners
have completed their objectives and continue to exist within the system.
We ran our simulation with each learner being given the learning objective to master Module 6 - Program-
ming Languages. We observed that as the simulation ran, zero learners were picking up any knowledge
from this chapter. This was surprising because our model had a very broad assumption that if a Learner
merely visited a learning object covering that topic, they would immediately master the topic. We quickly
arrived at the conclusion that none of the learners were even browsing to any learning objects in that cat-
egory at all, which was why Fink’s measure for Foundational Knowledge remained at zero. In other words,
if by chance a Learner Agent’s starting location was placed in a subgraph that did not contain any learn-
ing objects that matched their learning objectives, then the Learner had zero possibility of learning anything.
Each Learner Agent’s behaviour was set to follow our browse() function, which causes them to navigate
from learning object to learning object like a graph traversal. However, we realized that our graph was
actually three graphs: due to our iHelp query, there were zero links between learning objects of different cate-
gories! Our model did not realistically replicate true learner behaviour at all. In reality, learners
using iHelp could “warp” from Module to Module simply by clicking links in the navigation tree; they did not
necessarily have to use the Forward and Back links that our graph in AnyLogic was based on. To address
this, we added a function MatchesMyObjectives() that allows Learner Agents to jump across subtrees without the need for content
links between all learning object nodes in the repository. While writing this function, we uncovered yet an-
other assumption that should be further investigated: when a Learner Agent starts to follow a new learning
objective and there are, say, 15 learning objects in the repository that share its topic, where does
the learner start? For our model, we simply select a random object.
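The combined behaviour can be sketched as follows (an illustrative helper, not the model's actual browse() or MatchesMyObjectives() code; the graph data is invented):

```python
import random

def next_object(current, links, objects_by_topic, objective_topic):
    """Follow a content link when one leads to an object on the learner's
    objective topic; otherwise 'warp' (as iHelp's navigation tree allows)
    to a randomly chosen object on that topic."""
    on_topic = set(objects_by_topic.get(objective_topic, []))
    linked_matches = [n for n in links.get(current, []) if n in on_topic]
    if linked_matches:
        return random.choice(linked_matches)
    return random.choice(sorted(on_topic)) if on_topic else None

links = {"a1": ["a2"]}                       # a1's subgraph has no Module 6 links
objects_by_topic = {"Module 6": ["b1", "b2"]}

nxt = next_object("a1", links, objects_by_topic, "Module 6")
print(nxt in {"b1", "b2"})  # True: the learner warps out of its subgraph
```

The random choice at the end is exactly the assumption flagged above: with several candidate objects on a topic, the model has no principled way to pick a starting point.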
We chose to base the Learning Object Agent network on an iHelp course, but would like to expand to al-
low the addition of Learning Objects in ways other than creation by Learner Agents. For example, the
instructor might flag a set of relevant RSS feeds and ask students to read incoming news items throughout
the course. Since we discovered that the content of the Learning Objects was a point of leverage in the sys-
tem, it would be important to perform further sensitivity analysis on such a stochastic, unpredictable source
of content.
Although we ultimately only explored a “random, wandering learner” compared to a “Learner with objec-
tives”, we also present other possible experiments that we considered in the earliest phases of
this project:
System vs. Learner control in learning objective negotiation [17] In our system, we could change the
proportion of “prescribed” vs. “subscribed” learning objectives allowed. The system could allow more
freedom by letting the learner add to or remove from the set they work on in their day-to-day
activity online. A related measure is quality vs. quantity in browsing: one learner may browse through
many Learning Objects but not experience any deep learning, while another learner may browse
fewer objects but gain more value from them. In future work, this is probably the most important direction
to pursue.
Automated Content Sequencing Perhaps instead of an assignment, we could impose a Reading
Path and restrict the Learner’s navigation – “no distractions allowed for a sec”. Would we want to
measure how such a restriction affects exploration and learning outcomes?
Idea transmission across social networks Agents’ opinions of other agents: [Olorunleke et al., 2003]
[13] designed a simulation to study the spread of delusion among agents. His findings may show
how opinions and ideas propagate among Learner Agents.
Trying different teaching techniques Testing out teaching techniques (assuming teaching techniques can
be represented as processes, perhaps using different Events). Should our system give learners
feedback when they make a mistake, or let them explore? Our model would need substantial work to be
able to explore this type of question, but a simulation might help us see whether this decision causes any
measurable difference in learning.
Learning object Recommendation What would happen if we gave the learners different starting sets of
learning objects? Or gave different learners different sets of learning objects, but let them begin to share
with each other? Can we measure differences in learning when the instructor uses a different
recommendation strategy?
Group work Test what happens when certain groups of people are clustered together with certain shared
characteristics or learning objectives.
Single user vs. Many users Since other Learners impact the environment by adding new learning objects,
an individual agent would have a different experience than an agent within a population of many.
For example, if a person is extremely interested in a particular topic, they could flood the learning
environment with questions and articles about that topic. This will impact the browsing experience
of others, because it changes the proportion of items available within a particular topic. Contribution
rates may also be impacted as a side-effect of the changing Learning Object landscape; if the course
is going in a direction that the student is uninterested in, their rates may drop.
Time Pressure and Work Load If Learning Objectives were given a time limit, and learners were expected
to work through all objectives within a certain period of time, then we may see an impact on the amount
of exploration and browsing activity from each learner. For example, with too many Learning Ob-
jectives under a tight deadline we might expect to see Learner Agents spending much of their time
in the Stressed state, while too few Learning Objectives might increase time spent in the
Bored (mentalFatigue) state. Allowing a teacher to adjust the number of Learning Objectives and then
measuring the number of “subscribed” Learning Objectives (as opposed to “prescribed” ones)
could help them select the sweet spot for their class.
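The workload intuition in the last item can be sketched as a toy rule (the function and thresholds are invented for illustration, not calibrated against our model):

```python
def affective_state(objectives_remaining, days_remaining,
                    stress_threshold=2.0, boredom_threshold=0.25):
    """Toy workload rule: many objectives per remaining day -> Stressed,
    very few -> Bored, otherwise Engaged."""
    if days_remaining <= 0:
        return "Stressed"                      # deadline already passed
    load = objectives_remaining / days_remaining
    if load > stress_threshold:
        return "Stressed"
    if load < boredom_threshold:
        return "Bored"
    return "Engaged"

print(affective_state(10, 2))   # Stressed (load 5.0)
print(affective_state(1, 10))   # Bored (load 0.1)
print(affective_state(3, 5))    # Engaged (load 0.6)
```

Sweeping the number of objectives against a fixed deadline with a rule like this would let a teacher see where the simulated population tips from boredom into stress.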
8 Conclusion
We work toward a new course planning tool that will help teachers move away from a model where all stu-
dents perform the same tasks at the same time, toward a learning-centred environment where a teacher can
plan, deliberately, for each student to follow a personalized curriculum. At the same time, it is important to
have some measure of standardization (for example, for accreditation or certification), and we suggest that the
set of “prescribed” learning objectives in our model fills this need. Common learning objectives also allow
for the planning of collaboration among groups of students, by assigning the same learning objective to
some subsets of students but not others. Although we did not explore this as much as we would have liked, we
have broken new ground and made a good start. Another interesting question a teacher might
ask concerns the optimum group size for certain types of classes: perhaps some domains (humanities, sciences,
etc.) work well with many learners while others work better with fewer.
Human tutor “action words” that are normally considered unattainable for a computer can now be consid-
ered in relation to the framework we have established: give advice, give an analogy, assert, clarify a concept,
summarize, hint, explain, encourage, and so on (see [Kumar et al., 2005] [9] for a thorough list). We suggest
a few interpretations. To give advice means to identify a learner in a simulation with similar characteristics, calculate a few
likely navigation paths through the Learning Objects at hand, and then recommend the action with the highest
utility (i.e. our measure of a significant learning experience). To give an analogy is to find a set of learning objects
with similar relationships between them (albeit easier said than done!). To assert means to have a learner
visit a Learning Object containing that fact. And so on. Although these are still extremely difficult problems
in Artificial Intelligence, using a simulation allows us to describe them quantitatively. We continue to try
to understand what a computational model for different teaching strategies might look like, as discussed in
Section 2, and whether detailed computational representations of human teaching tactics and dimensions
of learning are feasible.
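The “give advice” interpretation above can be sketched as similarity matching over navigation histories (a hypothetical helper with invented data, not an implemented part of our model):

```python
def give_advice(target_path, history, utility):
    """Find the past learner most similar to the target (shared visited
    objects, ties broken by path utility) and recommend the first object
    on that learner's path the target has not yet seen."""
    def similarity(a, b):
        return len(set(a) & set(b))
    best = max(history, key=lambda h: (similarity(target_path, h["path"]),
                                       utility(h["path"])))
    for obj in best["path"]:
        if obj not in target_path:
            return obj
    return None

# Invented navigation histories of two previous learners.
history = [
    {"path": ["a1", "a2", "b1"]},
    {"path": ["a1", "c1"]},
]
advice = give_advice(["a1", "a2"], history, utility=len)
print(advice)  # b1
```

Here path length stands in for the utility measure; in the full model it would be the measure for a significant learning experience.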
Throughout the development of this model, we have come to better understand the Ecological approach [10]
and how it might be implemented. In our model, we would have to keep track of the purpose for which each
Learner made use of a particular learning object, and record that information with each Learning Object
Agent. We would also have to build in a recommendation mechanism that would compare Learner Agents
and present each with a set of Learning Objects to choose from, based on the behaviour of previous similar
Learner Agents. This might help answer our question in Section 6.3, where we were only using a random
selection of learning objects.
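The two ingredients just described – purpose-tagged usage records on each Learning Object Agent, and a recommender that compares learners – could be sketched as follows (illustrative names and data; a crude stand-in for the Ecological approach’s purpose-based capture of learner information):

```python
class LearningObjectAgent:
    """Each object keeps end-use records: (learner profile, purpose, success)."""
    def __init__(self, oid):
        self.oid = oid
        self.records = []

    def log_use(self, learner_profile, purpose, success):
        self.records.append((learner_profile, purpose, success))

def recommend(objects, learner_profile, purpose):
    """Rank objects by how often similar learners used them successfully
    for the same purpose, replacing our purely random selection."""
    def score(obj):
        return sum(1 for prof, p, ok in obj.records
                   if ok and p == purpose and prof == learner_profile)
    return sorted(objects, key=score, reverse=True)

lo1, lo2 = LearningObjectAgent("lo-1"), LearningObjectAgent("lo-2")
lo2.log_use("novice", "review", True)
lo2.log_use("novice", "review", True)
lo1.log_use("expert", "review", True)

ranked = recommend([lo1, lo2], "novice", "review")
print(ranked[0].oid)  # lo-2: the object novices used successfully before
```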
Our model allows us to articulate research questions in AIED that were not previously quantifiable. The
process of teaching and learning online can be explored as an experiment in an Agent-Based simulation,
using research findings from educational psychology and educational technology as a basis for the model
formation. We tackle the questions teachers face about their role in the participative web, and how they
continue to play a critical role even in a world where the teacher is no longer the primary disseminator of
knowledge, and the classroom is no longer the only place for collaboration and communication between
students.
References
[1] S. Afzal and P. Robinson. A study of affect in intelligent tutoring. In Proceedings of the Workshop on
Modelling and Scaffolding Affective Experiences to Impact Learning, International Conference on Artificial
Intelligence in Education, 2009.
[2] Ken Bain. What the Best College Teachers Do. Harvard University Press, Cambridge, Massachusetts,
2004.
[3] J. Breuker and A. Muntjewerff. Ontological modelling for designing educational systems. In AI-ED 1999
Workshop on Ontologies for Intelligent Educational Systems, Le Mans, France, 1999.
[4] C. Conati, A. Gertner, and K. VanLehn. Using Bayesian networks to manage uncertainty in student
modeling. User Modeling and User-Adapted Interaction, 12(4):371–417, 2002.
[5] B.K. Daniel, G.I. McCalla, and R. Schwier. Social network analysis techniques: implications for informa-
tion and knowledge sharing in virtual learning communities. International Journal of Advanced Media
and Communication, 2007.
[6] B. du Boulay and R. Luckin. Modelling human teaching tactics and strategies for tutoring systems.
International Journal of Artificial Intelligence in Education, 12:235–256, 2001.
[7] L. Dee Fink. Creating Significant Learning Experiences: An Integrated Approach to Designing College
Courses. Jossey-Bass, San Francisco, 2003.
[8] N.T. Heffernan, K.R. Koedinger, and L. Razzaq. Expanding the model-tracing architecture: A 3rd gener-
ation intelligent tutor for Algebra symbolization. International Journal of Artificial Intelligence in Educa-
tion, 18(2), 2008.
[9] Vive Kumar, Jim Greer, and Gord McCalla. Assisting online helpers. International Journal of Learning
Technology, 1:293–321, March 2005.
[10] G. McCalla. The ecological approach to the design of e-learning environments: Purpose-based capture
and use of information about learners. Journal of Interactive Media in Education, 2004.
[11] Gordon McCalla. The search for adaptability, flexibility and individualization: Approaches to curriculum
in intelligent tutoring systems. In Adaptive Learning Environments: Foundation and Frontiers. Springer-
Verlag, 1992.
[12] A. Mitrovic, M. Mayo, P. Suraweera, and B. Martin. Constraint-based tutors: a success story. In Proc.
of the 14th International Conference on Industrial and Engineering Applications of Artificial Intelligence
and Expert Systems (IEA/AIE 2001), 2001.
[13] O. Olorunleke. Overcoming agent delusion. In Proceedings of the Second International Conference
on Autonomous Agents and Multiagent Systems (AAMAS’03), July 14–18, 2003, Melbourne, Australia,
2003.
[14] John D. Sterman. Business Dynamics: Systems Thinking and Modeling for a Complex World. Irwin/
McGraw-Hill, Boston, 2000.
[15] K. VanLehn. The behavior of tutoring systems. International Journal of Artificial Intelligence in Educa-
tion, 16(3):227–265, 2006.
[16] J. Vassileva and G. McCalla. Multi-agent multi-user modeling in I-Help. User Modeling and User-
Adapted Interaction, 13(1–2), 2003.
[17] J. Vassileva and B. Wasson. Instructional planning approaches: From tutoring towards free learning.
In Proceedings of EuroAIED’96, Lisbon, Portugal, 1996.
[18] Barbara Wasson. PEPE: A computational framework for a content planner. In S. Dijkstra, H.P.M.
Krammer, and J.J.G. van Merrienboer, editors, Instructional Models in Computer-Based Learning En-
vironments, pages 153–170. NATO ASI Series F: Computer and Systems Sciences, Springer-Verlag,
1992.
[19] B. Woolf, W. Burleson, and I. Arroyo. Emotional intelligence for computer tutors. Supplementary
Proceedings.