Educational Research Review 6 (2011) 27–54
Contents lists available at ScienceDirect
Educational Research Review
journal homepage: www.elsevier.com/locate/EDUREV
Review
Do pedagogical agents make a difference to student motivation and
learning?
Steffi Heidig a,∗, Geraldine Clarebout b,1
a née Domagk, University of Erfurt, Learning and New Media, P.O. Box 900 221, 99105 Erfurt, Germany
b Katholieke Universiteit Leuven, Vesaliusstraat 2, Box 3770, 3000 Leuven, Belgium
Article info
Article history:
Received 25 February 2009
Received in revised form 20 July 2010
Accepted 27 July 2010
Keywords:
Pedagogical agents
Animated agents
Multimedia learning environments
Motivation
Learning
Abstract
Pedagogical agents, characters that guide learners through multimedia learning environments, have recently gained increasing interest. A review published by Clarebout, Elen, Johnson and Shaw in 2002 noted that many promises had been made, but that research on the motivational and learning effects of pedagogical agents was scarce. More than 70 articles on pedagogical agents have been published since, 26 of which examine motivational and learning effects. We map out this research in order to answer three main questions: Do pedagogical agents facilitate learner motivation and learning? Under what conditions are they effective? How should they be designed? The review reveals that various studies have not investigated the first two fundamental questions due to a lack of control groups. As research on pedagogical agents is highly complex, we propose a multi-level framework to enable systematic comparisons between different studies and the identification of gaps in the literature. This framework can further be applied to conceptualize and situate future studies.
© 2010 Elsevier Ltd. All rights reserved.
Contents
1. Result of the previous review
2. Aims of the present review
3. Methodology
4. Do pedagogical agents facilitate learner motivation and learning?
5. Under what conditions do pedagogical agents facilitate learner motivation and learning outcomes?
   5.1. Pedagogical Agents-Conditions of Use Model (PACU)
   5.2. Under what conditions do pedagogical agents facilitate learner motivation and learning outcomes? – Empirical evidence
6. How should pedagogical agents be designed to foster learner motivation and learning outcomes?
   6.1. How should pedagogical agents be designed? – The learning environment
      6.1.1. The learning environment – empirical evidence
   6.2. How should pedagogical agents be designed? – Learner characteristics
      6.2.1. Learner characteristics – empirical evidence
   6.3. How should pedagogical agents be designed? – Functions of the pedagogical agent
      6.3.1. Functions of the pedagogical agent – empirical evidence
   6.4. How should pedagogical agents be designed? – Design of the pedagogical agent
      6.4.1. Global design level: human vs. non-human characters and static vs. animated
∗ Corresponding author. Tel.: +49 361 737 2753; fax: +49 361 737 2759.
E-mail addresses: steffi[email protected] (S. Heidig), [email protected] (G. Clarebout).
1 Tel.: +32 16 325718; fax: +32 16 326274.
1747-938X/$ – see front matter © 2010 Elsevier Ltd. All rights reserved.
doi:10.1016/j.edurev.2010.07.004
      6.4.2. Global design level – empirical evidence
      6.4.3. Medium design level: technical decisions
      6.4.4. Medium design level: technical decisions – empirical evidence
      6.4.5. Medium design level: choice of the character
      6.4.6. Medium design level: choice of the character – empirical evidence
      6.4.7. Detail design level: age, gender, clothing, weight, etc.
      6.4.8. Detail design level: age, gender, clothing, weight, etc. – empirical evidence
7. Discussion and overall conclusion
   7.1. Do pedagogical agents facilitate learner motivation and learning? Do they make a difference?
   7.2. Under what conditions do pedagogical agents facilitate learner motivation and learning outcomes? When are they effective?
   7.3. How should pedagogical agents be designed to foster learner motivation and learning outcomes?
   7.4. Conclusion
References
Pedagogical agents are lifelike characters presented on a computer screen that guide users through multimedia learning environments. The term “agent” refers to the agent metaphor of presenting a character on the screen (Erickson, 1997). The agent’s image on the screen is therefore a defining feature of pedagogical agents. Although some authors argue that the agent’s voice, rather than its physical presence on the screen, is crucial (presence principle; Mayer, Dow, & Mayer, 2003), the mere presentation of a voice cannot be considered a pedagogical agent. Further, the term “animated pedagogical agent” is frequently used in the literature (e.g., Atkinson, 2002; Craig, Gholson, & Driscoll, 2002; Johnson, Rickel, & Lester, 2000; Lester et al., 1997; Moreno, Mayer, Spires, & Lester, 2001). The image of a pedagogical agent, however, can be presented either as a static or as an animated picture (Clark & Mayer, 2002).
The implementation of pedagogical agents is an attempt to introduce more instructional support and motivational elements into multimedia learning (Clark & Choi, 2005). While their supporters expect them to foster learner motivation and learning outcomes through the social cues they provide (e.g., Atkinson, 2002; Baylor & Kim, 2005; Craig et al., 2002; Johnson et al., 2000), others express concerns that they may distract the learner from the learning content (Dehn & van Mulken, 2000; Walker, Sproull, & Subramani, 1994). Therefore, the question arises whether the presence of pedagogical agents does indeed facilitate learner motivation and learning outcomes.
Research concerning pedagogical agents was initially carried out from a technological perspective. In most cases it was
related to projects that dealt with the development of pedagogical agents (Graesser et al., 1999; Johnson, Rickel, Stiles, &
Munro, 1998; Johnson et al., 2000; Lester et al., 1997, for an overview see Clarebout, Elen, Johnson, & Shaw, 2002). Most of
these early studies did not examine whether pedagogical agents are actually beneficial for learning, but focused on other
features, such as the perceived intelligence of the characters (King & Ohya, 1996), the credibility (Lester et al., 1997), the
entertainment value (Takeuchi & Naito, 1995), or the perceived usefulness (Cassell & Thórisson, 1999). Moreover, there was no clear separation between research on interface agents and research on pedagogical agents at that time, as can be seen in the review by Dehn and van Mulken (2000). Research on pedagogical agents from an educational point of view started at the end of the 1990s. The first review of the effects of pedagogical agents on learning was provided by Clarebout et al. (2002). At that time, only five empirical studies published in peer-reviewed journals reported learning results (André, Rist, & Müller, 1999; Lester et al., 1997; Moreno, Mayer, & Lester, 2000; Moreno et al., 2001).
1. Result of the previous review
The initial studies on the effect of pedagogical agents on learning as reviewed by Clarebout et al. (2002) yielded mixed
results. Lester et al. (1997) presented five versions of the pedagogical agent “Herman the Bug” (Fig. 1), in which he exhibits
different types of communicative behaviors. In all five experimental conditions, the learners achieved better results in the
post-test than in the pre-test. Hence, Lester et al. (1997) reasoned that the mere presence of the pedagogical agent leads to better learning achievements and referred to this as a “persona effect”. However, the experimental design did not comprise a control group; the positive effect of the pedagogical agent was inferred solely from the difference between pre- and post-test performance.
André et al. (1999) compared a group working with the pedagogical agent “PPPersona” with a control group without
an agent and failed to replicate the persona effect. Although the problem of the missing control group was pointed out
by Clarebout et al. (2002), the persona effect is still frequently cited, especially in studies from a technological perspective
(e.g., Craig et al., 2002; Dirkin, Mishra, & Altermatt, 2005; Perez & Solomon, 2005; Prendinger, Mori, & Ishizuka, 2005; Roda, Angehrn, Nabeth, & Razmerita, 2003).
Moreno et al. (2000) also compared an agent group (“Herman the Bug”) with a no-agent group. They reported no differences in retention, but an advantage of the agent group on transfer, interest and motivation. In a follow-up, they varied three
features of the pedagogical agent: image, voice and personalized language. They found no effect of the agent’s image, but
an advantage of its voice on retention and transfer. Furthermore, the personalized language positively affected retention.
Moreno et al. (2001) varied the type of the agent’s image by comparing the animated agent “Herman the Bug” to a video of
a real person and found no effect for this variable on retention or transfer.
Fig. 1. The pedagogical agent “Herman the Bug” in the learning environment “Design-a-Plant” (Lester et al., 1997).
In sum, only two of the five studies that were available for the review in 2002 comprised a control group without an
agent. The other three studies compared different features of pedagogical agents without having a control group. As a control
group is needed in order to investigate whether pedagogical agents do facilitate learner motivation and learning outcomes,
evidence regarding this research question was scarce. Recently, research on pedagogical agents has gained increasing interest and, following the review by Clarebout et al. (2002), many studies have been published. We therefore used this review as a starting point to map out the research that has been done since.
2. Aims of the present review
The present review aims to answer three main questions:
1. Do pedagogical agents facilitate learner motivation and learning? Do they make a difference?
From an educational perspective, the priority objective is to provide instructional support and motivational elements to
foster student learning. Therefore, it is worthwhile to ask whether the presentation of a pedagogical agent might be a fruitful
strategy. Furthermore, the implementation of pedagogical agents is costly in terms of time, personnel, and budget, which raises the question whether they afford additional benefits that justify these efforts. As a minimum requirement, one could argue that they should not interfere with learning or impair learner motivation.
The review of the existing studies will reveal that this question is too broad. Therefore, the question has to be differentiated in order to analyze the conditions under which pedagogical agents can be effective.
2. Under what conditions do pedagogical agents facilitate learner motivation and learning outcomes? When are they effective?
In order to answer this question we provide a multi-level framework for research on pedagogical agents that takes the
conditions of their use into account. This framework is applied to systematically analyze the results of the existing studies.
3. How should pedagogical agents be designed to foster learner motivation and learning outcomes?
Following the questions of whether pedagogical agents can be effective and under what conditions, we address the issue of how pedagogical agents should be designed in order to facilitate the learning process. To this end, we review studies that compare different features of pedagogical agents.
3. Methodology
Using the review by Clarebout et al. (2002) as a starting point, empirical studies that have been published since were
investigated. The last search for articles was done in February 2010. For our investigation of the literature, the search engines ERIC, Web of Science and PsycINFO were used, with the same descriptors (pedagogical agent, animated agent, agent and virtual reality, agent and multimedia, personal digital agent) as in the Clarebout et al. article. Papers that have been
published in scientific, peer-reviewed journals were included. The abstracts of the retrieved articles were read by the two authors and independently marked as relevant or not. Next, all relevant articles were retrieved in full and analyzed. Altogether, 75 articles were found, of which 26 were included in the review on the basis of the following criteria.
The goal of applying pedagogical agents is to facilitate learner motivation and learning. Therefore, we only refer to studies
that measured motivation, retention or transfer performance. Accordingly, evaluative studies of specific pedagogical agents
such as “AutoTutor” or “Ore-Age” (Graesser, Chipman, Haynes, & Olney, 2005; Graesser et al., 2004; Guray, Celebi, Atalay,
& Pasamehmetoglu, 2003), or the prototypes “CoPAS”, “SA-Agent” and “Rule-Editor” (Morch, Dolonen, & Naevdal, 2004),
as well as studies not reporting on learning effects (e.g., Biswas, Leelawong, Schwartz, & Vye, 2005; Gulz & Haake, 2006; Prendinger, Ma, & Ishizuka, 2007) were excluded.
Thus, 26 articles that focus on the effect of pedagogical agents on learner motivation and learning were reviewed. As
some of the articles report multiple studies, the total number of experiments included in this review amounts to 39. Table 1
provides an overview of the results of these 39 studies.
4. Do pedagogical agents facilitate learner motivation and learning?
In order to answer the question whether pedagogical agents affect the learning process, only studies that comprised an
experimental group with a pedagogical agent and a control group without an agent could be considered. Without a control
group, it is not possible to draw conclusions on whether a pedagogical agent makes a difference. As the presentation of the
agent’s image on the screen is a defining feature of pedagogical agents, we only included studies that compared an image
group (agent group) and a no-image group (no-agent group).
Surprisingly, only 15 out of the 39 reviewed studies comprised a control group (marked in grey in Table 1). Nine of
these studies reported no differences in the learning outcomes (retention and/or transfer) for the agent and the no-agent
groups (Atkinson, 2002, experiment 1; Baylor & Ryu, 2003; Craig et al., 2002, experiment 1; Dirkin et al., 2005; Domagk,
2010, experiment 1; Dunsworth & Atkinson, 2007; Mayer et al., 2003, experiment 4; Moundridou & Virvou, 2002; Perez
& Solomon, 2005).2 Thus, presenting a pedagogical agent on the screen yielded no additional learning effect. Two of these
studies further measured motivation and reported no difference (Baylor & Ryu, 2003; Domagk, 2010, experiment 1).
A single study (Atkinson, 2002, experiment 2) indicated an advantage of the pedagogical agent on near and far transfer performance when compared to the presentation of text or voice only (near transfer: Cohen’s f = 0.30; far transfer: Cohen’s f = 0.31). Five studies reported mixed results (Clarebout & Elen, 2006; Domagk, 2010, experiment 2; Holmes, 2007; Lusk & Atkinson, 2007; Plant, Baylor, Doerr, & Rosenberg-Kima, 2009). Clarebout and Elen (2006) found an advantage of pedagogical agents on retention when compared to the control group (η² = 0.05); transfer performance did not differ between the agent and the no-agent groups. In Holmes’ (2007) study, one agent group (single student + agent) performed better on retention tasks (deep explanations) than the control group (pairs of students + text), while the other agent group (pairs of students + agent) did not differ from the control (no effect sizes provided); for shallow explanations and monitoring statements, however, he found no difference. Domagk (2010) reported no differences between four different agent groups and the control group in initial and state motivation, or in retention. For transfer performance, however, she found a disadvantage of a pedagogical agent dislikable in both appearance and voice compared to the control group and to the other agent groups (η² = 0.28). Lusk and Atkinson (2007) reported no difference in retention between the agent and the no-agent groups; one agent group (animated), however, outperformed the no-agent group on near and far transfer (Cohen’s f = 0.18), whereas the other agent group (static) did not differ significantly. Plant et al. (2009) found an effect of a female agent on motivation and learning, but not of a male agent (no effect sizes provided).
To summarize, the existing studies that compare learning results when working with or without a pedagogical agent draw a discouraging picture. Only five out of 15 studies reported advantages of the agent groups on
learning, while four of these five studies yielded mixed results. The majority of studies indicated no effect of pedagogical
agents on learning. In one study, the presence of a pedagogical agent even hindered learning. Although the use of pedagogical
agents aims at facilitating learner motivation and learning, only four of the 15 studies provided data on learner motivation
(Baylor & Ryu, 2003; Domagk, 2010, experiments 1 and 2; Plant et al., 2009). Three of them yielded no difference between
the agent and the no-agent groups.
Does that mean that pedagogical agents have no or sometimes small effects on learning but at least do not hinder the
learning process? As one study indicates a negative effect, even this may be questionable. In our view, the question whether
pedagogical agents generally facilitate the learning process is too broad. The pedagogical agents in the reviewed studies
executed different functions and were differently designed, ranging from a talking head (Moundridou & Virvou, 2002) to
a parrot (Atkinson, 2002), or a cartoon human character (Craig et al., 2002, Fig. 2). Moreover, they were used in different
learning environments covering different topics, such as nanotechnology (Dirkin et al., 2005), the electric motor (Mayer et
al., 2003) or ecology problems (Clarebout & Elen, 2006; Holmes, 2007). Therefore, the conditions of the use of the pedagogical
agent need to be taken into account.
2 Note: Atkinson (2002, experiment 1) reports results of a 2 × 2 design (aural/textual explanations vs. presence of agent) with a control group (no explanations, no agent). We only refer to the difference between the agent and the no-agent group, as the control group lacked the explanations that were provided in both agent groups.
Table 1
Empirical studies on pedagogical agents. Each entry lists: No.; authors (year); participants (N, kind, age, gender) and topic; independent variables; results.
[1] Atkinson (2002), experiment 1
Participants: N = 50 undergraduate psychology students; 18% male, 82% female. Topic: proportion-word problems.
Independent variables: 2 × 2 design with control: modality of explanations (voice vs. text) × agent (agent vs. no agent), plus control (no explanations, no agent). (PA: Peedy the parrot)
Results: Motivation: -. Retention: -.
Near transfer: M = 7.30, SD = 1.06 (Voice); M = 6.00, SD = 2.58 (Text); M = 7.30, SD = 1.49 (Voice + Agent); M = 5.60, SD = 2.37 (Text + Agent); M = 4.80, SD = 1.99 (Control). →ANCOVA: Agent = No Agent (F(1,35) = 0.01, MSE = 3.05, p = 0.93); Voice > Text (F(1,35) = 5.91, MSE = 3.05, p < 0.05, Cohen’s f = 0.42).
Far transfer: M = 5.70, SD = 2.00 (Voice); M = 4.90, SD = 2.42 (Text); M = 6.30, SD = 2.67 (Voice + Agent); M = 4.30, SD = 2.987 (Text + Agent); M = 2.90, SD = 3.07 (Control). →ANCOVA: Agent = No Agent (F(1,35) = 0.13, MSE = 4.31, p = 0.72); Voice = Text (F(1,35) = 3.12, MSE = 4.31, p = 0.09).

[2] Atkinson (2002), experiment 2
Participants: N = 75 undergraduate psychology students; 29.3% male, 70.7% female. Topic: proportion-word problems.
Independent variables: text only vs. voice only vs. agent + voice. (PA: Peedy the parrot)
Results: Motivation: -. Retention: -.
Near transfer: M = 5.82, SD = 4.00 (Text); M = 7.20, SD = 3.35 (Voice); M = 8.36, SD = 2.96 (Agent + Voice). →ANOVA: Agent + Voice > Voice = Text (F(1,72) = 3.21, p < 0.05, Cohen’s f = 0.30).
Far transfer: M = 3.92, SD = 3.41 (Text); M = 4.32, SD = 3.18 (Voice); M = 6.38, SD = 4.25 (Agent + Voice). →ANOVA: Agent + Voice > Voice > Text (F(1,72) = 3.57, p < 0.05, Cohen’s f = 0.31).

[3] Atkinson et al. (2005), experiment 1
Participants: N = 50 college students. Topic: proportional reasoning word problems.
Independent variables: human vs. machine voice (both given by PA “Peedy the Parrot”, MS Agent character).
Results: Motivation: -.
Retention (performance on practice problems): M = 2.67, SD = 0.63 (Human); M = 2.09, SD = 0.85 (Machine). →t-test: Machine < Human (t(48) = 2.74, p < 0.01, d = 0.79).
Near transfer: M = 2.23, SD = 0.71 (Human); M = 1.62, SD = 0.77 (Machine). →t-test: Machine < Human (t(48) = 2.91, p < 0.01, d = 0.84).
Far transfer: M = 1.32, SD = 0.90 (Human); M = 0.77, SD = 0.67 (Machine). →t-test: Machine < Human (t(48) = 2.46, p < 0.05, d = 0.71).

[4] Atkinson et al. (2005), experiment 2
Participants: N = 40 high school students. Topic: proportional reasoning word problems.
Independent variables: human vs. machine voice (both given by PA “Peedy”, MS Agent character).
Results: Motivation: -.
Retention (performance on practice problems): M = 2.33, SD = 0.64 (Human); M = 1.80, SD = 0.86 (Machine). →t-test: Machine < Human (t(38) = 2.20, p < 0.01, d = 0.63).
Near transfer: M = 2.51, SD = 0.59 (Human); M = 1.84, SD = 0.86 (Machine). →t-test: Machine < Human (t(38) = 2.89, p < 0.01, d = 0.83).
Far transfer: M = 1.74, SD = 0.74 (Human); M = 1.15, SD = 0.82 (Machine). →t-test: Machine < Human (t(38) = 2.42, p < 0.05, d = 0.70).
[5] Baylor and Kim (2005), experiment 2
Participants: N = 71 pre-service teachers; 12.5% male, 87.5% female; age: 19.6 (SD = 3.93). Topic: instructional planning.
Independent variables: different PA roles: expert, motivator, mentor (expert and motivator).
Results:
Motivation (perceived engagement): M = 2.63, SD = 0.84 (Expert); M = 3.50, SD = 0.86 (Motivator); M = 3.66, SD = 0.75 (Mentor). →Contrast 1: Expert < Motivator & Mentor (F = 22.56, p < 0.001, d = 1.76); contrast 2: Motivator & Expert < Mentor (F = 6.99, p < 0.01, d = 0.80).
Motivation (self-efficacy): M = 3.03, SD = 0.73 (pre, Expert); M = 3.06, SD = 0.92 (post, Expert); M = 2.67, SD = 0.82 (pre, Motivator); M = 2.96, SD = 0.86 (post, Motivator); M = 2.50, SD = 1.10 (pre, Mentor); M = 3.05, SD = 1.14 (post, Mentor). →Contrast 1: Expert < Motivator & Mentor (F = 2.83, p = 0.09); contrast 2: Motivator & Expert < Mentor (F = 2.66, p = 0.10).
Retention: -.
Transfer: M = 2.85, SD = 1.26 (Expert); M = 2.54, SD = 1.02 (Motivator); M = 3.15, SD = 0.81 (Mentor). →Contrast 1: Expert & Motivator < Mentor (F = 3.89, p < 0.05, d = 0.50); contrast 2: Motivator < Expert & Mentor (F = 3.89, p > 0.05, d = 0.44).

[6] Baylor and Kim (2009)
Participants: N = 236 undergraduate students; 32% male, 68% female. Topic 1 (procedural): use of a web-based software program; topic 2 (attitudinal): attitude toward intellectual property rules and laws.
Independent variables: 2 × 2 × 2 design: PA deictic gestures (present “DG+” vs. absent “DG-”) × facial expressions (present “FE+” vs. absent “FE-”) × learning outcome (procedural “PLO” vs. attitudinal “ALO”). (PA: human character, male)
Results: Motivation: -.
Retention: M = -0.22, SD = 1.25 (PLO, FE-, DG-); M = 0.10, SD = 1.12 (PLO, FE-, DG+); M = 0.14, SD = 0.68 (PLO, FE+, DG-); M = -0.02, SD = 0.86 (PLO, FE+, DG+); M = -0.20, SD = 0.79 (ALO, FE-, DG-); M = -0.31, SD = 0.85 (ALO, FE-, DG+); M = 1.23, SD = 0.78 (ALO, FE+, DG-); M = -0.19, SD = 0.89 (ALO, FE+, DG+).
→MANOVA: facial expressions present > facial expressions absent (F(1,228) = 12.97, p < 0.05; no effect size provided).
→MANOVA: deictic gestures present > deictic gestures absent (F(1,228) = 7.61, p < 0.05; no effect size provided).
→MANOVA: interaction effect: facial expressions present: DG+ < DG-; facial expressions absent: DG+ > DG- (F(1,228) = 12.87, p < 0.05; η² = 0.053).
→MANOVA: interaction effect: procedural learning outcome: DG+ > DG-; attitudinal learning outcome: DG+ < DG- (F(1,228) = 11.53, p < 0.05; η² = 0.048).
Transfer: -.

[7] Baylor and Ryu (2003)
Participants: N = 75 pre-service teachers; 77% female, 23% male; age: 20.75 (SD = 2.01). Topic: instructional planning.
Independent variables: animated vs. static image of the PA “Chris” (MS Agent character “Merlin”) vs. no agent.
Results:
Motivation (perceived engagement): M = 3.27, SD = 0.65 (Animated); M = 2.83, SD = 0.93 (Static); M = 2.89, SD = 0.96 (No Image). →Contrast 1: No Agent = Animated & Static (not significant, no statistics); contrast 2: Static < Animated (t(66) = -1.76, p ≤ 0.05, d = 0.55); post-hoc pairwise comparison: No Agent < Animated (t(46) = 1.63, p ≤ 0.05, d = 0.46).
Retention/Transfer (developed instructional plan): not significant (no statistics).
[8] Clarebout and Elen (2006)
Participants: N = 185 first-year bachelor university students in educational science; 162 female, 23 male; age range: 18–20. Topic: ecology problem.
Independent variables: no agent vs. fixed agent vs. adapted agent × self-regulation. (PA: Merlin)
Results: Motivation: -.
Retention (number of arguments): M = 3.75, SD = 2.33 (Fixed Agent); M = 2.64, SD = 2.30 (No Agent); M = 2.85, SD = 1.87 (Adapted Agent). →MANOVA: Adapted Agent = Fixed Agent > No Agent (Wilks’ lambda = 0.95, F(1,165) = 9.11, p = 0.00, η² = 0.05); no statistics for self-regulation.
Transfer (no means provided): →ANOVA: Adapted Agent = Fixed Agent = No Agent (F(2,180) = 1.48, p > 0.05). →Interaction effect with self-regulation: high self-regulators with fixed agent < high self-regulators with adapted agent (Wilks’ lambda = 0.94, F(4,165) = 2.60, p = 0.04, η² = 0.06).

[9] Choi and Clark (2006)
Participants: N = 74 college students; 32.4% male, 67.6% female; age: 24.21 (SD = 4.81). Topic: English relative clause.
Independent variables: human agent vs. arrow; prior knowledge (post-hoc). (PA: Genie)
Results: Motivation: -.
Retention (no means for prior knowledge provided): →Interaction effect: low prior knowledge: Agent > Arrow; intermediate and high prior knowledge: Agent = Arrow (F(2,68) = 3.747, p = 0.037; η² = 0.09).
Retention (sentence combination): M = 8.97, SD = 3.11 (Arrow); M = 8.62, SD = 3.36 (Agent). →ANOVA: Agent = Arrow (F(1,72) = 0.388, p = 0.535).
Retention (interpretation test): M = 6.97, SD = 1.96 (Arrow); M = 7.17, SD = 1.77 (Agent). →ANOVA: Agent = Arrow (F(1,72) = 0.558, p = 0.457).
Retention (grammatical judgement): M = 9.31, SD = 2.29 (Arrow); M = 8.81, SD = 2.12 (Agent).
Transfer: -.

[10] Craig et al. (2002), experiment 1
Participants: N = 135 college students. Topic: lightning formation.
Independent variables: 3 × 3 design: PA properties (no agent vs. agent with mimic vs. agent with mimic + gesture) × instructional picture features (static vs. sudden onset vs. animation). (PA: MS Agent character, human character)
Results: Motivation: -.
Retention: M = 3.91, SD = 2.88 (No Agent); M = 3.80, SD = 2.61 (Agent with Mimic); M = 3.49, SD = 2.62 (Agent with Mimic + Gestures). →ANCOVA: Agent with Mimic + Gestures = Agent with Mimic = No Agent (not significant, no statistics).
Transfer: M = 1.69, SD = 1.16 (No Agent); M = 1.76, SD = 1.15 (Agent with Mimic); M = 2.02, SD = 1.37 (Agent with Mimic + Gestures). →ANCOVA: Agent with Mimic + Gestures = Agent with Mimic = No Agent (not significant, no statistics).

[11] Craig et al. (2002), experiment 2
Participants: N = 71 college students. Topic: lightning formation.
Independent variables: voice vs. text vs. voice + text (all given by PA; MS Agent character, human character).
Results: Motivation: -.
Retention: M = 3.46, SD = 1.61 (Voice); M = 1.70, SD = 1.43 (Text); M = 2.63, SD = 2.22 (Voice + Text). →ANCOVA: Voice > Text (F(2,68) = 5.68, p ≤ 0.01).
Transfer: M = 1.83, SD = 1.09 (Spoken); M = 0.96, SD = 0.71 (Printed); M = 1.13, SD = 0.99 (Spoken + Printed). →ANCOVA: Voice > Voice + Text & Text (F(2,68) = 5.13, p ≤ 0.01).
[12]
Dirkin et al. (2005)
N = 116
text only vs. voice only vs.
voice + static image of PA
“Chuck” vs. less mechanical
voice + animated image of PA
“Chuck”
Motivation: Retention: M = 12.7, SD = 3.2/Text; M = 12.4,
SD = 2.0/Voice; M = 12.3, SD = 2.6/Voice + Static
Agent, M = 13.3, SD = 2.4/Voice + Animated
Agent
→ANOVA: Voice + Animated
Agent = Voice + Static Agent = Voice = Text
(F(3,112) = 0.909, p = 0.439)
Transfer: -
likable PA vs. neutral PA vs.
dislikable PA vs. no agent (PA:
human character)
Motivation (state 1): M = 5.27, SD = 1.03/Likable
PA; M = 4.98, SD = 1.10/neutral PA; M = 4.88,
SD = 1.14/dislikable PA; M = 5.09, SD = 0.87/No
Agent
→contrast: Agent-groups = No Agent
(T(282) = –0.26, p = .80)
Motivation (state 2): M = 5.01, SD = 0.97/Likable
PA; M = 4.64, SD = 1.21/neutral PA; M = 4.51,
SD = 1.28/dislikable PA; M = 4.75, SD = 0.95/No
Agent
→contrast: Agent-groups = No Agent
(T(72) = –0.16, p = .87)
Retention: M = 17.23, SD = 5.49/Likable PA;
M = 18.06, SD = 5.30/neutral PA; M = 17.33,
SD = 6.00/dislikable PA; M = 17.81, SD = 4.67/No
Agent
→contrast: Agent-groups = No Agent
(T(284) = –0.307, p = .76)
→ANOVA: Likable PA = Neutral PA= Dislikable
PA= No Agent (F(3, 285) = 1.53, p = .21)
Transfer: M = 7.27, SD = 1.51/Likable PA;
M = 6.39, SD = 2.67/neutral PA; M = 6.39,
SD = 1.42/dislikable PA; M = 6.73, SD = 2.72/No
Agent
→contrast: Agent-groups = No Agent
(T(285) = –0.17, p = .87)
→ANOVA: Likable PA > Neutral PA = Dislikable PA = No Agent (F(3, 285) = 5.70, p = .001, η² = 0.06)
2 × 2 design with control:
appearance of PA (likable vs.
dislikable (LA vs. DA)) × voice
of PA (likable vs. dislikable (LV
vs. DV)) plus control (no agent)
Motivation (state 1): M = 5.38, SD = 1.14/LALV;
M = 4.88, SD = 1.15/DALV; M = 4.97,
SD = 1.10/LADV; M = 4.88, SD = 1.14/DADV;
M = 5.01, SD = 0.89/No Agent
college students
Topic: nanotechnology
[13]
Domagk (2010)
N = 292
Experiment 1
college students
86.0% female, 14% male
Topic: visual perception
[14]
Domagk (2010)
N = 174
Experiment 2
college students
89.1% female, 10.9% male
Topic: visual perception
Motivation (state 2): M = 5.04, SD = 1.25/LALV;
M = 4.45, SD = 1.34/DALV; M = 4.61,
SD = 1.25/LADV; M = 4.54, SD = 1.32/DADV;
M = 4.70, SD = 1.00/No Agent
→MANOVA (state 1&2): Agent-groups
(LALV = DALV = LADV = DADV) = No Agent
(Wilks’ = 0.87, F(24, 552) = 0.96, p = .51)
Retention: M = 17.88, SD = 6.14/LALV; M = 16.77,
SD = 6.30/DALV; M = 18.34, SD = 5.40/LADV;
M = 15.26, SD = 5.35/DADV; M = 18.22,
SD = 5.29/No Agent
→ANOVA: Agent-groups
(LALV = DALV = LADV = DADV) = No Agent (F(4,
168) = 1.77, p = .14)
→ANOVA: Likable Appearance > Dislikable
Appearance (F(1,134) = 4.48, p = .04, η² = 0.03)
→ANOVA: Likable Voice = Dislikable Voice
(F(1,134) = 0.28, p = .60)
Transfer: M = 7.06, SD = 1.59/LALV; M = 6.03,
SD = 1.77/DALV; M = 6.51, SD = 1.54/LADV;
M = 4.06, SD = 1.81/DADV; M = 6.74,
SD = 1.79/No Agent
→ANOVA: DADV < LALV = DALV = LADV = No Agent (F(4, 168) = 16.65, p < 0.001, η² = 0.28)
→ANOVA: Likable Appearance > Dislikable Appearance (F(1,134) = 37.00, p < 0.001, η² = 0.22)
→ANOVA: Likable Voice > Dislikable Voice (F(1,134) = 19.25, p < 0.001, η² = 0.13)
[15]
Dunsworth and
Atkinson (2007)
N = 51
undergraduate students
on-screen text vs. voice vs.
voice + agent
(PA: Dr. Bob)
Topic: Human circulatory
system
[16]
Holmes (2007)
N = 80
fifth graders
3 conditions: SSI: pairs of
students + text;
SSA: pairs of students + agent;
SA: individual student + agent
as partner and as resource
Topic: Ecology problem
[17]
Kim (2007)
46 undergraduate students
29% male, 71% female
2 × 2 × 2 design: agent
competency (high vs.
low) × learner competency
(high vs. low) × interaction
control
age: M = 20.48 (SD = 1.64)
(PA: Mike)
Topic: Instructional planning
Motivation: –
Retention: M = 4.98, SD = 1.90/Text; M = 5.18, SD = 1.59/Voice; M = 6.47, SD = 1.59/Voice + Agent
→pairwise comparison test:
Voice + Agent = Voice = Text (F(2,48) = 4.21,
p < 0.05, Cohen's d = 0.91)
Near Transfer: M = 5.41, SD = 1.50/Text;
M = 5.94, SD = 1.89/Voice; M = 6.41,
SD = 2.00/Voice + Agent
→pairwise comparison test:
Voice + Agent = Voice = Text (F(2,48) = 1.30;
p>.05)
Far Transfer: M = 4.41, SD = 1.54/Text; M = 4.00,
SD = 1.90/Voice; M = 4.76,
SD = 1.92/Voice + Agent
→pairwise comparison test:
Voice + Agent = Voice = Text (F(2,48) = .77,
p>.05)
Motivation: MANOVA (state 1) → SSA & SA
(agent conditions) > SSI (F(1,62) = 5.56,
p = .028)
Retention (deep explanations): (no means
provided) → SA > SSA = SSI (F(2, 29) = 5.821, p <
.008)
Retention (shallow explanations): (no means
provided) → SA = SSA = SSI (F(2,29) = 2.244,
p = .106)
Retention (monitoring statements): (no means
provided) → SA = SSA = SSI (F(2,29) = .103,
p = .902)
Transfer: –
Motivation (self-efficacy): M = 3.13, SD = 0.83/High comp. Learners + High comp. Agent; M = 2.91, SD = 1.38/High comp. Learners + Low comp. Agent; M = 2.22, SD = 0.67/Low comp. Learners + High comp. Agent; M = 3.57, SD = 0.53/Low comp. Learners + Low comp. Agent
M = 3.60, SD = 0.70/High comp. Learners +
Agent Control; M = 3.11, SD = 0.78/Low comp.
Learners + Agent Control; M = 2.33,
SD = 1.22/High comp. Learners + Learner
Control; M = 2.43, SD = 0.98/Low comp.
Learners + Learner Control
→ANOVA: High comp. Learner: High comp.
Agent > Low comp. Agent; Low comp.
Learner: Low comp. Agent > High comp.
Agent (F(1,34) = 9.53, p < 0.01, η² = 0.27)
→ANOVA: High comp. Learners: Agent
Control > Learner Control; Low comp.
Learners: Learner Control > Agent Control
(F(1,34) = 7.51, p < 0.01, η² = 0.22)
Retention (recall): (no means provided)
→ANOVA: High comp. Learner: High comp.
Agent > Low comp. Agent; Low comp.
Learner: Low comp. Agent > High comp.
Agent (F(1,34) = 5.54, p < 0.05, η² = 0.18)
→ANOVA: High comp. Learner: Learner
Control = Agent Control; Low comp. Learner:
Learner Control = Agent Control (no values
given)
Transfer: -
[18]
Kim et al. (2007)
N = 142
Experiment 1
college students
3 × 2 design: emotional
expression (positive vs.
negative vs. neutral) × gender
(male vs. female)
(PA: Chris)
40% male, 60% female
age: 20.25 (SD = 2.27)
Topic: instructional design for
e-learning
[19]
Kim et al. (2007)
N = 56
Experiment 2
pre-service teachers
2 × 2 design: PA empathetic
responses (responsive vs.
nonresponsive) × gender (male
vs. female)
(PA: Chris)
19.6% male, 80.4% female
age: 20.71 (SD = 2.92)
Topic: instructional planning
for sixth graders
[20]
Lusk and Atkinson
(2007)
N = 174
2 × 3 design: worked examples
(WE) (animated vs.
static) × visualisation of PA
(animated vs. static vs. no
agent)
college students
(PA: “Peedy the Parrot”, MS
Agent Character)
Motivation (perception of engagement): M = 3.06,
SD = 0.97/Positive Expression, M = 2.25,
SD = 0.83/Negative, M = 3.12, SD = 0.77/Neutral
→MANOVA: Positive > Negative
(F(2,128) = 14.77, p < 0.001, d = 1.09);
Neutral > Negative (F(2,128) = 14.77, p < 0.001,
d = 1.09)
(no statistics for gender)
→MANOVA: Female < Male (p < 0.05)
Motivation (interest):
Neutral = Negative = Positive (not significant,
no statistics)
(no statistics for gender)
→MANCOVA: Female < Male (Wilk’s i = 0.893,
F(5,127) = 3.05, p < 0.05, 2 = 0.11)
Motivation (self-efficacy):
Neutral = Negative = Positive (not significant,
no statistics)
Female = Male (not significant, no statistics)
Retention (recall): Neutral = Negative = Positive
(not significant, no statistics)
M = 1.52, SD = 1.84/Male; M = 0.65,
SD = 0.89)/Female
→ANOVA: Female < Male (F(1,66) = 7.08,
p < 0.01, d = 0.60)
Transfer: –
Motivation (perception of engagement):
Responsive = Nonresponsive (not significant,
no statistics)
M = 3.79, SD = 0.52/Male; M = 3.51,
SD = 0.43/Female
→ANOVA: Female < Male (F(1,48) = 4.11,
p < 0.05, d = 0.59)
Motivation (interest): (no statistics)
→MANCOVA: Responsive > Nonresponsive
(Wilk’s = 0.53, F(5,20) = 3.54, p < 0.05,
2 = 0.47)
Female = Male (not significant, no statistics)
Motivation (self-efficacy): (no statistics)
→MANCOVA: Responsive > Nonresponsive
(Wilk’s = 0.71, F(3,31) = 4.29, p < 0.01,
2 = 0.29)
Female = Male (not significant, no statistics)
Retention (recall): Responsive = Nonresponsive
(not significant, no statistics)
Female = Male (not significant, no statistics)
Transfer: –
Motivation: –
Retention (performance on practice problems):
M = 1.94, SD = 0.86/Animated PA-Static WE,
M = 1.63, SD = 1.14/Animated PA-Animated
WE; M = 1.84, SD = 0.95/Static PA-Static WE;
M = 1.91, SD = 1.03/Static PA-Animated WE,
M = 1.81, SD = 0.95/No PA-Static WE; M = 1.78,
SD = 1.03/No PA-Animated WE
→ANCOVA: Animated PA = Static PA = No PA
(not significant, no statistics)
75.3% female, 24.7% male
Near Transfer: M = 8.21, SD = 2.92/Animated
PA-Static WE; M = 9.41, SD = 2.53/Animated
PA-Animated WE; M = 7.52, SD = 3.34/Static
PA-Static WE; M = 7.55, SD = 2.80/Static
PA-Animated WE; M = 7.28, SD = 3.40/No
PA-Static WE; M = 8.48, SD = 3.54/No
PA-Animated WE
→ANCOVA: Animated PA > No PA
(F(2,167) = 3.67, MSE = 0.42, p < 0.05, Cohen’s
f = 0.18)
Far Transfer: M = 5.24, SD = 3.20/Animated
PA-Static WE; M = 6.52, SD = 3.92/Animated
PA-Animated WE; M = 4.52, SD = 3.15/Static
PA-Static WE; M = 5.07, SD = 3.10/Static
PA-Animated WE; M = 4.21, SD = 3.55/No
PA-Static WE; M = 5.21, SD = 4.04/No
PA-Animated WE
→ANCOVA: Animated PA > No PA
(F(2,167) = 3.79, MSE = 0.52, p < 0.05, Cohen’s
f = 0.18)
Topic: proportional reasoning
word problems
[21]
Mayer et al. (2003)
Experiment 1
N = 52
college students
Topic: electric motor
voice vs. text
(both given by PA “Dr. Phyz”)
Motivation: –. Retention: –.
Transfer: M = 8.43, SD = 2.56/Spoken; M = 6.54,
SD = 2.22/Printed
→t-Test: Voice > Text (t(54) = 2.96, p = 0.0046,
effect size = 0.85)
[22]
Mayer et al. (2003)
Experiment 2a
N = 37
college students
Topic: electric motor
interactive (control over pace) vs. noninteractive (no control over pace) (both given by PA "Dr. Phyz")
Motivation: –. Retention: –.
Transfer: M = 7.95, SD = 2.27/Interactive; M = 5.72, SD = 3.18/Noninteractive
→t-Test: Interactive > Noninteractive (t(35) = 2.46, p = 0.0189, effect size = 0.70)
[23]
Mayer et al. (2003)
Experiment 2b
N = 41
college students
Topic: electric motor
interactive (control over pace) vs. noninteractive (no control over pace) (both given by PA "Dr. Phyz")
Motivation: –. Retention: –.
Transfer (1 week delayed): M = 6.41, SD = 2.92/Interactive; M = 3.68, SD = 2.65/Noninteractive
→t-Test: Interactive > Noninteractive (t(39) = 3.11, p = 0.0035, effect size = 1.03)
[24]
Mayer et al. (2003)
Experiment 3
N = 54
college students
Topic: electric motor
prequestion vs. no prequestion (both given by PA "Dr. Phyz")
Motivation: –. Retention: –.
Transfer: M = 8.90, SD = 2.50/Prequestion; M = 6.24, SD = 3.20/No Prequestion
→t-Test: No Prequestion < Prequestion (t(52) = 3.45, p = 0.0011, effect size = 0.83)
[25]
Mayer et al. (2003)
Experiment 4
N = 39
college students
Topic: electric motor
agent vs. no agent (PA: Dr. Phyz)
Motivation: –. Retention: –.
Transfer (Problem solving): M = 6.60, SD = 3.28/Agent; M = 5.95, SD = 3.52/No Agent
→t-Test: Agent = No Agent (t(37) = .55, p = .58)
[26]
Moreno (2004)
Experiment 1
N = 49
college students
age: 18.44 (SD = 0.75)
Topic: Botany "Design-a-Plant"
explanatory vs. corrective feedback (both given by PA "Herman the Bug")
Motivation: M = 5.00, SD = 2.26/Explanatory; M = 5.54, SD = 1.96/Corrective
→not significant (no statistics)
Retention: M = 7.48, SD = 1.12/Explanatory; M = 6.38, SD = 1.44/Corrective
→ANOVA: Corrective < Explanatory (F(1,47) = 8.59, MSE = 14.60, p = 0.005, effect size = 0.76)
Transfer: M = 32.74, SD = 6.38/Explanatory; M = 24.92, SD = 6.72/Corrective
→ANOVA: Corrective < Explanatory (F(1,47) = 17.29, MSE = 745.56, p < 0.0001, effect size = 1.16)
[27]
Moreno (2004)
Experiment 2
N = 55
college students
age: 18.48 (SD = 0.82)
Topic: Botany "Design-a-Plant"
explanatory vs. corrective feedback (both given by PA "Herman the Bug")
Motivation: M = 5.71, SD = 2.66/Explanatory; M = 4.56, SD = 2.19/Corrective
→not significant (no statistics)
Retention: M = 6.82, SD = 1.63/Explanatory; M = 6.63, SD = 1.36/Corrective
→not significant (no statistics)
Transfer: M = 34.75, SD = 7.93/Explanatory; M = 24.93, SD = 6.23/Corrective
→ANOVA: Corrective < Explanatory (F(1,53) = 25.95, MSE = 1326.61, p < 0.001, effect size = 1.58)
[28]
Moreno and Flowerday (2006)
N = 80
college students
26.2% male, 73.8% female
age: 26.88 (SD = 8.17)
Topic: car braking system
2 × 2 × 2 design: PA's gender (same gender as learner vs. different gender) × similarity of ethnicity (same vs. different ethnicity) × choice of character (choice vs. no choice)
Motivation: –
Retention: M = 5.48, SD = 1.85/Same Gender; M = 4.97, SD = 2.63/Different Gender; M = 4.86, SD = 1.91/Same Ethnicity; M = 5.19, SD = 1.55/Different Ethnicity; M = 5.66, SD = 2.29/Choice; M = 4.79, SD = 2.30/No Choice
→MANOVA: Different Gender = Same Gender (Wilks' Λ = 0.94, F(3,70) = 1, p = ns); Different Ethnicity = Same Ethnicity (Wilks' Λ = 0.92, F(3,70) = 0.94, p = ns); No Choice = Choice (Wilks' Λ = 0.96, F(3,70) = 1, p = ns)
Transfer: M = 6.45, SD = 2.85/Same Gender; M = 6.69, SD = 3.80/Different Gender; M = 6.24, SD = 2.77/Same Ethnicity; M = 6.97, SD = 2.24/Different Ethnicity; M = 6.73, SD = 3.32/Choice; M = 6.40, SD = 3.33/No Choice
→MANOVA: Different Gender = Same Gender (Wilks' Λ = 0.94, F(3,70) = 1, p = ns); Different Ethnicity = Same Ethnicity (Wilks' Λ = 0.92, F(3,70) = 0.94, p = ns); No Choice = Choice (Wilks' Λ = 0.96, F(3,70) = 1, p = ns)
(some marginal interaction effects are not reported, see Baylor & Flowerday)
[29]
Moreno and Mayer (2002)
Experiment 1
N = 89
college students
Topic: Botany "Design-a-Plant"
2 × 3 design: modality (spoken text vs. printed text) × immersion (desktop display vs. head-mounted-display sitting vs. head-mounted-display walking) (PA: "Herman the Bug")
Motivation: –
Retention: M = 6.01, SD = 1.85/Desktop; M = 6.50, SD = 1.85/HMD Sitting; M = 5.82, SD = 2.04/HMD Walking
→two-way ANOVA: HMD Walking = HMD Sitting = Desktop (F(2,81) = 0.90, MSE = 2.85, p = 0.41)
M = 6.84, SD = 1.51/Voice; M = 5.43, SD = 2.01/Text
→two-way ANOVA: Voice > Text (F(1,81) = 14.03, MSE = 44.72, p < 0.01)
Transfer: M = 31.06, SD = 9.54/Desktop; M = 30.69, SD = 9.20/HMD Sitting; M = 30.52, SD = 11.68/HMD Walking
→two-way ANOVA: HMD Walking = HMD Sitting = Desktop (F(2,81) = 0.01, MSE = 1.01, p = 0.99)
M = 36.12, SD = 8.34/Voice; M = 25.57, SD = 8.80/Text
→two-way ANOVA: Voice > Text (F(1,81) = 35.39, MSE = 2,452.76, p < 0.01)
[30]
Moreno and Mayer
(2002)
N = 75
Experiment 2
college students
3 × 2 design: modality (spoken
text vs. printed text vs.
spoken + printed
text) × immersion (desktop
display vs.
head-mounted-display
walking)
(PA: “Herman the Bug”)
Topic: Botany
“Design-a-Plant”
[31]
Moreno and Mayer
(2004)
N = 48
college students
2 × 2 design: speech style
(personalized vs.
formal) × immersion (desktop
display vs.
head-mounted-display)
(PA: “Herman the Bug”)
43.7% male, 56.3% female
age: 19.54 (SD = 1.70)
Topic: Botany
“Design-a-Plant”
[32]
Moreno and Mayer
(2005)
N = 105
Experiment 1
college students
29.5% male, 70.5% female
age: 18.44 (SD = 0.75)
2 × 2 design: guidance
(explanatory vs. corrective
feedback) × reflection
(students asked to reflect their
choices or not)
(all given by PA “Herman the
Bug”)
Motivation: –
Retention: M = 6.81, SD = 1.55/Desktop;
M = 6.55, SD = 1.68/HMD Walking
→ two-way ANOVA: HMD Walking = Desktop
(F(1,69) = 0.61, MSE = 1.40, p = 0.44)
M = 7.25, SD = 1.57/Voice; M = 5.88,
SD = 1.51/Text; M = 6.96, SD = 1.46/Voice + Text
→ two-way ANOVA:
Voice = Voice + Text > Text (F(2,69) = 5.45,
MSE = 12.52, p < 0.01)
Transfer: M = 28.17, SD = 7.47/Desktop;
M = 30.55, SD = 7.36/HMD Walking
→ two-way ANOVA: HMD Walking = Desktop
(F(1,69) = 2.37, MSE = 108.77, p = 0.13)
M = 32.08, SD = 6.16/Spoken; M = 24.84,
SD = 5.96/Printed; M = 30.77, SD = 8.15/Spoken + Printed
→ two-way ANOVA:
Voice = Voice + Text > Text (F(2,69) = 8.73,
MSE = 400.95, p < 0.01)
Motivation: –
Retention: M = 5.46, SD = 2.09/HMD; M = 6.55,
SD = 1.50/Desktop
→ two-way ANOVA: HMD < Desktop
(F(1,44) = 5.22, MSE = 14.08, p < 0.05, d = -0.73)
M = 6.79, SD = 1.29/Personalized; M = 5.21,
SD = 2.06/Formal
→ two-way ANOVA: Formal < Personalized
(F(1,44) = 11.14, MSE = 30.08, p = 0.002, effect
size = 0.77)
Transfer: M = 26.17, SD = 6.96/HMD; M = 27.83,
SD = 5.48/Desktop
→ two-way ANOVA: HMD < Desktop
(F(1,44) = 1.53, MSE = 33.33, p = 0.22, d = -0.30)
M = 31.13, SD = 4.36/Personalized; M = 22.88,
SD = 5.03/Formal
→ two-way ANOVA: Formal < Personalized
(F(1,44) = 37.36, MSE = 816.75, p = 0.0001, effect
size = 1.64)
Motivation: –
Retention: M = 7.05, SD = 1.52/Explanatory,
M = 6.46, SD = 1.40/Corrective
→ ANOVA (Bonferroni corrected alpha
level = 0.0125): Corrective = Explanatory
(F(1,101) = 4.68, MSE = 10.07, p = 0.03, effect
size = 0.42)
M = 6.71, SD = 1.49/Reflection; M = 6.84,
SD = 1.50/No Reflection
→ ANOVA (Bonferroni corrected alpha
level = 0.0125): No Reflection = Reflection
(F(1,101) = 0.18, MSE = 0.38, p = 0.67)
Topic: Botany
“Design-a-Plant”
[33]
Moreno and Mayer
(2005)
N = 71
Experiment 2
college students
Close Transfer: M = 20.60,
SD = 3.75/Explanatory; M = 17.98,
SD = 3.51/Corrective → ANOVA (Bonferroni
corrected alpha level = 0.0125):
Corrective < Explanatory (F(1,101) = 13.62,
MSE = 179.71, p < 0.01, effect size = 0.75)
M = 19.82, SD = 4.12/Reflection; M = 18.90,
SD = 3.53/No Reflection
→ ANOVA (Bonferroni corrected alpha
level = 0.0125): No Reflection = Reflection
(F(1,101) = 1.34, MSE = 17.71, p = 0.25)
Far Transfer: M = 12.95, SD = 4.94/Explanatory;
M = 6.53, SD = 3.44/Corrective → ANOVA
(Bonferroni corrected alpha level = 0.0125):
Corrective < Explanatory (F(1,101) = 57.24,
MSE = 1,076.66, p < 0.01, effect size = 1.87)
M = 9.89, SD = 5.45/Reflection; M = 9.88,
SD = 5.29/No Reflection
→ ANOVA (Bonferroni corrected alpha
level = 0.0125): No Reflection = Reflection
(F(1,101) = 0.03, MSE = 0.62, p = 0.86)
2 × 2 design: selection of
answers (interactive vs.
noninteractive) × reflection
(students asked to reflect their
choices or not)
(PA: “Herman the Bug”)
28.2% male, 71.8% female
age: 20.44 (SD = 4.54)
Topic: Botany
“Design-a-Plant”
[34]
Results
Moreno and Mayer
(2005)
N = 78
2 × 3 design: selection of
answers (interactive vs.
non-interactive) × reflection
(students asked to reflect their
choices vs. no reflection vs.
program reflective)
Experiment 3
college students
(PA: “Herman the Bug”)
Motivation: –
Retention: M = 7.11, SD = 1.51/Interactive;
M = 6.89, SD = 1.64/Noninteractive
→ two-way ANOVA:
Noninteractive = Interactive (F(1,67) = 0.43,
MSE = 0.97, p = 0.52)
M = 7.06, SD = 1.55/Reflection; M = 6.89,
SD = 1.64/No Reflection
→ two-way ANOVA: No
Reflection = Reflection (F(1,67) = 0.11,
MSE = 0.24, p = 0.75)
Close Transfer: M = 20.58, SD = 3.65/Interactive;
M = 22.37, SD = 4.98/Noninteractive
→ two-way ANOVA:
Noninteractive > Interactive (F(1,67) = 3.86,
MSE = 63.58, p = 0.05, d: not provided)
M = 22.92, SD = 4.47/Reflection; M = 19.97,
SD = 3.89/No Reflection
→ two-way ANOVA: No
Reflection < Reflection (F(1,67) = 9.99,
MSE = 164.40, p < 0.01, effect size = 0.76)
Far Transfer: M = 12.39, SD = 4.75/Interactive;
M = 18.23, SD = 9.20/Noninteractive
→ two-way ANOVA:
Noninteractive > Interactive (F(1,67) = 14.77,
MSE = 629.26, p < 0.01, d = 0.63)
M = 17.33, SD = 8.97/Reflection; M = 13.14,
SD = 5.81/No Reflection
→ two-way ANOVA: No
Reflection < Reflection (F(1,67) = 8.46,
MSE = 360.29, p < 0.01, effect size = 0.72)
Motivation: –
Retention: M = 7.33,
SD = 1.76/Interactive-Program Reflection;
M = 7.19, SD = 1.42/Interactive-Self Reflection;
M = 6.00, SD = 1.77/Interactive-No Reflection;
M = 7.38, SD = 1.31/Noninteractive-Program
Reflection; M = 5.81,
SD = 1.64/Noninteractive-No Reflection
→ ANOVA: Noninteractive-No Reflection
(NINR) = Noninteractive-Program Reflection
(NIPR) = Interactive-No Reflection
(INR) = Interactive-Self Reflection
(ISR) = Interactive-Program Reflection (IPR)
(no statistics for interactive vs.
non-interactive)
→ ANOVA: No Reflection < Reflection
(F(4,73) = 3.66, MSE = 9.23, p < 0.01, effect
size = 0.81)
Close Transfer: M = 23.07, SD = 2.52/IPR;
M = 21.19, SD = 2.95/ISR; M = 22.20,
SD = 3.32/INR; M = 22.50, SD = 4.35/NIPR;
M = 20.69, SD = 5.55/NINR
→ ANOVA: NINR = NIPR = INR = ISR = IPR
(F(4,73) = 0.97, MSE = 14.79, p = 0.43)
→ ANOVA: Noninteractive-No
Reflection = Noninteractive-Program
Reflection = Interactive-No
Reflection = Interactive-Self
Reflection = Interactive-Program Reflection
(F(4,73) = 0.97, MSE = 14.79, p = 0.43)
Far Transfer: M = 19.60, SD = 5.90/IPR; M = 13.44,
SD = 5.73/ISR; M = 13.73, SD = 6.03/INR;
M = 18.93, SD = 6.57/NIPR; M = 11.63,
SD = 5.50/NINR
→ NINR = NIPR = INR = ISR = IPR (no statistics
for interactive vs. non-interactive)
→ Self Reflection & No Reflection < Program
Reflection (F(4,73) = 5.61, MSE = 199.00,
p < 0.01, effect size = 0.80)
30.8% male, 69.2% female
age: 19.72 (SD = 2.19)
Topic: Botany
“Design-a-Plant”
[35]
[36]
Moundridou and
Virvou (2002)
Perez and Salomon
(2005)
N = 48
agent vs. no agent
Motivation: -
college students
Topic: algebraic equations
(PA: talking head)
Retention: –
Transfer (Problem solving): M = 4.75,
SD = .44/Agent; M = 4.46, SD = .72/No Agent
→ t-Test: Agent = No Agent (t(46) = 1.69,
p = .10)
N = 27
text-based feedback vs. spoken
feedback plus hints and clues
by PA “Genie” (MS Agent
Character)
Motivation: -
college students
100% male
Topic: CD-Rom Player
disassembly process
[37]
Plant et al. (2009)
N = 106
male agent vs. female agent vs.
no agent
Middle school students
(PA: male & female human)
42% male, 58% female
Age: 13.63 (SD = .88)
Topic: mathematics
[38]
Veletsianos (2009)
N = 59
college students
5 male, 54 female
Age: 21.63 (SD = 2.05)
Topic: educational technology
Expressive agent (EA) versus
non-expressive agent (NEA)
(PA: human character, female)
Retention (wrong tool choice): M = 7.2,
SD = 4.7/Text-based Feedback, No Agent;
M = 6.9, SD = 5.2/Agent → t-Test: Agent = No
Agent (not significant, no statistics)
Transfer: -
Motivation (career interest, i.e. motivation to study engineering): M = 3.48, SD = 1.06/female agent; M = 2.70, SD = 1.22/male agent; M = 2.70, SD = 1.24/no agent
→ ANOVA: Female > Male = No Agent
(F(1,100) = 5.26, p < 0.01)
Retention: M = 4.76; SD = 1.65/female agent;
M = 3.90; SD = 1.64/male agent; M = 3.55,
SD = 1.73/no agent
→ ANOVA: Female > Male = No Agent
(F(1,100) = 5.30; p < 0.01)
Transfer: –. Motivation: –.
Retention: M = 7.90, SD = 1.61/EA; M = 6.72, SD = 1.73/NEA
→ANOVA: Expressive agent > non-expressive agent (F(1,53) = 7.13, p < 0.05, Cohen's d = .71)
Transfer: -
[39]
Wang et al. (2008)
N = 37
social intelligence of the PA
(polite feedback that promotes
learner face and mitigates face
threat vs. direct feedback that
disregards learner face)
Motivation (self-efficacy): M = 2.03,
SD = 0.76/Polite; M = 2.13, SD = 0.61/Direct
college students
Topic: factory modelling and
simulation (forecast future
demands, develop production
plan)
→ t-Test: Direct = Polite (t(35) = -0.45, p = 0.28)
Motivation (interest): M = 3.40, SD = 1.19/Polite;
M = 3.35, SD = 0.86/Direct
→ t-Test: Direct = Polite (t(35) = 0.14, p = 0.89)
Retention: M = 19.45, SD = 5.61/Polite;
M = 15.65, SD = 5.15/Direct
→ t-Test: Direct < Polite (t(35) = 2.135, p = 0.04,
d = 0.73)
Transfer: –
Note. The table comprises the results of the studies as far as reported by the authors. Studies with a control group (no agent) are marked in light grey.
Studies with a control group and more than one agent group are marked in dark grey.
Fig. 2. Examples of pedagogical agents: a talking head (left, Moundridou & Virvou, 2002), “Peedy the Parrot” (middle, Microsoft©, Atkinson, 2002), and a
cartoon human character (right, Craig et al., 2002).
5. Under what conditions do pedagogical agents facilitate learner motivation and learning outcomes?
To enable systematic comparisons of existing studies on pedagogical agents, we propose a framework that comprises the
conditions of their use (cf. Domagk, 2008): Pedagogical Agents-Conditions of Use (PACU).
5.1. Pedagogical Agents-Conditions of Use Model (PACU)
PACU comprises four conditions of the use of pedagogical agents: (1) the learning environment in which the pedagogical
agent is implemented and its topic; (2) the characteristics of the learner who works with the learning environment; (3) the
functions that the pedagogical agent executes (the instructional support that it provides such as feedback, guidance or direct
instruction); and (4) the pedagogical agent's design (the choice of the character, with features like human vs. non-human
characters, animation, voice vs. text, gender, clothing, etc.) (Fig. 3).
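The four PACU conditions can be thought of as a simple record attached to each reviewed study. A minimal sketch, assuming a Python encoding of our own (the field values below paraphrase study [8], Clarebout and Elen, 2006; the class and field names are illustrative, not part of PACU):

```python
from dataclasses import dataclass

# Illustrative encoding of the four PACU conditions of use.
@dataclass
class PACUConditions:
    learning_environment: str  # (1) environment and topic
    learner: str               # (2) learner characteristics
    agent_function: str        # (3) instructional support provided
    agent_design: str          # (4) character and its surface features

# Example record, paraphrasing study [8] from Table 1 (labels are ours):
study_8 = PACUConditions(
    learning_environment="multimedia learning environment",
    learner="self-regulation skills",
    agent_function="adapted vs. fixed (predefined) advice",
    agent_design="animated character",
)
print(study_8.learner)  # -> self-regulation skills
```

Such a record makes the interdependence of the conditions explicit: a comparison between two studies is only meaningful when all four fields are taken into account.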
These conditions are not independent of each other: an effective pedagogical agent for children might look and act quite differently from one used in educational software for adults. These interrelationships between the conditions are represented by the arrows in Fig. 3. PACU can now be used as a framework to review studies with respect to the question of under what conditions pedagogical agents facilitate motivation and learning outcomes.
5.2. Under what conditions do pedagogical agents facilitate learner motivation and learning outcomes?—Empirical evidence
To answer this question, the experimental design of a study needs to fulfill two conditions: (a) comparing at least two
agent groups to (b) one control group. Without a control group, it is not possible to provide evidence that the pedagogical agent indeed facilitated motivation or learning outcomes. Without a second agent group, it cannot be determined whether possible differences between the agent and the control group are due to the mere presence of the pedagogical agent or to its specific features. We therefore selected those studies that met both conditions.
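The two selection conditions can be read as a simple filter over study records. A minimal sketch (the field names are ours; the bracketed study numbers and group structures paraphrase three entries from Table 1):

```python
# Hypothetical study records: [25] compared one agent group to a no-agent control,
# [13] compared three agent groups to a control, [21] had no control group.
studies = [
    {"no": 25, "agent_groups": 1, "no_agent_control": True},
    {"no": 13, "agent_groups": 3, "no_agent_control": True},
    {"no": 21, "agent_groups": 2, "no_agent_control": False},
]

def meets_both_conditions(study):
    # (a) at least two agent groups and (b) a no-agent control group
    return study["agent_groups"] >= 2 and study["no_agent_control"]

selected = [s["no"] for s in studies if meets_both_conditions(s)]
print(selected)  # -> [13]
```

Only records satisfying both conditions, like study [13], can separate the effect of an agent's presence from the effect of its features.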
Only 15 of the 39 reviewed studies comprised a control group, and 10 of these additionally comprised more than one agent group (see Table 1). We applied the Pedagogical Agents-Conditions of Use Model (PACU, Fig. 3) to systematize the review of these studies.
Fig. 3. Pedagogical Agents—Conditions of Use Model (PACU).
The majority of these studies, 8 out of 10, concerned the design of pedagogical agents. Three of these studies compared animated and static agents to a control group (Baylor & Ryu, 2003; Dirkin et al., 2005; Lusk & Atkinson, 2007).
They reported no differences in retention. Lusk and Atkinson (2007) reported an advantage of the animated agent on
near and far transfer compared to the static agent and the control group (Cohen’s f = 0.18 for near and far transfer).
Baylor and Ryu (2003) found higher engagement ratings (motivation) in the animated agent group than in the static
agent (d = 0.55) and the control group (d = 0.46). Craig et al. (2002) reported no differences in retention or transfer when
comparing an agent with mimic and gestures to an agent with mimic and no agent. Atkinson (2002, experiment 1)
compared aural and textual explanations of pedagogical agents to no-agents groups (voice only, text only). He found an
advantage for aural explanations on near transfer with a large effect size (Cohen’s f = 0.42), but no difference between
aural and textual explanations on far transfer. The presence of the agent, however, yielded no differences on near or
far transfer. Plant et al. (2009) reported an advantage of a female agent on retention and career interest (motivation)
compared to a male agent (d = 0.53) and the control group (d = 0.71), which did not differ from each other. Two studies compared pedagogical agents that were likable or dislikable in appearance (Domagk, 2010, experiment 1) and voice (Domagk, 2010, experiment 2) to a control group without an agent. Both studies yielded no differences in motivation and retention between the different agent groups and the control group. In contrast, differences were found on transfer performance. The first study showed an advantage of the likable pedagogical agent compared to the control and the other agent groups (η² = 0.06). The second study, however, indicated a disadvantage of the pedagogical agent dislikable in appearance and voice (η² = 0.28).
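The η² values cited above can be recovered from the reported F statistics: for a one-way effect, partial η² equals F·df_effect / (F·df_effect + df_error). A minimal sketch (the function name is ours; the checks reproduce values reported for Domagk, 2010, rounded to two decimals, so small discrepancies with other entries may reflect different error terms):

```python
def eta_squared(f_value, df_effect, df_error):
    """Partial eta squared recovered from an F statistic and its degrees of freedom."""
    return (f_value * df_effect) / (f_value * df_effect + df_error)

# Domagk (2010, experiment 2), transfer: F(4,168) = 16.65, reported eta^2 = 0.28
print(round(eta_squared(16.65, 4, 168), 2))  # -> 0.28
# Domagk (2010, experiment 2), likable vs. dislikable appearance: F(1,134) = 37.00
print(round(eta_squared(37.00, 1, 134), 2))  # -> 0.22
```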
One of the ten studies concerned the function of the pedagogical agent. Clarebout and Elen (2006) compared a fixed and an adapted agent (advice given at a fixed moment in time vs. adapted advice based on the student's previous steps). They reported no difference between the adapted and the fixed agent on retention, but an advantage of both agent groups compared to the control (η² = 0.05). The three experimental groups, however, did not differ on transfer. Clarebout and Elen (2006) further considered self-regulation as a learner characteristic and reported an interaction effect: high self-regulators performed better on transfer tasks when they were presented with an adapted rather than a fixed agent (η² = 0.06).
One study investigated the role of the learning environment, more precisely the learning setting, by comparing pairs of students who worked with an agent, or with a text-based format of resources, to generate effective explanations with single students who worked with the agent (Holmes, 2007). Single students working with the agent performed better on retention tasks (deep explanations) than pairs of students working with the agent or with text (no effect size provided). No differences were found on shallow explanations or monitoring statements.
Table 2
Under what conditions do pedagogical agents facilitate motivation and learning? Aspects examined by studies consisting of one control and more than one agent group.
Conditions of use (PACU) | Aspects examined | Studies
Characteristics of the learning environment | Learning setting: pairs of students vs. single students | [16]
Characteristics of the learner | Self-regulation skills | [8]
Function of the pedagogical agent | Adapted vs. fixed (predefined) advice | [8]
Design of the pedagogical agent | Animated vs. static agent | [7], [12], [20]
 | Voice vs. text | [1]
 | Agent with mimic + gestures vs. agent with mimic | [10]
 | Perceived appeal of the pedagogical agent (appearance and voice) | [13], [14]
 | Female vs. male agent | [37]
Note: The numbers in the column "studies" refer to Table 1, where means, standard deviations and statistics are provided.
Table 3
Empirical studies on features of the learning environment.
Features of the learning environment | Aspects examined | Studies
Learning setting | Pairs of students vs. single students | [16]
Virtual reality | Head-mounted display (virtual reality) vs. desktop display | [29], [30], [31]
Interactivity | Control over pace | [22], [23]
 | Selection of answers by the learner vs. by the pedagogical agent | [33], [34]
Note: The numbers in the column "studies" refer to Table 1, where means, standard deviations and statistics are provided.
To summarize, only 10 studies were available to answer the question under what conditions pedagogical agents facilitate learning outcomes. Learner motivation has only been investigated in four studies (Baylor & Ryu, 2003; Domagk, 2010, experiments 1 and 2; Plant et al., 2009). According to PACU, most of the studies investigated the design of the pedagogical agent (8 out of 10). Nevertheless, cumulative evidence on specific aspects is scarce. Three studies compared animated and static agents and yielded mixed results. Two studies considered the perceived appeal of the pedagogical agent and indicated an effect on transfer performance (Domagk, 2010, experiments 1 and 2). The other conditions (characteristics of the learning environment, learner variables and functions of the pedagogical agent) were only investigated in single studies. An overview of the aspects examined is provided in Table 2.
6. How should pedagogical agents be designed to foster learner motivation and learning outcomes?
The vast majority of studies on pedagogical agents investigated different features of pedagogical agents by comparing different agent groups without including a control group that had no agent. These studies did not aim to answer the question of whether the implementation of pedagogical agents is effective in terms of motivation or learning; rather, they focused on the question of how pedagogical agents should be designed in order to facilitate motivation and learning. Twenty-four of the 39 empirical studies on pedagogical agents fell into this category. In addition, the 10 studies mentioned above, which combined different agent groups with a control group, can be consulted to answer the question of how to design pedagogical agents. The studies reviewed in this section therefore amount to 34. To systematize the review, the Pedagogical Agents—Conditions of Use model (PACU) was applied, resulting in individual paragraphs on features of the learning environment, learner characteristics, functions of the pedagogical agent, and its design. As some studies examine two or more features that relate to different conditions (e.g., one learner characteristic and one function of the pedagogical agent), some studies may be mentioned in more than one section.
6.1. How should pedagogical agents be designed? – The learning environment
Pedagogical agents are applied in different learning environments that have specific properties. They can appear in virtual environments, in multimedia learning environments or in meta-learning systems, as well as in multi-agent systems (Bendel, 2003). The learning environment is not only determined by the technology applied, but also by its instructional design, the learning content, the learning goals, the target group, and its multimedia and interactivity design (e.g., Niegemann et al., 2008).
6.1.1. The learning environment—empirical evidence
Eight of the 34 reviewed studies paid specific attention to the learning environment in which the pedagogical agent is implemented (Table 3). The study by Holmes (2007), who compared single students and pairs of students working with an agent, has been reported in Section 5.
In three studies by Moreno and Mayer (2002, experiments 1 and 2; Moreno & Mayer, 2004), the learning environment
“Design-a-Plant” with the pedagogical agent “Herman the Bug” was presented either via desktop computer or head-mounted
display (virtual reality). The virtual reality was intended to give the learner the sense of being actually situated in the
simulated environment. Based on interest theory (Renninger, Hidi, & Krapp, 1992), Moreno and Mayer (2002, 2004) argued
that a higher sense of presence could lead to better learning results as it may encourage learners to engage in active cognitive processing.
Two of the three studies yielded no difference between the virtual reality conditions (head-mounted display while walking or
sitting) and the desktop computer condition in retention or transfer (Moreno & Mayer, 2002, experiments 1 and 2). Contrary
to expectations, the third study (Moreno & Mayer, 2004) even showed an advantage of the desktop-condition compared to
the virtual reality condition in retention and transfer (retention: d = −0.73, transfer: d = −0.30). According to Moreno and
Mayer (2004), a possible explanation for these results is the potential of the virtual reality to distract the learner from the
learning task.
In two other studies, Moreno and Mayer (2005, experiments 2 and 3) compared an interactive and a non-interactive version of the learning environment “Design-a-Plant” with the pedagogical agent “Herman the Bug”. In the interactive version, the learner had to choose one answer out of eight, while in the non-interactive version the correct answer was selected by the pedagogical agent. In the first study (Moreno & Mayer, 2005, experiment 2), they found no differences in retention, but a main effect for interactivity on close and far transfer scores (far transfer: d = 0.63; close transfer: no effect size provided). Students learning with the non-interactive version outperformed those with the interactive one. The second study yielded no difference in close transfer between the interactive and the non-interactive group (Moreno & Mayer, 2005, experiment 3). Mayer et al. (2003, experiments 2a and 2b) also examined the role of interactivity (control over pace vs. no control over pace). They used a learning environment on the electric motor with the pedagogical agent “Dr. Phyz” and reported an advantage of the interactive group on an immediate (experiment 2a) and a delayed transfer test (after 1 week, experiment 2b) compared to the non-interactive group.

Table 4
Empirical studies on the role of learner characteristics.

Characteristics of the learner | Aspects examined | Studies
Cognitive factors | Prior knowledge | [9]
 | Academic competency (GPA scores) | [17]
Metacognitive factors | Self-regulation skills | [8]
Emotional factors | – | –
Motivational factors | – | –

Note: The numbers in the column “studies” refer to Table 1, where means, standard deviations and statistics are provided.
Up to now, eight studies have investigated the role of the learning environment for the design of pedagogical agents. According to three studies by Moreno and Mayer, technically more complex learning environments (virtual reality) were not superior to simpler versions (desktop computer). In one of the three studies, the virtual reality condition even yielded negative effects and hindered learning. Regarding interactivity, the four available studies yielded mixed results. While control over pace facilitated transfer performance in two studies, the selection of answers by the learner yielded no difference in retention and close transfer, and even harmed transfer performance in one study. Therefore, additional features such as interactivity or a technically more complex design of the learning environment do not necessarily facilitate learning; they may even hinder it. Motivational effects of features of the learning environment have not yet been investigated.
6.2. How should pedagogical agents be designed? – Learner characteristics
Characteristics of the learner who interacts with the pedagogical agent comprise a multitude of cognitive and noncognitive variables. Cognitive factors, such as prior knowledge, as well as affective factors have been shown to be important
mediating factors of the effectiveness of instructional support (Kalyuga, Ayres, Chandler, & Sweller, 2003; Pekrun, Goetz, Titz,
& Perry, 2002; Um, Song, & Plass, 2007). Relevant variables in learning contexts and, therefore, in relation to pedagogical
agents are: (a) emotional factors, such as boredom, pride, pleasure, or shame (e.g., Götz, Zirngibl, Hall, & Pekrun, 2003;
Pekrun, Elliot, & Maier, 2006); (b) motivational factors, such as interest, achievement motivation, self-efficacy (e.g., Eccles
& Wigfield, 2002; Keller & Litchfield, 2002; Kuhl, 2000; Rheinberg, Vollmeyer, & Rollett, 2000); (c) cognitive factors such as
prior knowledge (e.g., Kalyuga et al., 2003); and (d) metacognitive factors, such as self-regulation (e.g., Zimmerman, 2000).
6.2.1. Learner characteristics—empirical evidence
Only three of the 34 empirical studies on pedagogical agents considered the role of learner characteristics (Table 4). Two of the three studies concerned cognitive factors. Choi and Clark (2006) investigated the learner’s prior knowledge. They found that learners with low prior knowledge performed better on retention tasks when working with the human pedagogical agent “Genie” rather than an arrow (η² = 0.09). This difference was not found for learners with intermediate or high prior knowledge.
Kim (2007) considered students’ academic competency, expressed by their GPA scores, and investigated the influence of the competency level of the agent (high vs. low) and the interaction control (agent-control vs. learner-control). An interaction effect occurred between learner competency and agent competency on self-efficacy (motivation, η² = 0.27) and recall scores (retention, η² = 0.18). High-competency learners showed higher motivation and retention scores when working with a high-competency agent. For low-competency learners, an agent with low competency was more beneficial. Furthermore, the study yielded an interaction effect between learner competency and interaction control on self-efficacy (motivation, η² = 0.22), but not on retention. High-competency learners reported higher self-efficacy in the agent-controlled condition, whereas low-competency learners reported higher self-efficacy in the learner-controlled condition.
The third study considered learners’ self-regulation skills as a metacognitive factor (Clarebout & Elen, 2006). High self-regulators performed better on transfer tasks when they were presented an adapted rather than a fixed agent (η² = 0.06; see Section 5).
To summarize, empirical studies that consider learner characteristics are scarce, even though prior knowledge in particular has been shown to be an important mediator of the effectiveness of instructional support (Kalyuga et al., 2003). While single studies are available on cognitive and metacognitive aspects, the role of emotional and motivational learner prerequisites has not yet been investigated.
Table 5
Empirical studies on functions of pedagogical agents.

Function of the pedagogical agent | Aspects examined | Studies
Motivation | Agent’s role: motivator vs. expert vs. mentor | [5]
Information | Agent’s role: motivator vs. expert vs. mentor | [5]
Information processing | Explanatory vs. corrective feedback | [26], [27], [32]
Storing and retrieving | – | –
Transfer of information | – | –
Monitoring and directing | Reflection prompts | [32], [33], [34]
 | Prequestion | [24]
 | Adapted vs. fixed (predefined) advice | [8]

Note: The numbers in the column “studies” refer to Table 1, where means, standard deviations and statistics are provided.
6.3. How should pedagogical agents be designed? – Functions of the pedagogical agent
Pedagogical agents may execute a variety of functions, such as guidance, motivation or focusing attention. They can,
therefore, provide different kinds of instructional support, such as feedback, reflection or elaboration cues. To categorize
different functions and roles of pedagogical agents, we propose to use Klauer’s (1985; Klauer & Leutner, 2007) teaching
functions:
Motivation: e.g., Arouse interest, highlight the relevance of the topic, strengthen the learner’s confidence.
Information: e.g., Draw the learner’s attention to the learning content, activate prior knowledge, enable the learner to
integrate the new information and prior knowledge into a new knowledge structure.
Information processing: e.g., Provide explicit information about prerequisites, conditions, relationships or outcomes of the
learning content, enable learners to decompose new information into smaller units, and to synthesize them in order to
extract similarities and differences.
Storing and retrieving: e.g., Guide learners to compare new and already stored information by reviewing similarities and
differences, as well as by integrating the new information into the existing cognitive structure.
Transfer of information: e.g., Apply the new knowledge and transfer it to other topics and new problems.
Monitoring and directing: e.g., Monitor the learner’s activities and guide them.
Pedagogical agents may offer support on at least one of these functions. An agent that does not fulfill any of these functions
may not be necessary and may even hinder learning as it potentially distracts the learner from the learning task.
6.3.1. Functions of the pedagogical agent—empirical evidence
Nine of the 34 studies investigated the function of the pedagogical agent (Table 5).
One of these studies by Baylor and Kim (2005, experiment 2) focused on the functions “motivation” and “information”.
They presented three pedagogical agents performing different roles (Fig. 4): motivator (motivation), expert (information),
and mentor (both). They were operationalized by image, affect, animation, dialogue, and voice. Learners in the motivation
function groups (motivator, mentor) reported higher self-efficacy (no effect size provided) and more engagement (d = 1.76)
than learners in the information function group (expert). Furthermore, the mentor group (motivation + information) showed higher self-efficacy and engagement (d = 0.80) than the motivator and expert groups. Regarding transfer performance, learners in the mentor group (motivation + information) outperformed those in the motivator and expert groups (d = 0.50). Agents with an information function (mentor, expert) resulted in better transfer performance than the agent with the motivational function only (motivator, d = 0.44). Overall, the mentor that combined the motivational and information functions was beneficial for motivation and transfer performance.
Fig. 4. Pedagogical agents used in the study by Baylor and Kim (2005): expert (left), motivator (middle), and mentor (right).
Three studies investigated the function “information processing” by comparing explanatory and corrective feedback provided by the pedagogical agent “Herman the Bug” (Moreno, 2004, experiments 1 and 2; Moreno & Mayer, 2005,
experiment 1). The form of feedback did not affect learner motivation (Moreno, 2004, experiments 1 and 2). Moreno (2004,
experiment 1) reported an advantage of the explanatory feedback group on retention compared to the corrective feedback
group (d = 0.76), while the other two studies yielded no difference in retention. In addition, explanatory feedback facilitated
transfer performance with medium to large effect sizes (0.75 < d < 1.87).
Most of the studies (five out of nine) considered the function “monitoring and directing”. Three studies by Moreno and
Mayer (2005, experiments 1 to 3) investigated the effect of reflection prompts—asking the learners to explain the choices
they have made. The pedagogical agent “Herman the Bug” was presented in both groups (reflection vs. no reflection). In
the first study, the guided reflection yielded no difference in retention, near or far transfer tasks. In the second study, they
also found no differences in retention, but an advantage for reflection prompts in near and far transfer. The third study
comprised an additional condition with program reflection—asking the learners to reflect on the correct answer as provided
by the pedagogical agent in contrast to self-reflection on their own choices that might be right or wrong. The reflective groups
outperformed the non-reflective groups on retention, but yielded no difference in close transfer tasks. Regarding far transfer,
the program reflection group outperformed the self-reflection and the no-reflection group. In the study of Mayer et al. (2003,
experiment 3), the pedagogical agent “Dr. Phyz” presented a prequestion to one group. The prequestion group outperformed
the no-prequestion group on transfer tasks. Clarebout and Elen’s (2006) study yielded no difference in retention and transfer between a fixed and an adapted agent (see Section 5).
The available nine studies on the pedagogical agents’ functions mainly focused on the teaching functions “information
processing” (3 studies) and “monitoring and directing” (5 studies). Three studies indicated that explanatory feedback may
facilitate transfer performance, while the presentation of reflection prompts yielded mixed results. The functions “storing
and retrieving”, as well as “transfer of information” have not yet been investigated. Only a single study was available on
the functions “motivation” and “information”. An agent that executed both functions was more effective than an agent that
provided support on either motivation or information. This result indicates that the combination of different functions might
be an effective approach.
6.4. How should pedagogical agents be designed? – Design of the pedagogical agent
Possibly the most complex condition of use of pedagogical agents is their design. Most of the studies that compared different agent groups (25 out of 34) focused on this aspect. To enable a systematic review of these studies, we propose a subordinate model (cf. Domagk, 2008, 2010): Pedagogical Agents—Levels of Design (PALD). It organizes the different design features of pedagogical agents on three levels, where decisions at each level presuppose the decisions made at the preceding levels (Fig. 5):
1. Global design level: Human vs. non-human characters and static vs. animated;
2. Medium design level: Technical decisions and choice of the character; and
3. Detail design level: Age, gender, clothing, weight, etc.
In the following, each level will be described in a separate paragraph, followed by the review of the corresponding studies.
6.4.1. Global design level: human vs. non-human characters and static vs. animated
On a global level, two main decisions on the design of the pedagogical agent have to be made: whether to use a human
or a non-human character, and whether the agent should be animated or static.
Human characters can be depicted as cartoon characters such as “Dr. Phyz” (Mayer et al., 2003), as well as video sequences
or pictures of human tutors. For non-human pedagogical agents, animals such as “Herman the Bug” (e.g., Moreno & Mayer,
2004) and “Peedy” the parrot (Atkinson, 2002) can be chosen, as well as objects such as a can, an electric light bulb or a
ball (Baylor, 2005). The pedagogical agents can be depicted as either animated or static. Animated pedagogical agents can
execute deictic gestures and, therefore, help to focus the attention of the learner (e.g., Atkinson, 2002; Baylor, Ryu, & Shen,
2003). On the other hand, the presence of a moving agent may produce a split-attention effect (e.g., Ayres & Sweller, 2005)
and distract the learner from the task (e.g., Baylor et al., 2003; Craig et al., 2002).
6.4.2. Global design level—empirical evidence
Four of the 25 studies can be categorized on the global design level (Table 6). Choi and Clark (2006) compared a human pedagogical agent (“Genie”) to a non-human agent (an arrow) and found no differences in retention.
Three studies compared animated and static pedagogical agents. They yielded no difference in retention (Baylor & Ryu, 2003; Dirkin et al., 2005; Lusk & Atkinson, 2007). Lusk and Atkinson (2007) reported higher transfer scores for the animated agent group than for the static agent group, and Baylor and Ryu (2003) reported higher engagement scores.
Fig. 5. Pedagogical Agents—Levels of Design model (PALD).
Table 6
Empirical studies on the design of pedagogical agents.

Design of the pedagogical agent | Aspects examined | Studies
Global design level | Human vs. non-human agent | [9]
 | Animated vs. static agent | [7], [12], [20]
Medium design level: technical decisions—visual presence
  Realism and lifelikeness | Socially intelligent agent | [39]
 | Emotional expression (positive vs. neutral vs. negative) | [18]
 | Empathetic responses (responsive vs. non-responsive) | [19]
 | Expressive vs. non-expressive agent | [38]
  Animation level | Agent with mimic + gestures vs. agent with mimic | [10]
 | Deictic gestures and facial expression | [6]
Medium design level: technical decisions—auditory presence
  Voice output | Spoken vs. printed text | [1], [11], [21], [29], [30]
 | Spoken + printed vs. spoken vs. printed text | [11], [30]
 | Human vs. computer-generated voice | [3], [4]
  Speech style | Formal vs. personalized | [31]
Medium design level: choice of the character | Competence of the agent | [17]
 | Perceived appeal of the pedagogical agent (appearance and voice) | [13], [14]
Detail design level | Male vs. female | [18], [19], [37]
 | Same vs. different gender as the learner | [28]
 | Same vs. different ethnicity as the learner | [28]

Note: The numbers in the column “studies” refer to Table 1, where means, standard deviations and statistics are provided.
6.4.3. Medium design level: technical decisions
Following these decisions on a global level, technical decisions on the medium design level have to be made. They concern
the visual and auditory presence of the pedagogical agent. Regarding the visual presentation, the degree of lifelikeness and
realism, the animation level, and a partial vs. full presentation of the pedagogical agent have to be chosen. Regarding the
auditory presence, decisions about the voice output (audio vs. text) and speech style (personalized vs. formal) are required.
As for the visual presence, the design of pedagogical agents can range from a simple cartoon or drawing to as much realism
and lifelikeness as possible. Especially from a technological perspective, efforts are made to bring the image and animation of pedagogical agents to perfection. Aiming at photorealism, specific features such as muscle contractions, wrinkle creation or lip
synchronization are discussed (Bradshaw, 1997; Ruttkay & Pelachaud, 2004). Lifelikeness means that the behavior of the
pedagogical agent should seem to be natural and appropriate to the learner. It is assumed that lifelike characters are more
credible and that unnatural behavior may distract the learner (Bercht & Viccardi, 2000; Elliot, Rickel, & Lester, 1999; Johnson
et al., 2000; Krämer, 2006; Lester, Voerman, Towns, & Callaway, 1999). For modeling emotions of the pedagogical agent,
models of emotional expression are applied (e. g. Ekman & Friesen, 1978; Ekman & Rosenberg, 2005). The terms “lifelike
pedagogical agents” (Elliot et al., 1999; Lester, Towns, & FitzGerald, 1999) and “emotional pedagogical agents” (Brna, Cooper,
& Razmerita, 2001) are frequently used in this context. Closely linked to realism and lifelikeness, the animation level of a
pedagogical agent (if applicable) can range from rudimentary movements to complex courses of motion. Additionally, it can
be designed as either 2-dimensional or 3-dimensional. A partial presentation of the pedagogical agent, particularly for human
characters, can be chosen by displaying the head (and the upper part of the body) only.
For the auditory presence, the voice output can be varied by designing pedagogical agents that communicate via spoken
or printed text. According to the modality effect (e.g., Low & Sweller, 2005; Mousavi, Low, & Sweller, 1995; Penney, 1989),
spoken text that accompanies a visual presentation of learning material is assumed to be more appropriate for learning than
printed text (at least under time constraints). Additionally, spoken text can be presented via human or computer-simulated
voice. Social agency theory (Mayer, 2005b) suggests that human voices may be favorable compared to computer-simulated
ones (Atkinson, Mayer, & Merril, 2005). The pedagogical agent can also address the learner in a personalized or formal style.
The personalized style is intended to facilitate the sense of social presence (Moreno & Mayer, 2004).
6.4.4. Medium design level: technical decisions—empirical evidence
Fourteen of the 25 studies on the design of pedagogical agents concerned technical decisions on the medium design level
(Table 6).
Four of these studies related to the features realism and lifelikeness. Wang et al. (2008) compared a socially intelligent pedagogical agent that provided polite feedback, taking the students’ goals and actions into account, with a pedagogical agent providing direct feedback, disregarding the learners’ actions. No differences were found for interest and self-efficacy (motivation). However, the polite feedback group performed better on retention tasks than the direct feedback group (d = 0.73). Kim, Baylor, and Shen (2007, experiment 1) investigated the role of the pedagogical agent’s emotional expression (positive vs. neutral vs. negative). They found an advantage of the positive and neutral emotional expression over the negative expression on perceived engagement (d = 1.09), but not on interest or self-efficacy (motivation). The performance on retention tasks did not differ between the three groups. In another study (Kim et al., 2007, experiment 2), the pedagogical agent provided empathetic responses (responsive vs. non-responsive). Responsive agents facilitated interest (η² = 0.47) and self-efficacy (η² = 0.29), but did not affect perceived engagement or retention. Veletsianos (2009) compared an expressive and a non-expressive agent. He found that an expressive agent (one that pauses, raises its voice from time to time, etc.) led to better retention than a non-expressive agent (d = 0.71).
Two studies explored different animation levels of pedagogical agents. Craig et al. (2002) found no differences in retention or transfer between an agent with mimic (gaze, eye blinks, and eye movements), an agent with mimic and gestures, and the control group. The study by Baylor and Kim (2009) revealed an interaction effect between deictic gestures and facial expression of agents (η² = 0.053). When a pedagogical agent used facial expressions, the absence of gestures enhanced learning, whereas when a pedagogical agent did not show facial expressions, deictic gestures enhanced learning. The authors further reported main effects for deictic gestures and facial expression (no effect sizes provided). The results indicated that the presence of facial expressions is beneficial for learning. The main effect of deictic gestures was not specified by the authors. Deictic gestures, however, were shown to be beneficial for learning in the case of a procedural learning goal but not for an attitudinal learning goal (η² = 0.048).
Regarding the auditory presence of the pedagogical agent, five studies compared different voice outputs: spoken vs. printed
text. All of these studies demonstrated the modality effect, as learners that were presented spoken explanations outperformed
those in the text condition in retention and transfer tasks (Atkinson, 2002, experiment 1; Craig et al., 2002, experiment 2;
Mayer et al., 2003, experiment 1; Moreno & Mayer, 2002, experiments 1 and 2). Two studies did not assess retention
(Atkinson, 2002, experiment 1; Mayer et al., 2003, experiment 1) and Atkinson (2002) found differences in close, but not in
far transfer performance. In addition, two studies comprised an additional condition: presenting both spoken and printed
text in order to test the redundancy effect (cf. Mayer, 2005a; Sweller, 2005). Craig et al. (2002, experiment 2) demonstrated
the redundancy effect for transfer but not for retention tasks. Learners in the spoken + printed text condition performed
worse on transfer problems than learners in the spoken text condition. However, Moreno and Mayer (2002, experiment 2)
reported no difference between the spoken and the spoken + printed condition in retention and transfer; both were superior
to the text condition.
Fig. 6. The pedagogical agents “Mike” (left, Kim, 2007) and “Tom” (middle: likable, right: dislikable, Domagk, 2010).
Two studies compared a human to a computer-generated voice (Atkinson et al., 2005, experiments 1 and 2) and consistently
found an advantage of the human voice in retention, near and far transfer (0.63 < d < 0.84).
One study by Moreno and Mayer (2004) examined a formal vs. a personalized speech style of the pedagogical agent. They
found an advantage of the personalized style in retention (d = 0.77) and transfer (d = 1.64).
In sum, six studies on technical decisions on the medium design level considered the visual presence of the pedagogical
agent. As they investigated diverse aspects of the agent’s realism, lifelikeness and animation level, no cumulative evidence
can be drawn from these studies. Stronger evidence can be derived for the pedagogical agent’s auditory presence, which was examined in eight studies. Five studies consistently showed an advantage of spoken over printed text, and two studies consistently indicated an advantage of a human over a computer-generated voice.
6.4.5. Medium design level: choice of the character
The second category on the medium design level is the choice of the character. To guide decisions on features, such as
age, gender, clothing, or function of the pedagogical agent, Bendel (2003) proposed three approaches:
◦ Determining features, interests and/or competencies that should be ascribed to the pedagogical agent: Single features are
designed in a way that the character appears to be competent, likable or reliable. The appearance, actions and statements
of the pedagogical agent can then be chosen accordingly.
◦ Aiming at a specific role: Pedagogical agents can be conceptualized as experts, teachers or peers. The pedagogical agent
“Adele” (Shaw et al., 1999), for instance, represents a young doctor by wearing a stethoscope and a white lab coat.
◦ Replicating real or fictional role models: The design of pedagogical agents can also be guided by real role models, such as
trainers, lecturers or personalities of society, art or science. Furthermore, fictional characters out of movies or literature
can be used.
These three approaches can be applied to guide decisions on the design of pedagogical agents. Nevertheless, it remains unclear which specific features should be chosen if the pedagogical agent is supposed to appear likable or competent. First suggestions and empirical results for features related to pedagogical agents’ appeal are reported by Domagk,
Poepperling, and Niegemann (2006).
6.4.6. Medium design level: choice of the character—empirical evidence
Three studies could be retrieved that explicitly considered the choice of the character (Table 6). Kim (2007) compared an agent with a high competence level in the domain to an agent with low competence (Fig. 6). The low-competency agent told the learners that he did not know much about the topic and that they would work together to reach the goals. The reported interaction effects between agent competency and learner competency have been discussed in the learner characteristics section (Section 6.2).
Two studies by Domagk (2010, experiments 1 and 2) investigated the role of the pedagogical agent’s appeal (Fig. 6).
The appeal of the pedagogical agent’s appearance and voice did not affect learner motivation. In the first study, a likable
appearance facilitated transfer performance (η² = 0.06), while no differences between the likable, neutral and dislikeable agent occurred on retention. In the second study, the presentation of a pedagogical agent likable in appearance was beneficial for retention (η² = 0.03) and transfer performance (η² = 0.22) compared to a dislikeable one. A likable voice facilitated transfer performance (η² = 0.13) but did not affect retention. These differences, however, did not occur in comparison to a control group without an agent (Domagk, 2010, experiment 2). Here, a pedagogical agent dislikeable in appearance and voice even hindered transfer (η² = 0.28; see above).
To summarize, the three available studies on the choice of the character indicated that the competency and appeal of a
pedagogical agent may affect learning. Moreover, a disadvantageous design of the character can even hinder learning. Effects
on learner motivation, however, have not yet been shown.
6.4.7. Detail design level: age, gender, clothing, weight, etc.
Follow-up decisions have to be taken on the detail design level, depending on the decisions made on the global and medium design levels. They also concern the visual and the auditory presence of the pedagogical agent. As for the visual presence of the
pedagogical agent, features such as age, gender, clothing, weight, and ethnicity have to be defined. If the pedagogical agent
communicates via spoken text (auditory presence), features of its voice, such as intonation, accentuation and speech rate,
have to be determined.
6.4.8. Detail design level: age, gender, clothing, weight, etc.—empirical evidence
Four of the 25 studies examined design features on the detail level (Table 6). They considered the pedagogical agent’s
gender (Kim et al., 2007, experiments 1 and 2; Moreno & Flowerday, 2006; Plant et al., 2009). Kim et al. (2007, experiments 1
and 2) assessed three indicators for learner motivation. They found higher perceived engagement scores for the groups that
were presented with a male rather than a female agent (experiment 2: d = 0.59). Learners further reported higher interest when working with a male compared to a female agent in the first study (η² = 0.11). This did not hold for the second study. Self-efficacy
ratings did not differ between the groups. They further reported no difference in retention in one study (experiment 2), but
an advantage of the male agent group compared to the female agent group in the other study (experiment 1, d = 0.60). Plant et al. (2009), however, found that a female agent had a more positive effect on motivation and learning than a male agent.
Moreno and Flowerday’s (2006) study comprised three conditions: presenting pedagogical agents of the same vs. a different
gender as the learner (same vs. different gender), of the same ethnicity as the learner (same vs. different ethnicity), and
having the learners choose the pedagogical agent with whom they want to work (choice vs. no choice). Neither similarity
of gender, similarity of ethnicity nor choice yielded differences in retention or transfer scores.
To summarize, the four available studies on the detail design level investigated the role of the pedagogical agent's gender.
They yielded mixed results on motivation and learning, but showed that even characteristics on the detail design level of
the pedagogical agent may affect the learning process. In order to clarify the role of the pedagogical agent’s gender, future
studies may additionally consider learner characteristics and the topic of the learning environment to control for gender
stereotypes.
7. Discussion and overall conclusion
The review presented aimed at mapping out research on pedagogical agents and their effectiveness for learner motivation
and learning. Using the 2002 review (Clarebout et al., 2002) as a starting point revealed that research on pedagogical agents
has recently gained a lot of interest. Seventy-five articles on pedagogical agents could be retrieved, and 26 of these articles met the criteria of presenting empirical studies in which motivational and/or learning effects were investigated. As some of these articles reported multiple studies, a total of 39 studies was included (in 2002, the total amounted to 5).
7.1. Do pedagogical agents facilitate learner motivation and learning? Do they make a difference?
Pedagogical agents are expected to facilitate learner motivation and learning due to the social cues they provide (e.g.,
Atkinson, 2002; Baylor & Kim, 2005; Craig et al., 2002; Johnson et al., 2000). The review of studies that compared agent and
control groups (15 out of 39), however, revealed that the majority of studies (9 out of 15) yielded no difference on learning.
Motivational measures have only been applied in 4 out of 15 studies, and 3 of them yielded no difference. Therefore, a general
motivating or learning-facilitating effect of pedagogical agents has to be questioned. In light of the great variety of pedagogical
agents used in the studies and the specific functions they execute, the question of whether they are generally effective is
too broad. A more fruitful approach is to ask under what conditions pedagogical agents might be effective for whom. Thus,
we introduced the Pedagogical Agents—Conditions of Use Model (PACU, Fig. 3) that takes four conditions into account: (1)
characteristics of the learning environment, (2) learner variables, (3) the function that the pedagogical agent executes, and
(4) the design of the pedagogical agent. We further specified categories for the functions and the design of pedagogical
agents. Klauer’s (1985; Klauer & Leutner, 2007) teaching functions can be applied to describe the instructional role of the
pedagogical agent. However, the design of pedagogical agents is such a complex topic that we proposed a subordinate model,
“Pedagogical Agents—Levels of Design (PALD, Figure 5)”. The application of both models – PACU and PALD – as a framework
for the review presented demonstrates their utility to enable systematic comparisons of existing studies and guide future
studies. They allow the identification of gaps and open questions in pedagogical agent research, as well as the identification
of aspects that are well examined. These models, therefore, provide a basis to map out a research agenda on pedagogical
agents.
7.2. Under what conditions do pedagogical agents facilitate learner motivation and learning outcomes? When are they
effective?
Only 10 out of the 39 studies reviewed addressed the question of under what conditions pedagogical agents might be effective. The majority of these studies considered the design of the pedagogical agents (8 out of 10), whereas only single studies
addressed the other three conditions of their use: the learning environment, learner characteristics, and the pedagogical
agent’s function. Motivation as a dependent variable has only been assessed in four of these studies. Due to the small number of studies that fell into this category, little can be said about the effectiveness of specific features. Three studies that
compared an animated, a static and no agent consistently indicated no difference on retention (Baylor & Ryu, 2003; Dirkin
et al., 2005; Lusk & Atkinson, 2007). Two studies consistently indicated that the perceived appeal of the pedagogical agent affected transfer performance but not motivation when compared to a control group (Domagk, 2010). More research
is needed to identify the conditions under which pedagogical agents benefit learner motivation and learning. In particular, the exploration of appropriate functions of a pedagogical agent and of relevant learner characteristics has been widely neglected so far.
7.3. How should pedagogical agents be designed to foster learner motivation and learning outcomes?
The focus of recent pedagogical agent research, however, has clearly been on the comparison of different pedagogical
agent groups (without a control group), as 24 of the 39 studies reviewed fell into this category. They sought to answer the
question of how pedagogical agents should be designed rather than whether it is worth implementing a pedagogical agent.
Together with studies that comprised more than one agent and a control group, the studies reviewed according to this
question amounted to 34. Again, the vast majority of studies focused on the design of pedagogical agents (25 out of 34),
whereas the other conditions of their use gained less attention: learning environment (8 out of 34), learner characteristics
(3 out of 34), and function of the pedagogical agent (9 out of 34). To systematize studies on the design of pedagogical agents,
the Pedagogical Agents—Levels of Design (PALD) model has been applied. Most studies were available on technical decisions
on the medium design level (14 out of 24). The choice of the character on the medium design level (3 out of 24), as well as the
global and detail design levels (4 out of 24 each) have rarely been considered. Whereas most aspects have only been examined
in single studies, and some aspects yielded mixed results, the investigation of the following aspects yielded consistent results:
Considering the learning environment, three studies by Moreno and Mayer (2002, experiments 1 and 2; 2004) indicated
that technically more complex learning environments (virtual reality) did not benefit learning or even hindered it. Two studies
consistently showed an advantage of providing learners with control over the pace (Mayer et al., 2003, experiments 2a and 2b). Regarding the functions of pedagogical agents, three studies consistently demonstrated an advantage of explanatory compared
to corrective feedback on transfer performance (Moreno, 2004, experiments 1 and 2; Moreno & Mayer, 2005, experiment
1). Although the design of the pedagogical agent has been investigated most frequently, only a single feature yielded
consistent findings across multiple studies: five studies indicated that explanations via spoken text are more beneficial
for learning than explanations via printed text (Atkinson, 2002, experiment 1; Craig et al., 2002, experiment 2; Mayer et
al., 2003, experiment 1; Moreno & Mayer, 2002, experiments 1 and 2). To summarize, only one function of a pedagogical
agent (providing explanatory feedback) and one design feature (spoken explanations) are comparatively well examined but
lack a comparison to a control group without an agent. Motivational effects have only been considered in 11 out of the 34
studies.
7.4. Conclusion
Despite the great number of existing studies on pedagogical agents, a lot of open questions remain. The application of
PACU and PALD highlights the complexity of pedagogical agents as a research topic. Not only do four conditions of their use have to be taken into account; each condition itself also comprises a variety of possible features, most prominently the design of the pedagogical agent.
The review presented set out to answer three main questions: Do pedagogical agents facilitate learner motivation and
learning? Under what conditions are they effective? How should pedagogical agents be designed? The separation of these
questions revealed a methodological issue. Due to a lack of control groups, various studies did not investigate the first
two fundamental questions. They focused rather on the third question of how to design pedagogical agents by comparing
different agent groups. However, an experimental design with an additional control group would provide evidence on all three questions. Studies designed in this way may contribute to the discussion of whether pedagogical agents are effective, as well as to the development of empirically tested guidelines for practitioners on when it is reasonable to implement an agent and how to design it.
References
André, E., Rist, T., & Müller, J. (1999). Employing AI methods to control the behavior of animated interface agents. Applied Artificial Intelligence, 13, 415–448.
Atkinson, R. K. (2002). Optimizing learning from examples using animated pedagogical agents. Journal of Educational Psychology, 94, 416–427.
Atkinson, R. K., Mayer, R. E., & Merrill, M. M. (2005). Fostering social agency in multimedia learning: Examining the impact of an agent's voice. Contemporary
Educational Psychology, 30, 117–139.
Ayres, P., & Sweller, J. (2005). The split-attention principle in multimedia learning. In R. E. Mayer (Ed.), The Cambridge handbook of multimedia learning (pp.
135–146). Cambridge: Cambridge University Press.
Baylor, A. L. (2005). The impact of pedagogical agents image on affective outcomes. Paper presented at the International Conference on Intelligent User
Interfaces. San Diego, CA.
Baylor, A. L., & Kim, S. (2009). Designing nonverbal communication for pedagogical agents: When less is more. Computers in Human Behavior, 25, 450–457.
Baylor, A. L., & Kim, Y. (2005). Simulating instructional roles through pedagogical agents. International Journal of Artificial Intelligence in Education, 15(1).
Baylor, A. L., & Ryu, J. (2003). The effects of image and animation in enhancing pedagogical agent persona. Journal of Educational Computing Research, 28(4),
373–395.
Baylor, A. L., Ryu, J., & Shen, E. (2003). The effects of pedagogical agent voice and animation on learning, motivation and perceived persona. Paper presented at
the ED-MEDIA. Honolulu, Hawaii.
Bendel, O. (2003). Pädagogische Agenten im Corporate E-Learning. Institut für Wirtschaftsinformatik, from http://beat.doebe.li/bibliothek/b01472.html.
Bercht, M., & Viccardi, R. M. (2000). Pedagogical agents with affective and cognitive dimensions. Retrieved January 22, 2006, from
http://www.c5.cl/ieinvestiga/actas/ribie2000/papers/173/index.htm.
Biswas, G., Leelawong, K., Schwartz, D., & Vye, N. (2005). Learning by teaching: A new agent paradigm for educational software. Applied Artificial Intelligence,
19, 363–392.
Bradshaw, J. M. (1997). Software agents. Menlo Park, CA: AAAI Press/The MIT Press.
Brna, P., Cooper, B., & Razmerita, L. (2001). Marching to the wrong distant drum: Pedagogic agents, emotion and student modelling. Proceedings of the 2nd
international workshop on attitude, personality and emotions in user-adapted interaction. Sonthofen, Germany.
Cassell, J., & Thórisson, K. R. (1999). The power of a nod and a glance: Envelope vs. emotional feedback in animated conversational agents. Applied Artificial
Intelligence, 13, 519–538.
Choi, S., & Clark, R. E. (2006). Cognitive and affective benefits of an animated pedagogical agent for learning English as a second language. Journal of
Educational Computing Research, 34(4), 441–466.
Clarebout, G., & Elen, J. (2006). Open learning environments and the impact of pedagogical agents. Journal of Educational Computing Research, 35,
211–226.
Clarebout, G., Elen, J., Johnson, W. L., & Shaw, E. (2002). Animated pedagogical agents: An opportunity to be grasped? Journal of Educational Multimedia and
Hypermedia, 11(3), 267–286.
Clark, R. C., & Mayer, R. E. (2002). E-learning and the science of instruction. Proven guidelines for consumers and designers of multimedia learning. San Francisco:
Jossey-Bass/Pfeiffer.
Clark, R. E., & Choi, S. (2005). Five design principles for experiments on the effects of animated pedagogical agents. Journal of Educational Computing Research,
32(3), 209–225.
Craig, S. D., Gholson, B., & Driscoll, D. M. (2002). Animated pedagogical agents in multimedia educational environments: Effects of agent properties, picture
features, and redundancy. Journal of Educational Psychology, 94(2), 428–434.
Dehn, D. M., & van Mulken, S. (2000). The impact of animated interface agents: A review of empirical research. International Journal of Human–Computer
Studies, 52, 1–22.
Dirkin, K. H., Mishra, P., & Altermatt, E. (2005). All or nothing: Levels of sociability of a pedagogical software agent and its impact on student perceptions
and learning. Journal of Educational Multimedia and Hypermedia, 14(2), 113–127.
Domagk, S. (2008). Pädagogische Agenten in multimedialen Lernumgebungen. Empirische Studien zum Einfluss der Sympathie auf Motivation und Lernerfolg.
(Band 9 der Reihe Wissensprozesse und digitale Medien). Berlin: Logos.
Domagk, S. (2010). Do pedagogical agents facilitate learner motivation and learning outcomes? The role of the appeal of agent’s appearance and voice.
Journal of Media Psychology, 22(2), 82–95.
Domagk, S., Poepperling, P., & Niegemann, H. M. (2006). Do people prefer the opposite sex? Characteristics of pedagogical agents. In G. Clarebout, & J. Elen
(Eds.), Avoiding simplicity, confronting complexity (pp. 187–192). Rotterdam: Sense Publishers.
Dunsworth, Q., & Atkinson, R. K. (2007). Fostering multimedia learning of science: Exploring the role of an animated agent’s image. Computers & Education,
49(3), 677–690.
Eccles, J. S., & Wigfield, A. (2002). Motivational beliefs, values, and goals. Annual Review of Psychology, 53, 109–132.
Ekman, P., & Friesen, W. (1978). Facial action coding system. Palo Alto: Consulting Psychologist Press.
Ekman, P., & Rosenberg, E. L. (2005). What the face reveals: Basic and applied studies of spontaneous expression using the facial action coding system (Facs). New
York: Oxford University Press.
Elliot, C., Rickel, J., & Lester, J. C. (1999). Lifelike pedagogical agents and affective computing: An exploratory synthesis. In M. Wooldridge, & M. Veloso (Eds.),
Artificial Intelligence today (pp. 195–212). Berlin: Springer–Verlag.
Erickson, T. (1997). Designing agents as if people mattered. In J. M. Bradshaw (Ed.), Software agents (pp. 79–96). Menlo Park, CA: AAAI Press/The MIT Press.
Götz, T., Zirngibl, A., Hall, N., & Pekrun, R. (2003). Emotions, learning and achievement from an educational-psychological perspective. In P. Mayring, & C.
v. Rhoeneck (Eds.). Learning emotions. Berlin: Peter Lang.
Graesser, A., Chipman, P., Haynes, B., & Olney, A. (2005). AutoTutor: An intelligent tutoring system with mixed-initiative dialogue. IEEE Transactions on
Education, 48(4), 612–618.
Graesser, A., Lu, S., Jackson, G. T., Mitchell, H., Ventura, M., Olney, A., et al. (2004). AutoTutor: A tutor with dialogue in natural language. Behavioral Research
Methods, Instruments, and Computers, 36, 180–192.
Graesser, A. C., Wiemer-Hastings, K., Wiemer-Hastings, P., Kreuz, R., & the Tutoring Research Group. (1999). AutoTutor: A simulation of a human tutor. Journal of Cognitive Systems
Research, 1, 35–51.
Gulz, A., & Haake, M. (2006). Design of animated pedagogical agents—A look at their look. International Journal of Human-Computer Studies, 64, 322–339.
Guray, C., Celebi, N., Atalay, V., & Pasamehmetoglu, A. G. (2003). Ore-Age: A hybrid system for assisting and teaching mining method selection. Expert
Systems with Applications, 24, 261–271.
Holmes, J. (2007). Designing agents to support learning by explaining. Computers & Education, 48, 523–547.
Johnson, W. L., Rickel, J., Stiles, R., & Munro, A. (1998). Integrating pedagogical agents into virtual environments. Presence: Teleoperators and Virtual Environments, 7(6), 523–546.
Johnson, W. L., Rickel, J. W., & Lester, J. C. (2000). Animated pedagogical agents: Face-to-face interaction in interactive learning environments. International
Journal of Artificial Intelligence in Education, 11, 47–78.
Kalyuga, S., Ayres, P., Chandler, P., & Sweller, J. (2003). The expertise reversal effect. Educational Psychologist, 38(1), 23–31.
Keller, J. M., & Litchfield, B. C. (2002). Motivation and performance. In R. A. Reiser, & J. V. Dempsey (Eds.). Trends and issues in instructional design and
technology (pp. 83–98). Upper Saddle River, NJ/Columbus, OH: Merrill/Prentice Hall.
Kim, Y. (2007). Desirable characteristics of learning companions. International Journal of Artificial Intelligence in Education, 17(4), 371–377.
Kim, Y., Baylor, A. L., & Shen, E. (2007). Pedagogical agents as learning companions: The impact of agent emotion and gender. Journal of Computer Assisted
Learning, 23, 220–234.
King, W. J., & Ohya, J. (1996). The representation of agents: Anthropomorphism, agency, and intelligence. Paper presented at the human factors in computing
systems: CHI’96 Electronic Conference Proceedings.
Klauer, K. J. (1985). Framework for a theory of teaching. Teaching & Teacher Education, 1(1), 5–17.
Klauer, K. J., & Leutner, D. (2007). Lehren und Lernen. Einführung in die Instruktionspsychologie. Weinheim: Beltz.
Krämer, N. (2006). Soziale Wirkungen virtueller Helfer. Gestaltung und Evaluation von Mensch-Computer-Interaktionen. Stuttgart: Kohlhammer.
Kuhl, J. (2000). A functional-design approach to motivation and self-regulation. The dynamics of personality systems interaction. In M. Boekaerts, P. R.
Pintrich, & M. Zeidner (Eds.), Handbook of self-regulation (pp. 111–169). San Diego, San Francisco, New York: Academic Press.
Lester, J. C., Converse, S. A., Kahler, S. E., Barlow, S. T., Stone, B. A., & Bhogal, R. S. (1997). The persona effect: Affective impact of animated pedagogical agents.
In CHI’97: Proceedings of the SIGCHI conference on Human factors in computing systems (pp. 359–366). New York: ACM Press.
Lester, J. C., Towns, S. G., & FitzGerald, P. J. (1999). Achieving affective impact: Visual emotive communication in lifelike pedagogical agents. International
Journal of Artificial Intelligence in Education, 10, 278–291.
Lester, J. C., Voerman, J. L., Towns, S. G., & Callaway, C. B. (1999). Deictic believability: Coordinating gesture, locomotion, and speech in lifelike pedagogical
agents. Applied Artificial Intelligence, 13, 383–414.
Low, R., & Sweller, J. (2005). The modality principle in multimedia learning. In R. E. Mayer (Ed.), The Cambridge handbook of multimedia learning (pp. 147–158).
Cambridge: Cambridge University Press.
Lusk, M. M., & Atkinson, R. K. (2007). Animated pedagogical agents: Does their degree of embodiment impact learning from static or animated worked
examples. Applied Cognitive Psychology, 21, 747–764.
Mayer, R. E. (2005a). Principles for reducing extraneous processing in multimedia learning: Coherence, signaling, redundancy, spatial contiguity, and
temporal contiguity principles. In R. E. Mayer (Ed.), The Cambridge handbook of multimedia learning. Cambridge: Cambridge University Press.
Mayer, R. E., (2005b) Principles of multimedia learning based on social cues: Personalization, voice, and image principles. In R. E. Mayer (Ed.), The Cambridge
handbook of multimedia learning (pp. 201–214). Cambridge: Cambridge University Press.
Mayer, R. E., Dow, G. T., & Mayer, S. (2003). Multimedia learning in an interactive self-explaining environment: What works in the design of agent-based
microworlds. Journal of Educational Psychology, 95(4), 806–813.
Morch, A. I., Dolonen, J. A., & Naevdal, J. E. B. (2004). An evolutionary approach to prototyping pedagogical agents: From simulation to integrated system.
Journal of Network and Computer Applications, 29, 177–199.
Moreno, R. (2004). Decreasing cognitive load for novice students: Effects of explanatory versus corrective feedback in discovery-based multimedia.
Instructional Science, 32, 99–113.
Moreno, R., & Flowerday, T. (2006). Students' choice of animated pedagogical agents in science learning: A test of the similarity-attraction hypothesis on
gender and ethnicity. Contemporary Educational Psychology, 31, 186–207.
Moreno, R., & Mayer, R. E. (2002). Learning science in virtual reality multimedia learning environments: Role of method and media. Journal of Educational
Psychology, 94(3), 598–610.
Moreno, R., & Mayer, R. E. (2004). Personalized messages that promote learning in virtual environments. Journal of Educational Psychology, 96(1), 165–173.
Moreno, R., & Mayer, R. E. (2005). Role of guidance, reflection, and interactivity in an agent-based multimedia game. Journal of Educational Psychology, 97(1),
117–128.
Moreno, R., Mayer, R. E., & Lester, J. C. (2000). Life-like pedagogical agents in constructivist multimedia environments: Cognitive consequences of their interaction.
Paper presented at the World Conference on Educational Multimedia, Hypermedia and Telecommunications (ED-MEDIA 2000). Montreal, Canada.
Moreno, R., Mayer, R. E., Spires, H., & Lester, J. (2001). The case for social agency in computer-based teaching: Do students learn more deeply when they interact with animated pedagogical agents? Cognition and Instruction, 19(2), 177–213.
Moundridou, M., & Virvou, M. (2002). Evaluating the persona effect of an interface agent in a tutoring system. Journal of Computer-Assisted Learning, 18(3),
253–261.
Mousavi, S. Y., Low, R., & Sweller, J. (1995). Reducing cognitive load by mixing auditory and visual presentation modes. Journal of Educational Psychology,
87, 319–334.
Niegemann, H. M., Domagk, S., Hessel, S., Hein, A., Hupfer, M., & Zobel, A. (2008). Kompendium multimediales Lernen. Berlin: Springer.
Pekrun, R., Elliot, A. J., & Maier, M. A. (2006). Achievement goals and discrete achievement emotions: A theoretical model and prospective test. Journal of
Educational Psychology, 98, 583–597.
Pekrun, R., Goetz, T., Titz, W., & Perry, R. P. (2002). Academic emotions in students’ self-regulated learning and achievement: A program of qualitative and
quantitative research. Educational Psychologist, 37(2), 91–105.
Penney, C. G. (1989). Modality effects and the structure of short-term memory. Memory and Cognition, 17, 389–442.
Perez, R., & Solomon, H. (2005). Effect of a socratic animated agent on student performance in a computer-simulated disassembly process. Journal of
Educational Multimedia and Hypermedia, 14(1), 47–59.
Plant, E. A., Baylor, A. L., Doerr, C. E., & Rosenberg-Kima, R. B. (2009). Changing middle-school students’ attitudes and performance regarding engineering
with computer-based social models. Computers & Education, 53, 209–215.
Prendinger, H., Ma, C., & Ishizuka, M. (2007). Eye movements as indices for the utility of life-like interface agents: A pilot study. Interacting with Computers,
19, 281–292.
Prendinger, H., Mori, J., & Ishizuka, M. (2005). Using human physiology to evaluate subtle expressivity of a virtual quizmaster in a mathematical game.
International Journal of Human–Computer Studies, 62(2), 231–245.
Renninger, K. A., Hidi, S., & Krapp, A. (1992). The role of interest in learning and development. Hillsdale, NJ: Erlbaum.
Rheinberg, F., Vollmeyer, R., & Rollett, W. (2000). Motivation and action in self-regulated learning. In M. Boekaerts, P. R. Pintrich, & M. Zeidner (Eds.),
Handbook of self-regulation (pp. 503–529). San Diego, San Francisco, New York: Academic Press.
Roda, C., Angehrn, A., Nabeth, T., & Razmerita, L. (2003). Using conversational agents to support the adoption of knowledge sharing practices. Interacting
with Computers, 15(1), 57–89.
Ruttkay, Z., & Pelachaud, C. (2004). From brows to trust: Evaluating embodied conversational agents. Dordrecht: Kluwer Academic Publishers.
Shaw, E., Ganeshan, R., Johnson, W. L., & Millar, D. (1999). Building a case for agent-assisted learning as a catalyst for curriculum reform in medical education.
Paper presented at the International Conference on Artificial Intelligence in Education. Marina del Rey, USA.
Sweller, J. (2005). The redundancy principle in multimedia learning. In R. E. Mayer (Ed.), The Cambridge handbook of multimedia learning (pp. 159–168).
Cambridge: Cambridge University Press.
Takeuchi, A., & Naito, T. (1995). Situated facial displays: Towards social interaction. In I. Katz, R. Mack, L. Marks, M. B. Rosson, & J. Nielsen (Eds.). Human
factors in computing systems: CHI’95 conference proceedings (pp. 450–455). New York: ACM Press.
Um, E., Song, H., & Plass, J. L. (2007). The effect of positive emotions on multimedia learning. Paper presented at the World Conference on Educational
Multimedia, Hypermedia & Telecommunications (ED-MEDIA 2007). Vancouver, Canada.
Veletsianos, G. (2009). The impact and implications of virtual character expressiveness on learning and agent–learner interactions. Journal of Computer
Assisted Learning, 25, 345–357.
Walker, J. H., Sproull, L., & Subramani, R. (1994). Using a human face in an interface. In B. Adelson, S. Dumais, & J. Olson (Eds.). Human factors in computing
systems: CHI’94 conference proceedings (pp. 85–91). New York: ACM Press.
Wang, N., Johnson, W. L., Mayer, R. E., Rizzo, P., Shaw, E., & Collins, H. (2008). The politeness effect: Pedagogical agents and learning outcomes. International
Journal of Human–Computer Studies, 66, 98–112.
Zimmerman, B. J. (2000). Attaining self-regulation: A social cognitive perspective. In M. Boekaerts, P. R. Pintrich, & M. Zeidner (Eds.), Handbook of self-regulation (pp. 13–39). San Diego, CA: Academic Press.