Dogs (Canis familiaris) Account for Body Orientation but Not Visual
Barriers When Responding to Pointing Gestures
Evan L. MacLean, Christopher Krupenye, and Brian Hare
Duke University
In a series of four experiments we investigated whether dogs use information about a human’s visual
perspective when responding to pointing gestures. While there is evidence that dogs may know what
humans can and cannot see, and that they flexibly use human communicative gestures, it is unknown if
they can integrate these two skills. In Experiment 1 we first determined that dogs were capable of using
basic information about a human’s body orientation (indicative of her visual perspective) in a point
following context. Subjects were familiarized with experimenters who either faced the dog and accurately
indicated the location of hidden food, or faced away from the dog and (falsely) indicated the unbaited
container. In test trials these cues were pitted against one another and dogs tended to follow the gesture
from the individual who faced them while pointing. In Experiments 2–4 the experimenter pointed
ambiguously toward two possible locations where food could be hidden. On test trials a visual barrier
occluded the pointer’s view of one container, while dogs could always see both containers. We predicted
that if dogs could take the pointer’s visual perspective they should search in the only container visible
to the pointer. This hypothesis was supported only in Experiment 2. We conclude that while dogs are
skilled both at following human gestures, and exploiting information about others’ visual perspectives,
they may not integrate these skills in the manner characteristic of human children.
Keywords: dog, cognition, pointing, perspective taking, theory of mind
The key feature of cognition is that it allows for flexible problem solving. Dogs have shown flexibility in communicative tasks using human gestures and in tasks requiring them to discriminate what a human can or cannot see (Hare & Woods, 2013). However, it is unclear whether dogs exhibit the level of flexibility seen in human children. By 12–14 months, human children begin to integrate information about others' visual perspectives and knowledge states when interpreting communicative gestures (Liebal et al., 2009; Moll et al., 2006; Moll & Meltzoff, 2011; Tomasello et al., 2007; Tomasello & Haberl, 2003). The ability to model another individual's attention during communication has been proposed to scaffold learning during human language acquisition (Akhtar et al., 1996; Tomasello, 2008). Given the communicative flexibility observed in dogs—including skills that are not observed in our closest living primate relatives—dogs provide an unusual opportunity to test whether the skills we see in human children are shared by other species.

Dogs use a variety of human gestures to find hidden food or objects including dynamic and static pointing, gaze direction, and novel arbitrary communicative gestures (e.g., Hare et al., 1998; Lakatos et al., 2012; Riedel et al., 2006; Soproni et al., 2002). Several studies have ruled out the possibility that dogs simply rely on olfactory cues, proximity of the experimenter to the target, or the movement created by the gesture (e.g., Agnetta et al., 2000; Hare et al., 1998; Riedel et al., 2006; Soproni et al., 2002). Whereas dogs spontaneously use a variety of communicative cues, they do not use equally salient noncommunicative cues to find hidden food (Kaminski et al., 2012; Soproni et al., 2001; Udell, Giglio, et al., 2008).

Although a variety of species have been shown to exploit some cooperative-communicative gestures (e.g., Giret et al., 2009; Hernádi et al., 2012; Kaminski et al., 2005; Proops et al., 2010; Smet & Byrne, 2013), dogs are remarkable with respect to the wide range of cues that they use, and the flexibility with which they do so. The flexibility with which dogs use human gestures has not been observed in other species that have been directly compared with dogs (Gácsi, Gyori, et al., 2009; Hare et al., 2002; Hare et al., 2010; Miklósi et al., 2003; Virányi et al., 2008; but see Udell et al., 2008, 2010, 2013, 2012, 2011). While great apes and wolves can slowly learn to rely on one type of human gesture with extended contact with humans, they do not generalize this acquired skill to novel gestures or even slightly modified gestures (Povinelli et al., 1997; Tomasello et al., 1997; Virányi et al., 2008). In contrast, dogs not only show flexibility when interpreting novel cues, but also do not require intensive exposure to humans to exhibit these skills. Even dog puppies as young as 6–9 weeks, and puppies
reared in litters without intensive exposure to humans, use pointing
and arbitrary gestures well above chance levels (Hare et al., 2002;
Riedel et al., 2008; Virányi et al., 2008; but see Dorey et al., 2010).
While explicit training of dogs has in some cases been shown to
help them use human gestures even more faithfully (Elgier et al.,
2009; Mckinley & Sambrook, 2000), large surveys of dog owners
show no relationship between performance and a variety of rearing
history measures (Gácsi et al., 2009). Overall then, dogs' understanding of human gestures is more similar to the early emerging communicative skills seen in human children than is that of any other nonhuman species studied to date (Hare & Tomasello, 2005).
While dogs are similar to human children in their understanding
of communicative gestures they do not acquire human language or
culture (but for examples of label learning see Grassmann et al.,
2012; Griebel & Oller, 2012; Kaminski et al., 2004; Pilley & Reid,
2011). Thus, dogs provide a unique opportunity to examine hypotheses regarding sociocognitive processes unique to human development. One proposal for how children interpret gestures and
words so flexibly emphasizes their understanding of others’ attention. For example, Tomasello and Haberl (2003) gave children
experience playing with two toys together with an experimenter
(E1). Then this experimenter left the area while a second experimenter (E2) played together with the child and a third toy. Afterward, all three toys were placed together on the floor; E1 looked ambiguously in their direction and reacted with surprise while
requesting that the child “give it” to her. In this case the majority
of 18-month-olds selected the toy that E1 was unfamiliar with
(even though children were familiar with all three objects), presumably inferring through exclusion that E1’s attention was directed toward something that was new and unfamiliar to her. These
results, and related findings (e.g., Liebal et al., 2009), suggest that
it is the integration of 1) skills for following social cues and 2)
skills for reasoning about others’ attention that allows humans to
communicate in ways unlike any other species (Moll & Meltzoff,
2011; Tomasello & Haberl, 2003).
The current studies were designed to test this hypothesis by
evaluating whether dogs can use information about what humans
can or cannot see to respond appropriately to their gestures. Numerous studies suggest that dogs are sensitive to cues associated
with what others can and cannot see. First, several studies have
shown that dogs are less likely to obey a command, and more
likely to steal a prohibited piece of food when the experimenter
cannot see or hear them (Bräuer et al., 2004; Call et al., 2003;
Kaminski et al., 2013; Kundey et al., 2012; Schwab & Huber,
2006). Second, several studies illustrate that dogs are more likely
to beg or seek interaction from a person who can see the dog or the
reward being requested than someone who cannot (Cooper et al.,
2003; Gácsi et al., 2004; Udell et al., 2011). Third, multiple studies
have shown that dogs also make inferences about others’ visual
perspectives during communication. For example, Soproni et al.
(2001) reported that dogs used a human’s gaze direction to find
hidden food when the experimenter turned and oriented toward a
baited container, but not when the experimenter oriented into space
above the container. Further, Kaminski et al. (2009) have shown
that dogs use information about what a human can and cannot see
when inferring the referent of a verbal command. In this study, two objects were placed on the dog's side of two barriers, one transparent and one opaque, such that the experimenter could only see one object while the dog could see both. When the experimenter gave a "fetch"
command dogs were more likely to retrieve the object that the
experimenter could see. Thus, current evidence suggests that dogs
have basic perspective taking skills, and may use these skills in
communicative contexts.
In the studies presented here we investigated whether dogs
integrate information about the communicator’s visual perspective
when responding to a pointing cue. We hypothesized that if dogs
were capable of inferring a human’s visual perspective during
communication, they should interpret gestures as more likely to
refer to an object that the communicator could see than one that he
could not. In our first study we explored whether dogs can use
basic information about an experimenter’s body orientation (and
corresponding visual perspective) when deciding whether or not to
follow a pointing cue. In Experiments 2–4 the experimenter
pointed ambiguously toward two possible hiding locations, both of
which were visible to the dog. However, in test trials only one of
these locations was visible to the experimenter. We predicted that
if dogs, like children, can reason about shared experiences between
themselves and their communicative partner they should strategically search in the only location visible to both individuals.
General Methods
Participating dogs were recruited through the Duke Canine
Cognition Center (DCCC) Web site. Owners from the Raleigh-Durham–Chapel Hill, NC, area followed a link on the DCCC Web site to fill out a Dog Registration Questionnaire (http://bit.ly/AmWURq), and their information was then added to our database. The database was screened to remove all dogs with an owner-reported history of aggression or debilitating health problems,
including any vision-related impairment such as cataracts. Dogs
were then selected from the database and their owners were
contacted via email. Owners who were willing to participate
brought their dogs for a 1-hr-long session at the DCCC. In the
majority of cases, the owner was not present in the testing room.
However, if the dog was too nervous or distracted with the owner
outside of the room, the owner sat in the room behind the dog and
out of sight during trials. In all experiments, if a dog met
experiment-specific abort criteria (see below) all data from this
subject were excluded from analysis of that experiment and this
subject was not retested. Owners participated on a voluntary basis,
and were offered free parking. All dog owners signed informed-consent forms before participation. Testing procedures adhered to
regulations set forth by the Duke Institutional Animal Care and
Use Committee (IACUC #303–11–12). All sessions were digitally
recorded from a ceiling-mounted camera, or a digital camcorder in
back of the testing room.
Experiment 1
Method
Subjects and Apparatus. Forty dogs completed this experiment and eight dogs were excluded because of the abort criteria
specified below. Food rewards were hidden beneath small opaque
plastic cups (diameter 10 cm, height 12 cm) that were positioned
2 m apart from one another, each 2.4 m away from the dog’s
starting position. With the exception of warm-up trials in which the
baiting process was visible to the dog, the experimenter occluded
the baiting and sham baiting of each cup using a poster board
occluder (35 × 50 cm).
Procedure
Sit Test. We first conducted four trials to assess subjects’
responsiveness to a “sit” command when the experimenter was
either facing toward or away from the dog. The dog handler (H)
positioned the dog at the start line 1 m away from the experimenter
(E1). Depending on the condition, E1 either faced and looked at the
dog, or had her back turned. Once the dog was in position and standing
upright, H said “ok” to indicate that E1 should command the dog to
sit. At this point E1 said “(dog’s name), sit!” and subjects were
given 10 s to obey the command. If the dog sat or lay down within
this 10 s period, E1 praised the dog and the next trial was
administered. If the dog did not sit or lie down within 10 s after
the command the dog was not praised and the next trial was
administered. We conducted four “sit” trials and whether E1 faced
toward or away from the dog was alternated between trials. The
condition of the first trial was counterbalanced between subjects.
Warm-Ups. Next we conducted a series of warm-up trials to
familiarize dogs with the process of finding food in the containers
to be used during the test. Warm-ups consisted of two phases for
which dogs were required to meet a criterion before advancing to
the main test.
Phase 1. Two overturned plastic cups were positioned 3 m
apart, 2.4 m in front of the dog at the left and right sides of the
room. At the start of each trial H positioned the dog at the start line
(midline of the room, equidistant to the two choices) while E1
called the dog’s name and showed it a piece of food. E1 then
walked directly to one of the two overturned cups, visibly placed
the food underneath it, and then walked to the back of the room
and turned her back. H then let the dog’s leash go slack and said
“ok!” encouraging the dog to make a choice. Throughout this
experiment choices were defined as the dog’s snout or paw entering a 10 cm radius (marked on the floor) around either cup. Dogs
were allowed 30 s to make a choice, and if no choice was made
during this period, the trial was repeated. If the dog chose the
baited cup, E1 lifted the cup, praised the dog, and allowed the dog
to eat the food. If the dog chose the unbaited cup, E1 lifted this cup
to show that it contained no food, while H walked the dog back to
the start line. The baited location was counterbalanced across trials
and the same side was never baited on more than two consecutive
trials. The side that was baited on the first trial was counterbalanced between subjects. Dogs were required to choose correctly on
4/5 consecutive trials to complete Phase 1 of the warm-ups.
Phase 2. This phase of the warm-ups was identical to Phase 1
with the exception that E1 stood near both cups before the dog’s
choice. On each trial, after showing the dog the treat, E1 walked to
the cup to her right and either visibly baited this cup, or stood passively behind it, and then proceeded to the cup to her left again
either visibly baiting the cup, or standing passively behind it. E1
then walked to the back of the room, centered between the two
cups with her back turned to the subject and dogs were allowed to
make a choice. Dogs were required to choose correctly on 4/5
consecutive trials to complete Phase 2 of the warm-ups.
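To make the advancement criterion concrete, the following small Python sketch (not part of the study materials) shows one plausible reading of the "4/5 consecutive trials" rule, namely that the dog advances once any window of five consecutive trials contains at least four correct choices; the function name and this interpretation are assumptions for illustration.

```python
# Illustrative helper (not from the study materials): checks whether any window
# of five consecutive warm-up trials contains at least four correct choices.
def met_criterion(outcomes, window=5, required=4):
    """outcomes: list of booleans, True = correct choice on that trial."""
    for start in range(len(outcomes) - window + 1):
        if sum(outcomes[start:start + window]) >= required:
            return True
    return False

# Example: one error followed by four correct choices still meets the criterion.
print(met_criterion([True, False, True, True, True, True]))  # True
```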
Test. During the test dogs gained experience with two novel
experimenters (E2 and E3) who provided pointing cues. Within a
block of five trials, one of these individuals was designated the
“facing pointer” while the other was designated the “back-turned
pointer.” The facing pointer always stood facing the dog and pointed to
the cup containing the hidden food. The back-turned pointer stood
with her back to the dog and pointed to the cup that did not contain
the food. For the first four trials in each block (single pointer trials)
only one experimenter (E2 or E3) performed the pointing cue. For
the fifth trial in each block (double pointer trials), both E2 and E3
simultaneously performed the pointing cue (each experimenter
indicating a different cup). In all trials, E1 performed the baiting.
In single pointer trials the experimenter (E2 or E3) stood 1.82 m
directly in front of the dog. E1 showed the dog a piece of food and
then hid the food behind the occluder. E1 then baited or sham
baited each of the cups (always starting with the cup to her right),
walked to the back of the room carrying the occluder with her, and
turned her back to the dog. E1 then said “ready, set, point” cuing
the experimenter to give the pointing gesture. The pointing gesture
was performed with the arm closest to the indicated cup with index
finger extended and this posture was maintained until the dog
made a choice or the trial timed out. Once the pointing cue was
given H slacked the dog’s leash and said “ok” allowing the dog to
make a choice. If a subject chose the baited container, she was
allowed to consume the reward whereas choices to the unbaited
container yielded no reward. Because the back-turned pointer
always pointed to the incorrect cup, following this gesture led to
incorrect responses, whereas following the gesture from the forward-facing experimenter led to correct responses.
Double pointer trials were identical to single pointer trials with
the following exceptions. Both E2 and E3 stood side by side in
front of the dog with one individual facing forward and the other
facing backward (consistent with their roles in the preceding four
trials). After E1 baited the cups, she walked to the back of the
room, turned her back, and said “ready, set, point!” to instruct the
experimenters to point to the cups. In these trials, only the cup
indicated by the facing pointer contained the reward. As in single
pointer trials, only choices to the baited cup yielded reward. We
did not include trials to rule out the use of olfactory cues because
numerous studies have documented that dogs do not use odor cues
in this context (Bräuer et al., 2006; Hare et al., 2002, 1998; Hauser
et al., 2011; Ittyerah & Gaunet, 2009; Mckinley & Sambrook,
2000; Riedel et al., 2006, 2008; Szetei et al., 2003; Udell, Giglio,
et al., 2008) and odor controls in our subsequent studies replicate
this phenomenon (see below).
Design and Analysis
We conducted 20 trials per subject, composed of four 5-trial
blocks. Each of the 5-trial blocks began with four single pointer
trials (two facing pointer, two back-turned pointer) followed by a
single double pointer trial. The identity of the facing and back-turned pointers was consistent within a block but counterbalanced
between blocks to assure that differences between conditions were
related to the positional orientations of the pointers, and not
reputational differences between the experimenters. The role of E2
and E3 (facing or back-turned pointer) was alternated with each
successive block of trials. Within each block the forward and
backward pointers each indicated the left and right cups on one
trial (single pointer trials) and the cup that the forward pointer
indicated during double pointer trials was counterbalanced across
the session. Lastly, the order of which experimenter provided the
pointing cue during single pointer trials followed an A-B-B-A
design, and which experimenter provided the first pointing cue in
each block (facing pointer or back-turned pointer) was counterbalanced across the session.
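For concreteness, the block structure described above can be expressed as a short scheduling sketch. The following Python snippet is illustrative only and is not part of the study materials; the experimenter labels, dictionary fields, and the simple randomization scheme are assumptions made for the example.

```python
import random

# A minimal illustrative sketch of the Experiment 1 trial structure described
# above: four 5-trial blocks, each containing four single-pointer trials in an
# A-B-B-A order followed by one double-pointer trial, with E2 and E3 swapping
# the facing / back-turned roles between blocks.
def exp1_schedule(first_facing="E2", seed=0):
    rng = random.Random(seed)
    double_cups = ["left", "right", "right", "left"]  # counterbalanced across blocks
    trials = []
    for block in range(4):
        # Roles alternate between successive blocks.
        facing = first_facing if block % 2 == 0 else ("E3" if first_facing == "E2" else "E2")
        back_turned = "E3" if facing == "E2" else "E2"
        # A-B-B-A order of which condition (facing vs. back-turned) provides the cue.
        first = rng.choice(["facing", "back_turned"])
        second = "back_turned" if first == "facing" else "facing"
        order = [first, second, second, first]
        # Within a block each pointer indicates the left and the right cup once.
        cups = {"facing": ["left", "right"], "back_turned": ["left", "right"]}
        for cond in order:
            cup = cups[cond].pop(rng.randrange(len(cups[cond])))
            trials.append({"block": block + 1, "type": "single", "condition": cond,
                           "pointer": facing if cond == "facing" else back_turned,
                           "indicated_cup": cup})
        # Fifth trial: both experimenters point simultaneously to different cups.
        trials.append({"block": block + 1, "type": "double", "facing_pointer": facing,
                       "back_turned_pointer": back_turned,
                       "cup_indicated_by_facing": double_cups[block]})
    return trials

if __name__ == "__main__":
    for trial in exp1_schedule():
        print(trial)
```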
For the sit test we compared the number of trials that dogs
obeyed the command when the experimenter was facing forward
versus backward using a paired-sample t test. In pointing trials we
compared performance in both conditions of the single pointer
trials (facing and back-turned pointer) to chance using one-sample
t tests. We compared performance between these conditions using
paired-sample t tests. For double pointer trials we compared subjects’ performance to chance expectation using one-sample t tests
with the number of trials in which subjects followed the forward-facing pointer as the dependent measure. To explore the possibility
of learning within the session we compared the first and second
half of each trial type (facing, back-turned, and double-pointer)
using paired-sample t tests. All statistical tests were two-tailed.
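As an illustration of the analyses described above, the following sketch shows how the chance-level and between-condition comparisons could be computed with SciPy. The data are randomly generated placeholders rather than the study's data, and the variable names are assumptions for the example.

```python
# A minimal sketch (not the authors' code) of the Experiment 1 analyses,
# using hypothetical per-dog proportions. Chance is 0.5 with two cups.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_dogs = 40

# Hypothetical proportion of trials each dog followed the point, per condition.
facing = rng.uniform(0.3, 0.9, n_dogs)        # facing-pointer single trials
back_turned = rng.uniform(0.3, 0.8, n_dogs)   # back-turned-pointer single trials
double = rng.uniform(0.2, 0.9, n_dogs)        # double-pointer trials (followed facing pointer)

# One-sample t tests against chance expectation (0.5), two-tailed by default.
print(stats.ttest_1samp(facing, 0.5))
print(stats.ttest_1samp(back_turned, 0.5))
print(stats.ttest_1samp(double, 0.5))

# Paired-sample t test comparing the facing and back-turned conditions.
print(stats.ttest_rel(facing, back_turned))

# Learning check: first vs. second half of facing-pointer trials (hypothetical values).
facing_first_half = rng.uniform(0.3, 0.9, n_dogs)
facing_second_half = rng.uniform(0.3, 0.9, n_dogs)
print(stats.ttest_rel(facing_first_half, facing_second_half))
```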
On all trials, dogs were allowed 30 s to make a choice. If a
subject did not choose within 30 s the trial was repeated. If a
subject did not choose on five consecutive trials, or on a total of 10
trials over the course of the session, the session was aborted. If
subjects were unwilling to consume food placed on the floor, or
showed excessive stress at any point during the test, the session
was aborted. Partial data from aborted sessions were excluded
from analysis and these subjects were not retested. A second coder
naïve to the purpose of the study coded 20% of trials from video
to assess interrater reliability. Reliability was almost perfect (Landis & Koch, 1977) both for the sit test and the pointing trials (κ: sit test = .93; pointing trials = .91) and live coding was used in cases
of disagreement.
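The interrater reliability statistic referred to above (Cohen's kappa) can be computed as in the following sketch; the coder labels are invented for illustration and this is not the authors' code.

```python
# A minimal sketch of the reliability check: Cohen's kappa between the live
# coder and a second, naive video coder on a hypothetical subsample of trials.
from sklearn.metrics import cohen_kappa_score

live_coder  = ["left", "right", "right", "left", "left", "right", "left", "right"]
video_coder = ["left", "right", "right", "left", "right", "right", "left", "right"]

kappa = cohen_kappa_score(live_coder, video_coder)
print(f"Cohen's kappa = {kappa:.2f}")  # values near 1 indicate almost perfect agreement
```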
Figure 1. The main findings from Experiment 1. Dogs were more likely
to follow a pointing gesture from an individual who faced them compared
with one with his back turned. The bar for “both” indicates the percentage
of trials that dogs followed the gesture from the forward facing pointer
when both the forward and back-turned pointers gestured simultaneously
toward different locations. The dotted line indicates chance expectation and
error bars reflect the SEM. Asterisks indicate significance at p < .05.
Results
In the sit test, dogs obeyed the sit command significantly more often when the experimenter faced the dog (56 ± 7%), than when the experimenter faced away from the dog when giving the command (29 ± 6%; t(39) = 3.73, p < .01). In the single pointer trials, dogs followed the direction of the pointing cue more frequently than expected by chance when the experimenter faced forward and indicated the correct location (Figure 1; 62 ± 3%; t(39) = 4.13, p < .01) but not when the experimenter faced backward and indicated the incorrect location (55 ± 3%; t(39) = 1.58, p = .12). The difference between these conditions was significant, with dogs following the forward facing pointer's cues more frequently than the backward facing pointer's (t(39) = 2.27, p = .03). Dogs followed the facing pointer's gestures at higher levels in the second compared with the first half of trials (t(39) = −2.25, p = .03), indicating a small learning effect, but followed gestures from the back-turned pointer at similar levels across the session (t(39) = −0.71, p = .48).

When both the forward and backward pointers simultaneously gestured to different cups, dogs had a significant preference to follow the gesture from the forward facing pointer (mean percent choices to cup indicated by forward facing pointer: 58 ± 4%, t(39) = 2.18, p = .04) and their tendency to do so did not differ between the first and second half of the session (t(39) = 0.12, p = .90).
Lastly, we assessed whether dogs' sensitivity to the experimenter's body orientation in the sit test related to their tendency to follow the forward facing pointer in the double pointer trials, both possible indicators of visual perspective taking. Specifically, we assessed the correlation between a difference score from the sit test (percent trials dog sat when E1 faced dog minus percentage trials dog sat when E1's back was turned) and the percentage of trials that dogs followed the pointing cue from the forward facing pointer in double pointer trials. These scores were not correlated (R = .08, p = .61).

Discussion
Experiment 1 showed that dogs given experience with a forward
facing pointer who accurately indicated the location of hidden
food, and a backward facing pointer who deceptively indicated the
incorrect location, preferred to follow the pointing cue from the
forward facing pointer when these gestures were pitted against one
another. In trials when the back-turned pointer indicated the incorrect location, dogs still followed the direction of the point (to
the unbaited cup) on the majority of trials. This finding replicates
two other studies demonstrating that dogs tended to follow a
dishonest pointing cue despite repeated experience in this context
(Kundey et al., 2010; Petter et al., 2009), and another study
showing that dogs reliably follow points from a backward-facing
pointer (Udell et al., 2013).
The findings from Experiment 1 suggest that dogs are sensitive
to positional cues that relate to a human’s visual perspective
toward the dog, and can use this information in a choice context.
However, these findings may be explained by a variety of mechanisms unrelated to the dog’s ability to consider a human’s visual
perspective. For example, dogs may have used a simple discriminative rule in which gestures from a forward facing individual
were interpreted as a reliable cue whereas gestures from a backward facing individual were not. These contingencies may have
been learned over the lifetime or within the experiment itself.
Supporting the latter possibility, dogs followed gestures from the
forward-facing pointer more faithfully in the second compared
with the first half of single-pointer trials. Nonetheless, dogs exhibited some flexibility in their gesture following in that they did
not merely rely on the proximity of the pointed finger to a container, and instead used a combination of information from both
the experimenter’s body orientation, and the direction of the
pointing cue. Although these behaviors do not demonstrate
visual perspective taking, they illustrate sensitivity to basic
positional cues related to visual perspective during communication, a prerequisite for inferences about visual perspective in
this context. Thus, the results from Experiment 1 show that
dogs are capable of exploiting basic information about an
experimenter’s body orientation in the context of a point following paradigm. In Experiments 2– 4 we build on this foundation and address the question of whether dogs use information about the pointer’s visual perspective when interpreting the
referent of a pointing gesture.
Experiment 2
In Experiment 2 we evaluated whether dogs spontaneously
exploit information about the pointer’s visual perspective when
inferring to which of two possible hiding locations the experimenter’s pointing cue referred.
Method
Subjects and Apparatus. Thirty dogs (none of which had
participated in Experiment 1) completed the experiment and five
dogs were excluded because of the abort criteria specified below.
Food was hidden inside round buckets (height 36 cm, diameter 23
cm) placed 1.65 m in front of, and 60 cm to the left and right of the dog.
For smaller dogs that could not easily reach into these containers we
used shorter containers (height 18 cm). In some trials, a large
barrier (130 × 80 cm) was placed between these containers, occluding the experimenter's (but not the dog's) view of one of the
containers (see below).
Procedure
Sit Test and Warm-Ups. As in Experiment 1 the procedure
began with the sit test and a series of warm-up trials to familiarize the dog with finding food in the buckets. The procedure
for the sit test and warm-up trials was identical to Experiment
1 with the following exceptions. Before the warm-up trials dogs
received two trials in which only a single container was positioned and visibly baited directly in front of the dog. Dogs were
required to approach and eat food out of the container on two
trials before proceeding to warm-up trials involving two containers. The next stage of warm-ups was identical to Phase 2
warm-ups in Experiment 1, and again dogs were required to
meet a criterion of 4/5 consecutive correct choices. Then in
Phase 3 warm-ups E1 showed the dog a piece of food, hid the
food behind the occluder, and walked to one of the containers
where he placed the food in the container, hiding this process
from the dog using the occluder (i.e., an invisible displacement).
E1 then returned to a central position between the containers
and H said “ok” releasing the dog to make a choice. Because E1
only visited one of the two locations, dogs were required to
infer that the food was invisibly displaced from behind the
occluder to under the container that E1 had approached. These
trials served to familiarize dogs with the baiting process used in
the test, in which the containers were baited or sham baited
behind the occluder. Dogs were required to choose correctly on
4/5 consecutive trials to advance to the test. In all warm-up
trials dogs were given 15 s to make a choice and the trial was
repeated if no choice was made within this time limit. Throughout this experiment an approach was scored as the dog making
physical contact with the container using her snout or paw, or
the dog’s snout crossing the edge of the container.
Test. The test consisted of two trial types: directional trials
and ambiguous trials. Dogs began in the same location on all
trial types, 1.65 m away from and centered between the two
containers. At the start of the trial E1 showed the dog a piece of
food, hid the food behind the occluder, and then approached and
baited or sham baited each container, always approaching the
container to his right first. In directional trials E1 then stood 2
m from the dog, centered between the two containers, said,
“look!”, and pointed to the baited container. The pointing cue
was performed with the arm contralateral to the container being indicated, with the index finger extended and the head oriented toward the indicated container. This gesture was maintained
until a choice was made or the trial timed out. If a dog chose the
baited container she was allowed to consume the reward while
choices to the unbaited container ended the trial without reward.
If no choice was made within 15 s the trial was repeated.
In ambiguous trials E1 showed the dog a single reward but
baited both containers (occluding this process from the dog)
before moving to a location 2.74 m to the side of the midline
with both containers directly in front of him (Figure 2A, B). E1
then raised his arm 90 degrees with index finger extended, said
“look!”, and pointed along a vector running in the direction of
both containers. As in directional trials this gesture was maintained until a choice was made or the trial timed out. On one
half of ambiguous trials the barrier was positioned between the
two containers so that E1 could only see the container nearest
to him (Figure 2A), while dogs could see both containers. On
the other half of ambiguous trials the barrier was not present
and E1 could see both containers from the location where he
pointed (Figure 2B). Choices in ambiguous trials were nondifferentially reinforced.
We predicted that if dogs were capable of taking E1’s visual
perspective they should interpret E1’s pointing cue differently
between the trials in which the barrier was and was not present.
Specifically, when the barrier was present and E1 could only see
one of the two containers, we predicted that dogs should interpret
E1’s gesture as indicating the only container visible to E1, despite
the fact that dogs could see both containers. In contrast, when the
barrier was absent and E1 could see both containers, we predicted
that subjects should interpret the pointing gesture as being equally
likely to indicate either container. On ambiguous trials if dogs
approached E1 (crossing a threshold 1.82 m off center and in front
of E1), instead of approaching the containers, the trial was aborted
and repeated.
In addition to the test trials we conducted four odor control
trials with each subject. These trials were identical to the
directional trials with the exception that E1 did not provide a
pointing cue to the food’s location. These trials served to verify
that dogs could not locate the reward using olfaction.
Figure 2. The pointing cue on ambiguous trials in Experiments 2 and 3. In Experiment 2 the visual barrier
occluded E1’s view of the distant container (A), but E1 could see both containers when this barrier was removed
(B). In Experiment 3 the visual barrier occluded E1’s view of the proximal container (C), but E1 could see both
containers when this barrier was removed (D).
Design and Analysis
We conducted 20 trials per subject consisting of eight directional trials, eight ambiguous trials, and four odor control trials.
Within the ambiguous trials the barrier blocking E1’s view of one
of the containers was present on half of the trials. Trials were
administered in blocks of two directional trials followed by two
ambiguous trials alternating across the session. The odor control
trials were administered in blocks of two trials: 1) midway through the session, and 2) at the end of the session. The container that was
baited was counterbalanced within each block of directional trials
and odor control trials. On ambiguous trials the side from which
E1 pointed was counterbalanced across the session and the side
from which E1 pointed on the first set of ambiguous trials was
counterbalanced between subjects.
Analysis of the sit test was identical to Experiment 1. We
compared dog’s accuracy on the directional, control, and ambiguous trials to chance expectation using one-sample t tests. In the
ambiguous trials we compared the percent of choices to the container nearest to E1 when the barrier was and was not present using
paired sample t tests. We predicted that if dogs were sensitive to
E1’s visual perspective they should choose the container nearest to
E1 more often when the barrier was present and this was the only
container that E1 could see than when the barrier was absent and
E1 could see both containers.
On all trials, dogs were allowed 15 s to make a choice. If a
subject did not choose within 15 s the trial was repeated. If at any
point dogs did not choose on three consecutive trials then two
additional Phase 2 warm-up trials were administered to refamiliarize subjects with the process of searching for food in the
containers. Lastly, if a subject did not make a choice on six
consecutive trials, or a total of 10 trials across the test, the session
was aborted. Partial data from aborted sessions were excluded
from analysis and these subjects were not retested. A second coder
naïve to the purpose of the study coded 20% of trials from video
to assess interrater reliability. Reliability was perfect for all measures.
Results
As in Experiment 1, dogs obeyed the "sit" command more often when the experimenter faced the dog (M = 55 ± 8%) than when his back was turned (M = 43 ± 7%); however, this difference was not significant (t(29) = 1.49, p = .15). In directional trials dogs followed the pointing cue to the baited container on 75 ± 4% of trials, significantly more often than expected by chance (t(29) = 6.3, p < .01). Dogs did not differ from chance in the odor control trials when no social cue was provided (M = 47 ± 3%, t(29) = −1.0, p = .33).
In the ambiguous trials dogs chose the container nearest to E1 more often than expected by chance both when the barrier was and was not present (barrier present: M = 90 ± 4%, t(29) = 10.8, p < .01; barrier absent: M = 83 ± 4%, t(29) = 7.78, p < .01). Thus, dogs had a strong bias to choose this container regardless of whether E1 could see one or both containers. However, as predicted if dogs were capable of taking E1's visual perspective, subjects chose the container nearest to E1 more often when this was the only container E1 could see than when E1 could see both containers (Figure 2; t(29) = 2.1, p = .05).
To assess whether sensitivity to E1’s visual perspective and
point following skills were related we correlated the difference
score from the sit test (percent trials dog sat when E1 faced dog
minus percentage trials dog sat when E1’s back was turned) with
performance on the pointing trials. Performance on the directional
pointing trials was positively correlated with this difference score
suggesting that dogs who were most responsive to E1’s body
orientation in the sit test also tended to follow pointing cues at a
high level (R = .36, p = .05). In addition, the difference score
from the sit test was also positively correlated with the percent of
trials that dogs chose the container nearest to E1 when this was the
only container E1 could see.
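A minimal sketch of the difference-score correlation described above is given below; the per-dog values are randomly generated placeholders rather than the study's data, and the variable names are assumptions for the example.

```python
# A minimal sketch (not the authors' code) of correlating sit-test sensitivity
# to body orientation with point-following performance, per dog.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_dogs = 30

sat_facing = rng.uniform(0, 1, n_dogs)        # proportion of sit trials obeyed, E1 facing
sat_back_turned = rng.uniform(0, 1, n_dogs)   # proportion of sit trials obeyed, E1 back turned
diff_score = sat_facing - sat_back_turned     # sit-test difference score

directional_accuracy = rng.uniform(0.4, 1.0, n_dogs)  # proportion correct on directional trials

r, p = stats.pearsonr(diff_score, directional_accuracy)
print(f"R = {r:.2f}, p = {p:.2f}")
```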
Discussion
The main finding from Experiment 2 was that dogs were more
likely to follow E1’s ambiguous pointing cue to a container near to
E1 when this was the only container E1 could see, than when E1
could see both containers. Critically dogs could see both containers
in each of these conditions making it unlikely that they used an
egocentric rule regarding what they could see themselves when
choosing. This finding is consistent with the idea that when interpreting the referent of E1’s pointing cue dogs inferred which of the
two containers E1 was indicating by considering E1’s visual
perspective at the moment the gesture was provided.
However, there are two alternative hypotheses that may explain dogs’ behavior without invoking an understanding of E1’s
visual perspective. First, it is possible that dogs used a geometric heuristic to follow the direction of E1’s pointing cue along
a vector until this line was interrupted by a physical barrier
(e.g., Tomasello et al., 1999). In this case, the vector running
from E1’s extended finger toward the containers would have
intersected with the barrier (when it was present) leading dogs
to search more frequently on E1’s side of the barrier. A second
possibility is that dogs were averse to following a path that
created visual separation between themselves and E1 (e.g., Hare
et al., 2006; Melis et al., 2006). In this case, when the barrier
was present dogs would have selectively searched in the container nearest to E1 to avoid moving behind the barrier and out
of visual contact with E1. We address each of these possibilities
in Experiment 3.
Experiment 3
The results of Experiment 2 suggest that dogs may exploit
information about a human’s visual perspective when inferring the
referent of a pointing cue. However, dogs were strongly biased
toward choosing the container nearest to E1 (as in Lakatos et al.,
2012) in both of the test conditions, a finding that may be explained by multiple phenomena (discussed above). In Experiment
3 we replicated the procedure from Experiment 2 but switched
which of the two containers E1 could see by varying the size and
position of the visual barrier. Specifically, in Experiment 3 a small
barrier prevented E1 from seeing the container nearest to him
while allowing him to see the distal container.
Subjects and Apparatus
Twenty dogs (none of which had participated in Experiments 1
or 2) completed the experiment and four dogs were excluded
because of the abort criteria (N = 3) specified below or experimental error during the session (N = 1). The apparatus was
identical to Experiment 2 with the following exceptions. The
containers were positioned 1.82 m from one another, each 91 cm
from the midline. We used a small visual barrier (50 × 40 cm)
positioned between E1 and the container nearest to him. This
barrier occluded E1’s view of the container nearest to him (in
ambiguous trials) but allowed E1 to see the distal container across the
room.
Procedure
The procedure for the sit test, warm-ups, directional trials, and
odor control trials was identical to Experiment 2 with the exception that E1 gave the dog a treat at the end of each trial of the sit
test, regardless of the dog’s behavior. This change was implemented to encourage attentiveness to the experimenter during
these trials. In ambiguous trials E1 showed the dog a single reward
but baited both containers (occluding this process from the dog)
before moving to a location 1.68 m to the side of the midline with
both containers directly in front of him. E1 then kneeled, raised his
arm 90 degrees with index finger extended, said “look!”, and
pointed along a vector running in the direction of both containers.
On one half of ambiguous trials the small barrier was positioned
between E1 and the container nearest to him such that it occluded
his view of this container but allowed him to see the distal
container across the room (Figure 2C). On the other half of
ambiguous trials the barrier was not present and E1 could see both
containers from the location where he pointed (Figure 2D).
Choices in ambiguous trials were nondifferentially reinforced. In
these trials if dogs approached E1 rather than the containers
(crossing a line 107 cm from center) the trial was aborted and
repeated.
Design and Analysis
The design and analysis were identical to Experiment 2. We
predicted that if dogs were sensitive to E1’s visual perspective they
should choose the container distal to E1 more often when the
barrier was present and this was the only container that E1 could
see than when the barrier was absent and E1 could see both
containers. The trial length, abort criteria, and procedure for repeating trials on which there was no choice were identical to
Experiment 2. A second coder naïve to the purpose of the study
coded 20% of trials from video to assess interrater reliability.
Reliability was perfect for the sit test, almost perfect (Landis &
Koch, 1977) for the pointing trials (κ = .95), and live coding was
used in cases of disagreement.
Results
Replicating the pattern from Experiments 1 and 2, dogs were more likely to obey E1's "sit" command when E1 faced the dog (M = 65 ± 10%) than when his back was turned (M = 45 ± 9%; t(19) = 2.18, p = .04). Again, in the directional trials dogs followed E1's pointing cue above chance expectation (M = 79 ± 5%, t(19) = 6.3, p < .01), but performed at chance in odor control trials when the relevant social cue was omitted (M = 51 ± 5%, t(19) = .27, p = .79).
In the ambiguous trials dogs were highly biased to choose the container nearest to E1 both when the barrier was present and E1 could not see this container (mean choices to container near E1 = 96 ± 2%, t(19) = 22.58, p < .01), and when the barrier was absent and E1 could see this container (mean choices to container near E1 = 93 ± 4%, t(19) = 11.57, p < .01). There was no significant difference in dogs' choices between these two conditions (Figure 2; t(19) = 1.83, p = .08) and the means were in the opposite direction to the prediction if dogs were capable of inferring E1's visual perspective when interpreting the pointing cue. In contrast to Experiment 2, there was no correlation between the difference score from the sit test and performance in directional trials (R = −.02, p = .95) or the percent of trials that dogs chose the container E1 could see when only one container was visible to him (R = −.04, p = .88).
Discussion
The main finding from Experiment 3 was that dogs’ choices
were biased primarily by the containers’ proximity to E1, rather
than by E1’s visual perspective toward the containers. This finding
contrasts with that from Experiment 2 in which dogs were more
likely to choose the only container that E1 could see when a barrier
occluded his view of the other container. This negative result
should be interpreted cautiously as many factors may have influenced dogs’ behavior in the ambiguous trials. For example, dogs
may have had difficulty inhibiting an impulse to approach E1, leading them to almost invariantly choose the container nearest to him (mean choices to container near E1 across conditions = 94%).
This possibility is supported by numerous studies documenting the
constraining role of inhibitory control on dogs’ problem solving
skills (Bray et al., 2014; Osthaus et al., 2010; Pongrácz et al., 2001;
Tapp et al., 2003; Wobber & Hare, 2009). Second, the small
barrier may not have been perceived as an obvious physical barrier
to E1’s line of sight in the same way as the larger barrier used in
Experiment 2. Consequently, even if dogs were capable of taking
E1’s visual perspective, other task demands may have overshadowed these abilities. In Experiment 4 we control for these possibilities using a design with reduced demands on inhibitory control
and a larger visual barrier between E1 and the hidden container.

Figure 3. The main findings from Experiments 2 and 3. In Experiment 2 subjects chose the container nearest to E1 more often when this was the only container he could see (because the barrier occluded the other container) than when E1 could see both containers. In Experiment 3 dogs did not selectively search in the container E1 could see when the barrier occluded E1's view of the other container. Error bars reflect the SEM. Asterisks indicate significance at p < .05. NS = not significant.
Experiment 4

Experiments 2 and 3 showed that dogs' choices were most influenced by the pointer's proximity to, rather than his perspective toward, the containers where food could be hidden. In our final experiment we controlled for this bias by replicating the basic procedure from Experiments 2 and 3 in a context where E1 was equidistant to both containers but sometimes could only see one because of the presence of a large visual barrier. Our test paradigm was similar to that from Experiments 2 and 3 as well as a study by Kaminski et al. (2009) in which dogs were shown to take the visual perspective of an experimenter who communicated verbally with them.

Subjects and Apparatus
Thirty dogs completed the experiment and seven dogs were
excluded because of the abort criteria specified below. One of the
subjects had previously participated in Experiment 1, and three
subjects had previously participated in Experiment 3. The apparatus was identical to Experiment 3 with the following exceptions.
The containers were positioned 1.82 m in front of the dog, and 1.82
m apart from one another, equidistant from the midline. When
providing the pointing cue E1 sat in a chair 3 m in front of the dog,
and equidistant to both containers. On some trials we positioned
two large barriers (height 1 m, width 0.8 m) between E1 and the
containers. Both of these barriers were constructed of plastic
material but one was opaque, preventing E1 from seeing through
it, while the other was transparent.
Procedure
The procedure for the sit test, warm-ups, and directional trials
was identical to Experiment 3 with the following exceptions. In
warm-ups E1 sat in the chair in front of the dog after baiting the
containers. In directional trials E1 sat in the chair when providing
the pointing cue, which was performed with the right arm, index
finger extended, and head oriented toward the indicated container.
In ambiguous trials E1 showed the dog a single treat but baited
both containers behind the occluder before sitting in the chair
where he performed the pointing gesture. On these trials E1 said
“look!”, and pointed to a location on the floor directly between the
two containers making it ambiguous which of the two containers
was being indicated. On one half of ambiguous trials (test condition) the two barriers were positioned between E1 and the containers. Because one barrier was opaque it occluded E1’s view of
the container behind it while the other barrier was transparent
allowing E1 to see through it (Figure 3A). Importantly, from the
dog’s perspective both containers were visible. On the other half of
ambiguous trials (control condition) the barriers were positioned
perpendicularly to the containers and neither barrier occluded E1’s
or the dog’s view of the containers (Figure 3B). These trials served
as a control for any spontaneous preferences for approaching the
container near the opaque or transparent barrier when the barriers
had no impact on E1’s visual perspective. Choices in ambiguous
trials were nondifferentially reinforced.
Design and Analysis
We conducted 24 trials with each subject consisting of 16
directional trials and eight ambiguous trials (four test and four
control). Directional trials were administered in blocks of four and
which container was baited was counterbalanced within these
blocks. After each block of directional trials we conducted two
ambiguous trials in either the test or control condition, the order of
which was counterbalanced between subjects. The order of the
blocks of test and control conditions followed an ABBA design.
Whether the opaque barrier was positioned next to the left or right
container was counterbalanced across the session.
Analysis of the sit test and directional trials was identical to the
previous experiments. In ambiguous trials we compared the percent of trials that dogs chose the container next to the transparent
barrier to chance using one-sample t tests (in both test and control
conditions). We compared the percent of trials that dogs chose the
container next to the transparent barrier in the test versus the
control condition using paired-sample t tests. We predicted that if
dogs were capable of taking E1’s visual perspective they should
search in the container behind the transparent barrier (from E1’s
perspective) more frequently when the opaque barrier occluded
E1’s view of the second container than when the opaque barrier
had no bearing on E1’s view of the containers.
The trial length, abort criteria and procedure for repeating trials
on which there was no choice were identical to Experiment 2. A
second coder naïve to the purpose of the study coded 20% of trials from video. Reliability was almost perfect (Landis & Koch, 1977) for all measures (κ: sit test = .92; pointing trials = .97) and live
coding was used in cases of disagreement.
Results
As in Experiments 1–3, dogs obeyed the "sit" command more often when the experimenter faced the dog (M = 60 ± 9%), than when his back was turned (M = 42 ± 8%); however, this difference was not significant (t(29) = 1.58, p = .13). In directional trials dogs again followed E1's pointing cue at levels exceeding chance expectation (M = 74 ± 3%, t(29) = 7.04, p < .01).
In ambiguous trials dogs did not search in the container next to the transparent barrier more often than chance in either the test or control condition (test: M = 48 ± 4%, t(29) = −.62, p = .54; control: M = 49 ± 3%, t(29) = −.24, p = .81) and there was no difference between these conditions (Figure 4; t(29) = −.32, p = .75). Lastly, to assess whether sensitivity to E1's visual perspective and point following skills were related we correlated the difference score from the sit test with scores from the point following trials. The difference score from the sit test was not correlated with performance on directional trials (R = .13, p = .5) or the percent of trials that dogs chose the only container E1 could see in the test condition of the ambiguous trials (R = −.06, p = .77).

Figure 4. The test (A) and control (B) conditions in Experiment 4. In test trials the opaque barrier occluded E1's view of one of the containers. In control trials the opaque barrier did not affect E1's visual perspective toward the containers. Dogs did not selectively search in the container next to the transparent barrier in either the test or control conditions. The dotted line indicates chance expectation and error bars reflect the SEM. NS = not significant.
Discussion
The main finding from Experiment 4 was that dogs did not
selectively search in the container E1 could see when only one container was visible to him at the time he provided the pointing cue. This
finding suggests that dogs did not reason about E1’s visual perspective when inferring the referent of the ambiguously directed
gesture. These results contrast with positive findings from a similar
paradigm in which dogs were required to infer the referent of a
spoken command (“fetch”) when the experimenter could only see
one of two possible referents because of a similar arrangement of
opaque and transparent barriers (Kaminski et al., 2009). One
possible reason for this difference is that the current study used a
pointing cue that was directed to a location between the two
targets. Thus, dogs may have looked toward the empty space
between the containers and then searched indiscriminately after
seeing nothing in this location (e.g., Scheider et al., 2011). In
contrast, the “fetch” command used by Kaminski et al. had no
spatial component perhaps facilitating a more flexible search.
Nonetheless, our findings suggest that if dogs are capable of
reasoning about a human’s visual perspective during communication, these skills may not generalize across all communicative
contexts.
General Discussion
In a series of four experiments we investigated whether dogs use
information about a human’s visual perspective when responding
to pointing gestures. We found that dogs were sensitive to cues
about an experimenter’s body orientation, and used this information when choosing which of two pointing gestures to follow.
However, dogs did not use information about whether a visual
barrier occluded the experimenter’s line of sight, and generally
preferred to search in a location near to the experimenter, regardless of whether the experimenter could see this location when he
gestured. Our results suggest that while dogs are sensitive to some
cues regarding others’ visual perception, and skilled at following
human gestures, they do not appear to integrate these two skills as
flexibly as human children.
In Experiment 1 we found that dogs were sensitive to the
pointer’s body orientation (indicative of what he could or could not
see); they preferred to follow gestures from a pointer who faced
them and accurately indicated the food’s location compared with a
pointer who faced away from the dog and provided a dishonest
point. Thus, dogs showed a basic sensitivity to the communicator’s
body orientation and used this information flexibly when deciding
which gesture to follow. In Experiments 2–4 the experimenter
pointed ambiguously in the direction of multiple possible locations
where the food could be hidden. In the test trials, a physical barrier
occluded the experimenter’s view of one of these locations, but
dogs could always see both locations. We predicted that if dogs
were sensitive to the experimenter’s visual perspective, they
should selectively search in the location visible to both individuals.
The data supported this hypothesis only in Experiment 2, and the
means of the test and control conditions in Experiments 3 and 4
were in the opposite direction of the prediction.
Importantly, subjects in these studies were successful at following E1’s pointing cue when it was directed unambiguously toward
one of the two containers, and also showed sensitivity to information about E1’s visual perspective, obeying the “sit” command
more frequently when E1 faced the dog than when he turned his
back. Therefore, it is unlikely that limited skills in either of these
domains explains why dogs did not integrate information about
E1’s visual perspective when inferring the referent of his pointing
cue.
The limited perspective-taking skills in these studies contrast
with the impressive abilities that dogs have shown in other contexts presumably requiring similar cognitive abilities. For example, when Kaminski et al. (2009) asked dogs to fetch one of two
objects, subjects were more likely to retrieve the only object that
was visible to the experimenter even though both objects were
visible to the dog. As we note above, this difference may be
attributable to the fact that our studies included a spatial referent
(the pointing gesture) compared with the nonspatial verbal command “fetch.” Consequently, dogs may have been biased to attempt to follow the direction of the pointing cue rather than
appealing to the experimenter’s visual perspective. Previous research has shown that dogs might not follow the spatial vector of
pointing cues precisely, and typically approach the location nearest
to the experimenter in the general direction being indicated (Lakatos et al., 2012). Our findings from Experiments 2 and 3 replicate
this phenomenon, but this tendency cannot explain the findings
from Experiment 4 in which both containers were equidistant from
the experimenter. A second notable difference between our studies
and those of Kaminski et al. (2009) is that our tasks required dogs
to infer the location of hidden food, whereas the objects were
always visible to dogs in the task used by Kaminski et al. Thus,
with both objects in plain view, dogs may have more readily
recognized the need to infer to which of the two alternatives the
command referred. Nonetheless, these tasks were very similar in
that dogs were confronted with a binary choice between an object
that the experimenter could see and one that he could not, together
with ambiguous information about which object was being indicated.
A second alternative explanation for dogs’ behavior in our tasks
invokes a higher level of perspective taking. Specifically, it is
possible that dogs realized that the experimenter had knowledge of
both locations (because he visited them when baiting the containers) even when he could no longer see one of the containers from
where he pointed. Accordingly, dogs may have searched indiscriminately, inferring that the pointing cue was equally likely to be
directed to either of the containers that the experimenter knew were
present. We find this possibility to be unlikely given that there is
no strong evidence that dogs represent knowledge states in others
(Kaminski et al., 2009; Kaminski et al., 2011; Virányi et al., 2006;
but see Topál et al., 2006).
Our findings add to a growing literature on the contextual
factors that affect dogs’ responses to communicative gestures. For
example, dogs are more likely to follow a human’s line of sight, or
pointing gestures, when these behaviors are preceded by ostensive-communicative cues such as calling the dog's name or direct gaze
toward the subject (Kaminski et al., 2012; Téglás et al., 2012).
Dogs are also more likely to search for the referent of a gesture
when a hiding-finding context has previously been established, or
when the cue is accompanied by an affiliative vocalization (Scheider et al., 2011). Thus, dogs are sensitive to a range of social
factors that provide context for interpreting social cues, and our
results from Experiment 1 further illustrate dogs’ flexible use of
cues from body orientation. However, our data from Experiments
2–4 indicate that dogs may be limited in their ability to infer the
shared perceptual environment between themselves and a communicative partner in more complex situations. One possibility supported by our data is that dogs most flexibly infer others’ perspectives using cues from body orientation, but are limited in their
understanding of visual barriers.
If dogs are sensitive to humans’ visual perspectives, as a growing body of evidence suggests, the results of these studies can be
interpreted in two ways. First, it is possible that dogs showed
limited flexibility in the current contexts because of difficulty
determining the referent of a spatially imprecise gesture. We find
this possibility unlikely because dogs do not rely on precise linear
vectors when following pointing cues and instead use these gestures to determine the general direction in which to search (Lakatos et al., 2012). Moreover, in these studies the ambiguity about which of the two containers the gesture referred to could be
resolved through cues about the experimenter’s visual perspective.
However, dogs did not make these pragmatic inferences. A second
possibility is that while dogs are skilled at following cooperative-communicative gestures, and to some extent at reasoning about others' visual perspectives, they may not be able to integrate these skills
similarly to human children. Therefore, it may not be the mere
presence of these sociocognitive abilities that is critical for human
development, but rather the ability to integrate them when making
pragmatic inferences during communication (Tomasello, 2008).
This possibility is further supported by studies of chimpanzees
who are skilled at taking others’ visual perspective (Bräuer et al.,
2007; Hare et al., 2000, 2006; MacLean & Hare, 2012; Melis et al.,
2006), but who do not spontaneously respond to cooperative-communicative gestures such as pointing (e.g., Herrmann & Tomasello, 2006; Povinelli et al., 1997). Therefore, it may be that
both dogs and chimpanzees possess a (different) subset of the
sociocognitive skills found in humans, but that neither species has
the ability to use these skills in concert. This possibility has
important implications for research on the evolution of uniquely
human social cognition. Specifically, identifying isolated sociocognitive processes shared between humans and nonhuman animals may document what is necessary, but not sufficient, to
explain human cognitive flexibility. Therefore, additional research
investigating the extent to which animals can flexibly integrate
these cognitive skills will be critical for identifying the traits that
do, and do not, make human cognition unique.
References
Agnetta, B., Hare, B., & Tomasello, M. (2000). Cues to food location that
domestic dogs (Canis familiaris) of different ages do and do not use.
Animal Cognition, 3, 107–112. doi:10.1007/s100710000070
Akhtar, N., Carpenter, M., & Tomasello, M. (1996). The role of discourse
novelty in early word learning. Child Development, 67, 635– 645. doi:
10.2307/1131837
Bräuer, J., Call, J., & Tomasello, M. (2004). Visual perspective taking in
dogs (Canis familiaris) in the presence of barriers. Applied Animal
Behaviour Science, 88, 299 –317. doi:10.1016/j.applanim.2004.03.004
Bräuer, J., Call, J., & Tomasello, M. (2007). Chimpanzees really know
what others can see in a competitive situation. Animal Cognition, 10,
439–448. doi:10.1007/s10071-007-0088-1
Bräuer, J., Kaminski, J., Riedel, J., Call, J., & Tomasello, M. (2006).
Making inferences about the location of hidden food: Social dog, causal
ape. Journal of Comparative Psychology, 120, 38 – 47. doi:10.1037/
0735-7036.120.1.38
Bray, E. E., MacLean, E. L., & Hare, B. A. (2014). Context specificity of
inhibitory control in dogs. Animal Cognition, 17, 1–17.
Call, J., Brauer, J., Kaminski, J., & Tomasello, M. (2003). Domestic dogs
(Canis familiaris) are sensitive to the attentional state of humans. Journal of Comparative Psychology, 117, 257–263. doi:10.1037/0735-7036.117.3.257
Cooper, J. J., Ashton, C., Bishop, S., West, R., Mills, D. S., & Young, R. J.
(2003). Clever hounds: Social cognition in the domestic dog (Canis
familiaris). Applied Animal Behaviour Science, 81, 229 –244. doi:
10.1016/S0168-1591(02)00284-8
Dorey, N. R., Udell, M. A. R., & Wynne, C. D. L. (2010). When do
domestic dogs, Canis familiaris, start to understand human pointing?
The role of ontogeny in the development of interspecies communication.
Animal Behaviour, 79, 37– 41. doi:10.1016/j.anbehav.2009.09.032
Elgier, A. M., Jakovcevic, A., Barrera, G., Mustaca, A. E., & Bentosela, M.
(2009). Communication between domestic dogs (Canis familiaris) and
humans: Dogs are good learners. Behavioural Processes, 81, 402– 408.
doi:10.1016/j.beproc.2009.03.017
Gácsi, M., Györi, B., Virányi, Z., Kubinyi, E., Range, F., & Miklósi, Á.
(2009). Explaining dog wolf differences in utilizing human pointing
gestures: Selection for synergistic shifts in the development of some
social skills. PLoS ONE, 4, e6584. doi:10.1371/journal.pone.0006584
Gácsi, M., Kara, E., Belenyi, B., Topal, J., & Miklosi, A. (2009). The effect
of development and individual differences in pointing comprehension of
dogs. Animal Cognition, 12, 471– 479. doi:10.1007/s10071-008-0208-6
Gácsi, M., Miklosi, A., Varga, O., Topal, J., & Csanyi, V. (2004). Are
readers of our face readers of our minds? Dogs (Canis familiaris) show
situation-dependent recognition of human’s attention. Animal Cognition,
7, 144 –153. doi:10.1007/s10071-003-0205-8
Giret, N., Miklósi, Á., Kreutzer, M., & Bovet, D. (2009). Use of
experimenter-given cues by African gray parrots (Psittacus erithacus).
Animal Cognition, 12, 1–10. doi:10.1007/s10071-008-0163-2
Grassmann, S., Kaminski, J., & Tomasello, M. (2012). How two word-trained dogs integrate pointing and naming. Animal Cognition, 15,
657– 665. doi:10.1007/s10071-012-0494-x
Griebel, U., & Oller, D. K. (2012). Vocabulary learning in a Yorkshire
terrier: Slow mapping of spoken words. PLoS ONE, 7, e30182. doi:
10.1371/journal.pone.0030182
Hare, B., Brown, M., Williamson, C., & Tomasello, M. (2002). The
domestication of social cognition in dogs. Science, 298, 1634 –1636.
doi:10.1126/science.1072702
Hare, B., Call, J., Agnetta, B., & Tomasello, M. (2000). Chimpanzees
know what conspecifics do and do not see. Animal Behaviour, 59,
771–785. doi:10.1006/anbe.1999.1377
Hare, B., Call, J., & Tomasello, M. (1998). Communication of food
location between human and dog (Canis familiaris). Evolution of Communication, 2, 137–159. doi:10.1075/eoc.2.1.06har
Hare, B., Call, J., & Tomasello, M. (2006). Chimpanzees deceive a human
competitor by hiding. Cognition, 101, 495–514. doi:10.1016/j.cognition.2005.01.011
Hare, B., Rosati, A., Kaminski, J., Bräuer, J., Call, J., & Tomasello, M.
(2010). The domestication hypothesis for dogs' skills with human communication: A response to Udell et al. (2008) and Wynne et al. (2008).
Animal Behaviour, 79, e1–e6. doi:10.1016/j.anbehav.2009.06.031
Hare, B., & Tomasello, M. (2005). Human-like social skills in dogs?
Trends in Cognitive Sciences, 9, 439–444. doi:10.1016/j.tics.2005.07.003
Hare, B., & Woods, V. (2013). The genius of dogs: How dogs are smarter
than you think. New York, NY: Dutton Adult.
Hauser, M. D., Comins, J. A., Pytka, L. M., Cahill, D. P., & Velez-Calderon, S. (2011). What experimental experience affects dogs' comprehension of human communicative actions? Behavioural Processes,
86, 7–20. doi:10.1016/j.beproc.2010.07.011
Hernádi, A., Kis, A., Turcsán, B., & Topál, J. (2012). Man’s underground
best friend: Domestic ferrets, unlike the wild forms, show evidence of
dog-like social-cognitive skills. PLoS ONE, 7, e43267. doi:10.1371/
journal.pone.0043267
Herrmann, E., & Tomasello, M. (2006). Apes' and children's understanding of cooperative and competitive motives in a communicative situation. Developmental Science, 9, 518–529. doi:10.1111/j.1467-7687.2006.00519.x
Ittyerah, M., & Gaunet, F. (2009). The response of guide dogs and pet dogs
(Canis familiaris) to cues of human referential communication (pointing
and gaze). Animal Cognition, 12, 257–265. doi:10.1007/s10071-008-0188-6
Kaminski, J., Brauer, J., Call, J., & Tomasello, M. (2009). Domestic dogs
are sensitive to a human’s perspective. Behaviour, 146, 979 –998. doi:
10.1163/156853908X395530
Kaminski, J., Call, J., & Fischer, J. (2004). Word learning in a domestic
dog: Evidence for “fast mapping”. Science, 304, 1682–1683. doi:
10.1126/science.1097859
Kaminski, J., Neumann, M., Bräuer, J., Call, J., & Tomasello, M. (2011).
Dogs, Canis familiaris, communicate with humans to request but not to
inform. Animal Behaviour, 82, 651–658. doi:10.1016/j.anbehav.2011.06.015
Kaminski, J., Pitsch, A., & Tomasello, M. (2013). Dogs steal in the dark.
Animal Cognition, 16, 385–394. doi:10.1007/s10071-012-0579-6
Kaminski, J., Riedel, J., Call, J., & Tomasello, M. (2005). Domestic goats,
Capra hircus, follow gaze direction and use social cues in an object
choice task. Animal Behaviour, 69, 11–18. doi:10.1016/j.anbehav.2004.05.008
Kaminski, J., Schulz, L., & Tomasello, M. (2012). How dogs know when
communication is intended for them. Developmental Science, 15, 222–232. doi:10.1111/j.1467-7687.2011.01120.x
Kundey, S. M. A., De Los Reyes, A., Arbuthnot, J., Allen, R., Coshun, A.,
Molina, S., & Royer, E. (2012). Domesticated dogs' (Canis familiaris)
response to dishonest human points. International Journal of Comparative Psychology, 23, 201–215.
Kundey, S. M. A., German, R., De Los Reyes, A., Monnier, B., Swift, P.,
Delise, J., & Tomlin, M. (2012). Domestic dogs’ (Canis familiaris)
choices in reference to agreement among human informants on location
of food. Animal Cognition, 15, 991–997. doi:10.1007/s10071-012-0525-7
Lakatos, G., Gácsi, M., Topál, J., & Miklósi, Á. (2012). Comprehension
and utilisation of pointing gestures and gazing in dog–human communication in relatively complex situations. Animal Cognition, 15, 201–213. doi:10.1007/s10071-011-0446-x
Landis, J. R., & Koch, G. G. (1977). The measurement of observer
agreement for categorical data. Biometrics, 33, 159 –174. doi:10.2307/
2529310
Liebal, K., Behne, T., Carpenter, M., & Tomasello, M. (2009). Infants use
shared experience to interpret pointing gestures. Developmental Science,
12, 264 –271. doi:10.1111/j.1467-7687.2008.00758.x
MacLean, E. L., & Hare, B. (2012). Bonobos and chimpanzees infer the
target of another’s attention. Animal Behaviour, 83, 345–353. doi:
10.1016/j.anbehav.2011.10.026
McKinley, J., & Sambrook, T. D. (2000). Use of human-given cues by
domestic dogs (Canis familiaris) and horses (Equus caballus). Animal
Cognition, 3, 13–22. doi:10.1007/s100710050046
Melis, A. P., Call, J., & Tomasello, M. (2006). Chimpanzees (Pan troglodytes) conceal visual and auditory information from others. Journal of
Comparative Psychology, 120, 154–162. doi:10.1037/0735-7036.120.2.154
Miklósi, A., Kubinyi, E., Topal, J., Gacsi, M., Viranyi, Z., & Csanyi, V.
(2003). A simple reason for a big difference: Wolves do not look back
at humans, but dogs do. Current Biology, 13, 763–766. doi:10.1016/
S0960-9822(03)00263-X
Moll, H., Koring, C., Carpenter, M., & Tomasello, M. (2006). Infants
determine others’ focus of attention by pragmatics and exclusion. Journal of Cognition and Development, 7, 411– 430. doi:10.1207/
s15327647jcd0703_9
Moll, H., & Meltzoff, A. (2011). Perspective-taking and its foundation in
joint attention. In J. Roessler (Ed.), Perception, causation, and objectivity. Issues in philosophy and psychology (pp. 286 –304). Oxford:
Oxford University Press. doi:10.1093/acprof:oso/9780199692040.003.0016
Osthaus, B., Marlow, D., & Ducat, P. (2010). Minding the gap: Spatial
perseveration error in dogs. Animal Cognition, 13, 881– 885. doi:
10.1007/s10071-010-0331-z
Petter, M., Musolino, E., Roberts, W. A., & Cole, M. (2009). Can dogs
(Canis familiaris) detect human deception? Behavioural Processes, 82,
109 –118. doi:10.1016/j.beproc.2009.07.002
Pilley, J. W., & Reid, A. K. (2011). Border collie comprehends object
names as verbal referents. Behavioural Processes, 86, 184 –195. doi:
10.1016/j.beproc.2010.11.007
Pongrácz, P., Miklosi, A., Kubinyi, E., Gurobi, K., Topal, J., & Csanyi, V.
(2001). Social learning in dogs: The effect of a human demonstrator on
the performance of dogs (Canis familiaris) in a detour task. Animal
Behaviour, 62, 1109 –1117. doi:10.1006/anbe.2001.1866
Povinelli, D. J., Reaux, J. E., Bierschwale, D. T., Allain, A. D., & Simon,
B. B. (1997). Exploitation of pointing as a referential gesture in young
children, but not adolescent chimpanzees. Cognitive Development, 12,
423– 461. doi:10.1016/S0885-2014(97)90017-4
Proops, L., Walton, M., & McComb, K. (2010). The use of human-given
cues by domestic horses, Equus caballus, during an object choice task.
Animal Behaviour, 79, 1205–1209. doi:10.1016/j.anbehav.2010.02.015
Riedel, J., Buttelmann, D., Call, J., & Tomasello, M. (2006). Domestic
dogs (Canis familiaris) use a physical marker to locate hidden food.
Animal Cognition, 9, 27–35. doi:10.1007/s10071-005-0256-0
Riedel, J., Schumann, K., Kaminski, J., Call, J., & Tomasello, M. (2008).
The early ontogeny of human–dog communication. Animal Behaviour,
75, 1003–1014. doi:10.1016/j.anbehav.2007.08.010
Scheider, L., Grassmann, S., Kaminski, J., & Tomasello, M. (2011).
Domestic dogs use contextual information and tone of voice when
following a human pointing gesture. PLoS ONE, 6, e21676. doi:10.1371/
journal.pone.0021676
Schwab, C., & Huber, L. (2006). Obey or not obey? Dogs (Canis familiaris) behave differently in response to attentional states of their owners.
Journal of Comparative Psychology, 120, 169–175. doi:10.1037/0735-7036.120.3.169
Smet, A. F., & Byrne, R. W. (2013). African elephants can use human
pointing cues to find hidden food. Current Biology, 23, 2033–2037.
doi:10.1016/j.cub.2013.08.037
Soproni, K., Miklosi, A., Topal, J., & Csanyi, V. (2001). Comprehension
of human communicative signs in pet dogs (Canis familiaris). Journal of
Comparative Psychology, 115, 122–126. doi:10.1037/0735-7036.115.2.122
Soproni, K., Miklósi, A., Topál, J., & Csányi, V. (2002). Dogs’ (Canis
familiaris) responsiveness to human pointing gestures. Journal of Comparative Psychology, 116, 27–34. doi:10.1037/0735-7036.116.1.27
Szetei, V., Miklosi, A., Topal, J., & Csanyi, V. (2003). When dogs seem to
lose their nose: An investigation on the use of visual and olfactory cues
in communicative context between dog and owner. Applied Animal
Behaviour Science, 83, 141–152. doi:10.1016/S0168-1591(03)00114-X
Tapp, P. D., Siwak, C. T., Estrada, J., Head, E., Muggenburg, B. A.,
Cotman, C. W., & Milgram, N. W. (2003). Size and reversal learning in
the beagle dog as a measure of executive function and inhibitory control
in aging. Learning & Memory, 10, 64 –73. doi:10.1101/lm.54403
Téglás, E., Gergely, A., Kupán, K., Miklósi, Á., & Topál, J. (2012). Dogs’
gaze following is tuned to human communicative signals. Current Biology, 22, 209 –212. doi:10.1016/j.cub.2011.12.018
Tomasello, M. (2008). Origins of human communication. Cambridge, MA:
MIT Press.
Tomasello, M., Call, J., & Gluckman, A. (1997). Comprehension of novel
communicative signs by apes and human children. Child Development,
68, 1067–1080. doi:10.2307/1132292
Tomasello, M., Carpenter, M., & Liszkowski, U. (2007). A new look at
infant pointing. Child Development, 78, 705–722. doi:10.1111/j.1467-8624.2007.01025.x
Tomasello, M., & Haberl, K. (2003). Understanding attention: 12- and
18-month-olds know what is new for other persons. Developmental
Psychology, 39, 906 –912. doi:10.1037/0012-1649.39.5.906
Tomasello, M., Hare, B., & Agnetta, B. (1999). Chimpanzees, Pan troglodytes, follow gaze direction geometrically. Animal Behaviour, 58,
769 –777. doi:10.1006/anbe.1999.1192
Topál, J., Erdőhegyi, Á., Mányik, R., & Miklósi, Á. (2006). Mindreading
in a dog: An adaptation of a primate “mental attribution” study. International Journal of Psychology and Psychological Therapy, 6, 365–379.
Udell, M. A., Dorey, N. R., & Wynne, C. D. (2008). Wolves outperform
dogs in following human social cues. Animal Behaviour, 76, 1767–1773.
doi:10.1016/j.anbehav.2008.07.028
Udell, M. A., Giglio, R., & Wynne, C. (2008). Domestic dogs (Canis
familiaris) use human gestures but not nonhuman tokens to find hidden
food. Journal of Comparative Psychology, 122, 84 –93. doi:10.1037/
0735-7036.122.1.84
Udell, M. A. R., & Wynne, C. D. L. (2010). Ontogeny and phylogeny: Both
are essential to human-sensitive behaviour in the genus Canis. Animal
Behaviour, 79, e9–e14.
Udell, M., Hall, N. J., Morrison, J., Dorey, N. R., & Wynne, C. D. (2013).
Point topography and within-session learning are important predictors of
pet dogs’ (Canis lupus familiaris) performance on human guided tasks.
Revista Argentina de Ciencias del Comportamiento, 5, 3–20.
Udell, M. A. R., Spencer, J. M., Dorey, N. R., & Wynne, C. D. L. (2012).
Human-socialized wolves follow diverse human gestures . . . and they
may not be alone. International Journal of Comparative Psychology, 25,
97–117.
Udell, M. A. R., Dorey, N. R., & Wynne, C. D. L. (2011). Can your dog
read your mind? Understanding the causes of canine perspective taking.
Learning & Behavior, 39, 289 –302. doi:10.3758/s13420-011-0034-6
Virányi, Z., Gácsi, M., Kubinyi, E., Topál, J., Belényi, B., Ujfalussy, D., &
Miklósi, A. (2008). Comprehension of human pointing gestures in young
human-reared wolves (Canis lupus) and dogs (Canis familiaris). Animal
Cognition, 11, 373–387. doi:10.1007/s10071-007-0127-y
Virányi, Z., Topál, J., Miklósi, Á., & Csányi, V. (2006). A nonverbal test
of knowledge attribution: A comparative study on dogs and children.
Animal Cognition, 9, 13–26. doi:10.1007/s10071-005-0257-z
Wobber, V., & Hare, B. (2009). Testing the social dog hypothesis: Are
dogs also more skilled than chimpanzees in non-communicative social
tasks? Behavioural Processes, 81, 423–428. doi:10.1016/j.beproc.2009.04.003
Received July 22, 2013
Revision received December 9, 2013
Accepted December 11, 2013