The role of pictures and gestures as nonverbal aids in preschoolers’ word learning in a novel language

Meredith L. Rowe, Rebecca D. Silverman, Bridget E. Mullan
University of Maryland, College Park, United States

Contemporary Educational Psychology 38 (2013) 109–117
Available online 10 December 2012

Keywords: Word learning; Vocabulary; Nonverbal aids; Gesture; Pictures

Abstract
Previous research suggests that presenting redundant nonverbal semantic information in the form of gestures and/or pictures may aid word learning in first and foreign languages. But do nonverbal supports
help all learners equally? We address this issue by examining the role of gestures and pictures as nonverbal supports for word learning in a novel (i.e., original/pretend) language in a sample of 62 preschoolers who differ in language abilities, language background, and gender. We tested children’s ability to
learn novel words for familiar objects using a within-subjects design with three conditions: word-only;
word + gesture; word + picture. Children were assessed on English translation, immediate comprehension and follow-up comprehension 1 week later. Overall performance on the tasks differed by characteristics of the learners. The importance of considering the interplay between learner characteristics and
instructional strategies is discussed.
1. Introduction
As many teachers are aware, learning can be facilitated when
redundant information is presented in two forms. For example,
teachers may hold their hands out wide while explaining the concept "big", or they may use a picture to help describe the moon in
the sky. Paivio (1971, 1986) proposes a Dual Coding Theory to account for the advantages of presenting information in two modalities. This theory posits that verbal and nonverbal information are
processed in two separate, mutually supportive systems. Two systems allow information to be more readily retrieved, resulting in
better recall for information presented in two modalities over input merely presented in one modality (e.g., either verbally or nonverbally). Further, Dual Coding Theory claims that the ways in
which verbal and nonverbal mechanisms contribute to learning
will vary with the specific task and stimulus characteristics, past
and present events and individual differences (Clark & Paivio,
1991). In the current study, we examine the extent to which nonverbal information, specifically pictures and gestures, aids preschoolers’ word learning in a novel (i.e., original/pretend) language, and whether individual differences in child language ability, language background, and gender affect word learning across conditions.
Word learning in itself is difficult because words are arbitrary
symbols with no inherent relationship to their referents (Quine,
1960). That is, there is nothing about the word ‘‘table’’ in English
that connects it to the object table. Nonverbal supports may be particularly helpful for word learning because redundant semantic
information may provide more robust representation of concepts
associated with words. Pictures and gestures are two potential avenues through which to offer this nonverbal support.
1.1. Pictures as nonverbal supports
Beginning around 18 months of age, children can understand
the symbolic nature of pictures and can generalize words learned
through the labeling of a picture to real objects in the world. For
example, Preissler and Carey (2004) taught 2-year-olds a new label
"whisk" by pairing the word with a simple drawing of the object.
When children were shown the picture and a real whisk and asked
to identify the "whisk", children chose the real object and not the
picture, indicating that children understood the word to refer to
the object and not the picture alone. Similarly, Ganea, Pickard,
and DeLoache (2008) found that 15- and 18-month-old children
were able to learn novel names for new objects during a picture
book labeling interaction. Further, the children were able to extend
their learning of the novel word from the picture to the picture’s
real referent. Thus, young children can learn words from pictures
and generalize to real world objects, suggesting the use of pictures
in word learning is an appropriate instructional strategy.
Indeed, many intervention studies and studies of multifaceted
vocabulary instruction include the use of pictures to support word
learning. For example, in a recent kindergarten study (Loftus,
Coyne, McCoach, Zipoli, & Pullen, 2010), interventionists reviewed
words introduced during storybook reading by showing children
pictures of the target word from the book while providing them
with a definition of the word. For another example, in a kindergarten study by Silverman (2007), teachers reinforced word learning
by showing illustrations including representations of target words
from the book in which the words were introduced as well as photographs representing target words in other contexts outside of the
book. In a recent preschool study (Pollard-Durodola et al., 2011),
teachers introduced new words before storybook reading by providing simple definitions and showing picture cards with representations of the target words. As these examples show, there is a
tradition of including pictures to support word learning in vocabulary intervention. However, the empirical research on the role of
pictures in word learning is limited and findings are mixed.
In a classroom study on the effectiveness of using pictures and
objects in instructing science lessons with older children, Best,
Dockrell, and Braisby (2006) found that children made the largest
gains in science vocabulary knowledge when pictures or objects
were used in combination with semantic scaffolding, in comparison
to children who had received only verbal semantic support. Finally,
work with adults suggests similar results. Mayer (1999) conducted
several studies designed to teach young adult students how things
work (e.g., a car’s brakes), and found that students who received an explanation presented as words and pictures or animations outperformed students who received an explanation in words alone. However, a more recent study that taught second-grade students science terms, names for musical instruments, and animal and habitat words found no advantage for words paired with pictures over words presented alone (Cohen & Johnson,
2011). None of the above studies, however, considered individual
differences of the learners in the analysis.
Some studies suggest pictures are useful tools for teaching
vocabulary to children learning English as a second language. In
a research synthesis of studies on the topic, Gersten and Baker
(2000) found that studies that taught vocabulary to English Language Learners using pictures showed better results than those using
other methods. Further, Silverman and Hines (2009) studied the
use of multimedia to support vocabulary and content learning in
elementary school. They found that English language learners
(ELLs) in kindergarten through second grade who saw short video
clips that supplemented teacher-led instruction of words in the
context of storybooks made greater gains in their vocabulary than
ELLs who had not had multimedia-enhanced instruction. The addition of video to the lesson closed the gap between ELLs and non-ELLs in knowledge of words targeted during the lesson, and it narrowed the gap in general vocabulary knowledge. Importantly, the
use of multimedia had greater learning effects for the ELLs than
the monolingual English-speaking children, highlighting the
importance of considering student characteristics when implementing instructional strategies.
Thus, using pictures to teach vocabulary is prevalent in classroom environments for both native language learning and foreign
language learning. However, there are few empirical studies on
the topic, and studies that do investigate the role of pictures in
word learning offer mixed findings and typically do not consider
individual differences in the learners as a moderator. In the current
study, we examine the role of pictures in novel word learning and
ask whether the role of pictures differs for children from different
language backgrounds, language abilities and gender.
1.2. Gestures as nonverbal supports
Research shows that teachers gesture when teaching vocabulary (Lazaraton, 2004). This is not surprising, as gestures are a natural part of speech, and gesture and speech form an integrated
communicative system (McNeill, 1992). Iconic gestures in particu-
lar can serve as useful nonverbal aids in that they visually represent the concepts to which they refer – for example, flapping the
arms to represent a bird flying (McNeill, 1992). In a study of toddlers’ vocabulary acquisition, Capone and McGregor (2005) found
that while all children learned the new words in their experiment,
children required less assistance to recall the words taught with
iconic gestures than the words presented in speech alone. The
authors use this finding to propose that the use of gesture in
instruction leads to a richer semantic knowledge of new words.
A study of kindergarten students’ novel word acquisition (Weismer
& Hesketh, 1993) demonstrated that the use of gestures during the
instruction of prepositions had a significant positive effect on children’s vocabulary learning. This was evident for children with typical language development as well as children with specific
language impairments. However, this positive effect was only significant when children were assessed with a comprehension task;
the gestures had no significant effect on novel word production of
children from either group.
Gesture appears to aid foreign language learning as well: Kelly,
McDevitt, and Esch (2009) taught monolingual English-speaking
adults Japanese verbs with iconic gestures and found that adult
L2 learners acquired new words most effectively when the new
words were taught with gestures that reinforced their meanings.
Macedonia, Müller, and Friederici (2011) found that German-speaking adults were better able to learn words in a novel language when paired with iconic gestures than with meaningless gestures.
Tellier (2008) taught English words to monolingual French-speaking children and found that children learned the words better when
the words were taught with accompanying gestures. However, in
this study the children not only saw the gestures but they also produced them. Similarly, Allen (1995) found that university students
at an American university who were taught expressions paired with
emblematic gestures (and who also enacted the gestures themselves) in their foreign language French classes learned those
expressions better compared to students who did not see or enact
gestures paired with the expressions. Thus, from toddlers to adults,
observing and enacting gestures can facilitate language learning.
Several studies that use gestures as nonverbal aids in more
complex, non word-learning tasks have relevant findings as well.
Valenzeno, Alibali, and Klatzky (2003) showed preschool children
videotaped lessons with and without gestures about the concept
of symmetry. Children who saw the verbal plus gesture lesson performed better on a post-test of symmetrical judgment than children who saw the verbal-only lesson. Similarly, Church, Ayman-Nolley, and Mahootian (2004) used English instructional videos
with and without gestures to teach the concept of conservation
to both monolingual English-speaking and native Spanish-speaking ELL students (7-year-olds). The researchers found that using
gesture in classroom instruction increased comprehension of native English speakers as well as ELLs, but it was particularly helpful
for the ELLs. McNeil, Alibali, and Evans (2000) used gestures as a
nonverbal aid with instructions explaining a block building task.
They found that the redundant gestures aided in comprehension
of more complex messages, but not simpler messages, suggesting
that the effect of gesture might depend on the complexity of the
task for the learner. Finally, Cohen and Otterbein (1992) found that
adult subjects were better able to recall sentences when those sentences were presented with gesture than without. In sum, this previous work suggests that providing redundant semantic
information in the form of pictures or gestures can be advantageous for both word learning and concept instruction.
1.3. The role of learner characteristics
It is clear from the literature reviewed above that the effect of
nonverbal aids in learning may differ for learners who differ in
important ways (e.g., Church et al., 2004; Silverman & Hines, 2009).
One important learner characteristic is language background, specifically whether the child is exposed to more than one language
on a regular basis. There is a large body of work on the effects of
bilingualism on cognition. This work suggests that while young bilinguals may lag behind their peers in vocabulary when assessed in
one language, they often show advantages in other areas such as
metalinguistic awareness and cognitive control (Bialystok, 2001).
Because of these documented differences, when designing instructional strategies, it is important to consider that the effects may or
may not be similar for monolingual children versus children regularly exposed to more than one language.
Another learner characteristic that is relevant here is the child’s
general language proficiency as an index of their word-learning
abilities. As mentioned above, previous work on gestures found
the nonverbal aids to be more helpful in comprehension when
the task was more complex for the learner, and less useful or necessary when the task was less complex (McNeil et al., 2000). Based
on this complexity hypothesis, the authors claim that gestures
should be more crucial to speech comprehension for novices than
for experts (McNeil et al., 2000). Therefore, nonverbal aids may be
most useful when the task is hard for the child, or in the case of the
current study, for a child who has lower language abilities to draw
upon than for a child with greater language abilities.
In line with the above complexity hypothesis, it has been argued that processing redundant information from multiple simultaneous sources of information (e.g., verbal and nonverbal) may in
fact cause cognitive overload or competition for cognitive resources and thus hinder learning for some learners (Sweller,
1988). More specifically, Kalyuga, Ayres, Chandler, and Sweller
(2003) claim that the expertise of the learner is a key variable in
these types of learning tasks, and that the redundant information
presented in two modalities is most beneficial for inexperienced
learners and potentially inhibits learning in experienced learners.
This phenomenon has been called the Expertise Reversal Effect
(Kalyuga et al., 2003), which rests on the premise that experts have
sophisticated schemas which they bring to the learning task and
thus may not need additional information or guidance from redundant information. Further, this additional information may divert working memory resources needed to employ their schemas and
result in cognitive overload. Dual Coding Theory, discussed earlier
(Paivio, 1971, 1986), claims that presenting information in two
modalities lightens cognitive load and enhances retrieval and
learning. However, the theory also claims that individual differences may contribute to the ways in which verbal and nonverbal
mechanisms contribute to learning (Clark & Paivio, 1991). As noted
above, the level of expertise of the learner is an important individual difference to consider in this work (e.g., Kalyuga et al., 2003).
In young children, language ability and language background
often go hand-in-hand. That is, on average young bilingual children
lag behind their monolingual peers on vocabulary measures in one
language (L2; Bialystok, 2001; Oller, Pearson, & Cobo-Lewis,
2007; Pearson, Fernandez, & Oller, 1993). However, there is also
a large literature showing that both monolingual (e.g., Fenson
et al., 1994) and bilingual (De Houwer, 1995, 2007; Pearson,
2007) children vary extensively in their early language abilities.
Therefore, while language background and language ability (in
L2) are likely related, they are different constructs and both worthy
of consideration.
Another learner characteristic that varies and is worthy of consideration is child gender. Bornstein, Hahn, and Haynes (2004)
found that during the preschool years, girls outperform boys on
language measures such as the Reynell Developmental Language
Scales – Revised (RDLS; Reynell, 1985), the language subscale of
the Vineland Adaptive Behavior Scales (Sparrow, Balla, & Cicchetti,
1984), and the Test of Language Development (TOLD; Newcomer &
Hammill, 1988). In addition, girls’ speech has been shown to be
more complex than boys’ speech when vocabulary and syntax
are measured as Mean Length of Utterance and Type/Token Ratio
(Morisset, Barnard, & Booth, 1995). Moreover, girls are found to
outperform boys on story sequencing tasks (Shipman, 1972).
1.4. The current study
In the current study, we build on this prior work by examining
the role of pictures and gestures in preschool children’s ability to
learn words for familiar items in a task set up like a foreign-language learning game. We examine observed gestures, rather than
observed and enacted gestures, so as to compare gestures to pictures as visual nonverbal aids. We sampled children who vary in
gender, language ability and language background. We hypothesize that nonverbal aids (both pictures and gestures) should help
children learn the words, but that the effects of nonverbal aids
might vary based on children’s individual differences. We test
these hypotheses by posing the following research questions:
(1) What is the role of nonverbal aids (pictures and gestures) in preschoolers’ word learning in a novel (i.e., original/pretend) language?
(2) Does the effect of nonverbal aids differ based on child language ability, language background or gender?
2. Method
2.1. Participants
Seventy-two children participated in the experiment, which
was conducted at a university-affiliated preschool in the Mid-Atlantic region of the United States (mean age 4 years 8 months,
SD = 10 months). Informed consent was obtained from the parents,
and all procedures were approved by our Institutional Review
Board. Data from 10 of the children were not included in our analyses for various reasons including: failure to pass the training portion of the experiment (n = 7), intentionally giving wrong answers
(n = 1), and not paying attention when words were taught (n = 2).
The final analytic sample consisted of 62 children. These 62 children were just over half of the children in the school at the time,
and the demographic background of this sample of children
matched the breakdown for the school as a whole. The school is
a private school and thus caters primarily to a middle-class population, yet financial assistance is provided for 10–15% of the children in the school. Of the 62 participants in the analytic sample,
31 were female and 31 were male. Thirty-six children were monolingual English speakers, and 26 children had exposure to one or
more languages other than English in the home as reported by parents. The languages children were exposed to varied and included
French, Spanish, German, Mandarin, Romanian, Icelandic, Danish,
Portuguese, Nepali, Hindi, Japanese, Russian, Kyrgyz, Vietnamese,
Lithuanian, and Farsi. We did not assess the extent of children’s
knowledge of the non-English language(s) spoken in the home.
From now on we refer to this group of children as dual language
learners (DLLs; Office of Head Start, 2008), as they are regularly
communicated with in more than one language in the home environment. Forty-six of the 62 children participated in the follow-up
task exactly 1 week after the initial experiment (mean age 4 years
6 months, SD = 8 months). Twenty-three were female and 25 were
male; there were 27 monolingual children and 19 dual language
learners in the follow-up. We restricted the follow-up task to exactly 1 week after the initial experiment so that duration to the follow-up would not be an additional variable. Attrition was due to
absences because of illnesses, a snow day, and an unanticipated
class field trip.
2.2. Procedure
Prior to the experiment, the researcher (third author) spent
time in each classroom in order to become familiar with the children whose parents consented for them to participate. The assent
of each child was obtained before taking him/her from the classroom to participate in the experiment. Each child was trained
and tested individually, in a research room designated for this purpose. Sessions were audio recorded for accuracy and lasted approximately 15 min. In order to verify that the child knew the English
names of the objects used in the experiment, the researcher first
showed the child six small objects and asked him/her to identify
the objects. Next, the researcher introduced the child to Max, a
stuffed "alien" toy who is "from outer space". She explained that Max "speaks a different language than we do" and that "he has different names for things" than the (English) names we use (method adapted from Weismer & Hesketh, 1993). The researcher then
used two new objects, different from the objects to be used in the
experiment, as practice for learning novel words for known objects.
She showed the first object to the child and asked, "What is this?" After the child identified the object (such as a doll), the researcher explained that "in Max’s language, a lun is a doll." She then asked, "So what is a lun?" If the child did not immediately understand, the
researcher rephrased and re-explained the novel word concept as
many times as were needed. This procedure was repeated with
the other practice object to ensure that the child understood the
concept of different names for the same item.
A within-subjects design was used for the experimental trials.
Each child was taught two new words in each of three conditions
described below, resulting in a total of six test words. The test objects were chosen because of their familiarity to preschool-aged
children and their ability to be represented both pictorially and
through iconic gesture. The objects were not used during teaching
but were used during assessment. The novel words had a consonant–vowel–consonant (CVC) structure. CVC labels for these objects were created so that no phonemes between the English
word and the CVC word overlapped. Pictures were black and white
images, all in a similar style and approximately 5 by 8 in. Gestures
were iconic; each gesture represented the object itself and was
produced using character viewpoint such that the experimenter was acting on the object (e.g., talking on the phone). See Table 1 for a
list of the English words, the corresponding CVC words, and the
pictures and gestures used.
2.3. Novel word instruction
The experimenter taught each novel CVC word in one of three
conditions as follows: just the word alone (W), the word with a
matching picture (W + P), or the word with a matching iconic gesture (W + G). She told the child, "In Max’s language, a mip is a book." In this sentence, the novel word mip was presented in one of the three conditions (W, W + P, or W + G). She then asked the child, "Can you say mip?" This question was posed without the
inclusion of picture or gesture. The above process was repeated
once, to ensure that the child was listening and to give the child
a better opportunity to learn the new word. Finally, the researcher
said, "So, in Max’s language, a mip (W, W + P, or W + G) is a ...", pausing for the child to complete the sentence with the appropriate English translation. She responded to the child with "Yes/No, it’s a book." This teaching procedure was then repeated for two
other novel words, for a total set of three novel words (one presented in each condition of W, W + P, or W + G).
After the first set of three novel words was taught and tested (as
described in the following section), identical presentation conditions were used to teach a second set of three novel words and
were followed by assessment of these three words. Thus, a total
of six novel words was taught and assessed, two words per condition (W, W + P, W + G; see footnote 1). The conditions and the order in which the words were presented were counterbalanced across subjects.
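As an illustration of this kind of counterbalancing, the sketch below rotates the order of the three presentation conditions across participants so that, across children, each word is paired with each condition. The rotation scheme, variable names, and word-to-set split are hypothetical; the paper does not specify the exact counterbalancing procedure used.

```python
# Hypothetical counterbalancing sketch: rotate condition orders across
# participants so that each CVC word is paired with each condition across
# children. The exact scheme used in the study is not specified.
from itertools import permutations

words = ["mip", "jik", "naz", "wug", "pel", "dax"]  # two sets of three
conditions = ["W", "W+P", "W+G"]
condition_orders = list(permutations(conditions))   # 6 possible orders

def assignment_for(child_index):
    """Return (word, condition) pairs for one child."""
    order = condition_orders[child_index % len(condition_orders)]
    set1, set2 = words[:3], words[3:]
    # Each set of three words is taught one word per condition.
    return list(zip(set1, order)) + list(zip(set2, order))

for child in range(3):  # show the assignments for the first three children
    print(child, assignment_for(child))
```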
2.4. Novel word testing
After the first set of three words was taught, the experimenter
reviewed the three words that the child had learned. She reviewed
the words in the same order and conditions as they were presented
during teaching; nonverbal aids were shown with their associated
words. For example, if the words taught were mip (book) in the
word condition, jik (hat) in the picture condition, and naz (cell phone) in the gesture condition, the experimenter would say: "We just learned that a mip (W) is a book. We learned that a jik (W + P) is a hat. And we learned that a naz (W + G) is a cell phone".
Throughout the teaching and review process, the child was exposed to each novel word six times; four of these exposures were
in an experimental condition (W, W + P, or W + G for each word)
and two exposures were in the questions aiming to elicit child production of the novel word (i.e., "Can you say mip?").
Children were tested on English translation and comprehension.
For each novel word, the researcher asked the child, "Tell me, what is a mip?" The child was required to produce the corresponding
English word. Novel words were tested in a different order than
that in which they were presented, to ensure that the child did
not merely memorize the order of the English words. Following
this translation task for the first set of three novel words, the researcher tested the child on comprehension of those words. She
showed the child the three objects, representing the words that
were just taught. She asked, "Can you give Max his mip?" The child
had to choose which of the three objects corresponded to the novel
word. This comprehension task provided more scaffolding for the
child, since he or she now only needed to choose between three
possibilities and did not have to articulate the word in English.
After each novel word was tested and the child made a selection,
the objects were removed from the table and reshuffled for presentation in the next testing item. Novel words in this assessment
were presented in the same order as in the preceding production
assessment. Ending with the less challenging comprehension task
provided the child with a greater chance to succeed and have a
feeling of accomplishment at the conclusion of the assessments.
2.5. Follow-up task
Exactly 1 week after the initial testing session, a follow-up session was completed to determine children’s novel word learning
retention. In this session, only comprehension was assessed. The
researcher brought each child back into the research room, where
she had arranged Max and the array of six test objects on the table.
She asked the child to try to remember the "words we learned last week", reminding him or her that it was okay if he or she had forgotten them. The researcher then asked the child, "Can you give Max his mip?" After the child chose an object, the researcher reordered
the objects on the table. These steps were repeated for each of the
six novel words. For the follow-up, words were presented in the order they had been taught, and the child chose from all six items at a time rather than two sets of three as was done during the initial comprehension testing.
Footnote 1: Pilot testing showed that increasing the number of words and the use of additional conditions (such as a Repeated Speech condition, in which the novel word was verbally presented twice but without visual or gestural aid) were too cognitively demanding for children at this age. As a result, the novel words and presentation conditions were narrowed to those described above. Further, pilot testing revealed that a production task was too difficult if we asked the child to produce the novel word rather than the English equivalent; thus we assessed translation to ensure that some children could get it correct. Testing translation has been used before, yet it is considered a measure of passive rather than active vocabulary learning (Tellier, 2008).
Table 1
English words, corresponding CVC novel words, and pictures and gestures used. (Each picture was a black-and-white drawing of the object; the gesture descriptions are reproduced below.)
Book (mip): Both hands held out, palms together; hands opened outward, palms facing up, mimicking the opening of a book.
Hat (jik): Both hands raised above head; both hands brought down to ears, forming fists, pulling on an imaginary hat.
Cell phone (naz): Right hand in a fist, but thumb and pinky finger stuck out, mimicking the shape of a telephone; hand brought to the side of the head, thumb to ear and pinky to mouth.
Bird (wug): Both arms held straight and moved up to shoulder level and down to the sides one time only, mimicking a bird flapping its wings.
Scissors (pel): Right hand held out, palm toward experimenter, index and middle fingers extended and pointed to the side; fingers opened and shut two times while the hand moved from right to left, mimicking the motion of scissors.
Toothbrush (dax): Right hand, palm toward experimenter, right index and middle fingers extended together and pointed to the side; fingers held in front of the experimenter’s mouth and moved back and forth two times, mimicking brushing one’s teeth with a toothbrush.
2.6. Measures
Children’s responses to the tasks were recorded on-line on a
checklist by the experimenter, who was trained to do this reliably
with pilot subjects. Observations of the child’s behavior, as well as
any notable comments the child said, were also recorded and used
only to determine whether or not to include the child’s data in
analysis (e.g., if the child was not watching when the experimenter taught the words, the data were excluded; n = 2). The child received a
score from 0 to 6 for the number of correct responses in both translation and comprehension tasks. A participant’s score was then broken into six parts: number of correct responses in the translation
task in each condition (W, W + P, W + G), and number of correct responses in the comprehension tasks in each condition (W, W + P,
W + G). Raw scores were then translated into proportions for translation and comprehension in each condition by dividing the total
number correct in each condition by the total number taught in
each condition.
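For example, since two words were taught in each condition, a child who correctly translated one of the two words taught in the picture condition would receive a translation proportion of .50 for that condition.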
Information on the children’s language abilities was obtained
through teacher assessment using the Speech and Language
Assessment Scale (SLAS; Hadley & Rice, 1993). This rating system
requires parents or teachers to assess each child’s language skills
as compared to his or her age-matched peers, answering each
question on a scale from 1 (very low) to 7 (very high). The SLAS
measures skills in assertiveness, responsiveness, semantics, syntax,
articulation, and comprehension. Example items that teachers
rated include the child’s ability to follow directions spoken to
him/her, the child’s ability to keep a conversation going with other
children, and the number of words the child knows. The SLAS has been found to have moderate to high construct validity when compared
to children’s scores on other language measures such as the RDLS
(Reynell, 1985), the Peabody Picture Vocabulary Test (PPVT; Dunn
& Dunn, 1981), the Goldman–Fristoe Test of Articulation (Goldman
& Fristoe, 1986), descriptive measures of verbal interactions based
on the Social Interactive Coding System (Rice, Sell, & Hadley, 1990),
and productive Mean Length of Utterance measures during interaction (Hadley & Rice, 1993; Weinberg, 1991). Teachers filled out the
SLAS during the same month that the children were participating
in the study. Scoring the SLAS is straightforward and involves tallying/summing answers. The SLAS forms were scored by a research
assistant. A second assistant scored thirty of the forms and agreed
with the first assistant on 100% of the scores. The children’s scores
on this measure were normally distributed with a mean of 4.53
(SD = 1.12), and scores on each of the subtests were highly related
to one another. Thus, total scores are used in analyses.
2.7. Analytic plan
First, we present descriptive information on the individual word
items, followed by descriptive statistics and correlations for the
overall learning in each condition and background factors. Next,
we conducted one-way Analysis of Variance (ANOVA) to determine
whether there were differences between monolingual children and
DLLs on the various measures. Finally, we used repeated measures
ANOVA to compare the presentation effects of words, words and
pictures, and words and gestures on word learning. Three separate
sets of analyses were conducted. The three outcomes were (a)
translation, (b) immediate comprehension, and (c) follow-up comprehension. Since all children received instruction in all conditions,
analyses included condition as a within subjects factor with three
levels: Word (W), Word + Picture (WP), Word + Gesture (WG).
Gender (i.e., Male/Female), language background (i.e., Monolingual/DLL), and SLAS (i.e., a continuous variable with a range of
0–7) were each included as between-subjects factors. Analyses
tested for the interaction between the two language-related variables: language background and SLAS. Huyn–Feldt–Lecoutre Epsilon was used as the adjusted p-value and considered significant
at the p < .05 level. Effect sizes (partial eta squared) were calculated to quantify differences among the conditions. Note that partial eta squared values of .01, .06, and .14 are considered small,
medium, and large effects, respectively (Cohen, 1988).
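As an illustration of this analytic approach, the sketch below runs a mixed repeated-measures ANOVA on hypothetical long-format data, with condition as the within-subjects factor and gender as a single between-subjects factor, using the open-source pingouin library (which reports partial eta squared, computed as SS_effect / (SS_effect + SS_error), for each term). This is only a simplified stand-in for the full models reported here, which also included language background and SLAS as between-subjects factors and used the Huynh–Feldt–Lecoutre adjustment; all data and variable names are hypothetical.

```python
# Hypothetical sketch of a mixed (within + between subjects) ANOVA on
# word-learning proportions, using the pingouin library. The data are
# randomly generated for illustration; the full model in the paper also
# included language background and SLAS as between-subjects factors.
import numpy as np
import pandas as pd
import pingouin as pg

rng = np.random.default_rng(0)
conditions = ["W", "W+P", "W+G"]

# Long-format data: one row per child per condition, with the proportion
# of the two words in that condition answered correctly (0, .5, or 1).
rows = []
for child in range(62):
    gender = "F" if child < 31 else "M"
    for cond in conditions:
        rows.append({
            "child_id": child,
            "gender": gender,
            "condition": cond,
            "prop_correct": rng.choice([0.0, 0.5, 1.0]),
        })
df = pd.DataFrame(rows)

# Mixed ANOVA: condition is within-subjects, gender is between-subjects.
# The output table includes F statistics, uncorrected p-values, and
# partial eta squared (the "np2" column).
aov = pg.mixed_anova(data=df, dv="prop_correct", within="condition",
                     subject="child_id", between="gender")
print(aov.round(3))
```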
3. Results
3.1. Descriptive statistics: Word learning on individual items
Overall, across translation, immediate comprehension, and follow-up comprehension, there was some variability in the extent
to which the 6 novel CVC words were learned. Mip (i.e., book)
was most likely to be learned (57% of the time) and dax (i.e., toothbrush) was least likely to be learned (35% of the time). As evident
from Table 1, the pictures were purposefully very consistent-looking across items, yet there was some variability in the extent to
which the different novel words were learned when paired with
pictures. For both translation and comprehension, mip (i.e., book)
was learned most in the picture condition (32% translation, 38%
comprehension) and naz (i.e., cell phone) was learned least often
in the picture condition (16% translation, 29% comprehension).
We made every effort to keep the gestures similar across items
in terms of using character viewpoint and the same number of
movements for each gesture. Looking at the gesture condition on
its own, there was some variability in the degree to which words
were learned across gestures. For translation, mip (i.e., book) was
most often learned in the gesture condition (34%) and pel (i.e., scissors) was least often learned (19%). For comprehension, jik (i.e.,
hat) was most often learned (37%) in the gesture condition and
wug (i.e., bird) was least often learned (29%). The fact that the gestures that were most and least often learned were different across
the translation and comprehension tasks suggests that there was
not one gesture that was most or least salient for learning in this
study. Further, the novel CVC words were counter-balanced across
condition, so these results do not affect the following analyses of
condition.
3.2. Descriptive statistics: Word learning in each condition
Descriptive statistics for learning in each condition and by language background are displayed in Table 2. On the immediate testing (n = 62), children averaged 41.7% correct for translation and
55.1% correct for comprehension. On the follow-up testing
(n = 46), children averaged 31.5% correct on comprehension. Overall performance on translation, comprehension, and follow-up
comprehension did not differ for monolingual children and dual
language learners.
Overall language ability in English as measured on the SLAS was
significantly, positively correlated with overall performance on
translation (r = .33, p < .01) and immediate comprehension
(r = .34, p < .01), but not follow-up comprehension (r = .25,
p = .09). Thus, children with higher scores on the SLAS, which indicates overall English language ability, tended to have higher scores
on the immediate translation and comprehension tasks. The correlations between language background and translation, immediate
comprehension, and follow-up comprehension were .02 (p = .86), .17 (p = .37), and .03 (p = .83), respectively. The correlation between SLAS and language background was significant (r = .27, p < .05), with monolingual children scoring higher on the SLAS, on average, than dual language learners. For gender, correlations with translation, immediate comprehension, and follow-up comprehension were .06 (p = .66), .06 (p = .62), and .25 (p = .10), respectively.
3.3. Repeated measures ANOVA
3.3.1. Translation
On translation, there was no difference by condition
(F(2, 114) = 3.00, p = ns). There was no difference among conditions
by language background (F(2, 114) = 0.53, p = ns) or SLAS
(F(2, 114) = 2.41, p = ns), and there was no interaction among conditions by language background and SLAS (F(2, 114) = 0.17, p = ns).
However, there was a difference among conditions by gender
(F(2, 114) = 4.50, p < .05). For the Word only condition, the least-squares mean (LSM) was .45 for males and .33 for females. For
the Word + Picture condition, the LSM was .35 for males and .56
for females. For the Word + Gesture condition, the LSM was .45
for males and .43 for females. Post-hoc tests revealed that there
was no difference between the genders for the Word only and
Word + Gesture conditions, but females significantly outperformed
males in the Word + Picture condition (p < .05). The values for partial eta squared for condition, gender, language background, SLAS,
and the interaction between language and SLAS are .09, .12, .02,
.07, and .01, respectively.
3.3.2. Immediate comprehension
On immediate comprehension, there was a significant difference by condition (F(2, 114) = 3.11, p < .05), and a significant difference among conditions based on gender (F(2, 114) = 3.82, p < .05),
language background (F(2, 114) = 5.23, p < .01), and SLAS
(F(2, 114) = 3.30, p < .05). There was also an interaction among conditions by language background and SLAS (F(2, 114) = 4.39, p < .05).
To interpret this interaction, it is helpful to consider prototypical
data points. At SLAS = 3, the LSMs for monolinguals and DLLs,
respectively, are .37 and .29 in the word only condition (p = ns),
.48 and .49 in the word + picture condition (p = ns), and .26 and
.73 in the word + gesture condition (p < .01). At SLAS = 6, the LSMs
for monolinguals and DLLs, respectively, are .74 and .79 in the
word only condition (p = ns), .70 and .56 in the word + picture condition (p = ns), and .71 and .44 in the word + gesture condition
(p = ns). The results of this saturated model with the interaction are presented in Fig. 1.
Table 2
Means and standard deviations (in parentheses) for translation, comprehension and follow-up comprehension by condition (top) and by language background (bottom).

All children. Immediate testing (n = 62): Translation: Word .37 (.35), Word + Picture .46 (.38), Word + Gesture .43 (.32); Comprehension: Word .55 (.40), Word + Picture .57 (.38), Word + Gesture .56 (.36). Follow-up testing (n = 46): Comprehension: Word .26 (.33), Word + Picture .37 (.36), Word + Gesture .30 (.31).

Monolingual. Immediate testing (n = 36): Translation: Word .35 (.39), Word + Picture .51 (.39), Word + Gesture .39 (.32); Comprehension: Word .60 (.41), Word + Picture .61 (.38), Word + Gesture .53 (.40). Follow-up testing (n = 27): Comprehension: Word .31 (.37), Word + Picture .39 (.38), Word + Gesture .26 (.29).

DLL. Immediate testing (n = 26): Translation: Word .40 (.28), Word + Picture .38 (.36), Word + Gesture .48 (.33); Comprehension: Word .48 (.39), Word + Picture .52 (.39), Word + Gesture .62 (.29). Follow-up testing (n = 19): Comprehension: Word .18 (.25), Word + Picture .34 (.34), Word + Gesture .37 (.33).

Fig. 1. Interaction effect between language background (monolingual vs. DLL) and language ability (high vs. low SLAS) on immediate comprehension. Bars represent least-squares means. Low = SLAS score 3; High = SLAS score 6.
The difference among conditions by gender
was manifested in the word only condition wherein the LSM for
males was .46 and the LSM for females was .65 (p < .05). There
were no differences by gender for the word + picture or
word + gesture conditions. The values for partial eta squared for
condition, gender, language background, SLAS, and the interaction between language and SLAS are .07, .09, .12, .08, and .10,
respectively.
3.3.3. Follow-up comprehension
On follow-up comprehension (n = 46), there was no difference
by condition (F(2, 82) = 2.18, p = ns). There was no difference among
conditions by gender (F(2, 82) = 1.09, p = ns) or language background (F(2, 82) = .27, p = ns). There was an interaction among conditions by SLAS (F(2, 82) = 3.24, p < .05), but there was no
interaction among conditions by language background and SLAS
(F(2, 82) = .04, p = ns). While there was no difference for children
with higher and lower levels of SLAS in the word only and
word + gesture conditions, children with higher SLAS scores performed significantly better than children with lower SLAS scores
in the word + picture condition. The values for partial eta squared
for condition, gender, language background, SLAS, and the interaction between language and SLAS are .09, .05, .01, .13, and <.01,
respectively.
4. Discussion
The goal of the current study was to investigate the role of nonverbal aids, specifically pictures and gestures, in preschoolers’
word learning in a novel language. This study adds to the previous
literature in its focus on whether the effectiveness of nonverbal
aids in word learning varies depending on characteristics of the
learners, particularly language ability, language background, and
gender. Indeed, we found that overall performance on the comprehension tasks was related to the children’s English language abilities as rated by their teachers. Thus, the tasks proved more
challenging for children with lower English language abilities than for children with higher English language abilities, relative to their peers. We found no direct correlation between language
background (monolingual vs. DLL) and task performance, yet language background was an important control to include in the models, as effects of condition differed based on language background.
We found an effect of gender for translation benefitting girls in the
picture condition. We briefly review the key results below and discuss the significance of these findings and directions for future
research.
As noted above, girls performed significantly better than boys
on the translation task when learning words paired with pictures.
Recall that in the translation task the children were required to
give the English label for the novel object. Thus, when presented
with pictures as a nonverbal aid paired with the novel word, girls
were better able to provide the corresponding English translation
than boys. We included gender in this study because of previous
work citing girls as outperforming boys in language tasks (e.g.,
Morisset et al., 1995). However, in this sample, girls and boys did
not differ in overall language ability as rated by teachers or in language background. Thus, the gender effect here is not driven by
girls having greater language skills. Future research should explore
gender differences further, as these results indicate that different
learning conditions might benefit girls more than boys.
In the immediate comprehension task, there was an interesting
significant interaction between condition, language background
and language ability, emphasizing the importance of taking these
factors into consideration in experimental studies of this sort. In
interpreting this interaction, we draw on the complexity hypothesis presented by McNeil et al. (2000), which claims that providing
redundant information in the form of gesture is beneficial when
the task is complex for the learner, as well as the expertise reversal
effect suggested by Kalyuga et al. (2003), which also claims that
providing redundant information in two forms might be helpful
for novices yet potentially impede learning for more advanced
learners. We take performance on the word-only condition in this
study to be a proxy for how hard the task is in general for the different groups of learners. We found that the dual language learners
with high English-language abilities were the group that performed highest on the word-only condition, suggesting the task itself was easiest for that group (Fig. 1). Accordingly, neither of the
nonverbal aids increased performance for this group of children.
This finding is in line with the expertise reversal effect in that
while these children did not perform significantly worse on the
conditions that included nonverbal aids, the addition of the nonverbal aid did not help performance. In contrast, the DLL children with low English-language abilities performed lowest on
the word-only condition, suggesting this was a more complex task
for that group (Fig. 1). Consequently, word learning for this group
was enhanced by the nonverbal aids, specifically gestures. The current study thus supports the complexity hypothesis (McNeil et al.,
2000) where gestures aided in comprehension on more complex
tasks but not on easier tasks, and extends the hypothesis by
suggesting that characteristics of the learner can be used to determine complexity of the task in addition to changing the task itself.
The follow-up comprehension task, given as a measure of long-term recall after 1 week, also showed interesting results that differed for the high versus low English language-ability children.
Most noteworthy is the finding that while the low-language ability
children performed poorly and did not differ across conditions on
the follow-up, the high-language ability children were more likely
to retain words learned in the word + picture condition than in the
word + gesture condition or the word-only condition. Thus, for the
high language ability children, there was a beneficial effect on
word retention after 1 week for the words learned in the picture
condition. This interesting finding suggests that perhaps the picture condition places the least demands on working memory during encoding of the novel word. One possible explanation is that in
both the word only and the word + gesture conditions, the learner
needs to expend resources visualizing the target object. The pictures may ease that process by eliminating the need for visualization, thus supporting retention. Indeed, studies show that learning
and memory benefit when information is represented in ways that
minimize demands on working memory (Mayer & Moreno, 1998).
Results from this study support Clark and Paivio’s (1991) assertion that the effect of combining verbal and nonverbal supports
will vary based on individual differences. This study also shows
that the effect of specific kinds of nonverbal supports may also vary
by individual differences. Though pictures and gestures are commonly used to support word learning (e.g., Church et al., 2004;
Kelly et al., 2009; Loftus et al., 2010; Pollard-Durodola et al.,
2011), little research has been conducted isolating and comparing
the effects of these two forms of nonverbal support. Results from
the current study suggest that use of various nonverbal aids can
support word learning for children who differ in language background, ability, and gender. However, much more work is needed
to fully understand the interactions between learner characteristics and the effect of nonverbal aids. Specifically, it may be that
combining pictures and gestures is more effective for some students than either nonverbal aid alone, and that as suggested by
Kalyuga et al. (2003), nonverbal aids in general may be most helpful for novices rather than for experts. The present study provides a
foundation for future work in this direction by establishing that
there are differential effects of nonverbal aids for children from different backgrounds and abilities.
Further, previous research on memorization in adults suggests
that enacting an action while learning it enhances recall for that
action (e.g., Engelkamp & Cohen, 1991), and that individual differences play a role, with younger adults performing better than older
adults on these memory tasks (Nyberg, Persson, & Nilsson, 2002),
especially when the task is difficult. Thus, it may be that if the children had been asked to perform the gesture themselves during the
learning and recall phases of the experiment, they would have
had higher rates of learning (e.g., Tellier, 2008). The current study
was concerned with comparing pictures and gestures as nonverbal
visual aids, but future work will examine the added role of enacting gesture in the word learning process.
There are some limitations of the current study that deserve
mention. First, we were not able to collect detailed language background information on the participating children, thus we grouped
children into monolingual and DLL categories based solely on parent report of the child’s exposure to languages at home. This is a
gross measure and does not allow us to explore potential differences in degrees of bilingualism. Further, the SLAS measure of language ability was given only in English, thus the low-SLAS DLL
children may be quite heterogeneous since we do not have a measure of their language ability in their other language. Second, we
were not able to complete the follow-up testing with all children
due to several unanticipated events (snow day, field trip), resulting
in a smaller than ideal follow-up sample. Third, we did find that
some of our novel words proved easier to learn than others. While
this did not affect the current results because the words were
counterbalanced across conditions in a within-subjects design, it
does suggest that careful attention be paid to stimuli in studies
of this sort. Finally, due to our experimental design, our findings
are most relevant to issues of learning vocabulary in a second (or
third) language. It is not clear to what extent similar results
would be found if the design were geared toward first language
acquisition, or the extent to which the results would translate to
more natural learning contexts.
Despite these limitations, the results presented here have
important implications for educational and psychological research
and for educators. By 2030 it is estimated that 40% of school children in the United States will be bilingual, or exposed to a language
other than English in the home (Hoff, 2009). Many of these children enter mainstream classrooms with monolingual peers and,
on average, have English language skills below their monolingual
peers at school entry (Castro, Páez, Dickinson, & Frede, 2011). Thus,
it is important to understand the extent to which instructional
strategies may facilitate learning for children from different language backgrounds and for children who vary in their English language abilities. Implications for foreign language learning are
evident as well, specifically that the child’s current language level
in his or her first language may dictate the types of strategies that
could work best for him or her. The current study suggests that differentiated instruction within the classroom would benefit students. For example, based on differences in background factors,
some students could be taught using words only and others using
nonverbal aids. However, in the area of word learning, more research is needed to uncover the degree to which pictures and gestures can facilitate the learning process.
Acknowledgments
We thank the participating children and staff at the Center for
Young Children, University of Maryland, and Jeffrey Harring for
statistical advice. This research was supported by start-up funds
to the first author from the University of Maryland.
References
Allen, L. Q. (1995). The effects of emblematic gestures on the development and
access of mental representations of French expressions. The Modern Language
Journal, 79(4), 521–529.
Best, R. M., Dockrell, J. E., & Braisby, N. (2006). Lexical acquisition in elementary
science classes. Journal of Educational Psychology, 98(4), 824–838.
Bialystok, E. (2001). Bilingualism in development: Language, literacy, and cognition.
New York: Cambridge University Press.
Bornstein, M. H., Hahn, C., & Haynes, O. M. (2004). Specific and general language
performance across early childhood: Stability and gender considerations. First
Language, 24, 267–304.
Capone, N. C., & McGregor, K. K. (2005). The effect of semantic representation on
toddlers’ word retrieval. Journal of Speech, Language & Hearing Research, 48(6), 1468–1480. http://dx.doi.org/10.1044/1092-4388(2005/102).
Castro, D. C., Páez, M. M., Dickinson, D. K., & Frede, E. (2011). Promoting language
and literacy in young dual language learners: Research, practice and policy.
Child Development Perspectives, 5, 15–21.
Church, R. B., Ayman-Nolley, S., & Mahootian, S. (2004). The role of gesture in
bilingual education: Does gesture enhance learning? Bilingual Education and
Bilingualism, 7(4), 303–319.
Clark, J. M., & Paivio, A. (1991). Dual coding theory and education. Educational
Psychology Review, 3(3), 149–210.
Cohen, J. (1988). Statistical power analysis for the behavioral sciences (2nd ed.).
Hillsdale, NJ: Erlbaum.
Cohen, M. T., & Johnson, H. L. (2011). Improving the acquisition of novel vocabulary
through the use of imagery interventions. Early Childhood Education Journal, 38,
357–366.
Cohen, R. L., & Otterbein, N. (1992). The mnemonic effect of speech gestures:
Pantomimic and non-pantomimic gestures compared. European Journal of
Cognitive Psychology, 4(2), 113–139.
De Houwer, A. (2007). Parental language input patterns and children’s bilingual use.
Applied Psycholinguistics, 28, 411–466.
De Houwer, A. (1995). Bilingual language acquisition. In P. Fletcher & B.
MacWhinney (Eds.), The handbook of child language (pp. 219–250). Oxford,
UK: Blackwell.
Dunn, L. M., & Dunn, L. M. (1981). Peabody picture vocabulary test – revised. Circle
Pines, MN: American Guidance Service.
Engelkamp, J., & Cohen, R. L. (1991). Current issues in memory of action events.
Psychological Research, 53, 175–182.
Fenson, L., Dale, P., Reznick, J. S., Bates, E., Thal, D., & Pethick, S. (1994). Variability in
early communicative development. Monographs for the Society for Research in
Child Development, 59(5, Serial No. 242).
Ganea, P., Pickard, M. B., & DeLoache, J. (2008). Transfer between picture books and
the real world by very young children. Journal of Cognition and Development, 9,
46–66.
Gersten, R., & Baker, S. (2000). What we know about effective instructional practices
for English-language learners. Exceptional Children, 66(4), 454–470.
Goldman, R., & Fristoe, M. (1986). The Goldman–Fristoe test of articulation. Circle
Pines, MN: American Guidance Service.
Hadley, P. A., & Rice, M. L. (1993). Parental judgments of preschoolers’ speech and
language development: A resource for assessment and IEP planning. Seminars in
Speech and Language, 14, 278–288.
Hoff, E. (2009). Language development (4th ed.). Belmont, California: Wadsworth/
Cengage Learning.
Kalyuga, S., Ayres, P., Chandler, P., & Sweller, J. (2003). The expertise reversal effect.
Educational Psychologist, 38(1), 23–31.
Kelly, S. D., McDevitt, T., & Esch, M. (2009). Brief training with co-speech gesture
lends a hand to word learning in a foreign language. Language and Cognitive
Processes, 24(2), 313–334.
Lazaraton, A. (2004). Gesture and speech in the vocabulary explanations of one ESL
teacher: A microanalytic inquiry. Language Learning, 54(1), 79–117.
Loftus, S. M., Coyne, M. D., McCoach, D., Zipoli, R., & Pullen, P. C. (2010). Effects of a
supplemental vocabulary intervention on the word knowledge of kindergarten
students at risk for language and literacy difficulties. Learning Disabilities
Research & Practice, 25(3), 124–136.
Macedonia, M., Müller, K., & Friederici, A. D. (2011). The impact of iconic gestures on
foreign language word learning and its neural substrate. Human Brain Mapping,
32, 982–998.
Mayer, R. E. (1999). Research-based principles for the design of instructional
messages: The case for multimedia explanations. Document Design: Journal of
Research and Problem Solving in Organizational Communication, 1, 7–20.
Mayer, R. E., & Moreno, R. (1998). A split-attention effect in multimedia learning:
Evidence for dual processing systems in working memory. Journal of Educational
Psychology, 90, 312–320.
McNeil, N. M., Alibali, M. W., & Evans, J. L. (2000). The role of gesture in children’s
comprehension of spoken language: Now they need it, now they don’t. Journal
of Nonverbal Behavior, 24(2), 131–150.
McNeill, D. (1992). Hand and mind: What gestures reveal about thought. Chicago, IL:
University of Chicago Press.
Morisset, C. E., Barnard, K. E., & Booth, C. L. (1995). Toddlers’ language development:
Sex differences within social risk. Developmental Psychology, 31(5), 851–865.
Newcomer, P., & Hammill, D. D. (1988). Test of language development, primary (2nd
ed.). Austin, TX: Pro-Ed.
Nyberg, L., Persson, J., & Nilsson, L.-G. (2002). Individual differences in memory
enhancement by encoding enactment: Relationships to adult age and biological
factors. Neuroscience and Biobehavioral Reviews, 26, 835–839.
Office of Head Start (2008, February). Dual language learning: What does it take? Retrieved from the Early Childhood Learning & Knowledge Center, Office of Head Start, U.S. Department of Health and Human Services: <http://eclkc.ohs.acf.hhs.gov/hslc/tta-system/teaching/eecd/Dual%20Language%20Learners%20and%20Their%20Families/Learning%20in%20Two%20Languages>.
Oller, D. K., Pearson, B. Z., & Cobo-Lewis, A. B. (2007). Profile effects in early bilingual
language and literacy. Applied Psycholinguistics, 28, 191–230.
Paivio, A. (1971). Imagery and verbal processes. New York: Holt, Rinehart, and
Winston.
Paivio, A. (1986). Mental representation: A dual coding approach. Oxford, UK: Oxford
University Press.
Pearson, B. Z. (2007). Social factors in childhood bilingualism in the United States.
Applied Psycholinguistics, 28, 399–410.
Pearson, B. Z., Fernandez, S. C., & Oller, D. K. (1993). Lexical development in bilingual
infants and toddlers: Comparison to monolingual norms. Language Learning, 43,
93–120.
Pollard-Durodola, S., Gonzalez, J., Simmons, D., Kwok, O., Taylor, A., Davis, M., et al.
(2011). The effects of an intensive shared book-reading intervention for
preschool children at risk for vocabulary delay. Exceptional Children, 77(2),
161–183.
Preissler, M. A., & Carey, S. (2004). Do both pictures and words function as symbols
for 18- and 24-month-old children? Journal of Cognition and Development, 5(2),
185–212.
Quine, W. V. O. (1960). Word and object. Cambridge, MA: Technology Press of the
Massachusetts Institute of Technology.
Reynell, J. K. (1985). Reynell developmental language scales – revised. Windsor,
England: Nfer-Nelson.
Rice, M. L., Sell, M. A., & Hadley, P. A. (1990). The social interactive coding system:
An on-line, clinically relevant, descriptive tool. Language, Speech, and Hearing
Services in Schools, 21, 2–14.
Shipman, V. C. (1972). Disadvantaged children and their first school experiences
(Educational testing service head start longitudinal study, report no. PR-72-18).
Princeton, NJ: Educational Testing Service.
Silverman, R. D. (2007). Vocabulary development of English-language and English-only learners in kindergarten. The Elementary School Journal, 107(4), 365–383.
Silverman, R., & Hines, S. (2009). The effects of multimedia-enhanced instruction on
the vocabulary of English-language learners and non-English-language learners
in pre-kindergarten through second grade. Journal of Educational Psychology,
101(2), 305–314.
Sparrow, S. S., Balla, D. A., & Cicchetti, D. V. (1984). Vineland adaptive behavior scales:
Interview edition, expanded form manual. Circle Pines, MN: American Guidance
Service.
Sweller, J. (1988). Cognitive load during problem solving: Effects on learning.
Cognitive Science, 12(2), 257–285.
Tellier, M. (2008). The effect of gestures on second language memorisation by young
children. Gesture, 8(2), 219–235.
Valenzeno, L., Alibali, M., & Klatzky, R. (2003). Teachers’ gestures facilitate students’
learning: A lesson in symmetry. Contemporary Educational Psychology, 28,
187–204.
Weinberg, A. M. (1991). Construct validity of the speech and language assessment
scale: A tool for recording parent judgments. Unpublished master’s thesis.
Department of Speech-Language-Hearing, University of Kansas, Lawrence, KS.
Weismer, S. E., & Hesketh, L. J. (1993). The influence of prosodic and gestural cues on novel word acquisition by children with specific language impairment. Journal of Speech and Hearing Research, 36(5), 1013.