
Cultural and Home Language Influences on Children’s Responses to Science Assessments
Aurolyn Luykx
University of Texas at El Paso
Okhee Lee
Margarette Mahotiere
Benjamin Lester
University of Miami
Juliet Hart
College of William and Mary
Rachael Deaktor
Background: A critical issue in academic assessment is the effect of children’s language and culture on their measured performance. Research on this topic has
rarely focused on science education, because science is commonly (though erroneously) assumed to be “culture free.”
Students’ scientific understandings are influenced by the cultural values, experiences, and epistemologies of their home communities. Efforts to minimize cultural
bias include designing tests to be “culturally neutral” and, conversely, tailoring
assessments to specific cultural groups; both approaches are theoretically and practically problematic. Several studies have focused on testing accommodations for
English language learners (ELLs), but accommodations raise validity and feasibility issues and are limited by “English-only” policies.
This article stresses the linkages between language and culture, drawing on contemporary literacy theory and research on scientific communities as well as groups
traditionally marginalized from science.
Objective: To examine how children’s prior linguistic and cultural knowledge
mediates their engagement with school science, as reflected in their responses on science assessments.
Participants: Over 1,500 students from six elementary schools serving diverse populations.
Research Design: Project-developed assessments included items requiring students
to explain scientific phenomena. Scoring revealed that students misinterpreted
some items, and scorers had difficulty understanding some students’ responses.
Project personnel then undertook qualitative discourse analysis of responses on all
tests.
Findings: Analysis revealed phonological/orthographic and semantic interference
from students’ home languages; responses reflecting students’ cultural beliefs and
practices; and “languacultural” features related to genre, authorial voice, pragmatic framing, and textual organization.
Conclusions/Recommendations: Science tests inevitably contain tacit cultural and
linguistic knowledge that is not equally accessible to all students. Using “real-life
scenarios” in assessment items may confuse students whose lives do not reflect
mainstream norms. Furthermore, English-medium assessments are unlikely to
accurately measure ELLs’ science knowledge.
Teachers can learn to recognize factors that impede ELLs from grasping or
expressing science concepts clearly. They should also ensure that all students understand the discursive and textual conventions inherent in assessment instruments.
Linguistic and cultural factors shape science knowledge not only of students but
of teachers, scientists, and test developers. Uncovering the factors shaping students’
academic performance requires fine-grained qualitative analysis and collaboration
across disciplinary boundaries.
In recent decades, the increasing attention to cultural and linguistic
diversity within the United States has highlighted the role of children’s
culture and language in their appropriation of academic content.
Educational research on these questions has focused mostly on classroom
interaction and teaching/learning styles, rather than on specific academic disciplines or subject-area knowledge. Similarly, research on specific
subject areas has seldom considered the role of language and culture in
children’s learning, and this is especially true with regard to science and
math learning. One reason for this is the common assumption that science, along with mathematics, constitutes a universally valid, “culture-free” body of knowledge that remains fundamentally unchanged even
when it is taken up by different social or cultural groups (American
Association for the Advancement of Science [AAAS], 1989; Matthews,
1998).1
This study, which is part of a larger longitudinal research project,
examines cultural and home language influences in elementary students’
constructed responses to science assessments. Bridging the divide
between “culturalist” and “disciplinary” approaches, it illustrates some of
the ways in which children’s engagement with scientific information is
mediated by their prior linguistic and cultural knowledge. The research
questions were not devised a priori; rather, they emerged in the process
of data collection for other purposes (i.e., coding of student responses on
pretests and posttests). Analysis of the data will show that science constitutes neither a culture-free enterprise nor a universally consistent body of
knowledge. Rather, scientific concepts, as well as the assessment instruments designed to measure students’ understanding of them, are infused
with specific (though tacit) cultural and linguistic knowledge that is not
equally accessible to all groups of students. The results of the study have
implications not only for the accurate assessment of students’ science
learning but also for how science knowledge and instruction are perceived by teachers, students, policymakers, and educational researchers.
THEORETICAL FRAMEWORK: VALID AND EQUITABLE ASSESSMENT IN MULTICULTURAL/MULTILINGUAL SETTINGS
All children bring to the learning process their own ways of interpreting
the natural and social worlds, acquired from their cultural environments,
discursive traditions, and personal circumstances. For “mainstream” children (i.e., White, middle-class, native English speakers), the linguistic
and cultural knowledge they acquire at home is largely continuous with
the expectations and assumptions of the school. Nevertheless, other children also bring “funds of knowledge” (Moll, 1992; Vélez-Ibáñez &
Greenberg, 1988) from their home environments that can serve as intellectual resources for science learning (Lee, 2002; Warren, Ballenger,
Ogonowski, Rosebery, & Hudicourt-Barnes, 2001). To be most effective,
science instruction and assessment should take into account students’
prior knowledge and the intellectual resources they bring to the pedagogical encounter.
At the same time, some children’s linguistic and cultural traditions may
be inconsistent with the scientific orientation toward knowledge, the
nature of specific science disciplines, or the ways in which science disciplines are taught in school. Such inconsistencies may create difficulties
for students learning science and for the teachers trying to teach them
(Aikenhead & Jegede, 1999; Atwater, 1994; Lee, 1999; Moje, Collazo,
Carillo, & Marx, 2001). It is thus important for teachers to consider students’ prior knowledge and to articulate its relationship with school science to make science accessible and meaningful for all children
(Cobern, 1998; Cobern & Aikenhead, 1998; Lee, 2003, 2004; Lee &
Fradd, 1998; Warren et al., 2001).
A critical issue concerning valid and equitable assessment in multicultural/multilingual settings is how to address linguistic and cultural influences on students’ measured performance. Solano-Flores and Nelson-Barber (2001) asserted that the ways students make sense of science test
items are influenced by the values, beliefs, experiences, communication
patterns, teaching and learning styles, and epistemologies originating in
their cultural backgrounds and the socioeconomic conditions in which
they live. Furthermore, children of differing cultural backgrounds often
have different ways of expressing their ideas, which may mask their
knowledge and abilities in the eyes of teachers unfamiliar with the linguistic and cultural norms of students’ home communities. Although
mainstream children are also subject to cultural influences, the linguistic
and cultural knowledge that mediates their academic performance is
more likely to parallel that which guides the actions and interpretations
of teachers, researchers, and test developers. Thus, the tacit knowledge
of mainstream children is more likely to facilitate their performance than
to interfere with it.
Linguistic and cultural factors in students’ test performance—as in
learning more generally—are tightly intertwined. However, most studies
have opted to focus on one or the other. Although some of the examples
analyzed below focus on features that are primarily linguistic (e.g.,
phonological/orthographic confusions), and others highlight nonlinguistic cultural influences (e.g., the living conditions of low-income children), of particular interest are those influences that we call “languacultural” (after Agar, 1994). This term reflects a theoretical stance that holds
that examining the organic linkages between language and culture is
more analytically productive than positing a categorical distinction
between the two. Approached from this perspective, the analysis of linguistic behavior (e.g., pencil-and-paper science tests) reveals how cultural
meanings are expressed through linguistic forms, and also how linguistic
forms are interpreted in the light of existing cultural frames.
Below, we examine the existing literature analyzing the effects of children’s cultural and linguistic diversity on academic assessment. The studies are grouped according to their focus on “linguistic” or “cultural” factors, as conceived by the researchers themselves. We then discuss
research that has approached assessment from a languacultural perspective, albeit without employing Agar’s terminology.
LINGUISTIC FACTORS IN ACADEMIC ASSESSMENT
Attempts to devise equitable assessments for English language learners
(ELLs) have encountered numerous difficulties; among these is the fact
that ELLs’ developing English proficiency is usually conceived (and measured) in terms of their mastery of a set of technical skills, rather than of
the cultural frameworks and communicative competencies associated
with the dominant language community. Research on academic assessment of ELLs has focused mostly on the effectiveness of various testing
accommodations (e.g., use of bilingual dictionaries or subject-specific
glossaries, extra time to complete assessments; Abedi, 2004; Abedi,
Hofstetter, & Lord, 2004; O’Sullivan, Lauko, Grigg, Qian, & Zhang, 2003;
Shepard, Taylor, & Betebenner, 1998). Although assessments can be
made more comprehensible to ELLs by avoiding complex grammatical
constructions, polysemic terms (i.e., terms with more than one meaning), and idiomatic expressions (Kirschner, Spector-Cohen, & Wexler,
1992), such accommodations are not regularly employed and may not
reflect the type of English used in instruction.
A more substantial accommodation would be to translate assessment
instruments into students’ home languages. However, this raises issues of
validity relative to the English-language versions and may still fail to give
an accurate picture of students’ content knowledge if instruction has
been carried out in English or if students have not developed literacy
skills in the home language. In addition, because interpretive cultural
frameworks vary from one speech community to another, translating
assessment instruments would not eliminate all sources of possible confusion and may well introduce new ones. In response to these concerns,
some researchers have argued that consideration of students’ home languages should guide the entire assessment process, including test development, test review, test use, and test interpretation (O’Malley & Valdez
Pierce, 1994; Short, 1993; Solano-Flores & Trumbull, 2003). Alternatively,
giving ELLs the same items in both English and their native language has
the potential to produce more fine-grained understandings of the interactions among first- and second-language proficiency, students’ content
knowledge, and the linguistic and content demands of test items (Solano-Flores, Lara, Sexton, & Navarrete, 2001). However, these more nuanced
perspectives have so far gained little ground in policy and assessment circles.
CULTURAL FACTORS IN ACADEMIC ASSESSMENT
Whereas the examination of linguistic influences in assessment carries its
own methodological challenges, the issues become even more complicated when one considers the simultaneous influence of children’s varying cultural backgrounds. Different languages present themselves to the
teacher (and to the researcher) as more bounded and discrete than different cultures, though neither language nor culture is as bounded or
discrete as is often assumed. Assessment occurs in one language or
another (or, rarely, some combination thereof) and in most cases it is a
relatively simple matter to identify children’s home language.2 In contrast, the constitutive elements and boundaries of a culture are harder to
define. Teachers and students seldom operate within the parameters of a
single, discrete culture; rather, they are subject to a multitude of cultural
influences from numerous sources on a daily basis. Academic assessments
are themselves cultural products associated with a particular cultural tradition, though the cultural basis of assessment practices often goes unrecognized. “Translating” assessment instruments from one cultural tradition to another, even when these share the same language, is an even
more subtle and slippery task than translating them from one language
to another. Despite the pervasive nature of culture and the fact that it
shapes nearly all human activities in ways specific to each social group,
many educators continue to believe that culturally neutral assessments
are possible; in contrast, the impossibility of “linguistically neutral assessments” is more intuitively obvious.
Efforts to reduce cultural bias in academic assessments have generally
taken one of two opposing directions: (1) development of assessments
tailored to the cultural knowledge of specific student groups and (2)
development of supposedly neutral assessments, in which the reliance on
tacit cultural knowledge is minimized. There are drawbacks to each of
these approaches. The first requires in-depth knowledge of students’ cultural worlds and the integration of potentially conflicting fields of knowledge, not to mention the difficulties posed by the presence, within the
same classroom, of children with different cultural backgrounds and different degrees of assimilation to the mainstream. Solano-Flores and
Nelson-Barber (2001) proposed the notion of “cultural validity” to frame
sociocultural influences on how students make sense of and respond to
science items. They identified five areas in which the notion of cultural
validity can contribute to improving science assessment: student epistemology, student language proficiency, cultural worldviews, cultural communication and socialization styles, and student life context and values.
They suggested that “ideally, if cultural validity issues were addressed
properly at the inception of an assessment and throughout its entire
process of development, there would be no cultural bias and providing
accommodations for cultural minorities would not be necessary” (p.
557). Still, the question of what would be entailed in “properly addressing” such issues is far from simple.
In contrast, the other approach holds that assessment instruments can
be designed to be culturally neutral—that is, that all or most of the tacit
cultural knowledge that goes into their design and use can be identified
and removed, so that the instruments measure only the content knowledge being assessed. This view is not in line with research on the relationship between culture and cognition, which holds that cultural frameworks are constantly, though largely unconsciously, mobilized in the
interpretation and organization of new knowledge (e.g., Moll, 1992;
Rogoff, 1990; Rogoff & Lave, 1984). This does not mean that attempts to
remove obvious instances of cultural bias from assessments are futile.
Certainly, the reliance on tacit knowledge that places children of particular backgrounds at a disadvantage is to be avoided to the degree possible. Our argument is that the degree to which this is possible is significantly less than what designers and users of academic assessments may
assume.
“LANGUACULTURAL” FACTORS IN ACADEMIC ASSESSMENT
Though it is sometimes analytically convenient to treat linguistic issues in
academic assessment as separate from cultural issues, the two domains
are closely intertwined, and addressing one without the other may seriously misrepresent the processes by which knowledge is constructed and
expressed. Languages are tightly bound to the social and cultural contexts in which they are used; lexical, morphological, and grammatical elements embody culturally specific ways of conceptualizing the natural and
social world. Similarly, the organization of discourse (i.e., longer
“chunks” of speech or writing extending beyond the sentence level)
varies widely, even among cultural groups that share the same language;
this is a common source of misunderstanding in classrooms and other
intercultural settings (Cazden, 2001; Corson, 2001; Heath, 1983;
Michaels, 1981; Scollon & Scollon, 1995). Agar (1994) coined the term
languaculture to express this organic link between language and culture,
arguing that neither can be productively studied without consideration
of the other.
Some cultural differences are explicitly coded in language, others exist
largely outside of language, and yet others are linked to covert linguistic
categories that speakers employ without being consciously aware of them.
With regard to written language, which constitutes the data set on which
the current study is based, contemporary literacy theorists have shown
that a considerable amount of cultural knowledge is required to interpret
even seemingly simple texts (Boyarin, 1992; Gee, 1998; Street, 1993). The
ways in which students organize their own texts reflect the discursive
norms of their home communities and the norms of academic writing
transmitted by the school (Ivanic, 1998; Michaels, 1981). In light of these
theoretical advances, it makes sense to examine cultural and linguistic
factors in assessment as related aspects of a single broader phenomenon.
Some features of the test responses described in this article can be
ascribed to purely structural linguistic factors (e.g., orthographic/phonological difficulties), whereas others are related to aspects of culture that
are not closely associated with specific linguistic forms. However, several
responses display influences that can be considered languacultural, inasmuch as they reflect linguistic differences that are rooted in cultural differences or cultural differences that are encoded in specific features of a
particular language.
PURPOSE OF THE STUDY
This study is part of a larger, longitudinal research project examining the
process and impact of an instructional intervention to promote science
learning and English proficiency, oral and written, with a culturally and
linguistically diverse student population in six elementary schools in a
large urban school district. The study focuses on cultural and linguistic
interference in the open-ended responses of third- and fourth-grade students on paper-and-pencil science tests administered at the beginning
and end of the 2001–2002 school year. Specifically, we examine children’s
responses linked to (1) linguistic influences in terms of phonological,
orthographic or semantic features from children’s home language; (2)
cultural influences in terms of specific knowledge or beliefs deriving
from children’s homes and communities, and implicit cultural assumptions underlying students’ responses; and (3) languacultural features of
children’s written discourse (e.g., genre, authorial voice, and pragmatic
framing of responses). Data analysis focused on how these factors shaped
children’s interpretations of test items, their written responses, and
teachers’ potential evaluations of these responses. We dedicate the most
attention to the third category because it is the most frequently ignored
in assessment research and the most in need of theoretical development
based on analysis of empirical data. The results have broad implications
for the valid and equitable assessment of nonmainstream children in
school science and other subject areas.
METHODS
RESEARCH SETTING
The six participating schools were chosen in part for the linguistic and
cultural variability of their respective student bodies. Two of the six
schools served predominantly Hispanic students (92% and 87%, respectively); one of these had many students who were newly arrived or first-generation immigrants (47% LEP3) from low-socioeconomic-status (SES)
homes (85% receiving free or reduced lunch), whereas at the other, most
students were U.S. born (only 19% LEP) and from low- to middle-SES
homes (44% free or reduced lunch). Two other schools had large numbers of Haitian American (41% and 37%, respectively) and African
American students (28% and 53%). In both of these schools, many students were LEP (46% and 26%), and most were from low-SES homes
(95% and 99% free or reduced lunch). Most of the students at the two
remaining schools were native English speakers (10% and 1% LEP) of
White, Hispanic, or African American descent and were from middle-SES
homes (19% and 16% free or reduced lunch).
In all 52 third- and fourth-grade classrooms in the six participating
schools, the project introduced an inquiry-based science curriculum consisting of two instructional units per grade level (Measurement and
Matter for third grade, and the Water Cycle and Weather for fourth
grade). The units are organized around various hands-on activities
designed to promote children’s participation in science inquiry while
also developing their English proficiency and literacy skills. The project’s
aim is to make science accessible to all children, including those with limited English proficiency, by opening avenues to science learning that are
less heavily dependent on mastery of English. At the same time, children
from cultural or socioeconomic backgrounds that do not emphasize, or
have historically been marginalized from, scientific inquiry as practiced
in scientific communities have an opportunity to become familiar with
inquiry; they gradually progress from a greater dependence on teacher
guidance toward greater student initiative and responsibility (Duran,
Dugan, & Weffer, 1998; Fradd & Lee, 1999; Moje et al., 2001; Songer, Lee,
& McDonald, 2003). The project’s treatment of science, children’s home
language and culture, and development of English proficiency has been
detailed elsewhere (Hart & Lee, 2003; Lee, Hart, Cuevas, & Enders, 2004;
Luykx, Cuevas, Lambert, & Lee, 2004).
Participating teachers received all necessary supplies, student booklets,
and teachers’ guides for carrying out the lessons. They attended four full-day professional development workshops over the course of the school
year, focusing on teaching inquiry-based science, incorporating English
language and literacy development into science instruction, and mediating science instruction with elements of students’ home languages and
cultures.
DATA COLLECTION AND ANALYSIS
Within the larger project, written tests were developed for each science
unit to measure students’ mastery of key science concepts and “big
ideas,” such as patterns, systems, models, and relationships (Lee,
Deaktor, Hart, Cuevas, & Enders, 2005). The tests also measured students’ ability to conduct science inquiry, using (1) relatively structured
inquiry tasks (similar to 1996 National Assessment of Educational
Progress [NAEP] performance tasks) in which students construct graphs
and tables from the data provided, give an explanation for the data, and
draw a conclusion; and (2) relatively open-ended inquiry tasks in which
students generate hypotheses, design investigations, and plan procedures. Each test included both short-answer items and items requiring
longer, constructed responses in which students were asked to explain
scientific phenomena or their own reasoning.
Data sources for the present study included student responses from a
total of approximately 6,000 tests (two pretests and two posttests administered to over 1,500 third- and fourth-grade students). Teachers were
asked to follow standard procedures for test administration, which
included reading the items aloud for children with reading difficulties
and allowing ELLs to write their answers in either English or in their
home language.
Qualitative analysis of cultural and home language influence in students’ written responses to test items was not contemplated in the original research design. In the process of scoring students’ responses, it was
noted that some children seemed to interpret certain test items differently than the test developers had originally intended. Additionally, scorers’ difficulty in discerning the content of some children’s (particularly
ELLs’) responses led to the speculation that teachers would likely have
similar difficulties. Scorers also noticed differences in the ways that students framed their answers, which seemed related to their cultural backgrounds.
The coding system emerged over time, for various reasons. First, as
mentioned before, this sort of analysis was not originally contemplated in
the research. Informal coding and analysis began without clearly defined
categories or research questions as project personnel noted and discussed interesting test responses that appeared to be related to students’
cultural or linguistic backgrounds. Second, the research team was reluctant to organize the growing data set into well-defined categories until
such categories emerged from the collected examples themselves. Third,
the large quantity of student tests to be examined required the participation of several members of the research team, most of whom lacked experience with cross-cultural or discourse analysis. This resulted in frequent
uncertainty among team members as to what constituted relevant data.
Thus, the analytic categories emerged over time as team members
arrived at a more precise understanding of what we were seeking.
Over the course of this process, it became clear that coders’ ability to
detect linguistic and cultural influences in test responses was closely
linked not only to their specific academic/professional training but also
to their experience in negotiating different languages and cultures
within their own lives. In the end, student responses were coded by five
team members whose combined expertise included student assessment,
Teaching English to Speakers of Other Languages (TESOL), cultural
and linguistic anthropology, and Latin American studies. All but one
were bilingual (either English-Spanish or Creole-English) and had extensive experience “crossing borders” between different countries and cultural-linguistic groups.
As more examples of cultural and home language influence came to
light, team members began to carry out a more systematic and thorough
analysis of all student tests. Examples were recorded in four categories:
(1) use of code-switching or languages other than English; (2) phonological, orthographic, and semantic interference from ELLs’ home language; (3) content exhibiting influence from students’ home culture,
whether by explicit reference to their home life or by implicit assumptions that contrasted with those of the test developers; and (4) discursive
features related to authorial voice, genre, or pragmatic framing of
responses. The first category was eventually discarded because those
responses were not particularly problematic in assessing students’ performance and were less relevant to the issues of interpretation (both students’ interpretation of the test questions and teachers’/coders’ interpretation of students’ responses) that eventually became the focus of the
study.
As the different categories and subcategories took shape, major patterns specific to particular cultural and linguistic groups were identified.
These patterns are described and illustrated below. No statistical analysis
was conducted because, although the total number of responses indicating cultural or home language influence was considerable, it was quite
small relative to the entire data set (all responses on 6,000 student tests).
In addition, many responses were ambiguous or incomprehensible,
which made precise categorization difficult. Greater insight into students’ interpretations of test items might well have been gained by interviewing individual children about their answers to specific items, but this
was not feasible within the context of the study. Therefore, all our interpretations were based on textual analysis of students’ written responses,
supplemented by teachers’ comments during professional development
workshops (at which some of the preliminary data were presented).
RESULTS
Below, examples of linguistic, cultural, and languacultural influences in
students’ responses are presented and analyzed. Additionally, test items
themselves are analyzed for the ways in which they rely on tacit knowledge that may not be equally shared by all students. For each category,
examples were chosen on the basis of how frequently they appeared or
how well they highlighted the complexities of interpretation suggested by
the data set as a whole.
LINGUISTIC INFLUENCES
For children from non-English-language backgrounds, home language
influences, combined with limited English proficiency, may interfere with
their ability to correctly interpret test items and respond in ways that are
comprehensible to teachers. For example, ELLs may write their
responses in English but with spellings that reflect the phonological
structures or orthographic conventions of the home language. If teachers are unable to see through such interference to students’ intended
meanings, limited English proficiency may be mistaken for lack of science content knowledge. Semantic differences between cognate terms in
different languages are another source of difficulty, as is the fact that
many terms have both commonly used general meanings and specialized
scientific meanings. This latter factor can lead to confusion among native
English speakers as well as ELLs. The examples analyzed below are
divided between those that reflect phonological or orthographic interference from the home language, and those that indicate semantic confusion stemming either from cross-linguistic associations or from the polysemic nature of the science terms in question.
Phonological/Orthographic
Teachers participating in the study were instructed to allow ELLs to
respond in their home language if they so desired. Some children did
write answers partially or completely in Spanish, and a very few included
elements of Haitian Creole. Most responses by ELLs were in English but
with spellings reflecting the phonology or orthography of the home language. When students’ spellings differ widely from the standard English
ones, teachers may have difficulty comprehending their responses and,
consequently, assessing students’ level of content knowledge. This holds
especially, but not exclusively, for teachers who lack knowledge of students’ home language or who are unfamiliar with the phonetic values
that particular letters represent in Spanish or Creole. Some children’s
responses, like the following ones concerning a wind simulation demonstration, show a partial grasp of content knowledge that would likely go
unrecognized by teachers without the time or expertise to see through
the home language influence (in this case, from Spanish):
“…the waro gos to the nodo baro.” [the water goes to the other
bottle]
“Meibi the to spribrment the to or abaut eor.” [Maybe the two experiments the two are about air.]
Semantic
Students frequently interpreted science terms with reference to their
everyday meanings rather than their specialized scientific meanings. For
example, they consistently had difficulty with the question, “How would
you record your information?” and offered responses such as “with a tape
recorder” or “using a radio.” This tendency was not limited to ELLs and
is probably common among children unfamiliar with science content,
procedures, and discourse. Correspondingly, it decreased as the children
increased their mastery of scientific terminology. Other examples
included students confusing gas (state of matter) with gasoline, scientific
instruments with musical instruments, and states of matter with geopolitical states.
Although these examples are attributable to the polysemic nature of
much scientific terminology and thus occur even among monolingual
English speakers, other examples demonstrated clear links to students’
home language. To a question about how long one can play outside if it
is now 4:00 p.m. and one has to be home for dinner by 6:00 p.m., some
Haitian students indicated puzzlement over the idea of eating dinner at
that hour. Mahotiere, who joined the research team in the third year of
the project, later explained that in Haiti, the cognate term for “dinner”
(díne) refers to the meal eaten at midday, as indeed the word dinner does
in some regions of the United States. The common use, in some Spanish-speaking countries, of the word gaseosa (“gaseous”) to refer to soft drinks
(a liquid) was another source of confusion. Also common among
Spanish-speaking children was confusion of the abbreviations F
(Fahrenheit) and C (Celsius) with the Spanish abbreviations for frío
(cold) and caliente (hot); in addition to using the same letters, both sets
of abbreviations give information about temperature. (A few Haitian children made the same error, because the abbreviations for the French
terms, used on bathroom faucets and such, are also F [froid] and C
[chaud].) Furthermore, this confusion was persistent in at least one child
who was reasonably fluent in English, judging from his written answer
(“yes because F is cold tempeture and if you don’t [wear a sweater] you
will be cold”).
CULTURAL INFLUENCES
Cultural influences in students’ test responses were somewhat harder to
divide into clear subcategories, but a few definite themes did emerge.
References to practices, norms, and beliefs characteristic of students’
home environments showed up in numerous responses and were not limited to nonmainstream children. Additionally, some examples pointed to
implicit cultural assumptions that were evidently not shared between the
children and the test developers. As many of the analyzed examples show,
SES is a major factor shaping children’s cultural expectations and interpretive frameworks.
The most obvious cultural influences in students’ responses were those
directly linked to the propositional content of the test items. Some children, when faced with test items for which they apparently lacked the
necessary science knowledge, referred instead to cultural beliefs or experiences from their home environments. For example, in response to a
question about where condensed water droplets came from, one low-SES
child answered, “A leak in the roof.” In response to the question, “Where
did the [evaporated] water go?” another low-SES child wrote, “Someone
stole it.” Other children responded to questions about the weather with
“God makes it rain” or “God makes the wind.” The theme of deference
to adults’ expectations appeared in several responses from Haitian students. For example, the aforementioned question concerning elapsed
time evoked responses such as “I’m a good girl” or “I’ll do my homework
before I go outside.” A few students offered written apologies, such as
“I’m sorry I don’t know the answer.”
Although examples like these clearly demonstrate the influence of children’s cultural backgrounds on their interpretation and expression of
science content, they do not appear to be closely linked to particular linguistic structures. In contrast, the next section illustrates how languacultural conventions around discursive genres, visual organization of content, and pragmatic framing can give rise to unintended interpretations
of test questions, and responses that locate children in particular cultural
relationships vis-à-vis content knowledge and/or a hypothetical audience.
LANGUACULTURAL INFLUENCES
The previous discussion of linguistic and cultural influences dealt mainly
with misinterpretations of propositional content, whether of test items
(as misinterpreted by students) or of students’ responses (as misinterpreted by teachers/scorers). In this section, we discuss students’ apparent confusion around discursive conventions for the interpretation and
production of scientific texts. These languacultural examples include
such phenomena as confusion over academic genres, textual and graphic
conventions around the visual organization of content, and culturally
specific ways of framing responses. In test items, as in everyday verbal
interaction, significant information is often not explicitly expressed,
under the oft-mistaken assumption that all children will interpret a
chunk of discourse according to the same cultural frame.
Genre Confusion
Some children responded to the tests’ problem scenarios as if they were
stories rather than narrative vehicles for science questions. For example,
when children were asked to draw a conclusion about an inquiry activity
concerning evaporation and heat (undertaken by a hypothetical child
named Laura), many interpreted the term conclusion in a more literary
sense, giving responses like “Now Laura knows all about heat and evaporation,” or “Laura is trying to get the science project done so she can get an A.” Such responses may reflect the emphasis with which language arts terms are introduced in third and fourth grade, even though some of those terms take on different meanings in the context of science instruction.
Other responses revealed students’ difficulty in abstracting the relevant
question from the surrounding scenario (i.e., isolating the science content from those elements meant only as narrative vehicles for that content). For example, one item presented students with a scenario in which
a girl, Marie, must choose the better container in which to leave her pet
fish while she travels on vacation. The item explained that water will evaporate from the container while Marie is gone and that the rate of evaporation is related to the width of the container. Some children, rather than
focusing on the question of how fast water would evaporate from each
container, focused instead on the plight of the fish, suggesting, for example, that “They could solf [sic] the problem by putting the two fish
together.” This represents another kind of genre confusion; children
familiar with this type of test item know that interpreting and responding
correctly requires one to identify the science question one is supposed to
answer, and then isolate the relevant information from the surrounding
text. In contrast, in the “story” genre, which is so prevalent in early elementary education, inquiring or speculating about narrative elements is
often appropriate and even encouraged by teachers.
Other children interpreted test items based on real-world scenarios, in
terms of the social expectations around the scenario rather than the
implied science question. For example, regarding the question about
elapsed time (which posits that the student must be home by 6:00 p.m.),
some wrote responses like, “I have to be home by 3:00.” Similarly, another
question read, “You hear the weatherman say it is 93° Fahrenheit this
afternoon. Do you think you will need a sweater if you go outside to play?
Explain your answer.” Although the “correct” answer would be “No,
because 93° Fahrenheit is hot,” some Hispanic children responded in
more pragmatic terms, indicating that a sweater would in fact be necessary “because it might get cold,” “it might rain,” “you need to find out”
(suggesting that weather forecasts are not always accurate), or “Yes,
because you always take a sweater when you go outside” (suggesting that
parents’ admonitions take priority over actual weather conditions).
These children apparently failed to recognize that, despite what the test
items actually said, they were not asking what time students had to be
home or whether they should take a sweater to play outside. Identifying
the “real” question underlying the literal one is a form of communicative
competence linked to children’s familiarity with this particular textual
genre. The conventions and expectations around linguistic genres often
constitute tacit cultural information that is not explicitly taught in the
classroom.4
Textual Conventions
Numerous responses reflected students’ lack of familiarity with the textual conventions of assessment items. Consider the following items,
which were accompanied by a weather map showing high and low temperatures for several U.S. cities:
Locate the city of New York on the map.
5. a. What is the high temperature?
    b. What is the low temperature?
To the experienced test taker, it is clear that questions a and b are subelements of the question set 5 and that both refer to New York. However,
some children answered “92°” or “62°,” which were the highest and lowest temperatures for the entire map, and did not refer to New York.
Several others answered “Miami” for 5a and “Minneapolis” for 5b,
because these were the cities displaying the highest and lowest temperatures, respectively. What these children did (i.e., compare temperatures
among all the cities on the map) was actually more difficult than what
they were supposed to do (i.e., simply note the temperatures in New
York). Evidently, they did not grasp the hierarchical relationships among
the three sentences, instead reading each as a stand-alone item that
referred to the map but not to surrounding sentences.
Answering this type of question requires a fairly subtle knowledge of
how both scientific information and school science tests are conventionally organized and how the contextual boundaries for interpreting test
items are signaled visually. To correctly interpret the items, children must
realize that (1) hierarchical number/letter systems (e.g., 5a, 5b) and
indentation of lower level elements are used to organize test items into
semantically related sets; (2) words in boldface preceding a numbered set
are not test items, but instructions for answering the items to follow; and
(3) the instructions determine the contextual boundaries within which
the items in the associated set are to be interpreted (i.e., the two questions about temperature refer only to New York, not to the map as a
whole).
When the prior knowledge that is needed to correctly interpret the
items is explicitly stated in this way, it becomes clear that children are
being asked to do much more than simply locate cities on the map and
write down the readings for each city. Genre-specific conventions for the
visual organization of information are seldom explained in tests themselves, nor are they usually explained by teachers; rather, children are
expected to learn them implicitly through repeated exposure to texts
that make use of them. This is a subtle task, especially for children for
whom basic literacy still constitutes a challenge, and such children tend
to be disproportionately from nonmainstream backgrounds. To complicate things further, textual conventions (e.g., use of bold text, hierarchical grouping of related questions, use of interrogative forms) are sometimes used inconsistently both within and across tests.
Pragmatic Framing/Authorial Voice
The assessment instruments in question also followed implicit discourse
conventions requiring students to “decode” items’ intended, academic
meanings rather than their literal meanings. In linguistic terms, students
needed to recognize the pragmatic framework within which items were
to be interpreted, rather than simply taking words at face value. For
example, the aforementioned question on elapsed time stated, “Your parents tell you to be home at 6:00 p.m. for dinner. It is 4:00 p.m. How much
time do you have to get home for dinner?” Most students had little trouble deducing that the answer was “2 hours” but were stymied by the follow-up question, “Show your work.” Many students simply left the item
blank; others drew pictures of dinner tables or of themselves playing outside while a parental figure stood expectantly in the doorway of a house.
A likely reason for this is that the problem does not really require any
“work,” but rather a mental calculation so simple as to be intuitive, even
for third graders. What the follow-up question is really asking is for the
student to translate the concrete (albeit hypothetical) scenario into an
abstract mathematical operation (i.e., 4 + ? = 6); however, the instruction
“Show your work” does not communicate this clearly unless one is familiar with the conventions of this sort of test item.5
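Written out explicitly, with a variable t standing in for the question mark in the operation above (our notation, not the item’s), the computation that the item presumably expects is simply

\[ 4 + t = 6 \;\Longrightarrow\; t = 6 - 4 = 2 \text{ hours,} \]

a calculation so elementary that, without familiarity with the genre, children have little way of knowing that this is the “work” they are being asked to show.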
Other languacultural phenomena include students’ use of discourse markers or framing devices to position
themselves in a particular relationship to the reader or to the content in
question. Note the contrasts with regard to students’ “authorial voice”
among these responses to the question, “Now help Laura write her conclusion about evaporation and heat.”
“The Water evpreit and that ho it is.” [The water evaporates and that’s
how it is.]
“Well, you see, the water evaporated with the hotness.”
“So, in conclusion, heat attracts water up slowly.”
The first response suggests an attitude of impatience with the test (or
science inquiry?) itself—water just evaporates, and questions of why or
how it does so are irrelevant. The second response, by invoking a hypothetical interlocutor (“you”), positions the student as confidently explaining the phenomenon in question in a conversational tone. The third
response evokes a more formal verbal register and also dovetails with the
inquiry framework in which the lessons were presented (problem,
hypothesis, procedures, data collection and analysis, conclusion).
A few other children (monolingual English speakers from middle-SES
backgrounds) also prefaced their responses with discourse markers that
seemed to reflect a certain authorial confidence (e.g., “hello” or “of
course”). Although these examples were too infrequent to suggest any
particular distribution across different groups of students, they do suggest relationships between author and reader (sociability, shared knowledge of the content to follow) that can clearly be considered languacultural. Such issues of voice in written test responses are unlikely to substantially affect teachers’ assessment of children’s science knowledge,6 but do
reveal differing levels of mastery of academic discourse conventions, and
ways in which different students position themselves with regard to both
science knowledge and testing situations.
DISCUSSION AND IMPLICATIONS
DISCUSSION
By illustrating how children’s home language, cultural knowledge, and
discourse conventions influence their performance on science assessments, this study has demonstrated some of the challenges involved in
designing valid and equitable assessments for students from diverse cultural and linguistic backgrounds. Relative to the entire data set, only a
small number of test responses indicated that students were led astray by
cross-linguistic associations, underlying cultural assumptions, contextual
or visual clues, or the embedding of test items in real-life scenarios.
Furthermore, such responses were more common on pretests than on
posttests. As children’s science knowledge and English proficiency
increased, and as they became acculturated into academic/scientific genres and discourse conventions, they learned to focus more narrowly on
the science content of test items, and the frequency of linguistic, cultural,
and languacultural interference decreased. Nevertheless, the existence
of such examples led to several important observations about assessment
of linguistically and culturally diverse students.
First, when the language of assessment is one that students have yet to
master, limited proficiency in that language can masquerade as limited
content knowledge. Second, scientific terms have connotations and
points of contact with everyday vocabulary that vary significantly from
one language to another. Third, when dealing with a culturally heterogeneous group of children, cultural “grounding” of test items, whether purposeful or inadvertent, may contextualize those items for some children
while decontextualizing them for others. Finally, academic assessments
inevitably contain a considerable amount of implicit languacultural
knowledge, which different groups of children may not share. Indeed,
our own elaboration of test items was both more culturally embedded
and more deeply influenced by the conventions of academic English
than we had realized.
For students who are assessed in a language they have not yet mastered,
there is no easy solution to the problem of valid and equitable assessment. The science concepts, discursive genres, and assessment practices
common to U.S. schools are inextricably tied to the use of standard
American English. Thus, until students have mastered that language—
which generally takes quite a bit longer than the 1–2 years of ESOL
instruction they receive—English-medium assessments cannot be
assumed to provide an accurate picture of their science knowledge. On
the other hand, assessing students in the home language raises problems
of validity, resources, and compatibility with the language of instruction
(Solano-Flores & Trumbull, 2003). With the spread of “English-only” legislation that prioritizes children’s acquisition of English over their subject-area knowledge, possible solutions to this dilemma are further constrained (Abedi, 2004; Abedi et al., 2004; Gutiérrez et al., 2002).
Languacultural influences were evident not only in students’ responses
to test items but also in the items themselves. Test items were constructed
to represent key concepts and inquiry practices from the instructional
units and were presented in the context of real-life situations in the hope
of enhancing meaning and relevance for students (García & Pearson,
1994; Ruiz-Primo & Shavelson, 1996). However, the embedding of specific cultural knowledge in test items led some children to interpret the
items differently than expected. The analyzed examples demonstrate
that assessment instruments are imbued with tacit cultural knowledge at
nearly every stage of their construction. Such knowledge includes the
visual clues indicating relationships between textual elements (e.g., the
weather map questions), genre-specific meanings of particular terms
(e.g., conclusion), social expectations associated with particular situations
(e.g., whether one should take a sweater when going outside to play), and
conventionalized, rather than literal, understandings of phrases (e.g.,
“Show your work”). Tacit cultural knowledge also aids students in distinguishing relevant scientific information from “background” information,
as they are required to do in scenario-type problems.
In short, cultural and home language influences are not simply “baggage” that nonmainstream children bring to the classroom, to the possible detriment of their academic performance. Rather, they are part and
parcel of instructional and assessment practices; indeed, they are integral
to virtually every aspect of the pedagogical encounter, though largely
unconscious for the actors involved. It is only when the cultural assumptions, linguistic associations, and discursive conventions that educators
take for granted contrast with those held by students that they become
visible.
The results of this study suggest that the goal of designing culturally
neutral assessments is unrealistic. Test developers are faced with a multitude of decisions concerning formatting, wording, visual cues, and textual organization. Many of these decisions are made unconsciously, and
each involves the incorporation of culture-specific knowledge that is
often crucial to the correct interpretation of test items. Although obvious
cultural bias should be avoided, test developers cannot be expected to
identify and purge all culture-specific elements from assessment instruments. Such a task is impossible, given the inherently cultural nature of
such instruments and the amount of cultural knowledge that operates
below the level of conscious awareness. Adopting an ideal of culturally
neutral assessments also runs counter to the current emphasis on making
instruction and assessment meaningful and relevant to students, because
“meaning” and “relevance” are culturally determined and inevitably refer
to cultural knowledge.
On the other hand, the results of the study also cast doubt on the aim
of creating “culturally relevant” assessments that contextualize science
problems in real-life scenarios (cf. Solano-Flores & Nelson-Barber, 2001).
Clearly, what constitutes a real-life scenario for some children may be far
removed from the life experience of others. For this reason, some
researchers have suggested that academic assessments should emphasize
content and procedures taught within the classroom, rather than
attempting to make links to experiences outside the classroom
(Hamilton, 1998; Shavelson, Baxter, & Pine, 1992). However, that proposal rests on the assumption that all children have equal access to quality classroom instruction, which is often not the case (Eder, 1982/2000;
Kozol, 1991; Noguera, 2003; Oakes, 1988; Wiley & Wright, 2004).
The cultural, linguistic, and languacultural influences evident both in
test items and in students’ responses reveal how scientific information is
inevitably embedded within particular interpretive frameworks. The
examples analyzed in this study indicate some of the discrepancies
between the frameworks of educators (teachers, researchers, and test
developers) and those of children from minority or immigrant communities. These discrepancies are a result of the different “funds of knowledge” (Moll, 1992) that each group brings to the scientific task at hand.
In some cases, children’s own funds of knowledge may lead them away
from, rather than toward, the intended interpretations of test items.
IMPLICATIONS FOR CLASSROOM PRACTICES
The issues discussed herein have direct implications for instruction and
assessment, because they may distort teachers’ perceptions of students’
level of content knowledge (usually downward, but conceivably upward
as well). This highlights the need for teachers to be attentive to potential
cultural bias in curriculum materials and assessment instruments, common linguistic confusions around specific science terms, and the relative
independence of students’ content knowledge from their level of English
proficiency (Shaw, 1997; Shepard et al., 1998; Solano-Flores & Nelson-Barber, 2001; Solano-Flores & Trumbull, 2003).
Despite the very real obstacles to providing valid and equitable assessment for ELLs, there are steps that teachers can take to minimize the
problem. Even teachers who do not speak the home language(s) of their
students can learn to spot home language interference in children’s writing and to be alert to the cross-linguistic confusions that may impede
ELLs from grasping science concepts or clearly expressing what they
know. Indeed, greater awareness of such factors and time to attend to
them are often lacking among teachers who do speak their students’
home language. It should never be assumed that children’s content
knowledge or academic ability is in direct relation to their English proficiency, although an increasing number of U.S. schools tend to operate
under this assumption.
Beyond the more obvious instances of cultural bias and linguistic interference, several of the analyzed examples illustrate the more subtle kinds
of languacultural knowledge regularly embedded in test items. Although
such knowledge is outside the bounds of the science content considered
to be the focus of instruction, it is no less crucial to children’s successful
academic performance. Therefore, it is important that teachers become
aware of the discursive, textual, and graphic conventions that guide the
construction of assessment instruments and take steps to ensure that all
students understand these conventions. When this is not done, assessments cannot be assumed to accurately reflect the knowledge or abilities
of all students.
The data suggest that probing students’ misinterpretations of test items
could provide a basis for teaching the cultural, discursive, and graphic
conventions that assessment instruments require them to know. For
example, if teachers were to explore with students how one discerns the
relationships among textual elements (like those referring to the weather
map), teachers themselves might come to recognize that the ability to
interpret test items correctly is based on culturally specific graphic conventions and textual markers, rather than on a transparent universal
logic or visual intuition.7 This is not to suggest that most teachers are
unaware of such conventions. Significantly, the examples in question
came from the pretest, before children had been taught the graphic conventions for interpreting weather maps.
Careful analysis of children’s written work can provide a window on
their lives outside of school and on the relationship between what they
know and what teachers want them to know. Teachers must become
aware of this relationship if they are to address the continuities and discontinuities between children’s prior cultural knowledge and school science (Lee, 2002, 2003; Solano-Flores & Nelson-Barber, 2001).
Recognition of the ways in which linguistic, cultural, and languacultural
factors shape the science knowledge not only of students but also of
teachers, scientists, curriculum specialists, and test developers may also
contribute to a greater appreciation of the cultural and linguistic
resources that children from diverse backgrounds bring to the science
classroom.
IMPLICATIONS FOR FUTURE RESEARCH
Much of the existing knowledge base concerning achievement gaps
among different demographic groups—whether racial/ethnic, linguistic,
or socioeconomic—is derived from large-scale research studies that
depend on standardized assessments of students’ content knowledge in
subject areas. Inasmuch as large-scale studies, by their nature, involve
more heterogeneous populations, the question of what approach is most
likely to ensure valid and equitable assessment represents a fundamental
dilemma for researchers aiming to combine large-scale student assessment with a concern for cultural and linguistic diversity.
In this article, we have addressed two contrasting approaches to the
search for more valid and equitable assessment of culturally diverse student groups: one that seeks to minimize the cultural content of assessment instruments in order to make them as “culturally neutral” as possible, and one that seeks to make assessments more culturally relevant to
students. Although both approaches have significant limitations, systematic and comparative study of the two could yield important insights into
how to make academic assessments more equitable and more valid. A
prerequisite for such study is more fine-grained analysis of the tacit
knowledge that is commonly embedded in science assessments, and of
the ways in which different groups of students interpret scientific concepts, terminology, and texts. Existing and ongoing work that is relevant
to these questions includes research on the epistemologies of scientific
communities (Eisenhart, 1996; Traweek, 1988) and cultural groups that
have traditionally been marginalized from science (Cobern & Loving,
2001; Siegel, 2002; Snively & Corsiglia, 2001; Stanley & Brickhouse, 1994,
Also relevant are studies of how the discursive and textual conventions characteristic of particular written genres affect readers’ interpretations and how those interpretations vary among different kinds of readers (Boyarin, 1992; Street, 1993).
Large-scale studies are essential for digging deeper into the factors contributing to achievement gaps among different student groups. At the
same time, uncovering the cultural and linguistic frameworks influencing
students’ academic performance requires deep qualitative analyses that
are difficult or impossible to carry out with large numbers of children.
Given the fragmentation and hyperspecialization of educational scholarship, the many kinds of disciplinary expertise needed to untangle these
questions are seldom found in a single researcher or even a single research team. Research that gets to the heart of these questions requires
the collaboration of scholars who too seldom communicate across their
disciplinary boundaries: science educators, assessment specialists, linguists, anthropologists, discourse analysts, statisticians, and perhaps others as well. Combining such a disparate collection of disciplines and
research methodologies is a formidable task; this very fact has, no doubt,
contributed to the persistence of assessment inequities. Much basic work
remains to be done on how different groups conceive and organize scientific knowledge and how students’ knowledge relates to their academic
performance. It is hoped that the community of scholars taking an interest in these matters will eventually prove as diverse as the children
demanding their attention.
This work is supported by the National Science Foundation, the U.S. Department of Education, and
the National Institutes of Health (Grant No. REC-0089231). Any opinions, findings, conclusions, or
recommendations expressed in this publication are those of the authors and do not necessarily reflect the
position, policy, or endorsement of the funding agencies.
Notes
1 This persistent assumption has been challenged from a variety of perspectives,
including multiculturalist (Atwater, 1993, 1996; Brickhouse, 1994; Hodson, 1993;
Rodríguez, 1997), sociocultural (Lemke, 2001; O’Loughlin, 1992), indigenous (Aikenhead,
2001; Ogawa, 1995; Snively & Corsiglia, 2001), feminist (Haraway, 1990, 1991; Keller, 1982),
and postmodernist (Dreyfus & Rabinow, 1982; Norman, 1998; Rouse, 1996; Spivak, 1993),
as well as views emerging from critical theory (Calabrese Barton, 2001; Tobin, Seiler, &
Smith, 1999) and concerns for civil rights and social justice (Tate, 2001).
2 A telling exception was the “Ebonics” controversy that occurred in California in the
late 1990s. The relationship between standard English and the dialect spoken by many
African American students, and the best ways to deal with nonstandard dialects in the classroom, are persistent problems in U.S. education (see Ogbu, 1999; Perry & Delpit, 1998).
3 Limited English proficient; this is the designation used by the school district.
4 Children’s tendency to mentally put themselves into the scenario and respond as
they actually would in that situation, rather than tease out the hypothetical science question, was not limited to nonmainstream students. Several monolingual English speakers
from middle-SES backgrounds responded to a question on how they would record scientific
information with, “I’d ask Jeeves,” “look it up on the Internet,” or “go on the computer.” It
is also possible that such responses represent a fallback strategy when children lack the content knowledge to answer the question but feel pressured to respond anyway.
5 Similarly, on other items that asked the student to “show your work,” some African
American children responded with “I know ‘cause I know” or “I know it’s right.”
6 However, mastery of the corresponding oral conventions may influence teachers’
perceptions of students’ knowledge to a much greater degree (Cazden, 2001; Heath, 1983;
Michaels, 1981; Philips, 1983). A case in point was an oral presentation by a group of fourth
graders at a high-performing school serving mostly children of professionals. After one girl
read a fairly complex passage about the science topic under study, which she had downloaded from the Internet but did not seem to fully understand, a boy in the group followed
up with, “To put that in plain English. . . ” and then proceeded to informally paraphrase
what the girl had read. His confident use of this rather grown-up expression to frame his
contribution seemed to make a positive impression on his audience (including the teacher)
even though his paraphrase did not in fact accurately reflect the content presented by his
teammate.
7 Our thanks to an anonymous reviewer for pointing this out.
References
Abedi, J. (2004). The No Child Left Behind Act and English language learners: Assessment
and accountability issues. Educational Researcher, 33(1), 4–14.
Abedi, J., Hofstetter, C. H., & Lord, C. (2004). Assessment accommodations for English language learners: Implications for policy-based empirical research. Review of Educational
Research, 74, 1–28.
Agar, M. (1994). Language shock: Understanding the culture of conversation. New York: William
Morrow.
Aikenhead, G. S. (2001). Integrating Western and aboriginal sciences: Cross-cultural science teaching. Research in Science Education, 31, 337–355.
Aikenhead, G. S., & Jegede, O. J. (1999). Cross-cultural science education: A cognitive explanation of a cultural phenomenon. Journal of Research in Science Teaching, 36, 269–287.
American Association for the Advancement of Science. (1989). Science for all Americans. New York: Oxford University Press.
Atwater, M. M. (1993). Multicultural science education: Perspectives, definitions, and
research agenda. Science Education, 77, 661–668.
Atwater, M. M. (1994). Research on cultural diversity in the classroom. In D. L. Gabel (Ed.),
Handbook of research on science teaching and learning (pp. 558–576). New York: Macmillan.
Atwater, M. M. (1996). Social constructivism: Infusion into the multicultural science education research agenda. Journal of Research in Science Teaching, 33, 821–837.
Boyarin, J. (Ed.). (1992). The ethnography of reading. Berkeley: University of California Press.
Brenner, M. E. (1998). Adding cognition to the formula for culturally relevant instruction
in mathematics. Anthropology and Education Quarterly, 29, 213–244.
Brickhouse, N. (1994). Bringing in the outsiders: Reshaping the sciences of the future.
Curriculum Studies, 26, 401–416.
Calabrese Barton, A. (2001). Science education in urban settings: Seeking new ways of
praxis through critical ethnography. Journal of Research in Science Teaching, 38, 899–917.
Cazden, C. (2001). Classroom discourse: The language of teaching and learning.
Portsmouth, NH: Heinemann.
Cobern, W. W. (Ed.). (1998). Socio-cultural perspectives on science education. Boston:
Kluwer Academic.
Cobern, W. W., & Aikenhead, G. S. (1998). Cultural aspects of learning science. In B. Fraser
& K. Tobin (Eds.), International handbook of science education: Part one (pp. 39–52).
Dordrecht, The Netherlands: Kluwer Academic.
Cobern, W. W., & Loving, C. C. (2001). Defining “science” in a multicultural world:
Implications for science education. Science Education, 85, 50–67.
Corson, D. (2001). Language diversity and education. Mahwah, NJ: Erlbaum.
Dreyfus, H., & Rabinow, P. (1982). Michel Foucault: Beyond structuralism and hermeneutics.
Chicago: University of Chicago Press.
Duran, B. J., Dugan, T., & Weffer, R. (1998). Language minority students in high school:
The role of language in learning biology concepts. Science Education, 82(3), 311–341.
Eder, D. (2000). Ability grouping as a self-fulfilling prophecy: A microanalysis of teacher-student interaction. In B. Levinson et al. (Eds.), Schooling the symbolic animal: Social and cultural dimensions of education (pp. 248–259). Lanham, MD: Rowman & Littlefield.
(Original work published 1982)
Eisenhart, M. (1996). The production of biologists at school and work: Making scientists,
conservationists, or flowery bone-heads? In B. A. Levinson, D. E. Foley, & D. C. Holland
(Eds.), The cultural production of the educated person: Critical ethnographies of schooling and
local practice (pp. 169–185). Albany: State University of New York Press.
Eisenhart, M., Finkel, E., & Marion, S. F. (1996). Creating the conditions for scientific literacy: A re-examination. American Educational Research Journal, 33, 261–295.
Fradd, S. H., & Lee, O. (1999). Teachers’ roles in promoting science inquiry with students
from diverse language backgrounds. Educational Researcher, 28(6), 4–20, 42.
García, G. E., & Pearson, P. D. (1994). Assessment and diversity. In L. Darling-Hammond
(Ed.), Review of Research in Education (Vol. 20, pp. 337–383). Washington, DC: American
Educational Research Association.
Gutiérrez, K. D., Asato, J., Pacheco, M., Moll, L. C., Olson, K., Horng, E. L., et al. (2002).
“Sounding American”: The consequences of new reforms on English language learners.
Reading Research Quarterly, 37, 328–343.
Hamilton, L. S. (1998). Gender differences on high school science achievement tests: Do
format and content matter? Educational Evaluation and Policy Analysis, 20, 179–195.
Haraway, D. (1990). Primate visions: Gender, race and nature in the world of modern science. New
York: Routledge.
Haraway, D. (1991). Simians, cyborgs and women: The reinvention of nature. New York:
Routledge.
Hart, J., & Lee, O. (2003). Teacher professional development to improve science and literacy achievement of English language learners. Bilingual Research Journal, 27, 475–501.
Heath, S. B. (1983). Ways with words. Cambridge, England: Cambridge University Press.
Helms, J. E. (1992). Why is there no study of cultural equivalence in standardized cognitive
ability testing? American Psychologist, 47, 1083–1101.
Hodson, D. (1993). In search of a rationale for multicultural science education. Science
Education, 77, 685–711.
Ivanic, R. (1998). Writing and identity: The discoursal construction of identity in academic
writing. Philadelphia: John Benjamins.
Keller, E. (1982). Feminism and science. Signs: Journal of Women in Culture and Society,
7, 589–602.
Kirschner, M., Spector-Cohen, E., & Wexler, C. (1992). Avoiding obstacles to student comprehension of test questions. TESOL Quarterly, 26, 537–556.
Kozol, J. (1991). Savage inequalities: Children in America’s schools. New York: Crown.
Lee, O. (1999). Equity implications based on the conceptions of science achievement in
major reform documents. Review of Educational Research, 69, 83–115.
Lee, O. (2002). Science inquiry for elementary students from diverse backgrounds. In W.
G. Secada (Ed.), Review of Research in Education (Vol. 26, pp. 23–69). Washington, DC:
American Educational Research Association.
Lee, O. (2003). Equity for culturally and linguistically diverse students in science education:
A research agenda. Teachers College Record, 105, 465–489.
Lee, O., Deaktor, R. A., Hart, J. E., Cuevas, P., & Enders, C. (2005). An instructional intervention’s impact on the science and literacy achievement of culturally and linguistically
diverse elementary students. Journal of Research in Science Teaching, 42(8), 857–887.
Lee, O., & Fradd, S. H. (1998). Science for all, including students from non-English language backgrounds. Educational Researcher, 27(3), 12–21.
Lee, O., Hart, J., Cuevas, P., & Enders, C. (2004). Professional development in inquiry-based
science for elementary teachers of diverse student groups. Journal of Research in Science
Teaching, 41, 1021–1043.
Lemke, J. L. (2001). Articulating communities: Sociocultural perspectives on science education. Journal of Research in Science Teaching, 38, 296–316.
Luykx, A., Cuevas, P., Lambert, J., & Lee, O. (2004). Unpacking teachers’ “resistance” to
integrating students’ language and culture into elementary science instruction. In A.
Rodríguez & R. S. Kitchen (Eds.), Preparing prospective mathematics and science teachers to
teach for diversity: Promising strategies for transformative action (pp. 119–141). Mahwah, NJ:
Erlbaum.
Matthews, M. R. (1998). The nature of science and science teaching. In B. Fraser & K. Tobin
(Eds.), International handbook of science education: Part two (pp. 981–1000). Dordrecht,
The Netherlands: Kluwer Academic.
Merino, B., & Hammond, L. (2001). How do teachers facilitate writing for bilingual learners in “sheltered constructivist” science? Electronic Journal in Science and Literacy, 1(1).
Michaels, S. (1981). Sharing time: Children’s narrative styles and differential access to literacy. Language in Society, 10, 432–442.
Moje, E., Collazo, T., Carillo, R., & Marx, R. W. (2001). “Maestro, what is quality?”:
Examining competing discourses in project-based science. Journal of Research in Science
Teaching, 38, 469–495.
Moll, L. (1992). Bilingual classroom studies and community analysis: Some recent trends.
Educational Researcher, 21(2), 20–24.
Noguera, P. (2003). City schools and the American dream: Reclaiming the promise of public education. New York: Teachers College Press.
Norman, O. (1998). Marginalized discourses and scientific literacy. Journal of Research in
Science Teaching, 35, 365–374.
Oakes, J. (1988). Tracking in mathematics and science education: A structural contribution
to unequal schooling. In L. Weis (Ed.), Class, race and gender in American education (pp.
106–125). Albany: State University of New York Press.
Ogawa, M. (1995). Science education in a multiscience perspective. Science Education, 79,
583–593.
Ogbu, J. U. (1999). Beyond language: Ebonics, proper English, and identity in a Black-American speech community. American Educational Research Journal, 36, 147–184.
O’Loughlin, M. (1992). Rethinking science education: Beyond Piagetian constructivism
toward a sociocultural model of teaching and learning. Journal of Research in Science
Teaching, 29, 791–820.
O’Malley, J. M., & Valdez Pierce, L. (1994). State assessment policies, practices, and language minority students. Educational Assessment, 2, 213–255.
O’Sullivan, C. Y., Lauko, M. A., Grigg, W. S., Qian, J., & Zhang, J. (2003). The nation’s report
card: Science 2000. Washington, DC: U.S. Department of Education, Institute of
Education Sciences.
Perry, T., & Delpit, L. (1998). The real Ebonics debate: Power, language, and the education of
African-American children. Boston: Beacon Press.
Philips, S. (1983). The invisible culture: Communication in classroom and community on the Warm
Springs Indian Reservation. New York: Longman.
Rodríguez, A. (1997). The dangerous discourse of invisibility: A critique of the NRC’s
National Science Education Standards. Journal of Research in Science Teaching, 34, 19–37.
Rogoff, B. (1990). Apprenticeship in thinking: Cognitive development in social context. New York:
Oxford University Press.
Rogoff, B., & Lave, J. (Eds.). (1984). Everyday cognition: Development in social context.
Cambridge, MA: Harvard University Press.
Rouse, J. (1996). Engaging science: How to understand its practices philosophically. Ithaca, NY:
Cornell University Press.
Ruiz-Primo, M. A., & Shavelson, R. J. (1996). Rhetoric and reality in science performance
assessments: An update. Journal of Research in Science Teaching, 33(10), 1045–1063.
Schmitt, A. P., & Dorans, N. J. (1990). Differential item functioning for minority examinees
on the SAT. Journal of Educational Measurement, 27, 67–81.
Scollon, R., & Scollon, S. W. (1995). Intercultural communication: A discourse approach. Oxford,
England: Blackwell.
Shavelson, R. J., Baxter, G. P., & Pine, J. (1992). Performance assessments: Political rhetoric
and measurement reality. Educational Researcher, 21(1), 22–27.
Shaw, J. M. (1997). Threats to the validity of science performance assessments for English
language learners. Journal of Research in Science Teaching, 34(7), 721–743.
Shepard, L., Taylor, G., & Betebenner, D. (1998). Inclusion of limited-English-proficient students
in Rhode Island’s grade 4 mathematics performance assessment (CSE Technical Report No.
486). Los Angeles: University of California, National Center for Research on Evaluation,
Standards, and Student Testing.
Short, D. J. (1993). Assessing integrated language and content instruction. TESOL Quarterly,
27, 627–656.
Siegel, H. (2002). Multiculturalism, universalism, and science education: In search of common ground. Science Education, 86, 803–820.
Snively, G., & Corsiglia, J. (2001). Discovering indigenous science: Implications for science
education. Science Education, 85, 6–34.
Solano-Flores, G., Lara, J., Sexton, U., & Navarrete, C. (2001). Testing English language learners: A sampler of student responses to science and mathematics test items. Washington, DC:
Council of Chief State School Officers.
Solano-Flores, G., & Nelson-Barber, S. (2001). On the cultural validity of science assessments. Journal of Research in Science Teaching, 38, 553–573.
Solano-Flores, G., & Trumbull, E. (2003). Examining language in context: The need for
new research and practice paradigms in the testing of English-language learners.
Educational Researcher, 32(2), 3–13.
Songer, N. B., Lee, H.-S., & McDonald, S. (2003). Research towards an expanded understanding of inquiry science beyond one idealized standard. Science Education, 87(4), 490–516.
Spivak, G. C. (1993). Outside in the teaching machine. New York: Routledge.
Stanley, W. B., & Brickhouse, N. (1994). Multiculturalism, universalism, and science education. Science Education, 78, 387–398.
Stanley, W., & Brickhouse, N. (2001). Teaching sciences: The multicultural question
revisited. Science Education, 85, 35–49.
Street, B. (1993). Introduction: The new literacy studies. In B. Street (Ed.), Cross-cultural
approaches to literacy (pp. 1–21). Cambridge, England: Cambridge University Press.
Tate, W. (2001). Science education as civil right: Urban schools and opportunity-to-learn
considerations. Journal of Research in Science Teaching, 38, 1015–1028.
Tobin, K., Seiler, G., & Smith, M. W. (1999). Educating science teachers for the sociocultural diversity of urban schools. Research in Science Education, 29, 69–88.
Traweek, S. (1988). Beamtimes and lifetimes: The world of high energy physicists. Cambridge, MA:
Harvard University Press.
Vélez-Ibáñez, C. G., & Greenberg, J. B. (1992). Formation and transformation of funds of
knowledge among U.S.-Mexican households. Anthropology and Education Quarterly, 23,
313–335.
Warren, B., Ballenger, C., Ogonowski, M., Rosebery, A., & Hudicourt-Barnes, J. (2001). Rethinking diversity in learning science: The logic of everyday language. Journal of Research
in Science Teaching, 38, 529–552.
Wiley, T. G., & Wright, W. E. (2004). Against the undertow: Language-minority education
policy and politics in the “age of accountability.” Educational Policy, 18(1), 142–168.
AUROLYN LUYKX is joint associate professor of Anthropology and
Teacher Education at the University of Texas at El Paso. Her research
interests include critical pedagogy, bilingual and intercultural education,
language policy and planning, and indigenous education in the United
States and Latin America. Among her other publications are “Measuring
Instructional Congruence in Elementary Science Classrooms:
Pedagogical and Methodological Components of a Theoretical
Framework,” with Okhee Lee (Journal of Research in Science Teaching 44(3),
2007), and “Unpacking Teachers’ ‘Resistance’ to Integrating Students’
Language and Culture Into Elementary Science Instruction,” with Okhee
Lee, in Preparing Mathematics and Science Teachers for Diverse Classrooms:
Promising Strategies for Transformative Pedagogy, edited by A. Rodríguez & R.
S. Kitchen (Lawrence Erlbaum Associates, 2004).
OKHEE LEE is a professor in the School of Education, University of
Miami. Her research areas include elementary science education, language and culture, and teacher education. She was awarded a 1993–95 National Academy of Education Spencer Postdoctoral Fellowship. She
received the Distinguished Career Award from the American Educational
Research Association (AERA) Standing Committee for Scholars of Color
in Education in 2003. She serves on editorial boards for major education
research journals as well as advisory boards for science education reform
projects.
MARGARETTE MAHOTIERE is a Senior Research Associate in the
School of Education at the University of Miami. Previously she was a
teacher of English to Speakers of Other Languages. She is currently
working on a research project funded by the National Science
Foundation on improving science education for English language learners. Her research interests include language acquisition among English
language learners and the intersection of language, culture, and science
learning.
BENJAMIN T. LESTER is a middle grades ESOL teacher in the Cherokee
County School District in Canton, Georgia, and a doctoral candidate at
the University of Miami in Coral Gables, Florida. His research focuses on
the empowerment, agency, and academic success of language minority
students through classroom-based instructional practices.
JULIET HART is a former teacher of students with emotional/behavioral
disorders; she earned her doctoral degree in Special Education and
TESOL at the University of Miami in 2003. Since that time she has been
an assistant professor of Special Education at the College of William and
Mary, and currently is a visiting assistant professor in Special Education
at the University of Kansas. Her primary research interests include language, literacy, and multicultural issues in special education, child psychopathology, and classroom adaptations/strategies for students with disabilities in inclusive settings. She has several published and forthcoming
articles on special education and diversity topics in the journals
Intervention in School and Clinic, Remedial and Special Education, and the
Journal of Research in Science Teaching.
RACHAEL DEAKTOR formerly held the position of Senior Research
Associate in the School of Education at the University of Miami. In this
capacity, she contributed to research projects in the areas of elementary
science education, language and culture, and program evaluation. Her
other publications include “An instructional intervention’s impact on
the science and literacy achievement of culturally and linguistically
diverse students” with Okhee Lee et al., and “Improving science inquiry:
Lessons learned from children of diverse backgrounds” with Peggy
Cuevas et al., both in the Journal of Research in Science Teaching. Ms.
Deaktor currently works for a Boston-based educational publishing company.