
EDUCATIONAL ASSESSMENT, 8(3), 231–257
Copyright © 2002, Lawrence Erlbaum Associates, Inc.
Standardized Achievement Tests
and English Language Learners:
Psychometrics Issues
Jamal Abedi
Graduate School of Education and Information Studies
CRESST/University of California, Los Angeles
Using existing data from several locations across the U.S., this study examined the
impact of students’ language background on the outcome of achievement tests. The
results of the analyses indicated that students’ assessment results might be confounded by their language background variables. English language learners (ELLs)
generally perform lower than non-ELL students on reading, science, and math–a
strong indication of the impact of English language proficiency on assessment.
Moreover, the level of impact of language proficiency on assessment of ELL students
is greater in the content areas with higher language demand. For example, analyses
showed that ELL and non-ELL students had the greatest performance differences in
the language-related subscales of tests in areas such as reading. The gap between the
performance of ELL and non-ELL students was smaller in science and virtually nonexistent in the math computation subscale, where language presumably has the least
impact on item comprehension.
The results of our analyses also indicated that test item responses by ELL students, particularly ELL students at the lower end of the English proficiency spectrum, suffered from low reliability. That is, the language background of students may
add another dimension to the assessment outcome that may be a source of measurement error in the assessment for English language learners.
Further, the correlation between standardized achievement test scores and external criterion measures was significantly larger for the non-ELL students than for the
ELL students. Analyses of the structural relationships between individual items and
between items and the total test scores showed a major difference between ELL and
non-ELL students. Structural models for ELL students demonstrated lower statistical
fit. The factor loadings were generally lower for ELL students, and the correlations between the latent content-based variables were also weaker for them.

We speculate that language factors may be a source of construct-irrelevant variance in standardized achievement tests (Messick, 1994) and may affect their construct validity.

Requests for reprints should be sent to Jamal Abedi, UCLA–CSE/CRESST, 300 Charles E. Young Drive North, GSE&IS Building, 3rd Floor, Los Angeles, CA 90095–1522. E-mail: jabedi@cse.ucla.edu
Due to the rapidly changing demographics of the U.S. population, fairness and validity issues in assessment are becoming top priorities on the national agenda. Between 1990 and 1997, the number of U.S. residents not born in the United States
increased by 30%, from 19.8 million to 25.8 million (Hakuta & Beatty, 2000). According to the Survey of the States’ Limited English Proficient Students and Available Educational Programs and Services 1999–2000 Summary Report, over 4.4
million limited English proficient1 students were enrolled in public schools (National Clearinghouse for English Language Acquisition and Language Instruction
Educational Programs, 2002). To provide fair assessment and uphold standards of instruction for every child in this country, both federal (e.g., No Child Left Behind
Act of 2001) and state legislation now require the inclusion of all students, including ELLs, into large-scale assessments (Abedi, Lord, Hofstetter, & Baker, 2000;
Mazzeo, Carlson, Voelkl, & Lutkus, 2000). Such inclusion requirements have
prompted new interest in modifying assessments to increase English language learners' participation and to enhance the validity and equity of inferences drawn from the assessments themselves.
Standardized, high-stakes achievement tests are frequently used for assessment
and classification of ELL students, as well as for accountability purposes. They
shape instruction and student learning (Linn, 1995). About 40% of districts and
schools use achievement tests for assigning ELL students to specific instructional
services within a school, and over 70% of districts and schools use achievement
tests to reclassify students from ELL status (Zehler, Hopstock, Fleischman, &
Greniuk, 1994).
However, as most standardized, content-based tests (such as science and math
tests) are administered in English and normed on native English-speaking test populations, they may inadvertently function as English language proficiency tests.
English language learners may be unfamiliar with the linguistically complex structure of test questions, may not recognize vocabulary terms, or may mistakenly interpret an item literally (Duran, 1989; Garcia, 1991). They may also perform less well on tests because they read more slowly (Mestre, 1988).

1The term English language learner (ELL) refers to students who are not native speakers of English and are not as proficient in English as native speakers. A subgroup of these students with a lower level of English proficiency is referred to as limited English proficient (LEP). The term LEP is used primarily by government-funded programs to classify students, as well as by the National Assessment of Educational Progress (NAEP) for determining inclusion criteria. In this article we use ELL to refer to students who are not native English speakers and who are not reclassified as fluent in English.
Thus, language factors are likely to reduce the validity and reliability of inferences drawn about students’ content-based knowledge, as stated in the Standards
for Educational and Psychological Testing (American Educational Research Association, American Psychological Association, & National Council on Measurement in Education [AERA, APA, & NCME], 1999):
For all test takers, any test that employs language is, in part, a measure of their language skills. This is of particular concern for test takers whose first language is not
the language of the test. Test use with individuals who have not sufficiently acquired
the language of the test may introduce construct-irrelevant components to the testing
process. In such instances, test results may not reflect accurately the qualities and
competencies intended to be measured. … Therefore it is important to consider language background in developing, selecting, and administering tests and in interpreting test performance. (p. 91)
As indicated earlier, a major criticism of standardized achievement tests is the exclusion of ELL students from the norming group for these tests. Linn (1995) refers to
the issues associated with the inclusion of all students as one of the three most notable new features of this reform effort. The inclusion of all students has also been a major issue for NAEP assessments (see, e.g., Mazzeo et al., 2000).
Navarrette and Gustke (1996) expressed several concerns about the exclusion of
ELL students from the norming groups of standardized achievement tests:
Not including students from linguistically diverse backgrounds in the norming
group, not considering the match or mismatch between a student’s cultural and
school experiences, and not ensuring for English proficiency have led to justified accusations of bias and unfairness in testing. (p. 2)
Findings from a series of studies conducted by the National Center for Research
on Evaluation, Standards, and Student Testing (CRESST) on the impact of students’
language background on their performance indicated that (a) student language background affects students’ performance in content-based areas such as math and science, and (b) the linguistic complexity of test items may threaten the validity and reliability of achievement tests, particularly for ELL students (see Abedi & Leon, 1999;
Abedi, Leon, & Mirocha, 2001; Abedi & Lord, 2001; Abedi et al., 2000).
Thus, the literature on the assessment of ELLs clearly suggests that language
factors confound the test results of English language learners. However, the literature is not clear on the level of impact that language factors may have on different
content areas. That is, would the impact level of language on test outcomes differ
across the different content areas? Another issue concerns the impact level of language factors on the validity and reliability of content-based assessments for
ELLs. Available data from four large school sites in the nation enabled us to explore these issues in greater detail.
METHODOLOGY
Research Questions
1. Could the performance difference between ELL and non-ELL students be
partly explained by language factors in the assessment?
2. Could the linguistic complexity of test items as a possible source of measurement error influence the reliability of the assessment?
3. Could the linguistic complexity of test items as a possible source of construct-irrelevant variance influence the validity of the assessment?
Data Sources
The data for this study were obtained from four locations across the U.S. To ensure
anonymity, these data sites are referred to as Sites 1 to 4. Item-level standardized
achievement test data and background information were obtained for participating
students. The background variables included gender, ethnicity, free/reduced price
lunch participation, parent education, student ELL status, and students with disabilities (SD) status.
Table 1 summarizes some of the main characteristics of the four data sites. As
data in Table 1 show, there were similarities and differences among the four data
sites. All sites used standardized tests for measuring students’ achievement in English and other content-based areas, but they differed in the type of test administered. Although all sites had an index of students’ English language proficiency
status (ELL or bilingual status), and they all provided some student background information, they differed in the type of language proficiency index used and the
type of background variables provided. These differences limited our ability to
perform identical analyses at the different sites for cross-validation purposes.
However, there were enough similarities in the data structures at the four different
sites to allow for meaningful comparisons.
The following is a brief description of each of the four data sites.
Site 1. Site 1 is a large urban school district. Data on the Iowa Tests of Basic
Skills (ITBS) were obtained for Grades 3 through 8 in 1999. No information was
available on students’ ELL status; however, students were categorized as to
whether or not they were receiving bilingual services. Among the 36,065 students
in the Grade 3 population, 7,270 (about one in five) of these students were receiving bilingual services. In Grade 6 there were 28,313 students in the population, with 3,341 (11.8%) receiving bilingual services. In Grade 8 there were 25,406 students in the population, and 2,306—fewer than one in ten (9.1%)—were receiving bilingual services.

TABLE 1
Summary of Characteristics of the Four Data Sites

Characteristic                     Site 1                  Site 2         Site 3                 Site 4
Location type                      Large urban district    Entire state   Large urban district   Entire state
Total number of students, K–12     430,914                 5,844,111      approx. 200,000        187,969
Percent of ELL, K–12               15.6                    24.1           N/A                    6.9
Language designation               Bilingual/nonbilingual  ELL/non-ELL    ELL/non-ELL            ELL/non-ELL
Grades data available              1–8                     2–11           10, 11                 3, 6, 8, 10
Achievement tests used             ITBS                    SAT9           SAT9                   SAT9
Language proficiency tests used    N/A                     LAS            N/A                    LAS
Accommodation data                 N/A                     N/A            N/A                    N/A
Years data available               1999                    1998           1998                   1998

Note. ELL = English language learner; ITBS = Iowa Tests of Basic Skills; SAT9 = Stanford Achievement Test, 9th edition; LAS = Language Assessment Scales; N/A = not available.
Site 2. Site 2 is a state with a very large number of ELL students. There were
a total of 414,169 students in the Grade 2 population of the state, and 125,109
(30.2%) of these students were ELLs. In Grade 7 there were 349,581 students, of
whom 73,993 (21.2%) were ELL students. In Grade 9 there were 309,930 students,
and 57,991 (18.7%) were ELL students. Stanford Achievement Test, 9th edition
(Stanford 9) test data were obtained for all students in Grades 2 to 11 who were enrolled in the statewide public schools for the 1997–1998 academic year.
Site 3. Site 3 is an urban school district. Stanford 9 test data were available
for all students in Grades 10 and 11 for the 1997–1998 academic year. Accommodation data were obtained from the district and included both the type and number
of accommodations received. There were 12,919 students in the Grade 10 population, and 431 (3.3%) of these students were ELLs. In Grade 11 there were 9,803
students in the population, of whom 339 (3.5%) were ELL students.
Site 4. Site 4 is a state with a large number of ELL students. Access was provided to Stanford 9 summary test data for all students in Grades 3, 6, 8, and 10 who
were enrolled in the state’s public schools for the 1997–1998 academic year. There
were a total of 13,810 students in the Grade 3 population of the state, and 1,065
(7.7%) of these students were ELLs. In Grade 6 there were 12,998 students in the
population, of whom 813 (6.3%) were ELL students. In Grade 8 there were 12,400
students, and 807 (6.5%) were ELL students.
Design and Statistical Approach
To provide responses to the research questions outlined previously, data from the
four sites were analyzed. There were some differences in the type and format of the
data across the four sites; however, similar analyses were performed on the four
data sets, and the four sites were used as cross-validation samples.
The main hypothesis of this study focused on the possible impact of students’
language background on their performance. Therefore, the focus of the analyses
was on the comparison between the level of performance of ELL and non-ELL students. However, to develop an understanding about the role of other contributing
factors in the assessment of ELL students, comparisons were also made between
students with respect to other background variables, such as family income and
parent education. Students’ mean normal-curve equivalent (NCE) scores on different subscales of standardized achievement tests were compared across subgroups
using analysis of variance and t tests in a multiple-comparison framework.
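For illustration, the sketch below computes a pooled-variance t test from subgroup summary statistics of the kind reported later in Table 3; the function and variable names are ours, and the published analyses were of course run on the full student-level data files.

import math
from scipy import stats

def pooled_t_test(m1, s1, n1, m2, s2, n2):
    """Two-sample t test with pooled variance, computed from summary statistics."""
    df = n1 + n2 - 2
    pooled_var = ((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / df
    t = (m1 - m2) / math.sqrt(pooled_var * (1 / n1 + 1 / n2))
    p = 2 * stats.t.sf(abs(t), df)  # two-tailed p value
    return t, df, p

# Grade 2 reading NCE means for non-ELL vs. ELL students (values from Table 3);
# this reproduces the reported comparison of roughly t = 250, df = 350,556.
t, df, p = pooled_t_test(49.3, 19.7, 252_696, 31.6, 15.9, 97_862)
print(f"t = {t:.1f}, df = {df}, p = {p:.3g}")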
To examine the impact of language on the reliability of tests and on the level of
measurement error, internal consistency coefficients were computed for different
tests across categories by students’ ELL status and other background variables,
such as family income and parent education. This approach was based on the assumption that test items within each strand or subscale were measuring the same
construct; that is, they were unidimensional (see Cortina, 1993). To study the impact of language factors on the validity of tests, the structural equation approach
was used (Bollen, 1989). Through the application of multiple-group factor analyses, the internal structural relationship of test items and the relationships of test
scores with external criteria were examined.
It must be noted at this point that in some of our data sites, we had access to the
data for the entire student population. Therefore, application of inferential statistical techniques was not necessary. However, to be consistent with the analyses for
the other sites that provided data for subgroups of the population, as well as the entire population, we report statistical analyses for all four data sites. Findings from
these analyses are presented next.
RESULTS
Three main research questions guided the analyses and reporting of the results.
These questions were based on (a) issues concerning content-based performance
differences between ELLs and non-ELLs due to language factors, (b) the impact of
language factors on reliability of the tests, and (c) the impact of language factors on
the validity of the tests.
The results of analyses are reported in three sections: (a) performance differences between ELL and non-ELL students, (b) impact of language factors on reliability, and (c) validity.
Performance Differences Between ELL and Non-ELL
Students Due to Possible Impact of Language Factors
The results of analyses of data from the four sites consistently suggested that ELL
students performed substantially lower than non-ELL students. However, the performance gap between ELL and non-ELL students was not the same across the
content areas. In content areas with a higher level of language demand (e.g., reading and writing), the performance gap between ELL and non-ELL students was the
highest, whereas in content areas with less language demand (e.g., math and science), the performance gap was much smaller and in some cases was almost nonexistent (e.g., math computation).
To present a picture of the performance gap trend between ELL and non-ELL
students, we report the descriptive statistics on the site with the largest ELL population for two grades, an early elementary grade and a secondary school grade. To
conserve space, we have summarized the results of the descriptive analyses for the
other three sites.
Table 2 presents the number and percentage of students in Grades 2 and 9 in Site
2 who took the Stanford 9 tests in reading, math, and science, by student ELL and
disability status.
TABLE 2
Site 2 Grades 2 and 9 Stanford 9 Frequencies for Students

                                      Students With a Normal Curve Equivalent Score
                    All Students         Reading              Math                 Science
                    n         %          n         %          n         %          n         %
Grade 2
  SD only           17,506    4.2        15,051    4.1        16,720    4.2        NA        NA
  ELL only          120,480   29.1       97,862    26.5       114,519   28.4       NA        NA
  ELL and SD        4,629     1.1        3,537     1.0        4,221     1.0        NA        NA
  Non-ELL/Non-SD    271,554   65.6       252,696   68.5       267,397   66.4       NA        NA
  All students      414,169   100.0      369,146   100.0      402,857   100.0      NA        NA
Grade 9
  SD only           18,750    6.0        16,732    5.7        17,350    5.8        17,313    5.8
  ELL only          53,457    17.2       48,801    16.6       50,666    17.0       50,179    16.9
  ELL and SD        4,534     1.5        3,919     1.3        4,149     1.4        4,108     1.4
  Non-ELL/Non-SD    233,189   75.2       224,215   76.4       226,393   75.8       225,457   75.9
  All students      309,930   100.0      293,667   100.0      298,558   100.0      297,057   100.0

Note. SD = students with disabilities; ELL = English language learner.
As the data in Table 2 show, over 29% of all Grade 2 students at Site 2 who participated in Stanford 9 testing were ELL students. This percentage (29.1%) may not represent the actual percentage of ELL students at Site 2 because some ELL
students did not participate in the assessment due to language barriers. The percentage of ELL students who participated in the Stanford 9 testing was 17.2% for
Grade 9, which was substantially lower than for Grade 2 (29.1%). There were
slight differences between percentages of ELL students across the different content areas in this site.
The large number of ELL students in this site provided a unique opportunity
to perform analyses at the subgroup level to examine the impact of students’
background variables on academic achievement. Table 3 presents means, standard deviations, and numbers of students in reading, math, and science for Stanford 9 test scores by subgroups of students. In addition to data by students’ ELL
status, we included subgroup data by school lunch program (a proxy for family
income) and parent education, which were highly confounded with students’
ELL status.
In general, the results of analyses reported in Table 3 indicate that:
• ELL students performed substantially lower than non-ELL students, particularly in content areas with more language demand such as reading. For example,
the mean reading score for ELL students in Grade 2 was 31.6 (SD = 15.9, N =
97,862) compared with a mean of 49.3 (SD = 19.7, N = 252,696) for non-ELL students. This difference was significant beyond the .01 nominal level (t = 250.6, df =
350,556, p < .001).2
• The performance gap between ELL and non-ELL students was smaller in the
lower grades. For example, there was a 17.7-point difference between ELL and
non-ELL students in Grade 2 reading mean scores as compared with a 22-point
difference for students in Grade 9.
• The performance gap between ELL and non-ELL students decreased when
the level of language demand of test items decreased. For example, for Grade 9 students, the performance gap between ELL and non-ELL students in reading was 22
points, as compared to 15.4 points in math.
The results of analyses also show that other background variables affect test performance. Background variables such as family income (as measured by participation in the free/reduced price lunch program) and parent education may not be directly related to students' ELL status, but are confounded with it.
2Since we are working with the population of students in this site, no statistical comparison is needed.
Even a minor difference would be real. However, following tradition, we conducted some statistical significance testing. To control for multiple comparisons, we used the Benjamini–Hochberg False Discovery Rate procedure. For a description of this procedure see Benjamini and Hochberg (1994).
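As a point of reference, the Benjamini–Hochberg step-up procedure mentioned in the footnote can be sketched as follows; the p values in the example are placeholders, not values from the study.

import numpy as np

def benjamini_hochberg(p_values, q=0.05):
    """Benjamini-Hochberg step-up procedure; returns a boolean rejection mask."""
    p = np.asarray(p_values, dtype=float)
    m = p.size
    order = np.argsort(p)                   # ranks of the p values, smallest first
    critical = q * np.arange(1, m + 1) / m  # BH critical values i/m * q
    below = p[order] <= critical
    reject = np.zeros(m, dtype=bool)
    if below.any():
        k = np.nonzero(below)[0].max()      # largest rank i with p_(i) <= i/m * q
        reject[order[: k + 1]] = True       # reject hypotheses with ranks 1..i
    return reject

# Placeholder p values from a hypothetical family of subgroup comparisons
print(benjamini_hochberg([0.001, 0.012, 0.021, 0.040, 0.200]))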
TABLE 3
Site 2 Grades 2 and 9 Stanford 9 Subsection Scores

                                        Grade 2                           Grade 9
Subgroup                        Reading    Math       Science    Reading    Math       Science
ELL status
  ELL                    M      31.6       37.7       NA         24.0       38.1       34.9
                         SD     15.9       19.7       NA         12.5       15.2       12.8
                         N      97,862     114,519    NA         48,801     50,666     50,179
  Non-ELL                M      49.3       50.4       NA         46.0       53.5       49.2
                         SD     19.7       21.9       NA         18.0       19.4       16.1
                         N      252,696    267,397    NA         224,215    226,393    225,457
School lunch
  Free/reduced price     M      35.4       38.8       NA         32.0       42.5       39.4
                         SD     17.5       20.1       NA         16.2       16.4       14.3
                         N      106,999    121,461    NA         56,499     57,961     57,553
  No free/reduced price  M      47.0       48.5       NA         42.6       50.7       47.0
                         SD     20.6       22.4       NA         19.7       20.1       17.0
                         N      304,092    327,409    NA         338,285    343,480    341,663
Parent education
  Not high school grad   M      30.1       34.7       NA         29.2       39.6       37.3
                         SD     15.3       19.1       NA         15.0       15.1       13.5
                         N      54,855     63,960     NA         69,934     71,697     71,183
  High school graduate   M      40.5       42.6       NA         35.6       44.1       41.7
                         SD     18.1       20.3       NA         17.0       17.1       14.9
                         N      93,031     101,276    NA         71,986     73,187     72,810
  Some college           M      48.8       50.3       NA         44.6       51.6       48.2
                         SD     18.6       20.6       NA         17.2       18.1       15.4
                         N      66,530     70,381     NA         70,364     70,971     70,687
  College graduate       M      56.5       58.4       NA         48.1       56.3       51.5
                         SD     18.5       20.6       NA         18.5       19.6       16.4
                         N      54,391     56,451     NA         87,654     88,241     87,956
  Post graduate studies  M      62.1       64.1       NA         57.6       65.8       58.8
                         SD     18.7       20.4       NA         19.6       20.7       17.1
                         N      25,571     26,367     NA         34,987     35,087     35,022

Note. ELL = English language learner.
Students who did not participate in the free/reduced price lunch program had
higher mean scores in all subject areas than those who did participate in the program. For example, the average NCE score for reading for Grade 2 students who
participated in the free/reduced price lunch program was 35.4 (SD = 17.5, N =
106,999), as compared with an average score of 47.0 (SD = 20.6, N = 304,092) for
those who did not participate in the program. The difference was statistically significant (t = 177.8, df = 411,089, p < .001). For Grade 9 students participating in the
free/reduced price lunch program, the average NCE score for reading was 32.0 (SD
= 16.2, N = 56,499), as compared with an average of 42.6 (SD = 19.7, N = 338,285)
for those who did not participate in the program. The difference between the performances of the two groups was statistically significant (t = 139.2, df = 394,755, p
< .001).
The results also indicate that parent education has a substantial impact on the
Stanford 9 test scores. For example, the average NCE score for reading for Grade 2
students of parents with low education (not high school graduate) was 30.1 (SD =
15.3, N = 54,855), as compared with an average of 62.1 (SD = 18.7, N = 25,571) for
students of parents with high education (post graduate education). This difference
was statistically significant (t = 238.8, df = 80,424, p < .001). For Grade 9 students,
the average NCE score for reading for the low parent education category was 29.2
(SD = 15.0, N = 69,934). For students with parents in the high education category,
the average was 57.6 (SD = 19.6, N = 34,987). This difference was statistically significant (t = 238.4, df = 104,919, p < .001). The results of our analyses also suggest
that family income and parent education are confounded with students’ ELL status. Table 4 presents frequencies and percentages of family income (free/reduced
price lunch program) and parent education by ELL status.
TABLE 4
Site 2 Free/Reduced Price Lunch Status and Parent Education by ELL Status

           No Free/Reduced Price Lunch – Parent Education     Free/Reduced Price Lunch – Parent Education
           Not HS Grad    Post Grad    Total                  Not HS Grad    Post Grad    Total           Grand Total
Non-ELL    20,738         22,410       43,148                 9,763          980          10,743          53,891
           26.2%          28.2%        54.4%                  12.3%          1.3%         13.6%           68.0%
ELL        15,384         976          16,360                 8,648          358          9,006           25,366
           19.4%          1.2%         20.6%                  10.9%          0.5%         11.4%           32.0%
Total      36,122         23,386       59,508                 18,411         1,338        19,749          79,257
           45.6%          29.5%        75.1%                  23.2%          1.7%         24.9%           100.0%

Note. Percentages reported are based on the total number of students. Not HS Grad = not high school graduate; Post Grad = post graduate education; ELL = English language learner.
The chi-square testing the association among these variables was significant beyond the .01 nominal level (χ² = 12,096.72, p < .001), indicating confounding. A squared contingency coefficient of .132 provides a rough estimate of the proportion of common variance (or confounding) among the three variables. These results suggest that a
greater percentage of ELL students are from families with lower income and lower
education. For example, 95% of ELL students had parents with low education,
whereas only 57% of non-ELL students had parents with low education. Thirty-six
percent of all ELL students participated in the free/reduced price lunch program as
compared with only 20% of non-ELL students.
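For readers who want the arithmetic, the sketch below shows how a chi-square and the squared contingency coefficient, chi-square / (chi-square + N), can be computed from cross-tabulated counts such as those in Table 4. The exact cross-tabulation underlying the published statistics (e.g., how many parent-education categories entered it) is not reported, so this example uses only the four cells shown in Table 4 and will not reproduce the published values exactly.

import numpy as np
from scipy.stats import chi2_contingency

# ELL status crossed with lunch program and parent education, using the four
# non-marginal cells of Table 4 (an illustrative, not exact, reconstruction).
# Columns: no lunch/not HS grad, no lunch/post grad, lunch/not HS grad, lunch/post grad
counts = np.array([
    [20_738, 22_410, 9_763, 980],   # non-ELL
    [15_384,    976, 8_648, 358],   # ELL
])

chi2, p, dof, expected = chi2_contingency(counts)
n = counts.sum()
squared_contingency = chi2 / (chi2 + n)  # rough proportion of shared variance
print(f"chi2 = {chi2:.1f}, df = {dof}, p = {p:.3g}, C^2 = {squared_contingency:.3f}")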
However, the results of analyses in this study suggest that among these background variables, language factors have a much greater impact on assessment outcomes than family income or parent education.
To make a clearer comparison between the performance of subgroups of
students (e.g., by ELL status, family income, and parent education) in different
content areas, a Disparity Index (DI) was computed. For example, to compute DI
by students’ ELL status, the mean score for ELL students was subtracted from the
mean for non-ELL students. The difference was then divided by the mean for ELL
students, and the result was multiplied by 100. Table 5 shows the DI by student
ELL status, as well as by school lunch program and parent education, for Grades 2
and 7, for Site 2, in four content areas.3 Similar results were obtained for other
grades (see Abedi & Leon, 1999).
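In code, the DI described above is simply the percentage by which the reference group's mean exceeds the comparison group's mean; a minimal sketch, using the Grade 2 reading means from Table 3:

def disparity_index(reference_mean, comparison_mean):
    """Disparity Index: percentage by which the reference-group mean exceeds the comparison-group mean."""
    return (reference_mean - comparison_mean) / comparison_mean * 100

# Grade 2 reading NCE means from Table 3: non-ELL = 49.3, ELL = 31.6.
# Prints about 56.0; Table 5 reports 55.8, computed from unrounded means.
print(round(disparity_index(49.3, 31.6), 1))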
As the data in Table 5 show, the average DI for ELL status over reading, math,
language, and spelling for Grade 2 was 48.1 (i.e., over all four subject areas,
non-ELL students outperformed ELL students by 48.1%). For Grade 7, the DI was
74.8. We also computed DI by school lunch program and parent education. The DI
for school lunch program for Grade 2 students was 29.6. That is, students who did
not participate in the school lunch program outperformed students who participated in the program by 29.6%. For Grade 7, the DI was 35.2. We also compared
the performance of students with the lowest level of parent education with that of students at the highest level of parent education. The DI for parent education for Grade 2
was 99.3; that is, children of parents with the highest level of education (post graduate education) outperformed children of parents with lower levels of education
(“no education” or “elementary level education”) by 99.3%. The DI for Grade 7 by
parent education was 83.5.
By comparing the math DI with the DIs of the language-related subscales (reading, language, and spelling), we can see the impact of language on students’ performance. The DIs for all categories (ELL status, school lunch, and parent education)
were smaller for math and larger for reading. For example, for Grade 2 students,
the DI (non-ELL vs. ELL) was 55.8 in reading (non-ELL students outperformed
3We have presented the results for Grade 7 rather than Grade 9 to cover a larger range of students in
different grades.
TABLE 5
Site 2 Grades 2 and 7 Disparity Indexes (DI) by ELL Status, Free/Reduced Price Lunch, and Parent Education

                        Reading   Math    Language   Spelling   Average   Difference
Grade 2
  ELL/Non-ELL           55.8      33.5    60.2       42.8       48.1      19.4
  Free/reduced lunch    32.7      25.1    35.2       25.3       29.6      6.0
  Parent education      106.3     84.9    118.5      87.5       99.3      15.8
Grade 7
  ELL/Non-ELL           96.9      50.4    70.7       81.1       74.8      32.5
  Free/reduced lunch    47.2      29.5    32.9       31.1       35.2      7.7
  Parent education      98.4      76.2    79.0       80.5       83.5      9.8

Note. ELL = English language learner.
ELL students by 55.8%), 60.2 in language, and 42.8 in spelling, as compared with
a DI of 33.5 in math. For Grade 7 students, the DIs (non-ELL vs. ELL) were 96.9
for reading, 70.7 for language, and 81.1 for spelling, compared to 50.4 for math.
The DIs for school lunch program (nonparticipant vs. participant in free/reduced
price lunch) for Grade 2 students were 32.7 for reading, 35.2 for language, and
25.3 for spelling, as compared with 25.1 for math.
However, the difference between DIs for math and language-related subscales
was largest across the ELL categories. In Table 5, we included these DI differences
under the “Difference” column. The DI difference by ELL status4 was 19.4 for Grade
2 and 32.5 for Grade 7, as compared with the school lunch program DI differences of
6.0 and 7.7, respectively, and the parent-education DI differences of 15.8 and 9.8, respectively. Once again, these data suggest that language factors may have a more
profound impact on the assessment outcome than other background variables, such
as family income and parent education, particularly for ELL students.
To shed light on the impact of language factors on assessment, analyses by math subscales were conducted and are presented next. Standardized achievement tests such as the Stanford 9 and ITBS include different math subscales that
have varying degrees of language demand. These subscales range from testing
math analytical skills, concepts and estimation, and problem solving with a relatively higher level of language demand to testing math computation with a minimal level of language demand. If the hypothesis concerning the impact of language
on content-based performance is tenable, then the performance difference between
ELL and non-ELL students should be at the minimum level in content-based tests
with a minimal level of language demand, such as math computation. This was exactly what the results of our analyses showed.
4This “DI difference” was computed as follows: The three language-related DIs (reading, language, and spelling) were averaged; the DI for math was then subtracted from that average.
TABLE 6
Site 1 Disparity Indexes of Nonbilingual Over Bilingual Students on Math and Reading

                                   Math Concepts     Math Problem Solving       Math
Test Level    Primary Grade        and Estimation    and Data Interpretation    Computation    Reading
9             3                    5.3               11.1                       –3.1           23.4
10            4                    26.9              19.3                       6.9            30.1
11            5                    36.5              32.7                       12.6           41.1
12            6                    27.5              30.9                       11.8           43.7
13            7                    39.4              32.7                       12.9           39.6
14            8                    30.5              31.7                       12.9           42.7
Average of all levels/grades       27.7              26.4                       9.0            36.8
Data obtained from some of the sites in this study included separate subscale scores, including math computation. Table 6 presents the DIs for bilingual students
compared with nonbilingual5 students by level and grade for math concepts and estimation, math problem solving, math computation, and reading in Site 1.
The results of the DI analyses shown in Table 6 present several interesting patterns:
1. The DIs indicated that the nonbilingual students generally outperformed the
bilingual students. However, the magnitude of the DIs depends, to a large extent,
on the level of language demand of the test items. The DI for test items with less
language demand was smaller than for other items. For example, in Grade 3, bilingual students performed better on math computation, which has the lowest level of
language demand.
2. Major differences between bilingual and nonbilingual students were found
for students in Grades 3 and above. There seemed to be a positive relationship between the mean score differences and grade level, in that the difference increased
as the grade level increased, up to Grade 5. Starting with Grade 6, the DI was still
positive, but the rate of increase was not as systematic as before. For example, in
Grade 3, nonbilingual over bilingual students had DIs of 5.3 in math concepts and
estimation, 11.1 in math problem solving and data interpretation, –3.1 in math
computation, and 23.4 in reading. In Grade 4, these indexes increased to 26.9 for
math concepts and estimation, 19.3 for math problem solving and data interpretation, 6.9 for math computation, and 30.1 for reading. The indexes further increased
in Grade 5 to 36.5 for math concepts and estimation, 32.7 for math problem solving and data interpretation, 12.6 for math computation, and 41.1 for reading.
5This site did not provide information on students’ ELL status. Instead, we used students’ bilingual
status as a proxy for ELL status.
3. The largest gap between bilingual and nonbilingual students was in reading.
The next largest gaps were in the content areas that appear to have more language
demand. For example, the math concepts and estimation and the math problem
solving and data interpretation subsections seem to have more language demand
than the math computation subsection. Correspondingly, the DIs were higher for
those subsections. The average DI for Grades 3 through 8 was 27.7 for math concepts and estimation. That is, the mean of the nonbilingual group in math concepts
and estimation was 27.7% higher than the bilingual group mean. A similar trend
was observed in math problem solving and data interpretation; the average DI for
this subsection was 26.4. The average DI for math computation, however, was 9.0,
which was substantially lower than the corresponding DIs for the other two math
subsections. These results were consistent across the different data sites.
Table 7 reports the DIs, non-ELL versus ELL students, for reading, math total,
and the math calculation and math analytical subscales for Grades 3, 6, and 8 at
Site 4. Once again, the results of analyses clearly suggest the impact of language
factors on students’ performance, particularly in areas with more language demand. For example, in reading, ELL students had the largest performance gap with
non-ELL students. The average DI for reading across the three grades was 86.7, as
compared with the average performance gap of 33.4 for math total. Among the
math subscale scores, those with less language demand showed a smaller performance gap. The average DI was 41.0 for math analytical and 20.1 for math calculation. The math calculation DI was substantially less than the DI for reading (86.7)
and for math analytical (41.0). However, it must be noted that language demand and cognitive complexity of test items may also be confounded.
That is, items in the math calculation subscale may not only have less language demand, but they may also be less cognitively demanding than other math subscales,
such as math problem solving. This is a caveat in our discussion on the impact of
language on content-based assessments.
TABLE 7
Site 4 Disparity Indexes of Non-ELL Versus ELL Students in Reading and Subscales of Math

Grade                             Reading   Math Total   Math Calculation   Math Analytical
3                                 53.4      25.8         12.9               32.8
6                                 81.6      37.6         22.2               46.1
8                                 125.2     36.9         25.2               44.0
Average over the three grades     86.7      33.4         20.1               41.0

Note. ELL = English language learner.
Possible Impact of Language Factors
on Reliability of Assessments
In classical test theory, reliability is defined as the ratio of the true-score variance (σ²_T) to the observed-score variance (σ²_X) (Allen & Yen, 1979). The observed-score variance (σ²_X) is the sum of two components, the true-score variance (σ²_T) and the error variance (σ²_E). In a perfectly reliable test, the error variance (σ²_E) would be zero; therefore, the true-score variance (σ²_T) would equal the observed-score variance.
However, in measurement with human participants there is always an error
component, whether large or small, which is referred to in classical test theory as
the measurement error (see Allen & Yen, 1979; Linn & Gronlund, 1995; Salvia &
Ysseldyke, 1998). Appropriate evaluation of the measurement error is important in
any type of assessment, whether in a traditional, multiple-choice approach or in
performance-based assessments (Linn, 1995; see also AERA, APA, & NCME,
1999). Many different sources (e.g., occasion, task, test administration conditions)
may contribute to measurement error in traditional, closed-ended assessment instruments. In addition to these sources, the reliability of performance assessment
measures suffers from yet another source of measurement error, variation in scoring of open-ended items. More important, in the assessment of ELL students, language factors may be another serious source of measurement error, due to unnecessary linguistic complexity in content-based areas. In the classical approach to
estimating reliability of assessment tools, the level of contribution of different
sources to measurement error may be indeterminable. Through the generalizability approach, one would be able to determine the extent of the variance each
individual source contributes (such as occasion, tasks, items, scorer, and language
factors) to the overall measurement error (see Cronbach, Gleser, Nanda, &
Rajaratnam, 1972; Shavelson & Webb, 1991).
To estimate reliability of the standardized achievement tests used in this study
and to investigate their measurement error, we considered different approaches.
Since parallel forms or test–retest data were not available, we decided to use an
internal consistency approach. The main limitation with the internal consistency
approach, however, is the assumption of unidimensionality. For example, the literature has indicated that the alpha coefficient, which is a measure of internal
consistency, is extremely sensitive to multidimensionality of test items (see, e.g.,
Abedi, 1996; Cortina, 1993). However, because the test items within each content area are assumed to measure the same construct, we believe this approach
may be appropriate for estimating reliability of the achievement tests used in this
study.
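As a concrete illustration of this internal consistency approach, the sketch below computes coefficient alpha from an examinees-by-items score matrix; in the study this computation was carried out separately for each subscale and for each ELL and non-ELL group, and the scores shown here are only a toy example.

import numpy as np

def cronbach_alpha(item_scores):
    """Coefficient alpha for an array of shape (examinees, items), e.g., 0/1 item scores."""
    x = np.asarray(item_scores, dtype=float)
    k = x.shape[1]                                   # number of items
    sum_item_variances = x.var(axis=0, ddof=1).sum()
    total_score_variance = x.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - sum_item_variances / total_score_variance)

# Tiny illustrative matrix: 6 examinees by 4 dichotomously scored items
scores = [[1, 1, 1, 0],
          [1, 0, 1, 1],
          [0, 0, 1, 0],
          [1, 1, 1, 1],
          [0, 0, 0, 0],
          [1, 1, 0, 1]]
print(round(cronbach_alpha(scores), 3))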
Because different data sites used different tests, and because within the individual sites, different test forms were used in different grades, these analyses were
performed separately for each site and each grade. Within each grade, we con-
246
ABEDI
ducted the internal consistency analyses separately for ELL and non-ELL students. The results obtained from analyses at different sites were consistent. Due to
space limitations, only the results from Site 2, the site with the largest number of
students, are presented. A complete report of the results of analyses can be found in
Abedi et al. (2001).
Language (and perhaps other variables, such as socioeconomic status and opportunity to learn) may cause a restriction of range in the score distribution that
may result in lower internal consistency.
Table 8 presents reliability (internal consistency) coefficients for the Stanford 9
data for Grade 2 students in Site 2. As the data in Table 8 show, non-ELL students
had higher coefficients than the ELL students. There was also a slight difference
between the alpha coefficients across the free/reduced price lunch categories.
Nonparticipants in the free/reduced price lunch program had slightly higher alphas
than the participating students. For example, the average reliability for the reading
subscale for the nonparticipant group was .913, as compared with an average reliability of .893 for the participant group (a difference of .021). For ELL students the average reliability was .856, as compared with an average reliability of .914 for non-ELL students, a difference of .058 (non-ELL here refers to English-only students). The results of our analyses, which are consistent across the different sites, indicate that
the difference in internal consistency coefficients between ELL and non-ELL students is significantly larger than the difference between these coefficients across
the free/reduced price lunch and parent education categories.
Table 9 presents the reliability (internal consistency) coefficients for Grade 9
students. Comparing the internal consistency coefficients for Grade 9 students
with those for Grade 2 students (reported in Table 8) once again revealed that reliability coefficients for ELL students were lower than the coefficients for
non-ELL students. This was particularly true for students in higher grades,
where language has more impact on performance. In both Grade 2 and Grade 9,
reliabilities were lower for ELL students. However, in Grade 9, the difference
between reliability coefficients for ELL and non-ELL students was larger. For
example, for Grade 2, the difference between reliability coefficients for ELL and
non-ELL students was .058 in reading, .013 in math, and .062 in language, as
compared with the ELL/non-ELL reliability difference of .109 for reading, .096
for math, and .120 for language in Grade 9. The difference between the overall
reliability coefficient of ELL students and English-only students for Grade 9 was
.167, which was substantially higher than the respective difference of .043 in
Grade 2. Thus, the reliability gap between ELL and non-ELL students increases with grade level. This may be due to the use of more complex
language structures in higher grades.
The results of these analyses strongly suggest that students’ language background factors have a profound effect on their assessment outcomes, above and beyond other background characteristics such as family income and parent education.
TABLE 8
Site 2 Grade 2 Stanford 9 Subscale Reliabilities

                           Non-ELL Students: Free Lunch Participation
Subscale (No. of Items)    Yes            No             English Only    FEP            RFEP          ELL
Reading                    N = 58,485     N = 209,262    N = 34,505      N = 29,771     N = 3,471     N = 101,399
  Word study (48)          .895           .917           .916            .915           .920          .865
  Vocabulary (30)          .897           .913           .915            .906           .907          .857
  Reading comp. (30)       .888           .908           .910            .900           .899          .846
  Average reliability      .893           .913           .914            .907           .909          .856
Math                       N = 63,146     N = 220,971    N = 249,000     N = 31,444     N = 3,673     N = 118,740
  Problem solving (45)     .881           .893           .896            .886           .890          .871
  Procedures (28)          .892           .892           .891            .887           .895          .890
  Average reliability      .887           .893           .894            .887           .893          .881
Language                   N = 62,028     N = 218,003    N = 245,384     N = 31,035     N = 3,612     N = 111,752
  Total (44)               .866           .890           .891            .883           .892          .829

Note. ELL = English language learner; FEP = fluent English proficient; RFEP = redesignated fluent English proficient.
Validity
Research has indicated that complex language in content-based assessments for
nonnative speakers of English may reduce the validity and reliability of inferences drawn about students’ content-based knowledge. For example, results
from earlier CRESST language background studies (Abedi & Lord, 2001;
Abedi, Lord, & Hofstetter, 1998; Abedi et al., 2000; Abedi, Lord, & Plummer,
1997) provided support for a strong link between language factors and content-based performance. The linguistic factors in content-based assessments
(such as math and science) may be considered a source of construct-irrelevant
variance because they are not conceptually related to the content being assessed
(Messick, 1994):
With respect to distortion of task performance, some aspects of the task may require
skills or other attributes having nothing to do with the focal constructs in question, so
that deficiencies in the construct-irrelevant skills might prevent some students from
demonstrating the focal competencies. (p. 14)
To examine the impact of students’ language background on the validity of standardized achievement tests, analyses were performed to compare criterion validity
coefficients for ELL and non-ELL students and to examine differences between
the structural relationship of ELL and non-ELL groups.
Linguistic complexity of test items, as a possible source of construct-irrelevant
variance, may be a threat to the validity of achievement tests, because it could be a
source of measurement error in estimating the reliability of the tests. Intercorrelation between individual test items, the correlation between items and total
test score (the internal validity coefficient), and the correlation between item score
and total test score with the external criteria (the students’ other achievement data)
were computed. A significant difference across the ELL categories in the relationships between test items, between individual items and total test scores (internal
validity), and between overall test scores and external criteria may be indicative of
the impact of language on the validity of tests. Since language factors should not
influence the performance of non-ELL students, these relationships may be stronger for non-ELL students.
To examine the hypothesis regarding differences between ELL and non-ELL
students on the structural relationship of the test items, a series of structural equation models were created for Site 2 and Site 3 data. Fit indexes were compared
across ELL and non-ELL groups. The results generally indicated that the relationships between individual items, items with the total test score, and items with the
external criteria were higher for non-ELL students than for ELL students.
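The fit indexes reported in Tables 10 and 11 (NFI, NNFI, CFI) are standard functions of the model and independence-model ("null") chi-squares and degrees of freedom. A sketch of those formulas follows; the null-model chi-square in the example is hypothetical, since it is not reported in the article.

def incremental_fit_indexes(chi2_model, df_model, chi2_null, df_null):
    """Normed Fit Index, Nonnormed Fit Index (Tucker-Lewis), and Comparative Fit Index."""
    nfi = (chi2_null - chi2_model) / chi2_null
    nnfi = (chi2_null / df_null - chi2_model / df_model) / (chi2_null / df_null - 1)
    cfi = 1 - max(chi2_model - df_model, 0) / max(chi2_null - df_null, chi2_model - df_model, 0)
    return nfi, nnfi, cfi

# Model chi-square of 488 on 51 df (the non-ELL Sample 1 fit in Table 10) paired
# with a hypothetical independence-model chi-square of 150,000 on 66 df.
print(incremental_fit_indexes(488, 51, 150_000, 66))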
FIGURE 1. Latent variable model for reading, science, and math.

In creating the structural models, test items in each content area (e.g., reading, science, and math) were grouped as “parcels.” Figure 1 presents item parcels and latent variables for reading, math, and science for Site 2. As Figure 1
shows, the 54 reading items were grouped into four parcels. Each parcel was
constructed to systematically contain items with three degrees of item difficulty:
easy, difficult, and moderately difficult items (for a description of the item parcels and ways to create them, see Cattell & Burdsal, 1975). A reading latent variable was constructed based on these four parcels.
Similarly, item parcels and latent variables for math and science were created
from the 48 math items and 40 science items by the same process. The correlations
between the reading, math and science latent variables were estimated. Models
were tested on randomly selected subsamples to demonstrate the cross-validation
of the results.
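One common way to build such difficulty-balanced parcels is to rank the items by difficulty (proportion correct) and deal them out across parcels in rotation, so that each parcel receives a mix of easy, moderately difficult, and difficult items. The article does not spell out the exact assignment rule used, so the sketch below is only illustrative.

import numpy as np

def build_parcels(item_scores, n_parcels=4):
    """Group items into difficulty-balanced parcels.

    item_scores: array of shape (examinees, items) holding 0/1 item scores.
    Returns parcel total scores of shape (examinees, n_parcels).
    """
    x = np.asarray(item_scores, dtype=float)
    difficulty = x.mean(axis=0)                     # proportion correct for each item
    ranked = np.argsort(difficulty)                 # items ordered from hardest to easiest
    parcels = np.zeros((x.shape[0], n_parcels))
    for rank, item in enumerate(ranked):
        parcels[:, rank % n_parcels] += x[:, item]  # deal items out in rotation
    return parcels

# Example: 54 simulated reading items for 1,000 examinees grouped into 4 parcels
rng = np.random.default_rng(0)
simulated = (rng.random((1000, 54)) < rng.uniform(0.3, 0.9, size=54)).astype(int)
print(build_parcels(simulated).shape)  # (1000, 4)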
Table 10 shows the results of the structural models for Grade 9 at Site 2. Correlations of item parcels with the latent factors were consistently lower for ELL students than they were for non-ELL students. This finding was true for all parcels regardless of which grade or which sample of the population was tested. For
example, for Grade 9 ELL students, the correlations for the four reading parcels
ranged from a low of .719 to a high of .779 across the two samples (see Table 10).
In comparison, for non-ELL students, the correlations for the four reading parcels
ranged from a low of .832 to a high of .858 across the two samples. The item parcel
correlations were also larger for non-ELL students than for ELL students in math
and science. Again, these results were consistent across the different samples.
TABLE 10
Site 2 Grade 9 Stanford 9 Reading, Math, and Science Structural Modeling Results (df = 51)

                              Non-ELL (N = 22,782)        ELL (N = 4,872)
                              Sample 1     Sample 2       Sample 1     Sample 2
Factor loadings
  Reading comprehension
    Parcel 1                  .852         .853           .723         .719
    Parcel 2                  .841         .844           .734         .739
    Parcel 3                  .835         .832           .766         .779
    Parcel 4                  .858         .858           .763         .760
  Math factor
    Parcel 1                  .818         .821           .704         .699
    Parcel 2                  .862         .860           .770         .789
    Parcel 3                  .843         .843           .713         .733
    Parcel 4                  .797         .796           .657         .674
  Science factor
    Parcel 1                  .678         .681           .468         .477
    Parcel 2                  .679         .676           .534         .531
    Parcel 3                  .739         .733           .544         .532
    Parcel 4                  .734         .736           .617         .614
Factor correlation
  Reading vs. Math            .782         .779           .645         .674
  Reading vs. Science         .837         .839           .806         .802
  Science vs. Math            .870         .864           .796         .789
Goodness of fit
  Chi-square                  488          446            152          158
  NFI                         .997         .998           .992         .992
  NNFI                        .997         .997           .993         .993
  CFI                         .998         .998           .995         .995

Note. There was significant invariance for all constraints tested with the multiple group model (Non-ELL/ELL). ELL = English language learner; NFI = Normed Fit Index; NNFI = Nonnormed Fit Index; CFI = Comparative Fit Index.
The correlations between the latent factors were also larger for non-ELL students than they were for ELL students. This gap in latent factor correlations between non-ELL and ELL students was especially large when there was more language demand. For example, in Sample 1 for Grade 9, the correlation between
latent factors for math and reading for non-ELL students was .782 compared to just
.645 for ELL students. When comparing the latent factor correlations between
reading and science from the same population, the correlation was still larger for
non-ELL students (.837) than for ELL students (.806), but the gap between the correlations was smaller. This was likely due to language demand differences. Multiple group structural models were used to test whether the differences between
non-ELL and ELL students mentioned previously were significant. There were
significant differences for all constraints tested at the p < .05 level.
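Constraint tests of this kind are usually carried out as chi-square difference (likelihood ratio) tests between a model in which a parameter is held equal across the ELL and non-ELL groups and one in which it is freed. A minimal sketch of that comparison is shown below; the constrained-model values are hypothetical, since the individual constrained chi-squares are not itemized in the article.

from scipy.stats import chi2

def chi_square_difference_test(chi2_constrained, df_constrained, chi2_free, df_free):
    """Likelihood-ratio test of an equality constraint in a multiple-group model."""
    delta_chi2 = chi2_constrained - chi2_free
    delta_df = df_constrained - df_free
    p = chi2.sf(delta_chi2, delta_df)  # upper-tail chi-square probability
    return delta_chi2, delta_df, p

# Hypothetical example: holding one factor loading equal across groups raises the
# chi-square from 152.0 (df = 51, the ELL Sample 1 fit in Table 10) to 161.2 (df = 52).
print(chi_square_difference_test(161.2, 52, 152.0, 51))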
The results of simple structure confirmatory factor analyses also showed differences in factor loadings and factor correlations between the ELL and non-ELL
groups for the Site 3 data. The hypotheses of invariance of factor loadings and factor correlations between the ELL and non-ELL groups were tested. Specifically,
we tested the following null hypotheses:
• Correlations between parcel scores and a reading latent variable are the same
for the ELL and non-ELL groups.
• Correlations between parcel scores and a science latent variable are the same
for the ELL and non-ELL groups.
• Correlations between parcel scores and a math latent variable are the same
for the ELL and non-ELL groups.
• Correlations between content-based latent variables are the same for the ELL
and non-ELL groups.
Table 11 summarizes the results of structural models for reading and math tests
for Site 3 students in Grade 10. Table 11 includes fit indexes for the ELL and
non-ELL groups, correlations between parcel scores and content-based latent variables (factor loadings), and correlations between latent variables. Hypotheses regarding the invariance of factor loadings and factor correlations between ELL and
non-ELL groups were tested. Significant differences between the ELL and
non-ELL groups at or below .05 nominal levels were identified. These differences
are indicated by an asterisk next to each of the constraints. There were several significant differences between the ELL and non-ELL groups on the correlations between parcel scores and latent variables. For example, on the math subscale, differences in factor loadings between the ELL and non-ELL groups on Parcels 2 and 3
were significant. Table 11 also shows a significant difference between the ELL and
non-ELL groups on the correlation between reading and math latent variables.
These results indicate that:
1. Findings from the two cross-validation samples are very similar and provide evidence on the consistency of the results.
2. Structural models show a better fit for non-ELL than for ELL students.
3. Correlations between parcel scores and the content-based latent variables
are generally lower for ELL students.
4. Correlations between the content-based latent variables are lower for ELL
students.
The results suggest that language factors may be a source of construct-irrelevant variance in the assessment of ELL students.
TABLE 11
Site 3 Grade 10 Stanford 9 Reading and Math Structural Modeling Results
(Parcels Ordered by Item Number)

Goodness of fit         Model 1 (df = 75)    Model 2 (df = 74)
  Chi-square            2938                 2019
  NFI                   .943                 .916
  NNFI                  .933                 .902
  CFI                   .945                 .918

                                Model 1                          Model 2
                        Non-ELL          ELL             Non-ELL          ELL
                        (N = 8,947)      (N = 303)       (N = 8,947)      (N = 303)
Factor loadings
  Reading
    Parcel 1            .677             .735            .679             .752
    Parcel 2            .683             .659            .684             .667
    Parcel 3            .738             .623            .739             .592
    Parcel 4            .826             .724            .824             .722
    Parcel 5            .693             .389            .690             .330
  Math
    Parcel 1            .683             .763            .685             .788
    Parcel 2            .612             .702*           .613             .716*
    Parcel 3            .695             .730*           .696             .685*
    Parcel 4            .816             .774            .812             .774
    Parcel 5            .723             .471            .720             .391
Factor correlation
  Reading vs. Math      .719             .624*           .723             .622*

Note. NFI = Normed Fit Index; NNFI = Nonnormed Fit Index; CFI = Comparative Fit Index; ELL = English language learner.
*Significant difference between the ELL and non-ELL groups at or below the .05 level.
DISCUSSION
The purpose of this study was to examine the impact of students’ language background on the outcome of their assessments. Three major research questions
guided the analyses and reporting and will be the basis for discussion of the results
of this study:
1. Could the performance difference between ELL and non-ELL students be
partly explained by language factors in the assessment?
2. Could the linguistic complexity of test items as a possible source of measurement error influence the reliability of the assessment?
3. Could the linguistic complexity of test items as a possible source of construct-irrelevant variance influence the validity of the assessment?
In response to Question 1, results from the analyses of data from several locations nationwide indicated that students’ assessment results might be confounded
with language background variables. Descriptive statistics comparing ELL and
non-ELL student performance by subgroup and across different content areas revealed major differences between the performance of the two groups. Included in
the descriptive statistics section was a DI (the disparity of performance of
non-ELL students over that of ELL students). This index showed major differences in performance between students with different language backgrounds. The
higher the level of English language complexity in the assessment tool, the greater
the DI (the performance gap between ELL and non-ELL students).
Access to student-level and item-level data from the sites provided an opportunity
to conduct analyses on student subgroups that were formed based on their background variables, including language background. The exceptionally large numbers
of students in some subgroups enabled us to conduct cross-validation studies to demonstrate consistency of results over different sites and grade levels. The high degree
of consistency assured us of the validity and interpretability of the results.
Descriptive analyses revealed that ELL students generally perform lower than
non-ELL students on reading, science, and math subtests. The level of impact of
language proficiency on the assessment of ELL students is greater in content areas
with a higher level of language demand—a strong indication of the impact of English language proficiency on assessment. For example, analyses show that ELL
and non-ELL students had the greatest performance differences in reading, and the
least performance differences in math, where language has less of an impact on the
assessment.
In response to Question 2, the results of our analyses indicated that test items for
ELL students, particularly ELL students at the lower end of the English proficiency spectrum, suffer from lower internal consistency. That is, the language
background of students may add another dimension to the assessment in content-based areas. Thus, we speculate that language might act as a source of measurement error in such areas. It is therefore imperative that test publishers examine
the impact of language factors on test reliability and publish reliability indexes
separately for the ELL subpopulation.
To shed light on the issues concerning the impact of language factors on validity
(Question 3), concurrent validity of standardized achievement tests (Stanford 9 and
ITBS) was examined using a latent-variable modeling approach. Standardized
achievement latent variables were correlated with the external-criterion latent variables. The results suggest that (a) there is a strong correlation between the standardized achievement and external-criterion latent variables, (b) this relationship is
stronger when latent variables rather than measured variables are used, and (c) the
correlation between standardized achievement and external-criterion latent variables is significantly larger for the non-ELL population than for the ELL population.
We speculate that the lower correlation between the two latent variables for the ELL group
stems from language factors. That is, language factors act as construct-irrelevant
sources (Messick, 1994).
Analyses of the structural relationships between individual items and between
items with the total test scores revealed a major difference between ELL and
non-ELL students. Structural models for ELL students demonstrated lower statistical fit. Further, the factor loadings were generally lower for ELL students,
and the correlations between the latent content-based variables were weaker for
ELL students.
The results of this study suggest that ELL test performance may be explained
partly by language factors. That is, linguistic complexity of test items unrelated to
the content being assessed may at least be partly responsible for the performance
gap between ELL and non-ELL students. Based on the findings of this study, we
recommend that (a) the issues concerning the impact of language factors on the assessment of ELL students should be examined further; (b) psychometric characteristics of assessment tools should be carefully reviewed for use with ELL students;
and (c) in assessing ELL students, student language background variables should
always be included, and efforts should be made to reduce confounding effects of
language background on the assessment outcome.
ACKNOWLEDGMENTS
This research was supported in part by the Office of Bilingual Education and Minority Languages Affairs under Contract R305B960002 as administered by the
U.S. Department of Education. The findings and opinions expressed in this report
do not reflect the position or policies of the Office of Bilingual Education and Minority Languages Affairs or the U.S. Department of Education.
I acknowledge the valuable contribution of colleagues in preparation of this article. Seth Leon and Jim Mirocha provided assistance with the data analyses.
Kathryn Morrison provided technical assistance in preparation of this article. Joan
Herman and Mary Courtney contributed to this article with their helpful comments
and suggestions. I am grateful to Eva Baker and Joan Herman for their support of
this work.
REFERENCES
Abedi, J. (1996). The interrater/test reliability system (ITRS). Multivariate Behavioral Research, 31,
409–417.
Abedi, J., & Leon, S. (1999). Impact of students' language background on content-based performance:
Analyses of extant data. Los Angeles: University of California, National Center for Research on
Evaluation, Standards, and Student Testing.
Abedi, J., Leon, S., & Mirocha, J. (2001). Examining ELL and non-ELL student performance differences and their relationship to background factors: Continued analyses of extant data. Los Angeles:
University of California, National Center for Research on Evaluation, Standards, and Student
Testing.
Abedi, J., & Lord, C. (2001). The language factor in mathematics tests. Applied Measurement in Education, 14, 219–234.
Abedi, J., Lord, C., & Hofstetter, C. (1998). Impact of selected background variables on students’
NAEP math performance. Los Angeles: University of California, National Center for Research on
Evaluation, Standards, and Student Testing.
Abedi, J., Lord, C., Hofstetter, C., & Baker, E. (2000). Impact of accommodation strategies on English language learners’ test performance. Educational Measurement: Issues and Practice, 19(3),
16–26.
Abedi, J., Lord, C., & Plummer, J. R. (1997). Final report of language background as a variable in
NAEP mathematics performance (CSE Tech. Rep. No. 429). Los Angeles: University of California, National Center for Research on Evaluation, Standards, and Student Testing.
Allen, M. J., & Yen, W. M. (1979). Introduction to measurement theory. Monterey, CA: Brooks/Cole.
American Educational Research Association, American Psychological Association, & National Council on Measurement in Education. (1999). Standards for educational and psychological testing.
Washington, DC: American Educational Research Association.
Benjamini, Y., & Hochberg, Y. (1994). Controlling the false discovery rate: A practical and powerful
approach to multiple testing. Journal of the Royal Statistical Society, Series B, 57, 289–300.
Bollen, K. A. (1989). Structural equations with latent variables. New York: Wiley.
Cattell, R. B., & Burdsal, C. A. (1975). The radial parcel double factoring design: A solution to the
item-vs.-parcel controversy. Multivariate Behavioral Research, 10, 165–179.
Cortina, J. M. (1993). What is coefficient alpha? An examination of theory and applications. Journal of
Applied Psychology, 78, 98–104.
Cronbach, L. J., Gleser, G. C., Nanda, H., & Rajaratnam, N. (1972). The dependability of behavioral
measurements: Theory of generalizability of scores and profiles. New York: Wiley.
Duran, R. P. (1989). Assessment and instruction of at-risk Hispanic students. Exceptional Children, 56,
154–158.
Garcia, G. E. (1991). Factors influencing the English reading test performance of Spanish-speaking
Hispanic children. Reading Research Quarterly, 26, 371–391.
Hakuta, K., & Beatty, A. (Eds.). (2000). Testing English language learners in U.S. schools. Washington, DC: National Academy Press.
Linn, R. L. (1995). Assessment-based reform: Challenges to educational measurement. Princeton, NJ:
Educational Testing Service.
Linn, R. L., & Gronlund, N. E. (1995). Measurement and assessment in teaching (7th ed.). Englewood
Cliffs, NJ: Prentice-Hall.
Mazzeo, J., Carlson, J. E., Voelkl, K. E., & Lutkus, A. D. (2000). Increasing the participation of special
needs students in NAEP: A report on 1996 NAEP research activities. Washington, DC: National Center for Education Statistics.
Messick, S. (1994). The interplay of evidence and consequences in the validation of performance assessments. Educational Researcher, 23(2), 13–23.
Mestre, J. P. (1988). The role of language comprehension in mathematics and problem solving. In R. R.
Cocking & J. P. Mestre (Eds.), Linguistic and cultural influences on learning mathematics (pp.
201–220). Hillsdale, NJ: Lawrence Erlbaum Associates, Inc.
National Clearinghouse for English Language Acquisition and Language Instruction Educational Programs. (2002). Survey of the states’ limited English proficient students and available educational
programs and services. Washington, DC: Author.
Navarrette, C., & Gustke, C. (1996). A guide to performance assessment for linguistically diverse students. Albuquerque: New Mexico Highlands University.
No Child Left Behind Act of 2001, Pub. L. No. 107–110, 115 Stat. 1425 (2002).
Salvia, J., & Ysseldyke, J. E. (1998). Assessment (7th ed.). Boston: Houghton Mifflin.
Shavelson, R., & Webb, N. (1991). Generalizability theory: A primer. Newbury Park, CA: Sage.
Zehler, A. M., Hopstock, P. J., Fleischman, H. L., & Greniuk, C. (1994). An examination of assessment
of limited English proficient students (Special Issues Analysis Center Task Order D070 Report).
Arlington, VA: Development Associates.