
Original article
Gender differences in computerised and
conventional educational tests
J. Horne
Department of Psychology, University of Hull, Hull, UK
Accepted: 10 July 2006
Correspondence: J. Horne, Department of Psychology, University of Hull, Hull HU6 7RX, UK. E-mail: [email protected]
Abstract
Research has demonstrated girls to outperform boys on conventional literacy tests. The
present studies concern gender differences on computerised educational tests. Seventy-one
children were tested using LASS Secondary and a set of seven conventional measures.
No significant gender differences were found on any of the LASS Secondary modules, although females did outperform males on a conventional spelling test. A further 126 pupils
were tested on computerised and paper versions of the LASS Secondary reading, spelling
and reasoning modules. No gender differences were found on the computerised versions, but
there were significant differences on the paper versions of the reading and spelling modules
favouring females. In a third study, 45 children were administered computerised and paper
versions of the LASS Junior reading and spelling modules. There were no significant differences on the computerised modules, but girls performed significantly higher than boys on
the paper version of the spelling module. It is possible that computerised assessment does not
detect the established gender effect due to differences between males and females in motivation, computer experience and competitiveness. Further large-scale studies are necessary
to confirm these findings.
Keywords
assessment, children, computer, education, gender.
Introduction
Several researchers have shown that girls score
higher than boys on literacy tests at school entry at
age 4 or 5 (Denton & West 2002; Justice et al. 2005)
and even on pre-school cognitive tasks (Sammons
et al. 2004; Sylva et al. 2004). Strand (1999) maintains
that girls are typically found to score higher than boys
in teacher ratings of reading, writing and mathematics
throughout Key Stage 1. Indeed, a number of studies
using baseline assessment schemes (mostly based on
teacher rating scales) have shown girls to achieve
higher scores than boys (Lindsay 1980; Strand 1997;
Lindsay & Desforges 1999). In addition, the Annual
Report on sample data from the QCA Baseline Assessment Scales for the Autumn 1998 and Spring and
Summer 1999 periods (Qualifications and Curriculum
Authority 1999) also shows that, from a nationally
representative sample of 6953 children, girls score
higher than boys on all items (which are based on
teacher ratings). It has also been reported that girls
make more progress than boys in reading and writing
during Key Stage 1 and so increase the gender gap
(Strand 1997, 1999). Strand (1999) reports that girls
make less progress in mathematics but still score
slightly higher than boys at age 7. However, Tizard
et al. (1988) indicate that, on objective tests, there are
no significant differences between boys and girls at the
end of their nursery education (average age 4 years 7
months) but that girls are significantly ahead of boys
in reading by age 7. Tymms (1999) reports significant
gender differences, favouring girls, on all subsections
of the PIPS baseline test. However, he argues that
‘none was large enough to be regarded as educationally important’ (p. 43). The computer-based version of
the PIPS baseline test shows girls performing slightly
better than boys, although the effect sizes (0.1–0.2) are
small (CEM 2001).
Gender differences in education, particularly with
girls scoring higher than boys on literacy, continue to
be evident throughout the school years (Frome &
Eccles 1998; Pajares et al. 1999; Swiatek et al. 2000;
Herbert & Stipek 2005). A number of studies in recent
years have demonstrated a female advantage over
males on reading tests (MacMillan 2000; Diamond &
Onwuegbuzie 2001) and writing tests (Malecki &
Jewell 2003). Tizard et al. (1988) report that girls in
Britain outperform boys on standardised tests of
reading and writing by the age of 7 but there is only a
small gender difference in mathematics. According to
Fergusson and Horwood (1997) the traditional educational disadvantage of girls has been replaced by a
male disadvantage. They studied a group of children
from school entry to age 18 and found that males
achieved less well than females. These differences
were not due to differences in intelligence as boys and
girls had similar IQ scores. Fergusson and Horwood
suggest that the gender differences were explained by
boys being more disruptive and inattentive in class,
which impeded their learning. Male (1994) states that,
in one local education authority (LEA), two-thirds of the children receiving a
statement of special educational needs are boys. There
are also a disproportionate number of boys identified
as having reading difficulties in the United States
(Allred 1990). Furthermore, Vardill (1996) reports
that more boys than girls are identified as having
special educational needs and, in one LEA, almost
twice as many boys as girls were referred. He suggests that one of the reasons for this is
that boys experiencing difficulties are more likely to
be a problem to the teacher because of associated
conduct problems. Alternatively, Malecki and Jewell
(2003) suggest that boys may be over-identified for
difficulties as girls have an advantage on tests and
normative data may not account for gender.
Plewis (1997) suggests that the reasons for gender
differences in educational attainment are not well
understood. It has sometimes been argued that teachers have lower expectations for boys. He suggests,
however, that to conclude that teachers are biased
against boys it is necessary to show that these low
expectations are inaccurate and that teachers behave
differently towards male pupils. Plewis did not find
teachers’ assessments to be less accurate for this
group, although he did find teachers’ expectations to
be biased against boys. A number of researchers
suggest that expectations could be affected by the way
boys behave (Tizard et al. 1988; Bennett et al. 1993;
Delap 1995).
Despite the huge growth in expenditure on information and communication technology in education over recent years, the majority of assessment
taking place in schools is pen-and-paper based (Cowan
& Morrison 2001). According to Hargreaves et al.
(2004), English schools have one of the highest rates
of computer use in the world, with one computer for
every 9.7 pupils in 2002. There are a number of advantages of computerised assessment (see Greenwood
& Rieth 1994; Singleton 1994, 1997; Singleton et al.
1995, 1999; Woodward & Rieth 1997; British Psychological Society 1999), including that such tests are
labour- and time-saving, more standardised and more
controlled. Another principal advantage is the possibility of adaptive testing (sketched below). In an adaptive test, individuals are moved quickly to the test items that will
most efficiently discriminate their capabilities,
making assessment shorter, more reliable, more efficient and often more acceptable to the person being
tested. A number of studies have indicated that children (particularly if they feel they might perform
badly) often prefer computer-based assessment to
traditional assessment by a human assessor. However,
the effect of utilising computerised assessments on
gender differences is not clear. It has been suggested
that females may perform more poorly than males on
computerised tests due to computer anxiety (Brosnan
1999; Todman 2000), although it has also been argued
that males under-perform on such tests because they practise
less. A number of studies have shown females to
obtain lower scores on computerised assessments
(Gallagher et al. 2002; Volman et al. 2005), while
others have shown there to be no gender differences on
computerised assessment (Singleton 2001; Fill &
Brailsford 2005).
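To make the idea of adaptive testing concrete, the following is a minimal Python sketch of one common item-selection scheme: a simple staircase over a calibrated item bank. It is illustrative only; the paper does not describe the actual LASS algorithms, and the item bank, step-size schedule and fixed test length below are all assumptions.

```python
# Minimal sketch of adaptive item selection (illustrative only: the actual
# LASS algorithms are not described in this paper).

def run_adaptive_test(item_bank, answer, n_items=15, step=1.0):
    """item_bank: dict mapping item_id -> difficulty (arbitrary common scale).
    answer: callable item_id -> bool (True if the pupil responds correctly)."""
    ability = 0.0                          # start in the middle of the scale
    administered = []
    for _ in range(n_items):
        # Choose the unused item whose difficulty is closest to the current
        # ability estimate: it discriminates most efficiently at that level.
        item = min((i for i in item_bank if i not in administered),
                   key=lambda i: abs(item_bank[i] - ability))
        administered.append(item)
        # Simple staircase update: move towards harder items after a correct
        # response and easier items after an error.
        ability += step if answer(item) else -step
        step = max(step * 0.8, 0.1)        # shrink steps as evidence accumulates
    return ability, administered
```

Each response steers the pupil towards items near his or her level, which is why an adaptive test can be shorter than a fixed-form test drawn from the same bank.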
The present study seeks to ascertain whether gender
differences exist in computerised educational tests. It
consists of three small-scale studies involving two
computerised tests – one for secondary schools (LASS
Secondary) and one for junior schools (LASS Junior).
Method
Participants
The participants in the three studies attended schools
in different regions of England and Wales, which were
selected in order to give a representative range across
the socio-economic spectrum. Details of the participants are given in Table 1. The pupils in each school
were randomly selected from the school register – they
were not selected on the basis of ability, nor because they
were considered at risk of having any special educational
need.
Instruments
Study 1: LASS Secondary versus conventional tests
LASS Secondary (Horne et al. 1999) is a computerised test consisting of seven main assessment
modules. It is standardised for the age range 11 years 0
months to 15 years 11 months.
1. Sentence reading is an adaptive test that involves
finding the missing word in a sentence. Items are
selected from a bank of 70 items.
2. Spelling is an adaptive test that involves spelling
single words. Items are selected from a bank of 98
items.
3. Reasoning is an adaptive test involving logical
matrix puzzles. Items are selected from a bank of
58 items.
4. Visual spatial short-term memory has a scenario of
a ‘cave’ with eight ‘hollows’. Different ‘phantoms’
appear in different hollows one at a time and then
disappear. The pupils must select from an array the
phantoms that were presented and place them in the
correct hollow. This is an adaptive test that starts
with three phantoms and progresses to a maximum
of eight, depending on the pupil’s performance.
5. Auditory-sequential short-term memory is a forward digit span task. Pupils are given a telephone
number to remember, which they then enter onto a
representation of a mobile phone using the mouse
or the computer keyboard (the progression rule is
sketched after this list). Pupils start with two
items of three-digit numbers and, if they answer
one or both correctly, then they move on to two
items of four-digit numbers and so on up to a
maximum of nine digits.
6. Non-word reading is a test of phonic decoding
skills and has 25 items. For each item, pupils are
presented with a non-word visually and hear four
different spoken versions of the word; they must
select the version that they think is correct.
7. Syllable segmentation is a test of syllable and
phoneme deletion, which identifies poor phonological processing ability. Pupils are presented with
32 words and asked what each word would sound
like if part of it were removed. They listen to four
alternative answers before selecting the one
that they think is correct.
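As referenced in module 5 above, the digit span progression (two items per length, advancing while at least one is recalled correctly) can be expressed compactly. A minimal sketch; the discontinue rule (stop when both items at a length fail) is an assumption, since the paper only states the upward progression to nine digits.

```python
import random

# Sketch of the forward digit span progression described for module 5:
# two items at each length, advancing while at least one is recalled
# correctly, from three digits up to a maximum of nine.

def digit_span(recall, min_len=3, max_len=9):
    """recall: callable taking a digit string and returning the pupil's
    reproduction of it (e.g. as typed on the on-screen phone keypad)."""
    span = 0
    for length in range(min_len, max_len + 1):
        items = ["".join(random.choices("0123456789", k=length))
                 for _ in range(2)]               # two items per length
        n_correct = sum(recall(item) == item for item in items)
        if n_correct == 0:                        # both wrong: stop (assumed rule)
            break
        span = length                             # credit for this length
    return span
```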
Commensurate conventional tests were used as comparison measures for the seven LASS Secondary
modules (see Table 2). It is important to note that these
tests are not exact equivalents of the LASS Secondary
modules.
Study 2: LASS Secondary computerised versus pen-and-paper administration
The second study utilised the LASS Secondary (Horne
et al. 1999) sentence reading, spelling and reasoning
modules as outlined above. These were administered
in their standard computerised format (as adaptive
tests) and also in a pen-and-paper format (in which all
items were administered).
Study 3: LASS Junior computerised versus pen-and-paper administration
The final study utilised the LASS Junior (Thomas et al.
2001) sentence reading and spelling modules. These
were administered in their standard computerised
Table 1. Participant details.

Study                                         Schools    n     Males   Females   Age range     Mean age (SD)
1. LASS Secondary versus conventional tests      5        71    44      27       11:6–15:11    13:6 (17.02 months)
2. LASS Secondary computer versus paper          3       126    59      67       11:1–13:2     12:2 (7.90 months)
3. LASS Junior computer versus paper             1        45    25      20       9:5–11:5      10:6 (7.32 months)
Table 2. Conventional tests and their comparative LASS Secondary modules.

Conventional test                                                                  LASS module
NFER – Nelson group reading sentence completion test (Macmillan Test Unit 1990)    Sentence reading
British spelling test series Level 3 (Vincent & Crumpler 1997)                     Spelling
Matrix analogies test – short form (Naglieri 1985)                                 Reasoning
Wechsler Memory Scale-III Spatial Span Test (Wechsler 1997)                        Visual memory
Wechsler Intelligence Scale for Children-III Digit Span Test (Wechsler 1991)       Auditory memory
Phonological Assessment Battery (Frederickson et al. 1997) non-word reading        Non-word reading
Phonological Assessment Battery (Frederickson et al. 1997) spoonerisms             Syllable segmentation
format (as adaptive tests) and also in a pen-and-paper format (in which all items were administered).
1. Sentence reading is an adaptive test that involves
finding the missing word in a sentence. Items are
selected from a bank of 94 items.
2. Spelling is an adaptive test that involves spelling
single words. Items are selected from a bank of
122 items.
Procedure
Study 1: LASS Secondary versus conventional tests
Pupils completed the seven LASS Secondary modules,
with each school completing the tests in a different
order. Pupils were administered the equivalent conventional tests, in the same order, with a 4-week gap in
between. Half the schools completed the LASS Secondary tests first and half completed the conventional
tests first.
Study 2: LASS Secondary computerised versus pen-and-paper administration
Pupils completed the three LASS Secondary computerised modules, with each school completing the tests
in a different order. Pupils were administered the
equivalent pen-and-paper versions, in the same order
as the comparable computerised tests, with an 8-week
gap in between.
Study 3: LASS Junior computerised versus pen-and-paper administration
Pupils completed the two LASS Junior computerised
modules and the equivalent pen-and-paper versions,
with a 4-week gap in between. Half the pupils completed the computerised tests first and half completed
the pen-and-paper tests first.
Results
Study 1: LASS Secondary versus conventional tests
Analysis by gender (see Table 3) revealed no significant differences on any of the LASS Secondary
modules.
Analysis by gender revealed a significant difference
on only one of the conventional measures (BSTS 3
spelling test), with girls outperforming boys on this
test (see Table 4).
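The comparisons reported in Tables 3 and 4 are independent-samples t-tests, with the effect size r recoverable from t and df as r = sqrt(t² / (t² + df)). A minimal Python sketch using SciPy; the z-scores below are hypothetical placeholders, not the study data.

```python
import numpy as np
from scipy import stats

# Independent-samples t-test for one measure, with the effect size r
# recovered from t and df, matching the columns of Tables 3 and 4.
# The z-scores below are hypothetical placeholders, not the study data.

girls = np.array([0.7, 0.4, 1.1, -0.2, 0.9, 0.3])
boys = np.array([0.1, -0.5, 0.6, 0.3, -0.8, 0.2])

t, p = stats.ttest_ind(girls, boys)        # pooled-variance t-test
df = len(girls) + len(boys) - 2
r = np.sqrt(t**2 / (t**2 + df))            # effect size as reported
print(f"t({df}) = {t:.2f}, P = {p:.3f}, r = {r:.2f}")
```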
Fifty-one of the 71 pupils preferred the computer
tests, while 17 preferred the paper-and-pencil tests.
There was no significant gender difference (see Fig 1)
in the preference for computerised or paper-and-pencil
tests (χ² = 0.34, df = 1, not significant).
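The preference comparison is a chi-squared test of association between gender and preferred test format. A sketch using scipy.stats; the cell counts below are hypothetical (the text reports only the overall totals of 51 and 17), so the printed statistic will not reproduce the reported χ² of 0.34.

```python
import numpy as np
from scipy.stats import chi2_contingency

# 2x2 chi-squared test of gender against preferred test format.
# Hypothetical cell counts: only the totals (51 computer, 17 paper)
# are reported in the text.

counts = np.array([[31, 11],    # males:   computer, paper-and-pencil
                   [20, 6]])    # females: computer, paper-and-pencil
chi2, p, df, expected = chi2_contingency(counts, correction=False)
print(f"chi-squared({df}) = {chi2:.2f}, P = {p:.2f}")
```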
Study 2: LASS Secondary computerised versus pen-and-paper administration
Analysis by gender (see Table 5) revealed no significant differences on any of the LASS Secondary
computerised modules but significant differences on
two of the pen-and-paper versions of the tests (reading
and spelling), with girls outperforming boys on these
tests. Correlations between the computerised and pen-and-paper versions of the tests were 0.90 (P < 0.001)
for reading, 0.89 (P < 0.001) for spelling and 0.79
(P < 0.001) for reasoning.
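The mode-equivalence correlations are ordinary Pearson coefficients computed over each pupil's paired scores. A sketch with hypothetical paired data:

```python
from scipy.stats import pearsonr

# Pearson correlation between each pupil's computerised and pen-and-paper
# scores on the same module. The paired scores below are hypothetical.

computer_scores = [54, 61, 47, 70, 38, 55, 63]
paper_scores = [50, 63, 45, 72, 35, 58, 60]

r, p = pearsonr(computer_scores, paper_scores)
print(f"r = {r:.2f}, P = {p:.3f}")
```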
Table 3. Gender differences on the seven LASS Secondary modules (z-scores).

Module                       Gender   n    Mean   SD     t      df   Significance   Effect size (r)
LASS sentence reading        Female   27   0.64   0.70   1.25   69   NS             0.15
                             Male     44   0.36   0.99
LASS spelling                Female   26   0.69   0.76   1.93   68   NS             0.22
                             Male     44   0.25   1.00
LASS reasoning               Female   27   0.51   0.70   0.93   69   NS             0.11
                             Male     44   0.33   0.80
LASS visual memory           Female   27   0.02   0.93   0.44   69   NS             0.05
                             Male     44   0.13   1.03
LASS auditory memory         Female   27   0.54   0.74   1.07   69   NS             0.13
                             Male     44   0.29   1.17
LASS non-word reading        Female   27   0.49   0.97   0.55   69   NS             0.07
                             Male     44   0.35   1.10
LASS syllable segmentation   Female   27   0.49   0.86   1.23   69   NS             0.14
                             Male     44   0.23   0.84

Table 4. Gender differences on the seven conventional measures (z-scores).

Measure                        Gender   n    Mean   SD     t      df   Significance   Effect size (r)
NFER group reading test        Female   27   0.29   0.85   1.29   69   NS             0.15
                               Male     44   0.63   1.19
BSTS 3                         Female   27   0.06   0.87   2.36   68   P < 0.05       0.27
                               Male     44   0.66   1.27
MAT                            Female   27   0.22   0.83   0.77   69   NS             0.09
                               Male     44   0.41   1.11
WMS-III spatial span (total)   Female   27   0.11   1.10   1.21   68   NS             0.14
                               Male     43   0.42   1.00
WISC-III Digit Span (total)    Female   27   0.08   0.96   1.87   69   NS             0.22
                               Male     44   0.40   1.08
PhAB non-word reading          Female   27   0.06   0.77   0.71   67   NS             0.09
                               Male     42   0.10   1.15
PhAB spoonerisms               Female   27   0.22   0.81   0.56   68   NS             0.07
                               Male     43   0.35   1.05
Study 3: LASS Junior computerised versus pen-and-paper administration
Analysis by gender (see Table 6) revealed no significant differences on either of the LASS Junior
computerised modules but a significant difference on
the pen-and-paper version of the spelling test, with
girls outperforming boys on this test. Correlations
between the computerised and pen-and-paper versions
of the tests were 0.92 (P < 0.001) for reading and 0.96
(P < 0.001) for spelling.
Discussion
No significant gender differences were found on any
of the computerised LASS Secondary or LASS Junior
modules. The conventional measures utilised in Study
1 show a significant gender difference on spelling,
with females outperforming males. Additionally, girls
also scored higher than boys on the pen-and-paper
versions of the LASS Secondary reading and spelling
modules and the LASS Junior spelling module. The
gender difference in spelling ability on the three pen-and-paper spelling tests is concordant with the findings of
Hyde and Linn (1988) that a gender difference favouring
females exists in spelling, even at undergraduate
level (Coren 1989). The results of the first
LASS Secondary study suggest that there is no significant effect of gender in pupils’ preference for
computerised or conventional tests.
The findings of the present study, that there are no
significant gender differences in the computerised
reading and spelling tests, are in contrast to the results
of previous studies, which suggested that girls outperform boys in conventional pen-and-paper literacy
tests. It could be suggested that fully computerised
tests are more objective than conventional assessment
and less susceptible to gender bias. However, it is
more likely that the interaction with a computer in the
studies has enhanced the motivation of the boys more
than that of the girls and this could have compensated
for (or masked) gender differences in performance.
Several studies across countries have shown that boys use ICT
more, both at home and at school (Scott et al. 1992;
Bannert & Arbinger 1996; Makrakis & Sawada 1996;
Janssen & Plomp 1997; Kadijevich 2000). Further, it
is reported that girls do not view computers as positively as boys do (Durndell 1991; Comber et al. 1997;
Durndell & Thomson 1997; Huber & Schofield 1998)
and Crook (1996) suggests that girls’ attitudes towards
technology may become more negative as they go
through school. Previous research has suggested that
females experience higher levels of computer anxiety
(Brosnan 1999; Todman 2000) which may negatively
affect their performance on computerised tests. This is
consistent with the finding of Newton and Beck (1993)
that, in the early 1990s, the percentage of women
studying for computer science degrees was falling.
[Fig 1. Gender differences in preferences for different test types: frequencies of pupils preferring the computer versus pen-and-paper tests, by gender.]
Table 5. Gender differences on the LASS Secondary computerised and paper-administered modules.

Module                    Gender   n    Mean    SD      t      df    Significance   Effect size (r)
Computerised reading      Female   66   53.95   18.93   0.21   113   NS             0.02
                          Male     57   53.19   21.28
Computerised spelling     Female   65   59.85   20.64   0.90   121   NS             0.08
                          Male     58   56.55   19.63
Computerised reasoning    Female   51   33.90   17.76   0.69    93   NS             0.07
                          Male     46   36.13   14.00
Pen-and-paper reading     Female   66   55.20   16.45   3.14   121   P < 0.01       0.27
                          Male     57   45.32   18.47
Pen-and-paper spelling    Female   65   61.05   20.21   3.37   121   P < 0.01       0.29
                          Male     58   48.78   20.07
Pen-and-paper reasoning   Female   51   35.63   13.48   0.16    95   NS             0.02
                          Male     46   35.24   10.04
Table 6. Gender differences on the LASS Junior computerised and paper-administered modules.

Module                   Gender   n    Mean    SD      t      df   Significance   Effect size (r)
Computerised reading     Female   20   64.05   18.66   1.16   43   NS             0.17
                         Male     25   56.48   24.03
Computerised spelling    Female   20   75.20   19.76   1.60   43   NS             0.24
                         Male     25   65.00   22.27
Pen-and-paper reading    Female   20   61.85   15.43   1.43   43   NS             0.22
                         Male     25   54.12   19.85
Pen-and-paper spelling   Female   20   79.25   18.14   2.19   43   P < 0.05       0.32
                         Male     25   66.12   21.30
Additionally, Volman and van Eck (2001) state that
girls report fewer ICT skills and less ICT knowledge
than boys, and Janssen and Plomp (1997) suggest that
boys score better than girls on tests of ICT skills.
However, Crook and Steele (1987) found no significant gender differences among reception class
pupils choosing to engage in computer activities in
school and Essa (1987) obtained similar results among
pre-schoolers. Volman and van Eck (2001) and Volman et al. (2005) also found gender differences to be
less common in younger pupils than older pupils.
Crook (1996, p. 25) concludes that ‘. . . gender-based
attitude differences are not convincingly present at the
start of schooling: they must somehow be cultivated
within the early school years’. Nevertheless, if gender
differences in computer interest do exist, then these
could bias the results of a computerised assessment.
However, Taylor et al. (1999) found no relationship
between computer familiarity and level of performance on a computerised test of English as a foreign
language, after controlling for English language ability. Furthermore, Hargreaves et al. (2004) found no
relationship between low performance on a computerised mathematics test and unfamiliarity or nervousness with computers. This has been supported in
adult samples by Parshall and Kromrey (1993) and
Clariana and Wallace (2002). Fill and Brailsford
(2005) found no evidence of higher computer anxiety
among female university students.
Alternatively, it is suggested that males are more
competitive than females, while females are more
collaborative (Maccoby & Jacklin 1974; Fiore 1999).
According to Hayes and Waller (1994), this may interfere with accurate performance in situations
such as group testing. The pen-and-paper reading and
spelling tests used in the studies were administered as
group tests, while the computerised tests were administered individually in most cases. It is therefore
possible that the male students attempted to finish the
conventional tests quickly and so made more mistakes,
whereas during the individual computerised tests boys
were under less pressure to complete the test quickly.
Both LASS Junior and LASS Secondary have network
versions, which can be installed on several machines
within a computer room so that pupils can be tested
simultaneously (wearing headphones). In such situations, it may be a necessary precaution to advise that
students being tested at the same time be administered
the tests in different orders. Furthermore, the paper
versions of the LASS tests in the current studies incorporated all of the test items, while the computerised
versions utilised an adaptive algorithm. It is possible
that the extended length of the paper test impacted
more negatively on the boys.
Conclusions
Gender differences favouring females that are evident
in conventional literacy tests are not evident in the
LASS Secondary and LASS Junior reading and spelling tests. It may be that computerised assessment does
not detect this established gender effect due to differences between males and females in motivation,
computer experience and competitiveness. However,
these results are from small-scale studies and further
large-scale studies looking at the effect of test mode
on gender differences in educational tests are necessary to establish the generalisability of these findings.
References
Allred R.A. (1990) Gender differences in spelling achievement in grades 1 through 6. Journal of Educational
Research 83, 187–193.
Bannert M. & Arbinger P.R. (1996) Gender-related differences in exposure to and use of computers: results of a
survey of secondary schools. European Journal of Psychology of Education 11, 269–282.
Bennett R.E., Gottesman R.L., Rock D.A. & Cerullo F.
(1993) Influence of behaviour perceptions and gender on
teachers’ judgements of students’ academic skills. Journal of Educational Psychology 85, 347–356.
British Psychological Society (1999) Dyslexia, Literacy and
Psychological Assessment. British Psychological Society,
Leicester.
Brosnan M. (1999) Computer anxiety in students. In Computer-Assisted Assessment in Higher Education (eds
S. Brown, J. Bull & P. Race), pp. 47–54. Kogan Page,
London.
CEM Centre (2001) Performance Indicators in Primary
Schools: Technical Report 2001: CD-ROM Version.
CEM Centre, Durham.
Clariana R. & Wallace P. (2002) Paper-based versus computer-based assessment: key factors associated with the
test mode effect. British Journal of Educational Technology 33, 593–602.
Comber Ch., Colley A., Hargreaves D.J. & Dorn L. (1997)
The effects of age, gender and computer experience upon
computer attitudes. Educational Research 39, 123–134.
Coren S. (1989) The spelling component test: psychometric
evaluation and norms. Educational and Psychological
Measurement 49, 961–971.
Cowan P.C. & Morrison H. (2001) Assessment in the 21st
century: role of computerised adaptive testing in National
Curriculum subjects. Teacher Development 5, 241–257.
Crook C. (1996) Computers and the Collaborative Experience of Learning: A Psychological Perspective. Routledge, London.
Crook C. & Steele J. (1987) Self-selection of simple computer activities by infant school pupils. Educational
Psychology 7, 23–32.
Delap M.R. (1995) Teachers’ estimates of candidates’ performance in public examinations. Assessment in Education 2, 75–92.
Denton K. & West J. (2002) Children’s Reading and
Mathematics Achievement in Kindergarten and First
Grade. National Center for Education Statistics, Washington, DC.
Diamond P.J. & Onwuegbuzie A.J. (2001) Factors associated with reading achievement and attitudes among
elementary school-aged students. Research in the
Schools 8, 1–11.
Durndell A. (1991) The persistence of the gender gap in
computing. Computers and Education 16, 283–287.
Durndell A. & Thomson K. (1997) Gender and computing: a
decade of change? Computers and Education 28, 1–9.
Essa E.L. (1987) The effect of a computer on pre-school
children’s activities. Early Childhood Research Quarterly 2, 377–382.
Fergusson D.M. & Horwood L.J. (1997) Gender differences
in educational achievement in a New Zealand birth cohort. New Zealand Journal of Educational Studies 32,
83–96.
Fill K. & Brailsford S. (2005) Investigating gender bias in
formative and summative CAA. Available at: http://
eprints.soton.ac.uk/16256, last accessed 8 May 2006.
Fiore C. (1999) Awakening the tech bug in girls. Learning
and Leading with Technology 26, 10–17.
Frederickson N., Frith U. & Reason R. (1997) Phonological
Assessment Battery. NFER-Nelson, Windsor.
Frome P.M. & Eccles J.S. (1998) Parents’ influence on
children’s achievement-related perceptions. Journal of
Personality and Social Psychology 74, 435–452.
Gallagher A., Bridgeman B. & Cahalan C. (2002) The effect
of computer-based tests on racial-ethnic and gender
groups. Journal of Educational Measurement 39,
133–147.
Greenwood C. & Rieth H. (1994) Current dimensions of
technology-based assessment in special education. Exceptional Children 61, 105–113.
Hargreaves M., Shorrocks-Taylor D., Swinnerton K.T. &
Threlfall J. (2004) Computer or paper? That is the
question: does the medium in which assessment questions are presented affect children’s performance in
mathematics? Educational Research 46, 29–42.
Hayes Z.L. & Waller T.G. (1994) Gender differences in
adult readers: a process perspective. Canadian Journal of
Behavioural Science 26, 421–437.
Herbert J. & Stipek D. (2005) The emergence of gender
differences in children’s perceptions of their academic
competence. Applied Developmental Psychology 26,
276–295.
Horne J.K., Singleton C.H. & Thomas K.V. (1999) Lucid
Assessment Systems for Schools – Secondary. Lucid
Creative Limited, Beverley.
Huber B. & Schofield J.W. (1998) I like computers, but
many girls don’t: gender and the sociocultural context of
computing. In Education/Technology/Power: Educational Computing as a Social Practice (eds H. Bromley &
M.W. Apple), pp. 103–132. State University of New York
Press, Albany.
Hyde J.S. & Linn M.C. (1988) Gender differences in verbal
ability: a meta-analysis. Psychological Bulletin 104, 53–
69.
Janssen R.I. & Plomp T.J. (1997) Information technology
and gender equality: a contradiction in terminis? Computers and Education 28, 65–78.
Justice L.M., Invernizzi M., Geller K., Sullivan A. & Welsch
J. (2005) Descriptive developmental performance of at-risk preschoolers on early literacy tasks. Reading
Psychology 26, 1–25.
Kadijevich D. (2000) Gender differences in computer attitude among ninth-grade pupils. Journal of Educational
Computing Research 22, 145–154.
Lindsay G. (1980) The Infant Rating Scale. British Journal
of Educational Psychology 50, 97–104.
Lindsay G. & Desforges M. (1999) The use of the
Infant Index/Baseline-PLUS as a baseline assessment
measure of literacy. Journal of Research in Reading 22,
55–66.
Maccoby E.E. & Jacklin C.N. (1974) The Psychology of Sex
Differences. Stanford University Press, Stanford.
MacMillan P. (2000) Simultaneous measurement of reading
growth, gender and relative-age effects: many faceted
Rasch applied to CBM reading scores. Journal of Applied
Measurement 1, 393–408.
Macmillan Test Unit (1990) Group Reading Test. NFER-Nelson, Windsor.
Makrakis V. & Sawada T. (1996) Gender, computers and
other school subjects among Japanese and Swedish
pupils. Computers and Education 26, 225–231.
Male D.B. (1994) Time taken to complete assessments and
make statements: does the nature of SEN matter and will
new legislation make a difference? British Journal of
Special Education 21, 147–151.
Malecki C.K. & Jewell J. (2003) Developmental, gender and
practical considerations in scoring curriculum-based
measurement writing probes. Psychology in the Schools
40, 379–390.
Naglieri J.A. (1985) Matrix Analogies Test – Short Form.
The Psychological Corporation, London.
Newton P. & Beck E. (1993) Computing: an ideal occupation
for women? In Computers into Classrooms (eds J. Benyon
& H. Mackay), pp. 130–146. The Falmer Press, London.
Pajares F., Miller M.D. & Johnson M.J. (1999) Gender differences in writing self-beliefs of elementary school
students. Journal of Educational Psychology 91, 50–61.
Parshall C.G. & Kromrey J.D. (1993) Computer testing
versus paper-and-pencil: an analysis of examinee characteristics associated with mode effect. Paper presented
at the Annual Meeting of the American Educational
Research Association, Atlanta, GA, April (Educational
Resources Document Reproduction Service ED363272).
Plewis I. (1997) Inferences about teacher expectations from
national assessment at key stage one. British Journal of
Educational Psychology 67, 235–247.
Qualifications and Curriculum Authority (1999) Annual
Report of the Evaluation of the First Year of Statutory
Baseline Assessment 1998/1999. QCA, London.
Sammons P., Elliot K., Sylva K., Melhuish E., Siraj-Blatchford I. & Taggart B. (2004) The impact of pre-school on
young children’s cognitive attainments at entry to reception. British Educational Research Journal 30, 691–712.
Scott T., Cole M. & Engel M. (1992) Computers and education: a cultural constructivist perspective. Review of
Research in Education 18, 191–251.
Singleton C.H. (1994) Computer applications in the identification and remediation of dyslexia. In Literacy and
Computers: Insights from Research (ed. D. Wray),
pp. 55–61. United Kingdom Reading Association, Widnes.
Singleton C.H. (1997) Computer-based assessment of reading. In The Psychological Assessment of Reading (eds
J.R. Beech & C.H. Singleton), pp. 257–278. Routledge,
London.
Singleton C.H. (2001) Computer-based assessment in education. Educational and Child Psychology 18, 58–74.
Singleton C.H., Horne J.K. & Vincent D. (1995) Implementation of a Computer-Based System for the
Diagnostic Assessment of Reading. National Council for
Educational Technology, Coventry.
Singleton C.H., Horne J.K. & Thomas K.V. (1999) Computerised baseline assessment of literacy. Journal of
Research in Reading 22, 67–80.
Strand S. (1997) Pupil progress during key stage 1: a value
added analysis of school effects. British Educational
Research Journal 23, 471–487.
Strand S. (1999) Baseline assessment results at age 4: associations with pupil background factors. Journal of Research in Reading 22, 14–26.
Swiatek M.A., Lupkowski-Shoplik A. & O’Donoghue C.C.
(2000) Gender differences in above-level EXPLORE
scores of gifted third through sixth graders. Journal of
Educational Psychology 92, 718–723.
Sylva K., Melhuish E., Sammons P., Siraj-Blatchford I. &
Taggart B. (2004) The Effective Provision of Pre-School
Education (EPPE) Project: Findings from Pre-School to
End of Key Stage 1. DFES Publications, Nottingham.
Taylor C., Kirsch I., Eignor D. & Jamieson J. (1999) Examining the relationship between computer familiarity
and performance on computer-based language tasks.
Language Learning 49, 219–274.
Thomas K.V., Singleton C.H. & Horne J.K. (2001) Lucid
Assessment Systems for Schools – Junior. Lucid Creative
Ltd, Beverley.
Tizard B., Blatchford P., Burke J., Farquhar C. & Plewis I.
(1988) Young Children at School in the Inner City.
Lawrence Erlbaum, Hove.
Todman J. (2000) Gender differences in computer anxiety
among University entrants since 1992. Computers and
Education 34, 27–35.
Tymms P.B. (1999) Baseline Assessment and Monitoring in
Primary Schools. David Fulton, London.
Vardill R. (1996) Imbalance in the numbers of boys and girls
identified for referral to educational psychologists: some
issues. Support for Learning 11, 123–129.
Vincent D. & Crumpler M. (1997) British Spelling Test
Series. NFER-Nelson, Windsor.
Volman M. & van Eck E. (2001) Gender equity and
information technology in education: the second
decade. Review of Educational Research 71, 613–
631.
Volman M., van Eck E., Heemskerk I. & Kuiper E. (2005)
New technologies, new differences: gender and ethnic
differences in pupils’ use of ICT in primary and
secondary education. Computers and Education 45,
35–55.
Wechsler D. (1991) Wechsler Intelligence Scale for Children – Third Edition. The Psychological Corporation,
San Antonio.
Wechsler D. (1997) Wechsler Memory Scale – Third Edition. The Psychological Corporation, London.
Woodward J. & Rieth H. (1997) A historical review of
technology research in special education. Review of
Educational Research 67, 503–536.