
Psychon Bull Rev
DOI 10.3758/s13423-014-0680-8
BRIEF REPORT
Letter position coding across modalities: Braille and sighted
reading of sentences with jumbled words
Manuel Perea & María Jiménez & Miguel Martín-Suesta &
Pablo Gómez
© Psychonomic Society, Inc. 2014
Abstract This article explores how letter position coding is
attained during braille reading and its implications for models
of word recognition. When text is presented visually, the
reading process easily adjusts to the jumbling of some letters
(jugde–judge), with a small cost in reading speed. Two explanations have been proposed: One relies on a general mechanism of perceptual uncertainty at the visual level, and the other
focuses on the activation of an abstract level of representation
(i.e., bigrams) that is shared by all orthographic codes. Thus,
these explanations make differential predictions about reading
in a tactile modality. In the present study, congenitally blind
readers read sentences presented on a braille display that
tracked the finger position. The sentences either were intact
or involved letter transpositions. A parallel experiment was
conducted in the visual modality. Results revealed a substantially greater reading cost for the sentences with transposed-letter words in braille readers. In contrast with the findings
with sighted readers, in which there is a cost of transpositions
in the external (initial and final) letters, the reading cost in
braille readers occurs serially, with a large cost for initial letter
transpositions. Thus, these data suggest that the letter-position-related effects in visual word recognition are due to
the characteristics of the visual stream.
Keywords: Letter position · Word recognition · Reading
M. Perea (*) · M. Jiménez
Departamento de Metodología and ERI-Lectura, Universitat de
València, Av. Blasco Ibáñez, 21, 46010 Valencia, Spain
e-mail: [email protected]
M. Martín-Suesta
Organización Nacional de Ciegos (ONCE), Valencia, Spain
P. Gómez
DePaul University, Chicago, USA
A pervasive phenomenon in the literature on visual word
recognition and reading is that a jumbled word like jugde
can be easily confused with judge (Perea & Lupker, 2003,
2004). Indeed, the ERP waves of jumbled words like jugde
closely resemble those of their base words (judge) in relatively
late time windows (N400 component; Vergara-Martínez,
Perea, Gómez, & Swaab, 2013). The robustness of this phenomenon across different orthographic systems (see Frost,
2012) demonstrates that character position encoding during
visual word recognition is graded in some form: the string
ABCD is more perceptually similar to ACBD than to ADCB,
and even more than to AXYD.
In a sentence-reading experiment, Rayner, White, Johnson,
and Liversedge (2006) found that readers were able to
understand setneces wirtten with jubmled lettres relatively
accurately. Rayner et al. also reported that there was a
decrement in reading speed, relative to intact sentences.
The reading cost (in terms of words per minute) with
jumbled words was especially large when the letter transpositions occurred at external positions (a slowdown of
36% and 26% for initial and final transpositions, respectively), while it was
substantially smaller when the letter transpositions occurred in
internal positions (11%). This pattern is consistent with the
commonly held view that external letters play a special role
during visual word recognition (Tydgat & Grainger, 2009) and
that letters in the visual modality are processed in parallel
(Adelman, Marquis, & Sabatos-DeVito, 2010).
The results outlined above indicate that some form of
graded representation of letter position coding is utilized
during word recognition. The question is whether these
graded representations are a consequence of the characteristics of the visual system or, instead, are an intrinsic property
of all orthographic codes (i.e., they occur at an abstract level
of representation). To answer this question, we examined
how letter position coding is attained during sentence reading
in a tactile modality: braille. The braille code is employed by
people who are blind or have low vision. Braille characters are
represented as raised dots in a 3 × 2 matrix and are read sequentially (e.g., JUDGE would be written as a sequence of five such cells; the braille glyphs are not reproduced here). Because of its letter-by-letter nature, braille has been characterized as
“the most strictly serial mode of language input” (Bertelson,
Mousty, & Radeau, 1992, p. 284). Comparing braille and
sighted reading not only provides us with insights into
modality-specific versus modality-independent processes in
reading (Perea, García-Chamorro, Martín-Suesta, & Gómez,
2012), but also may be of particular relevance for improving
the methods of braille teaching; among the visually impaired,
fluency in braille is crucial to achieving employment and
higher incomes (Ryles, 1996).
In the experiment, we included two types of sentences: (1)
intact sentences and (2) sentences wirtten with jubmled lettres
(at the beginning, in the middle, or at the end of the words; for
illustrations, see Table 1). To present the sentences and to
record the location/timing of the participant’s reading position
during sentence reading in braille, we employed a display that
detects the position of the finger while reading (Active Braille
40-cell display, HandyTech). Thus, it was possible to obtain
information about reading speed, which has some commonalities with (as well as important differences from) the data
obtained from an eyetracker. For comparison purposes, we
also presented the sentences on a computer screen to sighted
readers, and their eye movements were monitored via an
eyetracker.
In the eye-tracking literature, there are global and local oculomotor measurements that have been accepted and
validated (see Rayner et al., 2006). Studies on the motor
component of braille reading have shown that there are some
fundamental differences between the two reading modalities
(see Hughes, McClelland, & Henare, 2014, for a discussion).
While eyetrackers measure oculomotor activity, active braille
displays measure the point of contact with the braille line.
Nonetheless, both methods yield some measurements that can
be thought of as indices of reading difficulty. Total reading time has an analogue in both sighted and braille reading: reading rate (words per minute); note, however, that reading is around three times faster in the visual than in the tactile modality.
Number of fixations in eye movement research is qualitatively
different from any parallel measurement in braille: While
saccades are ballistic and there is saccadic suppression, braille
readers actually detect the dots during the movement.
Furthermore, while visual information is obtained during a
fixation, an interruption of motion for a braille reader implies a loss in information gathering. Nonetheless, the number of
lingers in braille (the reading finger slowing down and lingering on one character) may also be a good index of reading difficulty, as is the number of fixations in sighted readers.
Another measurement that differs in sighted and braille
readers, yet could reflect processes related to reading difficulty, is the percentage of regressions, even though those regressions
are different in the two modalities (e.g., eye regressions are
ballistic, while finger regressions can stop at any time; eye
regressions benefit from parafoveal preview of the intended
target of the regression, while finger regressions have no
analogous benefits).
The present experiment is intended to disentangle the possible loci of the effects of letter position coding in reading. If
the high degree of flexibility of letter position coding during
reading is mostly due to perceptual uncertainty at the visual
level, as is claimed in a number of models (Davis, 2010; Gómez, Ratcliff, & Perea, 2008; Norris, Kinoshita, & van
Casteren, 2010), the cost of reading sentences with jumbled
words should be markedly large in the tactile modality—
clearly, larger than in the visual modality. Alternatively, if
the locus of the jumbled word phenomenon is a consequence
Table 1 Example sentences and results for each dependent measure (with means and standard errors in parentheses). [Table body not reproduced here.]
Note. The English translation of the example sentence is: “It was the best moment of her life”. NOR = normally (intact) written text; BEG = transpositions of initial letters; INT = transpositions of internal letters; END = transpositions of final letters. The (global) dependent variables are (1) number of fixations, (2) percentage of regressions, and (3) number of words per minute, and the (local) dependent variable is first-pass time on the target word (in the example, the target word is “momento”). For each column of results in the visual/tactile modalities, entries not sharing a subscript differ at p < .025. The reading cost in the main text was computed by averaging the reading cost across all participants, rather than on the average reading times; unsurprisingly, the two computations show the same pattern
of an abstract (amodal) level of representations intrinsic to all
orthographic systems (open-bigram accounts; Dehaene,
Cohen, Sigman, & Vinckier, 2005; Whitney, 2001), reading
with jumbled words in braille should not differ at a qualitative
level from reading jumbled words in the visual modality. The
alleged open-bigram detectors, according to this account, are
located in a brain area (the visual-word form area) that shows
similar patterns of activation during reading for sighted and
congenitally blind braille readers (Reich, Szwed, Cohen, &
Amedi, 2011). Finally, there are hybrid accounts that combine
the ideas of perceptual uncertainty and bigram activation
(Adelman, 2011; Grainger, Granier, Farioli, Van Assche, &
van Heuven, 2006).
To our knowledge, only one published experiment has
examined the process of letter position coding during braille
word recognition. Using a word/nonword discrimination task,
Perea et al. (2012) found that the responses to pseudowords
created by transposing/replacing two nonadjacent internal
letters (e.g., cholocate vs. chotonate) were equally fast and
accurate in braille readers. When presented in the visual
modality with sighted readers, these transposed-letter
pseudowords yielded much longer latencies and higher error rates than the replacement-letter pseudowords. Thus, at the level of
isolated word recognition, letter position coding in the tactile
modality is substantially less noisy (meaning that there is less
letter position uncertainty) than in the visual modality.
Nonetheless, as Perea et al. (2012) indicated, “with some
effort, they [braille readers] were able to volitionally reconstruct the base word of a number of pseudowords” (p. 3). To
examine in finer detail the intricacies of letter position coding
in a more natural scenario, the present experiment examined
the role of the different letter positions (initial, middle, final)
during normal reading.
The experiment was conducted in Spanish. The most common braille code in Spanish (Grade 1) does not employ abbreviations, whereas the most common braille code in English (Grade 2) employs many contractions and abbreviations (e.g., single-cell signs for the and for -ing; the glyphs are not reproduced here); thus, each sentence was composed of the same number of letters in the two modalities.
The word transpositions in the jumbled word sentences
involved adjacent letter positions, at the beginning of the
word, in the middle, or at the end (see Table 1), with the
constraint that only words with five or more letters had
letter transpositions.
The predictions are straightforward. Because of the
intrinsic seriality of braille reading, the cost of the
sentences with jumbled words should be substantially
larger for the initial letter position—the one that creates
the initial access code—than for the other positions, following a monotonic function (see Bertelson et al., 1992,
for evidence using word identification tasks). In contrast, we
expect to find a nonmonotonic (quadratic) function in the
visual modality (i.e., the smallest reading cost should be for
middle transpositions), as in the Rayner et al. (2006) experiment. The experiment also has implications for the different
models regarding the locus/loci of transposed-letter effects:
perceptual uncertainty at the visual level versus abstract representation accounts of letter position coding. If the processes
underlying letter position coding are (mostly) due to perceptual uncertainty at the visual level, reading jumbled words
during sentence reading in the tactile modality should involve
a much greater cost than in the visual modality. Alternatively,
if the processes underlying letter position coding are modality
independent (e.g., due to the activation of abstract open
bigrams), reading jumbled words during sentence reading
in the visual and tactile modalities should involve a similar
cost. While the open-bigram account has not explicitly been
expanded to braille reading, there is the claim that open
bigrams provide a near-optimal level of representation (Dandurand, Grainger, Duñabeitia, & Granier, 2011). If transposed-letter effects are a consequence of a representation intrinsic to all orthographic systems, and if the bigram representation is such a representation, open bigrams would be activated as the braille reader's finger progresses across the word.
movement. For example, if the word is TRIAL, the word
identification process would begin with the detection of the
letter T, followed by the letter R, which activates the bigram
[TR]; detecting I activates the bigrams [TR-TI-RI]; detecting A activates the bigrams [TR-TI-TA-RI-RA-IA]; and finally, detecting the final letter L activates the bigrams [TR-TI-TA-TL-RI-RA-RL-IA-IL-AL]. The idea is that while the manner
of braille reading makes the activation of the bigram-level
unit more serial, it is those bigrams that are the driving force
of the word recognition process.
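The incremental activation described above can be sketched in code. The sketch below is illustrative only: following the TRIAL example in the text, it generates every ordered letter pair in the prefix read so far, whereas actual open-bigram models typically restrict the allowed letter distance.

```python
from itertools import combinations

def open_bigrams_incremental(word):
    """For each prefix of `word`, return the open bigrams (ordered letter
    pairs at any distance) activated after reading that prefix serially."""
    states = []
    for n in range(1, len(word) + 1):
        prefix = word[:n]
        # every ordered pair of positions in the prefix yields one bigram
        states.append([prefix[i] + prefix[j]
                       for i, j in combinations(range(n), 2)])
    return states

# Reading TRIAL letter by letter, as a braille reader would:
for step, bigrams in enumerate(open_bigrams_incremental("TRIAL"), start=1):
    print(f"after letter {step}: {bigrams}")
```

Each new letter adds one bigram per previously seen letter, so the bigram set grows exactly as in the TRIAL walkthrough above.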
To summarize, given that there are differences between
braille and sighted reading due to motor and sensory limitations that constrain each of the two modalities, is there a
common abstract representation, and, if so, is that representation responsible for the transposed-letter effects described
above?
Method
Participants
In the braille subexperiment, we recruited 20 congenitally
blind participants (mean age 40 years, range = 24–60), all of
them university undergraduates/graduates in Valencia or
Málaga. They had learned braille at 5–6 years of age and reported reading braille on a daily basis. All of them
had normal hearing. Most (77%) of the participants were daily
braille display users; the rest used a braille display occasionally. None of the participants reported any difficulty while reading on the braille display. In the sighted-reading
subexperiment, we recruited 16 students from the University
of Valencia with normal vision/hearing. All participants were
native speakers of Spanish.
Stimuli
We created a set of 80 sentences in Spanish. Each sentence
contained a high-frequency target word (mean = 123.6 occurrences per million, range = 30–545, in the EsPal database [Duchon, Perea, Sebastián-Gallés, Martí, & Carreiras, 2013]; mean length = 7.4 letters, range = 7–9). None of the sentences contained
more than 40 characters, a length that could be displayed on a
single line in the braille display and on the computer screen.
As in the Rayner et al. (2006) experiment, we created four
versions of each sentence: (1) the sentence was intact; (2) the
initial letters of each word were transposed; (3) two internal
letters were transposed; and (4) the two final letters were
transposed. Words of four or fewer letters were not presented
in jumbled form. The initial word in the sentence was always
four letters or shorter (see Table 1). One quarter of the participants received each experimental list.
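The four sentence versions can be generated mechanically. The sketch below is hypothetical: the text specifies only that "two internal letters" were transposed, so the midpoint position used for the internal condition is an assumption, and the Spanish example sentence is a plausible reconstruction from Table 1's translation.

```python
def transpose(word, where):
    """Swap two adjacent letters at the BEGinning, INTerior, or END of a
    word; words of four or fewer letters are left intact, as in the
    experiment. The interior swap position (around the midpoint) is an
    assumption: the paper specifies only "two internal letters"."""
    if len(word) < 5:
        return word
    if where == "BEG":
        i = 0
    elif where == "END":
        i = len(word) - 2
    else:  # "INT"
        i = len(word) // 2 - 1
    chars = list(word)
    chars[i], chars[i + 1] = chars[i + 1], chars[i]
    return "".join(chars)

def jumble_sentence(sentence, where):
    """Apply the same transposition type to every eligible word."""
    return " ".join(transpose(w, where) for w in sentence.split())

print(jumble_sentence("era el mejor momento de su vida", "BEG"))
```

Note that short function words (era, el, de, su) pass through unchanged, matching the constraint that only words of five or more letters carried transpositions.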
Procedure
The experiment took place individually in a quiet room. For
the braille subexperiment, a HandyTech-Active Braille 40-cell
display was employed to present the sentences and record the
participant’s reading position during reading; participants used only their preferred hand. This device detects the participant’s tactile reading position (the finger exerting the strongest pressure) approximately every 16 ms (i.e., it provides one data point about 60 times per second: the braille cell in which the contact is firmest). For the sighted-reading subexperiment,
participants were instructed to read the printed sentences; their
eye movements were monitored with an Eyelink-II eyetracker
(500 Hz) using Eyetrack (available at http://blogs.umass.edu/
eyelab/software/). A chinrest was employed to reduce head
movements, and calibrations/recalibrations were performed
when necessary. In the two subexperiments, comprehension
questions were asked after 50% of the trials.
Results
Most participants claimed to have understood all of the sentences or to have had trouble with no more than two of them (94% and 95% of participants in the visual and tactile modalities, respectively). None of
the participants made more than two comprehension errors
(mean = 1.5% and 3.5% errors in the visual and tactile
modalities, respectively). The averages of the dependent
measures are shown in Table 1.
Given the analogous yet dissimilar nature of eye-tracking and braille-tracking measurements, we did not directly perform statistical inference comparing the two modalities, except for a global measure of reading cost. The average reading cost for participants (in terms of words per minute) from reading sentences with jumbled words (collapsed across transposition positions), relative to the normally written condition, was dramatically larger in braille than in the visual modality (34.0% vs. 18.7%, respectively), F(2, 68) = 40.28, η2 = .54, p < .001. Beyond this global comparison, we focused on the distinct features of the data from sighted versus braille reading regarding letter position encoding. To examine in detail the cost of reading jumbled words on the dependent measures, position of transposition (initial, middle, final) was the factor in the by-participants (F1) and by-items (F2) ANOVAs.
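As the note to Table 1 specifies, the global reading cost was computed by averaging per-participant costs rather than by computing one cost from the group-mean rates. A minimal sketch, using hypothetical words-per-minute values (not the study's data):

```python
def reading_cost_percent(intact_wpm, jumbled_wpm):
    """Percentage slowdown for a single participant."""
    return 100.0 * (intact_wpm - jumbled_wpm) / intact_wpm

def mean_reading_cost(intact_wpms, jumbled_wpms):
    """Average the per-participant costs (as in the Table 1 note),
    rather than deriving one cost from the group-mean reading rates."""
    costs = [reading_cost_percent(i, j)
             for i, j in zip(intact_wpms, jumbled_wpms)]
    return sum(costs) / len(costs)

# Hypothetical rates for two participants (not the study's data):
print(mean_reading_cost([100.0, 120.0], [66.0, 90.0]))  # prints 29.5
```

The two computations can diverge when participants differ widely in baseline speed, which is why the per-participant average is the safer summary.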
Reading rate (words per minute)
The data in the tactile modality reflected fewer words per minute (i.e., a greater cost) for sentences with transpositions at the initial letter position and less cost at the final letter position. This was
reflected in a linear function, F1(1, 19) = 43.18, η2 = .69,
p < .001; F2(1, 79) = 56.50, η2 = .42, p < .001; the quadratic
function was not significant, F1 < 1; F2(1, 79) = 2.26, η2 = .03,
p = .13. In contrast, in the visual modality, the number of words per minute was smaller for the external-letter transpositions and larger for the internal-letter transpositions. This was
reflected in a quadratic function, F1(1, 15) = 28.71, η2 = .66,
p < .001; F2(1, 79) = 18.37, η2 = .19, p < .001; the linear
function was negligible, both Fs < 1.
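The linear and quadratic trends reported here correspond to the standard orthogonal polynomial contrasts for three ordered levels. A small sketch with illustrative (not observed) condition means:

```python
def trend_contrasts(initial, internal, final):
    """Orthogonal polynomial contrasts over three ordered positions:
    linear = (-1, 0, +1), quadratic = (+1, -2, +1)."""
    linear = -initial + final
    quadratic = initial - 2 * internal + final
    return linear, quadratic

# Illustrative words-per-minute means (not the study's data):
print(trend_contrasts(60, 75, 85))     # braille-like: linear term dominates
print(trend_contrasts(150, 175, 152))  # visual-like: quadratic term dominates
```

A monotonic initial-to-final gradient loads on the linear contrast, while an external-versus-internal pattern loads on the quadratic one, which is exactly the dissociation between the tactile and visual results.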
Data analysis

We computed a number of global and local dependent variables. The global measures were number of words per minute, number of fixations (eye) or lingers (finger), and percentage of regressions (Rayner et al., 2006). The local measure was the first-pass time on the target word. For the eye-tracking subexperiment, fixation times beyond the 80- to 800-ms cutoffs were excluded from the analyses. For the braille subexperiment, pauses in which the participant’s reading finger lingered for more than 1 s on a given character were excluded from the analyses.

Number of fixations (visual) and lingers (braille)
The data in the visual modality reflected a larger number of
fixations for those sentences involving transpositions in the
external letters. Most of the variance was explained by a
quadratic function, F1(1, 15) = 27.50, η2 = .65, p < .001;
F2(1, 79) = 13.73, η2 = .15, p < .001, and there was also a
small linear function, F1(1, 15) = 7.46, η2 = .33, p = .015;
F2(1, 79) = 3.11, η2 = .10, p = .08. In the tactile modality, there is no variable that is directly comparable to the number of fixations, for the reasons mentioned in the introduction; however,
as another measurement of reading difficulties, we computed
the number of times in which the finger lingered on a braille
cell for longer than a refresh cycle (16 ms). Note that this does
not necessarily mean a pause in the finger movement but,
rather, a slowdown in motion. The number of lingers was larger for sentences involving transpositions at the initial letter position and smaller for sentences involving transpositions at the final letter position. Most of the variance
was explained by a linear function, F1(1, 19) = 33.17, η2 = .64,
p < .001; F2(1, 79) = 65.40, η2 = .45, p < .001, and there was
also a small quadratic component, F1(1, 19) = 5.85, η2 = .02,
p = .026; F2(1, 79) = 9.84, η2 = .11, p = .002.
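Linger and regression counts of this kind can be derived from the display's roughly 60-Hz stream of finger positions. The sample format below is an assumed representation of the display's output, not the documented HandyTech API:

```python
def braille_measures(samples):
    """Derive linger and regression counts from (time_ms, cell_index)
    samples taken once per display refresh (~16 ms). A linger = the finger
    stays on the same cell for more than one refresh cycle; a regression =
    a move to a lower (leftward) cell index. The tuple format is an
    assumed representation of the display's output, not a documented API."""
    lingers = 0
    regressions = 0
    dwell = 1
    for (_, prev_cell), (_, cell) in zip(samples, samples[1:]):
        if cell == prev_cell:
            dwell += 1
            if dwell == 2:  # just exceeded one refresh cycle on this cell
                lingers += 1
        else:
            dwell = 1
            if cell < prev_cell:
                regressions += 1
    return lingers, regressions

# Finger moves 0→1, lingers on 1, advances to 2, regresses to 1, moves on:
stream = [(0, 0), (16, 1), (32, 1), (48, 2), (64, 1), (80, 2), (96, 3)]
print(braille_measures(stream))  # prints (1, 1)
```

Counting a linger as soon as the dwell exceeds one refresh cycle matches the definition above, in which a linger marks a slowdown in motion rather than a full stop.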
Percentage of regressions
The regression data in the tactile modality reflected a linear
function (more regressions in the initial position and fewer
regressions in the final position), F1(1, 19) = 43.71, η2 = .70,
p < .001; F2(1, 79) = 53.50, η2 = .40, p < .001, while the
quadratic component did not approach significance,
F1(1, 19) = 1.37, η2 = .06, p = .26; F2(1, 79) = 2.53,
η2 = .03, p = .12. The regression data in the visual modality
failed to reveal any linear or quadratic trends, all Fs < 1.
First-pass time (on target word)
In the tactile modality, the first-pass time on the target word reflected a linear function, F1(1, 19) = 25.23,
η2 = .57, p < .001; F2(1, 79) = 64.10, η2 = .34, p < .001, and
was accompanied by a smaller quadratic component,
F1(1, 19) = 7.31, η2 = .27, p = .014; F2(1, 79) = 6.49,
η2 = .08, p = .013. In contrast, the data in the visual modality
reflected a quadratic function (i.e., there was a larger cost in
the external letter positions), F1(1, 15) = 29.98, η2 = .66,
p < .001; F2(1, 79) = 18.37, η2 = .19, p < .001, whereas the
linear function was negligible, both Fs < 1.
Discussion
As occurs with sighted readers in the visual modality, congenitally blind readers can accurately understand sentences with
jumbled words in braille. The high level of comprehension of
the sentences with jumbled words was accompanied by a
reading cost. This cost was modulated by the position of the
transposition (initial, internal, final) and the modality. In the
visual modality, the cost reflected mostly a quadratic function
(i.e., larger cost of the external letter transpositions, thus
replicating Rayner et al., 2006), whereas in the tactile modality, the cost reflected a linear function (i.e., larger cost of the
initial letter position). Thus, the nature of word processing is
more parallel in the visual modality and more serial in the
tactile modality. The reading cost of presenting jumbled words
in braille was larger than that in the visual modality (in terms
of words per minute: 34.0% vs. 18.7%, respectively), which
favors those models that assume that perceptual uncertainty at
the visual level is key in the high degree of flexibility of letter
position coding.
Therefore, the characteristics of the sensory modality shape
the process of letter position coding during sentence reading.
While the perceptual system in the visual modality has to deal
with the identity and position of several objects (e.g., letters
while reading) in a single gaze, the process in the tactile
modality is manifestly more serial. The larger reading cost in
braille reading than in sighted reading fits entirely with models
of letter position coding that assume that the jumbled word
phenomenon is the result of perceptual uncertainty when the
location of objects is processed in the visual modality (see
Perea, García-Chamorro, Centelles, & Jiménez, 2013, for
evidence when music is read).1 In the case of visual reading,
a feasible account of the present findings is based on the
overlap model (Gómez et al., 2008). In this model, the position of a visual stimulus in a string is described as a field,
rather than as a discrete point (e.g., the positions of the letters I and A in TRIAL have some overlap in their encoded locations).
When we present a sentence such as “FNACY LAYWER . . . ,”
these strings of letters will have a large overlap with the representations of their base words (“FANCY LAWYER . . .”), thus
producing little cost relative to the intact sentence. The overlap
model assumes that these fields can be described as normal
distributions and that different letters can have smaller and
larger standard deviations (σ) (i.e., perceptual noise). In the
visual modality, the external letters would have smaller σs,
and hence, there would be less overlap than in internal letters.
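This overlap computation can be illustrated with a toy version of the model. The σ values and the simple summed-overlap match score below are illustrative assumptions, not the fitted parameters or decision rule of Gómez et al. (2008):

```python
import math

def normal_cdf(x):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def position_overlap(p1, p2, sigma=1.0):
    """Shared area of two equal-variance normal position fields centered
    at p1 and p2 (equals 1.0 when the centers coincide)."""
    return 2.0 * normal_cdf(-abs(p1 - p2) / (2.0 * sigma))

def match_score(stimulus, word, sigmas=None):
    """Toy overlap-model score: for every letter the stimulus shares with
    the word, add the positional overlap between where it appears and its
    canonical slot. The per-slot sigmas are illustrative assumptions."""
    sigmas = sigmas if sigmas is not None else [1.0] * len(word)
    return sum(position_overlap(i, j, sigmas[j])
               for i, ch in enumerate(stimulus)
               for j, wch in enumerate(word) if ch == wch)

# A transposition (JUGDE) stays closer to JUDGE than a substitution (JUPTE):
print(match_score("JUGDE", "JUDGE") > match_score("JUPTE", "JUDGE"))  # True
# With smaller sigma (less positional noise) on the external letters,
# external transpositions are penalized more than internal ones:
ext = [0.5, 1.0, 1.0, 1.0, 0.5]
print(match_score("JUGDE", "JUDGE", ext) > match_score("UJDGE", "JUDGE", ext))  # True
```

The second comparison reproduces, in miniature, the claim in the text: shrinking σ for external letters reduces the model's tolerance for external transpositions, yielding the quadratic cost pattern observed in the visual modality.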
Importantly, the present experiment poses some problems for
those accounts that assume that the graded representation in
letter position coding is intrinsic to reading in all orthographic
codes (i.e., open-bigram models; Dehaene et al., 2005). These
accounts assume an abstract level of representations of open
bigrams shared by all orthographic codes; note that open-bigram detectors are allegedly activated in the same brain area for sighted and blind readers (Reich et al., 2011). Thus, open-bigram accounts would have predicted a similar reading cost in
the two modalities.
How are letter identities and positions attained in braille?
One might argue that the location of a given letter in a word is
obtained earlier than its identity. This can readily explain the
presence of a strong linear component in performance as a function of position of the letter transposition (see Bertelson et al., 1992, for evidence of a uniqueness point effect in braille word identification).

1 We acknowledge, however, that at some level, there are processes that are specific to letter/word processing (e.g., word/nonword differences in letter position coding, as shown in Gómez et al., 2008). However, these “late” processes may not require a level of open-bigram detectors that encode letter position. Additional research is necessary to determine whether bigrams (or some form of frequent orthographic/morphological chunks) play a relevant role during word recognition and reading, as proposed by hybrid models of letter position coding (Adelman, 2011; Grainger et al., 2006).

The limitations of the motor and tactile
systems make even the most expert braille readers slower than
fluent sighted readers, as is clear from the reading times in our
baseline (intact) condition. Note that this slower pace does not simply scale up the effects: The reading cost of the transposed-letter conditions
was linear in the tactile modality (i.e., less reading cost for
the final transpositions) and quadratic in the visual modality
(i.e., less reading cost for the internal transpositions). Thus,
braille reading is qualitatively different from sighted reading.
Determining whether this difference occurs because of the
slower pace, the sensory modality, or any other factor not
explored here is beyond the scope of this article.
In sum, the present experiment demonstrates that comparing the data from parallel sighted versus braille reading experiments can give us insights into modality-specific versus
modality-independent processes in sentence reading. In particular, the characteristics of the visual and tactile sensory systems
shape the way letter positions are encoded during sentence
reading. Future research should focus on how sensory modality modulates higher-level processes during sentence reading.
Acknowledgments The research reported in this article has been partially supported by Grant PSI2011-26924 from the Spanish Ministry of
Economy and Competitiveness. María Jiménez was the recipient of a
postgraduate grant from the program “Atracció de Talent” at the University of Valencia (VLC-Campus). We would like to thank Antonio Ferrer
for help in setting up the braille experiment. We are also indebted to the
Organización Nacional de Ciegos (National Organization of Spanish Blind People) for their invaluable help at all stages of this project. Finally, we would also like to thank Simon Fischer-Baum, Barry Hughes, and an anonymous reviewer for their very valuable feedback on earlier versions of the
manuscript.
References
Adelman, J. S. (2011). Letters in time and retinotopic space. Psychological
Review, 118, 570–582. doi:10.1037/a0024811
Adelman, J. S., Marquis, S. J., & Sabatos-DeVito, M. G. (2010).
Letters in words are read simultaneously, not in left-to-right
sequence. Psychological Science, 21, 1799–1801. doi:10.1177/
0956797610387442
Bertelson, P., Mousty, P., & Radeau, M. (1992). The time course of
braille word recognition. Journal of Experimental Psychology:
Learning, Memory, and Cognition, 18, 284–297. doi:10.1037/
0278-7393.18.2.284
Dandurand, F., Grainger, J., Duñabeitia, J. A., & Granier, J. P. (2011). On
coding non-contiguous letter combinations. Frontiers in Cognitive
Science, 2, 136.
Davis, C. J. (2010). The spatial coding model of visual word
identification. Psychological Review, 117, 713–758. doi:10.1037/a0019738
Dehaene, S., Cohen, L., Sigman, M., & Vinckier, F. (2005). The neural
code for written words: A proposal. Trends in Cognitive Sciences, 9,
335–341. doi:10.1016/j.tics.2005.05.004
Duchon, A., Perea, M., Sebastián-Gallés, N., Martí, A., & Carreiras, M.
(2013). EsPal: One-stop shopping for Spanish word properties.
Behavior Research Methods, 45, 1246–1258. doi:10.3758/s13428-013-0326-1
Frost, R. (2012). Towards a universal model of reading. Behavioral and
Brain Sciences, 35, 263–279. doi:10.1017/S0140525X11001841
Gómez, P., Ratcliff, R., & Perea, M. (2008). The overlap model: A model
of letter position coding. Psychological Review, 115, 577–601. doi:
10.1037/a0012667
Grainger, J., Granier, J.-P., Farioli, F., Van Assche, E., & van Heuven, W. J.
B. (2006). Letter position information and printed word perception:
The relative-position priming constraint. Journal of Experimental
Psychology: Human Perception and Performance, 32, 865–884.
doi:10.1037/0096-1523.32.4.865
Hughes, B., McClelland, A., & Henare, D. (2014). On the nonsmooth,
nonconstant velocity of braille reading and reversals. Scientific
Studies of Reading, 18, 94–113. doi:10.1080/10888438.2013.802203
Norris, D., Kinoshita, S., & van Casteren, M. (2010). A stimulus sampling theory of letter identity and order. Journal of Memory and
Language, 62, 254–271. doi:10.1016/j.jml.2009.11.002
Perea, M., García-Chamorro, C., Centelles, A., & Jiménez, M. (2013).
Position coding effects in a 2D scenario: The case of musical
notation. Acta Psychologica, 143, 292–297. doi:10.1016/j.actpsy.
2013.04.014
Perea, M., García-Chamorro, C., Martín-Suesta, M., & Gómez, P. (2012).
Letter position coding across modalities: The case of braille
readers. PLoS ONE, 7(10), e45636. doi:10.1371/journal.pone.
0045636
Perea, M., & Lupker, S. J. (2003). Does jugde activate COURT?
Transposed-letter confusability effects in masked associative
priming. Memory and Cognition, 31, 829–841. doi:10.3758/
BF03196438
Perea, M., & Lupker, S. J. (2004). Can CANISO activate CASINO?
Transposed-letter similarity effects with nonadjacent letter positions.
Journal of Memory and Language, 51, 231–246. doi:10.1016/j.jml.
2004.05.005
Rayner, K., White, S. J., Johnson, R. L., & Liversedge, S. P. (2006).
Raeding wrods with jubmled lettres: There’s a cost. Psychological
Science, 17, 192–193. doi:10.1111/j.1467-9280.2006.01684.x
Reich, L., Szwed, M., Cohen, L., & Amedi, A. (2011). A ventral visual
stream reading center independent of visual experience. Current
Biology, 21, 363–368. doi:10.1016/j.cub.2011.01.040
Ryles, R. (1996). The impact of braille reading skills on employment,
income, education, and reading habits. Journal of Visual Impairment
and Blindness, 90, 219–226.
Tydgat, I., & Grainger, J. (2009). Serial position effects in the identification of letters, digits and symbols. Journal of Experimental
Psychology: Human Perception and Performance, 35, 480–498.
doi:10.1037/a0013027
Vergara-Martínez, M., Perea, M., Gómez, P., & Swaab, T. Y. (2013). ERP
correlates of letter identity and letter position are modulated by
lexical frequency. Brain and Language, 125, 11–27. doi:10.1016/j.
bandl.2012.12.009
Whitney, C. (2001). How the brain encodes the order of letters in a printed
word: The SERIOL model and selective literature review.
Psychonomic Bulletin & Review, 8, 221–243. doi:10.3758/
BF03196158