
Cerebral Cortex August 2005;15:1113--1122
doi:10.1093/cercor/bhh210
Advance Access publication November 24, 2004
Cross-modal integration and plastic
changes revealed by lip movement,
random-dot motion and sign languages
in the hearing and deaf
Norihiro Sadato1,2,3, Tomohisa Okada1,4, Manabu Honda1,
Ken-Ichi Matsuki5, Masaki Yoshida5, Ken-Ichi Kashikura2,
Wataru Takei6, Tetsuhiro Sato2, Takanori Kochiyama7 and
Yoshiharu Yonekura2
1 National Institute for Physiological Sciences, Okazaki, Japan, 2 Biomedical Imaging Research Center, Fukui University, Fukui, Japan, 3 JST (Japan Science and Technology Corporation)/RISTEX (Research Institute of Science and Technology for Society), Kawaguchi, Japan, 4 Department of Image-based Medicine, Institute of Biomedical Research and Innovation, Kobe, Japan, 5 Department of Education, Fukui University, Fukui, Japan, 6 Department of Education, Kanazawa University, Kanazawa, Japan and 7 Graduate School of Human and Environmental Studies, Kyoto University, Kyoto, Japan
Sign language activates the auditory cortex of deaf subjects, which is
evidence of cross-modal plasticity. Lip-reading (visual phonetics),
which involves audio-visual integration, activates the auditory cortex
of hearing subjects. To test whether audio-visual cross-modal
plasticity occurs within areas involved in cross-modal integration,
we used functional MRI to study seven prelingual deaf signers, 10
hearing non-signers and nine hearing signers. The visually presented
tasks included mouth-movement matching, random-dot motion
matching and sign-related motion matching. The mouth-movement
tasks included conditions with or without visual phonetics, and
the difference between these was used to measure the lip-reading
effects. During the mouth-movement matching tasks, the deaf
subjects showed more prominent activation of the left planum
temporale (PT) than the hearing subjects. During dot-motion matching, the deaf showed greater activation in the right PT. Sign-related
motion, with or without a lexical component, activated the left PT
in the deaf signers more than in the hearing signers. These areas
showed lip-reading effects in hearing subjects. These findings
suggest that cross-modal plasticity is induced by auditory deprivation
independent of the lexical processes or visual phonetics, and this
plasticity is mediated in part by the neural substrates of audio-visual
cross-modal integration.
Keywords: blood flow, cortex, deafness, language, magnetic resonance,
speech
Introduction
There is evidence of cross-modal plasticity during the perception of sign language by deaf subjects. Sign language involves the
visual perception of hand and face movements. Using functional
MRI (fMRI), Neville et al. (1998) observed increased activity in
the superior temporal sulcus (STS) during comprehension of
American Sign Language in congenitally deaf subjects and hearing
signers. They suggested that the STS is involved in the linguistic
analysis of sign language. Using PET imaging, Nishimura et al.
(1999) found that activity was increased in the auditory
association cortex, but not in the primary auditory cortex, of
a prelingual deaf individual during comprehension of Japanese
sign language. After this patient received a cochlear implant, the
primary auditory cortex was activated by the sound of spoken
words, but the auditory association cortex was not. Nishimura
et al. (1999) suggested that audio-visual cross-modal plasticity is
confined to the auditory association cortex, and that cognitive
functions (such as sign language) might lead to functional
plasticity in the underutilized auditory association cortex.
Moreover, Petitto et al. (2000) observed increased activity in
the superior temporal gyrus (GTs) in native deaf signers compared with hearing non-signers. These findings suggest that the auditory association cortex is where the changes associated with audio-visual cross-modal plasticity occur.

However, the activation of the auditory cortex by visual stimuli is not specific to deaf subjects. In hearing subjects, lip-reading enhances the perception of spoken language. Combining audible speech with visible articulation movements can improve comprehension to the same extent as altering the acoustic signal-to-noise ratio by 15--20 dB (Sumby and Pollack, 1954). Calvert et al. (1997) found that linguistic visual cues during lip-reading activated the auditory cortex and GTs in hearing individuals in the absence of speech sounds, whereas non-linguistic facial movement did not. The findings of Calvert et al. (1997) suggest that this type of audio-visual integration (face/voice matching) occurs in the superior temporal cortices.

Our hypothesis was that audio-visual cross-modal plasticity occurs within areas involved in cross-modal integration. To test this, we recruited deaf and hearing subjects for an fMRI study. Three tasks were included: mouth-movement matching, random dot-movement matching and sign-related motion matching. The mouth-movement matching task contained open-mouth movements (OPEN), representing visual phonetics, and closed-mouth movements (CLOSED), which are not phonetic. Thus, the difference between the OPEN and CLOSED conditions can be used as a marker for phonetic speech perception (the lip-reading effect). Cross-modal plastic changes were evaluated by comparing the task-related activation between the deaf and hearing groups during closed-mouth, random-dot and sign-related movements without lexical processing. The bimodal activation of the auditory cortex in hearing subjects suggests that visual and auditory inputs are balanced competitively (Rauschecker, 1995). Hence, with auditory deprivation, the bimodal auditory association cortex is expected to be activated by visual processing that usually does not activate the auditory cortex. As the CLOSED task does not contain visual phonetics, we expected that it would not activate the association auditory cortex of the hearing subjects, whereas it would activate that of the deaf subjects. The random-dot movement-matching task was included to determine whether the plastic change is specific to biological mouth motion or related to motion in general. Sign-word/non-word discrimination tasks were also included to determine whether the plastic changes are related to the motion of the hands or to linguistic processes. To this end, a direct comparison was made between the deaf signers and the hearing signers.
Subjects and Methods
Subjects
Our volunteers were 19 hearing subjects (11 women and eight men)
aged 36.0 ± 10.7 years (mean ± SD), and seven prelingual deaf subjects
(one woman and six men) aged 31.6 ± 7.1 years. Nine of the hearing
subjects and all of the deaf subjects were fluent in Japanese sign
language. The deaf subjects had also been trained in lip-reading at a school for the deaf. The characteristics of the subjects are summarized in Table 1.
The subjects were all right-handed according to the Edinburgh
handedness inventory (Oldfield, 1971). None of the subjects had any
history of neurological or psychiatric illness, and none had any
neurological deficits other than deafness. The protocol was approved
by the ethical committee of Fukui Medical University, Japan, and the
National Institute for Physiological Sciences. All subjects gave their
written informed consent for the study.
fMRI Experiment
This experiment consisted of three tasks: a lip-movement task,
a moving-dot task and signed-word discrimination. Each task was
performed in separate sessions. Hearing signers, hearing non-signers
and deaf signers underwent the mouth-movement and dot-motion tasks.
Hearing signers and deaf signers participated in the signing tasks. The
lip-motion and dot-motion sessions were repeated twice.
Experimental Setup
Visual stimuli were displayed on a half-transparent screen hung 285 cm
from the subject’s eyes, and were shown via an LCD projector (Epson
ELP-7200L, Tokyo, Japan) connected to a digital video camera (SONY,
Tokyo, Japan). Subjects viewed the screen through a mirror. The screen
covered a visual angle of ~10°. The subject’s right hand was placed on
a button box connected to a microcomputer that recorded the subject’s
responses.
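As a point of reference, the physical width of a display subtending this visual angle at the stated viewing distance follows from elementary trigonometry; the ~50 cm figure below is our own back-of-the-envelope calculation rather than a value reported in the paper:

$$w = 2d\tan(\theta/2) = 2 \times 285\ \mathrm{cm} \times \tan(5^{\circ}) \approx 49.9\ \mathrm{cm}.$$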
MRI
A time-course series of volumes was acquired using T2*-weighted
gradient echo-planar imaging (EPI) sequences with a 3.0 T magnetic
resonance imager (VP, General Electric, Milwaukee, WI). Each volume
consisted of 28 slices, with a slice thickness of 4.0 mm and a 1.0 mm gap, and included the entire cerebral and cerebellar cortex. The time interval between two successive acquisitions of the same image was 2500 ms, the echo time was 30 ms and the flip angle was 81°. The field of view (FOV) was 19 cm. The in-plane matrix size was 64 × 64 pixels with a pixel dimension of 3.0 × 3.0 mm. Tight, but comfortable, foam padding was placed around the subject’s head to minimize head movement. Immediately after the five fMRI sessions, T2-weighted fast-spin echo images were obtained from each subject at slice locations identical to those of the EPIs, for anatomical reference. In addition, high-resolution whole-brain MRIs were obtained with a conventional T2-weighted fast-spin echo sequence. A total of 112 transaxial images were obtained. The in-plane matrix size was 256 × 256, slice thickness was 1.5 mm and pixel size was 0.742 × 0.742 mm.
Table 1
Characteristics of the participants

Deaf
Subject no.  Age  Sex  Age at start of sign language (years)  Age at onset of deafness (years)  Cause of deafness  Residual hearing (dB)
1            34   F     9                                      <3                                Prematurity        100
2            44   M     5                                       0                                Prematurity        100
3            32   M     7                                       0                                Prematurity        >100
4            26   M    19                                       0                                Prematurity        100
5            33   M     6                                       0                                Fever              >100
6            21   M     1                                       0                                Inherited          >100
7            31   M     6                                       0                                Fever              >100

Hearing signers
Subject no.  Age  Sex  Age at start of sign language (years)
1            51   F    46
2            35   F    29
3            37   F    31
4            30   F    23
5            25   F    17
6            43   F    35
7            53   F    45
8            59   F    49
9            46   M    20

Hearing non-signers
Subject no.  Age  Sex
1            25   F
2            25   M
3            27   M
4            27   M
5            28   M
6            29   F
7            29   M
8            29   M
9            39   F
10           46   M
Face-movement Task (Fig. 1)
The event-related design consisted of three event conditions: OPEN,
CLOSED and a still face. Throughout the session, the subjects fixated
a small cross-hair in the center of the screen. During the OPEN
condition, four consecutive frames (frame duration = 500 ms) of
a face pronouncing a vowel (/a/, /i/, /u/, /e/, or /o/) were presented
(Fig. 1). No auditory input was provided. Following the stimulus, we
presented a question mark (‘?’) for 400 ms. This prompted the subject to
press a button with their right index finger if the movements of the lips
in the first and last frames were the same, and with the middle finger if
the movements were not the same. The CLOSED condition was similar
to the OPEN condition, except that the mouth was always closed during
the movement, which consisted of five patterns of twitches of the lips
(Fig. 1). During the still-face event condition (STILL), a face without any
movement was presented for 2 s followed by the 400 ms presentation of
an arrow pointing to the right or left. The subject was instructed to
press the button with their right index finger if the leftward arrow was
shown, and with the middle finger if the rightward arrow was shown.
The inter-trial interval (ITI) was fixed at 3 s. Each condition was
repeated 30 times, so there were 90 events in total.
To maximize the efficiency of detecting the differences between the
CLOSED and STILL, OPEN and STILL, and OPEN and CLOSED conditions,
the distribution of stimulus onset asynchronies (SOAs) of each condition was determined as follows (Friston et al., 1999b): the order of the 90 events (30 for each condition) was permuted randomly to generate a set of three vectors (1 × 90 matrices) indicating the presence (1) or absence (0) of a particular event, and hence representing the distribution of the SOAs of each condition (SOA vectors). A design matrix (X) incorporating the three conditions (CLOSED, OPEN and STILL) was created by convolving the set of SOA vectors with a hemodynamic-response function h:

$$X = [\,\vec{c}\;\; \vec{o}\;\; \vec{s}\,] \ast h,$$

where $\vec{c}$ is the SOA vector for CLOSED, $\vec{o}$ for OPEN and $\vec{s}$ for STILL.
The efficiency of the estimation of the CLOSED--STILL, OPEN--STILL and OPEN--CLOSED comparisons was evaluated by taking the inverse of the covariance of the contrast between parameter estimates (Friston et al., 1999b),

$$\mathrm{var}\{c^{T}\hat{\beta}\} = \sigma^{2}\,c^{T}(X^{T}X)^{-1}c,$$

$$\mathrm{Efficiency} = \big[\mathrm{trace}\{c^{T}(X^{T}X)^{-1}c\}\big]^{-1},$$

where c = (1, 0, --1) for CLOSED--STILL, (0, 1, --1) for OPEN--STILL and (--1, 1, 0) for OPEN--CLOSED. From the 1000 randomly generated sets of SOA vectors, we selected the most efficient based on the maximum of the sum of the squared efficiencies for the three contrasts. The session took 4.5 min and was performed twice per subject.
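The selection procedure can be sketched in a few lines of code. The following is a minimal illustration (our own, not the authors' implementation): it scores 1000 random orderings of the 90 events by the efficiency of the three contrasts and keeps the best one. The double-gamma HRF, the fine time grid and the use of NumPy/SciPy are assumptions made for the sake of a runnable example.

import numpy as np
from scipy.stats import gamma

N_EVENTS, ITI, TR = 90, 3.0, 2.5            # 30 events per condition, fixed 3 s ITI
CONTRASTS = np.array([[1, 0, -1],           # CLOSED - STILL
                      [0, 1, -1],           # OPEN   - STILL
                      [-1, 1, 0]])          # OPEN   - CLOSED

def hrf(t):
    # Double-gamma approximation of the hemodynamic response (an assumption).
    return gamma.pdf(t, 6) - gamma.pdf(t, 16) / 6.0

def efficiency(order, dt=0.1):
    # Build the three SOA vectors on a fine time grid, convolve with the HRF,
    # resample at the TR, and return 1 / trace{c' (X'X)^-1 c} for each contrast.
    n_t = int((N_EVENTS * ITI + 32) / dt)
    X = np.zeros((n_t, 3))
    for i, cond in enumerate(order):
        X[int(i * ITI / dt), cond] = 1.0
    h = hrf(np.arange(0, 32, dt))
    X = np.apply_along_axis(lambda s: np.convolve(s, h)[:n_t], 0, X)
    X = X[::int(TR / dt)]
    xtx_inv = np.linalg.pinv(X.T @ X)
    return np.array([1.0 / (c @ xtx_inv @ c) for c in CONTRASTS])

rng = np.random.default_rng(0)
base = np.repeat([0, 1, 2], 30)              # 0 = CLOSED, 1 = OPEN, 2 = STILL
orders = [rng.permutation(base) for _ in range(1000)]
best_order = max(orders, key=lambda o: np.sum(efficiency(o) ** 2))

Here the orderings are scored by the sum of squared efficiencies, as in the text; any other combination rule could be substituted in the final call.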
Dot-motion Task
The fMRI session with random-moving dot stimuli consisted of three
periods of stationary dots and three periods of moving dots; each period
was 30 s long, alternating stationary and moving stimuli (Fig. 1).
Throughout the session, the subjects were asked to fixate a small
cross-hair in the center of the screen. During the 30 s stationary period, stationary random dots were presented every 3 s for 2 s, followed by the 400 ms appearance of an arrow pointing either right or left. The frame size of the presented dots was 5°. The subject was instructed to press
a button with their right index finger if the leftward arrow appeared, and
with the middle finger if the rightward arrow appeared. During the 30 s
moving-dot period, every 3 s four consecutive frames of coherently
moving random dots were presented. In each frame, the random dots
moved for 500 ms in a single horizontal direction (right or left). The dots
moved at a speed of 2.5°/s. This was followed by a question mark, which
was presented for 400 ms. This cued the subject to press the button
with the right index finger if the direction of the movement in the first
and last frames was the same, and otherwise to respond by pressing the
middle finger button. Thus, the subject was required to match the
movement direction of the random dots in the first and last frames.
The session took 3 min and was performed twice per subject.
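A compact sketch of the trial logic for this task (our own illustration; dot rendering is omitted): four 500 ms frames of coherent horizontal motion are drawn, and the correct response depends on whether the first and last frames share a direction. The dot speed and frame duration come from the description above; everything else is a placeholder.

import numpy as np

SPEED_DEG_S, FRAME_S, N_FRAMES = 2.5, 0.5, 4

def make_trial(rng):
    # One random horizontal direction (-1 = left, +1 = right) per 500 ms frame.
    directions = rng.choice([-1, +1], size=N_FRAMES)
    # Horizontal displacement of the coherent dots within each frame (degrees).
    displacement = directions * SPEED_DEG_S * FRAME_S      # 1.25 deg per frame
    same = directions[0] == directions[-1]                  # index-finger response if True
    return directions, displacement, same

rng = np.random.default_rng(0)
trials = [make_trial(rng) for _ in range(30)]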
Sign Task
We selected 30 sign words that can be expressed by moving the hands
and arms, and without facial expression. The animated sign words were
generated with commercially available software (Mimehand II, Hitachi,
Tokyo, Japan) implemented on a personal computer (Endevor Pro-600L,
Dell, Tokyo, Japan). This software can modify all components of the
movement separately. As the sign words are composed of several
component movements of the hands or arms, pseudo-words can be
generated by modifying a single component of each word. Thirty
pseudo-words were generated from the 30 words. Non-words were
generated by flipping the 30 words and 30 pseudo-words upside down.
Finally, a still image was generated.
The event-related design consisted of three event conditions:
word discrimination (D), non-discrimination (ND) and still (S). The
order of presentation was determined to maximize the efficiency of
the predefined contrasts, as in the face-movement tasks. During the D
condition, a single frame (2000 ms duration) of an animated character
moving her hands and arms appeared, followed by a question mark (400
ms), which cued the subject to press the button with their right index
finger if the movements constituted a real word, and otherwise to press
the button with the middle finger. The ND condition was similar to D,
except that the reversed upside-down images were presented, and the
arrows were presented for 400 ms. The subject was instructed to press
the button with their right index finger if the leftward arrow appeared,
and with the middle finger if the rightward arrow appeared. During the
S condition, the same characters were presented but without any
movement, and both flipped and non-flipped images were presented.
The stimuli lasted 2 s and were followed by an arrow pointing either
right or left for 400 ms. The subject was instructed to press the button
with their right index finger if the leftward arrow appeared, and with
the middle finger if the rightward arrow appeared. The ITI was fixed at
3 s. Each condition was repeated 60 times and there were 180 events
in total. The session took 9 min and was performed once per subject.
Figure 1. Task design. (Top) The event-related design of the face-motion
discrimination task consisted of three conditions: OPEN, CLOSED and a still face.
During the OPEN condition, four consecutive frames (each frame = 500 ms duration)
of a face pronouncing a vowel (/a/, /i/, /u/, /e/, or /o/) were presented. A 400 ms
presentation of a question mark ‘?’ followed, which prompted the subject to press the
button with their right index finger if the movement of the lips in the first and last
frames was the same, and otherwise to press the button with their right middle finger.
The CLOSED condition was similar to the OPEN condition, except that the mouth was
closed during the movement. During the still-face condition (STILL), a face without any
movement was presented for 2 s followed by the 400 ms presentation of an arrow that
pointed either right or left. The subject was instructed to press the button with their
right index finger if the leftward arrow was displayed, and with the middle finger if the
rightward arrow appeared. Each condition was repeated 30 times. (Middle) Block
design of the dot-motion discrimination task. (Bottom) The event-related design of the
sign-word discrimination task consisted of a sign-word/pseudo-word discrimination
condition (D), a non-discrimination (ND) condition using flipped images, and control
conditions [Still (S) and Flipped Still (FS)], during which corresponding still images
were presented. During the D condition, a single frame of an animated character
moving her hands and arms was presented for 2 s, followed by the presentation of
a question mark ‘?’ for 400 ms, which cued the subject to press the button with their
right index finger if the movements constituted a real word, and otherwise to perform
the button press with their right middle finger. ND was similar to D, except that the
reversed upside-down images were presented, followed by the presentation of arrows
indicating the button press for 400 ms. S was similar to ND, except that the still images of the same character, both flipped and non-flipped, were presented. The ITI was fixed at 3 s. Each condition was repeated 30 times.
Data Analysis
The data were analyzed using statistical parametric mapping (SPM99,
Wellcome Department of Cognitive Neurology, London, UK) implemented in Matlab (Mathworks, Sherborn, MA; Friston et al., 1994,
1995a,b). As our scanning protocol used sequential axial-slice acquisition
in ascending order, signals in different slices were measured at different
time points. To correct for this, we interpolated and resampled the data
(Büchel and Friston, 1997). Following realignment, all images were coregistered to the high-resolution three-dimensional (3-D) T2-weighted
MRI, using the anatomical MRI with T2-weighted spin-echo sequences
from locations identical to those of the fMRI images. The parameters for
affine and nonlinear transformation into a template of T2-weighted
images already fitted to standard stereotaxic space (MNI template; Evans
et al., 1994) were estimated using the high-resolution 3-D T2-weighted
MRI employing least-square means (Friston et al., 1995b). The parameters were applied to the co-registered fMRI data. The anatomically
normalized fMRI data were filtered using a Gaussian kernel of 8 mm (full
width at half maximum) in the x, y and z axes.
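As a concrete illustration of this final preprocessing step, the sketch below applies an isotropic 8 mm FWHM Gaussian kernel to one volume, converting FWHM to the Gaussian sigma. The voxel size and the random array are placeholders, and SciPy stands in for the SPM99 smoothing routine.

import numpy as np
from scipy.ndimage import gaussian_filter

FWHM_MM = 8.0                                                # full width at half maximum
VOXEL_MM = np.array([3.0, 3.0, 3.0])                         # assumed voxel size after normalization
sigma_vox = (FWHM_MM / np.sqrt(8 * np.log(2))) / VOXEL_MM    # FWHM = sigma * sqrt(8 ln 2)

volume = np.random.rand(64, 64, 28)                          # stand-in for one normalized volume
smoothed = gaussian_filter(volume, sigma=sigma_vox)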
Statistics
Statistical analysis was conducted at two levels. First, individual task-related activation was evaluated. Second, individual data were summarized and incorporated into a random-effect model so that inferences
could be made at a population level (Friston et al., 1999a).
Individual Analysis
The signal was scaled proportionally by setting the whole-brain mean
value to 100 arbitrary units. Proportional scaling was performed to
adjust for the fluctuation of the sensitivity of the MRI scanner which
affects both global and local signals. Here we assume that the global
signal (which is an indirect measure of global cerebral blood flow) is
unchanged and that the local change will not affect the global signal.
The signal time-course of each subject was modeled with two box-car functions convolved with a hemodynamic-response function, together with high-pass filtering and a session effect. To test hypotheses about regionally specific
condition effects, the estimates for each condition were compared
using linear contrasts of the matching task versus the rest period. The
resulting set of voxel values for each comparison constituted a statistical
parametric map (SPM) of the t statistic (SPM{t}). The SPM{t} was
transformed to normal-distribution units (SPM{Z}). The threshold for
the SPM{Z} of individual analyses was set at P < 0.05, correcting for
multiple comparisons at the voxel level for the entire brain (Friston
et al., 1996).
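The individual-level model can be summarized with a short sketch (our own illustration, not SPM99 itself): each scan is scaled to a whole-brain mean of 100, the design is fitted by ordinary least squares, and a task-versus-rest contrast is expressed first as a t value and then as a Z score. The toy design matrix and data are placeholders.

import numpy as np
from scipy import stats

def contrast_map(Y, X, c):
    # Y: scans x voxels, X: scans x regressors, c: contrast vector over regressors.
    Y = 100.0 * Y / Y.mean(axis=1, keepdims=True)        # proportional scaling to mean 100
    beta = np.linalg.lstsq(X, Y, rcond=None)[0]
    resid = Y - X @ beta
    dof = X.shape[0] - np.linalg.matrix_rank(X)
    sigma2 = (resid ** 2).sum(axis=0) / dof
    se = np.sqrt(sigma2 * (c @ np.linalg.pinv(X.T @ X) @ c))
    t = (c @ beta) / se                                   # SPM{t}
    z = stats.norm.isf(stats.t.sf(t, dof))                # SPM{t} -> SPM{Z}
    return t, z

# Toy usage (HRF convolution, high-pass filtering and session effects omitted for brevity).
n_scans, n_voxels = 120, 5000
task = np.tile([1.0] * 15 + [0.0] * 15, 4)                # box-car regressor
X = np.column_stack([task, np.ones(n_scans)])             # task + constant
Y = 500 + 5 * np.outer(task, np.random.rand(n_voxels)) + np.random.randn(n_scans, n_voxels)
t_map, z_map = contrast_map(Y, X, np.array([1.0, 0.0]))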
Group Analysis with the Random-effect Model
The weighted sum of the parameter estimates in the individual analysis
constituted ‘contrast’ images that were used for the group analysis
(Friston et al., 1999a). The contrast images obtained by individual
analysis represent the normalized task-related increment of the MR
signal of each subject, comparing the matching task versus rest period.
To examine the effect of the tasks by group (deaf and hearing) and the
task × group interaction, the contrast images of the prelingual deaf and
hearing groups were entered into a random-effect model. For the
mouth-movement and dot-motion tasks, hearing signers and hearing
non-signers were pooled, as there was no significant group difference.
The comparison was then made between the deaf (n = 7) and hearing
(n = 19) groups. For the sign tasks, the deaf subjects (n = 7) and hearing
signers (n = 9) were compared. Significant signal changes for each
contrast were assessed using t-statistics on a voxel-by-voxel basis
(Friston et al., 1995a). The resulting set of voxel values for each contrast
constituted an SPM of the t-statistic (SPM{t}). The SPM{t} was transformed into normal-distribution units (SPM{Z}). The threshold for the
SPM{Z} was set at Z > 3.09 and P < 0.05, with a correction for multiple
comparisons at the cluster level for the entire brain (Friston et al., 1996).
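At the second level, the per-subject contrast images become the input to a voxel-wise random-effects comparison. A minimal sketch, with placeholder array shapes standing in for the real contrast images (the cluster-level correction is not reproduced here):

import numpy as np
from scipy import stats

deaf = np.random.rand(7, 64, 64, 28)        # one contrast image per deaf subject
hearing = np.random.rand(19, 64, 64, 28)    # one contrast image per hearing subject

t_map, _ = stats.ttest_ind(deaf, hearing, axis=0)
dof = deaf.shape[0] + hearing.shape[0] - 2
z_map = stats.norm.isf(stats.t.sf(t_map, dof))   # SPM{t} -> SPM{Z}
suprathreshold = z_map > 3.09                    # height threshold used in the paper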
Results
Face Motion
In the hearing subjects, during the CLOSED condition the
matching accuracy was 93.0 ± 5.9%, and accuracy during the
OPEN condition was 90.8 ± 5.5%, with no significant difference
between the conditions. In the deaf subjects, during the
CLOSED condition the matching accuracy was 92.7 ± 5.7%,
and accuracy during the OPEN condition was 86.5 ± 9.6%, with
no significant difference.
During the CLOSED condition compared with the STILL
condition, activation was found in the bilateral inferior (GFi)
and middle (GFm) frontal gyrus, temporo-occipital junction,
pre- and post-central gyri, superior (LPs) and inferior (LPi)
parietal lobules, dorsal premotor cortex (PMd), supplementary
motor cortex (SMA), posterior lobe of the cerebellar hemisphere, and left basal ganglia and thalamus. The left GTs,
corresponding to the planum temporale (PT), was more active
during the CLOSED condition in deaf subjects compared with
hearing subjects (Fig. 2). During the OPEN condition compared with the STILL condition, activation was found in almost identical regions to those found in the CLOSED--STILL comparison. However, in the hearing subjects, when the OPEN condition was compared with the CLOSED condition, activation was found in the left middle and inferior frontal gyri, the bilateral GTs along the superior temporal sulci and the SMA (Table 2, Fig. 3). Post hoc tests confirmed that the responses in the left PT of the hearing signers and non-signers were similar (P > 0.1, two-sample t-test), and that both hearing and deaf subjects showed a significant lip-reading effect (OPEN--CLOSED) (Fig. 2).
Figure 2. Group analysis of CLOSED. (Upper half) SPMs of the average neural activity
within each group, deaf on the left and hearing on the right, during CLOSED compared
with the rest period. The maps are shown in standard anatomical space. The 3-D
information was collapsed into 2-D sagittal and transverse images (that is, maximum-intensity projections viewed from the right and top of the brain). The statistical threshold was P < 0.05 with a correction for multiple comparisons at the cluster level.
(Lower half) There was more prominent activation in the PT on the left in deaf subjects
than in hearing subjects during the CLOSED condition. The focus of activation on
a pseudocolor fMRI superimposed on a high-resolution anatomical MRI in the sagittal
(upper left), coronal (upper right) and transaxial (lower left) planes, as indicated by the
blue lines that cross at (--56, --18, 6), corresponding to the left PT (Brodmann area 42). The
significance level of activity is as indicated by the color bar, increasing as red proceeds
to white. The statistical threshold was P < 0.05, corrected for multiple comparisons at the cluster level. Lower right, percent signal change in the left PT of the deaf (open squares) and hearing (closed squares) subjects during CLOSED, OPEN and OPEN--CLOSED. The right PT showed the same tendency, with less prominent increments. * indicates P < 0.001 (two-sample t-test), ** indicates P = 0.041 (one-sample t-test) and +, P = 0.001 (one-sample t-test). The region of interest in the left PT for the post hoc tests was defined by the group comparison of the CLOSED condition.
Table 2
Fields significantly activated by lip-motion discrimination (mouth open versus closed)

Deaf
N.S.

Hearing (signers + non-signers)
Cluster P  Cluster size (k)  Side  Location  BA  Talairach coordinates (x, y, z)  Voxel Z
0.017      174               L     GFi       47  46, 14, 4                        4.07
0          512               L     GFi       44  48, 12, 34                       4.16
                             L     GFm        6  44, 8, 48                        3.71
                             L     GprC       6  48, 2, 42                        5.58
0          485               L     SMA        6  2, 2, 64                         5.27
                             L     GFs        6  4, 16, 64                        4.60
0          391               L     GTs       42  62, 16, 6                        4.68
                             L     GTm       21  56, 4, 6                         4.25
                             L     GTs       22  60, 32, 6                        3.88
0.023      162               R     GTs       22  62, 24, 2                        4.54

Deaf > hearing
N.S.

Abbreviations: GFi, inferior frontal gyrus; GFm, middle frontal gyrus; GFs, superior frontal gyrus; GprC, precentral gyrus; GTm, middle temporal gyrus; GTs, superior temporal gyrus; SMA, supplementary motor area. BA, Brodmann's areas. All P values are corrected for multiple comparisons. Height threshold, Z = 3.09, P = 0.001. L, left; R, right.
Figure 3. Group analysis of OPEN--CLOSED. The task-related increments of the MR signal in the regions that showed lip-reading effects. SPMs of the OPEN--CLOSED (O--C) comparison in the hearing subjects are shown in standard anatomical space (left column). The 3-D information was collapsed into 2-D sagittal, coronal and transverse images. (Right column) Focus of activation on a pseudocolor fMRI superimposed on a high-resolution anatomical MRI in the sagittal (upper right), transaxial (middle right) and coronal (lower right) planes.
Dot Motion
Matching accuracy in the deaf group was 87.4 ± 9.1%, and in the
hearing group accuracy was 85.1 ± 12.5%, with no significant
group difference (P > 0.1, two-sample t-test).
In deaf subjects, the bilateral LPi, GTs and posterior lobe of the
cerebellum, the left medial prefrontal cortex and PMd, and the
right GFi and GFm were activated by the dot-motion discrimination task. In hearing subjects, the bilateral LPi, LPs, GFm, PMd,
inferior and middle occipital gyrus, fusiform gyrus and posterior
lobe of the cerebellum, the left medial prefrontal cortex and
right GTs were activated. The right GTs (corresponding to the
PT) was more active during the dot-motion discrimination task
in deaf subjects compared with hearing subjects (Fig. 4). In hearing subjects, this area showed a significant lip-reading effect during the face-motion task (OPEN--CLOSED contrast). There
were no areas in which the hearing subjects showed significantly more activation than deaf subjects did.
Figure 4. Group analysis of dot-motion discrimination. (Upper half) SPMs of the average neural activity within each group, deaf on the left and hearing on the right, during the dot-motion discrimination task compared with the rest period. The same format and statistical thresholds are used as in Figure 2. (Lower half) There was more prominent activation in the PT on the right in deaf subjects than in hearing subjects during the dot-motion discrimination task. The focus of activation on a pseudocolor fMRI is superimposed on a high-resolution anatomical MRI in the sagittal (upper left), coronal (upper right) and transaxial (lower left) planes, as indicated by the blue lines that cross at (56, --18, 6), corresponding to the right PT (Brodmann area 42). The significance level of activity is as indicated by the color bar; this increases as red proceeds to white. The statistical threshold was P < 0.05, corrected for multiple comparisons at the cluster level. Lower right, percent signal change in the right PT of the deaf (open squares) and hearing (closed squares) subjects during DOT, CLOSED, OPEN and O--C. * indicates P < 0.001 (two-sample t-test), + P = 0.009 (two-sample t-test), ++ P = 0.001 (two-sample t-test), and × P = 0.004 (one-sample t-test).
Sign Discrimination
The accuracy of sign discrimination was 59.8 ± 2.9% in the
hearing signers and 53.6 ± 5.4% in the deaf subjects, a non-significant difference. The deaf subjects showed an
accuracy of 84.0 ± 35.7% in response to real words, and only
35.7 ± 11.3% accuracy in response to pseudo-words. The
hearing signers showed 86.7 ± 12.1% accuracy in response to
true words, and 40.6 ± 16.9% in response to pseudo-words.
In the hearing signers, the sign-discrimination and non-discrimination conditions activated the SMA, bilateral GFi,
bilateral PMd, LPs, LPi, GOm and GTm. Deaf subjects showed
a similar activation pattern, except more prominent activation
was seen in the left GTs adjacent to the transverse temporal
gyrus (Fig. 5, Table 3). The left GTs was activated more
prominently in the deaf than in the hearing subjects during
closed-lip movement, sign discrimination and non-discrimination.
In hearing subjects, this area showed a significant lip-reading effect during the face-motion task (OPEN--CLOSED contrast).
There was a marked overlap of the foci activated by lip-reading
in hearing subjects and the cross-modal plasticity in the deaf,
which was revealed by the sign-word discrimination/non-discrimination and lip-movement tasks (Fig. 6). Figure 6 shows that the common area for cross-modal integration (blue, lip-reading in the hearing) and cross-modal plasticity (yellow for sign and red for CLOSED lip movement) lies in the PT adjacent to its anterior border, the sulcus behind Heschl's gyrus (Westbury et al., 1999). On the other hand, the area for lip-reading in the hearing subjects (blue) extended to the posterior STS, more prominently on the left. Sign discrimination compared with non-discrimination activated the bilateral PMd, GFm and GFi of both hearing and deaf signers, but not the temporal cortices; no activation was observed in the PT (Fig. 7).
Figure 5. Group analysis of sign. (Upper half) SPMs of average neural activity within each group, deaf on the left and hearing on the right, during the sign-word discrimination and non-discrimination tasks compared with the rest period. The same format and statistical thresholds are used as in Figure 2. (Lower half) There was more prominent activation in the left PT in deaf subjects than in hearing subjects during the sign-word discrimination/non-discrimination tasks. The focus of activation on a pseudocolor fMRI is superimposed on a high-resolution anatomical MRI in the sagittal (upper left), coronal (upper right) and transaxial (lower left) planes, as indicated by the blue lines that cross at (--62, --20, 4), corresponding to the left PT (Brodmann area 42). The significance level of the activity is indicated by the color bar; this increases as red proceeds to white. The statistical threshold was P < 0.05, corrected for multiple comparisons at the cluster level. Lower right, percent signal change in the left PT (--62, --20, 4) of the deaf (open squares) and hearing (closed squares) during the word-discrimination and non-discrimination tasks, CLOSED, OPEN and O--C. *, ** and + indicate P < 0.001 (two-sample t-test), ++ P = 0.002 (two-sample t-test), × P = 0.004 and ×× P < 0.001 (one-sample t-test).

Table 3
Fields significantly activated by sign-word discrimination and non-discrimination (cluster P, cluster size, voxel Z, Talairach coordinates, side, location and Brodmann area for each group)
Deaf: ten clusters (all corrected cluster P < 0.05) with peaks in the SMA, the medial frontal gyrus (GFd), the inferior and middle frontal gyri (GFi, GFm), PMd, the superior and inferior parietal lobules (LPs, LPi), the precuneus (PCu), the middle and superior temporal gyri (GTm, GTs), the middle occipital gyrus (GOm) and the cerebellum.
Hearing signers: seven clusters (all corrected cluster P < 0.05) with peaks in the GFi, PMd, SMA, GFs, LPs, LPi, GOm and GTm.
Deaf > hearing signers: one cluster (cluster P = 0.001, 277 voxels) with peaks in the left GTs (BA 22; --62, --20, 4; Z = 5.08), the left GTs (BA 22; Z = 4.68) and the left transverse temporal gyrus (GTT, BA 41; Z = 3.40).
Abbreviations: GF, fusiform gyrus; GFd, medial frontal gyrus; GFi, inferior frontal gyrus; GFm, middle frontal gyrus; GOi, inferior occipital gyrus; GOm, middle occipital gyrus; GpoC, postcentral gyrus; GprC, precentral gyrus; Gsm, supramarginal gyrus; GTi, inferior temporal gyrus; GTs, superior temporal gyrus; LPi, inferior parietal lobule; LPs, superior parietal lobule; PCu, precuneus; PMd, dorsal premotor cortex; SMA, supplementary motor area. BA, Brodmann's areas. All P values are corrected for multiple comparisons. Height threshold, Z = 3.09, P = 0.001. L, left; R, right.

Figure 6. Summary of the between-group differences. The areas showing more prominent activation in the deaf than the hearing subjects during dot-motion discrimination (DOT, green), the CLOSED condition (red), and sign-word discrimination and non-discrimination (SIGN, yellow) were superimposed with the areas activated by OPEN--CLOSED in the hearing subjects (blue outline). Marked overlap among the latter three conditions was seen in the anterior part of the left PT, adjacent to Heschl's gyrus. On the other hand, the neural substrates of lip-reading in the hearing subjects extended to the ascending portion of the superior temporal sulcus (white arrowhead), passing the posterior border of the PT (black arrowhead, descending ramus of the Sylvian fissure) (Westbury et al., 1999).
Discussion
The present study showed that the neural substrates of audio-visual integration during lip-reading in hearing subjects overlap with the substrates underlying cross-modal plasticity due to auditory deprivation: during the CLOSED matching task, the deaf showed more prominent activation than the hearing subjects in the left planum temporale (PT). During dot-motion matching, the deaf subjects showed greater activation in the right PT. Sign-related motion, with or without a lexical component, activated the left PT of the deaf signers more than that of the hearing signers. These areas showed a lip-reading effect in hearing subjects. Hence, the PT is the common substrate of
audio-visual integration during lip-reading in hearing subjects
and of audio-visual cross-modal plasticity in deaf subjects.
Neural Substrates of Lip-reading
In this study, we used the difference between the OPEN and
CLOSED conditions as a marker of phonetic speech perception
(lip-reading effect). We assumed that the OPEN condition
carried phonetic information even if the subjects did not
understand which vowel was presented, whereas the CLOSED
condition did not. This is based on the fact that visual phonetics
shown by lip movements can influence speech perception
without any understanding of what is presented in each
modality (the McGurk effect) (McGurk and MacDonald, 1976;
Driver, 1996). We used phonetic discrimination by visual
speech (OPEN) to exclude the influence of the lexicon
(Bernstein et al., 2002). To control for the perception of lip
movements and working memory processes, a non-phonetic lip-movement (CLOSED) condition was included. Thus, the OPEN--CLOSED comparison reveals the phonetic perception of visual
speech that is essential for lip-reading. In the hearing group,
during the OPEN condition compared with the CLOSED
condition, more prominent activation was seen in the GTs,
along the STS bilaterally and in the left lateralized frontal cortex
including Broca’s area, which is consistent with previous results
(Burt and Perrett, 1997; Campbell et al., 2001; Olson et al., 2002;
Calvert and Campbell, 2003).
Broca’s Area
Functional neuroimaging studies revealed activity in Broca’s
area during propositional speech tasks, including reading words
(Price et al., 1994), word generation (McCarthy et al., 1993),
decoding syntactically complex sentences (Just et al., 1996) and
phoneme discrimination (Zatorre et al., 1992, 1996). Auditory
phonemic-discrimination activated secondary auditory cortices
bilaterally as well as Broca’s area (Zatorre et al., 1992, 1996).
Zatorre et al. (1996) proposed that Broca’s area is recruited
for fine-grained phonetic analysis by means of articulatory
decoding. The phonetic analysis of speech depends not only
on auditory information but also on access to information about
the articulatory gestures associated with a given speech sound
(Liberman and Whalen, 2000). This access might be accomplished through a ‘mirror’ system, including Broca’s area, which
forms a link between the observer and the actor (Rizzolatti and
Arbib, 1998; Iacoboni et al., 1999). The mirror system might not
be confined to Broca’s area (Buccino et al., 2001). Lip-reading
enhances the motor excitability of the left primary motor
cortex, presumably through input from premotor areas
(Watkins et al., 2003). This could account for the activation of
other left-lateralized prefrontal cortical structures, such as BA
44/6 and SMA, as parts of the mirror system.
Figure 7. Areas more prominently activated during the sign-word discrimination task compared with the non-discrimination task in deaf subjects (top) and hearing subjects (bottom). The areas are superimposed on surface-rendered high-resolution MRIs viewed from the right, front and left. Both hearing and deaf signers exhibited activation of the bilateral frontal cortex without activation in the temporal cortex. The statistical threshold is P < 0.05, corrected for multiple comparisons at the cluster level.
Planum Temporale
As expected, the OPEN--CLOSED condition activated the part of
the GTs posterior to the transverse temporal gyrus (Heschl’s
gyrus), corresponding to the PT. Anatomically, the anterior
border of the PT is the sulcus behind Heschl’s gyrus and the
medial border is the point where the PT fades into the insula. The
posterior border is the ascending and descending rami of
the Sylvian fissures (Westbury et al., 1999). Functionally, the
left PT is involved in word detection and generation, due to its
ability to process rapid frequency changes (Schwartz and Tallal,
1980; Belin et al., 1998b). The right homologue is specialized for
the discrimination of melody, pitch and sound intensity (Zatorre
et al., 1994; Belin et al., 1998a). Activation of the PT by the
OPEN--CLOSED comparison is consistent with previous work
(Calvert et al., 1997), and confirms that phonetic information
encoded in lip movements (during the OPEN condition)
activates the PT. Furthermore, the temporal cortical activation
was limited during the CLOSED condition in this study. This
supports a previous study, which found that non-phonetic
lower-face movement did not activate the temporal cortex
(Calvert et al., 1997). Hence, in hearing subjects, the activation
of the PT is related to the phonetic information encoded by lip
movements, and thus the perception of physical visual speech
stimuli (visual phonetics) (Bernstein et al., 2002). Figure 6 shows that the area for lip-reading in the hearing subjects (blue) extended to the posterior STS, more prominently on the left. This is consistent with the previous report by Puce et al. (1998) showing that mouth movement activates the posterior portion of the STS. The more prominent posterior extension on the left than on the right is consistent with a lesion study (Campbell et al., 1986): a patient with a left occipito-temporal lesion could not lip-read, whereas another patient with a right occipito-temporal lesion showed no deficit in lip-reading. Hence, the left posterior STS may be functionally related to the nearby PT, which is thought to be involved in lip-reading.
In the deaf group, a similar, but less significant, activation pattern in the PT was found in response to visual phonetics, that is, a lip-reading effect (Figs 2, 4 and 5). This might reflect the fact that, despite being profoundly deaf, the deaf subjects in the present study had undergone training in lip-reading at school.
The Effects of Auditory Deprivation
The results also indicate that the neural substrates of the cross-modal plasticity resulting from auditory deprivation lie within the PT. The PT in hearing subjects did not respond to non-phonetic lip movements (CLOSED), dot movement or sign-related hand movement without lexical meaning, whereas the PT of the deaf subjects responded to all these conditions. However, the response of the PT to visual phonetics (OPEN--CLOSED) was seen in the hearing subjects. As these results were found for both hearing signers and hearing non-signers, the ability to sign cannot explain this difference. Similarly, visual phonetics cannot explain these differences because the group difference was seen during the CLOSED condition. Instead, these findings suggest that the PT is the common substrate of audio-visual integration during lip-reading (OPEN--CLOSED) in hearing subjects and of audio-visual cross-modal plasticity during non-phonetic lip (CLOSED) or hand movements in deaf subjects.

Task Dependency
Lip Movement and Dot Movement
This study confirmed the results of Finney et al. (2001), who
showed that visual stimulation using moving dot patterns
activated the right PT of deaf subjects. This finding suggests
that the auditory cortex in deaf subjects is recruited for the
processing of purely visual stimuli with no lexical component.
The right PT is activated by sound motion (Baumgart et al.,
1999), probably reflecting the dominance of the right hemisphere for the processing of spatial information. However,
in deaf subjects, the left PT was active in response to face
movement, which was not seen in hearing subjects. The left-lateralized activity might be related to the notable anatomical
and functional asymmetry seen in auditory processing.
Anatomically, the PT is significantly larger in the left hemisphere, with increased cell size and density of neurons. Furthermore, there are more functionally distinct columnar systems per
surface unit in the left than the right PT (Galuske et al., 2000).
Functionally, the left auditory cortical areas that are optimal for
speech discrimination, which is highly dependent on rapidly
changing broadband sounds, have a higher degree of temporal
sensitivity. By contrast, the right cortical counterpart has greater
spectral sensitivity, and thus is optimal for processing the tonal
patterns of music, in which small and precise changes in
frequency are important (Zatorre et al., 2002). Thus, due to
auditory deprivation, in deaf subjects the processing of the rapid
temporal alternations of mouth movements might be directed
towards the left PT, whereas processing of the global motion of
moving dots might be performed in the right PT. This functional
asymmetry might also explain why the area of the right PT that
was more active in deaf than hearing subjects during the dot-discrimination task showed less overlap with the area activated
by OPEN versus CLOSED conditions.
Sign
The sign-word discrimination and non-discrimination conditions revealed more prominent activation in the left PT of
the deaf signers than hearing signers. This is consistent with
Petitto et al. (2000), who found that meaningless but linguistically organized phonetic-syllabic units in sign, as well as real
signs, activated the bilateral GTs of deaf subjects. They
concluded that semantic content is not an important determinant of the activity of the GTs. This study revealed that the PT
was activated by not only signs and pseudo-signs, but also their
upside-down reversed moving images. Thus, the linguistic
component of the signs might not be causing the PT activation
in the deaf subjects. The sign word-discrimination and non-discrimination tasks yielded a similar activation pattern in the
left PT: both tasks activated the left PT of the deaf signers, but
deactivated that of the hearing signers (Fig. 5). Furthermore,
when the sign-word discrimination was compared with the
non-discrimination condition, the bilateral frontal cortex was
activated, but the temporal cortical areas were not (Fig. 7).
We saw a marked overlap of the enhanced activation in the
left PT during sign presentation and face discrimination in the
deaf subjects compared with the hearing signers. This finding
suggests that the activation of the left PT in the deaf during sign
presentation relates neither to sign-language comprehension
nor linguistic processing. Instead, the left PT activation might be
related to the analysis of the temporal segmental aspects of hand
or lip movements. Lip-reading accuracy correlated negatively
with sign-language usage in the deaf (Geers and Moog, 1989;
Moores and Sweet, 1991; Bernstein et al., 1998), suggesting
competition between the processes. Hence, amodal processing
of rapid temporal alternation might occur in the left PT, the
input to which might be visual or auditory. Although cross-modal plastic changes in the PT are not specific to linguistic processing itself, as shown in the present study, they might promote the sublexical processing of signs.
Cross-modal Plasticity in the Deaf and Lip-reading in the
Hearing Share the PT
This study revealed that the primary auditory cortex (A1) is not
activated by visual phonetics (lip-reading) in the hearing, which
was consistent with a previous study (Bernstein et al., 2002). We
also found that the plastic changes due to auditory deprivation
activated the PT, as did lip-reading in the hearing subjects; the A1
was also spared. In deaf subjects, higher-order auditory areas
become activated by visually presented sign language (Nishimura
et al., 1999; Petitto et al., 2000). When the auditory nerve is
stimulated by cochlear implants, the contralateral A1 becomes
active (Lee et al., 2001). The higher-order cortical areas become
progressively activated only after increased hearing experience
with cochlear implants (Giraud et al., 2001). Therefore, cross-modal reorganization is likely to occur only in the higher-order
auditory areas, consistent with the present findings. A previous
electrophysiological study revealed no evidence of cross-modal
plasticity in the A1 in congenitally deaf cats (Kral et al., 2003).
They attributed this to the different inputs that the A1 and the
auditory-association cortices receive. The primary auditory
cortex receives its main input from the lemniscal auditory
pathway, whereas the higher-order auditory cortices receive
mainly cortical and extralemniscal thalamic inputs. The extralemniscal thalamic nuclei also receive projections from other
modalities (Aitkin et al., 1978, 1981; Robards, 1979; Shore et al.,
2000; Schroeder et al., 2001). The sensory representation is
determined by the competitive balance of inputs from different
modalities (Rauschecker, 1995); therefore, these findings suggest that cross-modal plasticity occurs in the PT.
In conclusion, the cross-modal plastic changes as a result of
auditory deprivation might be mediated, at least in part, by the
neural substrates of cross-modal integration in the hearing.
Notes
This study was supported in part by a Grant-in-Aid for Scientific
Research B#14380380 (to N.S.) from the Japan Society for the
Promotion of Science, and Special Coordination Funds for Promoting
Science and Technology from the Ministry of Education, Culture,
Sports, Science and Technology, of the Japanese Government.
Address correspondence to Norihiro Sadato, MD, PhD, Division of
Cerebral Integration, Department of Cerebral Research, National
Institute for Physiological Sciences, Myodaiji, Okazaki, Aichi, 444-8585
Japan. Email: [email protected].
References
Aitkin LM, Dickhaus H, Schult W, Zimmermann M (1978) External
nucleus of inferior colliculus: auditory and spinal somatosensory
afferents and their interactions. J Neurophysiol 41:837--847.
Aitkin LM, Kenyon CE, Philpott P (1981) The representation of the
auditory and somatosensory systems in the external nucleus of the
cat inferior colliculus. J Comp Neurol 196:25--40.
Baumgart F, Gaschler-Markefski B, Woldorff MG, Heinze H-J, Scheich H
(1999) A movement-sensitive area in auditory cortex. Nature 400:
724--726.
Belin P, McAdams S, Smith B, Savel S, Thivard L, Samson S, Samson Y
(1998a) The functional anatomy of sound intensity discrimination.
J Neurosci 18:6388--6394.
Belin P, Zilbovicius M, Crozier S, Thivard L, Fontaine A, Masure M,
Samson Y (1998b) Lateralization of speech and auditory temporal
processing. J Cogn Neurosci 10:536--540.
Bernstein LE, Demorest ME, Tucker PE (1998) What makes a good
speechreader? First you have to find one. In: Hearing by eye: II. The
psychology of speechreading and auditory-visual speech (Campbell
R, Dodd B, Burnham D, eds), pp. 211--228. Hove: Psychology
Press.
Bernstein LE, Auer ET, Moore JK, Ponton CW, Don M, Singh M (2002)
Visual speech perception without primary auditory cortex activation. Neuroreport 13:311--315.
Buccino G, Binkofski F, Fink GR, Fadiga L, Fogassi L, Gallese V, Seitz R,
Zilles K, Rizzolatti G, Freund H-J (2001) Action observation activates
premotor and parietal areas in a somatotopic manner: an fMRI study.
Eur J Neurosci 13:400--404.
Büchel C, Friston KJ (1997) Modulation of connectivity in visual
pathways by attention: cortical interactions evaluated with structural equation modelling and fMRI. Cereb Cortex 7:768--778.
Burt DM, Perrett DI (1997) Perceptual asymmetries in judgments of
facial attractiveness, age, gender, speech and expression. Neuropsychologia 35:685--693.
Calvert GA, Bullmore ET, Brammer MJ, Campbell R, Williams SC,
McGuire PK, Woodruff PW, Iversen SD, David AS (1997) Activation
of auditory cortex during silent lipreading. Science 276:593--596.
Calvert GA, Campbell R (2003) Reading speech from still and moving
faces: the neural substrates of visible speech. J Cogn Neurosci 15:
57--70.
Campbell R, Landis T, Regard M (1986) Face recognition and lipreading.
A neurological dissociation. Brain 109:509--521.
Campbell R, MacSweeney M, Surguladze S, Calvert GA, McGuire P,
Suckling J, Brammer MJ, David AS (2001) Cortical substrates for the
perception of face action: an fMRI study of the specificity of activation for seen speech and for meaningless lower-face acts (gurning).
Cogn Brain Res 12:233--243.
Driver J (1996) Enhancement of selective listening by illusory mislocation of speech sounds due to lip-reading. Nature 381:66--68.
Evans AC, Kamber M, Collins DL, MacDonald D (1994) An MRI-based
probabilistic atlas of neuroanatomy. In: Magnetic resonance scanning
and epilepsy (Shorvon SD, ed.), pp. 263--274. New York: Plenum Press.
Finney EM, Fine I, Dobkins KR (2001) Visual stimuli activate auditory
cortex in the deaf. Nat Neurosci 4:1171--1173.
Friston KJ, Worsley KJ, Frackowiak RSJ, Mazziotta JC, Evans AC (1994)
Assessing the significance of focal activations using their spatial
extent. Hum Brain Mapp 1:210--220.
Friston KJ, Holmes AP, Worsley KJ, Poline JB, Frith CD, Frackowiak RSJ
(1995a) Statistical parametric maps in functional imaging: a general
linear approach. Hum Brain Mapp 2:189--210.
Friston KJ, Ashburner J, Frith CD, Heather JD, Frackowiak RSJ (1995b)
Spatial registration and normalization of images. Hum Brain Mapp
2:165--189.
Friston KJ, Holmes A, Poline J-B, Price CJ, Frith CD (1996) Detecting
activations in PET and fMRI: levels of inference and power. Neuroimage 4:223--235.
Friston KJ, Holmes AP, Worsley KJ (1999a) How many subjects
constitute a study? Neuroimage 10:1--5.
Friston KJ, Zarahn E, Josephs O, Henson RNA, Dale AM (1999b)
Stochastic designs in event-related fMRI. Neuroimage 10:607--619.
Galuske RAW, Schlote W, Bratzke H, Singer W (2000) Interhemispheric
asymmetries of the modular structure in human temporal cortex.
Science 289:1946--1949.
Geers A, Moog J (1989) Factors predictive of the development of literacy in profoundly hearing-impaired adolescents. Volta Rev 91:69--86.
Giraud AL, Price CJ, Graham JM, Truy E, Frackowiak RS (2001) Crossmodal plasticity underpins language recovery after cochlear implantation. Neuron 30:657--663.
Iacoboni M, Woods RP, Brass M, Bekkering H, Mazziotta JC, Rizzolatti G
(1999) Cortical mechanisms of human imitation. Science 286:
2526--2528.
Just MA, Carpenter PA, Keller TA, Eddy WF, Thulborn KR (1996) Brain activation modulated by sentence comprehension. Science 274:114--116.
Kral A, Schroder J, Klinke R, Engel A (2003) Absence of cross-modal
reorganization in the primary auditory cortex of congenitally deaf
cats. Exp Brain Res 153:605--613.
Lee DS, Lee JS, Oh SH, Kim S-K, Kim J-W, Chung J-K, Lee MC, Kim CS
(2001) Cross-modal plasticity and cochlear implants. Nature 409:
149--150.
Liberman AM, Whalen DH (2000) On the relation of speech to language.
Trends Cogn Sci 4:187--196.
McCarthy G, Blamire AM, Rothman DL, Gruetter R, Shulman RG (1993)
Echo-planar magnetic resonance imaging studies of frontal cortex
activation during word generation in humans. Proc Natl Acad Sci USA
90:4952--4956.
McGurk H, MacDonald J (1976) Hearing lips and seeing voices. Nature
264:746--748.
Moores DF, Sweet C (1991) Factors predictive of school achievement. In:
Educational and developmental aspects of deafness (Moores DF, Meadow-Orlans KP, eds), pp. 154--201. Washington, DC: Gallaudet University Press.
Neville HJ, Bavelier D, Corina D, Rauschecker J, Karni A, Lalwani A,
Braun A, Clark V, Jezzard P, Turner R (1998) Cerebral organization
for language in deaf and hearing subjects: biological constraints
and effects of experience. Proc Natl Acad Sci USA 95:922--929.
Nishimura H, Hashikawa K, Doi K, Iwaki T, Watanabe Y, Kusuoka H,
Nishimura T, Kubo T (1999) Sign language ‘heard’ in the auditory
cortex. Nature 397:116.
Oldfield RC (1971) The assessment and analysis of handedness: the
Edinburgh inventory. Neuropsychologia 9:97--113.
Olson IR, Gatenby JC, Gore JC (2002) A comparison of bound and
unbound audio-visual information processing in the human cerebral
cortex. Cogn Brain Res 14:129--138.
Petitto LA, Zatorre RJ, Gauna K, Nikelski EJ, Dostie D, Evans AC (2000)
Speech-like cerebral activity in profoundly deaf people processing
signed languages: implication for the neural basis of human language.
Proc Natl Acad Sci USA 97:13961--13966.
Price CJ, Wise RJ, Watson JD, Patterson K, Howard D, Frackowiak RS
(1994) Brain activity during reading. The effects of exposure
duration and task. Brain 117:1255--1269.
Puce A, Allison T, Bentin S, Gore JC, McCarthy G (1998) Temporal
cortex activation in humans viewing eye and mouth movements.
J Neurosci 18:2188--2199.
Rauschecker JP (1995) Compensatory plasticity and sensory substitution in the cerebral cortex. Trends Neurosci 18:36--43.
Rizzolatti G, Arbib A (1998) Language within our grasp. Trends Neurosci
21:188--194.
Robards MJ (1979) Somatic neurons in the brainstem and neocortex
projecting to the external nucleus of the inferior colliculus: an
anatomical study in the opossum. J Comp Neurol 184:547--565.
Schroeder CE, Lindsley RW, Specht C, Marcovici A, Smiley JF, Javitt DC
(2001) Somatosensory input to auditory association cortex in the
macaque monkey. J Neurophysiol 85:1322--1327.
Schwartz JH, Tallal P (1980) Rate of acoustic change may underlie
hemispheric specialization for speech perception. Science 207:
1380--1381.
Shore SE, Vass Z, Wys NL, Altschuler RA (2000) Trigeminal ganglion
innervates the auditory brainstem. J Comp Neurol 419:271--285.
Sumby WH, Pollack I (1954) Visual contribution to speech intelligibility
in noise. J Acoust Soc Am 26:212--215.
Watkins KE, Strafella AP, Paus T (2003) Seeing and hearing speech
excites the motor system involved in speech production. Neuropsychologia 41:989--994.
Westbury CF, Zatorre RJ, Evans AC (1999) Quantifying variability in the
planum temporale: a probability map. Cereb Cortex 9:392--405.
Zatorre RJ, Evans AC, Meyer E, Gjedde A (1992) Lateralization of
phonetic and pitch discrimination in speech processing. Science
256:846--849.
Zatorre RJ, Evans AC, Meyer E (1994) Neural mechanisms underlying
melodic perception and memory for pitch. J Neurosci 14:1908--1919.
Zatorre RJ, Meyer E, Gjedde A, Evans AC (1996) PET studies of phonetic
processing of speech: review, replication, and reanalysis. Cereb
Cortex 6:21--30.
Zatorre RJ, Belin P, Penhune VB (2002) Structure and function of
auditory cortex: music and speech. Trends Cogn Sci 6:37--46.