Brain and Language 89 (2004) 267–276
www.elsevier.com/locate/b&l
Lateralization of auditory language functions: A dynamic
dual pathway model
Angela D. Friederici* and Kai Alter
Max Planck Institute of Cognitive Neuroscience, P.O. Box 500 355, 04303 Leipzig, Germany
Accepted 20 August 2003
Abstract
Spoken language comprehension requires the coordination of different subprocesses in time. After the initial acoustic analysis the system has to extract segmental information such as phonemes, syntactic elements, and lexical-semantic elements, as well as suprasegmental information such as accentuation and intonational phrases, i.e., prosody. According to the dynamic dual pathway model of auditory language comprehension, syntactic and semantic information are primarily processed in a left hemispheric temporo-frontal pathway including separate circuits for syntactic and semantic information, whereas sentence level prosody is processed in a right hemispheric temporo-frontal pathway. The relative lateralization of these functions occurs as a result of stimulus properties and processing demands. The observed interaction between syntactic and prosodic information during auditory sentence comprehension is attributed to dynamic interactions between the two hemispheres.
© 2003 Elsevier Inc. All rights reserved.
1. Introduction
The processing of spoken language depends on more than one mental capacity: on the one hand, the system must extract from the input a number of different types of segmental information to identify phonemes and content words as well as syntactic elements indicating the grammatical relations between these words; on the other hand, the system has to extract suprasegmental information, i.e., the intonational contour, which signals the separation of different constituents and the accentuation of relevant words in the speech stream.
There are various descriptions of how syntactic and
semantic information are processed in the brain
(Friederici, 2002; Ullman, 2001). However, apart from a
few general descriptions of processing intonational aspects in language and music (Zatorre, Belin, & Penhune,
2002), there is no brain-based description of how intonational and segmental information work together during spoken language comprehension. Here
we will propose a model incorporating this aspect. The
indication that such a model is needed may best be exemplified by the following examples (# indicates the ''intonational pause,'' called Intonational Phrase Boundary, IPB).

* Corresponding author. Fax: +49-341-9940-113.
E-mail address: [email protected] (A.D. Friederici).
0093-934X/$ - see front matter © 2003 Elsevier Inc. All rights reserved.
doi:10.1016/S0093-934X(03)00351-1
(a) The teacher said # the student is stupid.
(b) The teacher # said the student # is stupid.
(c) The teacher said the student # is stupid.
Sentences (a) and (b) are both prosodically correct; sentence (c), however, is not. The incorrect intonational boundary after student in (c) indicates a mismatch between the syntactic and the prosodic structure. The prosodic realization in (c) leaves open the question to whom the attribute ''to be stupid'' is to be assigned. This example shows how intonational information in natural speech, called prosodic information, can influence syntactic processes and thus sentence comprehension. The language processing system ('parser') does well
in relying on the prosodic information as all IPBs are
syntactic phrase boundaries as well, although the reverse
is not always true. This prosody–syntax relationship is
manifested by the finding that prosodic information eases infants' access to syntax during early development (Gleitman & Wanner, 1982; Hirsh-Pasek, 1987; Jusczyk, 1997), and supports parsing during language acquisition and during adult language comprehension (Marslen-Wilson, Tyler, Warren, Grenier, & Lee, 1992;
Warren, Grabe, & Nolan, 1995). In the following we
present our dynamic dual pathway model taking into
consideration semantic, syntactic and prosodic aspects
of processing and discuss the empirical evidence on
which this model is based.
2. The dynamic dual pathway model
The neural basis of language processing has been the
focus of many studies (for review see Friederici, 2002; Hickok & Poeppel, 2000; Kaan & Swaab, 2002; Kutas & Federmeier, 2000; Ullman, 2001); however, only a few have addressed auditory language comprehension in particular (Friederici, 2002; Hickok & Poeppel, 2000).
The latter two approaches have either concentrated on
the processing of segmental information suggesting
particular networks in the left hemisphere (LH) to
support phonological, syntactic and semantic processes,
or they have focused on the processing of prosodic information suggesting an involvement of the right
hemisphere (RH) (Gandour et al., 2000; Zatorre et al.,
2002).
The present model binds together existing LH models and observations from more recent studies on prosodic processing. The premise of the dual pathway model is that the rough distinction between processing segmental versus suprasegmental speech information maps onto the distinction between the two hemispheres.
Segmental properties are associated via prosodic information to lexical and syntactic information. The interconnection between segmental and suprasegmental
parameters can be established by the association of
tones and tonal variations to segments and syllables.
Segmental, lexical and syntactic information is processed in the LH. This is even true when lexical differences are coded by tones bearing lexical meaning (Gandour & Dardarananda, 1983; Van Lancker & Fromkin, 1973) and by word level stress (Baum, Daniloff, Daniloff, & Lewis, 1982; Blumstein & Goodglass, 1972; Pell & Baum, 1997; Van Lancker & Sidtis, 1992).
In contrast, sentence level suprasegmental information,
namely accentuation and boundary marking expressed
acoustically by typical pitch variations, is processed by
the RH (Meyer, Alter, Friederici, Lohmann, & von
Cramon, 2002). During spoken language comprehension processes of the left and the right hemisphere are
assumed to interact dynamically in time.
The brain bases of the segmental language processing system have recently been described in some detail with respect to phonetic, syntactic and semantic aspects. Basic processes of speech perception have been modeled to be subserved by the left temporal–parietal–occipital junction when relating word input to meaning and by parietal and frontal regions when accessing speech segments (Hickok & Poeppel, 2000). Syntactic
and semantic processes are assumed to be supported by
separable temporo-frontal circuits consisting of a superior temporal and an inferior frontal subcomponent (Friederici, 2002; see Fig. 1). The syntactic circuit includes the anterior portion of the superior temporal gyrus (STG) as well as the frontal operculum and the inferior portion of Broca's area (Brodmann area (BA) 44), and the semantic network includes the mid and posterior portions of the STG, the middle temporal gyrus (MTG) as well as BA 45 in the inferior frontal gyrus (IFG).
The pathway for suprasegmental sentence level information in speech comprehension is subserved by the RH. Here pitch information, i.e., the fundamental frequency of the auditory input, is the most relevant type of information for the identification of accentuation and intonational phrasing. Accentuation is marked, in addition to pitch, by intensity and rhythm, whereas IPBs, apart from pitch, are marked by a pause and the lengthening of the syllable preceding the pause. The processing of intonational information has been shown to be supported by a temporo-frontal network primarily in the RH, namely the frontal operculum as well as areas in the STG (Meyer et al., 2002) (see Fig. 1). It has been shown, however, that this lateralization can change as a function of particular task demands (Plante et al., 2002) and as a function of stimulus properties (Pannekamp, Toepel, Hahne, & Friederici, 2003). These latter results suggest a dynamic interplay between the two hemispheres. We propose that the neural basis for this interplay is the corpus callosum. Although the empirical evidence for this proposal is sparse, there are indications that it might hold up (Friederici, Kotz, Steinhauer, & von Cramon, 2003; Klouda, Robin, Graff-Radford, & Cooper, 1988).

Fig. 1. Left and right hemisphere in different sagittal sources (position of sources is indicated at the top of the figure). Color coded filled circles indicate the maximum of brain activation of auditory, semantic and syntactic activation (Caplan, Alpert, Waters, & Olivieri, 2000) and prosodic (Plante, Creusere, & Sabin, 2002) information processing.
It is important to note that each of these parts of the specialized networks described here is not necessarily domain specific. In particular, the left inferior frontal gyrus, and more specifically even its inferior portion, is also implicated in processing the temporal structure of sequences in the non-linguistic domain (Koelsch et al., 2002; Maess, Koelsch, Gunter, & Friederici, 2001; Schubotz & von Cramon, in press). The right temporo-frontal circuit, for example, has been reported to support the processing of suprasegmental prosodic aspects in language as well as in music (Koelsch et al., 2002; Maess et al., 2001; Zatorre, Evans, Meyer, & Gjedde, 1992). Thus, rather than being domain specific, it appears that a particular area receives its specific function as part of a specific neural network. This means that the same area, e.g., Broca's area, can serve different processing domains, e.g., syntax, music or action, in different networks.
3. Comparison with other views
The left hemispheric pathway of language processing can be compared to a recent neurocognitive model proposing a declarative system to support the lexicon and a procedural system to support grammar (Ullman, 2001). The former system is located in temporal and temporo-parietal regions, whereas the latter system is located in the frontal cortex and the basal ganglia. In contrast to this model, the present view argues for a temporal and frontal involvement for each of the processing domains, namely lexicon versus grammar. Similar to a recent neurocognitive model on auditory
language processing (Friederici, 2002), it is claimed that the temporal areas might represent aspects of item identification and integration, whereas the inferior frontal cortex is held responsible for the structuring of the auditory sequences and the build-up of phonological, semantic and structural hierarchical relations. Here we add that more strategic processes and those that require specific memory resources may be represented in the lateral inferior frontal cortex, that is, in BA 45 and BA 44, whereas more automatic processes such as initial local syntactic structure building may be located more medially, that is, in the deep frontal operculum. The notion of
a temporo-frontal network for semantic and syntactic
processes is based on a large number of imaging studies
indicating temporal and frontal areas to be active for
sentence level semantic processes as well as for syntactic
processes (Caplan, Alpert, & Waters, 1998; Dapretto &
Bookheimer, 1999; Just, Carpenter, Keller, Eddy, &
Thulborn, 1996; Kuperberg et al., 2000; Ni et al., 2000;
Stromswold, Caplan, Alpert, & Rauch, 1996).
The present view is compatible, in principle, with the proposal of Zatorre et al. (2002), who claim a functional dissociation between the LH and the RH with respect to different aspects of acoustic information. According to this view, the temporal resolution necessary for the identification of phonemes is better in the left auditory cortical areas, and the spectral resolution necessary for the recognition of tonal patterns is better in the right auditory cortical areas. This dissociation appears to hold for lower level processes (phonemes versus tones), but their view does not extend to higher level processes involving semantic, syntactic and sentence level prosodic information. The model proposed here is also, in principle, in agreement with the 'asymmetric sampling in time' hypothesis put forward recently by Poeppel (2003). This
hypothesis holds that the neural representation of an
early level of processing is bilateral whereas it is asymmetric beyond initial processing with respect to the time
domain of integration. Left auditory areas preferentially
extract information from short temporal integration
windows while the right hemispheric homologue areas
extract information from long integration windows.
Segmental information extractable in a short time window may be processed primarily by the LH whereas
prosodic information extractable only in long time
windows may be processed by the RH.
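The intuition behind Poeppel's short versus long integration windows can be made concrete with a toy signal-processing sketch. This is our illustration, not part of the original hypothesis: the window sizes and the synthetic amplitude envelope below are assumptions chosen only to show that a brief "segmental" event survives short-window analysis but is smoothed away by long-window analysis, which instead captures the slow "prosodic" contour.

```python
# Toy illustration of the 'asymmetric sampling in time' idea: the same
# signal analyzed with short vs. long integration windows. Window sizes
# and the synthetic envelope are illustrative assumptions, not
# parameters taken from Poeppel (2003).

def windowed_means(samples, win):
    """Average the signal over consecutive non-overlapping windows of `win` samples."""
    return [sum(samples[i:i + win]) / win
            for i in range(0, len(samples) - win + 1, win)]

# Synthetic amplitude envelope sampled at 1 kHz (1 sample = 1 ms):
# a slow 'intonational' rise over 300 ms with a brief 20 ms 'segmental'
# dip (e.g., a stop closure) at 100-120 ms.
envelope = [t / 300 for t in range(300)]
for t in range(100, 120):
    envelope[t] = 0.0

short = windowed_means(envelope, 20)   # ~20 ms windows ('LH-like')
long_ = windowed_means(envelope, 150)  # ~150 ms windows ('RH-like')

# Short windows resolve the dip: one 20 ms window averages to ~0.
assert min(short) < 0.05
# Long windows smooth the dip away but still capture the slow rise.
assert min(long_) > 0.15 and long_[1] > long_[0]
```

The point of the sketch is only the trade-off: the fine-grained analysis preserves fast segmental detail, while the coarse analysis preserves the slowly varying contour, mirroring the proposed LH/RH division of labor.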
4. Psycholinguistic models
Different classes of models of language processing
based on exclusively behavioral measures have been
proposed in psycholinguistic research. These models are
primarily differentiated by their assumptions about the
modularity or interactivity of syntactic and semantic information during language processing (Altmann &
Steedman, 1988; Boland & Tanenhaus, 1991; Clifton,
Speer, & Abney, 1991; Fodor, 1983; Fodor & Inoue, 1994;
Frazier, 1995; Frazier & Rayner, 1982; Gorrell, 1995; Just
& Carpenter, 1992; MacDonald, 1993; Marslen-Wilson &
Tyler, 1980). Whereas these models on syntactic and semantic processing are still in competition, a third class of
models, relevant for the issue discussed here, would have
to consider the integration of prosodic information into
language comprehension processes during reading and
hearing. Although the need for such models has been
identified (Cutler, Dahan, & van Donselaar, 1997; Fodor,
1998, 2002), syntax–prosody interface models have so far not been spelled out.
For intonational languages such as English, Dutch, and German it is often assumed that the fundamental frequency and its prosodic realization, sentence intonation, play a prominent role during the encoding of syntactic and prosodic parameters (Beach, 1991; Pierrehumbert, 1980). In addition, most of these studies strengthen the notion that prosody helps to guide the sequencing of an auditory input (Cutler et al., 1997; Marslen-Wilson et al., 1992; Streeter, 1978) and to resolve syntactic ambiguity (Cooper & Paccia-Cooper, 1980; Cutler & Fodor, 1979; Cutler & Norris, 1988; Ferreira, 1993; Lehiste, 1973; Marcus & Hindle, 1990; Warren et al., 1995). Up to now, psycholinguistic research has shown that utterance processing is even influenced by sentence initial prosodic properties related to duration and fundamental frequency. These parameters lead to expectations about following prosodic and syntactic realizations (Gee & Grosjean, 1983; Grosjean, 2000; Lehiste, 1973; Marslen-Wilson et al., 1992). To summarize, most of the phonological investigations have
clearly shown that prosody plays an important role in
spoken language processing. However, it has been difficult to specify the neural bases of these processes. In
the following we will review neurological, neurophysiological and brain imaging studies with the goal of providing a neurocognitive description of auditory sentence processing.
5. Neurological evidence
Evidence from clinical research with aphasics might
be an interesting approximation to the neural basis of
language processing. There are two classical types of
aphasia: Broca's aphasia and Wernicke's aphasia, both usually caused by lesions in the LH. The former type is associated with brain lesions in the anterior part of the LH, whereas the latter is associated with lesions in the left temporal and temporo-parietal cortex.
Broca's aphasia is usually characterized by agrammatic speech output and agrammatic comprehension (Caplan & Hildebrandt, 1988; Caramazza & Zurif, 1976), whereas Wernicke's aphasics typically produce fluent, but semantically empty speech and show severe comprehension deficits. Early psycholinguistic studies on Broca's aphasia defined the underlying deficit as a central, specifically syntactic one (Berndt & Caramazza, 1980; Caramazza & Zurif, 1976), whereas later studies described the underlying deficit as a computational one (Friederici, 1985; Haarman & Kolk, 1991). A similar shift from a representational towards a computational view took place in the description of Wernicke's aphasia. While earlier studies assumed the underlying deficit to be caused by an impairment in the lexicon (Whitehouse, Caramazza, & Zurif, 1978), later studies rather suggested that Wernicke's aphasics suffer from an incapacity to perform controlled lexical-semantic processes, though their automatic lexical-semantic processes are intact (Hagoort, 1993; Milberg & Blumstein, 1981). The view that Broca's aphasia is characterized by a syntactic impairment, whereas Wernicke's aphasia is best described by a lexical-semantic impairment, appeared to be
the general view in the eighties. Interestingly, however,
new techniques of lesion analysis indicated that patients with temporal lesions, usually correlated with a Wernicke's aphasia, also show problems in comprehending syntactically complex sentences when their lesions extend to the anterior portion of the STG (Dronkers, Wilkins, Van Valin, Redfern, & Jaegers, 1994). Thus, concerning the syntactic network, the combined data support the notion that syntactic processes might also involve the anterior portion of the STG in addition to the inferior frontal region in the LH. The evidence with
respect to semantic processes is somewhat sparse: it is clear that semantic deficits are reported for patients with lesions in the left temporal regions, but semantic deficits for Broca's aphasics involving the inferior frontal region have only been demonstrated in a few studies (Hagoort, 1993; Swaab, Brown, & Hagoort, 1995, 1998). In general, patient studies clearly suggest a dominance of the
LH for syntactic and semantic processes.
Most of the clinical studies have long associated the
RH with the processing of emotions and their relation to affective prosody. Only a very small number of patient studies, however, is dedicated to the processing of linguistic prosody. The findings reported in this research domain are less clear as to hemispheric asymmetry and region of specialization. This might be due to the
greater degree of interaction of the several variables
under study, such as the prosodic domain (syllables,
words, phrases, sentences), the prosodic parameter
(word stress, sentence accents), the type of prosodic
manipulation (phonemic violations, filtering) and the
experimental method. In their studies with right hemisphere damaged patients, Weintraub, Mesulam, and
Kramer (1981) and Bradvik et al. (1991) arrived at the
conclusion that the RH plays a superior role even in the
processing of linguistic prosody. However, Bryan (1989)
showed that both LH and RH patients show impairments in processing sentence level prosody. These and
other results are thus not univocal with respect to the
issue of hemispheric specialization of sentence level
prosody. An early study on healthy subjects using a
dichotic listening paradigm suggests that the possible
interaction between different information types during
auditory input may be responsible for this. Blumstein
and Cooper (1974) presented speech stimuli which were
filtered such that only the intonational contour remained
(declarative, question, imperative). They found a superiority of the left ear (i.e., for the processing in the RH)
for the recognition and identification of the different
intonational contours. Evidence for a hemispheric specialization in processing intonation contours also comes
from a study comparing patients with RH and with LH
damage. In this study, patients were asked to identify
intonation contours as questions or statements (Bryan,
1989). When segmental information was degraded, so that reliance on intonational information became necessary, RH patients demonstrated a significantly poorer performance than LH patients. When segmental information was preserved, LH patients showed a poorer
performance than RH patients. These findings suggest a
RH dominance for the processing of linguistic prosody
only in the absence of segmental, i.e., lexical information. Studies dealing with similar manipulations of prosodic features generally suggest a higher involvement of
the RH (Blumstein & Cooper, 1974; Heilman, 1995;
Perkins, Baran, & Gandour, 1996).
Studies on the comprehension of metrical/lexical
stress (Baum et al., 1982; Blumstein & Goodglass, 1972;
Pell & Baum, 1997; Van Lancker & Sidtis, 1992) show
that LH patients are deficient compared to their controls
and RH patients. An LH involvement is also reported by Behrens (1985) for the processing of tones related to lexical meaning in tonal languages such as Mandarin and Thai using the dichotic listening method in healthy subjects (Gandour & Dardarananda, 1983).
The results of the studies presented above suggest that the comprehension of linguistic prosody is not exclusively lateralized to the RH. The LH may come into play whenever segmental information is present (non-filtered speech) and whenever prosody is segmentally bound (stress, lexically relevant tone). For a discussion of the subcortical areas (basal ganglia, patients with Parkinson's or Huntington's disease), with a special focus on the processing of affective prosody, see Cancelliere and Hausdorf (1988) as well as Pell (1996).
6. Neurophysiological evidence
Event-related brain potentials (ERPs) and magnetic
fields (ERFs) reflect the real time neurophysiological
activity time-locked to the presentation of target stimuli
(see Fig. 2). Semantic processes are correlated with the
N400 component which has a centro-parietal distribution (Kutas & Federmeier, 2000; Kutas & Hillyard,
1980). Recently it has been demonstrated that the N400
can also reflect difficulties in processing hierarchies of
thematic roles (Frisch & Schlesewsky, 2001), suggesting
that the N400 may be tied to aspects of meaning in
general rather than lexical semantics in particular. This
notion is supported by the finding that N400 effects have
also been observed in the non-linguistic domain (Federmeier & Kutas, 2001; Patel, Gibson, Ratner, Besson,
& Holcomb, 1998; West & Holcomb, 2002).
Syntactic processes are correlated with two types of components: one that appears to be language specific, i.e., a negativity usually with a left anterior maximum, the (E)LAN, and a non-domain specific one, i.e., a late centro-parietal positivity (P600). P600 effects have been observed in numerous language studies but also in studies of music processing (Patel et al., 1998) and of gesture processing (Gunter, Knoblich, Bach, & Friederici, 2002). Moreover, the finding that the P600 not only varies as a function of syntactic parameters but also as a function of semantic parameters (Gunter, Friederici, & Schriefers, 2000) supports the notion that this component is not specific for syntactic processing (Münte, Szentkuti, Wieringa, Matzke, & Johannes, 1997; but see Osterhout, McKinnon, Bersick, & Corey, 1996), but may
rather reflect general late integration processes. Source
localization of the ELAN component on the basis of
MEG data revealed that this early effect is modeled best
by two dipoles, one in the anterior temporal region and
one in the inferior frontal region in the LH and smaller
dipoles in the homologue areas of the RH (Friederici,
Wang, Herrmann, Maess, & Oertel, 2000). Thus, it appears that the language specific component, reflecting early syntactic processes, is primarily located in the LH, involving both the temporal and inferior frontal cortex, although homologue areas of the RH are clearly involved. An increased involvement of the RH in some studies may be a function of the fact that the preceding syntactic structure sometimes allows not only predictions about the incoming word category (e.g., noun versus verb), but sometimes also predictions about the prosodic information (e.g., phrase initial versus phrase final intonation). If the incoming information fails to match both the syntactic and the prosodic expectations, the ELAN component may be less lateralized.
Prosodic processes and their possible correlates are
still under investigation. One promising ERP component correlated with prosodic processing in auditory
sentence comprehension is the Closure Positive Shift
(CPS) (Steinhauer, Alter, & Friederici, 1999). The CPS
is a bilaterally distributed positivity that appears in the
temporal vicinity of IPBs reflected in the acoustic signal
during spoken language processing. This study used
normal connected speech carrying phonetic, semantic
and syntactic information in addition to prosodic information. Thus, from this study it is not entirely clear to what extent the CPS is independent of morpho-syntactic information. Therefore, an additional study was conducted in which the sentence material was hummed.
Fig. 2. Language related brain potentials. Semantic processing difficulties elicit a centro-parietal negativity that peaks about 400 ms post stimulus
onset (N400) displayed in the upper part of figure: averaged brain potential for the sentence final word in correct sentences (solid line) versus semantically incorrect sentences (broken line). Difficulties in rule governed syntactic processing often result in early left anterior negativities (ELAN),
and difficulties of syntactic integration be they due to syntactic violations or to syntactic complexity elicit a late centro-parietal positivity (P600). Both
syntax-related components are displayed in the lower part of the figure: averaged brain potentials for the sentence final word in correct sentences
(solid line) and syntactically incorrect sentences (broken line). Adapted from Hahne and Friederici (2002).
Fig. 3 shows that the CPS even appears in sentences
without morpho-syntactic information. Interestingly,
the CPS is lateralized to the RH when stimulus material
carries only prosodic information. These ERP data
provide strong evidence that prosody is processed independently of morpho-syntactic information and pro-
vide an additional hint concerning the involvement of
the RH.
Given, however, the constraints of using the ERP technique for the localization of the sources of language processing, the next section is dedicated to more fine-grained neuroanatomical evidence of localization.
Fig. 3. ERPs reflect the online processing of sentence-like material produced by a human voice humming the underlying prosodic structure, as discussed in Steinhauer and Friederici (2001). The vertical line marks the sentence onset. Note that acoustic obstructions in the
signal by special filtering procedures similar to low-pass filtering or delexicalization (Plante et al., 2002) have been avoided. During the processing of
these speech-like sounding signals, containing one (condition A, solid line) or two IPBs (condition B, broken line), the CPS again correlates with the
IPBs appearing in the acoustic signal. The CPS during the processing of hummed sentence-like stimuli is maximal at right parietal electrodes (CP6,
T8 versus CP5, T7) (Pannekamp et al., 2003).
7. Neuroimaging evidence
A number of studies have investigated brain activation during the processing of semantic and syntactic
information. Lexical-semantic processing is strongly
correlated with activation in the middle and posterior
portion of the STG and the MTG (Price, Moore,
Humphreys, & Wise, 1997; Vandenberghe, Price, Wise,
Josephs, & Frackowiak, 1996; Wise et al., 1991). The
inferior frontal gyrus (IFG) appears to be responsible
for strategic and executive aspects of semantic processing (Fiez, 1997; Poldrack et al., 1999; Thompson-Schill,
Desposito, Aguirre, & Farah, 1997). Sentence level semantic processes are associated with a variety of activation loci, including the left IFG (BA 45/47), the left
MTG and the right STG as well as the left posterior
temporal region (Dapretto & Bookheimer, 1999; Friederici, Rüschemeyer, Hahne, & Fiebach, 2003; Kuperberg et al., 2000; Ni et al., 2000). Thus, the combined
findings indicate that temporal as well as inferior frontal
regions mainly in the LH support semantic processes.
Studies on the functional brain basis of syntactic
processes report activation in the inferior frontal cortex
and the anterior portion of the temporal cortex. In the
inferior frontal cortex two subregions seem to be separable: Those studies that compared syntactically simple
to syntactically complex sentences consistently found
BA 44/45 active (Caplan et al., 1998; Inui et al., 1998;
Just et al., 1996; Stromswold et al., 1996) whereas those
studies that investigated local syntactic phrase structure building rather report activation in the frontal operculum adjacent to the inferior portion of BA 44 (Friederici, Meyer, & von Cramon, 2000; Friederici, Rüschemeyer, et al., 2003; Friederici et al., 2000). This possible functional distinction between the lateral, somewhat more anterior portion of BA 44/45 and the frontal operculum, with the latter area being involved in the on-line processing of local phrase structure and the former area supporting the processing of sentence structures containing moved elements, seems to receive confirming support from recent fMRI studies (Ben-Shahar, Hendler, Kahn, Ben-Bashat, & Grodzinsky, 2003; Fiebach, Schlesewsky, & Friederici, 2001).¹
¹ Note that while these studies indicate a functional interpretation of the anterior STG in terms of syntactic processes, others have reported that the STS reacts systematically to the ''intelligibility of speech,'' as the authors call the factor operationalized as normal versus voice vocoded speech (Scott, Blank, Rosen, & Wise, 2000). Scott and Johnsrude (2003) view the left anterior STS to be ''important in representing and accessing the meaning content of utterances.'' There are two possible interpretations with respect to the combined findings: either the anterior portion of the left STG and the left STS serve different functions, or both areas react to aspects of linguistic form, i.e., word form and syntactic form. Further research will have to clarify this issue.
The functional neuroanatomy of prosodic processes
has been approached in recent PET and fMRI studies.
The processing of pitch information at the syllable level was found to be associated with increased activation in the right prefrontal cortex (Wildgruber, Pihan, Ackermann, Erb, & Grodd, 2002), whereas the left frontal operculum adjacent to Broca's area was active when pitch was the relevant parameter to discriminate between lexical elements in a language such as Thai (Gandour et al., 2000). Processing pitch at the suprasegmental sentence level is primarily associated with an activation increase in the RH, although this lateralization can be modulated by task demands (Plante et al., 2002). A recent fMRI study in
which the presence of pitch information and the presence
of syntactic information in auditory speech stimuli were
varied systematically indicated that the right superior
temporal cortex and the right fronto-opercular cortex
specifically support the processing of suprasegmental information (Meyer et al., 2002; see also Meyer, Steinhauer, Alter, von Cramon, & Friederici, this issue).
Several proposals have been formulated to explain these findings regarding the hemispheric specialization in processing prosodic information. Most recently, Wildgruber et al. (2002) concluded that there is an RH dominance for affective prosody, in particular in the right parietal and dorso-lateral frontal cortex. Other aspects of prosodic processing, such as the extraction and encoding of acoustic features of the speech signal, may rather be supported by a bilateral network including the temporal cortex, the dorso-lateral frontal cortex and the supplementary motor area. This assumption is supported by recent work on the production of rhythm (Riecker, Wildgruber, Dogil, Grodd, & Ackermann, 2002). With respect to
linguistic prosody Gandour et al. (2000) reconsidered a
hypothesis first formulated by Shipley-Brown, Dingwall,
and Berlin (1988), namely the so-called attraction hypothesis. This hypothesis assumes that differential lateralization occurs as a result of interaction between the
acoustic property of the input and its function. Pitch in
isolation is processed in the RH. When pitch is used to
signal affective aspects it is processed in the RH (or at least
mediated by the RH). When pitch is used to signal linguistic aspects ‘‘it is drawn’’ toward the LH. In short, with
respect to both linguistic and para-linguistic prosody the
more non-segmental the acoustic feature, the more processing is lateralized to the RH. Stress, as a suprasegmental feature which, however, concerns the relation
between adjacent syllables, in contrast to phrasal or sentence level relations, is assumed to occupy an intermediate
position.
8. Conclusion
The combined studies using different methodologies
to examine the neural basis of syntax, semantics, and
prosody during language comprehension provide a clear
picture with respect to syntactic and semantic processes:
syntactic processes are supported by a left-lateralized
temporo-frontal network including the anterior portion
of the superior temporal gyrus and the pars opercularis
(BA 44/BA 6) in the inferior frontal gyrus, whereas semantic processes are subserved primarily by a left-lateralized temporo-frontal network consisting of the
posterior portion of the superior and middle temporal
gyrus and BA 45/47 in the inferior frontal gyrus. The
picture with respect to prosodic processes at the sentence
level appears to be more dynamic: pitch in isolation is
processed in the RH, but the more linguistic the nature
of either the stimulus or the task, the larger the involvement of the LH.
Here we hypothesize that this dynamic interplay is
supported by a close interaction between the LH and the
RH. Neuroanatomically, this interaction is assumed to
depend crucially on the corpus callosum, which interconnects the two hemispheres. This notion receives
support from a single case study of a patient
with a lesion in the anterior four-fifths of the corpus
callosum (Klouda et al., 1988), indicating that pitch information is primarily processed in the RH but that
during language comprehension this information is integrated with linguistic information from the LH via the
corpus callosum. Future research will have to specify
this interhemispheric interaction.
Acknowledgments
This study was supported by the Leibniz Science
Prize and by the project FR 517/2-3 awarded to A.F. by
the Deutsche Forschungsgemeinschaft (German Research Foundation, DFG) as well as by the Human
Frontier Science Program (HFSP) awarded to K.A.
Special thanks go to Sonja Kotz, Christian Fiebach,
Thomas Gunter and Martin Meyer for valuable comments on an earlier version.
References
Altmann, G. T. M., & Steedman, M. (1988). Interaction with context
during human sentence processing. Cognition, 30, 191–238.
Baum, S. R., Daniloff, J. K., Daniloff, R., & Lewis, J. (1982). Sentence
comprehension by Broca's aphasics: Effects of some suprasegmental variables. Brain and Language, 17, 261–271.
Beach, C. M. (1991). The interpretation of prosodic patterns at points
of syntactic structure ambiguity: Evidence for cue trading
relations. Journal of Memory and Language, 30, 644–663.
Behrens, S. (1985). The perception of stress and lateralization of
prosody. Brain and Language, 26, 332–348.
Ben-Shahar, M., Hendler, T., Kahn, I., Ben-Bashat, D., & Grodzinsky, Y. (2003). The neural reality of syntactic transformations:
Evidence from fMRI. Psychological Science, 14, 433–440.
Berndt, R. S., & Caramazza, A. (1980). A redefinition of the syndrome
of Broca's aphasia: Implications for a neuropsychological model of
language. Applied Psycholinguistics, 1, 225–278.
Blumstein, S., & Cooper, W. E. (1974). Hemispheric processing of
intonation contours. Cortex, 10, 146–158.
Blumstein, S., & Goodglass, H. (1972). The perception of stress as a
semantic cue in aphasia. Journal of Speech and Hearing Research,
15, 800–806.
Boland, J. E., & Tanenhaus, M. K. (1991). Advances in psychology.
In G. B. Simpson (Ed.), Understanding word and sentence (pp.
331–366). Amsterdam: North Holland Elsevier Science Publ.
Bradvik, B., Dravins, C., Holtas, S., Rosen, I., Ryding, E., & Ingvar,
D. (1991). Disturbances of speech prosody following right hemisphere infarcts. Acta Neurologica Scandinavica, 84, 114–126.
Bryan, K. (1989). Language prosody and the right hemisphere.
Aphasiology, 3, 285–299.
Cancelliere, A., & Hausdorf, P. (1988). Emotional expression in
Huntington's disease. Journal of Clinical and Experimental Neuropsychology, 10, 62.
Caplan, D., Alpert, N., & Waters, G. (1998). Effects of syntactic
structure and propositional number on patterns of regional
cerebral blood flow. Journal of Cognitive Neuroscience, 10, 541–
552.
Caplan, D., Alpert, N., Waters, G., & Olivieri, A. (2000). Activation of
Broca's area by syntactic processing under conditions of concurrent
articulation. Human Brain Mapping, 9, 65–71.
Caplan, D., & Hildebrandt, N. (1988). Disorders of syntactic comprehension. Cambridge, MA: MIT Press.
Caramazza, A., & Zurif, E. B. (1976). Dissociation of algorithmic and
heuristic processes in language comprehension: Evidence from
aphasia. Brain and Language, 3, 572–582.
Clifton, C., Speer, S., & Abney, S. P. (1991). Parsing arguments:
Phrase structure and argument structure as determinants of initial
parsing decisions. Journal of Memory and Language, 30, 251–271.
Cooper, W. E., & Paccia-Cooper, J. (1980). Syntax and speech.
Cambridge, MA: Harvard University Press.
Cutler, A., Dahan, D., & Van Donselaar, W. (1997). Prosody in the
comprehension of spoken language: A literature review. Language
and Speech, 40, 141–201.
Cutler, A., & Fodor, J. A. (1979). Semantic focus and sentence
comprehension. Cognition, 7, 49–59.
Cutler, A., & Norris, D. (1988). The role of strong syllables for lexical
access. Journal of Experimental Psychology: Human Perception and
Performance, 14, 121–133.
Dapretto, M., & Bookheimer, S. Y. (1999). Form and content:
Dissociating syntax and semantics in sentence comprehension.
Neuron, 24, 427–432.
Dronkers, N. F., Wilkins, D. P., Van Valin, R. D., Redfern, J. R., &
Jaeger, J. J. (1994). A reconsideration of the brain areas involved
in the disruption of morphosyntactic comprehension. Brain and
Language, 47, 461–463.
Federmeier, K. D., & Kutas, M. (2001). Meaning and modality:
Influences of context, semantic memory organization, and perceptual predictability on picture processing. Journal of Experimental
Psychology: Learning, Memory, and Cognition, 27, 202–224.
Ferreira, F. (1993). Creation of prosody during sentence production.
Psychological Review, 100, 233–253.
Fiebach, C. J., Schlesewsky, M., & Friederici, A. D. (2001). Syntactic
working memory and the establishment of filler-gap dependencies:
Insights from ERPs and fMRI. Journal of Psycholinguistic
Research, 30, 321–338.
Fiez, J. A. (1997). Phonology, semantics, and the role of the left
inferior prefrontal cortex. Human Brain Mapping, 5, 79–83.
Fodor, J. A. (1983). The modularity of mind. Cambridge, MA: MIT
Press.
Fodor, J. D. (1998). Learning to parse? Journal of Psycholinguistic
Research, 27, 285–319.
Fodor, J. D. (2002). Psycholinguistics cannot escape prosody. In B. Bel
& I. Marlien (Eds.), Proceedings of the 1st international conference
on speech prosody (pp. 83–88). University of Tokyo: Keikichi Hirose
Laboratory.
Fodor, J. D., & Inoue, A. (1994). The diagnosis and cure of garden
paths. Journal of Psycholinguistic Research, 23, 407–434.
Frazier, L. (1995). Constraint satisfaction as a theory of sentence
processing. Journal of Psycholinguistic Research, 24, 437–468.
Frazier, L., & Rayner, K. (1982). Making and correcting errors during
sentence comprehension: Eye movements in the analysis of structurally ambiguous sentences. Cognitive Psychology, 14, 178–210.
Friederici, A. D. (1985). Levels of processing and vocabulary types:
Evidence from on-line comprehension in normals and agrammatics. Cognition, 19, 133–166.
Friederici, A. D. (2002). Towards a neural basis of auditory sentence
processing. Trends in Cognitive Sciences, 6, 78–84.
Friederici, A. D., Kotz, S. A., Steinhauer, K., & von Cramon, D. Y.
(2003). The neural basis of the prosody–syntax interplay: The role
of the corpus callosum. Abstract for the Academy of Aphasia,
Vienna, Austria, October 19–21.
Friederici, A. D., Meyer, M., & von Cramon, D. Y. (2000). Auditory
language comprehension: An event-related fMRI study on the
processing of syntactic and lexical information. Brain and Language, 74, 289–300.
Friederici, A. D., Rüschemeyer, S.-A., Hahne, A., & Fiebach, C. J.
(2003). The role of left inferior frontal and superior temporal cortex
in sentence comprehension: Localizing syntactic and semantic
processes. Cerebral Cortex, 13, 170–177.
Friederici, A. D., Wang, Y., Herrmann, C. S., Maess, B., & Oertel, U.
(2000). Localization of early syntactic processes in frontal and
temporal cortical areas: A magnetoencephalographic study. Human
Brain Mapping, 11, 1–11.
Frisch, S., & Schlesewsky, M. (2001). The N400 reflects problems of
thematic hierarchizing. Neuroreport, 12, 3391–3394.
Gandour, J., & Dardarananda, R. (1983). Identification of tonal
contrasts in Thai aphasic patients. Brain and Language, 18, 98–114.
Gandour, J., Wong, D., Hsieh, L., Weinzapfel, B., Van Lancker, D., &
Hutchins, G. D. (2000). A crosslinguistic PET study of tone
perception. Journal of Cognitive Neuroscience, 12, 207–222.
Gee, J., & Grosjean, F. (1983). Performance structures: A psycholinguistic and linguistic appraisal. Cognitive Psychology, 15, 411–458.
Gleitman, L. R., & Wanner, E. (1982). Language acquisition: The state
of the art. In E. Wanner & L. R. Gleitman (Eds.), Language
acquisition: The state of the art. Cambridge: Cambridge University Press.
Gorrell, P. (1995). Syntax and parsing. Cambridge, England: Cambridge University Press.
Grosjean, F. (2000). The bilingual's language modes. In J. Nicol (Ed.),
One mind, two languages: Bilingual language processing (pp. 1–22).
Oxford: Blackwell.
Gunter, T. C., Friederici, A. D., & Schriefers, H. (2000). Syntactic
gender and semantic expectancy. ERPs reveal early autonomy and
late interaction. Journal of Cognitive Neuroscience, 12, 556–568.
Gunter, T. C., Knoblich, P., Bach, P., & Friederici, A. D. (2002).
Meaning and structure in action comprehension: Electrophysiological evidence. Journal of Cognitive Neuroscience, 37(Suppl), 80.
Haarman, H. J., & Kolk, H. H. J. (1991). A computer model of the
temporal course of agrammatic sentence understanding: The effects
of variation in severity and sentence complexity. Cognitive Science,
15, 49–87.
Hagoort, P. (1993). Impairments of lexical-semantic processing in
aphasia: Evidence from the processing of lexical ambiguities. Brain
and Language, 45, 189–232.
Hahne, A., & Friederici, A. D. (2002). Differential task effects on
semantic and syntactic processes as revealed by ERPs. Cognitive
Brain Research, 13, 339–356.
Heilman, K. M. (Ed.). (1995). Clinical Neuropsychology. New York:
Oxford University Press.
Hickok, G., & Poeppel, D. (2000). Towards a functional neuroanatomy of speech perception. Trends in Cognitive Sciences, 4, 131–138.
Hirsh-Pasek, K. (1987). Clauses are perceptual units for young
infants. Cognition, 26, 269–286.
Inui, T., Otsu, Y., Tanaka, S., Okada, T., Nishizawa, S., & Konishi, J.
(1998). A functional MRI analysis of comprehension processes of
Japanese sentences. NeuroReport, 9, 3325–3328.
Jusczyk, P. W. (1997). The discovery of spoken language. Cambridge,
MA: MIT Press.
Just, M. A., & Carpenter, P. A. (1992). A capacity theory of
comprehension: Individual differences in working memory. Psychological Review, 99, 122–149.
Just, M. A., Carpenter, P. A., Keller, T. A., Eddy, W. F., & Thulborn,
K. R. (1996). Brain activation modulated by sentence comprehension. Science, 274, 114–116.
Kaan, E., & Swaab, T. Y. (2002). The brain circuitry of syntactic
comprehension [Review]. Trends in Cognitive Sciences, 6, 350–356.
Klouda, G., Robin, D., Graff-Radford, N., & Cooper, W. (1988). The
role of callosal connections in speech prosody. Brain and Language,
35, 154–171.
Koelsch, S., Gunter, T. C., von Cramon, D. Y., Zysset, S., Lohmann, G., & Friederici, A. D. (2002). Bach speaks: A cortical ‘language-network’ serves the processing of music. NeuroImage, 17, 956–966.
Kuperberg, G. R., McGuire, P. K., Bullmore, E. T., Brammer, M. J.,
Rabe-Hesketh, S., Wright, I. C., Lythgoe, D. J., Williams, S. C. R.,
& David, A. S. (2000). Common and distinct neural substrates for
pragmatic, semantic, and syntactic processing of spoken sentences:
An fMRI study. Journal of Cognitive Neuroscience, 12, 321–341.
Kutas, M., & Federmeier, K. D. (2000). Electrophysiology reveals
semantic memory use in language comprehension. Trends in
Cognitive Sciences, 4, 463–470.
Kutas, M., & Hillyard, S. A. (1980). Reading senseless sentences: Brain
potentials reflect semantic incongruity. Science, 207, 203–204.
Lehiste, I. (1973). Phonetic disambiguation of syntactic ambiguity.
Glossa, 7, 107–122.
MacDonald, M. C. (1993). The interaction of lexical and syntactic
ambiguity. Journal of Memory and Language, 32, 692–715.
Maess, B., Koelsch, S., Gunter, T. C., & Friederici, A. D. (2001).
Musical syntax is processed in the area of Broca: An MEG study.
Nature Neuroscience, 4, 540–545.
Marcus, M., & Hindle, D. (1990). Psycholinguistic and computational
perspectives. In G. T. M. Altmann (Ed.), Cognitive models of speech
processing (pp. 483–512). Cambridge, MA: MIT Press.
Marslen-Wilson, W. D., & Tyler, L. K. (1980). The temporal structure
of spoken language understanding. Cognition, 8, 1–71.
Marslen-Wilson, W. D., Tyler, L. K., Warren, P., Grenier, P., & Lee,
C. S. (1992). Prosodic effects in minimal attachment. Quarterly
Journal of Experimental Psychology, 45A, 73–87.
Meyer, M., Alter, K., Friederici, A. D., Lohmann, G., & von Cramon,
D. Y. (2002). Functional MRI reveals brain regions mediating slow
prosodic modulations in spoken sentences. Human Brain Mapping,
17, 73–88.
Meyer, M., Steinhauer, K., Alter, K., von Cramon, D. Y., &
Friederici, A. D. (this issue). Brain activity varies with modulation
of dynamic pitch variance in sentence melody. Brain and Language,
Special Issue.
Milberg, W., & Blumstein, S. E. (1981). Lexical decision and aphasia:
Evidence for semantic processing. Brain and Language, 14, 371–385.
Münte, T. F., Szentkuti, A., Wieringa, B. M., Matzke, M., &
Johannes, S. (1997). Human brain potentials to reading syntactic
errors in sentences of different complexity. Neuroscience Letters,
235, 105–108.
Ni, W., Constable, R. T., Mencl, W. E., Pugh, K. R., Fulbright, R. K.,
Shaywitz, S. E., Shaywitz, B. A., Gore, J. C., & Shankweiler, D.
(2000). An event-related neuroimaging study distinguishing form
and content in sentence processing. Journal of Cognitive Neuroscience, 12, 120–133.
Osterhout, L., McKinnon, R., Bersick, M., & Corey, V. (1996). On the
language-specificity of the brain responses to syntactic anomalies:
Is the syntactic positive shift a member of the P300 family? Journal
of Cognitive Neuroscience, 8, 507–526.
Pannekamp, A., Toepel, U., Hahne, A., & Friederici, A. D. (2003).
The brain's response to hummed sentences. In M.-J. Solé, D.
Recasens, & J. Romero (Eds.), Proceedings of the 15th International
Congress of Phonetic Sciences, Barcelona, 3–9 August (pp. 877–
879). Universitat Autònoma de Barcelona.
Patel, A., Gibson, E., Ratner, J., Besson, M., & Holcomb, P. J. (1998).
Processing syntactic relations in language and music: An event-related
potential study. Journal of Cognitive Neuroscience, 10, 717–733.
Pell, M. (1996). On the receptive prosodic loss in Parkinson's disease.
Cortex, 32, 693–704.
Pell, M., & Baum, S. (1997). The ability to perceive and comprehend intonation in linguistic and affective contexts by brain-damaged adults. Brain and Language, 57, 80–99.
Perkins, J. M., Baran, J. A., & Gandour, J. (1996). Hemispheric
specialization in processing intonation contours. Aphasiology, 10,
343–362.
Pierrehumbert, J. (1980). The phonology and phonetics of English
intonation. Dissertation, MIT, Cambridge, MA, Indiana University
Linguistics Club.
Plante, E., Creusere, M., & Sabin, C. (2002). Dissociating sentential
prosody from sentence processing: Activation interacts with task
demands. NeuroImage, 17, 401–410.
Poeppel, D. (2003). The analysis of speech in different temporal
integration windows: Cerebral lateralization as ‘asymmetric sampling in time’. Speech Communication, 41, 245–255.
Poldrack, R. A., Wagner, A. D., Prull, M. W., Desmond, J. E., Glover,
G. H., & Gabrieli, J. D. E. (1999). Functional specialization for
semantic and phonological processing in the left inferior prefrontal
cortex. NeuroImage, 10, 15–35.
Price, C. J., Moore, C. J., Humphreys, G. W., & Wise, R. J. S. (1997).
Segregating semantic from phonological processes during reading.
Journal of Cognitive Neuroscience, 9, 727–733.
Riecker, A., Wildgruber, D., Dogil, G., Grodd, W., & Ackermann, H.
(2002). Hemispheric lateralization effects of rhythm implementation
during syllable repetitions: An fMRI study. NeuroImage, 16, 169–
176.
Schubotz, R. I., & von Cramon, D. Y. (in press). Learning serial order
of interval, spatial and object information: An fMRI-study on
sequencing. 10th Annual Conference of the Rotman Research
Institute, The Frontal Lobes, Toronto, Canada, March 2000. To
appear in Brain & Cognition.
Scott, S. K., Blank, C. C., Rosen, S., & Wise, R. J. S. (2000).
Identification of a pathway for intelligible speech in the left
temporal lobe. Brain, 123, 2400–2406.
Scott, S. K., & Johnsrude, I. S. (2003). The neuroanatomical and
functional organization of speech perception. Trends in Neurosciences, 26, 100–107.
Shipley-Brown, F., Dingwall, W. O., & Berlin, C. I. (1988). Hemispheric processing of affective and linguistic intonation contours in
normal subjects. Brain and Language, 33, 16–26.
Steinhauer, K., Alter, K., & Friederici, A. D. (1999). Brain potentials
indicate immediate use of prosodic cues in natural speech processing. Nature Neuroscience, 2, 191–196.
Steinhauer, K., & Friederici, A. D. (2001). Prosodic boundaries,
comma rules, and brain responses: The Closure Positive Shift in the
ERPs as a universal marker for prosodic phrasing in listeners and
readers. Journal of Psycholinguistic Research, 30, 267–295.
Streeter, L. A. (1978). Acoustic determinants of phrase boundary
perception. Journal of the Acoustical Society of America, 64, 1582–
1592.
Stromswold, K., Caplan, D., Alpert, N., & Rauch, S. (1996).
Localization of syntactic comprehension by positron emission
tomography. Brain and Language, 52, 452–473.
Swaab, T. Y., Brown, C., & Hagoort, P. (1995). Delayed integration of
lexical ambiguities in Broca's aphasics: Evidence from event-related
potentials. Brain and Language, 51, 159–161.
Swaab, T. Y., Brown, C., & Hagoort, P. (1998). Understanding
ambiguous words in sentence contexts: Electrophysiological evidence for delayed contextual selection in Broca's aphasia. Neuropsychologia, 36, 737–761.
Thompson-Schill, S. L., D'Esposito, M., Aguirre, G. K., & Farah, M. J.
(1997). Role of the left inferior prefrontal cortex in retrieval of
semantic knowledge: A re-evaluation. Proceedings of the National
Academy of Sciences of the United States of America, 94, 14792–
14797.
Ullman, M. T. (2001). A neurocognitive perspective on language: The
declarative/procedural model. Nature Reviews Neuroscience, 2,
717–726.
Vandenberghe, R., Price, C., Wise, R., Josephs, O., & Frackowiak, R.
S. J. (1996). Functional anatomy of a common semantic system for
words and pictures. Nature, 383, 254–256.
Van Lancker, D., & Fromkin, V. A. (1973). Hemispheric specialization
for pitch and ‘‘tone’’: Evidence from Thai. Journal of Phonetics, 1,
101–109.
Van Lancker, D., & Sidtis, J. J. (1992). The identification of affective-prosodic stimuli by left- and right-hemisphere-damaged subjects:
All errors are not created equal. Journal of Speech and Hearing
Research, 35, 963–970.
Warren, P., Grabe, E., & Nolan, F. (1995). Prosody, phonology, and
parsing in closure ambiguities. Language and Cognitive Processes,
10, 457–486.
Weintraub, S., Mesulam, M.-M., & Kramer, L. (1981). Disturbances
in prosody: A right-hemisphere contribution to language. Archives
of Neurology, 38, 742–744.
West, W. C., & Holcomb, P. J. (2002). Event-related potentials during
discourse-level semantic integration of complex pictures. Cognitive
Brain Research, 13, 363–375.
Whitehouse, P., Caramazza, A., & Zurif, E. (1978). Naming in
aphasia: Interacting effects of form and function. Brain and
Language, 6, 63–74.
Wildgruber, D., Pihan, H., Ackermann, H., Erb, M., & Grodd, W.
(2002). Dynamic brain activation during processing of emotional
intonation: Influence of acoustic parameters, emotional valence,
and sex. NeuroImage, 15, 856–869.
Wise, R., Chollet, F., Hadar, U., Friston, K., Hoffner, E., &
Frackowiak, R. (1991). Distribution of cortical neural networks
involved in word comprehension and word retrieval. Brain, 114,
1803–1817.
Zatorre, R. J., Belin, P., & Penhune, V. B. (2002). Structure and
function of auditory cortex: Music and speech. Trends in Cognitive
Sciences, 6, 37–46.
Zatorre, R. J., Evans, A. C., Meyer, E., & Gjedde, A. (1992).
Lateralization of phonetic and pitch processing in speech perception. Science, 256, 846–849.