
To appear in: P. Kosta et al. (eds.). Minimalism and Beyond: Radicalizing the Interfaces.
Amsterdam: John Benjamins.
What is and what is not problematic about the T-model
Natalia Slioussar1
Abstract
This paper focuses on two important discrepancies between the T-model of the grammar and
performance systems responsible for production and comprehension. It argues that independently
from the assumed perspective on the competence-performance distinction, one of them is not
problematic and the other is. There is no real contradiction in directionality conflicts, i.e. in the
fact that the grammar works strictly bottom-up, while performance systems involve many top-down processes. However, the fact that the computational system takes only lexical items and
their features as its input presents a real problem, which manifests itself in the domains of scope
and Information Structure. This problem can be solved in the grammar architecture where the C-I
interface can be used during the derivation.
Acknowledgements
The Veni grant 016.104.065 from the Netherlands Organization for Scientific Research (NWO) is
gratefully acknowledged.
1 Utrecht Institute of Linguistics OTS,
Trans 10, 3512JK Utrecht, the Netherlands,
and St. Petersburg State University,
Universitetskaya emb. 11, 199034 St.Petersburg, Russia
Tel.: +31 30 253 6006
Fax: +31 30 253 6000
E-mail: [email protected]
1. Introduction
The grammar model assumed in (Chomsky 1995, 2001, 2008) and most other minimalist theories
is a bottom-up derivational model that takes lexical items, constructs a syntactic structure out of
them and sends this structure to the SM and C-I interfaces. This architecture is known as the T-model. In comprehension, we move from left to right, recovering syntactic structures with their
meanings from linear strings. What happens in production is more controversial, but it is widely
assumed that there are at least some top-down left-to-right processes there.
Thus, the process of syntactic derivation in the T-model is not isomorphic to the processes
that take place during production and comprehension. Prima facie, how problematic this is
depends on one’s perspective on the competence-performance distinction. Most generative
linguists view the grammar and performance systems underlying production and comprehension
as separate systems, which means that the relations between them can be rather indirect. For other
authors, the grammar and performance systems are essentially theories of the same object, but at
different levels of description, which implies much closer structural resemblance.
However, in this paper I argue that some aspects of this non-isomorphism are not problematic
while others are, whatever perspective on the competence-performance distinction is assumed.
Directionality conflicts (bottom-up processes in the grammar and top-down processes in
performance systems) are unproblematic, while the absence of any dialogue between syntax and
semantics before the derivation is completed and the resulting indeterminacy of the grammar with
respect to anything that is not encoded by features present a real problem.
2. Competence-performance distinction
This section provides some background on the competence-performance distinction that will be
necessary for subsequent discussion. This distinction was introduced by Chomsky (1965: 3):
Linguistic theory is concerned primarily with an ideal speaker-listener, in a completely
homogeneous speech-community, who knows its language perfectly and is unaffected by
such grammatically irrelevant conditions as memory limitations, distractions, shifts of
attention and interest, and errors (random or characteristic) in applying his knowledge of the
language [i.e. linguistic competence] in actual performance.
Similar ideas were expressed earlier — e.g. de Saussure distinguished langue ‘language’ and
parole ‘speech’. However, de Saussure’s langue refers to an abstract set of rules, which are
independent of individual users, while parole refers to the concrete instances of language use. For
Chomsky, linguistic competence is the property of an individual language user. Essentially, this
notion is synonymous with an individual grammar.
This is the view on the grammar that I focus on in this paper, so let me briefly show that it
does not exclude other possible views and vice versa. Having adopted this view, we can still
speak of the grammar in the Saussurean sense: the set of rules shared by the speakers of the same
language at a particular stage of its development. However, in reality defining such a set may be
problematic. Firstly, languages are constantly changing, and some groups of speakers are more
conservative than others. Secondly, the borders between closely related languages and
dialects are rarely absolute; we usually deal with linguistic continua. Thus, wherever we
encounter variation, we are forced to consider smaller groups of speakers and eventually may go
down to individual grammars. Nevertheless, the Saussurean view on the grammar is still very
useful because, when discussing such variation, we want to specify which variant is regarded as
normative, which is more widespread or more recent, etc.
We can also speak of the whole set of possibilities existing in human languages. In the
generative tradition, it is usually termed universal grammar (UG) and described by means of
principles capturing what all languages have in common and parameters defining possible
differences between them. What the relation is between the UG in this sense and individual
grammars is a difficult question. It is widely assumed that language acquisition is possible
because children have access to UG. Then a mature individual grammar at a particular point (after
all, individual grammars also undergo changes) can be seen as a variant of UG where all parameters are set. This
imposes very important restrictions on possible theories of linguistic competence, but I will not
discuss them in this paper.
Let me only note that one can definitely study UG separately from individual grammars,
figuring out how different parameter settings are grouped crosslinguistically, which options are
more or less widespread, why this could be so, and what generalizations can be made about
language change. On the other hand, once the parameters are set, possible variation in UG
presumably plays no role for individual grammars — one does not consider possible parameter
combinations every time one utters or comprehends a sentence. Therefore the theories
describing this variation and some generalizations behind it, for example, optimality-theoretic
constraint systems, may have no relevance at the individual grammar level.
Contrasting linguistic competence, or the individual grammar, and performance was crucial
for Chomsky not only to justify the fact that linguists often work with idealized objects — in all
empirical sciences it is customary to abstract away from certain factors at higher levels of
analysis — but also to stress that competence cannot be reduced to performance. Many linguists
consider this uncontroversial: linguistic theory should be interested not only in the sentences one
has actually produced or comprehended, but also in the infinite set of sentences one can produce and
comprehend, as well as in the limits of this set. What is more controversial is how exactly the
distinction between competence and performance should be implemented.
Neeleman and van de Koot (2010) outline two possible approaches to this problem.
According to the first one, the grammar is a knowledge base consulted by performance systems
responsible for production and comprehension. According to the second approach, the grammar
and performance systems can be seen as theories of the same object, but at different levels of
description. Marr (1982) demonstrated that information-processing systems must be understood
at three levels: (1) the logical structure of the mapping that they carry out from one type of
information to another; (2) the algorithm that yields the desired input-output mapping; (3) the
physical realization of this algorithm and its input and output. In the case of language, the grammar
corresponds to the first level and performance systems to the second.
The majority of generative linguists assume the first approach mentioned above. For example,
Chomsky (2000: 117) notes: “There is good evidence that the language faculty has at least two
different components: a ‘cognitive system’ that stores information in some manner, and
performance systems that make use of this information.” Neeleman and van de Koot argue for the
second approach, and I largely agree with their argumentation. In particular, they show that it
does not trivialize the grammar and does not reduce competence to performance. For example,
under this approach the grammar as the logical level is expected to be optimal, while the
algorithmic level may well be highly redundant if this yields faster and more robust results in
language production and comprehension.
In brief, my main reason for adopting this approach is the following. Let us look at
comprehension models. The earliest parsers and some recently proposed ones rely on heuristic
strategies (e.g. De Vincenzi 1991; Fodor and Inoue 1995; Frazier and Clifton 1996; Frazier and
Fodor 1978). Heuristic strategies, which usually appeal to syntactic simplicity, make use of the
core grammatical knowledge. However, several authors argue very convincingly that we also
crucially rely on much more complex grammatical principles when we parse and develop models
reflecting that (e.g. Phillips 1996; Pritchett 1992; Schneider 1999; Schneider and Phillips 2001).
If such grammatical principles must be built into our parsers, a separate grammar module, which
they can consult, becomes superfluous. The same can be shown for production models — they
are simply less elaborate than parsers so far.
Obviously, the two approaches outlined above offer fundamentally different views on the
competence-performance distinction and the nature of the individual grammar. Under the first
view, the relation between the grammar and performance systems can be rather indirect, while
the second presupposes deep structural parallelism. However, in the next section I will discuss
certain differences between the T-model of the grammar and production and comprehension
systems, and will show that some of them are not problematic under either view of the grammar,
while others are.
3. Potential problems with the T-model: the directionality problem
Most authors working in the minimalist framework rely on the T-model of the grammar. This
model takes lexical items (words, morphemes or submorphemic units, depending on the theory)
as its input, constructs a syntactic structure out of them and sends this structure to the SM and C-I
interfaces. The derivation proceeds bottom-up, i.e. largely from right to left if we consider
linearized strings.
What is the sequence of steps in performance systems? In comprehension, we gradually
recover syntactic structures with their meanings from linear strings moving from left to right.
What happens in production is more difficult to determine. Existing models, such as (Levelt
1993), claim only that we go from intention to articulation, i.e. from meaning to a linear string,
through a formulator that has access to the lexicon and uses grammatical rules, but do not specify
how exactly this formulator works. Obviously, we cannot start with the full-fledged meaning of a
sentence and then build the corresponding syntactic structure because the former, being
compositional, relies on the latter. Most probably, syntax and semantics go hand in hand: we
elaborate our intention while building the syntactic structure of the sentence. And there is plenty
of evidence suggesting that we can start from different things in this process and often move top-down, from left to right.
Let us discuss one such piece of evidence coming from Russian. The analysis of speech
disfluencies and errors, in particular in Rusakova (2009), shows that people often start sentences
with a DP in Nominative or Accusative and then change it to a DP in an inherent case or a PP, as
in (1a-c). The opposite, as in (2), happens very rarely. Wherever introspective reports are
available, people say that they had a particular predicate in mind and then changed their intentions
in the latter cases, but very often cannot point to a particular discarded predicate in the former.
(1) a. Kakuju strategiju… kakoj my budem priderživat'sja?
       which.ACC strategy.ACC which.INSTR we will adhere
       ‘Which strategy will we adhere to?’ (a DPACC was changed to a DPINSTR)
    b. Poėtomu ėti voprosy tože nado otvečat'.
       therefore these.ACC questions.ACC also necessary to-answer
       ‘Therefore it is also necessary to answer these questions’ (a DPACC was used instead
       of a PP with the preposition na ‘to’)
    c. Ona uže Zajcevu… proigrala Zajcevoj.
       she already Zajceva.ACC lost Zajceva.DAT
       ‘She has already lost to Zajceva’ (a DPACC was changed to a DPDAT)
(2) Ja ne hoču, čtoby moimi den'gami kto-to kontroliroval.
    I NEG want that my.INSTR money.INSTR somebody controlled
    ‘I do not want anybody to control my money’ (a DPINSTR was used instead of a DPACC;
    according to the introspective report, the speaker was initially going to use the verb
    rasporjažat’sja ‘to dispose of’)
These data suggest that in production, people can choose arguments before choosing the
predicate and provisionally assign them structural cases. Presumably, this leads to overt errors,
as in (1a-c), only in a small percentage of sentences because most predicates are indeed used
with structural cases and many errors are repaired as a result of the internal feedback loop before
the sentence is pronounced.
To conclude, the process of syntactic derivation in the T-model is not isomorphic to the
processes that take place during production and comprehension. Is this a problem? Some linguists
tend to think so and propose alternative grammar models relying (at least partly) on top-down
derivation (e.g. Phillips 1996; Richards 1999; Uriagereka 2011; Zwart 2009). Needless to say,
other grammar architectures have also been proposed that differ from the T-model in other
important ways — for example, Jackendoff’s (1997) multi-layered model, but I will not discuss
them here.
I do not think that the bottom-up directionality of the grammar is problematic even if the
grammar and performance systems are seen as different levels of the same system. On the
contrary, I believe that bottom-up models are better suited to describe the core grammatical
processes like constituent building and long-distance dependency formation. I find Epstein’s
(1999) explanation of c-command, which relies on bottom-up structure building, very insightful.
How can this be reconciled with top-down processes in production and comprehension?
Proving a theorem can be a good analogy. We can first decide what we need to prove and then
select the axioms to rely on. However, we know from the very start that the axioms are there, and
when particular axioms are finally chosen, they will precede the conclusions in the internal
hierarchy of the proof.
Similarly, when uttering an argument, speakers might not have a particular verb in mind, as in
(1a-c), but some abstract schematic structure might already be projected — this would explain
how structural cases are assigned in such situations. In my view, the most interesting question,
which still has to be addressed, is what this preliminary abstract structure might look like and
where it comes from: in particular, how detailed it is, what projections it may and must include
before the lexical material is chosen, whether it contains lower copies of arguments that were
pronounced before the predicate is selected, whether there is some universal template or a
number of them etc. Another noteworthy question is whether anything similar is preliminarily
projected during comprehension. What happens once the verb and other material are decided
upon is more or less clear: they occupy a lower place in the syntactic tree and essentially precede
the fronted argument in the internal hierarchy of the derivation. Thus, as long as derivational
timing is not confused with real timing — which regularly happens, as Neeleman and van de
Koot (2010) note — there is no contradiction here.
4. Potential problems with the T-model: the input problem
As I showed in the previous section, the fact that the grammar works strictly bottom-up, while
there are many top-down processes in the performance systems, does not seem problematic to me.
But another core property of the T-model does. It takes as input lexical items and their features
and does not allow for any dialogue between syntax and semantics: first the grammar fully
completes a syntactic structure, and then the C-I systems can interpret it. Meanwhile, as I noted
above, syntax and semantics most probably go hand in hand during production: the process is
initiated on the C-I side — we start with an intention — but then the grammar regularly takes over
the initiative, dictating which aspects of the initial vague thought should be clarified and
obligatorily encoded in syntax. How exactly this happens is another fascinating question that we
know very little about.
By itself, the absence of dialogue between syntax and semantics in the T-model is not
necessarily problematic. As I showed above, the sequence of steps in the grammar does not have
to coincide with the sequence of steps in performance systems. What I do see as a problem is that
the T-model has to remain indeterministic with respect to any information that is not encoded by
lexical items and their features.
One thing that is standardly assumed not to be encoded by features is scope. As a result, all
theories of scope involve indeterminacy, overgeneration or look-ahead, all of which contradict
the core minimalist principles and compromise the T-model architecture. A classical English
example in (3) is ambiguous with respect to scope:
(3) Somebody loves everybody. ∃>∀, ∀>∃
For this ambiguity to arise, quantifiers must be able to raise in different orders. How is the order
decided upon? For example, Fox (1995) has to conclude that syntax must see the semantic effects
of the relative scope of two quantifiers. Other authors (e.g. Reinhart 2006) opt for indeterminacy:
the grammar allows for both options and which quantifier ends up higher is accidental. The
problem might be even more evident in languages like Russian. Russian has surface scope (with
some rare exceptions), and different scopal configurations are achieved by overt movement
(Ionin 2001; Neeleman and Titov 2009; Slioussar 2011):
(4) a. Odin mal’čik ljubit každuju devočku. ∃>∀, *∀>∃
       one.NOM boy.NOM loves every.ACC girl.ACC
    b. Odnu devočku ljubit každyj mal’čik. ∃>∀, *∀>∃
       one.ACC girl.ACC loves every.NOM boy.NOM
    c. Každuju devočku ljubit odin mal’čik. ∀>∃, *∃>∀
       every.ACC girl.ACC loves one.NOM boy.NOM
These movements do not target specific positions. In (4b-c), objects move to the C domain to
scope above subjects, but other reorderings are also possible: for example, DP and PP internal
arguments can be reordered inside vP. What drives these movements remains unclear. Despite the
importance of this question in the minimalist framework, most authors simply do not address it.
Slioussar (2011) suggests that these movements are triggered by edge features. Introducing edge
features, Chomsky (2008) stated that they can attract any constituent if it is not prohibited for
independent reasons, and the interpretation of the moved element depends on its final position.
This is exactly what we see in (4b-c) and other similar cases. However, if we stay inside the T-model, one problem remains: we have to assume that the grammar allows for a wide variety of
movements to happen freely, but only when the sentence is completed and interpreted at the C-I
interface can we see what the effect of one or the other movement was and whether we actually
need it.
Obviously, this problem never arises during production: once two relevant elements are
selected, we know which scopes over which. If syntax and semantics were allowed to talk
through the interface while the syntactic structure is constructed, a simple rule “if A scopes over B,
move A over B” would be enough. In other words, this problem is not intrinsic to scope
encoding; it is an artifact of the T-model. Another domain where similar problems arise is
Information Structure (IS). I will discuss it in detail in the next section.
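The architectural contrast at issue can be caricatured in a few lines of Python. The sketch below is purely illustrative: all names and data structures are my own inventions and implement no actual grammatical formalism. The first function stands in for the T-model's overgeneration of raising orders, whose effects become visible only after the derivation is complete; the second stands in for a derivation that can consult the C-I interface and apply the rule "if A scopes over B, move A over B" directly.

```python
# A deliberately toy illustration; nothing here implements a real theory.
from itertools import permutations

def tmodel_scope_orders(quantifiers):
    """T-model caricature: raising applies freely, so the grammar
    generates every relative order; only at the C-I interface, after
    the derivation is finished, can one see which order was needed."""
    return [list(p) for p in permutations(quantifiers)]  # overgeneration

def interface_scope_order(quantifiers, scopes_over):
    """Caricature of a derivation with C-I access: while the structure
    is being built, one deterministic rule applies:
    'if A scopes over B, move A over B'."""
    order = list(quantifiers)
    changed = True
    while changed:  # repeat until no rule application is possible
        changed = False
        for i in range(len(order) - 1):
            a, b = order[i], order[i + 1]
            if (b, a) in scopes_over:  # the C-I side says b scopes over a
                order[i], order[i + 1] = b, a  # so move b over a
                changed = True
    return order

# The T-model caricature generates both orders of (3) and decides later:
print(tmodel_scope_orders(["∃", "∀"]))  # [['∃', '∀'], ['∀', '∃']]
# With interface access, the intended reading fixes the order directly:
print(interface_scope_order(["∃", "∀"], {("∀", "∃")}))  # ['∀', '∃']
```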
I believe that the grammar model should not solve problems that are never encountered in
production and comprehension, especially if solutions to them do not come for free and
require various undesirable modifications or reservations to be made. This is true even if we view
the grammar and performance systems as two separate systems, i.e. the relation between them
may be rather indirect. After all, under this view the grammar is a knowledge base consulted
by performance systems, so it is strange that it runs into trouble answering questions that these
systems would never ask.
Therefore, I suggest that the grammar architecture should be modified so that the syntax-semantics interface could be used during the derivation. As far as I understand, the basic intuition
behind the T-model is that the grammar model should primarily describe what we can do rather
than what we actually do in production and processing. But I do not think that this intuition will
be lost after the suggested modification. In fact, this modification will not compromise any core
principles of the generative framework, such as architectural economy (the grammar would still
represent a single computational system with two conceptually indispensable interfaces),
autonomy of syntax (semantics would still be able to talk to syntax only through the interface) or
inclusiveness, which is discussed in more detail below. Of course, the core idea behind the T-model is beautiful in its simplicity: give the grammar a set of words and see what it can do with
them. But if the grammar is never given just a set of words, and other incoming information is
exceedingly difficult to abstract away from, one should consider sacrificing this idea.
5. A sample case: problems in the Information Structure domain
In most generative models (e.g. Bródy 1990, 1995; Laka 1990; Ouhalla 1994; Rizzi 1997;
Tsimpli 1995; Tuller 1992; Vilkuna 1995), IS notions are encoded by features similar to other
syntactic features: Top, F etc. For example, an F feature triggers overt or covert movement to a
dedicated syntactic position [Spec; FocP]. In some languages, it is spelled out as a regular
morpheme, while in others (including all European ones) it is realized as the main stress. Other authors
assume that the F feature is different from other syntactic features (e.g. Jackendoff 1972; Rooth
1985, 1992; Selkirk 1984, 1995; Büring 2006). It only attracts stress and does not trigger
movement.
Obviously, there are many differences between these two approaches. For example, the
former is challenged by in situ foci: Rochemont (1986) and Szendrői (2005) present convincing
arguments against covert movement in such cases. The latter is by definition not suitable to
describe IS-related word order alternations. However, there is one problem that they share: it is
unclear how IS features are put on lexical items. Chomsky (1995: 228) introduced the
Inclusiveness principle, which forms the core of the T-model:
A ‘perfect language’ should meet the condition of Inclusiveness: any structure formed by the
computation [...] is constituted of elements already present in the lexical items selected for N
[numeration]; no new objects are added in the course of the computation apart from
rearrangements of lexical properties.
Let us take focus as an example. Usually, constituents rather than single words are focused.
Introducing an F feature on a constituent violates Inclusiveness. Putting it on a lexical item and
allowing it to percolate is also problematic. Lexical items inside the focused constituent have no
property corresponding to F (they are not focused per se, they are part of the focus). Even when a
single word is focused, it would be strange to assume it has ‘forms’ inherently specified for IS
features, like case forms or tense forms.
Alternatives to feature-based approaches rely on configurations. Most configurational IS
models are prosody-oriented (e.g. Reinhart 1995, 2006; Neeleman and Reinhart 1998; Costa
1998, 2004; Szendrői 2001, 2005). In these models IS-related word order alternations that cannot
be captured by base-generation are seen as movements into or out of the main stress position,
which correlates with focus and givenness or D-linkedness. Several other models are syntax-oriented (e.g. Neeleman and van de Koot 2008; Slioussar 2007, 2010, 2011). Neeleman and van
de Koot (2008) claim it is advantageous for interface mapping if topics and foci correspond to
syntactic constituents and topics c-command foci. Slioussar (2011) argues that IS-related
reorderings encode relative accessibility (whether A is more or less accessible than B) and
contrast (broadly conceived), rather than topics and foci.
All configurational approaches agree that IS-related movement does not target dedicated
syntactic positions and is not driven by specialized features, but have difficulties explaining how
exactly it is triggered. Some authors simply ignore this question; others allow for free non-feature-driven movement; Slioussar (2007, 2010, 2011) relies on edge features. However,
whether non-feature-driven movement or edge features are used, the models face the same
problem as in the case of scope discussed above. The grammar remains indeterministic with
respect to IS: some reorderings may or may not take place, and their effect will become clear
only when the finalized derivation is interpreted at the C-I interface.
Now let us see whether the possibility of using the C-I interface during the derivation that
was suggested above for scope encoding will solve the problems faced by different approaches to
IS. The situation will improve for feature-based models because it will become possible to access
constituents during the derivation. But putting IS features on these constituents will still violate
the Inclusiveness principle, which lies at the heart of the generative framework.
For prosody-oriented configurational models to work, semantics should be able to talk not
only to syntax, but also to prosody. The authors developing such models either do not explain
how exactly this can happen (e.g. Neeleman and Reinhart 1998; Reinhart 1995, 2006) or opt for
grammar architectures that are substantially different from the T-model. For example, Szendrői
(2001) assumes that syntactic and prosodic structures are two separate levels of the computational
system that are coordinated by a set of mapping rules and are both accessible at the C-I interface.
Obviously, this system is less parsimonious than the standard generative one.
Finally, the possibility to use the C-I interface while the sentence is constructed can be
enough for syntax-oriented configurational models to function smoothly. For example, Slioussar
(2011) observes that in Russian, if a constituent A is more accessible than a constituent B, A is
moved to the first available position above B (unless it already is above B). But similar
movements can also take place to encode scope or contrast. In the T-model, one can work only
with resulting configurations, and the rules for interpreting them become rather cumbersome: if a
constituent A moved over a constituent B, A scopes over B, and/or A is more accessible than B,
and/or B is contrasted. This is a genuine ambiguity that arises in comprehension (although it can
be easily resolved in context), but it would be strange to assume that people face similar
complications in production. They simply wait until A and B are constructed in the process of
derivation and make sure that A is above B if A is more accessible than B. If syntax and
semantics can talk during the derivation, a very simple rule will suffice to describe this: if A is
more accessible than B, move A over B. As a result, the model will also get rid of undesirable
indeterminacy.
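The asymmetry between comprehension and production just described can also be made concrete in a toy sketch. Again, the names and data structures below are illustrative inventions only, not an implementation of any theory: in comprehension, one observed reordering maps to a set of candidate readings that context must narrow down, whereas in production with interface access each interpretive instruction maps to exactly one movement.

```python
# Toy sketch of the interpretive ambiguity discussed above; illustrative only.

def interpret_movement(a, b):
    """Comprehension side: observing that A has moved over B is compatible
    with several readings, which must be narrowed down in context."""
    return {
        f"{a} scopes over {b}",
        f"{a} is more accessible than {b}",
        f"{b} is contrasted",
    }

def produce_order(a, b, a_more_accessible):
    """Production side with C-I access: the single deterministic rule
    'if A is more accessible than B, move A over B'."""
    return [a, b] if a_more_accessible else [b, a]

print(len(interpret_movement("A", "B")))  # 3 candidate readings
print(produce_order("A", "B", True))      # ['A', 'B']
```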
Thus, all approaches to IS will benefit from the possibility of dialogue between syntax and
semantics, but syntax-oriented configurational models are in the best position. For them, this
modification of the grammar architecture is enough to solve all problems. Slioussar (2011)
discusses other arguments in favor of such models, but they are outside of the scope of this paper.
6. Conclusions
In this paper, I identify two important discrepancies between the T-model of the grammar, which
is assumed in the majority of minimalist studies, and performance systems responsible for
production and comprehension. First, the process of syntactic derivation proceeds bottom-up in
the T-model, while performance systems involve many top-down processes. Second, the T-model
takes as input only lexical items and their features and does not allow for any dialogue between
syntax and semantics before the derivation is completed. However, in production syntax and
semantics most probably go hand in hand: we elaborate our intention while building the syntactic
structure of the sentence.
I argue that the first discrepancy is not problematic, while the second one is, and this does not
depend on the chosen perspective on the competence-performance distinction. More specifically,
the absence of dialogue between syntax and semantics does not necessarily pose any challenges
to the T-model, but the fact that the grammar has to remain indeterministic with respect to any
information that is not encoded by lexical items and their features leads to serious problems in the
domains of scope and Information Structure. To solve these problems, I suggest modifying the
grammar architecture so that the C-I interface could be used during the derivation. I show that
this modification does not compromise any core principles of the generative framework, such as
architectural economy, autonomy of syntax or inclusiveness.
References
Bródy, Mihály. 1990. “Some remarks on the focus field in Hungarian.” In UCL Working Papers
in Linguistics 2, John Harris (ed), 201-225. London: University College London.
Bródy, Mihály. 1995. “Focus and checking theory.” In Levels and Structures, Approaches to
Hungarian 5, István Kenesei (ed), 31-43. Szeged: JATE.
Büring, Daniel. 2006. “Focus projection and default prominence.” In The Architecture of Focus,
Valéria Molnár and Susanne Winkler (eds), 321-346. Berlin: Mouton de Gruyter.
Chomsky, Noam. 1965. Aspects of the Theory of Syntax. Cambridge, MA: MIT Press.
Chomsky, Noam. 1995. The Minimalist Program. Cambridge, MA: MIT Press.
Chomsky, Noam. 2000. New Horizons in the Study of Language and Mind. Cambridge:
Cambridge University Press.
Chomsky, Noam. 2001. “Derivation by phase.” In Ken Hale. A Life in Language, Michael
Kenstowicz (ed), 1-52. Cambridge, MA: MIT Press.
Chomsky, Noam. 2004. “Beyond explanatory adequacy.” In Structures and Beyond, Adriana
Belletti (ed), 104-131. Oxford: Oxford University Press.
Chomsky, Noam. 2008. “On phases.” In Foundational Issues in Linguistic Theory, Robert
Freidin, Carlos P. Otero and Maria-Luisa Zubizarreta (eds), 133-166. Cambridge, MA: MIT
Press.
Costa, João. 1998. Word Order Variation. A Constraint-based Approach. Doctoral dissertation,
University of Leiden.
Costa, João. 2004. Subject Positions and Interfaces: The Case of European Portuguese. Berlin:
Mouton de Gruyter.
De Vincenzi, Marica. 1991. Syntactic Parsing Strategies in Italian. The Minimal Chain
Principle. Dordrecht: Kluwer.
Epstein, Samuel. 1999. “Un-principled syntax: The derivation of syntactic relations.” In Working
Minimalism, Samuel Epstein and Norbert Hornstein (eds.), 317-345. Cambridge, MA: MIT
Press.
Fodor, Janet Dean and Inoue, Atsu. 1995. “The diagnosis and cure of garden paths.” Journal of
Psycholinguistic Research 23: 407-434.
Fox, Danny. 1995. “Economy and scope.” Natural Language Semantics 3: 283-300.
Frazier, Lyn and Clifton, Charles. 1996. Construal. Cambridge, MA: MIT Press.
Frazier, Lyn and Fodor, Janet Dean. 1978. “The Sausage Machine: A new two-stage parsing
model.” Cognition 6: 291-325.
Ionin, Tanya. 2001. Scope in Russian: Quantifier movement and discourse function. Ms., MIT.
Jackendoff, Ray S. 1972. Semantic Interpretation in Generative Grammar. Cambridge, MA: MIT
Press.
Jackendoff, Ray S. 1997. The Architecture of the Language Faculty. Cambridge, MA: MIT Press.
Laka, Itziar. 1990. Negation in Syntax. Doctoral dissertation, MIT.
Levelt, Willem J. M. 1993. Speaking: From Intention to Articulation. Cambridge, MA: MIT Press.
Marr, David. 1982. Vision. New York: W. H. Freeman.
Neeleman, Ad and Reinhart, Tanya. 1998. “Scrambling and the PF interface.” In The Projection
of Arguments: Lexical and Compositional Factors, Miriam Butt and Wilhelm Geuder (eds),
309-353. Stanford, CA: CSLI Publications.
Neeleman, Ad and Titov, Elena. 2009. “Focus, contrast, and stress in Russian.” Linguistic
Inquiry 40: 514-524.
Neeleman, Ad and van de Koot, Hans. 2008. “Dutch scrambling and the nature of discourse
templates.” Journal of Comparative Germanic Linguistics 11: 137-189.
Neeleman, Ad and van de Koot, Hans. 2010. “Theoretical validity and psychological reality of
the grammatical code.” In The Linguistics Enterprise, Martin Everaert, Tom Lentz, Hannah
De Mulder, Øystein Nilsen and Arjen Zondervan (eds), 183-212. Amsterdam: John
Benjamins.
Ouhalla, Jamal. 1994. “Focus in Standard Arabic.” Linguistics in Potsdam 1: 65-92.
Phillips, Colin. 1996. Order and Structure. Doctoral dissertation, MIT.
Pritchett, Bradley L. 1992. Grammatical Competence and Parsing Performance. Chicago, IL:
University of Chicago Press.
Reinhart, Tanya. 1995. Interface Strategies. UiL OTS Working Papers in Linguistics. Utrecht:
Utrecht University.
Reinhart, Tanya. 2006. Interface Strategies: Reference-set Computation. Cambridge, MA: MIT
Press.
Richards, Norvin. 1999. “Dependency formation and directionality of tree construction.” MIT
Working Papers in Linguistics 34: 67-105.
Rizzi, Luigi. 1997. “The fine structure of the left periphery.” In Elements of Grammar:
Handbook in Generative Syntax, Liliane Haegeman (ed), 281-337. Dordrecht: Kluwer.
Rochemont, Michael. 1986. Focus in Generative Grammar. Amsterdam: John Benjamins.
Rooth, Mats E. 1985. Association with Focus. Doctoral dissertation, University of Massachusetts.
Rooth, Mats E. 1992. “A theory of focus interpretation.” Natural Language Semantics 1: 75-116.
Rusakova, Marina. 2009. Rečevaja realizacija grammatičeskix ėlementov russkogo jazyka (in
Russian, ‘Speech realization of some grammatical features of Russian’). Habilitation
dissertation, St. Petersburg State University.
Schneider, David A. 1999. Parsing and Incrementality. Doctoral dissertation, University of
Delaware.
Schneider, David A. and Phillips, Colin. 2001. “Grammatical search and reanalysis.” Journal of
Memory and Language 44: 308-336.
Selkirk, Elisabeth O. 1984. Phonology and Syntax. Cambridge, MA: MIT Press.
Selkirk, Elisabeth O. 1995. “Sentence prosody: intonation, stress, and phrasing.” In The
Handbook of Phonological Theory, Jane Goldsmith (ed), 550-569. Oxford: Blackwell.
Slioussar, Natalia. 2007. Grammar and Information Structure. A Study with Reference to
Russian. Doctoral dissertation, Utrecht University.
Slioussar, Natalia. 2010. “Russian data call for relational Information Structure notions.” In
Formal Studies in Slavic Linguistics. Proceedings of Formal Description of Slavic Languages
7.5, Gerhild Zybatow, Philip Dudchuk, Serge Minor and Ekaterina Pshehotskaya (eds),
329-344. Frankfurt am Main: Peter Lang.
Slioussar, Natalia. 2011. Grammar and Information Structure: A Novel View Based on Russian
Data. Ms., Utrecht institute of Linguistics OTS and St. Petersburg State University.
Szendrői, Kriszta. 2001. Focus and the Syntax-phonology Interface. Doctoral dissertation,
University College London.
Szendrői, Kriszta. 2005. “Focus movement (with special reference to Hungarian).” In The
Blackwell Companion to Syntax. Vol. 2, Martin Everaert and Henk van Riemsdijk (eds),
272-337. Oxford: Blackwell.
Tsimpli, Ianthi-Maria. 1995. “Focusing in Modern Greek.” In Discourse Configurational
Languages, Katalin É. Kiss (ed), 176-206. Oxford: Oxford University Press.
Tuller, Laurice. 1992. “The syntax of postverbal focus constructions in Chadic.” Natural
Language and Linguistic Theory 10: 303-334.
Uriagereka, Juan. 2011. Spell-Out and the Minimalist Program. Oxford: Oxford University Press.
Vilkuna, Maria. 1995. “Discourse configurationality in Finnish.” In Discourse Configurational
Languages, Katalin É. Kiss (ed), 244-268. Oxford: Oxford University Press.
Zwart, Jan-Wouter. 2009. “Prospects for top-down derivation.” Catalan Journal of Linguistics 8:
161-187.