
Linguistics 523 — Phonological Theory I
Spring 2016
Predictable and unpredictable information and Richness of the Base
Understanding some principles of the OT model:
• How is the difference between predictable and unpredictable information related to markedness
and faithfulness constraints?
• The core OT principle of “Richness of the Base”
1. Modeling predictable and unpredictable information in OT
(1) Reminder: In generative grammar, there is a key conceptual distinction between
    predictable and unpredictable information
    (a) Unpredictable information must be learned, memorized, stored in the lexicon
    (b) Predictable information is enforced by the mental grammar (if productive)

(2) How does this look from the perspective of a constraint-based phonological model (OT)?
    (a) Predictable information is enforced by the phonological grammar
        • This means that predictable information is enforced by the constraints as they are
          ranked in a particular language
    (b) Unpredictable information is (as always!) stored in the mental lexicon
        • But we must also keep in mind: unpredictable information that is found in the
          UR/input form must survive in the winning output form
(3) Example: Is it predictable whether or not a syllable has a coda?
    (a) In Language #1, syllables never have codas, and codas are avoided by deletion

             /tip/      | NOCODA | DEP | MAX
          → a. ti       |        |     |  *
            b. tip      |   *W   |     |  L
            c. ti.pV    |        | *W  |  L

        Suppose we know the UR is /tip/ because there is a morpheme alternation:
        if we add a suffix, the /p/ appears, as in: /tip+o/ → [tipo].
             /ba/       | NOCODA | DEP | MAX
          → a. ba       |        |     |
            b. bat      |   *W   | *W  |

        • It doesn't matter if the UR ends in a consonant or not; the output will have no
          coda in either case: /tip/ → [ti] and /ba/ → [ba]
        • What is the relationship between markedness and faithfulness constraints here?
    (b) In Language #2, codas are allowed

             /tip/      | DEP | MAX | NOCODA
          → a. tip      |     |     |   *
            b. ti       |     | *!  |
            c. ti.pV    | *!  |     |

             /ba/       | NOCODA | DEP  | MAX
          → a. ba       |        |      |
            b. bat      |  *(!)  | *(!) |

        • If the UR ends in a consonant, the output will have a coda; otherwise not:
          /tip/ → [tip] and /ba/ → [ba]
        • What is the relationship between markedness and faithfulness constraints here?
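
    To make the ranking logic concrete, here is a minimal sketch in Python (not part of the
    original handout) of how ranked-constraint evaluation picks a winner. The string-based toy
    definitions of NOCODA, DEP, and MAX, the candidate sets, and the specific total rankings are
    simplifying assumptions; they are just one way to reproduce the tableaux above.

        # Toy OT evaluation: constraints count violations; the winner is the candidate
        # with the best violation profile, compared constraint by constraint from the
        # highest-ranked down (lexicographic comparison).
        VOWELS = set("aeiouV")   # "V" stands for an epenthetic vowel

        def nocoda(inp, out):
            # one violation per syllable ending in a consonant ("." separates syllables)
            return sum(1 for syll in out.split(".") if syll and syll[-1] not in VOWELS)

        def dep(inp, out):
            # one violation per output segment with no input correspondent (epenthesis)
            return max(0, len(out.replace(".", "")) - len(inp))

        def max_c(inp, out):
            # one violation per input segment with no output correspondent (deletion)
            return max(0, len(inp) - len(out.replace(".", "")))

        def evaluate(inp, candidates, ranking):
            return min(candidates, key=lambda c: tuple(con(inp, c) for con in ranking))

        candidates_tip = ["tip", "ti", "ti.pV"]
        candidates_ba = ["ba", "bat"]

        lang1 = [nocoda, dep, max_c]   # a total order consistent with Language #1: NOCODA, DEP » MAX
        lang2 = [dep, max_c, nocoda]   # a total order consistent with Language #2: DEP, MAX » NOCODA

        print(evaluate("tip", candidates_tip, lang1))   # -> ti    (coda deleted)
        print(evaluate("ba", candidates_ba, lang1))     # -> ba
        print(evaluate("tip", candidates_tip, lang2))   # -> tip   (coda survives)
        print(evaluate("ba", candidates_ba, lang2))     # -> ba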
(4) Summary: Whether some phonological property is predictable or unpredictable depends
    on the markedness vs. faithfulness rankings
    (a) For unpredictable information to survive in the output form, the relevant
        faithfulness constraints must all dominate the markedness constraint that would
        remove that unpredictable information
    (b) If the markedness constraint dominates even one faithfulness constraint, the winner
        will be unfaithful and the markedness constraint will always be satisfied: every
        surface form will avoid the same phonological pattern (so the behavior is predictable)
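
    As a quick illustration of (4), the following sketch (again illustrative, not from the
    handout) compares a hypothetical input /bat/ with /ba/ using violation profiles written out
    by hand: under a markedness-dominant ranking the two inputs neutralize to [ba], so the
    absence of a final consonant is predictable; under a faithfulness-dominant ranking the
    contrast survives.

        # Hand-written violation profiles in the order (NOCODA, DEP, MAX) for each
        # input-candidate pair; /bat/ is a hypothetical input, not a form from the handout.
        profiles = {
            "bat": {"bat": (1, 0, 0), "ba": (0, 0, 1), "ba.tV": (0, 1, 0)},
            "ba":  {"ba": (0, 0, 0), "bat": (1, 1, 0)},
        }

        def winner(inp, ranking):
            # ranking is a permutation of column indices: 0 = NOCODA, 1 = DEP, 2 = MAX
            return min(profiles[inp], key=lambda c: tuple(profiles[inp][c][i] for i in ranking))

        m_over_f = (0, 1, 2)   # NOCODA » DEP » MAX (markedness dominant)
        f_over_m = (1, 2, 0)   # DEP » MAX » NOCODA (faithfulness dominant)

        print(winner("bat", m_over_f), winner("ba", m_over_f))   # ba ba   -> contrast neutralized
        print(winner("bat", f_over_m), winner("ba", f_over_m))   # bat ba  -> contrast preserved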
2. The OT principle of “Richness of the Base”
(5) Now consider Language #3
    (a) Similar to Language #1 in that no surface forms have codas (predictable)
    (b) But unlike Language #1, this time, there are no morpheme alternations
        • Every morpheme always surfaces with no evidence for a final consonant: /pa/
          always surfaces as [pa], /mifu/ always surfaces as [mi.fu], etc.
        • This means that every morpheme's UR has the same segmental structure as its SR
    (c) What is the constraint ranking for this language?

(6) If we are serious about the idea that predictable information means markedness
    constraints are ranked highest, we know that NOCODA » Faithfulness
    • NOCODA dominates either MAX or DEP, although we don't know which one
             /pa/       | NOCODA | DEP | MAX
          → a. pa       |        |     |
            b. pat      |   *W   | *W  |

             /mifu/     | NOCODA | DEP | MAX
          → a. mi.fu    |        |     |
            b. mi.fut   |   *W   | *W  |
            c. mif      |   *W   |     | *W
(7) IMPORTANT: What we have now is a grammar with the power to get rid of codas
    (a) Even if we give the grammar an input with a final consonant, the output will still
        have no coda
    (b) But... how can we give the grammar an input with a final consonant, if there is no
        evidence that any morpheme ends in a consonant?
        • Here is where we see that input and UR are not the same
        • We can give the grammar a hypothetical input (not a real word of the language)
          to see what it would do

             /CVC/       | NOCODA | DEP  | MAX
         ?→ a. CV_       |        |      |  *
         ?→ b. CV.CV     |        |  *   |
            c. CVC       |   *W   | L(?) | L(?)

        • If DEP » MAX (MAX is lowest), candidate (a) will win
        • If MAX » DEP (DEP is lowest), candidate (b) will win
        • We don't know which of these two will occur, but one of them will
        • This is because NOCODA must be highest, because predictable information means
          markedness constraints are most important
    (c) What this means: A grammar with NOCODA » Faithfulness will productively get rid
        of codas, even in new words (loanwords?!)
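
    The hypothetical-input test in (7) can be run mechanically. The sketch below is illustrative
    (violation counts written out by hand); it checks the /CVC/ tableau under the two total
    rankings consistent with Language #3. NOCODA is highest in both, so the fully faithful
    candidate CVC never wins.

        # Violation profiles for /CVC/ -> {CV, CV.CV, CVC}, as in the tableau above.
        profiles = {
            "CV":    {"NOCODA": 0, "DEP": 0, "MAX": 1},   # delete the final C
            "CV.CV": {"NOCODA": 0, "DEP": 1, "MAX": 0},   # epenthesize a vowel
            "CVC":   {"NOCODA": 1, "DEP": 0, "MAX": 0},   # keep the coda
        }

        def winner(ranking):
            return min(profiles, key=lambda cand: tuple(profiles[cand][c] for c in ranking))

        print(winner(["NOCODA", "DEP", "MAX"]))   # -> CV     (if DEP » MAX)
        print(winner(["NOCODA", "MAX", "DEP"]))   # -> CV.CV  (if MAX » DEP)
        # Either way, CVC loses: the grammar actively enforces the no-coda pattern
        # even on an input that no real morpheme of the language supplies.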
(8) This example illustrates the OT principle known as “Richness of the Base”
• Richness of the Base (ROTB): There are no language-particular restrictions on input
forms (Prince & Smolensky 1993)
    (a) Translation: If something is a possible input in one language (such as /CVC/), it is a
        possible input in all languages
    (b) This means there are no “morpheme structure constraints” that tell you what is or is
        not a possible UR in each language

(9) In rule-based phonology, how would we model Language #3 (no codas; no alternations)?
    (a) We would not model this with a deletion rule, because there is no deletion process in
        this language; morphemes simply never have consonants in a position where they
        would become codas (and likewise for insertion)
    (b) Instead, we would need to propose a morpheme structure constraint: “URs can
        never have CC or C#”
(10) In OT, Language #3 is modeled the same way as Language #1: NOCODA » Faithfulness

         Language #1                               Language #3
         /tip/ → [ti_], /tip+o/ → [ti.po]          /pa/ → [pa], /mifu/ → [mi.fu]
         Consonant deletion can be seen            There are no visible C~Ø or Ø~V alternations
         Analysis: NOCODA » MAX                    Analysis: NOCODA » (MAX or DEP)

     • Predictable information is enforced by the constraint ranking, whether we can see an
       active “rule” or not
3. Richness of the Base and productive predictable patterns
(11) In general: If some phonological structure is absent in a language, this tells us that
     markedness (M) » faithfulness (F)
     (a) Examples:
         • Codas in Hawai’ian
         • Front rounded vowels in English
     (b) Having an M » F ranking for some phonological structure makes a prediction:
         It should be part of the native speaker’s knowledge that this structure is illegal
     (c) Evidence for this? Consider loanword phonology or invented words; what does the
         native speaker do?
         • Hawai’ian: English wine [wain] → [waina] (coda is avoided!)
         • English: French menu [meny] → [mɛnju] (front rounded vowel is avoided!)
(12) However, languages also have accidental gaps
     (a) This is when some structure just happens to be missing in the morphemes of the
         lexicon, but the grammar doesn’t actually prohibit it
         • The absence of this structure is not productive
     (b) Example: [bw] onset clusters are extremely rare in English, but in experiments, native
         speakers do not treat them as ungrammatical (Moreton 2002)
(13) We can model this difference in OT as follows:
     (a) True productive gaps have M » F
         • Given a “new” word or a loanword, native speakers will actively avoid the
           structure
     (b) Accidental gaps do not have M » F
         • Given a “new” word or a loanword, native speakers will produce the structure
           faithfully
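
     A sketch of the contrast in (13), using the English menu example from (11): the constraint
     names *[y] (a markedness constraint against front rounded vowels) and IDENT, and the
     two-candidate set, are illustrative assumptions rather than the handout’s analysis.

        # Hypothetical constraints for illustration: "*[y]" penalizes a front rounded
        # vowel in the output; "IDENT" penalizes changing it.
        profiles = {
            "[meny]":  {"*[y]": 1, "IDENT": 0},   # faithful: keeps French [y]
            "[mEnju]": {"*[y]": 0, "IDENT": 1},   # adapted: [y] replaced, cf. English [mɛnju]
        }

        def winner(ranking):
            return min(profiles, key=lambda cand: tuple(profiles[cand][c] for c in ranking))

        print(winner(["*[y]", "IDENT"]))   # M » F: productive gap, the adapted form wins
        print(winner(["IDENT", "*[y]"]))   # F » M: accidental gap, the faithful form wins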
(14) What is an input in an OT grammar?
     (a) Sometimes, an input is an actual UR of an actual morpheme or word of the
         language
     (b) But sometimes, an input is a hypothetical input that we use to make sure the
         grammar is doing its job: the grammar must actively enforce productive
         predictable patterns
References
Moreton, Elliott. 2002. Structural constraints in the perception of English stop-sonorant clusters. Cognition
84: 55–71.
Prince, Alan, and Paul Smolensky. 1993. Optimality Theory: Constraint interaction in generative grammar.
[Published 2004, Blackwell.]