ADVANCED SEMANTICS
CLASS NOTES, SPRING 1994
revised spring 1998
corrected, spring 2005
revised, spring 2013
Fred Landman
CHAPTER 1: BACKGROUND ON MODELTHEORETIC SEMANTICS
[This chapter is background which you should read if necessary. I usually cover this chapter as
part of the earlier class on Quantification and Modality. Advanced semantics starts with the
last two sections.]
A semantic theory for a language like English or Hebrew is a theory in which we make
predictions about semantic phenomena like entailment and ambiguity.
A semantic framework is a framework for developing and comparing such theories, i.e. it is
a framework for studying semantic problems.
The kind of framework that we will be developing here is modeltheoretic semantics. It is
based on a couple of well-known assumptions: aboutness and compositionality.
1. Aboutness.
A core part of what we call meaning concerns the relation between linguistic expressions
and non-linguistic entities, or 'the world' as our semantic system assumes it to be, the world
as structured by our semantic system.
Some think about semantics in a realist way: semantics concerns the relation between
language and the world.
Others think about semantics in a more conceptual, or if you want idealistic way: semantics
concerns the relation between language and an intersubjective level of shared information, a
conceptualization of the world, the world as we jointly structure it. Both agree that
semantics is a theory of interpretation of linguistic expressions: semantics concerns the
relation between linguistic expressions and what those expressions are about. Both agree
that important semantic generalizations are to be captured by paying attention to what
expressions are about, and important semantic generalizations are missed when we don't pay
attention to that.
But semantics concerns semantic competence. Semantic competence does not concern
what expressions happen to be about, but how they happen to be about them.
Native speakers obviously do not have to know what, say, a name happens to stand for in a
certain situation, or what the truth value of a sentence happens to be in a certain situation.
That is not necessarily part of their semantic competence. What is part of their semantic
competence is reference conditions, truth conditions:
If I utter the sentence: the chalk is under the table, it is not necessarily part of your semantic
competence that you know that that sentence is true or false. What is part of your semantic
competence is that, in principle, you're able to distinguish situations where that sentence is
true, from situations where it is false, i.e. that you know what it takes for a possible
situation to be the kind of situation in which that string of words, that sentence, is true, and
what it takes for a situation to be the kind of situation where that sentence is false.
The first thing to stress is: semantics is not interested in truth; semantics is interested in truth
conditions.
From this it follows too that we're not interested in truth conditions per se, but in
truth conditions relative to contextual parameters.
Take the sentence: I am behind the table. The truth of this sentence depends on who the
speaker is, when it is said, what the facts in the particular situation are like. But we're not
interested in the truth of this sentence, hence we're not interested in who is the speaker, when
it was said, and what the facts are like.
What we're interested in is the following: given a certain situation (any situation) at a
certain time where a certain speaker (any speaker) utters the above sentence, and certain
facts obtain in that situation (any combination of facts): do we judge the sentence true or
false under those circumstantial conditions?
A semantic theory assumes that when we have set such contextual parameters, native
speakers have the capacity to judge the truth or falsity of a sentence in virtue of the meanings
of the expressions involved, i.e. in virtue of their semantic competence. And that is what
we're interested in.
To summarize: a semantic theory contains a theory of aboutness and this will include a
theory of truth conditions.
Given the above, when I say truth, I really mean, truth relative to settings of contextual
parameters.
Furthermore, given what I said before about realistic vs. idealistic interpretations of the
domain of non-linguistic entities that the expressions are about, you should not necessarily
think of truth in an absolute or realistic way: that depends on your ontological assumptions.
If you think that semantics is directly about the real world as it is in itself, then truth means
truth in a real situation. If you think that what we're actually talking about is a level of
shared information about the 'real' world, then situations are shared conceptualizations,
structurings of the real world, and truth means truth in a situation which is a structuring
of reality. This difference has very few practical consequences for most actual semantic
work: it concerns the interpretation of the truth definition rather than its formulation.
This is a gross overstatement, but for all the phenomena that we will be concerned with in
this course, this is true enough.
Specifying a precise theory of truth conditions makes our semantic theory testable. We
have a general procedure for defining a notion of entailment in terms of truth conditions.
Once we have formulated a theory of the truth conditions of sentences containing the
linguistic expressions whose semantics we are studying, our semantic theory gives a theory
of what entailments we should expect for such sentences. Those predictions we can compare
with our judgments, the intuitions concerning the entailments that such sentences actually
have.
2. Compositionality.
The interpretation of a complex expression is a function of the interpretations of its parts and
the way these parts are put together.
Semantic theories differ of course in what semantic entities are assumed to be the
interpretations of syntactic expressions. They share the general format of a compositional
interpretation theory, which is often called 'the rule-to rule format of interpretation'. (The
terminology is slightly misleading because the interpretation theory is not necessarily
married to a particular rule-based view of syntax.)
Let us assume that we have a certain syntactic structure, say, a tree T. We can regard this
syntactic structure as built through certain syntactic operations from its parts.
For instance, the tree:

              S
            /   \
          NP     VP
          │     /  \
        john   V    NP
               │    │
             kiss  mary
can be built by applying the following syntactic operations to the lexical items John, kiss,
and Mary:
S[ NP[John],VP[ V[Kiss],NP[Mary] ] ]
Where:
NP[α] is the result of forming a tree with mothernode NP and daughternode α; similarly for
V[α];
VP[α,β] is the result of forming a tree with mothernode VP, left daughter α and right daughter
β; similarly for S[α,β].
In a compositional theory of interpretation, we choose semantic entities as the interpretations
of the parts: say, m(john), m(kiss) and m(Mary) (and, again, which semantic entities these
are depends on our semantic theory).
And we assume that corresponding to each (relevant) operation for building up syntactic
structure, i.e. for each operation on syntactic structures, there corresponds a semantic
operation on the semantic interpretations of those structures.
Thus, the syntactic operation NP[ ] will be interpreted as a semantic operation m(NP)( ).
While NP[ ] is an operation that takes a lexical item and gives you a tree, m(NP) is an
operation that takes the meaning of that lexical item and gives you the meaning of the tree.
Similarly, with VP[ , ] there will be a corresponding operation m(VP)( , ), which takes the
meanings of the V and the NP and gives as output the meaning of the VP.
In this way, the compositional interpretation theory is able to provide a compositional
semantics for complex expressions based on the meanings of their parts and the way they are
put together. For instance, the meaning of our example sentence will be:
m(S)( m(NP)(m(John)),m(VP)( m(V)(m(Kiss)),m(NP)(m(Mary)) ) ).
Of course, what precise predictions this semantics makes about the meaning of the sentence
John kiss Mary depends on what semantic entities we happen to choose here, and what
semantic operations.
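To make the rule-to-rule format concrete, here is a minimal sketch in Python. It goes beyond the text in fixing one particular (extensional) choice of semantic entities: NP meanings are individuals, the verb meaning is a set of pairs, and the operations m(NP), m(V), m(VP), m(S) are the hypothetical functions below. Only the compositional format itself is from the text; every concrete choice here is an illustrative assumption.

```python
# A sketch of rule-to-rule interpretation in a toy extensional model.
# Each syntactic operation (NP[ ], V[ ], VP[ , ], S[ , ]) is paired
# with a semantic operation on the meanings of its parts.

# hypothetical lexical meanings
m_john = "john"
m_mary = "mary"
m_kiss = {("john", "mary")}     # the set of kisser/kissee pairs

def m_NP(x):                    # the NP tree means what its daughter means
    return x

def m_V(x):                     # likewise for V
    return x

def m_VP(v, np):                # VP meaning: the individuals that v-relate to np
    return {a for (a, b) in v if b == np}

def m_S(np, vp):                # S meaning: a truth value
    return 1 if np in vp else 0

# the compositional interpretation of S[ NP[John], VP[ V[kiss], NP[Mary] ] ]
meaning = m_S(m_NP(m_john), m_VP(m_V(m_kiss), m_NP(m_mary)))
print(meaning)                  # 1 in this model: John kisses Mary
```

Swapping in different semantic entities and operations changes the predictions, but not the shape of the computation: one semantic operation per syntactic operation.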
One thing does follow already at this level: if our semantic theory determines that two
syntactic expressions α and β have the same meaning, and α occurs in an expression φ, then
also the result of replacing α by β in φ (and leaving the syntactic operations the same),
φ[β/α] has the same meaning as φ:
Substitution of meaning: if m(α) = m(β) then m(φ[β/α]) = m(φ)
This follows from compositionality.
Look at the trees:

       A                 A
      / \               / \
     B   C             B   D
and assume that m(C) = m(D).
The meaning of the first tree is:
m(A) [ m(B),m(C) ]
The meaning of the second tree is:
m(A) [ m(B),m(D) ]
Obviously, this is the same meaning.
Of course, since semantic theories differ in their notion of meaning, semantic theories differ
in which expressions this holds for.
As we will see, in extensional theories, meanings are identified with extensions, and hence
substitution of expressions with the same extension preserves the extension (truth value) of
the whole. In intensional theories, meanings are not identified with extensions, hence there
is no requirement that in general substitution of expressions with the same extension will
yield complex expressions with the same extension, but, as we will see, meanings are
identified with intensions, and hence, substitution of expressions with the same intension
will lead to complex expressions with the same intension.
But at this level of generality, it doesn't matter if we think about meanings as fried bananas:
if two expressions α and β are interpreted as the same fried banana, and φ[α] is a complex
expression containing α, then the fried banana which is the interpretation of φ[α] is the
same as the fried banana which is the interpretation of φ[β/α].
3. The logical language.
We interpret natural language in structured domains of meanings. When we go beyond the
simplest natural language constructions, these domains and the meanings they contain tend
to become rather complicated. That is, when we use a metalanguage to describe the content
of these domains and these meanings, it becomes very hard to see which meaning we are
dealing with, what its properties are, whether or not two such metalanguage descriptions of
meanings describe the same thing, etc. This is because these meanings tend to be
complicated functions and the metalanguage of functions tends to be rather unreadable.
It is instructive to compare the situation with what is going on in your computer. Different
states that your computer can be in can be described as states of being on and off of a wide
array of switches in your computer. An action of the computer consists in a series of
changes of a large number of these switches. Such a change of switches corresponds to a
meaning. You could directly instruct the computer to do something by offering it a
description of how to set all its switches in order. This corresponds to a machine language
instruction.
Such an instruction is a description of a meaning, but the problem is that it is unreadable: it
is very difficult to tell in machine language code which actual meaning you are dealing with,
and what follows from a certain action.
For that reason we design programming languages. Their use is purely to make life easier
for us, and what their ingredients are is in this way purely functionally determined: we put
in these languages, whatever facilitates their readability and their easy use.
The idea behind programming languages is the following. A programming language is
designed in such a way that it has a fixed and understood relation to the machine language.
In other words, we make sure that the interpretation of the programming language in the
machine language (the domain of meanings) is fixed and given. In using the programming
language we translate our instructions, what we want the computer to do in the programming
language (which is easy if the programming language is rich enough and well designed), and
rely on the given relation between the programming language and the machine language for
this translation to be interpreted correctly into the correct action (the correct meaning).
Since we know a lot about the programming language, since the programming language
gives a way of making meanings (machine language instructions) readable to us, and since
we rely on a lot of known properties of the programming language (like entailment, which
actions entail other actions), we can in practice avoid working directly at the level of
meanings, computer actions, but rather we do all our work at the level of the programming
language, and assume that that has the right results at the level of meanings, because we
have set up the relation of interpretation between the programming language and the level of
meanings correctly.
This is exactly the way in which we use logical languages in modeltheoretic semantics. It is
often too complicated to work directly at the level of meanings all the time. Hence, we
define our structures of meanings, we define a suitable, useful logical language, in which we
put whatever makes things easy for us. We make sure that the interpretation of the logical
language in the domains of meanings is well defined. And then we define the compositional
interpretation of our natural language in two steps:
-we give a compositional interpretation of every expression and formation rule of the
logical language in the domains of meaning.
-we give a compositional translation of every expression and formation rule of the syntax
into our logical language.
This gives us in two steps the compositional interpretation of our natural language
expressions and the rules of their formation in the domain of meanings:
-the interpretation function for the logical language, ⟦ ⟧, is a function which associates with
every expression of the logical language a meaning.
-the translation function for the natural language syntax, tr, is a function which associates
with every expression of the natural language syntax its translation in the logical language.
-The composition of these two functions is a function from natural language expressions to
meanings: if α is a natural language expression, then ⟦tr(α)⟧ is its associated meaning.
A fact about these functions tr and ⟦ ⟧ is that:
If ⟦ ⟧ gives a compositional interpretation of logical language L into domain of
meanings M and tr gives a compositional translation of natural language N into
logical language L, then the composition ⟦tr( )⟧ gives a compositional interpretation
of natural language N into domain of meanings M.
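This composition fact can be sketched with toy stand-ins. Both lookup tables below are invented for illustration; they are not the text's tr or interpretation function, and real translation and interpretation are of course compositional functions, not finite tables.

```python
# tr: natural language expressions -> logical language expressions
# interp: logical language expressions -> meanings (here: truth values)
# Their composition is itself a single function from natural language
# to meanings, so in principle the intermediate level could be skipped.

tr = {"John walks": "WALK(j)", "Mary walks": "WALK(m)"}   # hypothetical translations
interp = {"WALK(j)": 1, "WALK(m)": 0}                     # hypothetical interpretations

def meaning(a):
    return interp[tr[a]]    # the composition: interpret the translation of a

print(meaning("John walks"))   # 1
print(meaning("Mary walks"))   # 0
```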
This means, among others, that the role of the logical language is indeed purely to make
things easier for us: in a compositional semantics, we could always skip that level, not give
an indirect interpretation through translation and interpretation, but give the result of the
composition directly, i.e. associate directly with every natural language expression the result
of interpreting its translation: the level of the logical language is superfluous (in the same
sense that strictly speaking the programming language is superfluous).
But of course, in practice the logical language is far from superfluous, because it is much
easier to understand what meaning we associate with an expression by looking at its
translation in the logical language than by looking in the model, and we have techniques for
proving easily whether two expressions of the logical language are interpreted as the same
meaning, while that can be very difficult to see directly in the model.
I will stress the conventional nature of the logical language at various points in this course
and show some of the choices that you may want to make, depending on what you are
studying.
These logical languages all tend to look basically the same and their interpretation tends to
be along the same lines as well. Before we get into any details, it may be good to just give
you, with an example (predicate logic) the general structure of logical languages and the
general structure of their interpretation. To some extent, if you remember the ingredients of
an arbitrary logical language and the components of the interpretation process, you will
realize that in all the languages that you come across, exactly the same goes on, and you will
learn in studying their properties, to just ignore what is the same as in every other language
and directly look for what is special in this one.
4. The semantic interpretation of logical languages.
4.a. The syntax of the logical language.
The syntax of logical languages tends to be very simple, meant to bring out, in as simple a
way as possible, the ingredients that the language is meant to describe.
Predicate logic is a language to talk about the interactions between the following ingredients:
-predicates and relations.
-quantification over individuals.
-sentence connectives.
-the relation of identity.
The predication relation ( ,..., ), the quantifiers ∀ and ∃, the connectives ¬, ∧, ∨, →, and
identity = are called the logical constants. The meaning of these expressions is fixed and the same for
every model and every interpretation.
These are the expressions whose semantics is the focus of study in this language.
The language further contains expressions whose whole function is that their interpretation
can vary within one model through the quantification mechanism: variables. And it
contains expressions whose meaning depends on the model, but is, at this level of
description assigned rather arbitrarily (because at this level of description we are not
interested in what their precise meaning is, of which kind of meaning they are assigned).
These expressions are called the non-logical constants. It is appropriate to think of the non-logical constants as those lexical items whose meaning we are not trying to fix in complete
detail in the semantics, expressions that, for the sake of our studying the interaction of the
meanings of expressions containing them, and the semantic contribution of, say, connectives
and quantifiers, we keep as primitives.
In specifying the syntax of a language like predicate logic, we specify what the non-logical
constants (and variables) are, and based on that, we define recursively all the ways of
forming complex expressions of the language, in particular, formulas.
A language of predicate logic L:

VAR = {x1,x2,...}        a (countably infinite) set of individual variables
CONL = {c1,c2,...}       a set of individual constants (at most countably infinite)
for every n > 0:
PREDnL = {P1,P2,...}     a set of n-place predicate constants (at most countably infinite)
TERML = CONL ∪ VAR       (terms are individual constants or variables)

We complete the definition of the syntax of L with a recursive definition of the set of all
wellformed formulas of L:

FORML is the smallest set such that:
1. if P ∈ PREDnL and t1,...,tn ∈ TERML then P(t1,...,tn) ∈ FORML
2. if t1,t2 ∈ TERML then (t1=t2) ∈ FORML
3. if φ,ψ ∈ FORML then ¬φ, (φ ∧ ψ), (φ ∨ ψ), (φ → ψ) ∈ FORML
4. if x ∈ VAR and φ ∈ FORML then ∀xφ, ∃xφ ∈ FORML
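One concrete (and entirely hypothetical) way to work with such a recursive definition is to mirror each formation rule as a constructor. The tagged-tuple encoding below is my own choice, not part of the definition; any tree representation would do.

```python
# Each formation rule becomes a constructor building a tagged tuple.
# Terms are plain strings: variables "x1","x2",..., constants "c1","c2",...

def Pred(P, *terms): return ("pred", P, terms)    # rule 1: P(t1,...,tn)
def Eq(t1, t2):      return ("=", t1, t2)         # rule 2: (t1 = t2)
def Not(phi):        return ("not", phi)          # rule 3: ¬φ
def And(phi, psi):   return ("and", phi, psi)     # rule 3: (φ ∧ ψ)
def Or(phi, psi):    return ("or", phi, psi)      # rule 3: (φ ∨ ψ)
def Impl(phi, psi):  return ("impl", phi, psi)    # rule 3: (φ → ψ)
def Forall(x, phi):  return ("forall", x, phi)    # rule 4: ∀xφ
def Exists(x, phi):  return ("exists", x, phi)    # rule 4: ∃xφ

# ∀x1 (P(x1) → ∃x2 R(x1,x2)) as a tuple tree:
phi = Forall("x1", Impl(Pred("P", "x1"), Exists("x2", Pred("R", "x1", "x2"))))
```

Because FORML is defined as the smallest set closed under these rules, every formula is a finite tree built by finitely many constructor applications, which is what makes recursive definitions over formulas (like the truth definition below) possible.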
4.b the semantics of the logical language.
The semantics of the logical language consists of three parts:
1. a definition of a possible model for the language.
2. a definition of the interpretation of an expression in a model, for any expression and any
model: the truth definition.
3. a definition of entailment in terms of the truth definition.
A model for the language always consists of two components: a structure and an
interpretation for the non-logical constants.
The structure of the model can be thought of as a possible structuring of the world, with just as
much structure as the logical language we are interpreting requires. In the case of predicate
logic we are only interested in the structure of the world in so far as it allows us to express
predication and quantification over individuals. For this purpose it suffices to assume that
the structure of the world in so far as predicate logic is concerned is just a set of individuals.
More precisely, it is a set of individuals, a set of two truth values, and a set theoretic structure
determined by that, but all the latter is predictable from the basic set of individuals and it is
our habit not to mention what is predictable (from which we will deviate when we want to,
as we will see later).
If we put other things in our language, then the structures of our models will become richer.
For instance, if we include temporal operators in our language and expressions that make
temporal reference, we will add a temporal domain to our language, which will be a structure
of moments of time, ordered by a temporal ordering. If we add expressions that make event
reference, we will add a structure of events, etc.
So, a model consists of a structure and an interpretation. The structure determines what
kinds of things our expressions can refer to, what kinds of things they can quantify over.
The interpretation determines what the facts are. If we have basic predicates in our language
like LOVE and KISS, then the structure of the model determines who is there to stand in
those relations or not. The interpretation of the non-logical constants determines the facts: it
determines who loves whom and who doesn't, who kisses whom and who doesn't etc. In this
way, the structure and the interpretation together make the model into a possible structuring
of the world: determining what there is and what basic relations happen to hold.
Since the semantics will specify truth conditions rather than truth, it follows that we are
never interested in one model, but rather in defining truth for an arbitrary model for the
language, i.e. in defining how the truth of a complex sentence in a model depends on the
truth of its parts, and how the truth of a sentence varies across different models.
Thus, we define for our predicate logical language:
A model for predicate logical language L is a pair:
M = <D,F>, where:
1. D, the domain of M, is a non-empty set (the domain of individuals)
2. F, the interpretation function of M for the non-logical constants of L is a
function such that:
a. for every c ∈ CONL: F(c) ∈ D
b. for every P ∈ PREDnL: F(P) ∈ pow(Dn)
Here Dn = {<d1,...,dn>: d1,...,dn ∈ D}
the set of all n-tuples of elements of domain D.
pow(X), the powerset of X, is the set of all subsets of X.
Hence pow(Dn) is the set of all subsets of the set of all n-tuples of elements of D. This
means that each n-place predicate P is interpreted by F as some set of n-tuples, an n-place
relation.
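A toy instance of this definition can be sketched in Python, with sets and dicts as stand-ins for D and F. The particular domain and facts below are invented for illustration (the predicate name KISS echoes the example above).

```python
# A toy model M = <D,F>: D is the domain of individuals; F interprets
# the non-logical constants: individual constants get members of D,
# n-place predicates get subsets of D^n, i.e. members of pow(D^n).
from itertools import product

D = {"john", "mary"}

D2 = set(product(D, repeat=2))        # D^2: all ordered pairs over D

F = {
    "j": "john",                      # F(c) ∈ D
    "m": "mary",
    "KISS": {("john", "mary")},       # F(KISS) ⊆ D^2, so F(KISS) ∈ pow(D^2)
}

M = (D, F)
print(F["KISS"] <= D2)                # True: F(KISS) is indeed a 2-place relation on D
```

The structure is just D (what there is); F fixes the facts (here: who kisses whom).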
As is wellknown, predicate logical formulas contain expressions, variables, that are not
interpreted by the interpretation function in the model. For them, we add special devices that
take care of their interpretation: assignment functions:
Let M = <D,F>.
An assignment function (on model M) is a function g from VAR into D.
i.e. an assignment function is any function g that assigns every variable in VAR an object in
D.
Furthermore, we define for any assignment function g and variable x and object d ∈ D:
gxd is that assignment function from VAR into D such that:
1. for every variable y ∈ VAR−{x}: gxd(y) = g(y)
2. gxd(x) = d
I.e. gxd is that assignment that differs at most from g in that it assigns d to x.
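Sketched in Python, with assignments as dicts from variable names to members of D (dicts are my stand-in for functions; the names are illustrative):

```python
# g is an assignment: it maps every variable to an object of D.
# modify(g, x, d) returns gxd: the assignment that is just like g
# except that it assigns d to x. g itself is left unchanged.

def modify(g, x, d):
    h = dict(g)        # copy g, so the original assignment survives
    h[x] = d           # ...and reassign x to d
    return h

g = {"x1": "a", "x2": "b"}
h = modify(g, "x1", "c")
print(h["x1"], h["x2"], g["x1"])   # c b a
```

Note that gxd is a new assignment, not a destructive update: the quantifier clauses below need to try many values of d against the same original g.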
Given a model and an assignment function, we now have the means of specifying the
interpretation, in any given model relative to any given assignment function, of all the
non-logical constants and variables. The truth definition extends this to a full interpretation
for any possible wellformed expression of our language L.
Thus, the truth definition defines for any wellformed expression α of our language L: ⟦α⟧M,g,
the interpretation of α in model M relative to assignment g.
Truth definition for L: ⟦α⟧M,g

Given model M = <D,F> and assignment function g:

Interpretation of terms and predicates:
1. if c ∈ CONL then ⟦c⟧M,g = F(c)
2. if P ∈ PREDnL then ⟦P⟧M,g = F(P)
3. if x ∈ VAR then ⟦x⟧M,g = g(x)

Interpretation of formulas:
1. ⟦P(t1,...,tn)⟧M,g = 1 iff <⟦t1⟧M,g,...,⟦tn⟧M,g> ∈ ⟦P⟧M,g; 0 otherwise
2. ⟦(t1=t2)⟧M,g = 1 iff ⟦t1⟧M,g = ⟦t2⟧M,g; 0 otherwise
   ⟦¬φ⟧M,g = 1 iff ⟦φ⟧M,g = 0; 0 otherwise
   ⟦(φ ∧ ψ)⟧M,g = 1 iff ⟦φ⟧M,g = 1 and ⟦ψ⟧M,g = 1; 0 otherwise
   ⟦(φ ∨ ψ)⟧M,g = 1 iff ⟦φ⟧M,g = 1 or ⟦ψ⟧M,g = 1; 0 otherwise
   ⟦(φ → ψ)⟧M,g = 1 iff ⟦φ⟧M,g = 0 or ⟦ψ⟧M,g = 1; 0 otherwise
3. ⟦∀xφ⟧M,g = 1 iff for every d ∈ D: ⟦φ⟧M,gxd = 1; 0 otherwise
   ⟦∃xφ⟧M,g = 1 iff for some d ∈ D: ⟦φ⟧M,gxd = 1; 0 otherwise
(where gxd is the modified assignment defined above; read it as g with superscript x and
subscript d.)
We have now given a complete compositional interpretation for our language L. Given any
model M and assignment function g, vαbM,g is well defined for any expression α of L.
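The whole truth definition can be sketched as one recursive Python function. Everything here is an illustrative encoding of my own: formulas are tagged tuples such as ("and", φ, ψ), strings beginning with "x" are variables, other strings are individual constants, a model is a pair (D, F) with F a dict, and assignments are dicts.

```python
# A recursive evaluator mirroring the truth definition clause by clause.

def interpret(a, M, g):
    D, F = M
    if isinstance(a, str):                      # terms
        return g[a] if a.startswith("x") else F[a]
    tag = a[0]
    if tag == "pred":                           # P(t1,...,tn)
        _, P, ts = a
        return 1 if tuple(interpret(t, M, g) for t in ts) in F[P] else 0
    if tag == "=":                              # (t1 = t2)
        return 1 if interpret(a[1], M, g) == interpret(a[2], M, g) else 0
    if tag == "not":                            # negation flips the truth value
        return 1 - interpret(a[1], M, g)
    if tag == "and":                            # both conjuncts must be 1
        return min(interpret(a[1], M, g), interpret(a[2], M, g))
    if tag == "or":                             # at least one disjunct is 1
        return max(interpret(a[1], M, g), interpret(a[2], M, g))
    if tag == "impl":                           # antecedent 0 or consequent 1
        return max(1 - interpret(a[1], M, g), interpret(a[2], M, g))
    if tag == "forall":                         # true under every modified assignment
        _, x, phi = a
        return min(interpret(phi, M, {**g, x: d}) for d in D)
    if tag == "exists":                         # true under some modified assignment
        _, x, phi = a
        return max(interpret(phi, M, {**g, x: d}) for d in D)

# a toy model: D = {a,b}, P holds of a only, everyone R-relates to a
D = {"a", "b"}
F = {"P": {("a",)}, "R": {("a", "a"), ("b", "a")}}
M = (D, F)
g = {"x1": "a", "x2": "a"}

# ∃x1 P(x1): true in this model
phi = ("exists", "x1", ("pred", "P", ("x1",)))
print(interpret(phi, M, g))   # 1
```

This only works directly for finite D, of course; the quantifier clauses loop over the domain. The truth definition itself places no such restriction.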
The third aspect of the semantics is the definition of entailment in terms of the semantics.
This tends to be the same (or very similar) for the kinds of logical languages that we study
here.
First we define:
A sentence is a formula without free variables (relying on some definition of free and
bound occurrences of variables in an expression).
Let φ be a sentence of L.
⟦φ⟧M, the interpretation of φ in M (independent of assignment functions), is defined as
follows:
⟦φ⟧M = 1, φ is true in M, iff for every assignment function g: ⟦φ⟧M,g = 1
⟦φ⟧M = 0, φ is false in M, iff for every assignment function g: ⟦φ⟧M,g = 0
Let X be a set of sentences of L and φ a sentence of L:
X ⊨ φ, X entails φ, iff for every model M for L:
   if for every δ ∈ X: ⟦δ⟧M = 1 then ⟦φ⟧M = 1
i.e. X entails φ iff for every model where all the premises δ in X are true, the conclusion φ is
true as well.
We say that φ entails ψ iff {φ} entails ψ.
φ and ψ are logically equivalent iff φ entails ψ and ψ entails φ.
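Since entailment quantifies over all models, we cannot check it by enumeration in general. But for a putative entailment involving a single one-place predicate P we can at least search for countermodels among all models over one fixed small domain, which is enough to illustrate the definition. The check below (my own sketch, not a decision procedure) tests ∀x P(x) ⊨ ∃x P(x) and its converse.

```python
# Search for countermodels over one fixed toy domain D: a model with
# this domain is just a choice of F(P) ⊆ D, so we enumerate all subsets.
from itertools import chain, combinations

D = {1, 2}   # one toy domain; the real definition varies D too

def interpretations(D):
    """Every possible F(P): each subset of D."""
    s = list(D)
    return (set(c) for c in
            chain.from_iterable(combinations(s, r) for r in range(len(s) + 1)))

def entails_on_D(premise, conclusion):
    """True iff no model over D makes the premise true and the conclusion false."""
    return all(conclusion(P) for P in interpretations(D) if premise(P))

forall_P = lambda P: all(d in P for d in D)   # truth of ∀x P(x) in the model
exists_P = lambda P: any(d in P for d in D)   # truth of ∃x P(x) in the model

print(entails_on_D(forall_P, exists_P))   # True: no countermodel over D
print(entails_on_D(exists_P, forall_P))   # False: F(P) = {1} is a countermodel
```

The first result depends on domains being non-empty, exactly as in the official definition of a model; the second shows how a single countermodel refutes a putative entailment.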
5. Introduction to Advanced Semantics
1. Different notions of meaning:
Word meaning – lexical semantics
Sentence meaning – logic
Constituent meaning – semantically interpreted grammar
Arguments for constituent meanings:
intersectivity is not a property of adjectives but of adjuncts: adjectives, PPs, relative clauses
modifying nouns; adverbials, PPs, phrases modifying verbs.
Hence intersectivity is not a property expressed at the level of word meaning, nor sentence
meaning, but constituent meaning: an intermediate grammatical level.
But then we need constituent meanings.
-meanings at different types
type theory
type theory: functional type theory + interpretation as Boolean algebras
2. Techniques for deriving meanings:
Intuitions about entailment, presupposition, felicity: derived from sentence meaning, but
triggered by constituent meaning.
Techniques for pulling out constituent meanings from sentence meanings:
λ-abstraction, λ-conversion (Kleene, Montague, Partee)
3. Semantically interpreted grammar (Montague grammar):
Semantic operations:
1.
-Functional application (Verbs and their arguments)
-Function composition (eg. inside nouns, inside verbs, inside measure phrases)
-λ-abstraction (operator-variable structures)
2.
General set of natural operations that operate in different domains and are cross-linguistically salient:
Type shifting operations (Partee and Rooth, Klein and Sag).
examples: type lifting – existential closure – converse – definiteness…
The Partee triangle
3.
Syntax-semantic mismatches.
Type shifting operations are used to bridge the gap between syntax and semantics.
This leads to linguistic insights.