Interactive learning, formal social epistemology and group
belief dynamics:
Logical, probabilistic and game-theoretic models
Alexandru Baltag and Sonja Smets
(ILLC, University of Amsterdam)
Methods
We combine various approaches to belief change:
• logic-based approaches (Dynamic Epistemic Logic, Belief Revision
theory) and other qualitative approaches (Social Choice Theory)
• probabilistic approaches (Bayesian Epistemology, Social
Epistemology),
• Game-Theoretic and Learning Theoretic approaches.
Applications
• rationality and equilibria in games;
• epistemic paradoxes;
• the formalization of classical epistemological conceptions;
• the issue of tracking the truth by either individual learning (via
belief-revision strategies) or by the wisdom of the crowds (pooling
up social information via belief-merging strategies);
• the apparently irrational aspects of epistemic group dynamics:
informational cascades, epistemic bandwagon effect, pluralistic
ignorance.
PLAN OF COURSE
• Lecture 1: Static models and logics for knowledge and belief
(epistemic and doxastic models, plausibility models, conditional
probabilistic models)
• Lecture 2: Formal Epistemology and more sophisticated
models of knowledge
(failures of introspection and logical omniscience, justified belief,
evidence dynamics, awareness models, Gettier problems, defeasible
knowledge)
• Lecture 3: One-Step Belief Dynamics and its Logic
(conditioning, updates, upgrades, event models, product update,
probabilistic update, lexicographic upgrade)
• Lecture 4: Long Term Dynamics, Convergence, Doxastic
Cycles
(iterated revision, fixed points; epistemic paradoxes and circular
dynamics)
• Lecture 5: Interactive Learning, Wisdom of the Crowds and
Informational Cascades
(tracking the truth, belief aggregation, epistemic democracy,
epistemic bandwagonning, pluralistic ignorance)
Plan of Lecture 1
1.0 Puzzles and motivating examples.
1.1 Single-agent models: epistemic and doxastic Kripke models.
Probabilistic models. Degrees of belief. Logics: S5, S4, KD45.
1.2 Belief Change : Updates. The Problem of Belief Revision.
1.3 Plausibility models: the single-agent case.
1.4 Multi-agent plausibility models.
1.5 Conditional probabilistic models (Popper functions, lexicographic
probabilities).
1.6 Connections between these approaches.
1.7 The logic of conditional beliefs.
PUZZLE no 1: The SURPRISE EXAM paradox
The students in a high-school class know for sure that the date of the exam has been fixed on one of the five (working) days of next week: it'll be the last week of the term, and there has to be an exam, and only one exam.
But they don't know on which day.
Now the Teacher announces to her students that the exam's date will be a surprise: even in the evening before the exam, the students will still not be sure that the exam is tomorrow.
Paradoxical Argumentation
Intuitively, one can prove (by backward induction, starting with
Friday) that, IF this announcement is true, then the exam
cannot take place in any day of the week.
So, using this argument, the students come to “know” that
the announcement is false: the exam CANNOT be a surprise.
GIVEN THIS, they feel entitled to dismiss the announcement, and...
THEN, surprise: whenever the exam comes (say, on Tuesday), it WILL indeed be a complete surprise!
PUZZLE no 2: The Lottery Paradox
A lottery with 1 million tickets (numbered from 1 to 1,000,000) was announced. It is known that one (and only one) of the tickets is the winner.
Alice receives a ticket (with the number 1,570) as a birthday gift. Being a good mathematician, Alice calculates that the probability that this particular ticket is the winner is 0.000001, i.e. practically infinitesimal. So she believes that her ticket (no. 1,570) is not the winning one.
Of course, the same reasoning applies to any other ticket. So, for each number between 1 and 1,000,000, Alice believes that the ticket with that number is not winning.
On the other hand, she knows that one of these tickets IS the
winning one.
So Alice’s beliefs are inconsistent!
The Infinite Lottery
Maybe you think that Alice was wrong to believe that her ticket was
not winning, given that there was still some non-zero (though extremely
small) probability that it might actually be the winning ticket.
Well, then consider instead an infinite lottery, with tickets labeled by
arbitrary natural numbers.
Now, assuming the lottery is fair, the probability that a given number
is the winning one is actually = 0. So Alice is now absolutely right to
believe that a ticket with a(ny) given number is not winning. But
one of them has to be winning!
So Alice’s beliefs (taken as a whole) are still inconsistent! Though, in
this case, any finite subset of her beliefs is consistent...
Puzzle number 3: A Centipede Game
Consider the following game G, where Alice (a) is the first and third player, and Bob (b) the second:
v0:a ---> v1:b ---> v2:a ---> o4: (4,5)
 |          |          |
 v          v          v
o1: (3,0)  o2: (2,3)  o3: (5,2)
In the leaves (“outcomes”, denoted by o’s), the first number is
Alice’s payoff, while the second is Bob’s payoff.
Backward Induction Method
We iteratively eliminate the obviously “bad” moves (that lead to
“bad” payoffs for the player making the move) in stages, proceeding
backwards from the leaves. The first elimination stage gives us:
v0:a ---> v1:b ---> v2:a ---> o3: (5,2)
 |          |
 v          v
o1: (3,0)  o2: (2,3)
Backward Induction, Continued
Next Stage:
v0:a ---> v1:b ---> o2: (2,3)
 |
 v
o1: (3,0)
Backward Induction: The Outcome
Final Stage:
v0:a
 |
 v
o1: (3,0)
So, according to this method, the outcome of the game should be o1: Alice gets 3 dollars, while Bob gets nothing!
So the game stops at the first step, and the players have to be "rationally" satisfied with (3, 0), when they could have got (4, 5) or at least (5, 2) if they had continued to play!
This conclusion strikes most people as pretty "irrational".
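For concreteness, here is a minimal Python sketch of the backward-induction computation on this game; the tree encoding, the helper names and the payoff convention (pairs ordered as Alice, Bob) are illustrative assumptions, not part of the lecture notation.

# Python sketch (illustrative): backward induction on the Centipede game above.
# Players: 0 = Alice (a), 1 = Bob (b); leaves carry payoff pairs (Alice, Bob).

def leaf(name, pa, pb):
    return {"type": "leaf", "name": name, "payoff": (pa, pb)}

def node(name, player, moves):
    return {"type": "node", "name": name, "player": player, "moves": moves}

def backward_induction(t):
    """Return (payoff, path) assuming each player maximizes her own payoff."""
    if t["type"] == "leaf":
        return t["payoff"], [t["name"]]
    best = None
    for label, child in t["moves"]:
        payoff, path = backward_induction(child)
        # the mover keeps the move maximizing her own coordinate of the payoff
        if best is None or payoff[t["player"]] > best[0][t["player"]]:
            best = (payoff, [t["name"] + ":" + label] + path)
    return best

game = node("v0", 0, [
    ("stop", leaf("o1", 3, 0)),
    ("go", node("v1", 1, [
        ("stop", leaf("o2", 2, 3)),
        ("go", node("v2", 0, [
            ("stop", leaf("o3", 5, 2)),
            ("go", leaf("o4", 4, 5)),
        ])),
    ])),
])

print(backward_induction(game))   # ((3, 0), ['v0:stop', 'o1'])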
Aumann’s Argument
But... it seems to be an inescapable conclusion of (commonly known)
“rationality”!
Indeed, suppose that it is common “knowledge” that
everybody is “rational”: always plays to maximize his/her profit.
Then, in particular, Alice is rational, so when choosing between
outcomes o3 and o4 (at node v2 ), she will choose o3 (giving her 5
dollars rather than 4). This justifies the first elimination stage.
Now, since “rationality” is common “knowledge”, Bob knows
that Alice is “rational”, so he can “simulate” the above
elimination argument in his mind : so now Bob “knows” that, if the
node v2 is reached during the game, then Alice will choose outcome
o3 .
Given this information, he knows that, if node v1 is reached, the only possible outcomes would be o2 and o3. Of these two, o2 gives Bob a higher payoff (3 instead of 2). Since Bob is rational, it follows that, if node v1 is reached during the game, Bob would choose o2. This justifies the second elimination stage.
Again, all this is known to Alice: she knows that Bob is
rational and that he knows that she is rational, so she can
“simulate” all the above argument, concluding that at the initial node
v0 , the possible outcomes are only o1 and o2 . Being rational
herself, she has to choose o1 (giving her a higher payoff 3 > 2).
Counterargument
In view of the above argument, let's re-examine Bob's reasoning when he plans his move for node v1 in the Centipede game:
v0:a ---> v1:b ---> v2:a ---> o4: (4,5)
 |          |          |
 v          v          v
o1: (3,0)  o2: (2,3)  o3: (5,2)
Based on the above argument, Bob knows that, IF “rationality”
is “common knowledge”, then Alice should choose outcome
o1 , thus finishing the game.
So Bob reasons like this: IF node v1 were reached AT ALL, then this could only happen if the above assumption ("common knowledge of rationality") was wrong! So, in this eventuality,
he will have to give up his “knowledge of Alice’s rationality”: she
would have already made what appeared as an “irrational” choice (of
v1 over o1 ). Even if he started the game believing that Alice was
rational, he may now reassess this assumption.
This undermines the justification for the first elimination step, at
least in Bob’s mind: once he’s not sure of Alice’s rationality, he
cannot be sure anymore that she will choose o3 over o4 , if given this
opportunity.
The Consequences of Pessimism
So it seems perfectly rational for Bob to think that, IF node v1 were reached at all, THEN Alice is "irrational". He can then conclude that, if she'd later be given the opportunity to choose between o3 and o4, Alice would "stupidly" choose o4. So, as far as Bob's beliefs go, the "first" elimination stage now goes as follows:
v0:a ---> v1:b ---> v2:a ---> o4: (4,5)
 |          |
 v          v
o1: (3,0)  o2: (2,3)
Next Stage
Given that, and Bob’s rationality, the next stage according to
Bob is:
v0:a ---> v1:b ---> v2:a ---> o4: (4,5)
 |
 v
o1: (3,0)
Final Stage
We assumed common knowledge of rationality, so in fact Alice IS
rational: Bob is wrong to revise his belief, since she would never
choose o4 over o3 . Even her choice of v1 over o1 is perfectly
justified, if we assume that she knows Bob’s belief revision
policy. Then the best move for Alice is to first choose v1 and later
choose o3 :
v0:a ---> v1:b ---> v2:a ---> o3: (5,2)
Inconsistency?
So, assuming common knowledge of “rationality”, we “proved” both
that the backward induction outcome is reached and that it is not
reached!
Moreover, this reasoning can be generalized (as pointed out by
Binmore, Bonanno, Bicchieri, Reny, Brandenburger and others):
the argument underlying the backward induction method seems to
give rise to a fundamental paradox (the so-called “BI paradox ”).
PUZZLE no 4: Wisdom of the Crowds
The “implicit knowledge” of a group is typically much higher than
even the knowledge of the most expert member of the group.
Estimating the weight of an ox. (Francis Galton)
Counting jelly beans in a jar. (Jack Treynor)
Navigating a maze. (Norman Johnson)
Finding out which company was responsible for the Challenger
disaster!
Predicting election results!
And yet...
PUZZLE no 5: Informational Cascades
It is commonly known that there are two urns. Urn A contains twice as
many black marbles as white marbles. Urn B contains twice as many
white marbles as black marbles.
It is known that one (and only one) of the urns is placed in a room, where people are allowed to enter one by one. Each person randomly draws one marble from the urn, looks at it, and has to make a guess: whether the urn in the room is Urn A or Urn B. The guesses are recorded on a board in the room, so that the next person can see the previous guesses.
What will happen?
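One way to get a feel for what tends to happen is to simulate the protocol. The Python sketch below uses a simple counting rule (the public board plus one's own private draw, ties broken in favour of the private draw) as a crude stand-in for Bayesian updating; the rule, the parameter values and all names are illustrative assumptions, not part of the lecture.

import random

def simulate_cascade(true_urn="A", n_agents=12, seed=1):
    """Toy simulation of the urn-guessing protocol.

    Each agent privately draws one marble (Urn A: 2/3 black, Urn B: 2/3 white),
    then guesses by counting the public guesses plus her own draw, breaking
    ties in favour of her own draw.
    """
    random.seed(seed)
    p_black = 2 / 3 if true_urn == "A" else 1 / 3
    board = []                      # public record of guesses
    for _ in range(n_agents):
        draw = "black" if random.random() < p_black else "white"
        evidence = board.count("A") - board.count("B")
        evidence += 1 if draw == "black" else -1
        if evidence > 0:
            guess = "A"
        elif evidence < 0:
            guess = "B"
        else:                       # tie: follow the private draw
            guess = "A" if draw == "black" else "B"
        board.append(guess)
    return board

print(simulate_cascade())   # after a few agents, everyone just copies the board: a cascade

Note that once the board leads by two or more guesses for the same urn, no private draw can change the sign of the count, so all later agents simply copy the crowd.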
The Circular Mill
An army ant, when lost, obeys a simple rule: follow the ant in front of
you!
Most of the time, this works well.
But the American naturalist William Beebe came upon a strange sight
in Guyana:
a group of army ants was moving in a huge circle, 1200 feet in
circumference. It took each ant two and a half hours to complete the
tour.
The ants went round and round for two days, till they all died.
If you think people are smarter than ants, think of the arms race in the Cold War.
The Human Mill: Men Who Stare at Goats
About the U.S. Army’s exploration of psi research and
military applications of the paranormal.
General Brown: When did the Soviets begin this type of
research?
Brigadier General Dean Hopgood: Well, Sir, it looks like they found out about our attempt to telepathically communicate with one of our nuclear subs, the Nautilus, while it was under the polar cap.
General Brown: What attempt?
Dean: There was no attempt. It seems the story was a French
hoax.
Dean: But the Russians think the story about the story being a
French hoax is just a story, Sir.
General Brown: So they started doing psi research because they
thought we were doing psi research, when in fact we weren’t
doing psi research?
Dean: Yes sir. But now that they *are* doing psi research,
we’re gonna have to do psi research, sir.
Dean: We can't afford to have the Russians leading the field in the paranormal.
PUZZLE no 6: Pluralistic Ignorance
Emperor’s New Clothes.
If there is anything that you feel is too difficult to understand in my lecture, please ask questions! :-)
Anybody?!
Anti-bandwagon effect
The following things are commonly known about the stock market:
PRINCIPLE 1:
If a majority (> 80%) of “players” sells stock A in a given round, then
its price will go down in the next round.
If a majority (> 80%) buys stock A in a given round, then its price will
go up in the next round.
PRINCIPLE 2:
Someone who just bought (sold) Stock A will gain money if its
price goes up (down) in the next round.
Someone who just bought (sold) Stock A will lose money if its
price goes down (up) in the next round.
PUZZLE no 7: The Perfect Stock Market Predictor
Alice, a genius economist, discovers a theory that perfectly
predicts the future trend of the stocks.
So she will get rich beyond belief, won’t she?!
Here’s how:
Just to be sure, she keeps her theory secret.
By always buying (selling) exactly when her theory predicts
the stocks will go up (down) in the next round, she will become
richer and richer, forever after.
Or will she??
On the contrary, one can prove that, if a majority of players are minimally rational, then either Alice's theory is false or it is impossible to apply it again and again!
Suppose, towards a contradiction, that the theory is true (i.e.
always predicts the trend) and is applicable by Alice ad
infinitum. Thus, it is consistent to assume that she will indeed
apply it: repeatedly buying (selling) exactly iff the theory predicts
the stocks will go up (down).
After a while, the majority of the other players (being rational) will
notice that Alice has consistently bought/sold at the right time. So
they will follow Alice’s actions: buying/selling whenever she does.
By Principle 1, the stock's price will go down when Alice (hence the
majority) bought, and will go up when she (and hence the majority)
sold, thus disproving Alice’s theory!
Single-Agent Epistemic-Doxastic Logic
Epistemic Logic was first formalized by Hintikka (1962), who also
sketched the first steps in formalizing doxastic logic.
These logics were further developed and studied by philosophers and logicians (Parikh, Stalnaker, van Benthem etc.), computer scientists (Halpern, Vardi, Fagin etc.) and economists (Aumann, Brandenburger, Samet etc.).
Syntax of Epistemic-Doxastic Logic
ϕ ::= p | ¬ϕ | ϕ ∧ ϕ | Kϕ | Bϕ
Models for Single-Agent Information
We are given a set of “possible worlds”, meant to represent all the
relevant epistemic/doxastic possibilities in a certain situation.
EXAMPLE 1: a coin is on the table, but the (implicit) agent doesn’t
know (nor believe he knows) which face is up.
H          T
A Probabilistic Version
A (discrete) probabilistic model is obtained by simply adding a
map, assigning a probability to each state:
H : 0.5          T : 0.5

This model reflects the agent's belief that the coin is fair (or the
agent’s complete lack of further information, according to another
interpretation).
If the agent believes (has information) that the coin is biased, some
other map is chosen, e.g.
H : 0.66          T : 0.34
Discrete Probabilistic Measures
A discrete probabilistic space is a pair (S, µ), where S is a finite set of
states and µ : P(S) → [0, 1] satisfies the standard axioms of a
probability measure.
This is equivalent to having simply a probability assignment (on S finite), i.e. a map µ : S → [0, 1] such that Σ_{s∈S} µ(s) = 1. This can be uniquely extended to P(S) by putting

µ(P) := Σ_{s∈P} µ(s)
Note that, to uniquely determine such a probability assignment on a set S = {s1, . . . , sn} with n elements, it is enough to specify n − 1 probabilities: {µ(si) : 1 ≤ i ≤ n − 1}.
Conversely, any map µ : {si : 1 ≤ i ≤ n − 1} → [0, 1] on n − 1 states, such that Σ_{1≤i≤n−1} µ(si) ≤ 1, determines a discrete probabilistic space.
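A minimal Python sketch of this extension from states to events (the state names and values are illustrative assumptions):

# Python sketch (illustrative): extending a probability assignment on states
# to a probability measure on all events in P(S).

mu = {"s1": 0.5, "s2": 0.25, "s3": 0.25}        # probability assignment, sums to 1

def prob(event):
    """mu(P) := sum of mu(s) for s in the event P (a set of states)."""
    return sum(mu[s] for s in event)

print(prob({"s1", "s3"}))       # 0.75
print(prob(set(mu)))            # 1.0 (the whole space)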
Knowledge or Belief
The universal quantifier over the domain of possibilities is interpreted
as knowledge, or belief, by the implicit agent.
So we say the agent knows, or believes, a sentence ϕ if ϕ is true in
all the possible worlds of the model.
The specific interpretation (knowledge or belief) depends on the
context.
In the previous example, the agent doesn’t know (nor believe) that the
coin lies Heads up, and neither that it lies Tails up.
Subjective Probability: “Degrees of Belief”
The Bayesian, or “subjective”, interpretation of probability:
µ(P ) = α means that the agent’s “degree of belief” in P , the
“intensity” of his belief in P , is given by the number α. “Certainty”
corresponds to α = 1.
But what about (simple) belief? As in Ba P .
A “big enough” probability is not enough!
One might be tempted to equate simple “belief” with “a high degree
of belief”, by putting e.g.
BP iff µ(P ) ≥ α
for some big enough α, say α = 0.99. Or even α = 0.5.
But none of these will make the KD45 axioms sound. In fact, even K will fail! The K-validity
BP ∧ BQ ⇒ B (P ∧ Q)
fails, except if α = 0 (i.e. the agent has no non-trivial beliefs) or
α = 1.
The Lottery Paradox
Argument: the Lottery paradox.
There are 1000 tickets in a fair lottery. For each single ticket, agent a
believes with degree 0.999 that it is not the winning one.
But she shouldn’t believe with degree 0.999 the conjunction of all
these, i.e. that no ticket is the winning one!
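This failure of conjunction-closure can be checked directly. A minimal Python sketch with a 1000-ticket fair lottery and the illustrative threshold α = 0.999:

# Python sketch (illustrative): "believe P iff mu(P) >= alpha" is not closed under conjunction.
n, alpha = 1000, 0.999
tickets = range(1, n + 1)

def mu(event):                       # uniform probability on the n tickets
    return len(event) / n

beliefs = [set(tickets) - {i} for i in tickets]     # "ticket i is not the winner"
print(all(mu(b) >= alpha for b in beliefs))         # True: each one is believed
conjunction = set.intersection(*beliefs)            # "no ticket is the winner"
print(mu(conjunction), mu(conjunction) >= alpha)    # 0.0 False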
Belief=Probability 1
We can dismiss the case α = 0, which amounts to assuming that the
agent has inconsistent beliefs.
So the only natural probabilistic interpretation for (consistent) belief
is to take α = 1:
BP iff µ(P ) = 1.
Simple Belief=Certain Belief
So the only natural (and non-trivial) probabilistic interpretation for
belief is to take α = 1, i.e. to equate (simple) belief with “certain”
belief:
BP iff µ(P ) = 1
This is independent of the current state, so it only applies to the case in which the agent has no knowledge at all about the current state.
Learning: Update
EXAMPLE 2:
Suppose now the agent looks at the upper face of the coin and he sees
it’s Heads up.
The model of the new situation is now:
H

Only one epistemic possibility has survived: the agent now
knows/believes that the coin lies Heads up.
Update as World Elimination
In general, updating corresponds to world elimination:
an update with a proposition (set of states) P ⊆ S is simply the
operation of deleting all the non-P possibilities.
After the update, the worlds not satisfying P are no longer possible:
the actual world is known not to be among them.
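In code, an update is just a filter on the set of worlds. A minimal Python sketch (the state names are illustrative):

# Python sketch (illustrative): an update with proposition P deletes all non-P worlds.
S = {"H", "T"}                    # epistemically possible worlds
P = {"H"}                         # the learned proposition, as a set of states

def update(worlds, prop):
    return {w for w in worlds if w in prop}       # keep only the P-worlds

print(update(S, P))               # {'H'}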
Probabilistic version: Bayesian Conditionalization
In probabilistic models, this corresponds to Bayesian
conditionalization:
the new probability after updating with P is given by
µ′(Q) := µ(Q|P) = µ(P ∩ Q) / µ(P) = µ(P|Q) · µ(Q) / µ(P),

provided that µ(P) ≠ 0.
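A minimal Python sketch of conditionalization on a discrete probability assignment (the state names and numbers are illustrative assumptions):

# Python sketch (illustrative): Bayesian conditionalization mu'(.) = mu(. | P).
mu = {"1": 0.2, "2": 0.2, "3": 0.2, "4": 0.2, "5": 0.2}

def conditionalize(mu, P):
    """Return the new assignment mu(. | P), provided mu(P) != 0."""
    mP = sum(mu[s] for s in P)
    if mP == 0:
        raise ValueError("cannot condition on a probability-zero event")
    return {s: (mu[s] / mP if s in P else 0.0) for s in mu}

print(conditionalize(mu, {"1", "2"}))
# {'1': 0.5, '2': 0.5, '3': 0.0, '4': 0.0, '5': 0.0}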
Truth and Reality
But is ϕ “really” true (in the “real” world), apart from the agent’s
knowledge or beliefs?
For this, we need to specify which of the possible worlds is the actual world.
Real World
Suppose that, in the original situation (before learning), the coin did lie Heads up (though the agent didn't know, or believe, this).
We represent this situation by marking the actual (“real” state of
the) world with a red star:
*H          T
Mistaken Updates
But what if the real world is not among the “possible” ones? What if
the agent's sight was so bad that she only thought she saw the coin lying Heads up, when in fact it lay Tails up?
After the “update”, her epistemically-possible worlds are just
H

but we cannot mark the actual world here, since it doesn't belong
to the agent’s model!
False Beliefs
Clearly, in this case, the model only represents the agent’s beliefs, but
NOT her “knowledge” (in any meaningful sense): the agent believes
that the coin lies Heads up, but this is wrong!
Knowledge is usually assumed to be truthful, but in this case the
agent’s belief is false.
But still, how can we talk about “truth” in a model in which the
actual world is not represented?!
Third-person Models
The solution is to go beyond the agent’s own model, by taking
an “objective” (third-person) perspective: the real possibility is always in the model, even if the agent believes it to
be impossible.
To point out which worlds are believed to be possible by the agent
we encircle them: these worlds form the “sphere of beliefs”.
“Belief ” now quantifies ONLY over the worlds in this sphere,
while “knowledge” still quantifies over ALL possible worlds.
EXAMPLE 3:
( H )          *T
Example 4
In the Surprise Exam story, a possible initial situation (BEFORE the
Teacher’s announcement) might be given by:
( 1   2   3   4   5 )
where i means that the exam takes place on the i-th (working) day of the week.
This encodes an initial situation in which the student knows that
there will be an exam in (exactly) one of the days, but he doesn’t
know the day, and moreover he doesn’t have any special belief
about this: he considers all days as being possible.
We are not told when the exam will take place: no red star.
Beliefs
EXAMPLE 5:
If, however, the Student receives the (possibly false!) information
that the exam will take place either Monday or Tuesday, then after
conditioning on P = {1, 2} the model is:
( 1   2 )   3   4   5

Again, we are not told when the exam is, so no red star.
However, if we are told that the exam is in fact on Thursday (though
the student still doesn’t know this), then the model is:
( 1   2 )   3   *4   5

In this model, some of the student's beliefs are false, since the real world does NOT belong to his "sphere of beliefs".
Probabilistic Version
If we start with the model

( 1 : 0.2   2 : 0.2   3 : 0.2   4 : 0.2   5 : 0.2 )

and apply Bayesian conditioning with P, we obtain:

( 1 : 0.5   2 : 0.5 )   3 : 0   4 : 0   5 : 0

Simple Models for Knowledge and Belief
For a set Φ of facts, a (single-agent, pointed) epistemic-doxastic model is a structure

S = (S, S0, k.k, s∗),
consisting of:
1. A set S of ”possible worlds” (or possible “states of the world”, also
known as “ontic states”). S defines the agent’s epistemic state:
these are the states that are “epistemically possible”.
2. A non-empty subset S0 ⊆ S, S0 ≠ ∅, called the "sphere of beliefs", or the agent's doxastic state: these are the states that are "doxastically possible".
3. A map k.k : Φ → P(S), called the valuation, assigning to each
p ∈ Φ a set kpkS of states.
4. A designated world s∗ ∈ S, called the “actual world”.
Interpretation
• The epistemic state S gives us an (implicit) agent’s state of
knowledge: he knows the real world belongs to S, but cannot
distinguish between the states in S, so cannot know which of them
is the real one.
• The doxastic state S0 gives us the agent’s state of belief : he
believes that the real world belongs to S0 , but his beliefs are
consistent with any world in S0 .
• The valuation tells us which ontic facts hold in which world:
we say that p is true at s if s ∈ kpk.
• The actual world s∗ gives us the “real state” of the world: what
really is the case.
Truth
For any world w in a model S and any sentence ϕ, we write
w |=S ϕ
if ϕ is true in the world w.
When the model S is fixed, we skip the subscript and simply write w |= ϕ.
For atomic sentences, this is given by the valuation map:
w |= p iff w ∈ kpk,
while for other propositional formulas it is given by the usual truth clauses:
w |= ¬ϕ iff w 6|= ϕ,
w |= ϕ ∧ ψ iff w |= ϕ and w |= ψ,
w |= ϕ ∨ ψ iff either w |= ϕ or w |= ψ.
(We take ϕ ⇒ ψ to be just an abbreviation for ¬ϕ ∨ ψ, and ϕ ⇔ ψ to
be an abbreviation for (ϕ ⇒ ψ) ∧ (ψ ⇒ ϕ).)
Interpretation Map
We can extend the valuation kpkS to an interpretation map kϕkS for
all propositional formulas ϕ:
kϕkS := {w ∈ S : w |=S ϕ}.
Obviously, this has the property that
k¬ϕkS = S \ kϕkS ,
kϕ ∧ ψkS = kϕkS ∩ kψkS ,
kϕ ∨ ψkS = kϕkS ∪ kψkS .
We now want to extend the interpretation to all the sentences in
doxastic-epistemic logic.
Knowledge and Belief
Knowledge is defined as “truth in all epistemically possible
worlds”, while belief is “truth in all doxastically possible
worlds”:
w |= Kϕ iff t |= ϕ for all t ∈ S,
w |= Bϕ iff t |= ϕ for all t ∈ S0 .
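These two clauses can be prototyped directly. A minimal Python sketch, on the mistaken-update coin example (the world and atom names are illustrative assumptions):

# Python sketch (illustrative): knowledge and belief in a simple epistemic-doxastic model.
S  = {"H", "T"}          # epistemically possible worlds
S0 = {"H"}               # sphere of beliefs (doxastically possible worlds)
val = {"heads": {"H"}}   # valuation: atom "heads" is true exactly at world H

def true_at(w, atom):
    return w in val[atom]

def K(atom):             # knowledge: truth in ALL epistemically possible worlds
    return all(true_at(w, atom) for w in S)

def B(atom):             # belief: truth in ALL doxastically possible worlds
    return all(true_at(w, atom) for w in S0)

print(K("heads"), B("heads"))   # False True: the agent believes, but does not know, heads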
Validity
A sentence is valid over epistemic-doxastic models if it is true at every
state in every epistemic-doxastic model.
A sentence is satisfiable (over epistemic-doxastic models) if it is true at some state in some doxastic-epistemic model.
Consequences
For every sentences ϕ, ψ etc, the following are valid over
epistemic-doxastic models:
1. Veracity of Knowledge:
Kϕ ⇒ ϕ
2. Positive Introspection of Knowledge:
Kϕ ⇒ KKϕ
3. Negative Introspection of Knowledge:
¬Kϕ ⇒ K¬Kϕ
4. Consistency of Belief:
¬B(ϕ ∧ ¬ϕ)
5. Positive Introspection of Belief:
Bϕ ⇒ BBϕ
6. Negative Introspection of Belief:
¬Bϕ ⇒ B¬Bϕ
7. Strong Positive Introspection of Belief:
Bϕ ⇒ KBϕ
8. Strong Negative Introspection of Belief:
¬Bϕ ⇒ K¬Bϕ
9. Knowledge implies Belief:
Kϕ ⇒ Bϕ
Epistemic-Doxastic Logic: Sound and Complete Proof System
In fact, a sound and complete proof system for single-agent
epistemic-doxastic logic can be obtained by taking as axioms validities
(1)-(4) and (7)-(9) above, together with “Kripke’s axioms” for
knowledge and belief
K(ϕ ⇒ ψ) ⇒ (Kϕ ⇒ Kψ)
B(ϕ ⇒ ψ) ⇒ (Bϕ ⇒ Bψ)
together with the following inference rules:
Modus Ponens:
From ϕ and ϕ ⇒ ψ infer ψ.
Necessitation:
From ϕ infer Kϕ.
Generalization
Many philosophers deny that knowledge is introspective, and some
philosophers deny that belief is introspective. In particular, both
common usage and Platonic dialogues suggest that people may
believe they know things that they don’t actually know.
Some of the other validities above may also be debatable: e.g. some "crazy" agents may have inconsistent beliefs.
So it is convenient to have a more general semantics, in which the
above principles do not necessarily hold, so that one can pick whichever
principles one considers true.
Kripke Semantics
For a set Φ of facts, a Φ-Kripke model is a structure

S = (S, Ri, k.k, s∗)i∈I

consisting of:
1. a set S of ”possible worlds”
2. an (indexed) family of binary accessibility relations
Ri ⊆ S × S
3. and a valuation k.k : Φ → P(S), assigning to each p ∈ Φ a set kpkS
of states
4. a designated world s∗ : the “actual” one.
Kripke Semantics: Modalities
For atomic sentences and for Boolean connectives, we use the same
semantics (and notations) as on epistemic-doxastic models.
For each relation R in the indexed family and every sentence ϕ, we can
define a new sentence [R]ϕ by (universally) quantifying over
R-accessible worlds:
s |= [R]ϕ iff t |= ϕ for all t such that sRt.
The operator [R] is called a "(universal) Kripke modality". When the relation R is unique, we can leave it implicit and abbreviate [R]ϕ as □ϕ.
The dual existential modality is given by

⟨R⟩ϕ := ¬[R]¬ϕ.

Again, when R is unique, we can abbreviate ⟨R⟩ϕ as ◊ϕ.
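A minimal Python sketch of the box and diamond clauses over a relation given as a set of pairs (all names are illustrative assumptions):

# Python sketch (illustrative): the universal and existential Kripke modalities.
worlds = {"s", "t", "u"}
R = {("s", "t"), ("s", "u"), ("t", "t")}        # accessibility relation as a set of pairs
phi = {"t", "u"}                                 # a proposition: the set of worlds where it is true

def box(w, prop):        # [R]phi: phi holds at every R-successor of w
    return all(v in prop for (x, v) in R if x == w)

def diamond(w, prop):    # <R>phi := not [R] not phi: phi holds at some R-successor of w
    return any(v in prop for (x, v) in R if x == w)

print(box("s", phi), diamond("s", phi))   # True True
print(box("u", phi), diamond("u", phi))   # True False: u has no successors, so [R]phi holds vacuously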
Kripke Models for Knowledge and Belief
In a context where we interpret the modality □ϕ as knowledge, we use the notation Kϕ instead, and we denote by ∼ the underlying binary relation R.
When we interpret the modality □ϕ as belief, we use the notation Bϕ instead, and we denote by → the underlying binary relation R.
So a Kripke model for (single-agent) knowledge and belief is of
the form (S, ∼, →, k.k, s∗ ), with K interpreted as the modality [∼] for
the epistemic relation ∼, and B as the modality [→] for the doxastic
relation →.
Example 3, again: knowledge
The agent’s knowledge in the concealed coin scenario can now be
represented as:
H <---> T        (together with a reflexive loop at each state)
The arrows represent the epistemic relation ∼, which captures the
agent's uncertainty about the state of the world. An arrow from state s
to state t means that, if s were the real state, then the agent wouldn’t
distinguish it from state t: for all he knows, the real state might be t.
Knowledge properties
The fact that K in this model satisfies our validities (1)-(3) is now reflected in the fact that ∼ is an equivalence relation in this model:
• The Veracity (known as axiom T in modal logic) Kϕ ⇒ ϕ
corresponds to the reflexivity of the relation ∼.
• Positive Introspection (known as axiom 4 in modal logic)
Kϕ ⇒ KKϕ corresponds to the transitivity of the relation ∼.
• Negative Introspection (known as axiom 5 in modal logic)
¬Kϕ ⇒ K¬Kϕ corresponds to Euclideaness of the relation ∼:
if s ∼ t and s ∼ w then t ∼ w.
In the context of the other two, Euclideaness is equivalent to
symmetry:
if s ∼ t then t ∼ s.
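These correspondences can be tested mechanically on finite models. A minimal Python sketch of the relevant frame-property checks (an illustrative helper, not part of the lecture notes):

# Python sketch (illustrative): checking frame properties of a relation over a finite set of worlds.
def reflexive(W, R):
    return all((w, w) in R for w in W)

def transitive(W, R):
    return all((s, u) in R for (s, t) in R for (x, u) in R if t == x)

def euclidean(W, R):
    return all((t, u) in R for (s, t) in R for (x, u) in R if s == x)

def symmetric(W, R):
    return all((t, s) in R for (s, t) in R)

W = {"H", "T"}
sim = {("H", "H"), ("H", "T"), ("T", "H"), ("T", "T")}   # the epistemic relation from Example 3
print(reflexive(W, sim), transitive(W, sim), euclidean(W, sim), symmetric(W, sim))
# True True True True: an equivalence relation, as expected for knowledge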
Epistemic Models
An epistemic model (or S5-model) is a Kripke model in which all
the accessibility relations are equivalence relations, i.e. reflexive,
transitive and symmetric (or equivalently: reflexive, transitive
and Euclidean).
S4 Models for weak types of knowledge
But, we can see that, in the generalized setting of Kripke models, these
properties are NOT automatically satisfied.
So one can use Kripke semantics to interpret weaker notions of
“knowledge”, e.g. a type of knowledge that is truthful (factive) and
positively introspective, but NOT necessarily negatively introspective.
An S4-model for knowledge is a Kripke model satisfying only
reflexivity and transitivity (but not necessarily symmetry or
Euclideaness).
Example 3, again: beliefs
The agent’s beliefs after the mistaken update are now representable as:
H <--- T        (together with a loop at H: both worlds' doxastic arrows point to H)

In both worlds (i.e. irrespective of which world is the real one), the
agent believes that the coin lies Heads up.
Belief properties
The fact that belief in this model satisfies our validities (4)-(6) is now reflected in the fact that the doxastic accessibility relation → in the above model has the following properties:
• Consistency of beliefs (known as axiom D in modal logic) ¬B(ϕ ∧ ¬ϕ) corresponds to the seriality of the relation →:
∀s ∃t such that s → t.
• Positive Introspection for Beliefs (axiom 4) Bϕ ⇒ BBϕ
corresponds to the transitivity of the relation →.
• Negative Introspection for Beliefs (axiom 5) ¬Bϕ ⇒ B¬Bϕ
corresponds to Euclideaness of the relation →.
Doxastic Models
A doxastic model (or KD45-model) is a Φ-Kripke model satisfying
the following properties:
• (D) Seriality: for every s there exists some t such that s → t ;
• (4) Transitivity: If s → t and t → w then s → w
• (5) Euclideaness : If s → t and s → w then t → w
Properties connecting Knowledge and Belief
The fact that knowledge and belief in this model satisfy our validities (7)-(9) is now reflected in the fact that the relations ∼ and → in the above model satisfy the following properties:
• Strong Positive Introspection of beliefs Bϕ ⇒ KBϕ
corresponds to
if s ∼ t and t → w then s → w.
• Strong Negative Introspection of beliefs ¬Bϕ ⇒ K¬Bϕ
corresponds to
if s ∼ t and s → w then t → w.
• Knowledge Implies Beliefs Kϕ ⇒ Bϕ corresponds to
if s → t then s ∼ t.
Epistemic-Doxastic Kripke Models
A Kripke model satisfying all the above conditions on the relations ∼
and → is called an epistemic-doxastic Kripke model.
There are two important observations to be made about these models:
first, they are completely equivalent to our simple, sphere-based
epistemic-doxastic models;
second, the epistemic relation is completely determined by the doxastic
relation.
Equivalence of Models
EXERCISE: For every epistemic-doxastic model S = (S, S0, k.k, s∗) there exists a doxastic-epistemic Kripke model S′ = (S, ∼, →, k.k, s∗) (having the same set of worlds S, the same valuation k.k and the same real world s∗), such that the same sentences of doxastic-epistemic logic are true at the real world s∗ in model S as in model S′:

s∗ |=S ϕ iff s∗ |=S′ ϕ,

for every sentence ϕ.
Conversely, for every doxastic-epistemic Kripke model S′ = (S, ∼, →, k.k, s∗) there exists an epistemic-doxastic model S = (S, S0, k.k, s∗) such that, for every sentence ϕ, we have:

s∗ |=S ϕ iff s∗ |=S′ ϕ.
Doxastic Relations Uniquely Determine Epistemic Ones
EXERCISE:
Given a doxastic Kripke model (S, →, k.k, s∗) (i.e. one in which → is serial, transitive and Euclidean), there is a unique relation
∼⊆ S × S such that (S, ∼, →, k.k, s∗ ) is a doxastic-epistemic
Kripke model.
This means that, to encode an epistemic-doxastic model as a Kripke
model, we only need to draw the arrows for the doxastic
relation.
Multi-Agent Doxastic-Epistemic Models
These are Kripke models
(S, ∼a , →a , k.k, s∗ )a∈A
where the indices a ∈ A represent different agents, each having her own
doxastic and epistemic relations →a , ∼a , satisfying the above
conditions.
Alternatively, the equivalence relations ∼a can be replaced by the
corresponding partitions
Πa = {s(a) : s ∈ S}
of the state space S, consisting of partition cells s(a) that correspond to
the equivalence classes
s(a) = {t ∈ S : t ∼a s}.
So multi-agent models can be repackaged as structures
(S, →a , Πa , k.k, s∗ )a∈A .
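A minimal Python sketch of this repackaging, computing the partition cells s(a) from an agent's equivalence relation (the world names are illustrative assumptions):

# Python sketch (illustrative): turning an agent's equivalence relation into its partition.
S = {"w1", "w2", "w3"}
sim_a = {("w1", "w1"), ("w2", "w2"), ("w3", "w3"),
         ("w1", "w2"), ("w2", "w1")}                 # agent a cannot distinguish w1 and w2

def cell(s, rel):
    """s(a) = the set of worlds the agent cannot distinguish from s."""
    return frozenset(t for t in S if (t, s) in rel)

partition_a = {cell(s, sim_a) for s in S}
print(partition_a)    # two cells: {'w1', 'w2'} and {'w3'}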