Lorentz Center 2010
Doxastic Attitudes and Group Communication
Alexandru Baltag, Oxford University
(Based on joint work with J. van Benthem and Sonja
Smets.)
Doxastic Attitudes: “static” and “dynamic”
A “static” doxastic attitude Ai ϕ captures an agent’s
opinion about some sentence ϕ; e.g. agent i knows ϕ:
Ki ϕ; agent i believes ϕ etc.
In addition to knowledge or belief, we will consider other
such attitudes: strong belief Sbi ϕ, “defeasible knowledge”
(=safe belief ) 2i ϕ, knowledge-to-the-contrary Ki ¬ϕ etc.
But there also exist doxastic attitudes of a “dynamic”
nature, governing an agent’s belief revision.
Dynamic Attitudes
The way an agent revises her beliefs after receiving some
new information depends on the agent’s doxastic
attitude towards the source of information.
Hence, such a “dynamic” attitude captures the agent’s
opinion about the reliability of information
coming from this particular source.
In a “communication” setting, the sources of information
are other agents. We use τji to denote listener j’s
attitude towards information coming from speaker i.
Dynamic Attitudes are Doxastic Transformers
While formally τji will be atomic sentences, the semantics
will associate them with (single-agent) doxastic
transformations: maps τ taking any input-sentence ϕ
and any single-agent plausibility relation (S, ≤) (of some
anonymous agent), and returning a new plausibility
relation (S, ≤′) on the same set of states.
The meaning of τji will be that, whenever receiving
information ϕ from source i, agent j will revise her beliefs
by applying transformation τ ϕ to her plausibility relation
≤j .
The Primacy of Dynamics
Our strategy will be to take the dynamic doxastic
attitudes τji as basic (and simply call them “doxastic
attitudes”).
Moreover, we will use them to define (most of the) static
attitudes as fixed points of the transformations τ .
Simplifying Restrictions
In this talk (and the next one), we assume that all
communication is public: speakers make only public
announcements to the whole group.
Stable attitudes
For this talk, I will also assume these mutual doxastic
attitudes are stable: they do not change during the
conversation (since τji are treated as “ontic facts”, that
are not affected by communication acts).
Later (but maybe not in this talk?), we will consider the
issue of revising agents’ doxastic attitudes.
PLAN
1. Static Epistemics: Plausibility Models.
2. (Dynamic) Doxastic Attitudes as Relational
Transformers.
3. Static Attitudes as Fixed Points.
4. Sincerity, Honesty, Persuasiveness.
1. Static Epistemics: Kripke semantics
For any binary accessibility relation R ⊆ S × S and set
P ⊆ S, the corresponding Kripke modality is:
[R]P := {s ∈ S : ∀t (sRt ⇒ t ∈ P )}.
When we think of sets P ⊆ S as propositions and of
elements s ∈ S as states, we write s |= P instead of s ∈ P .
Hence, the modalities satisfy:
s |=S [R]P iff ∀t (sRt ⇒ t |= P ).
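As an illustration (not part of the original slides), the Kripke modality is easy to compute on a finite frame. The following minimal Python sketch, used for all code snippets below, represents a relation R as a set of pairs and a proposition P as a set of states; all names are ours, introduced only for illustration.

def box(R, P, S):
    # [R]P = {s in S : every R-successor of s is in P}
    return {s for s in S if all(t in P for (u, t) in R if u == s)}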
Preference Models for Information
Interpret the accessibility relation Ra of a multi-modal
Kripke model as a “doxastic preference”, a plausibility
relation, meant to represent “soft” information: in this
reading, sRa t means that world t is at least as
plausible for agent a as world s. For this
interpretation, it is customary to use the notation s ≤a t
for the plausibility relation Ra (and ≥a for its converse),
and also to denote the associated “knowledge” modality
by 2a rather than Ka . It is also customary, though not
necessary, to assume that ≤a is a connected, or at least
a locally connected preorder.
SEMANTICS: Plausibility Structures
A (finite) plausibility frame:
M = <I, W, (≤i )i∈I >
• I a finite set of agents
• W a finite (and non-empty) set of states (“worlds”)
• ≤i “locally connected” preorders on W (“i’s
plausibility order”), one for each agent i ∈ I
Read s ≤i t as “t is at least as plausible for i as
world s”.
Preorder: reflexive and transitive.
Locally connected:
s ≤i t ∧ s ≤i w ⇒ t ≤i w ∨ w ≤i t,
t ≤i s ∧ w ≤i s ⇒ t ≤i w ∨ w ≤i t.
As a consequence, the comparability relation ∼i , given by
s ∼i t iff either s ≤i t or t ≤i s,
is an equivalence relation, called i’s epistemic
indistinguishability. This induces an information
partition for each agent i.
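Continuing the Python sketch above (a hedged illustration, not the original formalism): we represent a plausibility order leq as a set of pairs (s, t) meaning “t is at least as plausible as s”.

def comparable(leq, s, t):
    # s ~i t iff s <=i t or t <=i s (epistemic indistinguishability)
    return (s, t) in leq or (t, s) in leq

def info_cell(leq, S, s):
    # the information cell of s: all worlds comparable with s
    return {t for t in S if comparable(leq, s, t)}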
Strict Order and Equi-plausibility
We also consider the “strict” plausibility relation:
s <i t iff s ≤i t but t ≰i s,
and recall the comparability relation:
s ∼i t iff either s ≤i t or t ≤i s.
Equi-plausibility is the equivalence relation ≅i induced by
the preorder ≤i :
s ≅i t iff both s ≤i t and t ≤i s.
When using the Ri notation for the relation ≤i , the
corresponding strict, indistinguishability and
equi-plausibility relations are denoted by Ri< , Ri∼ , Ri≅ .
Plausibility Models
A (finite, pointed) plausibility model consists of:
• a finite plausibility frame M = <I, W, (≤i )i∈I >,
• a designated world s0 ∈ W , called the “real world”,
• a valuation map, assigning to each atomic sentence
p (in a given set At of atomic sentences) some set
‖p‖ ⊆ W .
The valuation tells us which “ontic” (non-epistemic,
objective) “facts” hold at each world.
Knowledge
A sentence ϕ is known by agent i at state s if it is true
at all states in the same information cell as s; i.e.
in all the worlds in the set
{t ∈ S : t ∼i s}.
In other words, “knowledge” is the Kripke modality for
the comparability relation ∼i :
Ki = [∼i ].
Call this “irrevocable knowledge”: it is an absolutely
certain, fully introspective and unrevisable attitude.
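In the same sketch, irrevocable knowledge is just the box modality for comparability: a sentence is known at s exactly when its extension covers s’s information cell.

def knows(leq, S, s, P):
    # Ki P at s: P holds throughout s's information cell
    return info_cell(leq, S, s) <= P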
Belief
A sentence ϕ is believed by agent i at state s if ϕ is
true in all the “most plausible” worlds in s’s
information cell; i.e. in all states t ∼i s such that
∀w(w ∼i s ⇒ w ≤i t).
In other words, belief is the Kripke modality
Bi = [→i ]
for the “doxastic relation” →i that relates each world to
the most plausible worlds in its information cell:
s →i t iff t ∼i s and ∀w(w ∼i s ⇒ w ≤i t).
Conditional Belief
More generally, a sentence ϕ is believed conditional on
P (in which case we write BiP ϕ) if ϕ is true at all most
plausible worlds satisfying P in s’s information
cell.
In other words, conditional belief is the Kripke modality
BiP = [→iP ]
for the relation given by:
s →iP t iff t ∈ P , t ∼i s and ∀w(w ∈ P ∧ w ∼i s ⇒ w ≤i t).
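Belief and conditional belief can be sketched the same way (assuming leq contains all reflexive pairs): collect the ≤-maximal worlds of the information cell restricted to the condition P, and check that they all satisfy Q.

def cond_belief(leq, S, s, P, Q):
    # B^P Q at s: Q holds in all most plausible P-worlds of s's cell
    candidates = info_cell(leq, S, s) & P
    best = {t for t in candidates if all((w, t) in leq for w in candidates)}
    return best <= Q

def belief(leq, S, s, Q):
    # plain belief = conditional belief on the trivial condition
    return cond_belief(leq, S, s, S, Q)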
Contingency Plans for Belief Change
We can think of conditional beliefs Bϕ ψ as “strategies”,
or “contingency plans” for belief change:
in case I will find out that ϕ was the case, I will
believe that ψ was the case.
Single-Agent Plausibility Models
For a single-agent model S = (S, ≤, s0 , ‖·‖), we can assume
(up to bisimilarity ≃) that the plausibility relation is
connected (total): just restrict the model to s0 ’s
information cell.
Example 0: Prof Winestein
Professor Albert Winestein feels that he is a genius. He
knows that there are only two possible explanations for
this feeling: either he is a genius or he’s drunk. He doesn’t
feel drunk, so he believes that he is a sober genius.
However, if he realized that he’s drunk, he’d think that
his genius feeling was just the effect of the drink; i.e.
after learning he is drunk he’d come to believe that
he was just a drunk non-genius.
In reality though, he is both drunk and a genius.
A Model for Example 0
The actual world is (D, G). Albert considers (D, ¬G) as
being more plausible than (D, G), and (¬D, G) as
more plausible than (D, ¬G). But he can distinguish all
these worlds from (¬D, ¬G), since (in the real world) he
knows (K) he’s either drunk or a genius.
(D,G) --a--> (D,¬G) --a--> (¬D,G)        (¬D,¬G)
Drawing Convention: We use labeled arrows for
plausibility relations ≤a , but we skip loops and composed
arrows (obtainable by transitivity and reflexivity of ≤a ).
True Belief is not Knowledge
At the real world (D, G), we can check that Albert
truthfully believes he’s a genius
(D, G) |= Ba G,
but he doesn’t “know” he’s a genius:
(D, G) |= ¬Ka G.
However, Albert knows that he’s either drunk or a genius:
(D, G) |= Ka (D ∨ G)
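As a sanity check (a hypothetical encoding, using the helper functions sketched earlier), Albert’s order from Example 0 can be written down directly and the three claims above verified.

S = {"DG", "DnG", "nDG", "nDnG"}        # (D,G), (D,¬G), (¬D,G), (¬D,¬G)
leq_a = {(s, s) for s in S} | {("DG", "DnG"), ("DnG", "nDG"), ("DG", "nDG")}
D = {"DG", "DnG"}                        # drunk worlds
G = {"DG", "nDG"}                        # genius worlds
real = "DG"
print(belief(leq_a, S, real, G))              # True:  (D,G) |= Ba G
print(info_cell(leq_a, S, real) <= G)         # False: (D,G) |= ¬Ka G
print(info_cell(leq_a, S, real) <= (D | G))   # True:  (D,G) |= Ka (D ∨ G)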
Mary Curry
Albert Winestein’s best friend is Prof. Mary Curry (not
to be confused with Marie Curie!).
She’s pretty sure that Albert is drunk: she can see
this with her very own eyes. All the usual signs are there!
She’s completely indifferent with respect to
Albert’s genius: she considers the possibility of genius
and the one of non-genius as equally plausible.
However, having a philosophical mind, Mary Curry is
aware of the possibility that the testimony of her
eyes may in principle be wrong: it is in principle
possible that Albert is not drunk, despite the presence of
the usual symptoms.
(¬D,¬G) <--m--> (¬D,G) --m--> (D,G) <--m--> (D,¬G)
Safe Belief, or “defeasible knowledge”
One can also define safe belief as the Kripke modality
for the plausibility relation:
2i ϕ := [≤i ]ϕ
In other words, ϕ is safely believed at state s if it is
true in all states that are at least as plausible as s.
This can be seen as a formalization of the notion of
“defeasible” knowledge, proposed by a number of
philosophers.
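In the sketch above, safe belief is simply the box for the plausibility relation itself:

def safe_belief(leq, S, s, P):
    # 2i P at s: P holds in every world at least as plausible as s
    return {t for t in S if (s, t) in leq} <= P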
“Soft” versus “hard” information
A plausibility relation is in general transitive, but not
symmetric, so “defeasible knowledge” is not of the S5-type:
it is positively introspective, but not necessarily negatively
introspective. One could say that the fully introspective
(S5-type) knowledge Ka captures a notion of “hard”
information, that is guaranteed to be truthful beyond any
doubt; while the plausibility-based (positively, but not
negatively, introspective) knowledge 2a captures a more
realistic notion of “soft” information.
“Irrevocable knowledge” embodies “hard” information
In a plausibility model, the comparability relation ∼a is
an equivalence relation, that includes the plausibility
relation.
So irrevocable knowledge is S5-like (truthful and fully
introspective) and stronger than the plausibility-based
“knowledge” modality 2a . Irrevocable knowledge can
thus be said to embody “hard information”.
Their relative strength is captured by the entailment:
Ka P =⇒ 2a P,
Difference between our two knowledge concepts
To see the difference, note that we have
Ba Ka P = Ka P,
but
Ba 2a P = Ba P,
so that
Ba 2a P 6= 2a P.
This, by the way, solves the so-called “Perfect Believer”
Puzzle.
(In)defeasibility
Something is “irrevocable knowledge” K if it is a belief
that cannot be defeated by any new information (including
false information):
s |= Ka Q iff s |= BaP Q for all P ⊆ S.
Something is “defeasible knowledge” (=safe belief) if it is
a belief that cannot be defeated by any new true
information:
s |= 2a Q iff s |= BaP Q for all P ⊆ S such that s ∈ P.
Strong Belief
A sentence ϕ is strongly believed by agent i at state s
if the following two conditions hold:
1. ϕ is consistent with the agent’s knowledge at s:
∃w ∼i s such that w |= ϕ;
2. within each information cell, all ϕ-worlds are
strictly more plausible than all non-ϕ-worlds:
∀w ∼i w′ (w |= ϕ ∧ w′ ̸|= ϕ ⇒ w′ <i w).
We write Sbi ϕ for strong belief. It is easy to see that
strong belief implies belief.
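A sketch of the strong-belief test in the same style, checked only on s’s own information cell (which suffices for a single-agent order restricted to one cell):

def strong_belief(leq, S, s, P):
    cell = info_cell(leq, S, s)
    if not (cell & P):               # (1) P must be consistent with knowledge at s
        return False
    # (2) every P-world is strictly more plausible than every non-P-world
    return all((u, w) in leq and (w, u) not in leq
               for w in cell & P for u in cell - P)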
Strong Belief is Believed Until Proven Wrong
Actually, strong belief is so strong that it will never be
given up except when one learns information that
contradicts it!
More precisely:
ϕ is strongly believed iff ϕ is believed and is also
conditionally believed given any new evidence
(truthful or not) EXCEPT if the new information
is known to contradict ϕ; i.e. if:
1. Bϕ holds, and
2. Bθ ϕ holds for every θ such that ¬K(θ ⇒ ¬ϕ).
Example
The “presumption of innocence” in a trial is a rule
that asks the jury to hold a strong belief in innocence
at the start of the trial.
More Examples
In our Winestein example
(D,G) --a--> (D,¬G) --a--> (¬D,G)
Albert’s belief that he is sober (¬D) is a strong
belief (though it is a false, hence non-safe, belief).
Albert’s true belief in genius is neither safe nor
strong.
Mary’s belief that Albert is drunk in the model
(¬D,¬G) <--m--> (¬D,G) --m--> (D,G) <--m--> (D,¬G)
is both safe and strong.
Example 1: A Multi-Agent Model S
To put together Mary’s order with Albert’s order, we
need to know what they know about each other.
Suppose all their opinions (=all their conditional
beliefs) are common knowledge. Then we obtain the
following multi-agent plausibility model S:
[Diagram: the four worlds (¬D,¬G), (¬D,G), (D,¬G), (D,G), carrying both Albert’s plausibility arrows (labeled a, as in Example 0) and Mary’s plausibility arrows (labeled m, as above); the real world is (D,G).]
Example 2
Let’s relax our assumptions about agents’ mutual
knowledge: suppose that only Albert’s opinions are
common knowledge; in addition, it is common knowledge
that Mary has no opinion on Albert’s genius (she considers
genius and non-genius as equi-plausible), but that she has
a strong opinion about his drunkenness: she can see him, so
judging by this she either strongly believes he’s drunk or
she strongly believes he’s not drunk. (But her actual
opinion about this is unknown to Albert, who thus
considers both opinions as equally plausible.)
The resulting model is:
[Diagram: eight worlds — two copies of (¬D,¬G), (¬D,G), (D,¬G), (D,G), one copy for each of Mary’s two possible opinions about D — with Albert’s a-arrows linking the two copies as equi-plausible and Mary’s m-arrows ordering the worlds within each copy,]
where the real world is represented by the upper
(D, G) state.
2. Doxastic Transformers
Given a (bisimulation-invariant) doxastic language L, a
(single-agent) doxastic transformer is a map τ
taking any sentence ϕ ∈ L and any (single-agent) total
plausibility model S = (S, ≤, s0 , ‖·‖) into a new total
plausibility model S′ = (S ′ , ≤′ , s0 , ‖·‖′ ), having:
• as new set of worlds: some subset S ′ ⊆ S,
• as new valuation: the restriction ‖·‖ ∩ S ′ of the
original valuation to S ′ ,
• as new plausibility relation: some total preorder ≤′ on S ′ ,
satisfying the condition:
if S ≃ T and ‖ϕ‖S = ‖ψ‖T then τ (ϕ, S) = τ (ψ, T).
We denote by τ ϕ the map τ (ϕ, •) induced on
single-agent plausibility models.
Examples
(1) Update !ϕ (conditionalization with ϕ):
this is executable only if ϕ holds in the real world s0 ;
in which case, all the non-ϕ states are deleted and
the same relations are kept between the remaining states.
(2) Radical Upgrade ⇑ ϕ:
this is executable only if there exist ϕ-worlds in S;
in which case, all ϕ-worlds become “better” (more
plausible) than all ¬ϕ-worlds, and within the two
zones, the old relations are kept.
(3) Conservative Upgrade ↑ ϕ:
this is executable only if there exist ϕ-worlds in S;
in which case,
the “best” ϕ-worlds become better than all other
worlds; all else stays the same.
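A minimal sketch of the three transformers on a single-agent total preorder (again leq as a set of pairs, P the set of worlds satisfying the announced sentence); this is only an illustration of the definitions above, under our toy representation.

def update(S, leq, P):
    # !P: delete all non-P worlds, keep the old order on the rest
    S2 = S & P
    return S2, {(s, t) for (s, t) in leq if s in S2 and t in S2}

def radical_upgrade(S, leq, P):
    # ⇑P: all P-worlds become more plausible than all non-P-worlds;
    # inside each of the two zones the old order is kept
    new = {(s, t) for (s, t) in leq if (s in P) == (t in P)}
    return S, new | {(s, t) for s in S - P for t in S & P}

def conservative_upgrade(S, leq, P):
    # ↑P: only the best P-worlds move on top; everything else is unchanged
    best = {t for t in S & P if all((w, t) in leq for w in S & P)}
    new = {(s, t) for (s, t) in leq if s not in best and t not in best}
    return S, new | {(s, t) for s in S for t in best}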
Different attitudes towards the new information
These correspond to three different possible attitudes of
the learners towards the reliability of the source:
• Update: the source is known to be infallible.
• Radical upgrade: the source is highly reliable (or
at least very persuasive). The source is strongly
believed to be truthful. This can only happen if the
listener doesn’t already know that ϕ is false.
• Conservative upgrade: the source is trusted, but
only “barely”. The source is (“simply”) believed to
be truthful ; but this belief can be easily given up.
More Examples: Negative Attitudes
(4) Negative Update !− ϕ: this is executable only if ϕ is
false in the real world s0 ; in which case, all the ϕ-states
are deleted and the same relations are kept between the
remaining states.
(5) Negative Radical upgrade ⇑− ϕ:
all ¬ϕ-worlds become “better” (more plausible)
than all ϕ-worlds, and within the two zones, the old
ordering remains. This reflects strong distrust:
the listener strongly believes the speaker is lying.
(6) Negative Conservative upgrade ↑− ϕ:
the “best” ¬ϕ-worlds become better than all
other worlds, and the rest of the old order remains. This
reflects relative distrust: the listener barely believes the
speaker is lying.
Example: Neutrality
(7) Doxastic Neutrality id ϕ is the attitude according
to which the source is neither trusted nor distrusted: the
listener simply ignores the new information ϕ, keeping
her old plausibility order as before. This is the identity
map id on plausibility models.
Mixed Attitudes
An agent’s attitude towards a source of information might
depend on the type of information received from
that source: she might treat different types of
information differently. She might mix two or more basic
transformers, using semantic or syntactic conditions
to decide which to apply.
For instance, the agent may strongly trust the source to
be right about sentences belonging to a given sublanguage
L0 , while she may only barely trust it with respect to any
other announcements. This attitude could be denoted by
⇑L0 ↑.
Example
If the source i is a mathematician, j may accept him as
an infallible source of mathematical statements, and thus
perform an update !ϕ whenever i announces a sentence ϕ
about Mathematics.
In any other case, j might treat the new information
more cautiously (say, barely believing it ↑ ϕ, or even
ignoring it, and thus applying id): indeed, the
mathematician i may be utterly unreliable concerning any
other area of conversation!
Formalizing Doxastic Attitudes
Let us now add to our language two ingredients:
• dynamic modalities [i : ϕ] corresponding to public
announcements by agent i;
• new atomic sentences τji , for each pair of distinct
agents i 6= j, where τ comes from a given finite set of
doxastic attitude symbols, including !, ⇑, ↑, id etc,
encoding agent j’s attitude towards agent i’s
announcements.
Semantics
For semantics, we are given a multi-agent plausibility
model, with the valuation map extended to the new
atomic sentences and satisfying a number of semantic
conditions (to follow);
and, in addition, we are also given, for each attitude
symbol τ in the syntax, some single-agent doxastic
transformation, also denoted by τ .
Semantic Constraints
We put some natural semantic constraints (on the
valuation of the atomic sentences of the form τ , for any
doxastic attitude type τ ).
The first says that, in any possible world, every agent
has some unique attitude towards every other
agent:
∀s ∃!τ such that s |= τji .
The second is an introspection postulate: the agent knows
her own doxastic attitudes:
s ∼j t =⇒ (s |= τji ⇐⇒ t |= τji ).
A further simplifying assumption
In fact, sometimes (e.g. in Sonja’s talk), we will make an
even more restrictive assumption, namely all the
agents’ doxastic attitudes towards each other are
common knowledge:
∀s, t : s |= τji ⇐⇒ t |= τji
The Mutual Trust Graph
In the presence of this last simplifying assumption, the
extra-structure required for the semantics (i.e. the
extension of the valuation plus the assignment of a
transformation to each attitude symbol) can be
summarized as a graph having agents as nodes and
arcs labeled by doxastic transformations:
the fact that τji holds in (any state of) the model is
captured by an arc labeled τ from node j to node i.
This graph will be called the “mutual trust graph” of this
group of agents.
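Under the common-knowledge assumption, the mutual trust graph can be stored as a plain table from (listener, speaker) pairs to transformers; the concrete assignment below is just a hypothetical example, not the one on the next slide.

trust = {
    (2, 1): radical_upgrade,        # ⇑: agent 2 strongly trusts agent 1
    (3, 1): conservative_upgrade,   # ↑: agent 3 barely trusts agent 1
    (1, 2): update, (3, 2): update, # !: agent 2 is treated as infallible by all
    (1, 3): radical_upgrade, (2, 3): conservative_upgrade,
}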
Example
[Diagram: a mutual trust graph on three agents 1, 2, 3, with arcs labeled by the attitudes !, ⇑ and ↑.]
Counterexample
Not all graphs are consistent with the assumption that all
doxastic attitudes are common knowledge:
[Diagram: a mutual trust graph on agents 1, 2, 3 in which one listener treats a given speaker as infallible (!) while another listener only barely trusts that same speaker (↑).]
This is NOT a consistent graph.
Consistency Conditions
!ji ⇒ !ki  and  !−ji ⇒ !−ki ,  for all j, k ≠ i.
Semantics of Communication Acts
The semantics of i : ϕ will be given by the multi-agent
doxastic transformation that takes any plausibility model
S = (S, ≤j , ‖·‖)j∈I and returns a new model
(i : ϕ)(S) := (S ′ , ≤′j , ‖·‖′ ), where:
the listeners’ new preorders ≤′j (for j ≠ i) are given
by applying within each ∼j -information cell the
transformer τ to the order ≤j within that cell, whenever
τji holds throughout that cell;
the speaker’s preorder ≤i is kept the same;
the new set S ′ is the union of all the new information
cells; and the new valuation is ‖p‖′ := ‖p‖ ∩ S ′ .
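A rough sketch of this transformation under the common-knowledge assumption (one information cell per agent), handling only the order-changing attitudes (⇑, ↑, id) and not the world-deleting update; orders maps each agent to her plausibility relation and trust is a table as above.

def announce(i, P, S, orders, trust):
    # public announcement i : ϕ with ‖ϕ‖ = P
    new_orders = {}
    for j, leq in orders.items():
        if j == i:
            new_orders[j] = leq                    # the speaker keeps her order
        else:
            _, new_orders[j] = trust[(j, i)](S, leq, P)
    return new_orders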
Semantics of Dynamic Modalities
The dynamic modalities are defined as usual:
s |=S [i : ϕ]ψ iff s |=(i:ϕ)(S) ψ.
EXAMPLE 3
Suppose that, in the situation in Example 2 above (whose model is repeated here):
[Diagram: the eight-world model of Example 2.]
Mary is an infallible source, who publicly announces
that she believes that Albert is drunk : the model after this
act is the one in Example 1.
EXAMPLE 4
Consider the situation in Example 1, but now Mary is
assumed to be a fallible, but highly trusted and
persuasive source ⇑, who announces that Albert is
drunk. This is honest, since she strongly believes D.
After the announcement m : D, Albert starts to prefer
every D-world to every ¬D-world. The upgraded model is
[Diagram: the four worlds with Mary’s order (m) unchanged; Albert’s new order within his information cell is (¬D,G) <a (D,G) <a (D,¬G), while (¬D,¬G) remains in a separate cell for him.]
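Under the stated assumptions, this upgrade can be replayed with the earlier sketch by restricting Albert’s order to his information cell; the most plausible world is then (D,¬G), so Albert comes to believe he is a drunk non-genius.

Sa = {"DG", "DnG", "nDG"}                      # Albert's information cell
leq_cell = {(s, t) for (s, t) in leq_a if s in Sa and t in Sa}
_, leq_a2 = radical_upgrade(Sa, leq_cell, D)
print(belief(leq_a2, Sa, "DG", D))             # True:  Albert now believes D
print(belief(leq_a2, Sa, "DG", G))             # False: he now believes ¬G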
Counterexample 5
Note that simply announcing that she believes D, or
even that she strongly believes D, WON’T DO: this
will NOT be persuasive, since it will not change
Albert’s beliefs about the facts of the matter (D or ¬D),
although it may change his beliefs about her beliefs.
Being informed of another’s beliefs is not enough
to convince you of their truth.
Indeed, Mary’s beliefs are already common knowledge in
the initial model of Example 1: so an upgrade ⇑ (Bm D)
would be superfluous!
Sincerity of a Communication Act
A communication act i : ϕ is sincere if the speaker
believes her own announcement, i.e. Bi ϕ holds.
Common Knowledge of Sincerity
In cooperative situations, it is sometimes natural to
assume common knowledge of sincerity.
This can be done by modifying the above semantics
of i : ϕ, by restricting the applicability of the above
doxastic transformation (modeling the act i : ϕ) to states
in which Bi ϕ holds. We call this an “inherently sincere”
communication act.
Note that the Reduction Laws have then to be changed,
by adding the precondition Bi ϕ.
Sincere Lies and (Lack of ) Introspection
In general, sincerity does not imply truthfulness:
“I really am the man of your dreams” is a typical sincere
lie!
But, when applied to introspective properties, sincerity
DOES imply truthfulness!
“I believe I am the man of your dreams” is sincere only if
it’s true.
Another Mixed Attitude
This observation suggests a natural mixed attitude !i ⇑:
the listener strongly trusts the speaker i to tell the truth,
but moreover she considers the speaker to be infallibly
truthful when announcing sentences (such as “I believe..”
or “I know...”) that he can know by introspection.
This attitude is natural when common knowledge of
sincerity is assumed: when applied to introspective
properties, sincerity is the same as infallible truthfulness,
hence ! is appropriate; for other properties, sincerity can
at least be (in the case of a highly reliable source) a
warranty of high plausibility.
Super-Strong Trust !i ⇑
One can show that all i-introspective sentences are
equivalent to ones of the form Ki ϕ.
Hence, the attitude !⇑ji can be described as follows: if i’s
announcement ϕ is (equivalent to) an i-introspective
sentence of the form Ki ϕ, then j applies an update !ϕ;
otherwise, he performs a radical upgrade ⇑ ϕ.
Let us call this attitude super-strong trust.
Another Description
The attitude ! ⇑ji can also be described in more
semantical terms:
after i announces a sentence ϕ, j will apply an update !ϕ
iff j knows that i knows ϕ’s truth-value, i.e. iff
we have Kj (Ki ϕ ∨ Ki ¬ϕ);
otherwise, j will perform a radical upgrade ⇑ ϕ.
Reduction Laws: examples
Infallibility:
!ji ⇒ ( [i : ϕ]Bj^P θ ⇔ ( ϕ ⇒ Bj^(P ∧ [i:ϕ]P) [i : ϕ]θ ) )
Strong Trust:
⇑ji ⇒ ( [i : ϕ]Bj^P θ ⇔ ( ¬Kj ¬[i : ϕ]P ⇒ Bj^(ϕ ∧ [i:ϕ]P) [i : ϕ]θ ) )
Super-Strong Trust:
!⇑ji ⇒ ( [i : ϕ]Bj^P θ ⇔ ( (Kj (Ki ϕ ∨ Ki ¬ϕ) ∧ ϕ ⇒ Bj^(P ∧ [i:ϕ]P) [i : ϕ]θ)
∧ (¬Kj (Ki ϕ ∨ Ki ¬ϕ) ∧ ¬Kj ¬[i : ϕ]P ⇒ Bj^(ϕ ∧ [i:ϕ]P) [i : ϕ]θ) ) )
Static Attitudes as Fixed Points
To each (dynamic) doxastic attitude given by a
transformer τ , we can associate a static attitude τ .
We write τ i ϕ, and say agent i has the attitude τ towards
ϕ, if i’s plausibility structure is a fixed point of the
doxastic transformation τ ϕ. Formally,
s |=S τ i ϕ iff τ ϕ(S, ≤i , s) ≃ (S, ≤i , s)
(where ≃ is the bisimilarity relation).
Examples
The fixed point of update is “irrevocable knowledge” K:
!j ϕ ⇔ Kj ϕ
and similarly
!−j ϕ ⇔ Kj ¬ϕ.
The fixed point of radical upgrade is strong belief Sb:
⇑j ϕ ⇔ Sbj ϕ
and similarly
⇑−j ϕ ⇔ Sbj ¬ϕ.
The fixed point of conservative upgrade is belief B:
↑j ϕ ⇔ Bj ϕ
and similarly
↑−j ϕ ⇔ Bj ¬ϕ.
The fixed point of identity is tautological:
idj ϕ ⇔ true.
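Continuing the sketch, the fixed-point reading can be tested directly: on these simple finite orders, equality of the output with the input is a workable stand-in for bisimilarity, and (for non-empty P on a connected order) radical upgrade is redundant exactly when P is strongly believed.

def is_fixed_point(transformer, S, leq, P):
    # static attitude: applying the transformer changes nothing
    S2, leq2 = transformer(S, leq, P)
    return S2 == S and leq2 == leq

print(is_fixed_point(radical_upgrade, Sa, leq_a2, D))       # True:  Albert strongly believes D
print(is_fixed_point(conservative_upgrade, Sa, leq_a2, G))  # False: he does not believe G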
Explanation
The importance of fixed points τ is that they
capture the attitudes (towards the sentence ϕ) that
are induced in an agent after receiving
information (ϕ) from a source towards which she
has the attitude τ :
indeed, τ j is the strongest attitude such that
τji ⇒ [i : p]τ j p ,
for all non-doxastic sentences p.
More generally, after an agent with attitude τ towards a
source upgrades with information ϕ from this source, she
comes to have attitude τ towards the fact that ϕ WAS
the case (before the upgrade):
τji ⇒ [i : ϕ]τ j (BEFORE ϕ),
where BEFORE is a one-step past tense operator.
Explanation continued: examples
Conservative upgrades induce simple beliefs:
after ↑ ϕ, the agent only comes to believe that ϕ (was
the case).
Radical upgrades induce strong beliefs:
after ⇑ ϕ, the agent comes to strongly believe that ϕ
(was the case).
Updates induce (irrevocable) knowledge:
after !ϕ, the agent comes to know that ϕ (was the case).
Fixed Points and Redundancy
A fixed point expresses the “redundancy”, or
un-informativity, of a doxastic transformation: in this
sense, we can say that the fact that a fixed-point-attitude
is induced in the listener by an announcement captures
the fact that repeating the announcement would be
redundant.
This is literally true for non-epistemic sentences:
[i : p][i : p]θ ⇔ [i : p]θ ,
but can be generalized to arbitrary sentences in the form:
[i : ϕ][i : BEFORE ϕ]θ ⇔ [i : ϕ]θ.
Honesty
We say that a communication act i : ϕ is honest (with
respect) to a listener j, and write
Honest(i : ϕ → j) ,
if the speaker i has the same attitude towards ϕ (before
the announcement) as the one (he believes to be) induced
in the listener j by his announcement of ϕ.
By the above results, it seems that, if τji holds then
honesty should be given by τ i ϕ.
This is indeed the case ONLY if we adopt the simplifying
assumption that all doxastic attitudes τji are common
knowledge.
But, in general, honesty depends only on (having) the
attitude that the speaker believes to induce in the
listener:
Honest(i : ϕ → j) = ⋀τ (Bi τji ⇒ τ i ϕ).
General Honesty
A (public) speech act i : ϕ is honest iff it is honest (with
respect) to all the listeners:
Honest(i : ϕ) := ⋀j≠i Honest(i : ϕ → j).
Example: honesty of an infallible speaker
Assume that it is common knowledge that a speaker i
is infallible (i.e. that !ji holds for all j ≠ i).
Then an announcement i : ϕ is honest iff the speaker
knows it to be true; i.e. iff Ki ϕ holds.
The same condition ensures that the announcement
i : Ki ϕ is honest.
Example: honesty of a barely trusted speaker
Assume common knowledge that a speaker i is only
barely trusted (i.e. that ↑ji holds for all j ≠ i).
Then an announcement i : ϕ is honest iff the speaker
believes it to be true; i.e. iff Bi ϕ holds.
The same condition ensures that the announcement
i : Bi ϕ is honest.
Example: honesty of a strongly trusted speaker
Assume common knowledge that a speaker i is
strongly trusted, but not infallible (i.e. that ⇑ji
holds for all j ≠ i).
Then an announcement i : ϕ is honest iff the speaker
strongly believes it to be true; i.e. iff Sbi ϕ holds.
Announcing what you safely believe
But what can our strongly trusted speaker
honestly announce, when he only believes (but
not necessarily strongly believes) ϕ?
Well, he might announce that he believes ϕ: i : Bi ϕ is
certainly honest whenever Bi ϕ holds.
But, in fact, he can honestly claim much more! He can
claim he “defeasibly knows” (= safely believes) ϕ:
the act i : 2i ϕ is honest (for a strongly trusted
speaker) if Bi ϕ holds.
Honest Exaggerations
Announcing 2i ϕ might sound like a wild exaggeration,
but it is a sincere announcement: indeed, by the
identity
Bi 2i ϕ = Bi ϕ
we have that, whenever i believes ϕ, he also believes 2i ϕ.
But, moreover, it is an honest announcement, since
belief implies strong belief in safe belief ; in fact,
the two are equivalent:
Bi ϕ = Sbi (2i ϕ) .
In fact, the above types of “defeasible knowledge”
announcements are the only honest announcements
(up to logical equivalence) by a strongly trusted agent:
For a speaker i who is commonly known to be
strongly trusted, an announcement i : ϕ is honest
iff ϕ is equivalent to some sentence of the form
2i ψ, such that Bi ψ holds.
Dishonest sincerity
Sincerity does NOT imply honesty.
Example: Suppose an agent i who is (commonly known to
be) strongly trusted believes P , but does not strongly
believe it.
Then the announcement i : P is sincere but dishonest:
indeed, this announcement induces in the listeners a
strong belief in P , while the speaker did not have such a
strong belief!
Un-sincere honesty
In general, honesty does not imply sincerity.
Exercise for audience: Think of a counterexample!
Nevertheless, when (it is common knowledge that)
the listeners have a “positive” attitude towards the
speaker (one that implies belief ), then honesty
does imply sincerity!
Persuasiveness
We say that an announcement i : ϕ is persuasive to
a listener j with respect to an issue θ,
and we write P ersuasive(i : ϕ → j; θ),
if
the effect of the announcement is that the listener is
“converted” to the speaker’s attitude towards θ.
Formally, for non-doxastic sentences:
Persuasive(i : ϕ → j; p) := ⋀τ (τ i ϕ ⇒ [i : ϕ]τ j p).
For doxastic sentences, this needs to be modified:
Persuasive(i : ϕ → j; θ) := ⋀τ (τ i ϕ ⇒ [i : ϕ]τ j (BEFORE θ)).
How to be Honest, (Sincere) and Persuasive
Suppose a strongly trusted agent i wants to be honest,
but also persuasive with respect to some issue θ that
he believes in (although he does not necessarily
have a strong belief in it). So we assume ⇑ji for all j ≠ i, and
Bi θ (but NOT necessarily Sbi θ).
QUESTION: What can i announce honestly (and
thus sincerely), in order to be persuasive?
This is a very important question: how can you “convert”
others to your (weak) beliefs, while still maintaining your
honesty and sincerity?
What NOT to say
i : θ would be sincere and persuasive, but it’s dishonest
(unless Sbi θ holds)!
i : Ki θ is equally dishonest.
i : Bi θ is honest, but not persuasive: it won’t change j’s
beliefs about θ (although it will change her beliefs about
i’s beliefs).
RECALL: Being informed of another’s beliefs is
not enough to convince you of their truth.
ANSWER: honest exaggerations are persuasive!
It turns out that the only honest and persuasive
announcement in this situation is to make an “honest
exaggeration”:
i : 2i θ.
In other words:
to honestly convert others to your belief in θ, you
need to say that you “defeasibly know” θ (when in
fact you only believe θ).
Conclusion
THE LESSON (known by most successful politicians):
If you want to convert people to your beliefs,
don’t be too scrupulous with the truth:
Announce that you “know” things even if you don’t
know for sure that you know them!
History will deem you “honest”, as long as you...
believed it yourself !
Simple belief, a loud voice and strong guts is all
you need to be persuasive!
“We now know that Saddam Hussein has acquired
weapons of mass destruction...” (G.W. Bush,
2002)
Oh, well..., and you also need suckers to strongly
trust you...