
NASSLLI 2010
1
Multi-Agent Belief Dynamics
Alexandru Baltag, Oxford University
website:
http://alexandru.tiddlyspot.com
Sonja Smets, University of Groningen
website:
http://sonja.tiddlyspot.com
NASSLLI 2010
2
Lecture 5
1. Merging Information: Connections with Social Choice Theory
2. Event Models and the Action-Priority Product Update
3. Formalizing Dynamic Doxastic Attitudes and Dialogue
Presuppositions: Sincerity, Honesty, Persuasiveness
4. Dynamic Merging of Beliefs
NASSLLI 2010
3
5.1. Merging Information: Connections with Preference Merge
Suppose that each agent in a group has some private (“hard” and
“soft”) information.
What is the information possessed by the group as a whole?
In other words: if the agents came together and shared
their information, what would the group “know” (in the sense
of K, 2, Sb etc)?
NASSLLI 2010
4
Preference Merge and Information Merge
In Social Choice Theory, the main issue is how to merge the agents’
individual preferences.
A merge operation for a group G is a function ⊕, taking the
individual preference relations {Ri }i∈G into a “group preference”
relation ⊕i∈G Ri (on the same state space).
NASSLLI 2010
5
Merge Operations
• So the problem is to find a “natural” merge operation (subject
to various fairness conditions), for merging the agents’ preference
relations.
• Depending on the conditions, one can obtain either
- an Impossibility Theorem (Arrow 1950) or
- a classification of the possible types of merge operations
(Andreka, Ryan & Schobbens 2002).
NASSLLI 2010
6
Belief Merge and Information Merge
• If we want to merge the agents’ beliefs Bi , to get a notion of
“group belief”, then it is enough to merge the belief relations →i .
• To merge the agents’ knowledge Ki , it is enough to merge the
epistemic indistinguishability relations ∼i .
• To merge the agents’ soft information (all their “strong beliefs”
Sbi , or equivalently all their “conditional beliefs” Bi^P Q), we have to
merge the plausibility relations ≤i .
NASSLLI 2010
7
Merge by Intersection
The so-called parallel merge (or “merge by intersection”)
simply takes the merged relation to be
⋂i∈G Ri .
In the case of two agents, it takes:
Ra ⊕ Rb := Ra ∩ Rb .
This could be thought of as a “democratic” form of preference merge.
NASSLLI 2010
8
Distributed Knowledge is Parallel Merge
• This form of merge is well suited for “knowledge” K: since
this type of knowledge is absolutely certain, there is no danger of
inconsistency.
• The agents pool their information in a completely symmetric
manner, accepting the other’s bits without reservations.
Parallel merge of the agents’ irrevocable knowledge
gives us the standard concept of “distributed knowledge” DK of a
group: DK is the Kripke modality for the intersection of all the
agents’ indistinguishability relations:
DKG P = [ ⋂i∈G ∼i ] P.
NASSLLI 2010
9
“Dynamic” intuition: pooling information
Another characterization is:
s |= DKa,b P
iff
∃Pa , Pb such that s |= Ka Pa ∧ Kb Pb and Pa ∩ Pb ⊆ P.
The intuition underlying this concept is dynamic: distributed
knowledge captures the potential knowledge obtainable by the group
via inter-agent communication: what the agents could know if they
shared all their information.
But to make this intuition precise, we would need to be able to model
“sharing information” dynamically: this is exactly what
dynamic-epistemic logic will allow us to do!
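To make the intersection construction concrete, here is a minimal Python sketch (an illustration, not from the slides); the encoding of relations as sets of pairs and all names are our own assumptions.

# Relations are sets of ordered pairs of states; a proposition P is a set of states.

def parallel_merge(relations):
    # Merge by intersection: the group relates s to t only if every agent does.
    return set.intersection(*relations)

def box(relation, P, states):
    # Kripke modality [R]P: the states all of whose R-successors lie in P.
    return {s for s in states
            if all(t in P for (u, t) in relation if u == s)}

# Toy example: agent a cannot tell 1 and 2 apart, agent b cannot tell 2 and 3 apart.
states = {1, 2, 3}
sim_a = {(s, t) for s in states for t in states if s == t or {s, t} == {1, 2}}
sim_b = {(s, t) for s in states for t in states if s == t or {s, t} == {2, 3}}
P = {1, 2}

print(box(parallel_merge([sim_a, sim_b]), P, states))
# -> {1, 2}: the group "knows" P at state 2, even though agent b alone does not.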
NASSLLI 2010
10
Lexicographic Merge
In lexicographic merge, a “priority order” is given on agents,
to model the group’s hierarchy. The “lexicographic
merge” Ra/b gives priority to agent a over b:
The strict preference of a is adopted by the group; if a is
indifferent, then b’s preference (or lack of preference) is
adopted; finally, a-incomparability gives group
incomparability.
Formally:
Ra/b := Ra> ∪ (Ra= ∩ Rb ) = Ra> ∪ (Ra ∩ Rb ) = Ra ∩ (Ra> ∪ Rb ).
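A small Python sketch of this formula (an illustration; the encoding of preorders as sets of pairs (x, y), read as “x is at least as preferred as y”, is our own assumption):

def strict(R):
    # R> : the strict part of R
    return {(x, y) for (x, y) in R if (y, x) not in R}

def indifferent(R):
    # R= : the indifference part of R
    return {(x, y) for (x, y) in R if (y, x) in R}

def lexicographic_merge(Ra, Rb):
    # Ra/b = Ra> ∪ (Ra= ∩ Rb): a's strict preferences win outright,
    # and b decides only where a is indifferent.
    return strict(Ra) | (indifferent(Ra) & Rb)

# Example: a ranks 1 above 2 and 3, and is indifferent between 2 and 3;
# b strictly ranks 3 above 2.
Ra = {(1, 1), (2, 2), (3, 3), (1, 2), (1, 3), (2, 3), (3, 2)}
Rb = {(1, 1), (2, 2), (3, 3), (3, 2)}
print(lexicographic_merge(Ra, Rb))  # a's strict preferences survive; b breaks the 2-3 tie.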
NASSLLI 2010
11
Lexicographic merge of soft information
This form of merge is particularly suited for “soft information”,
given by either indefeasible knowledge 2 or belief B, in the absence
of any hard information: since soft information is NOT fully
reliable (because of lack of negative introspection for 2, and of
potential falsehood for B), some “screening” must be applied (and so
some hierarchy must be enforced) to ensure consistency of merge.
NASSLLI 2010
12
s |= 2a/b P
iff
∃Pa , Pb s. t. s |= 2a Pa ∧ 2b Pb ∧ 2a^weak Pb , and Pa ∩ Pb ⊆ P.
In other words, all a’s “indefeasible knowledge” is unconditionally
accepted by the group, while b’s indefeasible knowledge is “screened”
by a using her “weakly indefeasible knowledge”.
NASSLLI 2010
13
Relative Priority Merge
Note that, in lexicographic merge, the first agent’s priority is
“absolute”.
But in the presence of hard information, the lexicographic merge of soft information must be modified:
by first pooling together all the hard information and
then using it to restrict the lexicographic merge of soft
information.
NASSLLI 2010
14
This leads us to a “more democratic” combination of Merge by
Intersection and Lexicographic Merge, called “(relative)
priority merge” Ra⊗b :
Ra⊗b := (Ra> ∩ Rb∼ ) ∪ (Ra= ∩ Rb ) = Ra ∩ Rb∼ ∩ (Ra> ∪ Rb ).
NASSLLI 2010
15
This means that both agents have a “veto” with respect to
group incomparability:
The group can only compare options that both agents can compare;
and whenever the group can compare two options,
everything goes on as in the lexicographic merge: agent a’s
strong preferences are adopted, while b’s preferences are adopted only
when a is indifferent.
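The same kind of sketch works for relative priority merge (again an illustration under the assumed pair encoding used above); the extra intersection with Rb’s comparability relation implements b’s “veto”:

def strict(R):
    return {(x, y) for (x, y) in R if (y, x) not in R}

def indifferent(R):
    return {(x, y) for (x, y) in R if (y, x) in R}

def comparable(R):
    # R~ = R ∪ R⁻¹ : the comparability relation of R
    return R | {(y, x) for (x, y) in R}

def priority_merge(Ra, Rb):
    # Ra⊗b = (Ra> ∩ Rb~) ∪ (Ra= ∩ Rb): as in lexicographic merge,
    # but a's strict preferences are kept only where b can compare the options.
    return (strict(Ra) & comparable(Rb)) | (indifferent(Ra) & Rb)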
NASSLLI 2010
16
Priority Merge of Soft Information
The corresponding notion of “indefeasible knowledge” of the group is
obtained as in the lexicographic merge, except that both agents’
“strong knowledge” is unconditionally accepted. Formally:
s |= 2a⊗b P
iff
∃Pa , Pb , Pb′ s. t. s |= 2a Pa ∧ Kb Pb ∧ 2b Pb′ ∧ 2a^weak Pb′ ,
and Pa ∩ Pb ∩ Pb′ ⊆ P.
NASSLLI 2010
17
In other words, relative-priority group “knowledge” is obtained by
pooling together the following: agent a’s “indefeasible knowledge”;
agent b’s “irrevocable knowledge”; and the result of screening agent
b’s “indefeasible knowledge” using agent a’s “weakly indefeasible
knowledge”.
NASSLLI 2010
18
Example: merging Mary’s beliefs with Albert’s
If we give priority to Mary (the more sober of the two!), the
relative priority merge Rm⊗a of Mary’s and Albert’s original
plausibility orders
(Diagram: Mary’s and Albert’s plausibility orders over the four worlds
(¬D,¬G), (¬D,G), (D,G), (D,¬G), with arrows labelled m and a)
gives us:
(Diagram: the resulting merged plausibility order Rm⊗a over the same
four worlds, ending in (D,G) → (D,¬G).)
NASSLLI 2010
19
5.2. Information Dynamics as Information Merge
We can turn the tables around by thinking of information
DYNAMICS (belief revision) as a “kind” of information (or
belief ) MERGE.
The idea is that we can think of agent a’s new observations (her
plausibility order on epistemic/doxastic actions) as being the
beliefs/information of “another agent” ã (an “alter ego” of the first
agent).
The way the new observations/actions change a’s beliefs can then be
understood as a merging of a’s beliefs with ã’s beliefs.
This is the idea behind the notion of “event models”.
NASSLLI 2010
20
Models for ‘Events’
Until now, our models captured only epistemic situations, i.e. they only
contain static information: they all are state models. We can thus
represent the result of each of our Scenarios, but not what is actually
going on. Our scenarios involve various types of changes that may affect
agents’ beliefs or state of knowledge: a public announcement, a ’legal’
(non-deceitful) act of private learning, ’illegal’ (unsuspected) private
learning etc.
We now want to use plausibility models to represent such types of
epistemic events, in a way that is similar to the representations we have
for epistemic states.
NASSLLI 2010
21
Event Plausibility Models
A multi-agent event plausibility model Σ = (Σ, ∼a , ≤a , pre, σ∗ )
is just like a multi-agent state plausibility model, except that its
elements σ ∈ Σ are now called events (or actions), and instead
of the valuation we have a precondition map pre, associating a
sentence preσ to each action σ: preσ is true in a world iff σ can
be performed.
Now, the preorders σ ≤a σ′ capture the agents’ plausibility
relations on events: a considers it at least as plausible that
σ′ is happening as that σ is happening. As before, ∼a are
definable in terms of ≤a , so we can skip them when describing
the event model.
NASSLLI 2010
22
EXAMPLE: joint update
The event model for a joint radical update !ϕ is essentially the same as
in standard DEL (the event model for a “public announcement”):
∗ϕ
(As usual for plausibility models, we do NOT draw the loops, but they
are there.)
NASSLLI 2010
23
EXAMPLE: joint radical upgrade
The event model for a joint truthful upgrade ⇑ ϕ is:
¬ϕ  →_{a,b,c,···}  ∗ϕ
NASSLLI 2010
24
EXAMPLE: publicly-announced private upgrade
The event model for a publicly-announced private (radical) upgrade
with a true sentence ϕ is:
¬ϕ  →_{a}  ∗ϕ      (and ¬ϕ ⇄_{b} ∗ϕ for every listener b ≠ a)
NASSLLI 2010
25
Example: Secret (Fully Private) Learning
Let us consider again the “Concealed Coin” situation from the first
Lecture: a coin was on the table, covered, so it was common knowledge
that nobody saw its upper face.
But now, when nobody looks, Charles (agent c) secretly takes a
peek at the coin and sees it’s Heads up. Alice (a) and Bob (b)
don’t suspect anything: they believe that nothing is really happening.
A possible representation of this action as an event plausibility model is
∗H  ⇄_{a,b}  T           (∗H: Charles sees Heads; T: Charles sees Tails)
∗H →_{a,b} true,   T →_{a,b} true      (the “true” event: nothing happens)
(together with a,b,c-loops on all three events)
NASSLLI 2010
26
The real event (marked with a star) is the one in which Charles
secretly takes a peek and sees the coin Heads up. What Alice and Bob
believe is happening is “nothing” (their most plausible event has
precondition “true” and is completely public): i.e. that nobody learns
anything new.
However, if Alice and Bob were later told that this was NOT the case,
and that in fact Charles took a peek, they would still not know
whether he saw the coin Heads up or Tails up. So they would find it
equally plausible that Charles saw Heads (the event in the upper left
corner, marked with a star) and that he saw Tails (the event in the
upper right corner).
NASSLLI 2010
27
Looking for a General Update Rule
We want to compose any initial state plausibility model
(S, ≤a , ‖·‖, s∗ )a∈A with any event plausibility model (Σ, ≤a , pre, σ∗ )a∈A ,
in order to compute the new state plausibility model after the
event:
i.e. find an operation ⊗, such that
(S, ≤a , ‖·‖, s∗ )a∈A ⊗ (Σ, ≤a , pre, σ∗ )a∈A
correctly represents the agents’ (hard and soft) information after
the event.
NASSLLI 2010
28
A Product Construction
We think of the basic events of our event model as deterministic
actions: for a given input-state s and a given event σ, there is at most
one output-state. This means that we can represent the new
(output) states as pairs (s, σ) of an input-state s and an event σ.
Hence, the new state space after the event will be represented as a
subset of the Cartesian Product S × Σ.
Our events are purely epistemic/doxastic (i.e. learning or
communication actions), so they do not change the ontic facts of
the world:
(s, σ) |= p iff s |= p.
Finally, it’s obvious that the real world after the event is (s∗ , σ∗ ).
But how should we define the new plausibility ≤a on
output-pairs (s, σ) ?
NASSLLI 2010
29
Use Merge!
IDEA: Merge the information in the two models!
Since the agents may have both “hard” information (K) and “soft”
information (2 and B), the relative priority merge seems
appropriate, giving priority to the new information (in the spirit
of the AGM paradigm), i.e. to the EVENT plausibility order.
NASSLLI 2010
30
First Case
Well, in case the event model includes a strict plausibility order
between two events σ1 , σ2 with preconditions ϕ1 , ϕ2 :
σ1 : ϕ1   →_{a}   σ2 : ϕ2
then after the event: all the ϕ2 -worlds (s2 , σ2 ) should become strictly
more plausible than all the ϕ1 -worlds (s1 , σ1 ).
NASSLLI 2010
31
But, since there are also worlds that the agent knows to be
impossible, the above rule should NOT apply to those:
if the agent can already distinguish between s1 and s2 , then he knows
which of the two is the case, so he doesn’t have to compare the outputs
(s1 , σ1 ) and (s2 , σ2 ).
So we get the following conditions:
s1 ∼a s2 and σ1 <a σ2 imply (s1 , σ1 ) <a (s2 , σ2 ),
and also
s1 ≁a s2 implies (s1 , σ1 ) ≁a (s2 , σ2 ).
NASSLLI 2010
32
Second Case
What if the event model includes two equally plausible events?
σ1 : ϕ1   ⇄_{a}   σ2 : ϕ2
We interpret this as lack of information: when the (unknown)
event happens, it doesn’t bring any information indicating which
is more plausible to be currently happening: σ1 or σ2 . In this case
it is natural to expect the agents to keep unchanged their original
beliefs, or knowledge, about which of the two is more plausible.
NASSLLI 2010
Let us denote by ≅ the equi-plausibility relation on events,
given by:
σ ≅a σ′ iff σ ≤a σ′ ≤a σ.
Then the last case gives us another condition:
s1 ≤a s2 and σ1 ≅a σ2 imply (s1 , σ1 ) ≤a (s2 , σ2 ).
33
NASSLLI 2010
34
Third Case
Finally, what if the two events are epistemically distinguishable:
σ ≁a σ′ ?
Then, when one of them happens, the agent knows it is not the other
one.
By perfect recall, he can then distinguish the outputs of the events, and
hence the two outputs are not comparable. So
σ ≁a σ′ implies (s1 , σ1 ) ≰a (s2 , σ2 ).
NASSLLI 2010
35
The Action-Priority Rule
Putting all these together, we get the following update rule, called
the Action-Priority Rule:
(s, σ) ≤a (s′ , σ′ )  iff:  either σ <a σ′ and s ∼a s′ , or σ ≅a σ′ and s ≤a s′ .
This essentially says that we order the product space using the
anti-lexicographic preorder relation on comparable pairs (s, σ).
NASSLLI 2010
36
The Action-Priority Update
The set of states of the new model S ⊗ Σ is:
S ⊗ Σ := {(s, σ) : s |=S preσ }
The valuation is given by the original valuation: (s, σ) |= p iff
s |= p.
The real world is (s∗ , σ∗ ).
The plausibility relation is given by the Action-Priority Rule.
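A compact Python sketch of the whole construction (an illustration under assumed encodings, not the official definition): a state model is a triple (states, leq, val) and an event model a triple (events, leq, pre), where each leq is a set of pairs (x, y) meaning “y is at least as plausible as x”, and indistinguishability is taken to be comparability, as in the locally connected models of these lectures.

def equi(leq, x, y):                 # equi-plausibility ≅
    return (x, y) in leq and (y, x) in leq

def strictly(leq, x, y):             # strict plausibility <
    return (x, y) in leq and (y, x) not in leq

def indist(leq, x, y):               # indistinguishability ∼ (= comparability here)
    return (x, y) in leq or (y, x) in leq

def action_priority_update(states, s_leq, val, events, e_leq, pre):
    # New states: the pairs (s, σ) such that σ's precondition holds at s.
    prod = [(s, e) for s in states for e in events if pre[e](s, val)]
    new_leq = set()
    for (s, e) in prod:
        for (s2, e2) in prod:
            # Action-Priority Rule:
            # (s,σ) ≤ (s',σ') iff σ < σ' and s ∼ s', or σ ≅ σ' and s ≤ s'.
            if (strictly(e_leq, e, e2) and indist(s_leq, s, s2)) or \
               (equi(e_leq, e, e2) and (s, s2) in s_leq):
                new_leq.add(((s, e), (s2, e2)))
    new_val = {(s, e): val[s] for (s, e) in prod}   # purely epistemic: no ontic change
    return prod, new_leq, new_val

# Tiny usage, echoing the joint radical upgrade: one agent, two states, p true only at 1.
events = ["not_p", "p"]
e_leq = {("not_p", "not_p"), ("p", "p"), ("not_p", "p")}
pre = {"p": lambda s, v: "p" in v[s], "not_p": lambda s, v: "p" not in v[s]}
states, s_leq, val = [0, 1], {(0, 0), (1, 1), (1, 0)}, {0: set(), 1: {"p"}}
new_states, new_leq, _ = action_priority_update(states, s_leq, val, events, e_leq, pre)
print(new_states)   # [(0, 'not_p'), (1, 'p')]
print(new_leq)      # the p-world is now strictly more plausible, as ⇑p requires

Restricting attention to comparability in new_leq also gives the special case on the next slide: (s, σ) ∼ (s′, σ′) iff s ∼ s′ and σ ∼ σ′.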
NASSLLI 2010
37
Interpretation
The anti-lexicographic preorder gives “priority” to the action
plausibility relation. This is not an arbitrary choice: it is in the
spirit of AGM revision. The action plausibility relation captures
the agent’s current beliefs about the current event: what
the agents really believe is going on at the moment.
In contrast, the input-state plausibility relations only capture
past beliefs.
The past beliefs need to be revised by
the current beliefs, and NOT the other way around! The
doxastic action is the one that “changes” the initial doxastic state,
and NOT vice-versa.
NASSLLI 2010
38
Special Case: Product Update as Parallel Merge
In particular, our rule has the consequence that the agent’s pieces of
“hard” information (K) about the initial state and about the on-going
event are merged by parallel merge:
(s, σ) ∼a (s′ , σ′ ) iff s ∼a s′ and σ ∼a σ′ .
This special case is the “Product Update” that you encountered in
Hans van Ditmarsch’s course on Dynamic Epistemic Logic
(and in the DEL book). It originates in a 1998 (TARK conference)
short paper by Baltag, Moss and Solecki (known as the “BMS paper”).
NASSLLI 2010
39
EXAMPLE: joint radical upgrade again
EXERCISE: Check that, if we take the Action-Priority Product of any
state model S with the event model Σ⇑ϕ for joint upgrade ⇑ ϕ, given
above by
¬ϕ  →_{a,b,c,···}  ϕ
the resulting new state model S ⊗ Σ⇑ϕ is indeed (isomorphic to) the
result of performing the joint radical upgrade ⇑ ϕ on S.
NASSLLI 2010
40
Example: Secret Learning, again
In the coin scenario in which Charles is secretly taking a peek, the
Action-Priority update of the original state (plausibility) model
∗H  ⇄_{a,b,c}  T
with the event plausibility model from before (skipping the loops):
∗H ⇄_{a,b} T,    ∗H →_{a,b} true,    T →_{a,b} true
gives us:
NASSLLI 2010
41
(Diagram: four worlds. Top row: ∗H ⇄_{a,b} T — Charles took a peek and
saw Heads / Tails; bottom row: H ⇄_{a,b,c} T — nothing happened; with
a,b-arrows from each top-row world to each bottom-row world.)
So e.g. Alice still believes that Charles doesn’t know the face. However,
if later she’s given the information that he took a peek (without being
told what he saw), she’d know that he knows the face; but as for
herself, she’d still consider both faces equally plausible.
NASSLLI 2010
42
A Secret Communication Contradicting Prior Beliefs
What if now Charles secretly tells Alice that he knows the face of
the coin is Heads up?
This is a private knowledge announcement !a (Kc H):
(Diagram: three events — ∗Kc H, Kc T and a “true” event in which nothing
happens. For b, ∗Kc H ⇄_{b} Kc T, and there are b-arrows from both of
them to the “true” event; the remaining loops are not drawn.)
Alice previously didn’t even suspect that Charles took a peek. So this
private knowledge announcement is inconsistent with her prior beliefs.
Nevertheless, after applying the Action Priority Product, we see that
Alice, instead of going crazy, simply deduces that Charles somehow
learned the face of the coin:
NASSLLI 2010
43
(Diagram: six worlds in three rows of two. Top row: ∗H ⇄_{b} T — Charles
peeked and privately told Alice; middle row: H ⇄_{a,b} T — Charles
peeked but made no announcement; bottom row: H ⇄_{a,b,c} T — nothing
happened; with b-arrows from the top row down to the rows below, and
a,b-arrows from the middle row down to the bottom row.)
As for Bob, he’s still clueless.
NASSLLI 2010
44
Solving the standard “Muddy Children”
Three children; children 1 and 2 are dirty. Originally, assume each
child considers it equally plausible that (s)he’s dirty and that (s)he’s
clean:
(Diagram: the cube of the eight worlds ddd, ddc (the real world), dcd,
cdd, dcc, cdc, ccd, ccc, with two-way arrows labelled i connecting any
two worlds that differ only in child i’s state.)
NASSLLI 2010
45
Father makes the announcement: “At least one of you is dirty”. If he’s
an infallible source (classical Muddy children), then this is an update
!(d1 ∨ d2 ∨ d3 ), producing:
(Diagram: the same cube with the world ccc deleted.)
dcc
If the children answer “I don’t know I am dirty”, and they are
infallible, then the update !(⋀i ¬Ki di ) produces:
NASSLLI 2010
46
(Diagram: the four remaining worlds ddd, ddc, dcd, cdd, with ddd
connected to cdd by a 1-arrow, to dcd by a 2-arrow, and to ddc by a
3-arrow.)
Now, in the real world (d, d, c), children 1 and 2 know they are dirty.
NASSLLI 2010
47
Soft version of the puzzle
What happens if the sources are not infallible? Father’s announcement
becomes either a radical upgrade ⇑ (d1 ∨ d2 ∨ d3 ) or a conservative one
↑ (d1 ∨ d2 ∨ d3 ), producing:
(Diagram: the full eight-world cube again, but with the plausibility
arrows now directed so that ccc is the least plausible world in each
child’s information cell.)
NASSLLI 2010
48
Do you believe you’re dirty?
What if next the father only asks them whether they believe they are
dirty? And what if they are not infallible agents either (i.e. they
trust each other, but not completely), so that their answers are also
soft announcements?
After a (radical or conservative) upgrade with the sentence ⋀i ¬Bi di ,
we obtain:
NASSLLI 2010
49
(Diagram: the resulting re-ordered plausibility model on the eight
worlds ddd, ddc, dcd, cdd, dcc, cdc, ccd, ccc.)
Now (in the real world ddc), children 1 and 2 believe they are dirty:
so they will answer “yes, I believe I’m dirty”.
NASSLLI 2010
50
Cheating Muddy Children
Let’s get back to the original puzzle: assume again that it is common
knowledge that nobody lies, so we have infallible announcements
(updates). After Father’s announcement, we got
(Diagram: the seven-world model from before, i.e. the cube with ccc
deleted.)
NASSLLI 2010
51
Secret Communication
Suppose now the dirty children cheat, telling each other that they
are dirty. This is a secret communication between 1 and 2, in
which 3 doesn’t suspect anything: he thinks nothing happened.
So it has the event model:
d1 ∧ d2   →_{3}   true
NASSLLI 2010
52
EXERCISE
Take the Action-Priority Update of the previous model with this
event model.
Then model the next announcement (in which the two dirty children say
“I know I’m dirty”, while the third says “I don’t know”) as a joint
update !(K1 d1 ∧ K2 d2 ∧ ¬K3 d3 ).
Note that, after this, child 3 does NOT go crazy: unlike in standard
DEL (with Product Update), he simply realizes that the others cheated!
NASSLLI 2010
53
The Laws of Dynamic Belief Revision
A complete proof system can be obtained by adding to the usual
reduction laws the following:
[α]Ka P ↔ ( preα → ⋀_{β ∼a α} Ka [β]P )

[α]2a P ↔ ( preα → ⋀_{β <a α} Ka [β]P ∧ ⋀_{γ ≅a α} 2a [γ]P )
These are the “fundamental equations” of Interactive Belief
Revision, giving us the dynamics of “hard” and “soft” information.
NASSLLI 2010
54
The first law embodies the essence of Product Update Rule,
governing the most general dynamics of “hard information” K:
[α]Ka P ↔ ( preα → ⋀_{β ∼a α} Ka [β]P )
A proposition P will be irrevocably known after an epistemic event α
iff, whenever the event can take place, it is irrevocably known (before
the event) that P will be true after all events that are epistemically
indistinguishable from α.
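As a quick sanity check (ours, not on the slides): when α is a public announcement !ϕ, the event model has a single event with precondition ϕ, so the only β with β ∼a α is α itself, and the law specializes to the familiar reduction axiom of public announcement logic:
[!ϕ]Ka P ↔ ( ϕ → Ka [!ϕ]P ).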
NASSLLI 2010
55
The second law embodies the essence of the Action-Priority Rule,
governing the most general (qualitative) dynamics of “soft
information” 2:
[α]2a P ↔ ( preα → ⋀_{β <a α} Ka [β]P ∧ ⋀_{γ ≅a α} 2a [γ]P )
A proposition ϕ will be defeasibly known after a doxastic event iff,
whenever the event can take place, it is irrevocably known that ϕ will
become true after all more plausible events and at the same time it is
defeasibly known that ϕ will become true after all equi-plausible
events.
NASSLLI 2010
56
This notion can be related to the causal conception of knowledge.
Doxastic events α can be said to embody the “dynamic causes” of
belief change, while the corresponding dynamic modalities [α]Aϕ
capture a weak sense of “causality”: the “weakest (static) cause” of
coming to “know” (in the specific sense of the attitude A) something,
given the dynamic cause α.
Using this weak notion of causality, the causal-informational
conception of knowledge (in e.g. the sense of Dretske) can be given a
formal expression in DEL:
So “knowledge”, in this sense, is a belief that is created/induced by
specifically “epistemic”, “informative” actions.
“Knowledge” is simply the kind of belief that satisfies the
“right” Reduction Law.
NASSLLI 2010
57
5.3. Formalizing Dynamic-Doxastic Attitudes
RECALL: The way an agent revises her beliefs after receiving some new
information depends on the agent’s doxastic attitude towards the
source of information.
Hence, such a “dynamic” attitude captures the agent’s opinion
about the reliability of information coming from this
particular source.
In a “communication” setting, the sources of information are other
agents. We use τji to denote listener j’s attitude towards information
coming from speaker i.
NASSLLI 2010
58
Dynamic Attitudes are Doxastic Transformers
While formally τji will be atomic sentences, the semantics will
associate them with (single-agent) doxastic transformations
(=upgrades): maps τ taking any input-sentence ϕ and any
single-agent plausibility relation (S, ≤) (of some anonymous agent),
and returning a new plausibility relation (S, ≤′ ) on the same set of
states.
The meaning of τji will be that, whenever she receives information ϕ
from source i, agent j will revise her beliefs by applying the
transformation τ ϕ to her plausibility relation ≤j .
WE DO NOT ASSUME THAT ALL ATTITUDES ARE
“POSITIVE”.
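A minimal Python sketch of two such transformers (an illustration; the encoding of the order as a set of pairs (s, t), read as “t is at least as plausible as s”, and the finite-set types are our own assumptions):

def radical_upgrade(states, leq, P):
    # ⇑P: every P-state becomes more plausible than every non-P-state;
    # inside each of the two zones the old order is kept.
    keep = {(s, t) for (s, t) in leq if (s in P) == (t in P)}
    promote = {(s, t) for s in states - P for t in states & P}
    return keep | promote

def conservative_upgrade(states, leq, P):
    # ↑P: only the most plausible P-states move to the top;
    # everything else keeps its old position.
    best_P = {s for s in states & P
              if all((t, s) in leq for t in states & P)}
    return ({(s, t) for s in states for t in best_P}
            | {(s, t) for (s, t) in leq if s not in best_P})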
NASSLLI 2010
59
The Primacy of Dynamics
Our strategy will be to take the dynamic doxastic attitudes τji as
basic (and simply call them “doxastic attitudes”).
Moreover, we will use them to define (most of the) static attitudes
as fixed points of the transformations τ .
NASSLLI 2010
60
Simplifying Restrictions
In this part of the course, we assume that all communication is
public: speakers make only public announcements to the whole
group.
NASSLLI 2010
61
Stable attitudes
For this part of the course, we will also assume these mutual doxastic
attitudes are stable: they do not change during the conversation
(since τji are treated as “ontic facts”, that are not affected by
communication acts).
Later (but maybe not in this talk?), we may consider the issue of
revising agents’ doxastic attitudes.
NASSLLI 2010
62
Formalizing Doxastic Attitudes
Let us now add to our language two ingredients:
• dynamic modalities [i : ϕ] corresponding to public
announcements by agent i;
• new atomic sentences τji , for each pair of distinct agents i ≠ j,
where τ comes from a given finite set of doxastic attitude
symbols, including !, ⇑, ↑, id etc, encoding agent j’s
attitude towards agent i’s announcements.
NASSLLI 2010
63
Semantics
For semantics, we are given a multi-agent plausibility model, with the
valuation map extended to the new atomic sentences and satisfying a
number of semantic conditions (to follow);
and, in addition, we are also given, for each attitude symbol τ in the
syntax, some single-agent doxastic transformation, also denoted by τ .
NASSLLI 2010
64
Semantic Constraints
We put some natural semantic constraints (on the valuation of the
atomic sentences of the form τ , for any doxastic attitude type τ ).
The first says that, in any possible world, every agent has some
unique attitude towards every other agent:
∀s∃!τ such that s |= τji
The second is an introspection postulate: the agent knows her own
doxastic attitudes:
s ∼j t =⇒ (s |= τji ⇐⇒ t |= τji ).
NASSLLI 2010
65
A further simplifying assumption
In fact, sometimes (e.g. in Sonja’s talk), we will make an even more
restrictive assumption, namely all the agents’ doxastic attitudes
towards each other are common knowledge:
∀s, t : s |= τji ⇐⇒ t |= τji
NASSLLI 2010
66
The Mutual Trust Graph
In the presence of this last simplifying assumption, the
extra-structure required for the semantics (i.e. the extension of the
valuation plus the assignment of a transformation to each attitude
symbol) can be summarized as a graph having agents as nodes
and arcs labeled by doxastic transformations:
the fact that τji holds in (any state of) the model is captured by an
arc labeled τ from node j to node i.
This graph will be called the “mutual trust graph” of this group of
agents.
NASSLLI 2010
67
Example
(Diagram: a mutual trust graph on agents 1, 2, 3, with its six arcs
labelled by attitudes from among !, ⇑ and ↑.)
NASSLLI 2010
68
Counterexample
Not all graphs are consistent with the assumption that all doxastic
attitudes are common knowledge:
(Diagram: the same three-agent graph, except that one of the two arcs
labelled ! pointing at the same agent now carries ↑ instead)
is NOT a consistent graph.
NASSLLI 2010
69
Consistency Conditions
!ji ⇒ !ki ,      !−ji ⇒ !−ki ,      for all j, k ≠ i.
NASSLLI 2010
70
Semantics of Communication Acts
The semantics of i : ϕ will be given by the multi-agent doxastic
transformation that takes any plausibility model S = (S, ≤j , ‖·‖)j and
returns a new model (i : ϕ)(S) := (S′ , ≤′j , ‖·‖′ ), where:
the listeners’ new preorders ≤′j (for j ≠ i) are obtained by applying,
within each ∼j -information cell throughout which τji holds, the
transformation τ ϕ to the order ≤j restricted to that cell;
the speaker’s preorder ≤i is kept the same;
the new set S′ is the union of all the new information cells; and the
new valuation is ‖p‖′ := ‖p‖ ∩ S′ .
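A sketch of this clause in Python (an illustration; it presupposes the common-knowledge-of-attitudes simplification introduced earlier and, for brevity, applies each listener's transformer to her whole order rather than cell by cell):

def speech_act(speaker, P, states, orders, attitude):
    # orders: dict mapping each agent to her plausibility relation;
    # P: the set of states where the announced sentence is true;
    # attitude[j][speaker]: the transformer (states, leq, P) -> new leq
    # that j applies to announcements coming from this speaker.
    new_orders = {}
    for j, leq in orders.items():
        if j == speaker:
            new_orders[j] = leq                    # the speaker keeps her order
        else:
            new_orders[j] = attitude[j][speaker](states, leq, P)
    return new_orders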
NASSLLI 2010
71
Semantics of Dynamic Modalities
The dynamic modalities are defined as usual:
s |=S [i : ϕ]ψ iff s |=i:ϕ(S) ψ.
NASSLLI 2010
72
EXAMPLE 4
Consider the situation in the Winestein example:
(Diagram: the four worlds (¬D,¬G), (¬D,G), (D,¬G), (D,G), with Mary’s
(m) and Albert’s (a) plausibility arrows.)
a
m
Now Mary announces that Albert is drunk. Assume that Albert sees
Mary as a fallible, but highly trusted source ⇑. Given this, the
announcement is HONEST, since she strongly believes D. After the
announcement m : D, Albert starts to prefer every D-world to every
¬D-world. The upgraded model is
(Diagram: the same four worlds; Albert’s arrows now rank both D-worlds
above both ¬D-worlds, while Mary’s order is unchanged.)
NASSLLI 2010
73
Counterexample 5
Note that simply announcing that she believes D, or even that she
strongly believes D, WON’T DO: this will NOT be persuasive,
since it will not change Albert’s beliefs about the facts of the matter
(D or ¬D), although it may change his beliefs about her beliefs.
Being informed of another’s beliefs is not enough to
convince you of their truth.
Indeed, Mary’s beliefs are already common knowledge in the initial
model of Example 1: so an upgrade ⇑ (Bm D) would be superfluous!
NASSLLI 2010
74
Sincerity of a Communication Act
A communication act i : ϕ is sincere if the speaker believes her own
announcement, i.e. Bi ϕ holds.
NASSLLI 2010
75
Common Knowledge of Sincerity
In cooperative situations, it is sometimes natural to assume
common knowledge of sincerity.
This can be done by modifying the above semantics of i : ϕ, by
restricting the applicability of the above doxastic transformation
(modeling the act i : ϕ) to states in which Bi ϕ holds. We call this an
“inherently sincere” communication act.
Note that the Reduction Laws have then to be changed, by adding
the precondition Bi ϕ.
NASSLLI 2010
76
Sincere Lies and (Lack of ) Introspection
In general, sincerity does not imply truthfulness:
“I really am the man of your dreams” is a typical sincere lie!
But, when applied to introspective properties, sincerity DOES imply
truthfulness!
“I believe I am the man of your dreams” is sincere only if it’s true.
NASSLLI 2010
77
A Mixed Attitude
This observation suggests a natural mixed attitude !i ⇑: the listener
strongly trusts the speaker i to tell the truth, but moreover she
considers the speaker to be infallibly truthful when announcing
sentences (such as “I believe..” or “I know...”) that he can know by
introspection.
This attitude is natural when common knowledge of sincerity is
assumed: when applied to introspective properties, sincerity is the
same as infallible truthfulness, hence ! is appropriate; for other
properties, sincerity can at least be (in the case of a highly reliable
source) a warranty of high plausibility.
NASSLLI 2010
78
Super-Strong Trust !i ⇑
One can show that all i-introspective sentences are equivalent to ones
of the form Ki ϕ.
Hence, the attitude ! ⇑ji can be described as follows: if i’s
announcement ϕ is (equivalent to) an i-introspective sentence of the
form Ki ϕ, then j applies an update !ϕ; otherwise, he performs an
upgrade ⇑ ϕ.
Let us call this attitude super-strong trust.
NASSLLI 2010
79
Another Description
The attitude ! ⇑ji can also be described in more semantical terms:
after i announces a sentence ϕ, j will apply an update !ϕ iff she
knows that i knows ϕ’s truth-value, i.e. iff we have
Kj (Ki ϕ ∨ Ki ¬ϕ);
otherwise, j will perform a radical upgrade ⇑ ϕ.
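In code form, the recipe is a simple dispatch (a toy illustration; the transformer arguments are assumed to be supplied, e.g. by the sketches given earlier):

def super_strong_trust(states, leq, P, listener_knows_value, update, radical_upgrade):
    # !⇑ : hard update when Kj(Ki ϕ ∨ Ki ¬ϕ) holds (listener_knows_value),
    # radical upgrade ⇑ otherwise.
    transform = update if listener_knows_value else radical_upgrade
    return transform(states, leq, P)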
NASSLLI 2010
80
Reduction Laws: examples
Infallibility:
!ji ⇒ ( [i : ϕ]Bj^P θ ⇔ ( ϕ ⇒ Bj^{P ∧ [i:ϕ]P} [i : ϕ]θ ) )
Strong Trust:
⇑ji ⇒ ( [i : ϕ]Bj^P θ ⇔ ( ¬Kj ¬[i : ϕ]P ⇒ Bj^{ϕ ∧ [i:ϕ]P} [i : ϕ]θ ) )
Super-Strong Trust:
! ⇑ji ⇒ ( [i : ϕ]Bj^P θ ⇔ ( (Kj (Ki ϕ ∨ Ki ¬ϕ) ∧ ϕ ⇒ Bj^{P ∧ [i:ϕ]P} [i : ϕ]θ)
∧ (¬Kj (Ki ϕ ∨ Ki ¬ϕ) ∧ ¬Kj ¬[i : ϕ]P ⇒ Bj^{ϕ ∧ [i:ϕ]P} [i : ϕ]θ) ) )
NASSLLI 2010
81
Static Attitudes as Fixed Points
To each (dynamic) doxastic attitude given by a transformer τ , we can
associate a static attitude τ .
We write τ i ϕ, and say agent i has the attitude τ towards ϕ, if i’s
plausibility structure is a fixed point of the doxastic transformation
τ ϕ. Formally,
s |=S τ i ϕ iff τ ϕ(S, ≤i , s) ≃ (S, ≤i , s)
(where ≃ is the bisimilarity relation).
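On finite models this test is easy to sketch in Python (an illustration; for simplicity it compares the orders literally rather than up to bisimilarity):

def has_static_attitude(transform, states, leq_i, P):
    # Agent i has the static attitude associated with transformer τ towards the
    # sentence with extension P iff applying τ to her order changes nothing.
    return transform(states, leq_i, P) == leq_i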
NASSLLI 2010
82
Examples
The fixed point of update is “irrevocable knowledge” K:
! j ϕ ⇔ Kj ϕ
and similarly
!− j ϕ ⇔ Kj ¬ϕ .
The fixed point of radical upgrade is strong belief Sb:
⇑j ϕ ⇔ Sbj ϕ
and similarly
⇑− j ϕ ⇔ Sbj ¬ϕ .
The fixed point of conservative upgrade is belief B:
↑j ϕ ⇔ Bj ϕ
NASSLLI 2010
83
and similarly
↑− j ϕ ⇔ Bj ¬ϕ .
The fixed point of identity is tautological:
idj ϕ ⇔ true .
NASSLLI 2010
84
Explanation
The importance of fixed points τ is that they capture the
attitudes (towards the sentence ϕ) that are induced in an agent
after receiving information (ϕ) from a source towards which
she has the attitude τ :
indeed, τ j is the strongest attitude such that
τji ⇒ [i : p]τ j p ,
for all non-doxastic sentences p.
NASSLLI 2010
85
More generally, after an agent with attitude τ towards a source
upgrades with information ϕ from this source, she comes to have
attitude τ towards the fact that ϕ WAS the case (before the
upgrade):
τji ⇒ [i : ϕ] τ j (BEFORE ϕ) ,
where BEFORE is a one-step past tense operator.
NASSLLI 2010
86
Explanation continued: examples
Conservative upgrades induce simple beliefs:
after ↑ ϕ, the agent only comes to believe that ϕ (was the case).
Radical upgrades induce strong beliefs:
after ⇑ ϕ, the agent comes to strongly believe that ϕ (was the
case).
Updates induce (irrevocable) knowledge:
after !ϕ, the agent comes to know that ϕ (was the case).
NASSLLI 2010
87
Fixed Points and Redundancy
A fixed point expresses the “redundancy”, or un-informativity,
of a doxastic transformation: in this sense, we can say that the fact
that a fixed-point-attitude is induced in the listener by an
announcement captures the fact that repeating the announcement
would be redundant.
This is literally true for non-epistemic sentences:
[i : p][i : p]θ ⇔ [i : p]θ ,
but can be generalized to arbitrary sentences in the form:
[i : ϕ][i : BEFORE ϕ]θ ⇔ [i : ϕ]θ .
NASSLLI 2010
88
Honesty
We say that a communication act i : ϕ is honest (with respect) to
a listener j, and write
Honest(i : ϕ → j) ,
if the speaker i has the same attitude towards ϕ (before the
announcement) as the one (he believes to be) induced in the listener j
by his announcement of ϕ.
NASSLLI 2010
89
By the above results, it seems that, if τji holds then honesty should
be given by τ i ϕ.
This is indeed the case ONLY if we adopt the simplifying assumption
that all doxastic attitudes τji are common knowledge.
But, in general, honesty depends only on (having) the attitude that
the speaker believes to induce in the listener:
Honest(i : ϕ → j) = ⋀_τ (Bi τji ⇒ τ i ϕ) .
NASSLLI 2010
90
General Honesty
A (public) speech act i : ϕ is honest iff it is honest (with respect) to
all the listeners:
Honest(i : ϕ) := ⋀_{j ≠ i} Honest(i : ϕ → j).
NASSLLI 2010
91
Example: honesty of an infallible speaker
Assume that it is common knowledge that a speaker i is infallible
(i.e. that !ji holds for all j 6= i).
Then an announcement i : ϕ is honest iff the speaker knows it to be
true; i.e. iff Ki ϕ holds.
The same condition ensures that the announcement i : Ki ϕ is honest.
NASSLLI 2010
92
Example: honesty of a “barely trusted” speaker
Assume common knowledge that a speaker i is only barely
trusted (i.e. that ↑ji holds for all j 6= i).
Then an announcement i : ϕ is honest iff the speaker believes it to
be true; i.e. iff Bi ϕ holds.
The same condition ensures that the announcement i : Bi ϕ is honest.
NASSLLI 2010
93
Example: honesty of a strongly trusted speaker
Assume common knowledge that a speaker i is strongly trusted,
but not infallible (i.e. that ⇑ji holds for all j 6= i).
Then an announcement i : ϕ is honest iff the speaker strongly
believes it to be true; i.e. iff Sbi ϕ holds.
NASSLLI 2010
94
Announcing what you safely believe
But what can our strongly trusted speaker honestly
announce, when he only believes (but not necessarily
strongly believes) ϕ?
Well, he might announce that he believes ϕ: the act i : Bi ϕ is
certainly honest whenever Bi ϕ holds.
But, in fact, he can honestly claim much more! He can claim he
“defeasibly knows” (= safely believes) ϕ:
the act i : 2i ϕ is honest (for a strongly trusted speaker) if
Bi ϕ holds.
NASSLLI 2010
95
Honest Exaggerations
Announcing 2i ϕ might sound like a wild exaggeration, but it is a
sincere announcement: indeed, by the identity
Bi 2i ϕ = Bi ϕ
we have that, whenever i believes ϕ, he also believes 2i ϕ.
But, moreover, it is an honest announcement, since
“belief” implies “strong belief in safe belief”;
in fact, the two are equivalent:
Bi ϕ = Sbi (2i ϕ) .
NASSLLI 2010
96
In fact, the above types of “defeasible knowledge” announcements are
the only honest announcements (up to logical equivalence) by a
strongly trusted agent:
For a speaker i who is commonly known to be strongly
trusted, an announcement i : ϕ is honest iff ϕ is equivalent to
some sentence of the form 2i ψ, such that Bi ψ holds.
NASSLLI 2010
97
Dishonest sincerity
Sincerity does NOT imply honesty.
Example: Suppose a strongly trusted agent i believes P , but does not
strongly believe it.
Then the announcement i : P is sincere but dishonest:
indeed, this announcement induces in the listeners a strong belief in
P , while the speaker did not have such a strong belief!
NASSLLI 2010
98
Un-sincere honesty
In general, honesty does not imply sincerity.
Exercise for audience: Think of a counterexample!
Nevertheless, when (it is common knowledge that) the
listeners have a “positive” attitude towards the speaker (one
that implies belief ), then honesty does imply sincerity!
NASSLLI 2010
99
Persuasiveness
We say that an announcement i : ϕ is persuasive to a listener j
with respect to an issue θ,
and we write P ersuasive(i : ϕ → j; θ),
if
the effect of the announcement is that the listener is “converted”
to the speaker’s attitude towards θ.
NASSLLI 2010
100
Formally, for non-doxastic sentences:
Persuasive(i : ϕ → j ; p) := ⋀_τ (τ i ϕ ⇒ [i : ϕ] τ j p) .
For doxastic sentences, this needs to be modified:
Persuasive(i : ϕ → j ; θ) := ⋀_τ (τ i ϕ ⇒ [i : ϕ] τ j (BEFORE θ))
NASSLLI 2010
101
How to be Honest, (Sincere) and Persuasive
Suppose a strongly trusted agent i wants to be honest, but
persuasive with respect to some issue θ that he believes in
(although he does not necessarily have a strong belief ). So
we assume ⇑ji for all j, and Bi θ (but NOT necessarily Sbi θ).
QUESTION: What can i announce honestly (and thus
sincerely), in order to be persuasive?
This is a very important question: how can you “convert” others to
your (weak) beliefs, while still maintaining your honesty and
sincerity?
NASSLLI 2010
102
What NOT to say
i : θ would be sincere and persuasive, but it’s dishonest (unless Sbi θ
holds)!
i : Ki θ is equally dishonest.
i : Bi θ is honest, but not persuasive: it won’t change j’s beliefs
about θ (although it will change her beliefs about i’s beliefs).
RECALL: Being informed of another’s beliefs is not enough
to convince you of their truth.
NASSLLI 2010
103
ANSWER: honest exaggerations are persuasive!
It turns out that the only honest and persuasive announcement in
this situation is to make an “honest exaggeration”:
i : 2i θ.
In other words:
to honestly convert others to your belief in θ, you need to
say that you “defeasibly know” θ (when in fact you only
believe θ).
NASSLLI 2010
104
Conclusion
THE LESSON (known by most successful politicians):
If you want to convert people to your beliefs, don’t be too
scrupulous with the truth:
Announce that you “know” things even if you don’t know for
sure that you know them!
History will deem you “honest”, as long as you... believed it
yourself !
Simple belief, a loud voice and strong guts is all you need to
be persuasive!
“We now know that Saddam Hussein has acquired weapons
of mass destruction...” (G.W. Bush, 2002)
NASSLLI 2010
105
5.4. Dynamic Merging of Beliefs
QUESTION: How can “Agreement” be reached by
“Sharing”?
MORE PRECISELY: we want to investigate the issue of reaching
doxastic agreement among the agents of a group by “sharing”
information or beliefs.
NASSLLI 2010
106
How can “Agreement” be reached by “Sharing”?
Example of a particular scenario:
Albert: knows (D or G); believes G; conditional on D, he believes ¬G.
Mary: doesn’t know (D or G); believes D.
They share their information
⇓
Together they know the same: (D or G) and both believe D and ¬G
NASSLLI 2010
107
Main Issues:
• Agents’ goal = Reach a total doxastic/epistemic agreement
(“merge”).
• Different types of agreements can be reached: agreement only
on the things they know, on some simple beliefs, strong beliefs etc.
• Depending on the type of agreement to be reached, what should the
strategy be? Which communication protocol ? (given that the
agents have some limited abilities in the way they communicate)
• We are interested in “sharing”: joint (group) belief revision
induced by sincere, persuasive honest public communication
by either of the agents (the “speaker ”).
NASSLLI 2010
108
Speak loud, be positive, honest (sincere) and persuasive!
Rules of the game for our agents:
• Fully Public communication: common knowledge of what (the
message) is announced and of the fact that all agents adopt the same
attitude towards other agents. In fact, we’ll further assume that this
common attitude is a “positive” one (i.e. it implies belief ), and in
particular we’ll focus on the attitudes !, ⇑ and ! ⇑.
• Honesty: the speaker should have the same attitude towards the
announced information that she expects to induce in the listeners.
When the group’s common attitude is positive, honesty will imply
sincerity.
• Persuasiveness: listeners come to share the same attitude as
the speaker towards the relevant issues.
NASSLLI 2010
109
• (Common Knowledge that) All Agents Have the SAME
(positive) doxastic attitudes towards all other agents:
all the labels in the trust graph are identical (and positive).
Let τ be this same doxastic attitude. Then we introduce a sentence
τ := ⋀_{i ≠ j} τji
saying that all agents adopt attitude τ towards all other agents.
NASSLLI 2010
110
Example 1: (Common Knowledge of ) Infallibility
For three agents, ! holds if the graph is
(Diagram: the complete graph on agents 1, 2, 3 with every arc labelled !.)
NASSLLI 2010
111
Example 2: (Common Knowledge of ) Strong Trust
⇑ holds if the graph is
(Diagram: the complete graph on agents 1, 2, 3 with every arc labelled ⇑.)
NASSLLI 2010
112
Example 3: (Common Knowledge of ) Super-Strong Trust
! ⇑ holds if the graph is
(Diagram: the complete graph on agents 1, 2, 3 with every arc labelled ! ⇑.)
NASSLLI 2010
113
Goal of Sharing is Total Agreement
• After each act of sharing, all agents reach a partial agreement,
namely with respect to the piece of information that was
communicated.
• The natural end of the sharing process is when total agreement
has been reached: all the agents’ doxastic structures are exactly the
same.
• After this, nothing is left to share: any further sincere persuasive
communication is redundant from then on.
NASSLLI 2010
114
Dynamic Merge
• When total agreement IS reached in this way, we say that the
agents’ doxastic structures have been dynamically “merged” into
one.
Connects to the problem of “preference aggregation” in Social Choice
Theory. “Aggregating beliefs” (or rather, belief structures).
Questions: What types of merge can be dynamically
realized by what type of “sharing”?
Do the communication agenda (the order of the items announced, whether
agents may interrupt the speaker) and the group’s hierarchy make any
difference?
NASSLLI 2010
115
“Realizing” Preference Merge Dynamically
Intuitively, the purpose of “preference merge” ⊕i∈G Ri is to achieve
a state in which the G-agents’ preference relations are “merged”
accordingly, i.e.
to perform a sequence π of communication acts, transforming the
initial model (S, Ri )i∈G into a model (S, R′i )i∈G such that
R′j = ⊕i∈G Ri
for all j ∈ G.
Let us call this a “realization” of the merge operation ⊕.
NASSLLI 2010
116
Realizing Distributed Knowledge
In the case of knowledge, it is easy to design a protocol to
realize the parallel merge of agents’ knowledge by a sequence
of joint updates, IF we assume (common knowledge of )
infallibility !:
PROTOCOL: Assume !. Then, in no particular order,
the agents have to publicly and sincerely announce “all
that they know” .
More precisely, a communication act a : Ka P is performed for each
set of states P ⊆ S such that P is (or comes to be) known to a given
agent a (after the previous communication acts). This is essentially
the algorithm in van Benthem’s paper “One is a Lonely Number”.
NASSLLI 2010
117
The Protocol
Formally, if (a1 , . . . , an ) is some arbitrary listing of all agents
in G without repetitions, then a protocol for realizing distributed
knowledge within group G at state s, given attitude !, is:
π := ∏_{i=1,n} ρi
where ∏ is sequential composition of a sequence of actions and
ρi := ∏ { (ai : Kai P ) : P ⊆ S such that s |= [ ∏_{j=1,i−1} ρj ] Kai P }.
The order of the agents in the first ∏ and the order in which the
announcements are made by each agent (in the second ∏) are arbitrary.
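A Python sketch of the protocol on a finite model (an illustration with assumed encodings; for brevity each agent announces the single strongest proposition she knows at s, namely her information cell, instead of every known P separately):

def joint_update(sims, P):
    # Public update !P: restrict every agent's indistinguishability relation to P.
    return {i: {(x, y) for (x, y) in R if x in P and y in P}
            for i, R in sims.items()}

def share_all_you_know(sims, states, s):
    # Each agent in turn announces her current information cell at the real state s;
    # afterwards every agent's cell at s is the intersection of the original cells,
    # i.e. distributed knowledge has been realized.
    remaining = set(states)
    for i in sims:
        cell = {t for t in remaining if (s, t) in sims[i]}
        sims = joint_update(sims, cell)
        remaining &= cell
    return remaining, sims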
NASSLLI 2010
118
Order-independence
The announcements may even be interleaved:
if the initial model is finite, then any “public” dialogue (of agents
announcing facts they know) will converge to the realization of
distributed knowledge,
as long as the agents keep announcing new things (i.e. that are not
already common knowledge).
NASSLLI 2010
119
Realizing Lexicographic Merge
Assuming that we have NO PRIVATE “HARD”
INFORMATION (i.e. that all knowledge is common
knowledge) and that strong trust ⇑ is the common attitude:
then we can realize the lexicographic merge of SOFT
INFORMATION, via a protocol very similar to the one for
distributed knowledge:
PROTOCOL: Assume ⇑ and some given priority order. Then, respecting
the priority order, each agent has to publicly and sincerely announce
that she “defeasibly knows” all that she believes to “know”. Here,
“knowledge” now means defeasible knowledge 2a .
NASSLLI 2010
120
Order-dependence
The main difference is that now the speakers’ order matters!
To realize lexicographic merge, the agents that have “priority”
in the merge have to be given priority in the protocol.
A lower-priority agent will be permitted to speak ONLY
after the higher-priority agents finished announcing they
“know” ALL that they believe they “know”.
No interruptions, please!
NASSLLI 2010
121
The PROTOCOL
Formally, if (a1 , . . . , an ) is a listing of all agents in descending
priority order, the protocol π′ for realizing the lexicographic merge
of the plausibility relations {≤a }a∈G at state s, given common
attitude ⇑, is the following:
π′ := ∏_{i=1,n} ρ′i
where
ρ′i := ∏ { (ai : 2ai P ) : P ⊆ S such that s |= [ ∏_{j=1,i−1} ρ′j ] Bai P }.
Here, the order (a1 , . . . , an ) in the first ∏ is the priority order,
while the order of announcements in the second ∏ is still arbitrary.
NASSLLI 2010
122
Be Persuasive!
Note that simply announcing what they believe would be
DISHONEST, while simply announcing THAT they believe it, or that
they strongly believe it, won’t be persuasive enough: indeed, this will
not in general be enough to achieve preference merge (or even simple
belief merge!).
Being informed of another’s beliefs is not enough to
convince you of their truth.
What is needed for belief merge is that each agent tries to be
persuasive: to “convert” the others to her own beliefs by announcing
2a ϕ when she just believes ϕ (and hence believes that 2a ϕ).
NASSLLI 2010
123
Honesty
This may look like a form of deceit, or at least a “deliberate
exaggeration” (though maybe not an outright lie).
But, as we saw, such a communication act a : 2a P is sincere (since
Ba 2a P , i.e. Ba P , holds), and moreover it is honest: since Ba P
implies Sba 2a P , a’s statement is strongly believed by her.
As a result, this act converts the listeners to the SAME doxastic
attitude towards 2a P as the speaker (a) had: they will all strongly
believe that 2a P was true.
As a consequence, if the speaker announces all such 2a ’s (that she
believes to hold), she will end up converting all listeners to all
her beliefs and strong beliefs: this is a perfect act of “honest
persuasion” by a sequence of “sincere exaggerations”!
NASSLLI 2010
124
Realizing Priority Merge
Finally:
If we assume (common knowledge of) super-strong trust ! ⇑ (as the
common attitude), then we can realize the Priority Merge ⊗i ≤i of the
whole PLAUSIBILITY ORDERS (encoding BOTH SOFT AND HARD
INFORMATION) by simply executing the second protocol above:
respecting the priority order, each agent has to publicly
and sincerely announce that she “defeasibly knows” all
that she believes (to “know”).
NASSLLI 2010
125
Explanation
This is because, in the context of having ! ⇑ as the common attitude,
the second protocol supersedes the first protocol:
indeed, for every P of the form Ka Q, agent a announces 2a Ka Q
whenever Ba Ka Q in fact holds;
but, by Introspection of K, this is equivalent to announcing Ka Q
whenever Ka Q actually holds;
since Ka Q is an a-introspective sentence, it is generally known that a
either knows it or she knows it’s false (in the irrevocable sense of
knowledge K);
hence, the common attitude ! ⇑ prescribes that every other agent will
perform an update !(Ka Q) with this information.
NASSLLI 2010
126
Example
In the situation with Albert and Mary together:
(Diagram: the four worlds (¬D,¬G), (¬D,G), (D,¬G), (D,G) with Mary’s (m)
and Albert’s (a) plausibility arrows, as in the Winestein example.)
the protocol to realize the Priority Merge Rm⊗a consists of:
Mary’s sincere announcement (of her strong belief D); then Albert’s
infallible announcement (of his “hard” knowledge that D ∨ G); then
Albert’s sincere announcement (of ¬G, which he strongly believes
after Mary’s announcement):
m : 2m D ; a : Ka (D ∨ G) ; a : 2a ¬G
(Diagram: the resulting common plausibility order for m and a over the
four worlds, ending in (D,G) →_{m,a} (D,¬G).)
NASSLLI 2010
127
Order-dependence: counterexample
The priority merge of the ordering
s →_{a} u →_{a} w
with the ordering
w →_{m} s →_{m} u
is equal to either of the two orders (depending on which agent has
priority). But...
NASSLLI 2010
128
... suppose we have the following public dialogue
m : 2m u ; a : 2a (u ∨ w)
This respects the “sincerity” rule of our protocol, since initially m
strongly believes u; then after the first upgrade a strongly believes
u ∨ w.
But this doesn’t respect the “order” rule: m lets a answer before she
finishes all she has to say. The resulting order is neither of the two
priority merges:
s →_{a,m} w →_{a,m} u
NASSLLI 2010
129
The Power of Agendas
All this illustrates the important role of the person who “sets
the agenda”:
the “Judge”, who assigns priorities to the witnesses taking the stand;
or the “Speaker of the House”, who determines the order of the
speakers as well as the issues to be discussed and the relative
priority of each issue.
NASSLLI 2010
130
Open Problem 1
So, depending on the “agenda”, common super-strong trust ! ⇑ can
realize a whole plethora of merge operations. Nevertheless,
NOT everything goes: the requirements imposed on the
plausibility relations generally restrict which kinds of merge are
realizable. E.g. it is easy to see that neither intersection nor
lexicographic merge preserves the “local connectedness” of
plausibility relations, and so neither of them is realizable in our
(locally connected) setting.
OPEN QUESTION: characterize the class of merge operations
realizable when we have (common knowledge of )
super-strong trust.
NASSLLI 2010
131
Different Attitudes
What happens if we drop the uniformity of attitudes?
E.g. it is common knowledge that Mary is infallible !am , but that
Albert is only super-strongly trusted ! ⇑ma :
Let us also assume that we give higher priority to the infallible agent
Mary.
Intuitively, Mary is given more persuasive powers, so the merged
order should be closer to hers.
However, intuition is deceiving: an infallible agent can honestly
announce fewer things than a fallible one!
NASSLLI 2010
132
Counterexample
Given (common knowledge of) !am ∧ ! ⇑ma , the priority merge of
Albert’s ordering
s →_{a} u →_{a} w
with Mary’s ordering
w →_{m} s →_{m} u
is equal to Albert’s ordering, NO MATTER WHAT THE PRIORITY
ORDER IS!