A Model-Theoretic Analysis of Knowledge

Ronald Fagin
Joseph Y. Halpern
Moshe Y. Vardi†
IBM Research
Almaden Research Center
650 Harry Road
San Jose, California 95120-6099
Chuangtse and Hueitse had strolled on to the bridge over the Hao, when the former observed, "See how the small fish are darting about! That is the happiness of the fish." "You are not a fish yourself," said Hueitse. "How can you know the happiness of the fish?" "And you not being I," retorted Chuangtse, "how can you know that I do not know?"
Chuangtse, c. 300 BC
Abstract
Understanding knowledge is a fundamental issue in many disciplines. In computer science, knowledge arises not only in the obvious contexts (such as knowledge-based systems), but also in distributed systems (where the goal is to have each processor "know" something, as in agreement protocols). A general semantic model of knowledge is introduced, to allow reasoning about statements such as "He knows that I know whether or not she knows whether or not it is raining." This approach more naturally models a state of knowledge than previous proposals (including Kripke structures). Using this notion of model, a model theory for knowledge is developed. This theory enables one to interpret the notion of a "finite amount of information".
A preliminary version of this paper appeared in Proc. 25th IEEE Symp. on Foundations of Computer Science, 1984, pp. 268-278. This version is essentially identical to the version that appears in Journal of the ACM 38:2, 1991, pp. 382-428.
† Part of the research reported here was performed while this author was visiting the Weizmann Institute of Science, Rehovot, Israel.
1 Introduction
Epistemology, the theory of knowledge, has been a subject of philosophical investigation for millennia. Reasoning about knowledge and knowledge representation has also been an issue of concern in Artificial Intelligence for over two decades (cf. [MH, Mo1, BS]). More recently, researchers have realized that these issues also play a crucial role in other subfields of computer science, including cryptography, distributed computation, and database theory, as well as in mathematical economics (cf. [Ha1, Ha2]).
For example, a distributed protocol is often best analyzed in terms of the states of knowledge of the processors involved. In a protocol such as Byzantine Agreement [PSL, DS], it is essential that this analysis include not only what a processor knows to be true about the world, but also its knowledge about what the other processors know. Such reasoning can, however, get very complicated. As Clark and Marshall point out [CM], while it may be somewhat difficult to keep straight a pipeline of gossip such as "Dean knows that Nixon knows that Haldeman knows that Magruder knows about the Watergate break-in", making sense out of "Dean doesn't know whether Nixon knows that Dean knows that Nixon knows about the Watergate break-in" is much harder. Yet this latter sentence precisely captures the type of reasoning that goes on in proving lower bounds for Byzantine Agreement [DS].
The need to formally model this type of reasoning is our motivation for constructing a semantic model for knowledge. The first attempt to do so was made by Hintikka [Hi], using essentially the notion of possible worlds. Hintikka's idea was that someone knows φ exactly if φ is true in all the worlds he thinks are possible. Possible-world semantics has been formalized (cf. [Sa]) using Kripke structures [Kr]. In a Kripke structure for knowledge, the "possible worlds" can be viewed as nodes on a graph that are joined by edges of various colors, one corresponding to each "knower" or "agent". Two possible worlds are joined by an edge for agent i exactly if they are indistinguishable as far as agent i is concerned.
There are situations where Kripke structures clearly model the state of knowledge. For example, assume that there is a set of processors, each with a set of clearly defined local states. We then define a Kripke structure whose states consist of the global states (which describe the local states of each of the processors), where two global states are indistinguishable to a processor if it has the same local state in both. This is the situated-automata approach, where knowledge is ascribed on the basis of the information carried by the state of a machine [Ro]. This approach has been used in a number of papers on distributed systems, including [HM1, DM, FHV, MT, RK]. However, there are situations where it is not clear how to use Kripke structures to model directly a state of knowledge.
Example 1.1: Consider a system with two communicating agents where message transmission is not guaranteed. Suppose two messages have been exchanged: a message from agent 1 to agent 2 saying p (think of p as being "the value is 3"), followed by an acknowledgment from agent 2 that is received by agent 1. Thus, agent 1 knows p, agent 2 knows that agent 1 knows p, agent 1 knows that agent 2 knows that agent 1 knows p, and this, in some sense, is all that is known. While it is easy to construct Kripke structures where the formulas K_1 p, K_2 K_1 p, and K_1 K_2 K_1 p are all true (where K_i φ is read "i knows φ"), it is not the least bit obvious which one captures precisely this simple situation, or even if there is one. (It follows from our results in Sections 3 and 4 that there indeed is one, but that there is none with only finitely many nodes.)
The difficulty in using Kripke structures to model knowledge states directly also sheds doubt on their adequacy as semantic models for knowledge. To get around this difficulty, various researchers have tried to characterize a state of knowledge syntactically, by the set of formulas that are true of this state (cf. [Hi, MSHI, Mo2]). This method, however, requires infinitely many formulas to characterize a state of knowledge, and still begs the question of what a model of a state of knowledge is. A model to us is a description of the world, not a collection of formulas. Describing a state by the formulas that are true in it seems to avoid the issue of modelling altogether.
In this paper we introduce knowledge structures, which are intended to model states of knowledge. We also use the idea of possible worlds, but in a somewhat different way than in Kripke structures. Roughly speaking, we proceed inductively by constructing worlds of each depth. A depth 0 world is a description of reality (in the propositional case, a truth assignment to all the primitive propositions); a depth 1 world consists of a set of depth 0 worlds for each agent, corresponding to the worlds that the agent thinks are possible; a depth 2 world consists roughly of a set of possible depth 1 worlds for each agent; and so on.
Having modeled knowledge states, we can go back and examine Kripke structures. It turns out that we can now justify the use of Kripke structures as models for collections of knowledge states. More precisely, to every node in a Kripke structure there corresponds a knowledge structure where the same formulas are true, and conversely, for every knowledge structure we can build a Kripke structure one of whose nodes will satisfy the same formulas as the knowledge structure. This correspondence between knowledge structures and Kripke structures enables us to immediately apply to knowledge structures results concerning complete axiomatizations and decision procedures that have already been proved for Kripke structures (cf. [Sa, HM2]).
Although the same axioms characterize knowledge structures and Kripke structures, knowledge structures are a much more flexible tool for examining two concepts that seem to us fundamental, finite information and common knowledge, and their interaction. (A fact p is common knowledge if everyone knows that everyone knows that everyone knows ... that p. For a discussion of the significance of common knowledge for distributed systems, see [HM1].) We study two model-theoretic constructions, no-information and least-information extensions, that capture the notion of finite information, and finite information in the presence of common knowledge. An interesting corollary of this investigation is that finite Kripke structures cannot model lack of common knowledge.
Approaches similar to ours have been taken by van Emde Boas et al. [VSG] and by Mertens and Zamir [MZ]. In [VSG] an epistemic model is used to analyze the Conway Paradox. Like ours, that model captures an infinite hierarchy of knowledge levels, but it does not have the expressive power of knowledge structures. In [MZ] the framework is Bayesian; a world is not just possible or impossible, but has a probability associated with it. Mertens and Zamir's infinite hierarchy of beliefs is the analogue of our knowledge structures in a Bayesian setting. We will have more comments later about the relationship between these works and ours.
The rest of this paper is organized as follows. In the next section we formally describe knowledge structures and show how to use them to give semantics to formulas involving knowledge. In Section 3 we describe the correspondence between knowledge structures and Kripke structures, and show how we exploit this correspondence. In Section 4 we show how to model finite information, and give some tight bounds for what can be known with finite information. These results imply the surprisingly subtle fact that in many practical situations, there can be no nontrivial common knowledge. In this section we also show that finite Kripke structures cannot in general model finite information. In Section 5 we deal with common and joint knowledge. The theme in this section is that common and joint knowledge involve knowledge of transfinite depth, and we develop appropriate tools to deal with it. In Section 6 we compare our approach with the Bayesian approach to modelling knowledge and we comment on the flexibility and utility of knowledge structures, by showing how they can be extended to deal with belief and time. We conclude with some remarks in Section 7.
2 Knowledge structures
In this section we define knowledge structures, each of which models a state of knowledge. We assume a finite set of agents. The first step in designing a model of knowledge is to decide what the properties of knowledge should be. The nature of knowledge and its properties have been a matter of great dispute among philosophers. Rather than attempting to resolve these disputes here, we concentrate on one set of properties that seems natural, and mention later (in Section 6) how to modify the model to capture various others.
We take it to be part of the definition of knowledge that anything that someone knows is true. While someone may believe false things, it is impossible to have false knowledge. The motivation for the other properties of knowledge that we assume comes from considering a system of idealized rational agents, in which it is common knowledge that each agent is capable of perfect introspection and logical reasoning. In such a system, an agent knows exactly what he does and does not know, and knows also all the logical consequences of his knowledge. Finally, he knows that these properties hold for all the other agents' knowledge. These properties are essentially the axioms that characterize our notion of knowledge. Thus, our knowledge structures will be defined in such a way that they satisfy the following axioms (recall that K_i φ means "agent i knows φ"):
1. All substitution instances of propositional tautologies.
2. K_i φ ⇒ φ ("Whatever agent i knows is true").
3. K_i φ ⇒ K_i K_i φ ("Agent i knows what he knows").
4. ¬K_i φ ⇒ K_i ¬K_i φ ("Agent i knows what he does not know").
5. K_i φ_1 ∧ K_i (φ_1 ⇒ φ_2) ⇒ K_i φ_2 ("What agent i knows is closed under implication").
These axioms were first discussed by Hintikka [Hi]. The axioms, along with the inference rules of modus ponens ("from φ_1 and φ_1 ⇒ φ_2 infer φ_2") and knowledge generalization ("from φ infer K_i φ"), imply that the agents are very wise: each knows all tautologies and all of the consequences of his knowledge, and each knows that all of the other agents are equally wise. It is well known that Axiom 3 can be derived from the other axioms and inference rules [HC]. It is convenient to refer sometimes to Axiom 3 as describing positive introspection, and to Axiom 4 as describing negative introspection.
Before we formally define knowledge structures, let us discuss them informally. Assume first that there is only one agent. In this case, a knowledge structure consists of two parts. The first part describes "reality". For simplicity, in this paper we take reality to be a truth assignment to a fixed set of primitive propositions. The second part of a knowledge structure describes a set of "possible worlds", each of which is a truth assignment that the agent thinks is possible.
Example 2.1: Assume that p, q, and r are the primitive propositions, and that "reality" is the truth assignment pq̄r, which means that p is true, q is false, and r is true. Assume that the agent knows that exactly one of p, q, or r is false, but that he does not know which. Then his set of "possible worlds" is {p̄qr, pq̄r, pqr̄}.
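For concreteness, the enumeration in this example can be done mechanically. The following small Python sketch (the dict encoding of truth assignments is our own, not from the paper) recovers the agent's set of possible worlds from the constraint "exactly one of p, q, r is false":

```python
from itertools import product

# Enumerate all truth assignments to p, q, r, then keep exactly those
# consistent with what the agent knows: precisely one of p, q, r is false.
props = ("p", "q", "r")
assignments = [dict(zip(props, bits)) for bits in product([True, False], repeat=3)]
possible = [a for a in assignments if sum(not a[x] for x in props) == 1]

# Print each possible world, marking a false proposition with an overbar.
for a in possible:
    print("".join(x if a[x] else x + "\u0304" for x in props))
```

Reality, pq̄r, is itself one of the three surviving assignments, as the correctness property of knowledge requires.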
When there are two or more agents, the situation becomes much more complex. Not only can agents have knowledge about reality, but they can also have knowledge about each other's knowledge.
Example 2.2: Assume there are two agents, Alice and Bob, and that there is only one primitive proposition p. At the "0th level" ("reality"), assume that p is true. The 1st level tells each agent's knowledge about reality. For example, Alice's knowledge at the 1st level could be "I (Alice) don't know whether p is true or false", and Bob's could be "I (Bob) know that p is true". The 2nd level tells each agent's knowledge about the other agent's knowledge about reality. For example, Alice's knowledge at the 2nd level could be "I know that Bob knows whether p is true or false", and Bob's could be "I don't know whether Alice knows p". Thus, Alice knows that either p is true and Bob knows this, or else p is false and Bob knows this. At the 3rd level, Alice's knowledge could be "I know that Bob does not know whether I know about p". This can continue for arbitrarily many levels.
We now give the formal definition of a knowledge structure, and then explain more of the intuition underlying it. We assume a fixed finite set of primitive propositions, and a fixed finite set P of agents. A 0th-order knowledge assignment, f_0, is a truth assignment to the primitive propositions. We call ⟨f_0⟩ a 1-ary world (since its "length" is 1). Intuitively, a 1-ary world is a description of reality. Assume inductively that k-ary worlds (or k-worlds, for short) have been defined. Let W_k be the set of all k-worlds. A kth-order knowledge assignment is a function f_k : P → 2^{W_k}. Intuitively, f_k associates with each agent a set of "possible k-worlds"; the worlds in f_k(i) are "possible" for agent i and the worlds in W_k − f_k(i) are "impossible" for agent i. A (k+1)-sequence of knowledge assignments is a sequence ⟨f_0, …, f_k⟩, where f_i is an ith-order knowledge assignment. A (k+1)-world is a (k+1)-sequence of knowledge assignments that satisfies certain semantic restrictions, which we shall list shortly. These restrictions enforce the properties of knowledge mentioned above. An infinite sequence ⟨f_0, f_1, f_2, …⟩ is called a knowledge structure if each prefix ⟨f_0, …, f_{k−1}⟩ is a k-world. Thus, a k-world describes knowledge of depth k − 1, and a knowledge structure describes knowledge of arbitrary depth.
Example 2.3: Before we list the restrictions on f_k, let us reconsider Example 2.2. In that example, f_0 is the truth assignment that makes p true. Also, f_1(Alice) = {p, p̄} (where by p (respectively, p̄) we mean the 1-world ⟨f_0⟩ (respectively, ⟨f_0′⟩), where f_0 (respectively, f_0′) is the truth assignment that makes p true (respectively, false)), and f_1(Bob) = {p}. Saying f_1(Alice) = {p, p̄} means that Alice does not know whether p is true or false. We can write the 2-world ⟨f_0, f_1⟩ as

⟨p, (Alice ↦ {p, p̄}, Bob ↦ {p})⟩.

Let us denote this 2-world by w_1. Let w_2 be the 2-world

⟨p̄, (Alice ↦ {p, p̄}, Bob ↦ {p̄})⟩,

and let w_3 be

⟨p, (Alice ↦ {p}, Bob ↦ {p})⟩.

In Example 2.2, f_2(Alice) = {w_1, w_2}, since Alice thinks both w_1 (where p is true and Bob knows this) and w_2 (where p is false and Bob knows this) are possible worlds. Similarly, f_2(Bob) = {w_1, w_3}, since Bob thinks both w_1 (where p is true and Alice does not know it) and w_3 (where p is true and Alice knows this) are possible worlds.
A (k+1)-world ⟨f_0, …, f_k⟩ must satisfy the following restrictions for each agent i:

(K1) Correctness: ⟨f_0, …, f_{k−1}⟩ ∈ f_k(i), if k ≥ 1 ("The real k-world is one of the possibilities, for each agent"). In our example, we see that indeed p ∈ f_1(Alice) and p ∈ f_1(Bob). Furthermore, w_1 ∈ f_2(Alice) and w_1 ∈ f_2(Bob), where we recall that w_1 is the "real" 2-world ⟨f_0, f_1⟩. Intuitively, this condition says that knowledge is always correct (unlike belief, which can be incorrect).

(K2) Introspection: If ⟨g_0, …, g_{k−1}⟩ ∈ f_k(i), and k > 1, then g_{k−1}(i) = f_{k−1}(i) ("Agent i knows exactly what he knows"). Let us consider our example. Alice thinks there are two possible 2-worlds, namely w_1 and w_2, since f_2(Alice) = {w_1, w_2}. If we write w_2 as ⟨g_0, g_1⟩, then indeed g_1(Alice) = {p, p̄} = f_1(Alice), as required. Intuitively, although Alice has doubts about Bob's knowledge, she has no doubts about her own knowledge. Thus, in all 2-worlds she considers possible, her knowledge is identical, namely, she does not know whether p is true or false. This condition implies that our agents are introspective about their knowledge.

(K3) Extension: ⟨g_0, …, g_{k−2}⟩ ∈ f_{k−1}(i) iff there is a (k−1)st-order knowledge assignment g_{k−1} such that ⟨g_0, …, g_{k−2}, g_{k−1}⟩ ∈ f_k(i), if k > 1 ("i's higher-order knowledge is an extension of i's lower-order knowledge"). In our example, since Alice thinks either p or p̄ is possible, there is some 2-world she thinks possible (namely, w_1) in which p is true, and there is some 2-world she thinks possible (namely, w_2) in which p is false. Conversely, because she thinks w_1 and w_2 are both possible, it follows that she thinks either p or p̄ is possible. Intuitively, this condition says that the different levels of knowledge describing a knowledge world are consistent with each other.
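Restrictions (K1) and (K2) are mechanically checkable on finite worlds. The following Python sketch (the tuple/frozenset encoding and helper names are our own, chosen only for illustration) verifies both on the 3-world built from Example 2.3; (K3) could be checked similarly:

```python
AGENTS = ("Alice", "Bob")

def ka(**possibles):
    """Encode a knowledge assignment as a hashable, sorted tuple of
    (agent, frozenset of possible worlds) pairs."""
    return tuple(sorted((a, frozenset(ws)) for a, ws in possibles.items()))

def kn(f, agent):
    """Agent's set of possible worlds under knowledge assignment f."""
    return dict(f)[agent]

def check_K1(world):
    """(K1): the real prefix <f0,...,f_{k-1}> is possible for every agent."""
    k = len(world) - 1
    return k < 1 or all(world[:-1] in kn(world[k], a) for a in AGENTS)

def check_K2(world):
    """(K2): in every k-world an agent considers possible, his own
    (k-1)st-order knowledge is exactly what it really is."""
    k = len(world) - 1
    return k <= 1 or all(kn(g[k - 1], a) == kn(world[k - 1], a)
                         for a in AGENTS for g in kn(world[k], a))

# Example 2.3: one primitive proposition p; the 1-worlds p and p-bar.
P, NOT_P = (frozenset({"p"}),), (frozenset(),)
f1 = ka(Alice={P, NOT_P}, Bob={P})            # the real 1st-order knowledge
f1_bar = ka(Alice={P, NOT_P}, Bob={NOT_P})    # ... in w2: Bob knows p is false
f1_known = ka(Alice={P}, Bob={P})             # ... in w3: Alice knows p too

w1 = P + (f1,)          # <p, f1>
w2 = NOT_P + (f1_bar,)
w3 = P + (f1_known,)

f2 = ka(Alice={w1, w2}, Bob={w1, w3})
w = w1 + (f2,)          # the 3-world <f0, f1, f2>

print(check_K1(w), check_K2(w))  # both restrictions hold
```

Note that (K2) holds because in both w_1 and w_2 Alice's 1st-order knowledge is {p, p̄}; replacing w_2 by w_3 in f_2(Alice) would violate it.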
We note that (K1) implies that if ⟨f_0, …, f_k⟩ is a (k+1)-world, then ⟨f_0, …, f_j⟩ is a (j+1)-world, for all j such that 0 ≤ j ≤ k. We also note that our three restrictions imply an apparent strengthening of (K2): namely, if ⟨g_0, …, g_{k−1}⟩ ∈ f_k(i), and k > 1, then g_j(i) = f_j(i) if 1 ≤ j < k. Similarly, our conditions imply that the "compatibility" between f_k and f_{k−1} as expressed by (K3) implies that the same compatibility holds between f_k and f_j if 0 < j < k. Thus, i's higher-level knowledge determines his lower-level knowledge (that is, f_k(i) determines f_j(i) if 0 < j < k). So, higher-order knowledge refines (that is, adds more detail to) lower-level knowledge.

We have phrased (K3) as a necessary and sufficient condition, but it is easy to see that one direction actually follows from (K1) and (K2). Suppose that ⟨g_0, …, g_{k−2}, g_{k−1}⟩ ∈ f_k(i). Then, by (K2), g_{k−1}(i) = f_{k−1}(i). By (K1), ⟨g_0, …, g_{k−2}⟩ ∈ g_{k−1}(i). It follows that ⟨g_0, …, g_{k−2}⟩ ∈ f_{k−1}(i). From now on, whenever we have to verify that (K3) holds, we will check only the nontrivial direction.
It is not obvious that every world is the prefix of some knowledge structure. McCarthy [Mc] posed essentially this question as an open problem in 1975 (in a different framework, of course). In fact, it may not be obvious to the reader that there are any knowledge structures at all. As we shall see in Section 4, the answer to McCarthy's question is positive.

There are tempting ways to "simplify" knowledge structures. It turns out that the alternative definitions are not expressive enough to model the full range of possibilities that knowledge structures can model. For example, one may want to define a kth-order knowledge assignment as an assignment to each agent of a set of (k−1)th-order knowledge assignments (instead of a set of k-worlds). This in fact is the approach taken by van Emde Boas et al. [VSG]. Unfortunately, with this definition we cannot describe the state of knowledge where Alice knows that either p is true and Bob knows it or p is false and Bob does not know it. Essentially, the simpler approach cannot model knowledge about relationships between knowledge and reality, and, more generally, it cannot model knowledge about relationships between different levels of knowledge.
Let f be the knowledge structure ⟨f_0, f_1, …⟩. Define i's view of f, denoted i(f), to be the sequence ⟨f_1(i), f_2(i), …⟩. If f and f′ are knowledge structures, we say that f and f′ are i-equivalent, written f ∼_i f′, if i(f) = i(f′). Thus, f and f′ are i-equivalent if agent i cannot distinguish between them.

At this point we can imagine two notions of what it means for agent i to think that a k-world w is possible. The first is the one we have been implicitly using up to now: agent i thinks w is possible in a knowledge structure f = ⟨f_0, f_1, …⟩ if w ∈ f_k(i). We say that agent i thinks w is conceivable in f if w is a prefix of some knowledge structure f′ such that f ∼_i f′; i.e., w is the prefix of a knowledge structure that agent i cannot distinguish from f. The following theorem assures us that the two notions of "possible world" are identical.
Theorem 2.4: Agent i thinks that w is possible in f iff agent i thinks that w is conceivable in f.

Proof: Assume first that agent i thinks w is conceivable in f, so w = ⟨f′_0, …, f′_{k−1}⟩ is the prefix of a knowledge structure f′ = ⟨f′_0, f′_1, …⟩, where f ∼_i f′. In particular, we have f_k(i) = f′_k(i). By (K1), ⟨f′_0, …, f′_{k−1}⟩ ∈ f′_k(i). Hence, w ∈ f_k(i), so agent i thinks w is possible in f.

Conversely, suppose agent i thinks w is possible in f, so that w = ⟨f′_0, …, f′_{k−1}⟩ ∈ f_k(i). By (K2), f′_{k−1}(i) = f_{k−1}(i). As we commented earlier, it follows easily that f′_j(i) = f_j(i) if 1 ≤ j ≤ k − 1. Since ⟨f′_0, …, f′_{k−1}⟩ ∈ f_k(i), it follows from (K3) that there is some f′_k such that ⟨f′_0, …, f′_{k−1}, f′_k⟩ ∈ f_{k+1}(i). By (K2), f′_k(i) = f_k(i). Similarly, we can find f′_{k+1}, f′_{k+2}, … such that f ∼_i f′ = ⟨f′_0, f′_1, …⟩. Since w is a prefix of f′, it follows that w is conceivable in f.
The set of formulas is the smallest set that contains the primitive propositions, is closed under the Boolean connectives ¬ and ∧, and contains K_i φ if it contains φ (in Section 5, we discuss richer languages). Boolean connectives such as ∨ and ⇒ are defined as usual. We now define the depth of a formula φ, denoted depth(φ).

1. depth(p) = 0 if p is a primitive proposition;
2. depth(¬φ) = depth(φ);
3. depth(φ_1 ∧ φ_2) = max(depth(φ_1), depth(φ_2));
4. depth(K_i φ) = depth(φ) + 1.
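The depth function is a one-line recursion on the syntax tree of a formula. A Python sketch, with formulas encoded as nested tuples (an encoding of our own choosing, not from the paper):

```python
# Formulas as nested tuples: a string is a primitive proposition;
# ("not", f), ("and", f1, f2), and ("K", i, f) build larger formulas.
def depth(phi):
    if isinstance(phi, str):                # primitive proposition
        return 0
    op = phi[0]
    if op == "not":
        return depth(phi[1])
    if op == "and":
        return max(depth(phi[1]), depth(phi[2]))
    if op == "K":                           # K_i phi
        return depth(phi[2]) + 1
    raise ValueError(f"unknown connective: {op}")

# Depth of K_Alice(K_Bob p OR K_Bob not-p), writing a OR b as not(not-a AND not-b):
kb_p = ("K", "Bob", "p")
kb_not_p = ("K", "Bob", ("not", "p"))
phi = ("K", "Alice", ("not", ("and", ("not", kb_p), ("not", kb_not_p))))
print(depth(phi))  # 2
```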
We are almost ready to define what it means for a knowledge structure to satisfy a formula. We begin by defining what it means for an (r+1)-world ⟨f_0, …, f_r⟩ to satisfy formula φ, written ⟨f_0, …, f_r⟩ ⊨ φ, if r ≥ depth(φ).

1. ⟨f_0, …, f_r⟩ ⊨ p, where p is a primitive proposition, if p is true under the truth assignment f_0.
2. ⟨f_0, …, f_r⟩ ⊨ ¬φ if ⟨f_0, …, f_r⟩ ⊭ φ.
3. ⟨f_0, …, f_r⟩ ⊨ φ_1 ∧ φ_2 if ⟨f_0, …, f_r⟩ ⊨ φ_1 and ⟨f_0, …, f_r⟩ ⊨ φ_2.
4. ⟨f_0, …, f_r⟩ ⊨ K_i φ if ⟨g_0, …, g_{r−1}⟩ ⊨ φ for each ⟨g_0, …, g_{r−1}⟩ ∈ f_r(i).

Let us reconsider Example 2.2. Let w_1 and w_2 be, as before, the two 2-worlds that Alice considers possible. Then w_1 ⊨ K_Bob p, since according to w_1, the only 1-world Bob considers possible is ⟨p⟩. Similarly, w_2 ⊨ K_Bob ¬p. Hence, both w_1 and w_2 satisfy (K_Bob p ∨ K_Bob ¬p). Since both of the 2-worlds that Alice considers possible satisfy (K_Bob p ∨ K_Bob ¬p), it follows that in our example ⟨f_0, f_1, f_2⟩ ⊨ K_Alice (K_Bob p ∨ K_Bob ¬p).
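Clauses 1-4 can be implemented directly as a recursive evaluator over finite worlds. The sketch below (our own encoding, not from the paper) rebuilds the worlds of Example 2.3 and verifies that ⟨f_0, f_1, f_2⟩ satisfies K_Alice(K_Bob p ∨ K_Bob ¬p), writing ∨ in terms of ¬ and ∧:

```python
# Worlds encoded as tuples <f0, ..., fr>: f0 is a frozenset of true
# primitive propositions, and each fj maps agents to sets of j-worlds
# (encoded as a hashable tuple of (agent, frozenset) pairs).
def ka(**possibles):
    return tuple(sorted((a, frozenset(ws)) for a, ws in possibles.items()))

def sat(world, phi):
    """<f0,...,fr> |= phi, assuming r >= depth(phi)."""
    if isinstance(phi, str):                       # primitive proposition
        return phi in world[0]
    op = phi[0]
    if op == "not":
        return not sat(world, phi[1])
    if op == "and":
        return sat(world, phi[1]) and sat(world, phi[2])
    if op == "K":                                  # K_i psi
        _, i, psi = phi
        r = len(world) - 1
        return all(sat(g, psi) for g in dict(world[r])[i])
    raise ValueError(op)

# Example 2.3's worlds: w1, w2, w3, and the 3-world <f0, f1, f2>.
P, NOT_P = (frozenset({"p"}),), (frozenset(),)
f1 = ka(Alice={P, NOT_P}, Bob={P})
w1 = P + (f1,)
w2 = NOT_P + (ka(Alice={P, NOT_P}, Bob={NOT_P}),)
w3 = P + (ka(Alice={P}, Bob={P}),)
w = w1 + (ka(Alice={w1, w2}, Bob={w1, w3}),)

# K_Bob p OR K_Bob not-p, written via not/and.
disj = ("not", ("and", ("not", ("K", "Bob", "p")),
                ("not", ("K", "Bob", ("not", "p")))))
print(sat(w, ("K", "Alice", disj)))  # True
```

Note that sat(w, ("K", "Alice", ("K", "Bob", "p"))) is False, matching the example: Alice knows that Bob knows whether p, but not that Bob knows p.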
The next lemma says that to determine whether a formula of depth k is satisfied by a world, we need only consider the (k+1)-ary prefix of the world.

Lemma 2.5: Assume that depth(φ) = k and r ≥ k. Then ⟨f_0, …, f_r⟩ ⊨ φ iff ⟨f_0, …, f_k⟩ ⊨ φ.

Proof: The proof is by induction on formulas. The only nontrivial case is when φ is of the form K_i ψ, where we assume inductively that the lemma holds when φ is ψ. Assume that ⟨f_0, …, f_r⟩ ⊨ K_i ψ, and that depth(K_i ψ) = k ≤ r. Let ⟨g_0, …, g_{k−1}⟩ be an arbitrary member of f_k(i). It follows from (K3) that there exist g_k, …, g_{r−1} such that ⟨g_0, …, g_{k−1}, …, g_{r−1}⟩ ∈ f_r(i). Since ⟨f_0, …, f_r⟩ ⊨ K_i ψ, it follows by definition that ⟨g_0, …, g_{r−1}⟩ ⊨ ψ. So, by inductive assumption, ⟨g_0, …, g_{k−1}⟩ ⊨ ψ. Thus every member of f_k(i) satisfies ψ, and so ⟨f_0, …, f_k⟩ ⊨ K_i ψ, as desired. The proof of the converse is similar.

We say that the knowledge structure f = ⟨f_0, f_1, …⟩ satisfies φ, written f ⊨ φ, if ⟨f_0, …, f_k⟩ ⊨ φ, where k = depth(φ). This is a reasonable definition, since if w = ⟨f_0, …, f_r⟩ is an arbitrary prefix of f such that r ≥ k, then it follows from Lemma 2.5 that f ⊨ φ iff w ⊨ φ. We say that φ is satisfiable if it is satisfied in some knowledge structure, and valid if it is satisfied in every knowledge structure.
Proposition 2.6: All of the axioms are valid.

Proof: (K1) (which says ⟨f_0, …, f_{k−1}⟩ ∈ f_k(i), if k ≥ 1) causes the axiom K_i φ ⇒ φ to be valid. (K2) (which says that if ⟨g_0, …, g_{k−1}⟩ ∈ f_k(i), and k > 1, then g_{k−1}(i) = f_{k−1}(i)) can be viewed as a combination of two restrictions, one with g_{k−1}(i) ⊆ f_{k−1}(i), and one with g_{k−1}(i) ⊇ f_{k−1}(i). The former restriction causes the axiom K_i φ ⇒ K_i K_i φ to be valid, and the latter causes the axiom ¬K_i φ ⇒ K_i ¬K_i φ to be valid. The remaining simple details are left to the reader.
Theorem 2.7: f ⊨ K_i φ iff g ⊨ φ whenever f ∼_i g.

Proof: Assume that depth(K_i φ) = k. We first show that if f ⊨ K_i φ and f ∼_i g, then g ⊨ φ. Let f be ⟨f_0, f_1, …⟩, and let w be the k-world that is a prefix of g. By Theorem 2.4, we know that w ∈ f_k(i). Since f ⊨ K_i φ, it follows by definition that every member of f_k(i) (and in particular, w) satisfies φ. Since w ⊨ φ, it follows by definition that g ⊨ φ, as desired.

Conversely, assume that g ⊨ φ whenever f ∼_i g. To show that f ⊨ K_i φ, we must show that w ⊨ φ for each w ∈ f_k(i). Assume that w ∈ f_k(i). By Theorem 2.4, w is a prefix of some g such that f ∼_i g. By assumption, g ⊨ φ, and so by definition w ⊨ φ.
Thus, agent i knows φ precisely if φ holds in every knowledge structure that i thinks possible. Theorem 2.7 is a powerful tool. It shows the equivalence of two distinct notions of truth. The first notion of truth, which we can call "internal truth", says that K_i φ is true if φ is true in every k-world that i thinks is possible (where depth(K_i φ) = k). These k-worlds are obtained by "looking inside" the knowledge structure (at level k). Thus, internal truth is a finitistic notion. The other notion of truth, which we can call "external truth", says that K_i φ is true if φ is true in each of the knowledge structures that i thinks is possible, of which there can be uncountably many.
3 Knowledge structures and Kripke structures
Previous attempts (cf. [Sa, MSHI, Mo1]) to provide a semantic foundation for reasoning
about knowledge have made use of Kripke structures [Kr].
Suppose we have agents 1, …, n. The corresponding Kripke structure M is a tuple (S, π, K_1, …, K_n), where S is a set of states, π(s) is a truth assignment to the primitive propositions for each state s ∈ S, and K_i is an equivalence relation on S (i.e., a reflexive, symmetric, and transitive binary relation on S). Intuitively, (s, t) ∈ K_i iff s and t are indistinguishable as far as agent i's knowledge is concerned. We now define what it means for a formula φ to be satisfied at a state s of M, written M, s ⊨ φ.

1. M, s ⊨ p, where p is a primitive proposition, if p is true under the truth assignment π(s).
2. M, s ⊨ ¬φ if M, s ⊭ φ.
3. M, s ⊨ φ_1 ∧ φ_2 if M, s ⊨ φ_1 and M, s ⊨ φ_2.
4. M, s ⊨ K_i φ if M, t ⊨ φ for all t such that (s, t) ∈ K_i.
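These clauses translate directly into a recursive model checker. A minimal Python sketch (the two-state structure, agent names, and tuple encoding of formulas are our own illustrative choices, not from the paper):

```python
# A Kripke structure M = (S, pi, K_1, ..., K_n): a set of states, a truth
# assignment pi(s) per state, and one equivalence relation K_i per agent.
# Toy example: p holds at s but not at t; Alice cannot tell s and t apart.
S = ["s", "t"]
pi = {"s": {"p"}, "t": set()}
K = {"Alice": {("s", "s"), ("s", "t"), ("t", "s"), ("t", "t")},
     "Bob": {("s", "s"), ("t", "t")}}

def sat(M, s, phi):
    """M, s |= phi, with formulas as nested tuples."""
    pi, K = M
    if isinstance(phi, str):                 # primitive proposition
        return phi in pi[s]
    op = phi[0]
    if op == "not":
        return not sat(M, s, phi[1])
    if op == "and":
        return sat(M, s, phi[1]) and sat(M, s, phi[2])
    if op == "K":                            # K_i psi: psi at all accessible states
        _, i, psi = phi
        return all(sat(M, t, psi) for (u, t) in K[i] if u == s)
    raise ValueError(op)

M = (pi, K)
print(sat(M, "s", ("K", "Bob", "p")))    # True: Bob knows p at s
print(sat(M, "s", ("K", "Alice", "p")))  # False: Alice considers t possible
```

In this toy structure, Alice also fails to know that Bob knows p, since at the state t she considers possible, Bob knows ¬p instead.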
It is not hard to show that with Kripke semantics, the modality K_i also has all the properties discussed in the previous section (see [HM2, Sa] for more details). The reflexivity of K_i gives us K_i φ ⇒ φ, transitivity gives us K_i φ ⇒ K_i K_i φ, and symmetry and transitivity together give us ¬K_i φ ⇒ K_i ¬K_i φ.
Even though Kripke structures give K_i the desired properties, it is not clear that they actually capture our intuition about knowledge. In particular, it is not clear what state of knowledge corresponds to a state in a Kripke structure. The following theorem clarifies this issue by providing an exact correspondence between knowledge structures and states in Kripke structures.

We say that a state s of Kripke structure M is equivalent to knowledge structure f if M, s ⊨ φ iff f ⊨ φ, for every formula φ. That is, s and f are equivalent if they satisfy the same formulas.
Theorem 3.1: To every Kripke structure M and state s in M, there corresponds a knowledge structure f_{M,s} such that s is equivalent to f_{M,s}. Conversely, there is a Kripke structure M_know such that for every knowledge structure f there is a state s_f in M_know such that f is equivalent to s_f.

Proof: Suppose M is a Kripke structure. For every state s in M we construct a knowledge structure f_{M,s} = ⟨f_0^s, f_1^s, …⟩. First, f_0^s is just the truth assignment π(s). Suppose we have constructed f_0^s, f_1^s, …, f_k^s for each state s in M. Then f_{k+1}^s(i) = {⟨f_0^t, …, f_k^t⟩ : (s, t) ∈ K_i}. We leave it to the reader to check that M, s ⊨ φ iff f_{M,s} ⊨ φ.

For the converse, let M_know = (S_know, π, K_1, …, K_n), where S_know consists of all the knowledge structures, π(f) = f_0, and (f, g) ∈ K_i iff f ∼_i g. Now using Theorem 2.7, we can show M_know, f ⊨ φ iff f ⊨ φ.
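The inductive construction of f_{M,s} can be run directly for finitely many levels. A Python sketch (the toy two-state structure and all names here are our own, chosen only for illustration):

```python
# Build the k-world prefixes of f_{M,s} = <f0^s, f1^s, ...> from a Kripke
# structure, following the inductive construction:
#   f0^s = pi(s),   f_{k+1}^s(i) = { <f0^t, ..., fk^t> : (s, t) in K_i }.
def prefixes(S, pi, K, agents, depth):
    """Return {s: <f0^s, ..., f_depth^s>} as nested hashable tuples."""
    worlds = {s: (frozenset(pi[s]),) for s in S}
    for _ in range(depth):
        nxt = {}
        for s in S:
            fk = tuple(sorted(
                (i, frozenset(worlds[t] for (u, t) in K[i] if u == s))
                for i in agents))
            nxt[s] = worlds[s] + (fk,)
        worlds = nxt
    return worlds

# Toy structure: p true at s, false at t; Alice cannot tell them apart.
S = ["s", "t"]
pi = {"s": {"p"}, "t": set()}
K = {"Alice": {(a, b) for a in S for b in S},
     "Bob": {("s", "s"), ("t", "t")}}
w = prefixes(S, pi, K, ["Alice", "Bob"], 2)
# At depth 1, Bob's only possible 1-world at s is the one making p true:
print(dict(w["s"][1])["Bob"] == frozenset({(frozenset({"p"}),)}))  # True
```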
In [Sa] it is shown that the axioms of the previous section, with modus ponens and knowledge generalization as the rules of inference, give a complete axiomatization for the Kripke structure semantics of knowledge, while in [HM2] it is shown, again with respect to Kripke structure semantics, that the question of deciding if a formula is satisfiable is PSPACE-complete (provided there are at least two agents). From Theorem 3.1 it follows that these results also apply to the knowledge structure semantics, so we get:

Corollary 3.2: The axioms of the previous section, together with modus ponens and knowledge generalization as the rules of inference, give a complete axiomatization for knowledge structures.
Corollary 3.3: The problem of deciding if a formula is satisfiable is PSPACE-complete (provided there are at least two agents).

We note that in [FV1], it is shown how to obtain an elegant, constructive proof of Corollary 3.2, by working only with knowledge structures and not making use of the completeness theorem for Kripke structures.

Theorem 3.1 shows that knowledge structures and Kripke structures have the same theory, but its implications are deeper. It shows that knowledge structures and Kripke structures complement each other in modelling knowledge: knowledge structures model states of knowledge, and Kripke structures model collections of knowledge states.
4 Modelling Finite Information
4.1 The no-information extension
A knowledge structure fully describes a state of knowledge; i.e., it describes arbitrarily deep levels of knowledge. In reality, however, agents have only a finite amount of information. In this section we study the knowledge states that arise from finite amounts of information. Put differently, we study knowledge structures that have finite descriptions.
Consider a variant of Example 1.1, where we have a system with three communicating agents, Alice, Bob, and Charlie. Assume that Bob has sent no messages and has received only one message, a message from Alice saying "p" (i.e., p is true). For ease of exposition, let us also assume that p is the only primitive proposition. Intuitively, all that Bob knows at this point is that Alice knows p. But what state of knowledge does this correspond to?
The answer to this question depends in part on the underlying model of knowledge
acquisition (cf. [FV2, FHV]). For example, is it possible as far as Bob is concerned that
Charlie knows that Bob knows p, even though Bob never sent any messages? The answer
may be yes if each agent stores the information he has about primitive propositions and
about the information he has received from other agents in a database, and if databases
are insecure, so that agents can read each other's databases (then Charlie can find out
what information Bob has received, without receiving any messages from other agents).
It may also be yes if messages are guaranteed to arrive in one round of communication,
for in that case, for all Bob knows, Alice may have sent Charlie a message (in the first
round) saying that she would also send Bob the message "p" in that round. On the other
hand, if message communication is not guaranteed and databases are secure, then Bob
knows that Charlie does not know that Bob knows p. Is it possible that Alice knows
that Bob knows that Alice knows p? Again, the answer depends in part on whether
communication is guaranteed. If there is a chance that messages may not arrive, then it
is not possible for Alice to have such depth 3 knowledge at the end of the first round.
In order to characterize Bob's knowledge state, we will first consider the most "permissive" situation, where we assume that agents have no knowledge about how other
agents acquire information. Thus, agents should allow for all possibilities that are consistent with the information they have. Since Bob has received a message from Alice saying
p is true (and we assume that messages are honest), Bob knows that Alice knows p. Of
course, Bob also knows that he himself knows p, but he has no idea whether Charlie
knows p. Thus there are two 2-worlds that Bob thinks are possible:

⟨p, (Alice ↦ {p}, Bob ↦ {p}, Charlie ↦ {p})⟩

and

⟨p, (Alice ↦ {p}, Bob ↦ {p}, Charlie ↦ {p, ¬p})⟩.
Let W be the set consisting of these two 2-worlds. What 3-worlds does Bob think
possible? Intuitively, Bob should consider a 3-world w = ⟨g_0, g_1, g_2⟩ possible if it is
consistent with Bob's information, that is, if g_2(Bob) = W. Let W′ be the set of 3-worlds
that satisfy this condition. These are the 3-worlds that Bob considers possible.
This idea extends. The set of 4-worlds that Bob thinks are possible consists of all the
4-worlds ⟨g_0, g_1, g_2, g_3⟩ such that g_3(Bob) = W′. We shall generalize this idea shortly
when we define the no-information extension.

Let W be a set of (k−1)-worlds, and i an agent. Define the i-extension of W to be
the set of k-worlds given by {⟨g_0, …, g_{k−1}⟩ : g_{k−1}(i) = W}. Intuitively, this is the set of
k-worlds w such that in w, agent i considers W as the set of possible (k−1)-worlds.
Definition 4.1: Let w = ⟨f_0, …, f_k⟩ be a (k+1)-world. The one-step no-information
extension w⁺ of w is the (k+2)-world ⟨f_0, …, f_k, f_{k+1}⟩, where f_{k+1}(i) is the i-extension
of f_k(i). Thus, f_{k+1}(i) = {⟨g_0, …, g_k⟩ : g_k(i) = f_k(i)}.

In the above definition (and later), we use the convention that f_0(i) is the empty set
for each agent i. Hence, ⟨f_0⟩⁺ = ⟨f_0, f_1⟩, where f_1(i) is the set of all truth assignments,
for each agent i. Intuitively, the one-step no-information extension ⟨f_0, …, f_k, f_{k+1}⟩ of
⟨f_0, …, f_k⟩ describes what each agent knows at depth k+1, assuming that "all that
each agent i knows" is already described by f_k(i) and given the underlying "permissive"
model described above. Thus, f_{k+1}(i) is the set of all k-worlds that are compatible with
i's lower-depth knowledge.
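Since the i-extension is just a set comprehension over worlds, it can be prototyped directly once worlds are encoded as finite data. The following sketch is our own illustration, not a construction from the paper: we encode a world as a nested tuple (a truth assignment followed by knowledge assignments, each a frozenset of (agent, worldset) pairs), and we filter a supplied finite candidate set rather than the set of all k-worlds.

```python
def lookup(g, i):
    """g(i): the set of lower-depth worlds that assignment g gives agent i."""
    return dict(g)[i]

def i_extension(W, i, candidates):
    """Definition 4.1's comprehension {<g0,...,g_{k-1}> : g_{k-1}(i) = W},
    restricted to a finite candidate universe (our simplification)."""
    return {w for w in candidates if lookup(w[-1], i) == W}

# The Alice/Bob/Charlie example: p is true, and Bob considers possible two
# 2-worlds that differ only in what Charlie knows.
p, notp = frozenset({'p'}), frozenset()           # the two truth assignments

def ka(**sets):                                   # build a knowledge assignment
    return frozenset((a, frozenset(ws)) for a, ws in sets.items())

w1 = (p, ka(Alice={(p,)}, Bob={(p,)}, Charlie={(p,)}))
w2 = (p, ka(Alice={(p,)}, Bob={(p,)}, Charlie={(p,), (notp,)}))
W = frozenset({w1, w2})

# A 3-world is in Bob's i-extension of W exactly when its top-level
# knowledge assignment gives Bob the set W.
c1 = (p, w1[1], ka(Alice={w1}, Bob=W, Charlie={w1}))
c2 = (p, w1[1], ka(Alice={w1}, Bob={w1}, Charlie={w1}))
assert i_extension(W, 'Bob', {c1, c2}) == {c1}
```

The encoding choice (frozensets everywhere) is only so that worlds are hashable and can themselves be members of sets of worlds.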
We can relate this definition to the notion of i-equivalence defined in the previous
section as follows. Let w = ⟨f_0, …, f_{k−1}⟩ and w′ = ⟨f′_0, …, f′_{k−1}⟩ be k-worlds. By analogy
with our definition of i-equivalence for knowledge structures, let us say that the worlds
w and w′ are i-equivalent (written w ∼_i w′) if f_j(i) = f′_j(i) for 0 < j < k. Then (as
noted in Section 2), w ∼_i w′ iff f_{k−1}(i) = f′_{k−1}(i). So, f_{k+1}(i) is the i-extension of f_k(i)
iff f_{k+1}(i) = {⟨g_0, …, g_k⟩ : ⟨g_0, …, g_k⟩ ∼_i ⟨f_0, …, f_k⟩}.

The intention of the one-step no-information extension is that w⁺ describes the knowledge of the agents one level higher than the description of their knowledge in w, if w
completely describes the information they have. However, a priori, it is not clear that
w⁺ is even a world, since it may not satisfy the three restrictions described in Section 2.
Before removing this doubt we need some auxiliary machinery.

The following lemma, whose proof is left to the reader, shows that knowledge assignments can be "mixed".
Lemma 4.2: Let ⟨f_0, …, f_k⟩ and ⟨g_0, …, g_k⟩ be (k+1)-worlds such that ⟨f_0, …, f_{k−1}⟩ ∼_i
⟨g_0, …, g_{k−1}⟩. Let h_k be a kth-order knowledge assignment such that h_k(i) = f_k(i) and
h_k(j) = g_k(j) for j ≠ i. Then ⟨g_0, …, g_{k−1}, h_k⟩ is a (k+1)-world.

Let w = ⟨f_0, …, f_k⟩ be a (k+1)-world and let ⟨g_0, …, g_{k−1}⟩ ∈ f_k(i). The i-matching
extension of ⟨g_0, …, g_{k−1}⟩ with respect to w is the sequence ⟨g_0, …, g_{k−1}, g_k⟩, where
g_k(i) = f_k(i) and g_k(j) is the j-extension of g_{k−1}(j) for j ≠ i.
Lemma 4.3:

1. Let ⟨f_0, …, f_{k−1}⟩ be a k-world. Then the one-step no-information extension is a
(k+1)-world.

2. Let w = ⟨f_0, …, f_k⟩ be a (k+1)-world and let ⟨g_0, …, g_{k−1}⟩ ∈ f_k(i). Then the
i-matching extension of ⟨g_0, …, g_{k−1}⟩ with respect to w is a (k+1)-world.
Proof: We prove parts (1) and (2) simultaneously by induction on k. To prove part (1)
we consider a k-world w = ⟨f_0, …, f_{k−1}⟩. Let ⟨f_0, …, f_k⟩ be the one-step no-information
extension of w. We have to show that ⟨f_0, …, f_k⟩ satisfies the restrictions (K1), (K2),
and (K3). The case k = 1 is immediate. For the inductive step, again the fact that (K1)
and (K2) hold follows immediately from the definition of the one-step no-information
extension. For (K3), suppose ⟨g_0, …, g_{k−2}⟩ ∈ f_{k−1}(i). Let w′ be the i-matching extension
of ⟨g_0, …, g_{k−2}⟩ with respect to w. By the induction hypothesis, w′ is a k-world. By
construction w ∼_i w′, so by the definition of the one-step no-information extension,
w′ ∈ f_k(i). Thus, every element of f_{k−1}(i) has an extension in f_k(i), and (K3) holds.

To prove part (2) we consider first the one-step no-information extension ⟨g_0, …, g_k⟩
of ⟨g_0, …, g_{k−1}⟩. By the induction hypothesis, ⟨g_0, …, g_k⟩ is a (k+1)-world. Since
⟨g_0, …, g_{k−1}⟩ ∈ f_k(i), we have that ⟨f_0, …, f_{k−1}⟩ ∼_i ⟨g_0, …, g_{k−1}⟩. The claim now
follows by Lemma 4.2.
Part (1) of Lemma 4.3 tells us that indeed, the one-step no-information extension of
a world is a world.

We now develop some machinery that justifies the name "no-information extension".
We have defined what it means for an (r+1)-world ⟨f_0, …, f_r⟩ to satisfy formula φ,
written ⟨f_0, …, f_r⟩ ⊨ φ, if r ≥ depth(φ). We now give an extension of this definition to
formulas of greater depth. Let us say that a world w must eventually satisfy φ, written
w ⊨⁺ φ, if for every knowledge structure f with w as a prefix, f ⊨ φ. For example, if
the primitive proposition p is true under the truth assignment f_0, then ⟨f_0⟩ ⊨⁺ ¬K_1¬p,
since if p is true, then it is not possible for an agent to know ¬p. Note that we cannot
replace ⊨⁺ by ⊨ (in other words, it is not the case that ⟨f_0⟩ ⊨ ¬K_1¬p), since the depth
of the formula ¬K_1¬p is too big.

Say that a set Σ of formulas logically implies the formula ψ, written Σ ⊨ ψ, if every
knowledge structure that satisfies every formula of Σ also satisfies ψ. That is, Σ logically
implies ψ if there is no "counterexample" knowledge structure that satisfies every formula of Σ
but not ψ.

The next proposition justifies the name "no-information extension", by characterizing
when an agent i knows a formula φ of depth at most k−1 in the one-step no-information
extension w⁺ of a k-ary world w = ⟨f_0, …, f_{k−1}⟩. The proposition says that this happens
precisely when the truth of the formula K_i φ is already guaranteed by w anyway. There
are two ways that we make precise "the truth of the formula K_i φ is already guaranteed
by w anyway". In the first sense (part (2) of Proposition 4.4 below), w ⊨⁺ K_i φ; that
is, w must eventually satisfy K_i φ, as defined above. The second sense (part (3) of
Proposition 4.4) says that knowledge (and lack of knowledge) of agent i, as described by
w, is sufficient to logically imply K_i φ.
Proposition 4.4: Assume that i is an agent, w is a k-world, and φ is a formula of
depth at most k−1. The following are equivalent.

1. w⁺ ⊨ K_i φ

2. w ⊨⁺ K_i φ

3. Let Σ be the set of all formulas of depth at most k−1 of the form K_i ψ or ¬K_i ψ
that are satisfied by w. Then Σ ⊨ K_i φ.
Proof: It is immediate that (2) implies (1), since w is a prefix of w⁺. We now show
that (3) implies (2). Assume that (3) holds. Let f be a knowledge structure with w as a
prefix. We must show that f ⊨ K_i φ. Since w satisfies Σ, so does f. Since Σ ⊨ K_i φ, it
follows that f ⊨ K_i φ, as desired.

We now show that (1) implies (3). Assume that (1) holds. Let w⁺ = ⟨f_0, …, f_k⟩.
Let Σ be as in (3), and let f′ = ⟨f′_0, f′_1, …⟩ be a knowledge structure that satisfies Σ.
We must show that f′ ⊨ K_i φ. That is, let v = ⟨g_0, …, g_{k−1}⟩ be an arbitrary member of
f′_k(i); we must show that v ⊨ φ.

Since f′ satisfies Σ, so does its k-ary prefix ⟨f′_0, …, f′_{k−1}⟩. It therefore follows from
Lemma A.1 of the appendix that f_{k−1}(i) = f′_{k−1}(i). Since v = ⟨g_0, …, g_{k−1}⟩ ∈ f′_k(i),
it follows from (K2) that g_{k−1}(i) = f′_{k−1}(i). Hence, g_{k−1}(i) = f_{k−1}(i). Therefore, by
definition of the one-step no-information extension, v ∈ f_k(i). Since w⁺ = ⟨f_0, …, f_k⟩ ⊨
K_i φ, it follows that v ⊨ φ. This was to be shown.
We now define the no-information extension of a world w to be the result of repeatedly taking one-step no-information extensions. Formally, the no-information extension
w* of w = ⟨f_0, …, f_k⟩ is the sequence ⟨f_0, …, f_k, f_{k+1}, …⟩, where ⟨f_0, …, f_{m+1}⟩ is the one-step no-information
extension of ⟨f_0, …, f_m⟩ for each m ≥ k. Intuitively, the no-information
extension w* is a knowledge structure that describes the knowledge of the agents, if w
completely describes the information they have.

We might hope that an analogous proposition to Proposition 4.4 would hold for the
(full) no-information extension. However, this is not the case. To understand why, let
us denote the two-step no-information extension (w⁺)⁺ of w by w⁺⁺. If we replace w⁺
everywhere in Proposition 4.4 by w⁺⁺, and let φ be of depth k, then the proposition no
longer holds. For example, let p be a primitive proposition that is true under the truth
assignment f_0. Then ⟨f_0⟩⁺ ⊨ ¬K_1 p, so by negative introspection, ⟨f_0⟩⁺⁺ ⊨ K_1¬K_1 p.
(Note that if k = 1, then there are no formulas of depth 0 of the form K_i ψ or ¬K_i ψ; thus, if k = 1,
then Σ is the empty set.)
Therefore, if φ is ¬K_1 p, then ⟨f_0⟩⁺⁺ ⊨ K_1 φ, although ⟨f_0⟩⁺ ⊭ K_1 φ and Σ ⊭ K_1 φ, where
Σ is as in part (3) of Proposition 4.4. What is happening is that negative knowledge
at one level induces positive knowledge at the next level. This would not happen if we
modified the definition of knowledge structures by eliminating negative introspection, as
is done in [Va1].
Let w = ⟨f_0, …, f_k⟩ be a (k+1)-world, and let w⁺ = ⟨f_0, …, f_k, f_{k+1}⟩ be the one-step
no-information extension. Note that for each (k+2)-world ⟨f_0, …, f_k, f′_{k+1}⟩ that
extends w, we have f′_{k+1}(i) ⊆ f_{k+1}(i) for each agent i (this follows from the restriction
that for every ⟨g_0, …, g_k⟩ ∈ f′_{k+1}(i), necessarily g_k(i) = f_k(i)). Thus, the one-step
no-information extension can be characterized by the fact that f_{k+1}(i) is maximal for
each i. This explains why "no extra knowledge" is added in taking the one-step no-information
extension, since the more possible worlds there are, the less knowledge there
is. However, now let ⟨f_0, …, f_k, f_{k+1}, f_{k+2}⟩ be the two-step no-information extension
w⁺⁺. It is not the case that f_{k+2}(i) is maximal for each i, among all two-step extensions
⟨f_0, …, f_k, f′_{k+1}, f′_{k+2}⟩ of w. This is because if f′_{k+1}(i) ≠ f_{k+1}(i), then f′_{k+2}(i) and f_{k+2}(i)
are incomparable (and in fact, disjoint from each other), since every ⟨g_0, …, g_{k+1}⟩ ∈ f′_{k+2}(i)
has g_{k+1}(i) = f′_{k+1}(i), whereas every ⟨g_0, …, g_{k+1}⟩ ∈ f_{k+2}(i) has g_{k+1}(i) = f_{k+1}(i). Again,
the point is that lack of knowledge at one level induces knowledge at the next level, by
negative introspection.

The next theorem follows immediately from part (1) of Lemma 4.3.
Theorem 4.5: For all worlds w, the no-information extension w* is a knowledge structure.
Corollary 4.6: Every world is the prefix of a knowledge structure.

We note that, in fact, it is not hard to show that every world is the prefix of uncountably many distinct knowledge structures. Corollary 4.6 answers McCarthy's question
(see Section 2) positively.
We shall investigate some properties of the no-information extension in the next
section.
4.2 On the Presence of Common Knowledge
Recall that state s of Kripke structure M is equivalent to knowledge structure f if M, s ⊨
φ iff f ⊨ φ, for every formula φ. Since a no-information extension captures what is
perhaps the most natural notion of finite information, one might hope that for each no-information
extension w*, there would be a finite Kripke structure (i.e., one with finitely
many states), one of whose states is equivalent to w*. However, this is not true. In fact,
it is very far from the truth: for no no-information extension is there such a finite Kripke
structure. To understand why, we must first consider the notion of common knowledge.
Assume that the agents are 1, …, n. Let Eφ be a shorthand for K_1φ ∧ … ∧ K_nφ, that
is, "everyone knows φ". Let E⁰φ be φ, and let E^jφ abbreviate E E^{j−1}φ
for j ≥ 1. We say that the formula φ is common knowledge in knowledge structure f if
f ⊨ E^jφ for every j > 0. Similarly, we say that the formula φ is common knowledge
in state s of Kripke structure M if M, s ⊨ E^jφ for every j > 0.

As we shall show, there is never any nontrivial common knowledge in a no-information
extension, as long as there are at least two agents. In fact, we shall show (in Corollary
4.13 below) that φ is common knowledge in w* iff φ is valid. We now show that the situation is completely different for finite Kripke structures. We first need some preliminary
definitions, which will also be useful for some of the theorems we prove later.
Definition 4.7: Let ρ = i_1 … i_s be a finite string of agents. The length of ρ is s.
The reverse of ρ, written ρ^R, is i_s … i_1. If φ is a formula, then let K_ρ φ be an
abbreviation for the formula K_{i_1} … K_{i_s} φ. If ρ is the empty string, then K_ρ φ is taken to
be φ.
Definition 4.8: Let M = (S, π, K_1, …, K_n) be a Kripke structure. We say that there is
a path of length k between two states s and t in S if there is a sequence u_0, …, u_k of states
in S such that s = u_0, t = u_k, and for all 0 ≤ i ≤ k−1 we have that (u_i, u_{i+1}) ∈ K_j for
some 1 ≤ j ≤ n. If there is such a path, then we say that the two states are connected.
The distance between s and t is the length of the shortest path between s and t if such
a path exists, and is undefined otherwise.
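Paths, connectedness, and distance in Definition 4.8 are ordinary graph notions over the union of the accessibility relations, so they can be computed by breadth-first search. A minimal sketch (the encoding of a Kripke structure as a list of relation sets, and the sample relations, are our own illustrative assumptions):

```python
from collections import deque

def distance(s, t, relations):
    """Length of a shortest path from s to t, where `relations` lists the
    accessibility relations K_1, ..., K_n as sets of (state, state) pairs.
    Returns None when s and t are not connected (distance undefined)."""
    step = set().union(*relations)   # one step may use any K_j
    seen, frontier = {s}, deque([(s, 0)])
    while frontier:
        u, d = frontier.popleft()
        if u == t:
            return d
        for (a, b) in step:
            if a == u and b not in seen:
                seen.add(b)
                frontier.append((b, d + 1))
    return None

# A four-state, two-agent example; we close each relation under reflexivity
# and symmetry, since each K_j is an equivalence relation.
K1 = {(0, 1), (1, 0), (2, 3), (3, 2)} | {(i, i) for i in range(4)}
K2 = {(1, 2), (2, 1)} | {(i, i) for i in range(4)}
assert distance(0, 3, [K1, K2]) == 3
assert distance(0, 0, [K1, K2]) == 0
```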
Theorem 4.9: Assume that there are at least two agents. Then for each finite Kripke
structure M, there is some nonvalid formula φ that is common knowledge in every state
of M.
Proof: Let M = (S, π, K_1, …, K_n), n ≥ 2. Let δ be the maximal distance between any
two connected states in S, and let p be a primitive proposition.

Let σ be the formula E^δ p ⇒ E^{δ+1} p. We claim that σ is not valid. By way
of proof, consider the Kripke structure M′ = (S′, π′, K′_1, …, K′_n), where S′ = {0, …, δ, δ+1},
π′(i) makes p true iff 0 ≤ i ≤ δ, for j ≠ 1 and j ≠ 2 the relation K′_j is the trivial
equivalence relation {(i, i) : 0 ≤ i ≤ δ+1}, K′_1 is the reflexive closure of {(i, i+1) : 0 ≤ i ≤ δ and i is even}, and K′_2 is the reflexive closure of {(i, i+1) : 0 ≤ i ≤ δ and i is odd}.
We leave it to the reader to verify that M′, 0 ⊭ σ. (Note, however, that σ is valid if
there is only one agent.)

We now claim that σ is true in every state of S. Suppose that E^δ φ is true in a
state s in S. Then φ must be true in all states whose distance from s is less than or equal to
δ. By the definition of δ, it follows that φ must be true in all states that are connected
to s, and consequently E^{δ+1} φ is true in s. Taking φ to be p, it follows that σ is true in all
states of M, and hence σ is common knowledge in every state of M.
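The counterexample structure M′ is small enough to check mechanically. Below is a hedged Python sketch (our own encoding, with δ fixed to 2, so S′ = {0, 1, 2, 3} and p true exactly on 0, 1, 2): it computes the states where E^j p holds and confirms that E^δ p holds at state 0 while E^{δ+1} p does not, i.e., that σ fails at state 0.

```python
# M' for delta = 2: K1 joins (i, i+1) for even i, K2 for odd i; we close
# each relation under reflexivity and symmetry so that each K_j is an
# equivalence relation. p holds at states 0, 1, 2 and fails at 3.
states = range(4)
p_holds = {s: s <= 2 for s in states}
K1 = {(0, 1), (1, 0), (2, 3), (3, 2)} | {(s, s) for s in states}
K2 = {(1, 2), (2, 1)} | {(s, s) for s in states}

def everyone(truth):
    """E(phi) at s: phi holds at every state some K_j relates to s."""
    return {s: all(truth[t] for (u, t) in (K1 | K2) if u == s)
            for s in states}

def E(j, truth):
    """Iterate E j times, giving the states where E^j(phi) holds."""
    for _ in range(j):
        truth = everyone(truth)
    return truth

# E^2 p holds at state 0, but E^3 p does not: sigma fails at 0.
assert E(2, p_holds)[0] and not E(3, p_holds)[0]
```

Informally, each application of E peels one more state off the p-region along the alternating K1/K2 chain, which is exactly why no finite bound on E^j can force common knowledge.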
(There are other interpretations of the notion of common knowledge. See [Ba].)
Remark 4.10: Interestingly, the theorem does not hold if there is only one agent. The
intuitive reason is that in that case there are only finitely many distinct knowledge states
(given our assumption of a finite set of primitive propositions), and each one of them may
hold in one of the connected components of M. A weaker version of the theorem is still
true, however: for each finite Kripke structure M and each state s of M, there is some
nonvalid formula φ that is common knowledge in s.
We now show that agents have arbitrarily deep nontrivial knowledge in a no-information
extension. It is instructive to consider first the "simplest" no-information extension. Assume that there is only one primitive proposition p, and that there are only two agents,
Alice and Bob. As before, for convenience let us for now denote simply by p the truth
assignment that makes p true. Intuitively, the no-information extension ⟨p⟩* is the knowledge structure where p is true, and where Alice and Bob have no information. Assume
that ⟨p⟩* ⊨ K_Alice φ. We might conjecture that φ must then be valid, since, after all,
Alice has "no information" in ⟨p⟩*. This, however, is not the case. For, since Alice
does not know p, she knows that she does not know p. That is, ⟨p⟩* ⊨ K_Alice ¬K_Alice p,
although ¬K_Alice p is not valid. What if ⟨p⟩* ⊨ K_Alice K_Bob φ? We are certainly tempted
to conjecture that φ must then be valid. After all, in ⟨p⟩*, Alice "has no information"
about Bob. Once again, our intuition is incorrect. For, since Alice knows that the formula
K_Alice p is false, she also knows that Bob cannot know this formula (because anything
that Bob knows must be true); hence, Alice knows that ¬K_Bob K_Alice p is true. Since Alice
knows that Bob is introspective, she knows that if Bob does not know something, then he
knows that he does not know it. Thus, ⟨p⟩* ⊨ K_Alice K_Bob ¬K_Bob K_Alice p. Hence, if φ is the
(non-valid) formula ¬K_Bob K_Alice p, then ⟨p⟩* ⊨ K_Alice K_Bob φ, contrary to the tempting
conjecture. In fact, it follows from the proof of Theorem 4.11 below that this generalizes,
so that if q ∈ {Alice, Bob}*, then ⟨p⟩* ⊨ K_q ¬K_{q^R} p. Thus, in the no-information
extension ⟨p⟩*, where Alice and Bob have "no information" about each other, they nevertheless have arbitrarily deep nontrivial knowledge! Moreover, by taking φ to be the
disjunction of ¬K_{q^R} p over all q of length k, it follows that ⟨p⟩* ⊨ E^k φ. More
generally, we have the following theorem, which we shall show to be the best possible.
Theorem 4.11: For every k, there is a k-world w such that for every l, there is a
nonvalid formula φ of depth l where w* ⊨ E^{l+k−1} φ.
Proof: We assume for convenience that there is exactly one primitive proposition p. Let
w be the k-world ⟨f_0, …, f_{k−1}⟩ where f_0 is the truth assignment that makes p true, and
where f_j(i) = {⟨f_0, …, f_{j−1}⟩} if 1 ≤ j < k, for each agent i. Thus, f_j(i) is a singleton
set for 1 ≤ j < k and for each agent i. For each r ∈ P* of length k−1, it is easy to see
that w ⊨ K_r p. Let q = rs be an arbitrary member of P* of length l+k−1, where r is
of length k−1 and s is of length l. Let ψ_s be the formula ¬K_{s^R} ¬p. We shall now
show that w* ⊨ K_q ψ_s.

We first show, by induction on l, that for every ρ ∈ P* of length l, the formula p
implies K_ρ ¬K_{ρ^R} ¬p. The base case (l = 0) is immediate. Assume inductively that
p implies K_ρ ¬K_{ρ^R} ¬p; we shall show that p implies K_{ρi} ¬K_{iρ^R} ¬p, where i is an
arbitrary agent. It is convenient to give names for reference to the following two simple
facts.

Fact 1. θ implies ¬K_i ¬θ.

Fact 2. If θ implies ξ, then K_t θ implies K_t ξ, for t ∈ P*.

By Fact 1, where we let θ be the formula ¬K_{ρ^R} ¬p, we know that ¬K_{ρ^R} ¬p implies ¬K_i K_{ρ^R} ¬p. Let ξ be K_{ρ^R} ¬p. By the axioms, we know that ¬K_i ξ implies
K_i ¬K_i ξ. So, ¬K_{ρ^R} ¬p implies K_i ¬K_i K_{ρ^R} ¬p. By using Fact 2, we therefore infer that K_ρ ¬K_{ρ^R} ¬p implies K_ρ K_i ¬K_i K_{ρ^R} ¬p. That is, K_ρ ¬K_{ρ^R} ¬p implies
K_{ρi} ¬K_{iρ^R} ¬p. Since by inductive assumption p implies K_ρ ¬K_{ρ^R} ¬p, it follows
that p implies K_{ρi} ¬K_{iρ^R} ¬p. This completes the induction step.

By what we just showed, p implies K_s ¬K_{s^R} ¬p. So by Fact 2, we know that K_r p
implies K_r K_s ¬K_{s^R} ¬p. But K_r K_s ¬K_{s^R} ¬p is just K_q ψ_s. Hence, K_r p implies
K_q ψ_s. Since (a) w ⊨ K_r p, (b) the k-ary prefix of w* is w, and (c) the formula K_r p
is of depth k−1, it follows that w* ⊨ K_r p. Therefore, from what we just showed,
w* ⊨ K_q ψ_s.

Let φ be ⋁{ψ_s : s ∈ P* and s is of length l}. Let q = rs be an arbitrary member of
P* of length l+k−1, where r is of length k−1 and s is of length l. Since, as we just showed, w* ⊨ K_q ψ_s,
it follows that w* ⊨ K_q φ. Since q was arbitrary, it follows that w* ⊨ E^{l+k−1} φ.

Finally, it remains to show that φ is not valid. Let f be the knowledge structure
⟨f_0, f_1, …⟩ where f_0 is the truth assignment that makes p false, and where f_j(i) =
{⟨f_0, …, f_{j−1}⟩} for j ≥ 1 and for each agent i. Thus, f_j(i) is a singleton set for j ≥ 1
and for each agent i. It is easy to see that f ⊨ K_r ¬p, for each r ∈ P*. Thus f ⊨ ¬φ,
so φ is not valid.
Recall that we claimed above that there is no nontrivial common knowledge in a no-information
extension. We might hope to prove this by showing that if w is a k-world
and w* ⊨ E^j φ for some sufficiently large j (say j > k+1), then φ is necessarily
valid. However, Theorem 4.11 shows that this approach will not work. We must take a
more sophisticated approach and consider also the depth of φ, as in the following result,
whose proof appears in the appendix.
Theorem 4.12: Assume that there are at least two agents, φ is a formula of depth r,
and w is a k-world. If w* ⊨ E^{r+k} φ, then φ is valid.
Note that the result of Theorem 4.12 is tight, since Theorem 4.11 shows that we
cannot replace "r+k" in Theorem 4.12 by "r+k−1". And of course it now follows immediately
that there is no nontrivial common knowledge in a no-information extension.
Corollary 4.13: Assume that there are at least two agents, and w is a world. Then φ
is common knowledge in w* iff φ is valid.
Corollary 4.13 contrasts with the situation for finite Kripke structures (Theorem 4.9).
In fact, the following theorem is a simple consequence of Theorem 4.9 and Corollary 4.13.

Theorem 4.14: Assume that there are at least two agents and that w* is a no-information
extension. Then there is no state s of a finite Kripke structure M that is equivalent to
w*. That is, if s is a state of a finite Kripke structure M, then there is a formula φ such
that M, s ⊨ φ but w* ⊨ ¬φ.
Proof: Let ψ be a nonvalid formula which is common knowledge in every state of
M. Such a formula exists, by Theorem 4.9. By Corollary 4.13, there is k such that
w* ⊨ ¬E^k ψ. However, since ψ is common knowledge in every state of M, we know
that M, s ⊨ E^k ψ. So, if we let φ be E^k ψ, then the theorem follows.
Thus, even to model Example 1.1 requires infinitely many states if we use Kripke
structures.
4.3 The least-information extension
When defining the no-information extension of a world w, we assumed that agents consider
possible every world that is compatible with w. The justification is that no assumption
should be made about the underlying mode of knowledge acquisition. In practice, however, agents usually do have information about how knowledge is acquired. Furthermore,
this information is often common knowledge. For example, it may be common knowledge
that the only way new knowledge is acquired is via message passing, and that communication proceeds in synchronous rounds. In this case the no-information extension is
inappropriate, since it does not capture this common knowledge (thus, it is common
knowledge that after, say, one round, Alice does not know that Bob knows that Alice
knows the primitive proposition p). As another example, if it is common knowledge that
each agent stores his information about primitive propositions and about the information he has received from other agents in a database, and it is common knowledge that
databases are insecure, then again the no-information extension is inappropriate. For,
if Alice has peeked at Bob's database and thereby knows that Bob knows p, then Alice
does not consider it possible that Bob knows that Charlie does not know that Bob knows
p, since it is common knowledge that Charlie could have peeked at Bob's database also.
We now consider a generalization of the no-information extension; in particular, we
construct the least-information extension, which is designed to capture the idea of "all
you know, given some common knowledge". In order to explain our construction, we first
need to investigate the notion of common knowledge a little more.
Definition 4.15: A world w appears⁰ in a world u if w is a prefix of u; w appears^{j+1}
in u if w appears^j in u or some world ⟨g_0, …, g_k⟩ appears^j in u and w ∈ g_k(i) for some
agent i. A world w appears in world u if w appears^j in u for some j. A world w appears in
knowledge structure f if w appears in some prefix of f. Let worlds_k(f) (resp. worlds_k(w))
be the set of k-worlds appearing in f (resp. w).
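Because worlds are finite nested data, the appears relation bottoms out and worlds_k can be computed by an explicit traversal. A minimal sketch under an illustrative encoding of our own (a world is a tuple: truth assignment first, then knowledge assignments, each a frozenset of (agent, worldset) pairs):

```python
def worlds_k(w, k):
    """All k-worlds appearing in world w (Definition 4.15)."""
    found, stack = set(), [w]
    while stack:
        u = stack.pop()
        if len(u) >= k:
            found.add(u[:k])          # the k-prefix of u appears^0 in u
        for g in u[1:]:               # each knowledge assignment of u
            for _agent, ws in g:
                stack.extend(ws)      # worlds considered possible also appear
    return found

# A 2-world in which Bob is unsure of p: both truth assignments appear.
p, notp = frozenset({'p'}), frozenset()
g1 = frozenset({('Alice', frozenset({(p,)})),
                ('Bob', frozenset({(p,), (notp,)}))})
u = (p, g1)
assert worlds_k(u, 1) == {(p,), (notp,)}
```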
Lemma 4.16: Suppose depth(φ) = k. Then φ is common knowledge in f iff φ is true
in all the worlds in worlds_{k+1}(f).

Proof: It is easy to show by induction on m that f ⊨ E^m φ iff φ is true in all the
(k+1)-worlds that appear^m in f.
Using the intuition brought out by this lemma, we can now describe our construction
of the least-information extension. More precisely, given a set C of k-worlds (for example,
these worlds can be all the k-worlds satisfying a formula that is commonly known to be
true) and a world w ∈ C, we construct the least-information extension of w with respect
to C. The idea is to build a knowledge structure where C is common knowledge; i.e., the
only k-worlds that appear are those in C. The construction is completely analogous to
that of the no-information extension, except that everything is relativized to C.

Let C be a set of k-worlds, let W be a set of (m−1)-worlds, and i an agent. Define
the i-extension of W with respect to C to be the set of m-worlds given by {⟨g_0, …, g_{m−1}⟩ :
g_{m−1}(i) = W and worlds_k(⟨g_0, …, g_{m−1}⟩) ⊆ C}.
Intuitively, this is the set of m-worlds w such that in w, agent i considers W as the
set of possible (m−1)-worlds, subject to the restriction that it is common knowledge
that the only possible k-worlds are those in C.
Definition 4.17: Let C be a set of k-worlds, and let w = ⟨f_0, …, f_m⟩ be an (m+1)-world.
The one-step least-information extension of w with respect to C, written (w, C)⁺, is the
(m+2)-world ⟨f_0, …, f_m, f_{m+1}⟩, where f_{m+1}(i) is the i-extension of f_m(i) with respect
to C. Thus, f_{m+1}(i) = {⟨g_0, …, g_m⟩ : g_m(i) = f_m(i) and worlds_k(⟨g_0, …, g_m⟩) ⊆ C}.

Intuitively, the one-step least-information extension ⟨f_0, …, f_m, f_{m+1}⟩ of ⟨f_0, …, f_m⟩
describes what each agent knows at depth m+1, assuming that "all that each agent i
knows" is already described by f_m(i) and the fact that it is common knowledge that C is
the set of possible k-worlds. Thus, f_{m+1}(i) is the set of all m-worlds that are compatible
with i's lower-depth knowledge, subject to the constraint that it is common knowledge
that C is the set of possible k-worlds. Note that the one-step no-information extension is
a special case of the one-step least-information extension, where we take C to be the set
of all k-worlds. As in the case of the one-step no-information extension, it is not a priori
clear that (w, C)⁺ is a world. In fact, in general it is not. We shall investigate this issue
shortly.
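Relativizing to C changes only the filter: besides g_m(i) = f_m(i), every k-world appearing in a candidate must lie in C. A sketch under an illustrative nested-tuple encoding of our own (the worlds_k traversal of Definition 4.15 is restated so the fragment is self-contained):

```python
def worlds_k(w, k):
    """All k-worlds appearing in w (Definition 4.15)."""
    found, stack = set(), [w]
    while stack:
        u = stack.pop()
        if len(u) >= k:
            found.add(u[:k])
        for g in u[1:]:
            for _agent, ws in g:
                stack.extend(ws)
    return found

def i_extension_wrt(W, i, C, k, candidates):
    """Definition 4.17's filter: candidate worlds whose top assignment gives
    agent i exactly W and whose appearing k-worlds all lie in C."""
    return {w for w in candidates
            if dict(w[-1])[i] == W and worlds_k(w, k) <= C}

# With C = {<p>} it is common knowledge that p, so a 2-world in which Bob
# considers the not-p assignment possible is ruled out.
p, notp = frozenset({'p'}), frozenset()
def ka(**sets):
    return frozenset((a, frozenset(ws)) for a, ws in sets.items())
c1 = (p, ka(Alice={(p,)}, Bob={(p,)}))
c2 = (p, ka(Alice={(p,)}, Bob={(p,), (notp,)}))
C = {(p,)}
assert i_extension_wrt(frozenset({(p,)}), 'Bob', C, 1, {c1, c2}) == {c1}
```

Taking C to be the set of all k-worlds makes the second conjunct vacuous, recovering the no-information filter as a special case, exactly as the text notes.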
The next proposition is analogous to Proposition 4.4.
Proposition 4.18: Assume that C is a set of k-worlds, i is an agent, w is a k-world,
and φ is a formula of depth at most k−1. Assume that (w, C)⁺ is a world. The following
are equivalent.

1. (w, C)⁺ ⊨ K_i φ

2. w ⊨⁺,C K_i φ

3. Let Σ be the set of all formulas of depth at most k−1 of the form K_i ψ or ¬K_i ψ
that are satisfied by w. Let Γ be the set of all formulas E^r χ, where χ is a depth-(k−1)
formula that is satisfied by every member of C. Then Σ ∪ Γ ⊨ K_i φ.

Proof: The proof is very similar to that of Proposition 4.4. We also make use of the fact
that if a knowledge structure f satisfies Γ, then worlds_k(f) ⊆ C. Details are omitted.
In part (3) of Proposition 4.18, the set Γ of formulas says that every depth-(k−1) formula
that is satisfied by every member of C is common knowledge. In the next section, we shall
enrich our language so that "χ is common knowledge" can be expressed in the language
(by Cχ). For each χ as described in Proposition 4.18, we could then, of course, replace
the set of formulas E^r χ by the single formula Cχ. So, part (3) of Proposition 4.18 says
that knowledge (and lack of knowledge) of agent i, as described by w, along with the
fact that it is common knowledge that C is the set of possible k-worlds, is sufficient to
logically imply K_i φ.
Just as we did with the no-information extension, we define the least-information
extension by taking one-step least-information extensions repeatedly. Formally, if w =
⟨f_0, …, f_{k−1}⟩ ∈ C, then the least-information extension of w with respect to C, written
(w, C)*, is the sequence ⟨f_0, …, f_{k−1}, f_k, …⟩, where ⟨f_0, …, f_{m+1}⟩ is the one-step least-information
extension of ⟨f_0, …, f_m⟩ with respect to C, for each m ≥ k−1.

As with one-step extensions, the no-information extension is a special case of the
least-information extension, where we take C to be the set of all k-worlds.

As we remarked earlier, (w, C)⁺ is not necessarily a world, so of course (w, C)* is not
necessarily a knowledge structure. As we shall see, there may not even be a knowledge
structure where C is common knowledge. In order to characterize when least-information
extensions exist, we need a few technical definitions.
Definition 4.19: Let C be a set of k-worlds, and let i be an agent. C is i-closed if either (a) k = 1 or (b) k > 1 and whenever ⟨f_0, …, f_{k-1}⟩ ∈ C and ⟨g_0, …, g_{k-2}⟩ ∈ f_{k-1}(i), then there is g_{k-1} such that ⟨g_0, …, g_{k-2}, g_{k-1}⟩ ∈ C and g_{k-1}(i) = f_{k-1}(i). C is closed if it is i-closed for each agent i. We say w′ is reachable from w in C if there is a sequence w_0, …, w_k of worlds in C such that w = w_0, w′ = w_k, and for all j < k, we have w_j ∼_i w_{j+1} for some agent i. In this case we say that w′ is distance k from w. Let reach(w, C) be the set of worlds reachable from w in C.

The intuition behind closedness is that a set C of worlds is closed if all the worlds that are considered possible in worlds in C are themselves in C. Thus, if agent i knows that only worlds in C are possible, and

1. ⟨f_0, …, f_{k-1}⟩ ∈ C, and

2. there is no g_{k-1} such that ⟨g_0, …, g_{k-2}, g_{k-1}⟩ ∈ C and g_{k-1}(i) = f_{k-1}(i),

then we cannot have ⟨g_0, …, g_{k-2}⟩ ∈ f_{k-1}(i). The intuition behind reachability is that if w and w′ are in C and w ∼_i w′, then w′ is possible for agent i from w. Thus, the worlds reachable from w in C are the worlds that are in some sense considered possible in w. To further motivate the above notions we present the following proposition.
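To make the reachability notion concrete, here is a small Python sketch (our illustration, not from the paper): worlds are opaque hashable values, C is a finite set, and a predicate equiv(u, v, i) plays the role of u ∼_i v. The toy encoding of 2-worlds in the usage below is likewise only an assumed illustration.

```python
from collections import deque

def reach(w, C, agents, equiv):
    """Return the set of worlds reachable from w in C, i.e. those
    connected to w by a chain of ~_i steps (for various agents i)
    that stays inside C.  BFS over the finite set C."""
    seen, queue = {w}, deque([w])
    while queue:
        u = queue.popleft()
        for v in C:
            if v not in seen and any(equiv(u, v, i) for i in agents):
                seen.add(v)
                queue.append(v)
    return seen
```

For instance, encoding a 2-world over one proposition p and one agent as (truth of p, frozenset of truth values the agent considers possible), and taking equiv to compare the agents' knowledge components, reach returns exactly the worlds sharing a chain of matching knowledge components.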
Proposition 4.20: If f = ⟨f_0, f_1, …⟩ is a knowledge structure, then for all k ≥ 0,

1. the set worlds_k(f) is closed, and

2. worlds_k(f) = reach(⟨f_0, …, f_{k-1}⟩, worlds_k(f)).

Proof: For (1), first note that if a world ⟨g_0, …, g_{k-1}⟩ appears in f, then some extension ⟨g_0, …, g_{k-1}, g_k⟩ also appears in f. This is proved for appears_j by an easy induction on j, using (K3); we leave details to the reader. Now fix k, and suppose that ⟨g_0, …, g_{k-1}⟩ ∈ worlds_k(f) and ⟨h_0, …, h_{k-2}⟩ ∈ g_{k-1}(i). From the observation above it follows that there is some extension ⟨g_0, …, g_k⟩ that appears in f. By (K3) again, there is some h_{k-1} such that ⟨h_0, …, h_{k-2}, h_{k-1}⟩ ∈ g_k(i). By definition, ⟨h_0, …, h_{k-1}⟩ ∈ worlds_k(f), and by (K2), g_{k-1}(i) = h_{k-1}(i). Thus worlds_k(f) is i-closed. Since i was arbitrary, worlds_k(f) is closed.

For (2), we prove by a straightforward induction on j that for all m, if ⟨g_0, …, g_{m-1}⟩ appears_j in f, then ⟨g_0, …, g_{m-1}⟩ ∈ reach(⟨f_0, …, f_{m-1}⟩, worlds_m(f)). If j = 0 the result is immediate. For j ≥ 1, suppose that there is some ⟨h_0, …, h_m⟩ that appears_{j-1} in f such that ⟨g_0, …, g_{m-1}⟩ ∈ h_m(i) for some agent i. By the induction hypothesis, ⟨h_0, …, h_m⟩ ∈ reach(⟨f_0, …, f_m⟩, worlds_{m+1}(f)). It is easy to see that we must also have ⟨h_0, …, h_{m-1}⟩ ∈ reach(⟨f_0, …, f_{m-1}⟩, worlds_m(f)). And since ⟨g_0, …, g_{m-1}⟩ ∼_i ⟨h_0, …, h_{m-1}⟩, it follows that ⟨g_0, …, g_{m-1}⟩ ∈ reach(⟨f_0, …, f_{m-1}⟩, worlds_m(f)).

The next theorem, which is proven in the appendix, characterizes when (w, C)* is a knowledge structure.
Theorem 4.21: (w, C)* is a knowledge structure iff reach(w, C) is a closed set.
Suppose C is a set of k-worlds and w ∈ C. The least-information extension (w, C)* (provided it exists) describes a knowledge structure where it is common knowledge that the only k-worlds that appear are those in C. But the k-worlds that appear might be a proper subset C′ of C. In such a case, it would be common knowledge that the only k-worlds that appear are those in C′. When is it the case that all the worlds of C are considered possible? That is, under what conditions do all the worlds in C appear in (w, C)*? We get a clue to the answer from Proposition 4.20, from which it follows easily that if C = worlds_k(f) and w is the k-ary prefix of f, then reach(w, C) is closed and C = reach(w, C). The next theorem, whose proof appears in the appendix, says that these two conditions on w and C characterize when (w, C)* is a knowledge structure where all the worlds of C appear.
Theorem 4.22: (w, C)* is a knowledge structure where all the worlds of C appear iff C is closed and C = reach(w, C).
We remark that from now on, whenever we write (w, C)*, we will always assume that C is a set of k-worlds for some k, and that reach(w, C) is closed, so that in fact (w, C)* is a knowledge structure.
We shall shortly give a proposition that justifies the name "least-information extension", just as the analogous Proposition 4.4 justifies the name "no-information extension". We first need a definition analogous to that of ⊨^+. Recall that a world w must eventually satisfy φ, written w ⊨^+ φ, if for every knowledge structure f with w as a prefix, f ⊨ φ. Let C be a set of k-worlds. We say that a world w must eventually satisfy φ, if C is common knowledge, written w ⊨^{+,C} φ, if for every knowledge structure f with w as a prefix such that worlds_k(f) ⊆ C, we have f ⊨ φ.

There is a natural sense in which we can view (w, C)* as a "finite model", since it is a finite description of an (infinite) knowledge structure. It is also natural to view a pair (M, s), where M is a finite Kripke structure and s is a state of M, as a "finite model". As we now show, the former class of "finite models" is richer than the latter class: every state in a finite Kripke structure is equivalent to some least-information extension. However, by Theorem 4.14, the converse does not hold, since a no-information extension is a special case of a least-information extension and is not equivalent to any state in a finite Kripke model.

If M is a Kripke structure and s is a state of M, then let f_{M,s} be the knowledge structure constructed in the proof of Theorem 3.1. (Recall that f_{M,s} is equivalent to the state s of M; that is, M, s ⊨ φ iff f_{M,s} ⊨ φ, for every formula φ.)
Theorem 4.23: If M is a finite Kripke structure and s is a state of M, then f_{M,s} is a least-information extension; i.e., there exists a set C of worlds and a world w ∈ C such that f_{M,s} = (w, C)*.

Proof: Let M be the Kripke structure (S, π, K_1, …, K_n), where S, the set of states, is finite. Throughout this proof, we shall denote f_{M,s} by f_s, and write f_s as ⟨s_0, s_1, …⟩, for each s ∈ S. Recall from the construction of Theorem 3.1 that s_k(i) = {⟨t_0, …, t_{k-1}⟩ : (s, t) ∈ K_i} for k > 0. Choose N such that if s, t ∈ S and f_s ≠ f_t, then s_{N-1} ≠ t_{N-1}. Since S is finite, there is some finite N with this property. In fact, we can simply take N to be the maximum of N_{s,t} + 1 for all states s, t in S, where N_{s,t} is the least m such that s_m ≠ t_m if f_s ≠ f_t, and 0 if f_s = f_t. Let C = {⟨s_0, …, s_N⟩ : s ∈ S}. It is easy to see from the construction of f_s that the (N + 1)-worlds that appear in f_s are precisely those in reach(⟨s_0, …, s_N⟩, C), and by Proposition 4.20, these worlds form a closed set. Thus (⟨s_0, …, s_N⟩, C)* is a knowledge structure by Theorem 4.21. We now show that for all s ∈ S, we have f_s = (⟨s_0, …, s_N⟩, C)*.

In fact, we prove the following claim: Suppose s ∈ S, N′ ≥ N, and w = ⟨s′_0, …, s′_{N′}⟩ is such that (a) worlds_{N+1}(w) ⊆ C and (b) s′_i = s_i for i ≤ N. Then s′_i = s_i for all i ≤ N′.

We prove the claim by showing, by induction on m, that s_m = s′_m, for 0 ≤ m ≤ N′. For m ≤ N this follows by assumption. For m > N, suppose ⟨t′_0, …, t′_{m-1}⟩ ∈ s′_m(i). Now ⟨t′_0, …, t′_{m-2}⟩ ∈ s′_{m-1}(i) by (K3), and s′_{m-1}(i) = s_{m-1}(i) by the inductive hypothesis. Thus, ⟨t′_0, …, t′_{m-2}⟩ ∈ s_{m-1}(i), so for some t such that (s, t) ∈ K_i, we have t′_l = t_l for 0 ≤ l ≤ m - 2. Moreover, since the prefix ⟨t′_0, …, t′_N⟩ must be in C by assumption (a), it follows that for some u ∈ S, we have t′_l = u_l for 0 ≤ l ≤ N. Now using the induction hypothesis, we have that t′_l = u_l for 0 ≤ l ≤ m - 1. Since t_l = t′_l for 0 ≤ l ≤ m - 2, it follows that t_l = u_l for 0 ≤ l ≤ m - 2, and, since m - 2 ≥ N - 1, by choice of N we have u_{m-1} = t_{m-1}. Thus, t′_l = t_l for 0 ≤ l ≤ m - 1. Since ⟨t_0, …, t_{m-1}⟩ ∈ s_m(i) by definition of s_m(i), we have that ⟨t′_0, …, t′_{m-1}⟩ ∈ s_m(i). Thus, s′_m(i) ⊆ s_m(i). For the converse, suppose that ⟨t_0, …, t_{m-1}⟩ ∈ s_m(i). Thus by (K3), ⟨t_0, …, t_{m-2}⟩ ∈ s_{m-1}(i), so by the inductive hypothesis, ⟨t_0, …, t_{m-2}⟩ ∈ s′_{m-1}(i). By (K3), there exists some t′_{m-1} such that ⟨t_0, …, t_{m-2}, t′_{m-1}⟩ ∈ s′_m(i). Now if m - 1 > N, it immediately follows from the induction hypothesis that we must have t′_{m-1} = t_{m-1}. If m - 1 = N, then by assumption (a) it follows that ⟨t_0, …, t_{m-2}, t′_{m-1}⟩ ∈ C. By the definition of C, we also have that ⟨t_0, …, t_{m-2}, t_{m-1}⟩ ∈ C. By choice of N, we must have t′_{m-1} = t_{m-1}. In either case, we get s_m(i) ⊆ s′_m(i). Thus it follows that s_m = s′_m, and we are done with the proof of the claim.

Now taking (⟨s_0, …, s_N⟩, C)* = ⟨s′_0, s′_1, …⟩, an easy induction on k using this claim shows that s_k = s′_k for all k, so that f_s = (⟨s_0, …, s_N⟩, C)* as desired.
Remark 4.24: The above proof does not give us any information about the length of the worlds in C. We now show that we can bound this length.

We say that a state s is equivalent to t at the jth stage with respect to agent i, denoted s ≈_{i,j} t, if s_j(i) = t_j(i). It is easy to see that if s ≈_{i,j} t, then s ≈_{i,j-1} t.

Suppose now that for some j ≥ 1 we have that s ≈_{i,j} t iff s ≈_{i,j-1} t for all states s, t ∈ S and for all agents i ∈ P. If that is the case, then we say that j - 1 is stable. We claim that if j - 1 is stable, then j is also stable. As we already observed, if s ≈_{i,j+1} t then s ≈_{i,j} t, so it remains to prove the other direction.

Assume s ≈_{i,j} t. We first prove that s_{j+1}(i) ⊆ t_{j+1}(i). Let u ∈ S be such that (s, u) ∈ K_i, so ⟨u_0, …, u_j⟩ ∈ s_{j+1}(i). Since ⟨u_0, …, u_{j-1}⟩ ∈ s_j(i) = t_j(i), there must be some v ∈ S such that (t, v) ∈ K_i and ⟨u_0, …, u_{j-1}⟩ = ⟨v_0, …, v_{j-1}⟩. By assumption it follows that ⟨u_0, …, u_j⟩ = ⟨v_0, …, v_j⟩, so ⟨u_0, …, u_j⟩ ∈ t_{j+1}(i), as desired. Analogously we can show that t_{j+1}(i) ⊆ s_{j+1}(i). Thus s ≈_{i,j+1} t.

Since the relations ≈_{i,j} are decreasing as a function of j, there is some j_0 ≥ 0 such that all j ≥ j_0 are stable. Assume that j_0 is minimal with respect to that property. Thus, if 0 ≤ j < j_0, then for some agent i, the equivalence relation ≈_{i,j+1} strictly refines ≈_{i,j}. Let m be the number of states in S, and let n be the number of agents. An equivalence relation on S can be strictly refined at most m - 1 times. Thus, j_0 ≤ mn. That is, if we take N to be mn + 1, and if s, t ∈ S and f_s ≠ f_t, then s_{N-1} ≠ t_{N-1}. Consequently, C can be taken to be a set of (mn + 2)-worlds.

We note that if s ≈_{i,j_0} t for all agents i ∈ P, then f_s = f_t. This notion of equivalence between states in Kripke structures is closely related to the notions of equivalence [Ho] and bisimulation [Pa] between states in finite-state automata.
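The stabilization argument above is, at heart, partition refinement, as the pointer to bisimulation suggests. The following Python sketch (our illustration; it refines one global partition rather than the per-agent relations ≈_{i,j}) starts from the partition of states by truth assignment and splits classes until no agent's successor sets can distinguish two states in the same class. Since each round strictly splits some class, at most m - 1 rounds are possible on m states.

```python
def refine(states, labels, succ, agents):
    """Partition refinement on a finite Kripke structure: states start
    grouped by their truth assignment (labels) and are split when their
    successor sets, viewed per agent up to the current partition, differ.
    Returns the stable partition (state -> class id) and the number of
    refinement rounds performed."""
    cls = {s: labels[s] for s in states}
    rounds = 0
    while True:
        new = {s: (cls[s],
                   tuple(frozenset(cls[t] for t in succ[s][i])
                         for i in agents))
               for s in states}
        if len(set(new.values())) == len(set(cls.values())):
            return cls, rounds          # no class was split: stable
        cls, rounds = new, rounds + 1
```

On a three-state chain where two states share a label but lead to differently labelled successors, one refinement round suffices to separate all three states.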
Example 4.25: We now consider a very interesting situation where both no-information extensions and least-information extensions enter the picture. Suppose we have three agents, Alice, Bob, and Charlie, and suppose that there are two primitive propositions, p and q. All the agents observe "reality" (so that they get some information about p and q), but do not communicate, and intuitively have "no information about each other's knowledge". Moreover, it is common knowledge that this is the case. Further, suppose that it is the case that p and q are both true, but Alice just knows that p is true and has no knowledge of q, Bob knows that either p is true or q is true, while Charlie knows that both p and q are true. What knowledge structure f describes this situation? Clearly, its 2-ary prefix is ⟨f_0, f_1⟩, where f_0 = pq, f_1(Alice) = {pq, pq̄}, f_1(Bob) = {pq, pq̄, p̄q}, and f_1(Charlie) = {pq}. Now f_2(Alice) is clearly the Alice-extension of f_1(Alice); Alice considers any 2-world consistent with her own information possible. Similarly, we see that ⟨f_0, f_1, f_2⟩ is the one-step no-information extension of ⟨f_0, f_1⟩. What about f_3? Should we continue taking one-step no-information extensions? The answer is no, since it is common knowledge that "no one has any knowledge about anyone else's knowledge", so it is also common knowledge that the only possible worlds are one-step no-information extensions! Let C be the set of all 3-worlds that are one-step no-information extensions. Then f = (⟨f_0, f_1, f_2⟩, C)*.
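The 2-ary prefix in this example is easy to spell out executably. In the Python sketch below (our encoding, purely illustrative), a 1st-order assignment maps each agent to the set of truth assignments over {p, q} that the agent considers possible, and an agent knows a fact iff it holds throughout that set.

```python
from itertools import product

# Truth assignments over {p, q}, encoded as (p, q) pairs of booleans.
assignments = set(product([True, False], repeat=2))

f0 = (True, True)                                        # reality: p and q both true
f1 = {
    'Alice':   {a for a in assignments if a[0]},         # knows p, nothing about q
    'Bob':     {a for a in assignments if a[0] or a[1]}, # knows only p-or-q
    'Charlie': {(True, True)},                           # knows both p and q
}

def knows(agent, fact):
    """An agent knows a fact iff it holds in every assignment the
    agent considers possible."""
    return all(fact(a) for a in f1[agent])
```

Note that reality f0 belongs to each agent's set of possibilities, the analogue of restriction (K1).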
This example can be generalized. Consider a situation where knowledge is acquired by unreliable synchronous communication. Intuitively, before the first round of communication we can paradoxically say that it is common knowledge that "nobody has nontrivial knowledge of depth greater than 1". (Note that if communication is reliable, then common knowledge about reality can be achieved in one round of communication.) Similarly, after r rounds of communication we can say that it is common knowledge that "nobody has nontrivial knowledge of depth greater than r + 1". Suppose that f describes the state of knowledge after r rounds of communication, where f = ⟨f_0, f_1, …⟩. Then essentially the same reasoning as that above shows that f_{r+2}(i) is the i-extension of f_{r+1}(i), and if C is the set of all (r + 3)-ary one-step no-information extensions, then f = (⟨f_0, …, f_{r+2}⟩, C)*. The knowledge structure f can loosely be described as "the least-information extension of a one-step no-information extension".

It turns out that this situation can also be captured by a finite Kripke model. Let 1, …, n be the agents. Let M be the Kripke structure (S, π, K_1, …, K_n), where S is the set of all k-worlds, where π(⟨f_0, …, f_{k-1}⟩) = f_0, and where K_i = {(w, w′) : w ∼_i w′} for each agent i. This construction is analogous to that of Theorem 3.1, except there we took S to consist of all knowledge structures, rather than all the k-worlds. Of course, in this case M is a finite Kripke structure, since there are only a finite number of k-worlds. By Theorem 4.23, we know that f_{M,s} is a least-information extension for every state s of M. In fact, it turns out that f_{M,s} = (⟨s_0, …, s_k⟩, C)*, where s_k(i) is the i-extension of s_{k-1}(i) for each agent i and C consists of all the (k + 1)-ary one-step no-information extensions. The details of the proof of this fact are straightforward and left to the reader.
4.4 Summary
We now summarize our results on modelling finite information. First, we introduce the no-information extension w* of a world w, which intuitively represents the state of knowledge if all of the information of the agents (about reality and about each other) is already given by the world w. We show that the no-information extension is indeed a knowledge structure. In particular, this shows that every world is the prefix of some knowledge structure. Although on the face of it, both a finite Kripke structure (along with a state of the Kripke structure) and a no-information extension can be considered as "finite models", we show that there is an important difference when there are at least two agents. In every finite Kripke structure, some nonvalid formula is common knowledge in every state. However, in each no-information extension, the only formulas that are common knowledge are valid formulas.

We then define the least-information extension, which is a generalization of the no-information extension that allows certain common knowledge. Let C be a set of k-ary worlds, and let w be a world in C. Intuitively, the least-information extension (w, C)* represents the state of knowledge if all of the information of the agents (about reality and about each other) is already given by the world w, subject to the constraint that it is common knowledge that the only possible k-worlds are those in C. Unlike the no-information extension, the least-information extension is not always a knowledge structure. We characterize when the least-information extension (w, C)* is a knowledge structure. We also characterize when it represents a situation where it is common knowledge that in fact the possible k-worlds are precisely those in C (rather than possibly a proper subset of C). We show that least-information extensions are the most general notion of "finite model" of those we have discussed, in that not only is every no-information extension a least-information extension, but also the knowledge structure that represents the information at a state of a finite Kripke structure is a least-information extension.
5 Modelling Common and Joint Knowledge
Convention: We use lower-case Roman letters such as i, j, etc. to range over natural numbers, and we use lower-case Greek letters such as α, β, etc. to range over ordinals.
5.1 Extended syntax and semantics
In Section 4, we mentioned the important concept of common knowledge. Nevertheless, common knowledge was defined as a metalogical concept, and we could not express it directly in our logic. Thus, it is natural to extend our logic and add to it the notion of common knowledge. That is, if φ is a formula, then we would also like Cφ ("φ is common knowledge") to be a formula, so that we can allow formulas with C "inside". Another important notion that we would like to add to our logic is that of joint knowledge. A fact φ is joint knowledge of a group S if "everybody in S knows that everybody in S knows … φ". Common knowledge is, of course, a special case of joint knowledge, where S is the set of all agents. Joint knowledge is important in situations where some agents are reasoning about the knowledge shared by certain groups of agents (see for example [DM]). Thus, we extend our language by adding a new modality C_S, for each nonempty set S of agents. Let E_S φ be an abbreviation for ⋀_{i∈S} K_i φ (i.e., "everyone in S knows φ"). Furthermore, let E_S^0 φ denote φ, and let E_S^i φ denote E_S E_S^{i-1} φ. Then C_S φ is intended to mean ⋀_{i>0} E_S^i φ.
We now want to give semantics to the extended language. The semantics defined in Section 2 depended on the notion of depth of formulas. Since a joint knowledge formula is intended to be equivalent to an infinite conjunction of formulas of increasing depth, it seems that the depth of a joint knowledge formula should be infinite. This motivates using ordinals to define depth of formulas. More formally, we define the depth of formulas in the extended logic as follows.

1. depth(p) = 0 if p is a primitive proposition;
2. depth(¬φ) = depth(φ);
3. depth(φ_1 ∧ φ_2) = max(depth(φ_1), depth(φ_2));
4. depth(K_i φ) = depth(φ) + 1;
5. depth(C_S φ) = min{λ : λ ≥ depth(φ) + i for all i ≥ 0}.

In other words, depth(C_S φ) is the first limit ordinal greater than depth(φ). For example, if p is a primitive proposition then the depth of K_1 K_2 ¬C_{{3,4}} p is ω + 2 and the depth of C_{{1,2}} ¬C_{{3,4}} p is ω·2.

It is easy to verify the following proposition.

Proposition 5.1: For all extended formulas φ, we have depth(φ) < ω².
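Since all depths fall below ω², they can be represented concretely as pairs (a, b) standing for a·ω + b and compared lexicographically. The Python sketch below (our illustration; the tuple encoding of formulas is an assumption) computes depth for formulas built from ('p',), ('not', φ), ('and', φ_1, φ_2), ('K', i, φ), and ('C', S, φ).

```python
def depth(phi):
    """Depth of an extended formula as an ordinal below omega^2,
    encoded as a pair (a, b) meaning a*omega + b, ordered lexicographically."""
    op = phi[0]
    if op == 'p':                       # primitive proposition
        return (0, 0)
    if op == 'not':
        return depth(phi[1])
    if op == 'and':
        return max(depth(phi[1]), depth(phi[2]))
    if op == 'K':                       # depth(phi) + 1
        a, b = depth(phi[2])
        return (a, b + 1)
    if op == 'C':                       # first limit ordinal above depth(phi)
        a, _ = depth(phi[2])
        return (a + 1, 0)
    raise ValueError(op)
```

On the two example formulas above, this yields (1, 2) for ω + 2 and (2, 0) for ω·2.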
To define the semantics of formulas of infinite depth we need to define worlds of infinite length, that are indexed by ordinals rather than only by natural numbers. The definition is a natural extension of the definitions in Section 2. Instead of defining k-ary worlds for every natural number k, we define α-ary worlds for every ordinal α. A 0th-order knowledge assignment f_0 is a truth assignment to the primitive propositions. We call ⟨f_0⟩ a 1-ary world. Let W_α be the set of all α-ary worlds. An αth-order knowledge assignment is a function f_α : P → 2^{W_α}. An α-sequence of knowledge assignments is a sequence ⟨f_0, f_1, …⟩ of length α, where f_i is an ith-order knowledge assignment. An α-ary world (or α-world, for short) f is an α-sequence of knowledge assignments satisfying certain restrictions. For example, an (ω + 1)-world is of the form ⟨f_0, f_1, …, f_ω⟩, where f_ω(i) is a set of ω-worlds and certain other restrictions are satisfied. If β ≤ α, then the β-prefix of f, denoted f_{<β}, is the β-sequence that is the restriction of f to β.

We now describe the restrictions that an (α + 1)-world f has to satisfy for each agent i.

(K1′) f_{<α} ∈ f_α(i).

(K2′) If g ∈ f_α(i), and α > 1, then g_β(i) = f_β(i) for all β < α.

(K3′) Let 0 < β < α. Then g ∈ f_β(i) iff there is some h ∈ f_α(i) such that g = h_{<β}.

We note that it follows from (K1′) and (K2′) that if h ∈ f_α(i) and 0 < β < α, then h_{<β} ∈ f_β(i). Thus, only the other direction of (K3′) is nontrivial. We also note that it follows from (K1′) that if f is an α-world, then f_{<β} is a β-world for all β ≤ α.

Clearly, (K1′)-(K3′) generalize restrictions (K1)-(K3). It is easy to see that knowledge structures are simply ω-worlds.
We can now define what it means for an α-world f to satisfy a formula φ of depth β, written f ⊨ φ. We first define a binary relation ≫ on ordinals. We say that α ≫ β (or β ≪ α) if either β is a successor ordinal and α > β, or β is a limit ordinal and α ≥ β (in other words, whether β is a successor ordinal or a limit ordinal, α > β′ + 1 for all β′ < β). The relation f ⊨ φ is defined between f and φ iff α ≫ depth(φ).

1. f ⊨ p, where p is a primitive proposition, if p is true under the truth assignment f_0.
2. f ⊨ ¬ψ if f ⊭ ψ.
3. f ⊨ φ_1 ∧ φ_2 if f ⊨ φ_1 and f ⊨ φ_2.
4. If α is a successor ordinal, then f ⊨ K_i ψ if g ⊨ ψ for each g ∈ f_{α-1}(i).
5. If α is a limit ordinal, then f ⊨ K_i ψ if f_{<β+1} ⊨ K_i ψ, where β = depth(K_i ψ).
6. f ⊨ C_S ψ if f ⊨ E_S^i ψ for all i > 0.

It is easy to see that the definitions of Section 2 are a special case of the definitions here. In Section 2, however, we defined satisfaction with respect to structures as a total relation, where here we leave satisfaction as a partial relation between worlds and formulas.

The following lemma, which is analogous to Lemma 2.5, indicates the robustness of the definitions above.

Lemma 5.2: Let f be an α-world, and let φ be a formula such that depth(φ) = β and α ≫ β. Then f ⊨ φ iff f_{<β+1} ⊨ φ. Furthermore, if β is a limit ordinal, then f ⊨ φ iff f_{<β} ⊨ φ.
Proof: The proof is by simultaneous induction on formulas and worlds. The nontrivial cases in the induction on formulas are when φ is of the form K_i ψ or C_S ψ, where we assume inductively that the lemma holds for ψ.

Consider first the case that φ is of the form K_i ψ. Assume that α is a successor ordinal. Suppose that f ⊨ K_i ψ. Let g be an arbitrary member of f_β(i). It follows from (K3′) that g = h_{<β} for some h ∈ f_{α-1}(i). Since f ⊨ K_i ψ, it follows by definition that h ⊨ ψ. By inductive assumption, g ⊨ ψ. Thus, every member of f_β(i) satisfies ψ, and so f_{<β+1} ⊨ φ. The proof of the converse is similar.

Assume now that α is a limit ordinal. Then f ⊨ K_i ψ iff f_{<β+1} ⊨ K_i ψ, and the claim for this case holds by the induction hypothesis.

Consider now the case that φ is of the form C_S ψ. Suppose that f ⊨ C_S ψ. Then f ⊨ E_S^i ψ for all i ≥ 0. Let γ = depth(ψ). By the induction hypothesis, f_{<γ+i+1} ⊨ E_S^i ψ for all i ≥ 0. Again by the induction hypothesis, f_{<β} ⊨ E_S^i ψ for all i ≥ 0, since β > γ + i for all i ≥ 0. In particular, f_{<β} ⊨ φ. Again, the proof of the converse is similar.

We now describe some axioms for joint knowledge. These are generalizations of axioms for common knowledge due to Lehmann [Le] and Milgrom [Mi].
1. C_S φ ⇒ φ
2. C_S φ ⇒ C_S C_S φ
3. ¬C_S φ ⇒ C_S ¬C_S φ
4. C_S φ_1 ∧ C_S(φ_1 ⇒ φ_2) ⇒ C_S φ_2
5. C_{{i}} φ ≡ K_i φ
6. C_S φ ⇒ C_T φ if T ⊆ S
7. C_S(φ ⇒ E_S φ) ⇒ (φ ⇒ C_S φ).

Axioms 1-4 are analogous to axioms 2-5 for knowledge. They say that joint knowledge is correct, introspective, and closed under implication. Axiom 5 deals with the degenerate case of a single agent. Axiom 6 says that joint knowledge is inherited by subsets, and Axiom 7 describes how joint knowledge is built as a fixpoint of knowledge.
We want to show that the axioms are valid. Since satisfaction is a partial relation between worlds and formulas, we have to redefine the notion of validity. We say that a formula φ is valid if it is satisfied by every world for which the satisfaction relation is defined with respect to that formula. An axiom is valid if all its instances are valid.

Proposition 5.3: All the axioms above are valid.
Proof: We demonstrate the validity of Axiom 7. The other axioms are left to the reader. Let f be an α-world, and let φ be a formula of depth β. Assume that f ⊨ C_S(φ ⇒ E_S φ) and f ⊨ φ. By definition we have that f ⊨ E_S^i(φ ⇒ E_S φ), for all i > 0. By the knowledge axioms, it follows that f ⊨ E_S^i φ ⇒ E_S^{i+1} φ, for all i > 0. We now show, by induction on i, that f ⊨ E_S^i φ, for all i > 0. For i = 1, we have that f ⊨ E_S φ, since f ⊨ φ and f ⊨ C_S(φ ⇒ E_S φ). Assume now that f ⊨ E_S^i φ. Since f ⊨ E_S^i φ ⇒ E_S^{i+1} φ, it follows that f ⊨ E_S^{i+1} φ. Thus, f ⊨ C_S φ.

In [FV1] it is shown that the above axiomatization for knowledge and joint knowledge together with modus ponens and joint knowledge generalization ("from φ infer C_S φ") is indeed complete. Another axiomatization is given in [HM2].
5.2 Model-theoretic constructions
We now extend the machinery developed in the previous sections. Our goal is to prove the equivalence of "internal" truth and "external" truth, in analogy with Theorem 2.7. This will then enable us to relate knowledge worlds and Kripke structures as in Theorem 3.1.

Let f and g be α-worlds. We say that f and g are i-equivalent, written f ∼_i g, if f_β(i) = g_β(i) for all β such that 0 < β < α. We call {g : f ∼_i g} the i-equivalence class of f.

We now generalize the no-information extension.

Definition 5.4: Let f be a β-world. Let α ≥ β. The α-no-information extension of f, denoted f^α, is an α-sequence of knowledge assignments defined as follows:

1. f^β = f.
2. If γ > β is a successor ordinal, then (f^γ)_{<γ-1} = f^{γ-1} and (f^γ)_{γ-1}(i) is the i-equivalence class of f^{γ-1} for all agents i.
3. If γ is a limit ordinal, then for each δ < γ, we have (f^γ)_δ = (f^{δ+1})_δ.

It is easy to see that when β < ω (so that f is just a world of finite length), the ω-no-information extension of f is the same as the no-information extension as we defined it in Section 4.

We need to prove that the no-information extension yields knowledge worlds. We first need the analogue of Lemma 4.2.

Lemma 5.5: Let f and g be α-worlds for some successor ordinal α such that f_{<α-1} ∼_i g_{<α-1}. Let h be an (α-1)th-order knowledge assignment such that h(i) = f_{α-1}(i) and h(j) = g_{α-1}(j) for j ≠ i. Then ⟨g_0, …, g_{α-2}, h⟩ is an α-world.
We also need a generalization of the matching extension. Let f be an α-world, where α is a successor ordinal. Let g ∈ f_{α-1}(i). The i-matching extension of g with respect to f is the α-sequence h of knowledge assignments, where h_{<α-1} = g, h_{α-1}(i) = f_{α-1}(i), and h_{α-1}(j) is the j-equivalence class of g for j ≠ i. We now prove the analogue of Lemma 4.3.
Theorem 5.6: Let f be a β-world.

1. Let α ≥ β. Then f^α is an α-world.
2. If β is a successor ordinal, and g ∈ f_{β-1}(i), then the i-matching extension of g with respect to f is a β-world.

Proof: The proof is by induction on β.

To prove (1), it suffices to prove that if f is an α-world, then f^{α+1} satisfies the restrictions (K1′), (K2′), and (K3′). The fact that (K1′) and (K2′) hold is immediate from the definition. We prove that (K3′) holds by induction on α.

If α = 1, then the claim holds vacuously. For the inductive step, suppose that 0 < β < α and g ∈ f_β(i). We will construct an α-world h such that g = h_{<β} and h ∼_i f. Thus, h ∈ (f^{α+1})_α(i). We inductively describe h_{<γ} for β ≤ γ ≤ α, where h_{<γ} ∼_i f_{<γ}.

For the basis of the induction, we take h_{<β} to be g. Assume now that γ ≤ α and h_{<δ} has been defined for all δ < γ. If γ is a limit ordinal, then h_{<γ} is already defined, so suppose that γ is a successor ordinal. Then we have h_{<γ-1} ∼_i f_{<γ-1}. Let h_{<γ} be the i-matching extension of h_{<γ-1} with respect to f_{<γ}. By the induction hypothesis, h_{<γ} is a γ-world, and clearly h_{<γ} ∼_i f_{<γ}. This completes the proof that (K3′) holds.

To prove part (2) we consider first the β-sequence g^β, which by the induction hypothesis is a β-world. Since g ∈ f_{β-1}(i), we have that g ∼_i f_{<β-1}. The claim now follows by Lemma 5.5.

A consequence of the theorem is that every β-world can be extended to an α-world, for any α > β. This generalizes the result in Section 4 that every world can be extended to a structure. Furthermore, while proving the theorem we also proved another useful result.
Lemma 5.7: Let f be an α-world, and let g ∈ f_β(i), where β < α. Then there is some α-world e such that g = e_{<β} and e ∼_i f.

Let S be a set of agents. We say that an α-world g is S-reachable from an α-world f if there is a sequence f_1, …, f_k such that f = f_1, g = f_k, and for each j such that 1 ≤ j ≤ k - 1 there is some agent i ∈ S such that f_j ∼_i f_{j+1}. In this case we say that g is S-distance k from f.
We can now prove the analogue of Theorem 2.7, which shows the equivalence of internal and external notions of truth.

Theorem 5.8: Let f be an α-world.

1. f ⊨ K_i φ iff g ⊨ φ whenever f ∼_i g.
2. f ⊨ C_S φ iff g ⊨ φ whenever g is S-reachable from f.

Proof:

1. Suppose first that α is a successor ordinal.

(a) Assume that f ⊨ K_i φ. By definition, h ⊨ φ whenever h ∈ f_{α-1}(i). Let g be such that f ∼_i g. By (K1′), g_{<α-1} ∈ g_{α-1}(i). But g_{α-1}(i) = f_{α-1}(i). It follows that g_{<α-1} ∈ f_{α-1}(i), so g_{<α-1} ⊨ φ. By Lemma 5.2, we have g ⊨ φ.

(b) Assume that g ⊨ φ whenever f ∼_i g. Let h ∈ f_{α-1}(i). By Lemma 5.7, there is an α-world g such that h = g_{<α-1} and g ∼_i f. Thus, g ⊨ φ. By Lemma 5.2, it follows that h ⊨ φ.

Suppose now that α is a limit ordinal, and assume that the claim has been proven for all smaller ordinals. Let β = depth(φ).

(a) Assume that f ⊨ K_i φ. Then, by Lemma 5.2, f_{<β+2} ⊨ K_i φ. Since β + 2 < α, we have that h ⊨ φ for every (β + 2)-world h ∼_i f_{<β+2}. Now assume g ∼_i f. It follows that g_{<β+2} ∼_i f_{<β+2}. Thus, by Lemma 5.2, it follows that g ⊨ φ.

(b) Assume that g ⊨ φ whenever g ∼_i f. Let h ∈ f_{β+1}(i). By (K2′), h ∼_i f_{<β+1}. By Lemma 5.7, there is an α-world g such that g ∼_i f and h = g_{<β+1}. By assumption, g ⊨ φ, so h ⊨ φ, by Lemma 5.2. We have shown that h ⊨ φ whenever h ∈ f_{β+1}(i). It follows that f_{<β+2} ⊨ K_i φ, and by Lemma 5.2, we have that f ⊨ K_i φ.

2. It is easy to prove, by induction on i, that f ⊨ E_S^i φ iff g ⊨ φ whenever g is S-distance i from f. The claim follows, since f ⊨ C_S φ iff f ⊨ E_S^i φ for all i ≥ 0.
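Part 2 of Theorem 5.8 is the reachability view of joint knowledge, and on a finite Kripke structure it yields a direct model-checking procedure: C_S φ holds at s iff φ holds at every state reachable from s in one or more steps along the union of the relations K_i for i ∈ S. A Python sketch follows (ours, not the paper's; the structure and formula encodings are assumptions for illustration).

```python
def holds(M, s, phi):
    """Check an extended formula at state s of a finite Kripke structure
    M = (pi, K), where pi[s] is the set of propositions true at s and
    K[i][s] is the set of states agent i considers possible at s.
    Formulas are tuples: ('prop', name), ('not', f), ('and', f, g),
    ('K', i, f), ('C', S, f)."""
    pi, K = M
    op = phi[0]
    if op == 'prop':
        return phi[1] in pi[s]
    if op == 'not':
        return not holds(M, s, phi[1])
    if op == 'and':
        return holds(M, s, phi[1]) and holds(M, s, phi[2])
    if op == 'K':
        return all(holds(M, t, phi[2]) for t in K[phi[1]][s])
    if op == 'C':
        S, psi = phi[1], phi[2]
        # C_S psi: psi at every state S-reachable in >= 1 steps.
        frontier, seen = {t for i in S for t in K[i][s]}, set()
        while frontier:
            t = frontier.pop()
            if t in seen:
                continue
            seen.add(t)
            if not holds(M, t, psi):
                return False
            frontier |= {u for i in S for u in K[i][t]}
        return True
    raise ValueError(op)
```

On a two-state structure where agent 1 cannot distinguish nothing and agent 2 cannot distinguish the states, K_1 p holds where p does, yet C_{{1,2}} p fails, exactly the gap between individual and joint knowledge.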
The next result is analogous to Theorem 3.1. Here we consider a state s of a Kripke structure M to be equivalent to an ω²-world if they satisfy the same extended formulas.

Corollary 5.9: To every Kripke structure M and state s in M, there corresponds an ω²-world f_{M,s} such that s is equivalent to f_{M,s}. Conversely, there is a Kripke structure M_know such that for every ω²-world f there is a state s_f in M_know such that f is equivalent to s_f.

Proof: The proof is analogous to the proof of Theorem 3.1. There are two differences. First, the ω²-world f_{M,s} is constructed by transfinite induction. Second, Theorem 5.8 is used instead of Theorem 2.7.
5.3 Was that necessary?
So far we have claimed that it is necessary to define infinitary worlds in order to give semantics to extended formulas. But is that really the case? We are now going to show some evidence to the contrary.

Let KC-formulas be formulas that use the modalities K_i and C (i.e., C_P, where P is the set of all agents), but not C_S when S is a proper subset of P. Let K-formulas be formulas that use only the K_i modalities.

The next theorem says that for KC-formulas, the extension beyond ω is redundant.

Theorem 5.10: Let ψ be a KC-formula. Let f and g be α-worlds such that f_{<ω} = g_{<ω}. Then f ⊨ ψ iff g ⊨ ψ.

Proof: See appendix.
The above theorem indicates how we can define the semantics of arbitrary KC-formulas in ω-worlds. We denote this new satisfaction relation by the symbol ⊩.

1. f ⊩ p, where p is a primitive proposition, if p is true under the truth assignment f_0.
2. f ⊩ ¬ψ if it is not the case that f ⊩ ψ.
3. f ⊩ φ_1 ∧ φ_2 if f ⊩ φ_1 and f ⊩ φ_2.
4. f ⊩ K_i ψ if g ⊩ ψ whenever g ∼_i f.
5. f ⊩ Cψ if f ⊩ E^i ψ for all i > 0.
Theorem 5.11: Let ψ be a KC-formula of depth β, and let f be an α-world, α ≫ β. Then f ⊨ ψ iff f_{<ω} ⊩ ψ.

Proof: We prove the claim by transfinite induction on the depth of ψ and an induction on the Boolean structure of ψ. The nontrivial cases are where ψ is either of the form K_i φ or of the form Cφ.

Consider the first case. Suppose that f ⊨ K_i φ. Let h be an ω-world such that h ∼_i f_{<ω}. Consider h^α. By Theorem 5.10, we have h^α ⊨ K_i φ, so we also have h^α ⊨ φ, and by the induction hypothesis, h ⊩ φ. Since h is an arbitrary ω-world such that h ∼_i f_{<ω}, it follows that f_{<ω} ⊩ K_i φ.

Suppose that f_{<ω} ⊩ K_i φ. Let g be an α-world such that g ∼_i f. So g_{<ω} ⊩ φ, by definition. By the induction hypothesis, g ⊨ φ. Since g is an arbitrary α-world such that g ∼_i f, it follows, by Theorem 5.8, that f ⊨ K_i φ.

Consider the second case. Then f ⊨ Cφ iff f ⊨ E^j φ for all j ≥ 0 iff (by the induction hypothesis) f_{<ω} ⊩ E^j φ for all j ≥ 0 iff f_{<ω} ⊩ Cφ.
According to the above theorem j= and k? are consistent with each other, so we
need not distinguish between them. Note, however, that k? may be dened where j= is
undened.
As a consequence of the above theorem we show that when dealing with satisability
of KC -formulas, it is sucient to consider least-information extensions.
Theorem 5.12: Let φ be a satisfiable KC-formula. Then there is an ω-world f such that f ⊩ φ and f is a least-information extension.

Proof: By [HM2], if φ is satisfiable, then there is a finite Kripke structure M and a state s in M such that M, s ⊨ φ. By Corollary 5.9, f_{M,s} ⊨ φ. Let g = f_{M,s}. By Theorem 5.11, g_{<ω} ⊩ φ. But by Theorem 4.23, g_{<ω} is a least-information extension.

Since a least-information extension has a finite description, this theorem can be viewed as a "finite model" theorem: if a KC-formula is satisfiable, then it is satisfiable in a "finite" model.
We can now use KC-formulas to give another demonstration of the subtlety of Corollary 4.13. Corollary 4.13 says that if φ is a K-formula that is common knowledge in a no-information extension, then φ is valid. The theorem is false if φ is allowed to be a KC-formula. For example, if p is a primitive proposition, then we can show that the KC-formula ¬Cp, which is not valid, is common knowledge in every no-information extension. As a side remark, we note that this formula ¬Cp can be viewed as an abbreviation for the infinite disjunction ¬Ep ∨ ¬EEp ∨ ¬EEEp ∨ ⋯. It is interesting to note that although this infinite disjunction is common knowledge in every no-information extension, no finite part of it is common knowledge in any no-information extension.
The above theorems show that it is sufficient to consider ω-worlds when dealing with KC-formulas. The question then arises whether there is indeed a need to define "longer" worlds.
Example 5.13: Consider the following situation. Agents 1 and 2 are communicating about a fact p through an unreliable channel, one over which messages are not guaranteed to arrive. As shown in [HM1], under such conditions, arbitrarily deep knowledge is attainable, but common knowledge is not. More precisely, E^k_{1,2} p is attainable for all k ≥ 1, but C_{1,2} p is not attainable. If agent 3 does not know how many rounds of successful communication have transpired, then K_3 ¬C_{1,2} p holds and ¬K_3 ¬E^k_{1,2} p holds for all k ≥ 1.

We claim that we need an (ω+1)-world to model this situation. That is, we claim that there is no ω-world f where K_3 ¬C_{1,2} p holds and ¬K_3 ¬E^k_{1,2} p holds for all k ≥ 1. To make sense of what it means for a formula such as K_3 ¬C_{1,2} p to hold in f, we extend our definition of ⊩ in the natural way to apply to such formulas. Thus, f ⊩ K_3 ¬C_{1,2} p iff for every ω-world g such that f ∼_3 g, necessarily g ⊩ ¬C_{1,2} p (which, because ¬C_{1,2} p is of depth ω, means that g ⊨ ¬C_{1,2} p).
Assume now that f = ⟨f_0, f_1, f_2, ...⟩ is an ω-world and f ⊨ ¬K_3 ¬E^k_{1,2} p for all k ≥ 1; to prove our claim we must show that f ⊩ ¬K_3 ¬C_{1,2} p.

We can assume without loss of generality that p is the only primitive proposition (otherwise, we can "restrict" f by "erasing" all of the primitive propositions other than p; the straightforward details are left to the reader). Let T be a tree with levels 0, 1, 2, ..., where the kth level of the tree contains all of the members ⟨g_0, ..., g_k⟩ of f_{k+1}(3) that satisfy E^k_{1,2} p, and where the parent of the (k+1)-ary world ⟨g_0, ..., g_k⟩ is its k-ary prefix ⟨g_0, ..., g_{k−1}⟩ if k ≥ 1. Thus, the kth level contains all of the (k+1)-ary worlds that agent 3 considers possible and that satisfy E^k_{1,2} p. (We see that T is a tree rather than a forest, since there is only one member at level 0, namely ⟨g_0⟩, where g_0 is the truth assignment where p is true.) For each k, there is some world ⟨g_0, ..., g_k⟩ at level k, since f ⊨ ¬K_3 ¬E^k_{1,2} p. Since E^k_{1,2} p ⇒ E^r_{1,2} p for each r ≤ k, it follows that if ⟨g_0, ..., g_k⟩ is in T, then so are all of its prefixes. Thus, there are arbitrarily long finite paths in the tree. The tree has finite fanout, since there are only a finite number of possible k-worlds for each k. By König's Infinity Lemma, T contains an infinite path. This infinite path corresponds to a knowledge structure g = ⟨g_0, g_1, g_2, ...⟩. Since ⟨g_0, ..., g_k⟩ ∈ f_{k+1}(3), it follows by restriction (K2) on knowledge structures that g_k(3) = f_k(3) for every k. So f ∼_3 g. Also g ⊨ C_{1,2} p, since g ⊨ E^k_{1,2} p for each k. Thus, f ⊩ ¬K_3 ¬C_{1,2} p, as desired.

In our example we would like to be able to model a situation where the facts ¬K_3 ¬E^k_{1,2} p, k ≥ 1, and K_3 ¬C_{1,2} p all hold. It is easy to imagine another situation where the facts ¬K_3 ¬E^k_{1,2} p, k ≥ 1, and ¬K_3 ¬C_{1,2} p all hold. The crucial point is that the facts ¬K_3 ¬E^k_{1,2} p, k ≥ 1, should not determine whether K_3 ¬C_{1,2} p holds. If we restrict attention to ω-worlds then, as we showed, this independence fails. In an ω-world where the facts ¬K_3 ¬E^k_{1,2} p, k ≥ 1, are all true, the fact ¬K_3 ¬C_{1,2} p is forced to be true as well.

This can be explained as follows. It is straightforward to show that under our ⊩ semantics, agent 3 considers an ω-world g = ⟨g_0, g_1, g_2, ...⟩ possible precisely if agent 3 considers the k-ary prefix ⟨g_0, ..., g_{k−1}⟩ possible for every k ≥ 1 (that is, ⟨g_0, ..., g_{k−1}⟩ ∈ f_k(3)). Under the assumption that ¬K_3 ¬E^k_{1,2} p holds for every k ≥ 1, our arguments above actually show that there is an ω-world g that satisfies C_{1,2} p such that agent 3 considers every finite prefix of g possible. So under the ⊩ semantics, agent 3 is forced to consider g possible. The whole point of having an ωth-order knowledge assignment f_ω is to be able to model the fact that agent 3 considers g impossible (via g ∉ f_ω(3)) even though agent 3 considers every finite prefix of g possible.
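The König's Lemma step above can be made concrete with a small sketch (ours, not the paper's): in a finitely branching, prefix-closed tree, always descending into a child whose subtree is deepest traces out an arbitrarily long path. For a truly infinite tree one would descend into a child rooting an infinite subtree; here we illustrate with a finite, prefix-closed set of tuples.

```python
# Greedy deepest-path extraction in a finite, prefix-closed tree of tuples.

def children(node, tree):
    """Immediate one-step extensions of `node` that lie in the tree."""
    return [t for t in tree if len(t) == len(node) + 1 and t[:len(node)] == node]

def subtree_depth(node, tree):
    """Depth of the subtree rooted at `node` (0 for a leaf)."""
    kids = children(node, tree)
    return 0 if not kids else 1 + max(subtree_depth(k, tree) for k in kids)

def extract_path(tree):
    """Follow deepest subtrees from the root (the empty prefix) downward."""
    node = ()
    path = [node]
    while children(node, tree):
        node = max(children(node, tree), key=lambda k: subtree_depth(k, tree))
        path.append(node)
    return path
```

The finite-fanout hypothesis is what guarantees that some child always roots a subtree as deep as needed.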
The above example suggests that our transfinite construction is indeed necessary. Intuitively, "long worlds" are needed to model "deep" knowledge. This intuition, however, needs to be sharpened, since KC-formulas can express deep knowledge but do not require long models. The answer is that KC-formulas do not really express knowledge of depth greater than ω, since they are always equivalent to formulas of depth ω. On the other hand, a formula such as K_3 ¬C_{1,2} p is inherently of depth ω + 1. The next theorem says that the situation in general is unlike that with KC-formulas φ, where we could decide the truth of φ by considering only the prefix of length ω. Specifically, the theorem says that there is no α < ω² such that for every formula φ, we can decide the truth of φ in a world w by considering only the prefix of w of length α.
Theorem 5.14: For every ordinal α with 1 ≤ α < ω², there is a formula φ and there are (α+1)-worlds f and g such that f_{<α} = g_{<α}, f ⊨ φ, and g ⊭ φ.

Proof: See appendix.
We note that another approach to modelling joint knowledge is described in [FV1]. In that approach, knowledge assignments assign sets of worlds to sets of agents, rather than to individual agents. The advantage of that approach is that one does not need to consider α-worlds for α > ω.
5.4 Summary
We now summarize our results on modelling common and joint knowledge. Common and joint knowledge are described by formulas of infinite depth (in fact, by formulas whose depth is given by ordinals up to ω²). Therefore, to properly model states of knowledge where such formulas might hold, we need to consider not just worlds of length ω (that is, knowledge structures, as in the previous sections), but worlds of length up to ω². We show that, as before, the truth of a formula is determined by a prefix of appropriate length of the world. We generalize the definition of no-information extension, and show that, as before, the result is indeed a world. In particular, this shows that every α-world is a prefix of some β-world whenever α ≤ β, which generalizes the result of Section 4 that says this when α is finite and β = ω. We show that, as before, internal and external notions of truth coincide. As before, we use this to prove an equivalence between knowledge worlds and Kripke structures.

We show that if we restrict our attention to KC-formulas φ (those where there is no joint knowledge over proper subsets of the set of agents), then the truth of φ in an α-world f is already determined by the ω-prefix of f. We then show how to define the semantics of KC-formulas in ω-worlds directly. We also prove a finite-model theorem, by showing that if a KC-formula is satisfiable, then it is satisfiable in an ω-world that is a least-information extension. Finally, we show the rather difficult technical result that when we allow general extended formulas (which may discuss joint knowledge among proper subsets of the set of agents), then we really need to allow "long worlds". Thus, there is no α < ω² such that for every formula φ, we can decide the truth of φ in a world f by considering only the prefix of f of length α.
6 Extensions of the Approach
6.1 A Bayesian Approach
Economists have taken a Bayesian approach to modelling knowledge, where instead of
having possible and impossible worlds we associate a probability distribution on worlds
with each agent [Au, BD1]. In a non-Bayesian setting, an agent knows a fact p if p holds
in all the worlds that the agent considers possible. In a Bayesian setting, an agent knows
a fact p if the probability that p holds according to the agent's distribution is 1 [BD1].
(See also [FH, MS] for an approach that mixes Bayesian and non-Bayesian approaches).
Mertens and Zamir describe a Bayesian analogue to knowledge structures [MZ], which they call infinite hierarchies of beliefs. If X is a set, then let Δ(X) denote the space of probability distributions over X. Mertens and Zamir start with a set S called the uncertainty space (for technical reasons, this set is required to satisfy certain topological properties). Intuitively, S consists of all possible states of nature. A 0th-order Bayesian assignment f_0 is simply an element of S, and ⟨f_0⟩ is a Bayesian 1-world.

Assume inductively that the set X_k of Bayesian k-worlds has been defined. A kth-order Bayesian assignment is a function f_k : P → Δ(X_k). Intuitively, f_k associates with every agent a probability distribution on the set of Bayesian k-worlds. A (k+1)-sequence of Bayesian assignments is a sequence ⟨f_0, ..., f_k⟩, where f_i is an ith-order Bayesian assignment. A (k+1)-world is a (k+1)-sequence of Bayesian assignments that satisfies certain semantic restrictions, which we shall not list. An infinite sequence ⟨f_0, f_1, f_2, ...⟩ is called a Bayesian knowledge structure if the prefix ⟨f_0, ..., f_{k−1}⟩ is a Bayesian k-world for each k. The Bayesian approach has the interesting feature that there is no point in explicitly defining transfinite assignments, since these are already determined by the kth-order assignments (this result is implicit in Theorem 2.9 of [MZ]). We will come back to this point later.
The connection between Bayesian Kripke structures ([Au, BD1]) and Bayesian knowledge structures ([MZ]) has been studied in [BD2, MZ, TW]. The conclusion is that Bayesian Kripke structures and Bayesian knowledge structures stand in a relationship somewhat analogous to the one exhibited in this paper between Kripke structures and knowledge structures. Note that the results cannot be precisely analogous, since none of those papers has a notion of a logical language in which assertions about the structures can be made.
It may seem that the Bayesian approach is more expressive than the non-Bayesian approach. After all, in the Bayesian approach not only do we distinguish between possible (having positive probability) and impossible (having probability zero) worlds, but we actually supply a degree of possibility to worlds. This is indeed the case with finite-order assignments over a finite uncertainty space. However, once we consider infinite uncertainty spaces or transfinite assignments, the picture gets more involved. Now we cannot represent the fact that a world is impossible just by assigning it probability zero; in fact, probability is assigned to sets of worlds and not to individual worlds. Thus, in general, the expressive power of knowledge assignments and that of Bayesian assignments are incomparable.
Consider an (ω+1)-world f = ⟨f_0, f_1, ..., f_ω⟩ in which f_ω(i) is uncountable, i.e., there are uncountably many ω-worlds that agent i considers possible (an example occurs in the (ω+1)-no-information extension of a k-ary world, k < ω, when there are at least two agents). For Bayesian assignments, we have made the convention that an agent knows a fact precisely if the probability of that fact (according to the agent's distribution) is 1. Hence, an agent considers a fact possible precisely if its probability is positive, since if its probability is 0, then the agent would know the negation of the fact. In the situation we are now considering, agent i considers an uncountable number of ω-worlds possible, and hence, under the Bayesian approach, agent i must assign to each of these ω-worlds a positive probability. However, it is well known that it is impossible to assign positive probabilities to uncountably many disjoint events. Thus, this situation cannot be captured by the Bayesian approach.
Example 6.1: It is instructive to re-examine in a Bayesian setting the situation described in Example 5.13, where ¬K_3 ¬E^k_{1,2} p holds for all k ≥ 1. Probabilistically speaking, that means that the probability assigned by agent 3 to the events E^k_{1,2} p is greater than 0 for all k ≥ 1.

Let p_k be the probability assigned by agent 3 to E^k_{1,2} p (and hence to the equivalent formula ∧_{i=1}^{k} E^i_{1,2} p), for k ≥ 1. Since ∧_{k=1}^{∞} E^k_{1,2} p is equivalent to C_{1,2} p, the probability that agent 3 assigns to C_{1,2} p is lim_{k→∞} p_k (this follows from the countable additivity of probability functions). Thus, K_3 ¬C_{1,2} p holds precisely when lim_{k→∞} p_k = 0. So, in this case, we can see why it is not necessary to go beyond level ω: the probability (and hence the truth) of K_3 ¬C_{1,2} p is determined by probabilities at the finite levels. By contrast, as we discussed in Example 5.13, in the non-Bayesian setting we need to examine the ωth-order assignment f_ω to determine whether K_3 ¬C_{1,2} p holds.

The crucial point is that in the Bayesian setting, the probabilities assigned by agent 3 to the facts E^k_{1,2} p, k ≥ 1, determine the probability he assigns to the fact C_{1,2} p. This lack of independence is a general phenomenon. Let A be a set of ω-worlds in the Bayesian setting, and let A_k be the set of all k-ary prefixes of members of A. The probability that agent 3 assigns to the set A is the limit (as k → ∞) of the probability that agent 3 assigns to the set A_k. The probabilities at the finite levels completely determine the probabilities at level ω. As a consequence, we do not need ωth-order assignments in a Bayesian setting. By contrast, as we saw in Example 5.13, in our setting we need level ω to provide additional information: if agent 3 considers every finite prefix of an ω-world g possible, then the ωth-order assignment f_ω tells us whether or not agent 3 considers g possible.
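The limit argument can be sketched numerically. The following toy model is our assumption, not the paper's: agent 3 assigns probability p_k = q^k to E^k_{1,2} p, for some per-round success chance q. Then the probability of C_{1,2} p is the limit of the p_k, so K_3 ¬C_{1,2} p holds exactly when that limit is 0.

```python
# Toy model: the probability of E^k_{1,2} p decays geometrically in k.

def p_k(q, k):
    """Probability assigned to E^k_{1,2} p in the toy model."""
    return q ** k

def prob_C(q, tol=1e-12, max_k=100_000):
    """lim_{k -> infinity} p_k, approximated by iterating to convergence."""
    prev, k = p_k(q, 1), 1
    while k < max_k:
        k += 1
        cur = p_k(q, k)
        if abs(cur - prev) < tol:
            return cur
        prev = cur
    return prev
```

For any q < 1 the limit is 0 (so K_3 ¬C_{1,2} p holds in the toy model), while for q = 1 it is 1, mirroring how countable additivity pins the level-ω probability to the finite levels.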
The above example demonstrates that countable additivity of probability functions is the reason that transfinite assignments are redundant in the Bayesian approach. We note that countable additivity is crucial to the results in [BD2, MZ].
6.2 Further Extensions
Knowledge structures serve as a useful and robust tool for a deep investigation of various knowledge-related issues. While in this paper we focus on a particular variety of knowledge, the methodology we presented can be used to study other varieties.

For example, if we want to study belief rather than knowledge, then we replace the semantic restriction (K1) ("⟨f_0, ..., f_{k−1}⟩ ∈ f_k(i), if k ≥ 1") by "f_k(i) is nonempty if k ≥ 1" in our definition of knowledge structures, and we get belief structures (where it is possible to "believe" something that may not be true). We can also define knowledge-belief structures that deal with both knowledge and belief simultaneously, where there is a semantic restriction that implies that every known fact is also believed.

We can also incorporate time into knowledge structures, so that we can give semantics to a sentence such as "Alice knows that tomorrow Bob will know p" (or even to a sentence "Alice knows that tomorrow she (Alice) will know whether p is true or false"). The first step is to define a 0th-order assignment as a function f_0 from ω (which we take to represent time; 0 is today, 1 is tomorrow, etc.) to truth assignments to the primitive propositions. The second step is to define a kth-order assignment as a function f_k : P × ω → 2^{W_k}, where W_k here is the set of all k-worlds involving time. Our semantic restrictions (K1), (K2), and (K3) generalize naturally. One can also add other natural semantic restrictions; e.g., a restriction that says that each agent's knowledge increases monotonically with time. (Cf. [Le, HV] for the Kripke semantics of knowledge and time.)
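The first step of the temporal variant just described can be sketched in a few lines; the type names below are ours and purely illustrative.

```python
# A 0th-order assignment in the temporal setting maps each time
# t in {0, 1, 2, ...} (0 is today, 1 is tomorrow, ...) to a truth
# assignment for the primitive propositions.

from typing import Callable, Dict

TruthAssignment = Dict[str, bool]             # primitive proposition -> value
ZeroOrder = Callable[[int], TruthAssignment]  # time -> truth assignment

def f0(t: int) -> TruthAssignment:
    # example 0th-order assignment: p is false today, true from tomorrow on
    return {"p": t >= 1}
```

Higher-order assignments would then map (agent, time) pairs to sets of time-indexed k-worlds, following the f_k : P × ω → 2^{W_k} shape above.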
The above examples suggest that the methodology described in this paper is quite
general. This line of thought is pursued in [FV1], which investigates the applicability of
the approach presented here to the modelling of other modal logics.
7 Concluding Remarks
In this paper we introduced a new semantic approach to modelling knowledge using
knowledge structures. Although in a certain sense knowledge structures are equivalent
to the well-known Kripke structures, they have a number of advantages over Kripke
structures:
- Although there are situations where using Kripke structures is the appropriate approach to modelling knowledge (such as the situated-automata approach, where knowledge is ascribed on the basis of the information carried by the state of a machine), there are other situations where it is not clear how to use Kripke structures to model knowledge states. In such situations our approach offers a more intuitive way to model knowledge.

- Our notion of a no-information extension directly models the notion of a "finite amount of information", where in particular there is no common knowledge. However, we show that no finite Kripke structure can capture this. By means of the least-information extension, we model the notion of a "finite amount of information in the presence of common knowledge".

- As shown in [FV1], by using knowledge structures, one can obtain proofs of decidability and compactness that are almost straightforward, and an elegant and constructive completeness proof.
On the other hand, for some applications using Kripke structures is clearly the preferred approach. For example, the graph-theoretic nature of Kripke structures makes them the tool of choice when developing efficient decision procedures (cf. [HM2, La, Va2]).

In summary, knowledge structures are a new semantic representation for knowledge. Although they do not replace the widely used Kripke structures, they do complement them: there are times when we can gain more insight by modelling knowledge with knowledge structures rather than with Kripke structures.
Acknowledgments. We thank Mike Fischer and Neil Immerman for stimulating discussions during the formative stages of this paper. We are grateful to Flaviu Cristian, Jakob
Gonczarowski, Yuri Gurevich, Carl Hauser, Daniel Lehmann, Hector Levesque, Yoram
Moses, Ray Strong, Johan van Benthem, and Ed Wimmers, who read a draft of this paper
and made helpful comments. We also thank Eddie Dekel-Tabak, Jean-Francois Mertens,
and John McCarthy for interesting discussions. Finally, the two referees deserve thanks for many suggestions that helped to improve the quality of the paper. In particular, the
equivalence of (1) and (3) in Proposition 4.4 was suggested by one of the referees.
A Appendix
In this appendix, we give the results and proofs we promised in the body of the paper.
A.1 A lemma used for Proposition 4.4
We begin with a lemma that was used in the proof of Proposition 4.4.
Lemma A.1: Let w and w′ be k-worlds that agree on all formulas of depth at most k − 1 of the form K_i φ or ¬K_i φ. Then w ∼_i w′.

Proof: We prove this by induction on k. The case k = 1 is trivial. For the inductive step, assume that w = ⟨f_0, ..., f_{k−1}⟩ and w′ = ⟨f′_0, ..., f′_{k−1}⟩ are as in the statement of the lemma. We must show that f_{k−1}(i) = f′_{k−1}(i). If not, then without loss of generality, we can assume that there is some (k−1)-world v that is in f_{k−1}(i) but not in f′_{k−1}(i). Assume that f′_{k−1}(i) = {v′_1, ..., v′_r}. Thus, v is distinct from each of the v′_j. Hence, for each j (1 ≤ j ≤ r), either the first component g_0 of v is distinct from the first component of v′_j, or else there is an agent i′ such that v ≁_{i′} v′_j. So by the inductive hypothesis, there is a formula ψ_j of depth at most k − 2 such that v′_j ⊨ ψ_j but v ⊭ ψ_j. Let ψ be the formula ψ_1 ∨ ⋯ ∨ ψ_r. Then w′ ⊨ K_i ψ but w ⊭ K_i ψ. Since the formula K_i ψ is of depth at most k − 1, this contradicts our assumption.
A.2 Proof of Theorem 4.12
Our next goal is to prove Theorem 4.12, which is as follows.

Theorem 4.12: Assume that there are at least two agents, φ is a formula of depth r, and w is a k-world. If w* ⊨ E^{r+k} φ, then φ is valid.

We first need some preliminary concepts and results.

Lemma A.2: Let ⟨f_0, ..., f_{k−1}⟩ and ⟨g_0, ..., g_{k−1}⟩ be k-worlds. If f_{k−1}(i) = g_{k−1}(i), then ⟨f_0, ..., f_{k−1}⟩* ∼_i ⟨g_0, ..., g_{k−1}⟩*.

Proof: Let ⟨f_0, ..., f_{k−1}⟩* be ⟨f_0, ..., f_{k−1}, f_k, f_{k+1}, ...⟩, and similarly for ⟨g_0, ..., g_{k−1}⟩*. Since f_{k−1}(i) = g_{k−1}(i), it follows, as noted earlier, that f_j(i) = g_j(i) whenever 0 < j < k. Assume inductively that we have shown that f_r(i) = g_r(i) for some r ≥ k − 1. Then f_{r+1}(i) = {⟨h_0, ..., h_r⟩ : h_r(i) = f_r(i)} = {⟨h_0, ..., h_r⟩ : h_r(i) = g_r(i)} = g_{r+1}(i). This completes the induction step.
Definition A.3: If p = i_1 ⋯ i_s is a string of agents and if f and f′ are knowledge structures, then we say that f ∼_p f′ if there are knowledge structures f_1, ..., f_{s+1} such that (a) f_1 = f, (b) f_{s+1} = f′, and (c) f_j ∼_{i_j} f_{j+1} whenever 1 ≤ j ≤ s. We may then say that there is a path between f and f′.

Note in particular that if p is the empty string, then f ∼_p f′ precisely if f = f′. Note that for an arbitrary string p the relation ∼_p need not be an equivalence relation.

We shall take advantage of the following simple properties of ∼_p, where pq denotes the concatenation of the strings p and q.
(Reversibility): If f ∼_p f′, then f′ ∼_{p^R} f, where p^R is the reverse of p.
(Transitivity): If f ∼_p f′ and f′ ∼_q f″, then f ∼_{pq} f″.
(Collapsibility): Let i be an agent. Then f ∼_{piiq} f′ if and only if f ∼_{piq} f′.

If p = i_1 ⋯ i_s and no two consecutive agents in p are the same (i.e., if i_j ≠ i_{j+1} for 1 ≤ j < s), then we say that p is nonduplicating. By collapsibility, we can use nonduplicating strings without loss of generality.

We are interested in our notion of ∼_p because of the following lemma, whose simple proof (by induction on the length of p) is left to the reader.

Lemma A.4: If f ⊨ K_p φ and f ∼_p g, then g ⊨ φ. (Here, for p = i_1 ⋯ i_s, K_p abbreviates K_{i_1} ⋯ K_{i_s}.)
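The collapsibility property says that consecutive repetitions of an agent in a string can be dropped without changing the relation ∼_p; a sketch of the resulting normalization (the function name is ours):

```python
def collapse(p):
    """Remove consecutive duplicate agents, yielding a nonduplicating string.

    By collapsibility, f ~_p f' holds for the original string p iff it
    holds for collapse(p), so nonduplicating strings suffice.
    """
    out = []
    for agent in p:
        if not out or out[-1] != agent:
            out.append(agent)
    return out
```

Iterating the two-letter collapsibility rule is exactly what this single left-to-right pass computes.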
The next proposition implies that there is a path between every pair of no-information extensions, and gives us information as to the length of the path.

Proposition A.5: Let w be a k-world, let w′ be a k′-world, and let p ∈ P* be nonduplicating and of length k + k′ − 1. Then w* ∼_p w′*.

Proof: We first prove the following special case.

Special case: Let ⟨f_0, ..., f_{k−1}⟩ be a k-world, and let p ∈ P* be nonduplicating and of length k. Then ⟨f_0⟩* ∼_p ⟨f_0, ..., f_{k−1}⟩*.

We prove the special case by induction on k. The base case (k = 1) is immediate. For the inductive step, let ⟨f_0, ..., f_k⟩ be a (k+1)-world, and let p = i_1 ⋯ i_{k+1} be nonduplicating. By the inductive assumption, ⟨f_0⟩* ∼_{i_1⋯i_k} ⟨f_0, ..., f_{k−1}⟩*. We must show that ⟨f_0⟩* ∼_{i_1⋯i_{k+1}} ⟨f_0, ..., f_k⟩*. Define f′_k by letting f′_k(j) be the ∼_j-equivalence class of ⟨f_0, ..., f_{k−1}⟩ for each agent j. By Lemma 4.3, we know that ⟨f_0, ..., f_{k−1}, f′_k⟩ is a (k+1)-world. Let ⟨f_0, ..., f_{k−1}, g_k⟩ be the i_{k+1}-matching extension of ⟨f_0, ..., f_{k−1}⟩ with respect to f_k. Again, by Lemma 4.3, we know that ⟨f_0, ..., f_{k−1}, g_k⟩ is a (k+1)-world. Since g_k(i_k) = f′_k(i_k), it follows from Lemma A.2 that

⟨f_0, ..., f_{k−1}, f′_k⟩* ∼_{i_k} ⟨f_0, ..., f_{k−1}, g_k⟩*.

Since g_k(i_{k+1}) = f_k(i_{k+1}), it follows from Lemma A.2 that

⟨f_0, ..., f_{k−1}, g_k⟩* ∼_{i_{k+1}} ⟨f_0, ..., f_{k−1}, f_k⟩*.

Since f′_k(j) is the ∼_j-equivalence class of ⟨f_0, ..., f_{k−1}⟩ for each agent j, it follows that ⟨f_0, ..., f_{k−1}, f′_k⟩* = ⟨f_0, ..., f_{k−1}⟩*. Putting these last few observations together, we see that ⟨f_0, ..., f_{k−1}⟩* ∼_{i_k i_{k+1}} ⟨f_0, ..., f_k⟩*. Putting this fact together with our inductive assumption that ⟨f_0⟩* ∼_{i_1⋯i_k} ⟨f_0, ..., f_{k−1}⟩*, it follows by transitivity that ⟨f_0⟩* ∼_{i_1⋯i_k i_k i_{k+1}} ⟨f_0, ..., f_k⟩*. By collapsibility, ⟨f_0⟩* ∼_{i_1⋯i_{k+1}} ⟨f_0, ..., f_k⟩*, which was to be shown. This concludes the proof of the special case.

Let w = ⟨f_0, ..., f_{k−1}⟩ and w′ = ⟨f′_0, ..., f′_{k′−1}⟩, and let p = i_1 ⋯ i_{k+k′−1}. By the special case, ⟨f_0⟩* ∼_{i_k⋯i_1} w* and ⟨f′_0⟩* ∼_{i_k⋯i_{k+k′−1}} w′*. By reversibility, w* ∼_{i_1⋯i_k} ⟨f_0⟩*. By Lemma A.2, ⟨f_0⟩* ∼_{i_k} ⟨f′_0⟩*. By transitivity (applied twice), w* ∼_{(i_1⋯i_k)(i_k)(i_k⋯i_{k+k′−1})} w′*. By collapsibility (applied twice), w* ∼_p w′*, as desired.

We can now prove Theorem 4.12.

Proof of Theorem 4.12. Assume that there are at least two agents, that φ is a formula of depth r, that w is a k-world, and that w* ⊨ E^{r+k} φ. We must show that φ is valid. Assume not; we shall derive a contradiction.

Let p ∈ P* be nonduplicating and of length r + k (there is obviously such a string p, since there are at least two agents). Since w* ⊨ E^{r+k} φ, clearly w* ⊨ K_p φ. Since φ is of depth r and not valid, there is an (r+1)-world w′ = ⟨f′_0, ..., f′_r⟩ such that w′ ⊭ φ. Hence, w′* ⊭ φ. By Proposition A.5, w* ∼_p w′*. Therefore, since w* ⊨ K_p φ, it follows from Lemma A.4 that w′* ⊨ φ. This is a contradiction.
A.3 Proof of Theorem 4.21
We now begin a development that will lead to the proof of Theorem 4.21, which is as follows:

Theorem 4.21: (w, C)* is a knowledge structure iff reach(w, C) is a closed set.

In order to prove the theorem, we need a few preliminary lemmas.

Lemma A.6: If C is a set of k-worlds, w ∈ C, and C′ = reach(w, C), then (w, C)* = (w, C′)*.

Proof: Suppose w = ⟨f_0, ..., f_{k−1}⟩ and (w, C)* = ⟨f_0, ..., f_{k−1}, f_k, f_{k+1}, ...⟩. We can prove, by induction on m ≥ k, that worlds_k(⟨f_0, ..., f_{m−1}⟩) ⊆ reach(w, C); the proof is left to the reader. It now follows from the definition of the least-information extension that (w, C)* = (w, C′)*.
Lemma A.7: Let C be a closed set of k-worlds, m ≥ k − 1, and v = ⟨g_0, ..., g_m⟩ be a world such that worlds_k(v) ⊆ C. Define g_{m+1}(i) = {⟨h_0, ..., h_m⟩ : h_m(i) = g_m(i) and worlds_k(⟨h_0, ..., h_m⟩) ⊆ C} for each agent i. Then ⟨g_0, ..., g_{m+1}⟩ is a world.

Proof: The fact that (K1) and (K2) hold is immediate from the definition. To see that (K3) holds, we proceed by induction on m. First suppose m = k − 1 > 0 and ⟨h_0, ..., h_{m−1}⟩ ∈ g_m(i). From the fact that C is closed and v ∈ C (since v ∈ worlds_k(v) ⊆ C), it follows that there is some ⟨h_0, ..., h_{m−1}, h_m⟩ ∈ C such that h_m(i) = g_m(i). This world is in g_{m+1}(i) by definition, so (K3) holds. Suppose now that m > k − 1 and ⟨h_0, ..., h_{m−1}⟩ ∈ g_m(i). Note that worlds_k(⟨h_0, ..., h_{m−1}⟩) ⊆ C by assumption. Define h_m(i) = g_m(i) and h_m(j) = {⟨h′_0, ..., h′_{m−1}⟩ : h′_{m−1}(j) = h_{m−1}(j) and worlds_k(⟨h′_0, ..., h′_{m−1}⟩) ⊆ C} for j ≠ i. By the inductive hypothesis, ⟨h_0, ..., h_m⟩ is a world, and by construction, it is in g_{m+1}(i). Thus there is some extension of ⟨h_0, ..., h_{m−1}⟩ in g_{m+1}(i), so (K3) holds.
Lemma A.8: Suppose C is a set of k-worlds, w, w′ ∈ C with w ∼_i w′, and (w, C)* is a knowledge structure. Then (w′, C)* is a knowledge structure.

Proof: Suppose (w, C)* = ⟨f_0, f_1, ...⟩ and (w′, C)* = ⟨g_0, g_1, ...⟩. We prove by induction on m that ⟨g_0, ..., g_m⟩ is an (m+1)-world for all m. If m < k, then ⟨g_0, ..., g_m⟩ is a prefix of the world w′, so the result is immediate. If m ≥ k, it is easy to see that properties (K1) and (K2) hold from the construction. For (K3), suppose that m > 1 and ⟨h_0, ..., h_{m−2}⟩ ∈ g_{m−1}(j). By assumption, ⟨g_0, ..., g_{m−1}⟩ is a world. By the construction of least-information extensions, g_{m−1}(i) = f_{m−1}(i) and worlds_k(⟨g_0, ..., g_{m−1}⟩) ⊆ C, so ⟨g_0, ..., g_{m−1}⟩ ∈ f_m(i). Since (w, C)* is a knowledge structure, there is some g′_m such that ⟨g_0, ..., g_{m−1}, g′_m⟩ ∈ f_{m+1}(i). By property (K3), there must be some h′_{m−1} such that ⟨h_0, ..., h_{m−2}, h′_{m−1}⟩ ∈ g′_m(j). By construction again, we must have h′_{m−1}(j) = g_{m−1}(j) and worlds_k(⟨h_0, ..., h_{m−2}, h′_{m−1}⟩) ⊆ C. Thus, ⟨h_0, ..., h_{m−2}, h′_{m−1}⟩ ∈ g_m(j), so (K3) holds.
Proof of Theorem 4.21. First suppose that (w, C)* is a knowledge structure. Note that it follows (by an easy induction on distance, using Lemma A.8) that (w′, C)* is a knowledge structure for all w′ ∈ reach(w, C). We now show that reach(w, C) is closed. Suppose ⟨g_0, ..., g_{k−1}⟩ ∈ reach(w, C) and ⟨h_0, ..., h_{k−2}⟩ ∈ g_{k−1}(i). We want to show that for some h′_{k−1}, we have h′_{k−1}(i) = g_{k−1}(i) and ⟨h_0, ..., h_{k−2}, h′_{k−1}⟩ ∈ reach(w, C). Suppose (⟨g_0, ..., g_{k−1}⟩, C)* = ⟨g_0, ..., g_{k−1}, g_k, ...⟩. By property (K3), there is some h′_{k−1} such that ⟨h_0, ..., h_{k−2}, h′_{k−1}⟩ ∈ g_k(i). By the construction of least-information extensions, we must have ⟨h_0, ..., h_{k−2}, h′_{k−1}⟩ ∈ C and h′_{k−1}(i) = g_{k−1}(i). Thus ⟨h_0, ..., h_{k−2}, h′_{k−1}⟩ ∈ reach(w, C). This shows that reach(w, C) is closed.

For the converse, suppose that C′ = reach(w, C) is closed. By Lemma A.6, it follows that (w, C′)* = (w, C)*. Suppose that (w, C′)* = ⟨f_0, f_1, ...⟩. Now a straightforward induction on m using Lemma A.7 shows that ⟨f_0, ..., f_m⟩ is an (m+1)-world. Thus (w, C)* is a knowledge structure.
A.4 Proof of Theorem 4.22
In this subsection, we prove Theorem 4.22, which is as follows:

Theorem 4.22: (w, C)* is a knowledge structure in which all the worlds of C appear iff C is closed and C = reach(w, C).

We begin with a lemma.

Lemma A.9: Suppose C is a set of k-worlds, w ∈ C, and (w, C)* is a knowledge structure. Then worlds_k((w, C)*) = reach(w, C); i.e., the k-worlds that appear in (w, C)* are precisely those that are reachable from w.

Proof: Let C′ = reach(w, C). By Lemma A.6, it follows that worlds_k((w, C)*) ⊆ C′. To get containment in the other direction, we show by induction on m that if w′ is at distance m from w, then w′ appears in (w, C)*. The result is trivial if m = 0. If m > 0, there exists w″ ∈ C, where w ∼_i w″ for some agent i, and w′ is at distance m − 1 from w″. By Lemma A.8, (w″, C)* is a knowledge structure, and by the induction hypothesis, w′ appears in (w″, C)*. Suppose (w, C)* = ⟨f_0, f_1, ...⟩ and (w″, C)* = ⟨g_0, g_1, ...⟩. Thus w′ appears in ⟨g_0, ..., g_l⟩ for some l. Since w ∼_i w″, by the construction of least-information extensions, we must have f_{l+1}(i) = g_{l+1}(i), and thus ⟨g_0, ..., g_l⟩ ∈ f_{l+1}(i). Therefore w′ appears in ⟨f_0, ..., f_{l+1}⟩.

We now prove Theorem 4.22. If (w, C)* is a knowledge structure in which all the worlds of C appear, then by Theorem 4.21, reach(w, C) is closed, and by Lemma A.9, C = reach(w, C). Conversely, if C is closed and C = reach(w, C), then reach(w, C) is closed, so by Theorem 4.21, (w, C)* is a knowledge structure. By Lemma A.9, it also follows that all the worlds of C appear in (w, C)*.
A.5 Proof of Theorem 5.10
In this subsection, we prove Theorem 5.10. We first need a lemma.
Lemma A.10: Let φ be a K-formula, and let f and g be ω-worlds such that f ∼_i g for some agent i. Then f ⊨ Cφ iff g ⊨ Cφ.

Proof: Assume f ⊨ Cφ. Then f ⊨ E^j φ for all j > 0. It follows that f ⊨ K_i E^j φ for all j > 0. But f ∼_i g, so g ⊨ E^j φ for all j > 0. Thus, g ⊨ Cφ.
We can now restate and prove the theorem.
Theorem 5.10: Let φ be a KC-formula, and let f and g be α-worlds such that f_{<ω} = g_{<ω}. Then f ⊨ φ iff g ⊨ φ.

Proof: By Lemma 5.2, to prove the theorem it suffices to show that every KC-formula is equivalent to a formula whose depth is less than or equal to ω. The proof uses the following valid axiom schemes:
(1) K_i(φ_1 ∧ … ∧ φ_k) ≡ (K_i φ_1) ∧ … ∧ (K_i φ_k)
(2) C(φ_1 ∧ … ∧ φ_k) ≡ (Cφ_1) ∧ … ∧ (Cφ_k)
(3) K_i Cφ ≡ Cφ
(4) K_i ¬Cφ ≡ ¬Cφ
(5) CCφ ≡ Cφ
(6) C¬Cφ ≡ ¬Cφ
(7) K_i φ ∨ K_i ψ ⇒ K_i(φ ∨ ψ)
(8) Cφ ∨ Cψ ⇒ C(φ ∨ ψ)
(9) K_i φ ⇒ φ
(10) Cφ ⇒ φ
We want to show that every KC-formula is equivalent to a formula in which no C occurs in the scope of another C or of a K_i. The proof is by structural induction. By (1) and (2) it suffices to prove that the following axiom schemes are valid, where φ and ψ are K-formulas.
(a) K_i(Cφ ∨ ψ) ≡ (Cφ ∨ K_i ψ)
(b) K_i(¬Cφ ∨ ψ) ≡ (¬Cφ ∨ K_i ψ)
(c) C(Cφ ∨ ψ) ≡ (Cφ ∨ Cψ)
(d) C(¬Cφ ∨ ψ) ≡ (¬Cφ ∨ Cψ)
In (a)–(d), implication from right to left follows easily from (3)–(10), so we consider only implication from left to right. We first prove (a). By Lemma 5.2, it suffices to consider (ω + 1)-worlds. Let h = ⟨f_0, …, f_ω⟩ be an (ω + 1)-world.

Suppose that h ⊨ K_i(Cφ ∨ ψ). Then for every e ∈ f_ω(i) we have e ⊨ Cφ ∨ ψ. There are two possibilities to consider. First, it is possible that for some e ∈ f_ω(i) we have that e ⊨ Cφ. Then, by Lemma A.10, for every e ∈ f_ω(i) we have that e ⊨ Cφ. Thus, h ⊨ K_i Cφ. Consequently, by (3), h ⊨ Cφ ∨ K_i ψ. The other possibility is that for all e ∈ f_ω(i) we have that e ⊨ ¬Cφ. In that case, we have h ⊨ K_i ψ, so h ⊨ Cφ ∨ K_i ψ.
The proof of (b) is similar and left to the reader. We now prove (c). Let h be a world such that h ⊨ C(Cφ ∨ ψ). Then h ⊨ E^i(Cφ ∨ ψ) for all i > 0. It follows by (a) that h ⊨ Cφ ∨ E^i ψ for all i ≥ 0. There are now two possibilities to consider. First, it is possible that h ⊨ Cφ, in which case clearly h ⊨ Cφ ∨ Cψ. The other possibility is that h ⊭ Cφ. In that case we have h ⊨ E^i ψ for all i ≥ 0, i.e., h ⊨ Cψ. It follows that h ⊨ Cφ ∨ Cψ.

The proof of (d) is similar and left to the reader.
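On a finite structure, the semantics of E and C used in this proof can be computed directly: Eφ holds at a state when every state any agent considers possible there satisfies φ, and Cφ is the stable limit of iterating E, reached after finitely many steps. The following is a minimal sketch in Python, assuming a finite state set with each agent's possibility relation given as a dictionary — an encoding of our own, not the paper's:

```python
def sat_E(states, acc, sat_phi):
    """States where E(phi) holds: for every agent i, all states that
    i considers possible (acc[i][s]) satisfy phi (lie in sat_phi)."""
    return {s for s in states if all(acc[i][s] <= sat_phi for i in acc)}

def sat_C(states, acc, sat_phi):
    """States where C(phi) holds, computed as the greatest fixpoint of
    X -> E(phi and X); on a finite state set the iteration stabilizes."""
    current = set(states)
    while True:
        nxt = sat_E(states, acc, sat_phi & current)
        if nxt == current:
            return current
        current = nxt

# Toy structure: agent 1 confuses state 0 with 1; agent 2 confuses 1 with 2.
states = {0, 1, 2}
acc = {1: {0: {0, 1}, 1: {1}, 2: {2}},
       2: {0: {0}, 1: {1, 2}, 2: {2}}}
p = {0, 1}                      # p holds at states 0 and 1
print(sat_E(states, acc, p))    # {0}
print(sat_C(states, acc, p))    # set(): p never becomes common knowledge
```

Axiom schemes such as (5), CCφ ≡ Cφ, can be sanity-checked on such finite structures: applying `sat_C` to its own output leaves the set unchanged.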
A.6 Proof of Theorem 5.14
In this subsection, we prove Theorem 5.14, which is as follows:
Theorem 5.14: For every ordinal α with 1 ≤ α < ω², there is a formula φ and there are (α + 1)-worlds f and g such that f_{<α} = g_{<α}, f ⊨ φ, and g ⊭ φ.
Proof: We first prove the claim for 1 ≤ α < ω. The case α = 1 is easy and is left to the reader (one agent suffices). Consider the case where 1 < α < ω. Here we need two agents, 1 and 2, and one primitive proposition p. Let f_0 make p true. For 1 ≤ k ≤ α, let f_k(1) = f_k(2) = {f_{<k}}. It is easy to verify that f is an (α + 1)-world. Also, one can show by induction on k, 1 ≤ k ≤ α, that f_{<k+1} ⊨ E^k p. In particular, f ⊨ E^α p. Let g be the 2-matching extension of f_{<α} with respect to f. That is, g_{<α} = f_{<α}, g_α(2) = {g_{<α}}, and g_α(1) is the 1-equivalence class of g_{<α}. We now show that g ⊭ E^α p. The proof is by induction on α. For α = 1, we have g_1(1) = {p, ¬p}, so g ⊭ K_1 p, and consequently g ⊭ Ep. For α > 1, let g′ be the 1-matching extension of g_{<α−1} with respect to g_{<α}; that is, g′_{<α−1} = g_{<α−1}, g′_{α−1}(1) = {g_{<α−1}}, and g′_{α−1}(2) is the 2-equivalence class of g_{<α−1}. By the inductive hypothesis and a symmetry argument we have that g′ ⊭ E^{α−1} p. But g′ ∼_1 g_{<α}, so g ⊭ K_1 E^{α−1} p. Consequently, g ⊭ E^α p.
We now prove the claim for ω ≤ α < ω². Here we use three agents, 1, 2, and 3, and one primitive proposition p (note that three agents are necessary by Theorem 5.10). Let α < ω². Note that α = ω·k + l for some k, l ≥ 0. We define the classes U_α and V_α of α-worlds. As we shall see later, the worlds f and g whose existence is claimed by the theorem will be members of U_{α+1} and V_{α+1}, correspondingly. Worlds in U_α are constructed in such a manner as to prevent agents 2 and 3 from having joint knowledge, and to make sure that agent 1 knows it. In worlds in V_α agents 2 and 3 do have joint knowledge of agent 1's knowledge.
The construction is by induction on α. The class U_1 contains the single 1-world ⟨h_0⟩, where h_0 makes p true. Let V_1 = U_1.

A 2-world h is in U_2 if h_{<1} ∈ U_1 and h_1(1) = {h_{<1}}. An l-world h is in U_l for l > 2 if h_{<l−1} is in U_{l−1}, and either h_{l−1}(2) is the 2-equivalence class of h_{<l−1} or h_{l−1}(3) is the 3-equivalence class of h_{<l−1}. An ω-world h is in U_ω if h_l(1) is the 1-equivalence class of h_{<l} for all l > 1, and h_{<l} is in U_l for some l > 2. An l-world h is in V_l for l > 1 if h_{<1} is in U_1, and for all m such that 1 ≤ m < l we have that h_m(2) = {e : e ∼_2 h_{<m} and e ∈ V_m} and h_m(3) = {e : e ∼_3 h_{<m} and e ∈ V_m}. An ω-world h is in V_ω if h_l(1) is the 1-equivalence class of h_{<l} for all l > 1, and h_{<l} is in V_l for all l > 1.
Inductively, let β < ω² be a limit ordinal. A (β + 1)-world h is in U_{β+1} if h_{<β} is in U_β and h_β(1) = {e : e ∼_1 h_{<β} and e ∈ U_β}. A (β + l)-world h is in U_{β+l} for l > 1 if h_{<β+l−1} is in U_{β+l−1} and either h_{β+l−1}(2) is the 2-equivalence class of h_{<β+l−1} or h_{β+l−1}(3) is the 3-equivalence class of h_{<β+l−1}. A (β + ω)-world h is in U_{β+ω} if h_{β+l}(1) is the 1-equivalence class of h_{<β+l} for all l > 0, and h_{<β+l} is in U_{β+l} for some l > 1. A (β + l)-world h is in V_{β+l} for l > 0 if h_{<β} is in U_β, h_β(2) = {e : e ∼_2 h_{<β} and e ∈ U_β}, h_β(3) = {e : e ∼_3 h_{<β} and e ∈ U_β}, and for all m such that 1 ≤ m < l we have that h_{β+m}(2) = {e : e ∼_2 h_{<β+m} and e ∈ V_{β+m}} and h_{β+m}(3) = {e : e ∼_3 h_{<β+m} and e ∈ V_{β+m}}. A (β + ω)-world h is in V_{β+ω} if h_{β+l}(1) is the 1-equivalence class of h_{<β+l} for all l > 1, and h_{<β+l} is in V_{β+l} for all l > 1.
To prove the existence of f and g we first have to prove several properties of the U's and V's. The proof requires a fairly technical induction hypothesis. We also need to define the classes U′_{β+1} for certain successor ordinals. Let β be a limit ordinal. A world h is in U′_{β+1} if h_{<β} is in U_β and h_β(1) is the 1-equivalence class of h_{<β}.
Claim A. Let α < ω².

1. If h ∈ U_α, then there is some h′ ∈ U_{α+1} such that h′_{<α} = h.
2. If α is a successor ordinal and h ∈ V_α, then there is some h′ ∈ V_{α+1} such that h′_{<α} = h.
3. If α is a limit ordinal and h ∈ U_α, then there is some h′ ∈ V_{α+1} such that h′_{<α} = h.
4. If α is a successor ordinal and h ∈ V_α, then there is some h′ ∈ U_{α+1} such that h′_{<α} = h.
5. If α is a limit ordinal and h ∈ U_α, then there is some h′ ∈ U′_{α+1} such that h′_{<α} = h.
6. The classes U_α and V_α are nonempty.
The proof is by induction on α. We first prove part (1). If α is a successor ordinal and h ∈ U_α, then h^{α+1} is the desired extension. Assume now that α = ω·k is a limit ordinal. Let h ∈ U_α. We construct an (α + 1)-world h′ in U_{α+1}. Let h′_{<α} = h, let h′_α(2) (resp., h′_α(3)) be the 2-no-information (resp., 3-no-information) extension of h, and let h′_α(1) = {e : e ∼_1 h and e ∈ U_α}. We have to show that h′ satisfies (K3′) for all agents. That (K3′) holds for agents 2 and 3 is obvious, so we focus on agent 1. Let e ∈ h_β(1). Without loss of generality we can assume that β > ω·(k − 1). It is easy to see that e^α ∈ U_α and e^α ∼_1 h, so e^α ∈ h′_α(1) and (K3′) holds. This completes the proof of part (1).
We now prove part (2). We first consider the case α = l < ω. The claim clearly holds for l = 1. For l > 1, let h′ be an (l + 1)-world defined as follows: h′_{<l} = h, h′_l(1) is the 1-no-information extension of h, h′_l(2) = {e : e ∼_2 h and e ∈ V_l}, and h′_l(3) = {e : e ∼_3 h and e ∈ V_l}. We claim that h′ ∈ V_{l+1}. (K1′) and (K2′) clearly hold, so only (K3′) remains to be verified. That (K3′) holds for agent 1 is obvious, so we focus on agent 2 (the argument for agent 3 is analogous). Let e ∈ h_{l−1}(2). Since h ∈ V_l, we have that e ∈ V_{l−1}. Thus, by the inductive hypothesis, there is e′ ∈ V_l such that e′_{<l−1} = e. In particular, e′_{l−1}(2) = {e″ : e″ ∼_2 e and e″ ∈ V_{l−1}}. But e ∼_2 h_{<l−1}, so e′_{l−1}(2) = h_{l−1}(2), and consequently e′ ∼_2 h. Thus, e′ ∈ h′_l(2) and (K3′) holds. It follows that h′ ∈ V_{l+1}.
Now let β be a limit ordinal, and let h ∈ V_{β+l}, l ≥ 1. We define h′ to be a (β + l + 1)-world as follows: h′_{<β+l} = h, h′_{β+l}(1) is the 1-no-information extension of h, h′_{β+l}(2) = {e : e ∼_2 h and e ∈ V_{β+l}}, and h′_{β+l}(3) = {e : e ∼_3 h and e ∈ V_{β+l}}. We claim that h′ ∈ V_{β+l+1}. (K1′) and (K2′) clearly hold, so only (K3′) remains to be verified. That (K3′) holds for agent 1 is obvious, so we focus on agent 2 (the argument for agent 3 is analogous). Let e ∈ h_{β+l−1}(2). If l = 1, then e ∈ U_β, and if l > 1, then e ∈ V_{β+l−1}. In either case there is e′ ∈ V_{β+l} such that e′_{<β+l−1} = e. In particular, if l = 1 then e′_β(2) = {e″ : e″ ∼_2 e and e″ ∈ U_β}, and if l > 1, then e′_{β+l−1}(2) = {e″ : e″ ∼_2 e and e″ ∈ V_{β+l−1}}. But e ∼_2 h_{<β+l−1}, so e′_{β+l−1}(2) = h_{β+l−1}(2), and consequently e′ ∼_2 h. Thus, e′ ∈ h′_{β+l}(2). It follows that h′ ∈ V_{β+l+1} and (K3′) holds. This completes the proof of part (2).
We now prove part (3). Let h ∈ U_β. We construct a (β + 1)-world h′ in V_{β+1}. Let h′_{<β} = h, and let h′_β(i) = {e : e ∼_i h and e ∈ U_β} for each agent i. We verify that h′ satisfies (K3′) as above. This completes the proof of part (3).
Let α be a successor ordinal and let h ∈ V_α. Then h^{α+1} ∈ U_{α+1}. Let α be a limit ordinal and let h ∈ U_α. Then h^{α+1} ∈ U′_{α+1}. This completes the proof of parts (4) and (5). Finally, part (6) follows from parts (1), (2), and (3). This completes the proof of Claim A.
Let σ_k be the formula (¬C_{{2,3}}K_1)^k p, where (¬C_{{2,3}}K_1)^0 p is p, and (¬C_{{2,3}}K_1)^{k+1} p is (¬C_{{2,3}}K_1)(¬C_{{2,3}}K_1)^k p.
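The recursion defining σ_k is easy to mechanize. As a small illustration — the concrete syntax below (`~`, `C23`, `K1`) is our own, not the paper's — the formula can be generated by unfolding the definition:

```python
def sigma(k):
    """Build (~C23 K1)^k p as a string: sigma_0 is p, and
    sigma_{k+1} prefixes one more ~C23 K1 to sigma_k."""
    formula = "p"
    for _ in range(k):
        formula = "~C23 K1 ({})".format(formula)
    return formula

print(sigma(0))  # p
print(sigma(2))  # ~C23 K1 (~C23 K1 (p))
```

Each unfolding adds one alternation of negated {2,3}-common knowledge and agent 1's knowledge; these alternations are what Claim B uses to separate the U-worlds from the V-worlds.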
Claim B. Let α < ω², α = ω·k + l.

1. Let k = 0 and l = 2. If h ∈ U′_2, then h ⊨ ¬K_1 p. If h ∈ U_2, then h ⊨ K_1 p.
2. Let k = 0 and l > 2. If h ∈ U_α, then h ⊨ ¬E_{{2,3}}^{l−2} K_1 p. If h ∈ V_α, then h ⊨ E_{{2,3}}^{l−2} K_1 p.
3. Let k > 0 and l = 0. If h ∈ U_α, then h ⊨ σ_k. If h ∈ V_α, then h ⊨ ¬σ_k.
4. Let k > 0 and l = 1. If h ∈ U′_α, then h ⊨ ¬K_1 σ_k. If h ∈ U_α, then h ⊨ K_1 σ_k.
5. Let k > 0 and l > 1. If h ∈ U_α, then h ⊨ ¬E_{{2,3}}^{l−2} K_1 σ_k. If h ∈ V_α, then h ⊨ E_{{2,3}}^{l−2} K_1 σ_k.
Part (1) of the claim is obvious. We now prove part (2) by induction on l. Let h ∈ U_3 and assume that h_2(2) is the 2-equivalence class of h_{<2}. Let e be the 2-matching extension of h_{<1} with respect to h_{<2}. It is easy to see that e ∈ U′_2 and e ∼_2 h_{<2}. Thus, e ∈ h_2(2), so h ⊨ ¬E_{{2,3}} K_1 p. Inductively, let h ∈ U_l and assume that h_{l−1}(2) is the 2-equivalence class of h_{<l−1}. Let e be the 2-matching extension of h_{<l−2} with respect to h_{<l−1}. Then e ∈ U_{l−1} and e ∈ h_{l−1}(2). By induction, e ⊨ ¬E_{{2,3}}^{l−3} K_1 p. It follows that h ⊨ ¬E_{{2,3}}^{l−2} K_1 p.

We now prove that if h ∈ V_l, l > 1, then h ⊨ E_{{2,3}}^{l−2} K_1 p. This clearly holds for l = 2. Suppose now that h ∈ V_l, l > 2. Let e ∈ h_{l−1}(2). By definition, e ∈ V_{l−1}, so e ⊨ E_{{2,3}}^{l−3} K_1 p. Thus, h ⊨ K_2 E_{{2,3}}^{l−3} K_1 p. Similarly, h ⊨ K_3 E_{{2,3}}^{l−3} K_1 p. It follows that h ⊨ E_{{2,3}}^{l−2} K_1 p. This completes the proof of part (2).
Part (3) follows from parts (2) and (5) and the definition of U_α and V_α for limit ordinals α.

We now prove part (4). Let α = ω·k. Let h ∈ U_{α+1} and e ∈ h_α(1). By definition, e ∈ U_α, so by induction (part (3)) e ⊨ σ_k. Thus, h ⊨ K_1 σ_k. Let h ∈ U′_{α+1}. Suppose first that k = 1. Let e ∈ V_ω. We have that e ∈ h_ω(1), since e ∼_1 h_{<ω} by construction. But e ⊨ ¬σ_1 (by part (3)), so h ⊨ ¬K_1 σ_1. Suppose now that k > 1. Let d be h_{<ω·(k−1)}. We know that d ∈ U_{ω·(k−1)}. By Claim A, d can be extended to a world e ∈ V_α such that e_{<ω·(k−1)} = d and e ∼_1 h_{<α}. But e ⊨ ¬σ_k, so h ⊨ ¬K_1 σ_k.

We prove part (5) by induction on l. Let β < ω² be a limit ordinal. Let h ∈ U_{β+2} and assume that h_{β+1}(2) is the 2-equivalence class of h_{<β+1}. Let e be the 2-matching extension of h_{<β} with respect to h_{<β+1}. It is easy to see that e ∈ U′_{β+1} and e ∼_2 h_{<β+1}. Thus, e ∈ h_{β+1}(2), so h ⊨ ¬E_{{2,3}} K_1 σ_k, since, by part (4), e ⊨ ¬K_1 σ_k. Inductively, let h ∈ U_{β+l}, l > 2, and assume that h_{β+l−1}(2) is the 2-equivalence class of h_{<β+l−1}. Let e be the 2-matching extension of h_{<β+l−2} with respect to h_{<β+l−1}. We have that e ∈ U_{β+l−1} and e ∈ h_{β+l−1}(2). By induction, e ⊨ ¬E_{{2,3}}^{l−3} K_1 σ_k. It follows that h ⊨ ¬E_{{2,3}}^{l−2} K_1 σ_k.

If h ∈ V_{β+1}, then we also have h ∈ U_{β+1} by construction. Thus, by part (4), we have h ⊨ K_1 σ_k. Suppose now that h ∈ V_{β+l}, l > 1. Let e ∈ h_{β+l−1}(2). By definition, e ∈ V_{β+l−1}, so e ⊨ E_{{2,3}}^{l−3} K_1 σ_k by induction. Thus, h ⊨ K_2 E_{{2,3}}^{l−3} K_1 σ_k. Similarly, h ⊨ K_3 E_{{2,3}}^{l−3} K_1 σ_k. It follows that h ⊨ E_{{2,3}}^{l−2} K_1 σ_k. This completes the proof of part (5) and of Claim B.

Let α = ω·k + l, k > 0. Consider first the case that α is a limit ordinal. Let h ∈ U_α. By Claim A, there are worlds f ∈ U_{α+1} and g ∈ U′_{α+1} such that f_{<α} = g_{<α} = h. By Claim B, we have f ⊨ K_1 σ_k and g ⊭ K_1 σ_k. Consider now the case that α is a successor ordinal. Let h ∈ V_α. By Claim A, there are worlds f ∈ U_{α+1} and g ∈ V_{α+1} such that f_{<α} = g_{<α} = h. By Claim B, we have f ⊨ ¬E_{{2,3}}^{l−1} K_1 σ_k and g ⊭ ¬E_{{2,3}}^{l−1} K_1 σ_k.
References
[Au] R. Aumann, Agreeing to disagree, Annals of Statistics 4 (1976), pp. 1236–1239.

[Ba] J. Barwise, Three views of common knowledge, Proc. 2nd Conference on Theoretical Aspects of Reasoning about Knowledge (M. Y. Vardi, ed.), 1988, pp. 365–379.

[BD1] A. Brandenburger and E. Dekel, Common knowledge with probability one, J. Math. Economics 16 (1987), pp. 237–245.

[BD2] A. Brandenburger and E. Dekel, Hierarchies of beliefs and common knowledge, Research Paper No. 841, Department of Economics, Stanford University, 1985.

[BS] R. Brachman and B. Smith (eds.), SIGART Newsletter, No. 70, Special issue on knowledge representation, 1980.

[CM] H. H. Clark and C. R. Marshall, Definite reference and mutual knowledge, in Elements of Discourse Understanding (A. K. Joshi, B. L. Webber, and I. A. Sag, eds.), Cambridge University Press, Cambridge, 1981.

[DM] C. Dwork and Y. Moses, Knowledge and common knowledge in a Byzantine environment: crash failures (extended abstract), Proc. Conf. on Theoretical Aspects of Reasoning about Knowledge (J. Y. Halpern, ed.), Morgan Kaufmann, 1986, pp. 149–170; to appear in Information and Computation.

[DS] D. Dolev and H. R. Strong, Authenticated algorithms for Byzantine agreement, SIAM J. Computing 12 (1983), pp. 656–666.

[FH] R. Fagin and J. Y. Halpern, Reasoning about knowledge and probability: preliminary report, Proc. 2nd Conference on Theoretical Aspects of Reasoning about Knowledge (M. Y. Vardi, ed.), 1988, pp. 277–293.
[FHV] R. Fagin, J. Y. Halpern, and M. Y. Vardi, What can machines know? On the epistemic properties of machines, Proc. National Conf. on Artificial Intelligence (AAAI-86), 1986, pp. 428–434 (a revised and expanded version appears as IBM Research Report RJ 6250, 1988).

[FV1] R. Fagin and M. Y. Vardi, An internal semantics for modal logic, Proc. 17th ACM Symp. on Theory of Computing, Providence, 1985, pp. 305–315.

[FV2] R. Fagin and M. Y. Vardi, Knowledge and implicit knowledge in a distributed environment: preliminary report, Proc. Conference on Theoretical Aspects of Reasoning about Knowledge (J. Y. Halpern, ed.), 1986, pp. 187–206.

[Ha1] J. Y. Halpern, Reasoning about knowledge: an overview, Proc. Conference on Theoretical Aspects of Reasoning about Knowledge (J. Y. Halpern, ed.), 1986, pp. 1–17.

[Ha2] J. Y. Halpern, Using reasoning about knowledge to analyze distributed systems, in Annual Review of Computer Science, Vol. 2 (J. Traub et al., eds.), Annual Reviews Inc., 1987, pp. 37–68.

[HM1] J. Y. Halpern and Y. O. Moses, Knowledge and common knowledge in a distributed environment, Proc. 3rd ACM Symp. on Principles of Distributed Computing, 1984, pp. 50–61 (a revised and expanded version appears as IBM Research Report RJ 4421, 1988).

[HM2] J. Y. Halpern and Y. O. Moses, A guide to the modal logic of knowledge, Proc. 9th Int'l Joint Conference on Artificial Intelligence (IJCAI-85), 1985, pp. 480–490.

[HV] J. Y. Halpern and M. Y. Vardi, The complexity of reasoning about knowledge and time, Proc. 18th ACM Symp. on Theory of Computing, 1986, pp. 304–315.
[Hi] J. Hintikka, Knowledge and Belief, Cornell University Press, 1962.

[Ho] J. E. Hopcroft, An n log n algorithm for minimizing the states in a finite automaton, in Theory of Machines and Computations (Z. Kohavi, ed.), Academic Press, 1971, pp. 189–196.

[HC] G. E. Hughes and M. J. Cresswell, Modal Logic, Methuen, 1968.

[Kr] S. Kripke, Semantical analysis of modal logic, Zeitschrift für Mathematische Logik und Grundlagen der Mathematik 9 (1963), pp. 67–96.

[La] R. E. Ladner, The computational complexity of provability in systems of modal propositional logic, SIAM J. Comput. 6 (1977), pp. 467–480.

[Le] D. Lehmann, Knowledge, common knowledge, and related puzzles, Proc. 3rd ACM Symp. on Principles of Distributed Computing, 1984, pp. 62–67.
[Mc] J. McCarthy, personal communication, 1984.

[MH] J. McCarthy and P. Hayes, Some philosophical problems from the standpoint of artificial intelligence, in Machine Intelligence 4 (D. Michie, ed.), American Elsevier, 1969, pp. 463–502.

[MSHI] J. McCarthy, M. Sato, T. Hayashi, and S. Igarashi, On the model theory of knowledge, Stanford AI Lab Memo AIM-312, Stanford University, 1978.

[MZ] J. F. Mertens and S. Zamir, Formulation of Bayesian analysis for games with incomplete information, Int'l J. Game Theory 14 (1985), pp. 1–29.

[Mi] P. Milgrom, An axiomatic characterization of common knowledge, Econometrica 49:1 (1981), pp. 219–222.

[MS] D. Monderer and D. Samet, Approximating common knowledge with common beliefs, Games and Economic Behavior 1 (1989), pp. 170–190.

[Mo1] R. C. Moore, A formal theory of knowledge and action, in Formal Theories of the Commonsense World (J. Hobbs and R. C. Moore, eds.), Ablex Publishing Corp., 1985.

[Mo2] R. C. Moore, Semantical considerations on nonmonotonic logic, Artificial Intelligence 25 (1985), pp. 75–94.

[MT] Y. Moses and M. Tuttle, Programming simultaneous actions using common knowledge, Proc. 27th IEEE Symp. on Foundations of Computer Science, 1986, pp. 208–221.

[Pa] D. Park, Concurrency and automata on infinite sequences, in Proc. 5th GI Conf. on Theoretical Computer Science (P. Deussen, ed.), Lecture Notes in Computer Science 104, Springer-Verlag, 1981.
[PSL] M. Pease, R. Shostak, and L. Lamport, Reaching agreement in the presence of faults, J. ACM 27 (1980), pp. 228–234.

[Ro] S. J. Rosenschein, Formal theories of knowledge in AI and robotics, New Generation Computing 3 (1985), pp. 345–357.

[RK] S. J. Rosenschein and L. P. Kaelbling, The synthesis of digital machines with provable epistemic properties, in Theoretical Aspects of Reasoning about Knowledge (J. Y. Halpern, ed.), Morgan Kaufmann, 1986, pp. 83–98.

[Sa] M. Sato, A Study of Kripke-type Models for Some Modal Logics, Research Institute for Mathematical Sciences, Kyoto University, Kyoto, Japan, 1976.

[TW] T. Tan and S. Werlang, On Aumann's notion of common knowledge – an alternative approach, Discussion Paper in Economics and Econometrics 85-26, University of Chicago, 1985.

[VSG] P. van Emde Boas, J. Groenendijk, and M. Stokhof, The Conway paradox: its solution in an epistemic framework, Proc. 3rd Amsterdam Montague Symp., 1980.

[Va1] M. Y. Vardi, A model-theoretic analysis of monotonic knowledge, Proc. 9th Int'l Joint Conference on Artificial Intelligence (IJCAI-85), 1985, pp. 509–512.

[Va2] M. Y. Vardi, On the complexity of epistemic reasoning, Proc. 4th IEEE Symp. on Logic in Computer Science, June 1989, pp. 243–252.