
Bounded rationality and Epistemic Logic
Giacomo Sillari
University of Pennsylvania
Preliminary Draft
0. Introduction.
As is well known, there exists a chasm between the picture of perfect rationality on which modern
mainstream economics is by and large built, and the limited reasoning and decision-making abilities
displayed by real agents. Herbert Simon's groundbreaking work (cf., for instance, Simon 1955) pioneered
research on bounded rationality. Initially a study of administrative and organizational behavior, the
study of boundedly rational agency has since flourished in a variety of areas. In decision
theory, the momentous conceptual shift is from focusing on optimal behavior to focusing on satisficing
behavior. Instead of maximizing her expected utility, a boundedly rational agent could adopt a satisficing
heuristic: scan the possible outcomes until she retrieves one that exceeds her aspiration level (a
minimal sketch of such a procedure is given at the end of this paragraph). A
crucial theoretical element in accounts of bounded rationality is the context of decision. In fact, decision
making is only the endpoint of a process that involves information gathering and knowledge
processing. In prospect theory, for instance, the choice over prospects is preceded by an editing phase in
which prospects, their probabilities and their utilities are constructed. Crucial psychological evidence about
the limits of human rationality points out the relevance of framing for decisions: decisions are largely
dependent on the frame of reference in which options are given to the decision maker.
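The following Python fragment is the sketch just referred to; it is purely illustrative (the function
names, the option list and the aspiration level are mine, not Simon's):

    def satisfice(options, utility, aspiration):
        # Return the first option whose utility meets the aspiration level.
        # Unlike maximization, the search stops at the first "good enough"
        # option; if none qualifies, the agent might lower her aspiration.
        for option in options:
            if utility(option) >= aspiration:
                return option
        return None  # no satisficing option was found

    # Example: scanning job offers until one clears the aspiration level.
    offers = [38000, 42000, 55000, 61000]
    print(satisfice(offers, lambda salary: salary, aspiration=50000))
    # -> 55000: the higher offer of 61000 is never even inspected

Note that the procedure never compares options with one another: it trades optimality for a
drastically reduced search.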
One of the assumptions of ideal rationality is that agents are perfect reasoners. Not only do they invariably
come up with an ideal, objective representation of the decision space, but they are also able to perform
the computationally demanding calculations that lead to the choice of an action maximizing the agent's
utility. In order to do so, the agent is supposed to be able to handle sophisticated probabilistic
reasoning. While common sense indicates that human agents are not able to perform such calculations,
experimental evidence moreover shows that human agents (i) are persistently biased in their
probabilistic reasoning and (ii) use, in lieu of the probability calculus, a variety of simple heuristics. Simple
heuristics, according to Gigerenzer and the ABC group, "make us smart" in that they allow agents to
deviate from the rationality canon and yet achieve satisficing, if suboptimal, outcomes.
In this paper I look at limitations to agents’ logical abilities or, as it were, limitations to their epistemic
rationality. The idea, in a nutshell, is that subjects display different kinds of access to items of knowledge
and belief. Beliefs to which an agent has access are explicitly held and possibly used in the agent’s
deductions. Beliefs to which an agent has no access are only implicitly held and cannot be used by the
agent. The framework in which the distinction above is implemented is epistemic logic, the formal
representation of knowledge and belief pioneered by the work of Jaakko Hintikka in the 60s, revived by
the interest of both economists and computer scientists in the 80s and thriving today at the intersection
of disciplines like philosophy, economics, informatics and cognitive science. The argument of this paper
is structured as follows: (1) experimental evidence, traditionally interpreted as meaning that human
reasoners fail to perform even the simplest of logical tasks, and hence that logic shouldn't find a
privileged spot in a theory of human reasoning, is re-interpreted to show that logical reasoning is a
(qualified) feature of human rationality. (2) In the following section I argue that logical reasoning, and
representations thereof, needs to be informed by psychological, empirical elements, and I review
different approaches, not all satisfactory, to integrating psychologistic elements into logic. (3) I will then
introduce the traditional formalism for representing knowledge and belief in logic and economics and
show its descriptive inadequacy as stemming from the so-called logical omniscience of agents. (4) A
philosophical discussion of logical omniscience based on the works of Isaac Levi and of Allen Newell is
used to support a weakening of the formalism and the introduction of a notion of accessibility in the
language. (5) In the following section, I offer a description of the formalism and of some of the main
results, as well as this author’s contributions, in the “awareness logic” literature while (6) in the final
section I propose possible avenues for future research and conclude.
1. The Wason Selection Task
The classic locus for experimental evidence on the logical performance of subjects is the Wason
Selection Task (see Wason, 1966). The subjects are shown four cards showing (in the original version of
the problem) the characters E, K, 4 and 7 on their visible side. Subjects are told that each card has a
number on one side and a letter on the other. They are then asked to “name the cards, and only those
cards, which need to be turned in order to determine whether the [following] rule is true or false”: If a
card has a vowel on one side, it has an even number on the other. The most popular responses are that
(i) one has to turn the E and 4 cards or (ii) only the E card. Both answers are incorrect: the E card needs
to be checked, since an odd number on the other side would invalidate the rule. However, any letter on
the flip side of the 4 card is consistent with the rule, and of course so is any number on the flip side of
the K card. On the other hand, a vowel on the flip side of the 7 card would invalidate the rule. So, in
order to disconfirm the rule (that is, in order to check for its validity), subjects need to look at both the E
and the 7 cards. Wason discovered that only about 4% of subjects actually indicate the correct solution,
a fact that would indicate a severe lack of logical skills in human subjects, and in fact a cognitive bias—the
confirmation bias—that prevents us from reasoning correctly about even simple logical implications. Such
results can be taken as evidence that logic does not (and in fact should not) play a role in actual human
reasoning: “only gradually did we realize first that there was no existing formal calculus that correctly
modeled our subjects’ inferences, and second that no purely formal system would succeed.” (Wason
and Johnson-Laird 1972).
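The normatively correct answer to the task can be verified mechanically. The following Python sketch
(the encoding of cards as characters is mine, for illustration) enumerates the possible hidden sides and
flags exactly those cards for which some hidden value would falsify the rule:

    VOWELS = set("AEIOU")

    def falsifies(visible, hidden):
        # The rule is violated iff some side shows a vowel while the other
        # side shows an odd number.
        for side, other in ((visible, hidden), (hidden, visible)):
            if side in VOWELS and other.isdigit() and int(other) % 2 != 0:
                return True
        return False

    def worth_turning(visible):
        # Letters hide numbers and vice versa, as the subjects are told.
        hidden_values = ([str(d) for d in range(10)] if visible.isalpha()
                         else [chr(c) for c in range(ord("A"), ord("Z") + 1)])
        return any(falsifies(visible, h) for h in hidden_values)

    print([card for card in "EK47" if worth_turning(card)])  # -> ['E', '7']

The computation makes vivid why the 4 card is uninformative: no letter on its flip side can falsify a rule
that is silent about even numbers.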
In a recent book, Stenning and van Lambalgen (2008) re-evaluate evidence from the vast literature on
the Wason selection task. Their idea is that, in and of itself, the WST does not speak against the logical
character of human reasoning. They argue that, on the contrary, the way subjects understand,
conceptualize and represent the task determines the use of different kinds of reasoning, and may explain
the results of the vast experimental follow-up. In particular, subjects facing the WST need to make
several choices about representation: about the implication written in the rule (is it material implication? is
it a conditional? does it allow exceptions?), about the cards themselves (are they a sample from a larger
set?), about the notion of "truth" (does not disproving the rule amount to validating it?), and, most
importantly, about how to understand the rule. It is well known that 'pragmatic' versions of the WST (e.g.
versions in which the rule is of the kind "if under 21, then not allowed to drink beer") see much higher
proportions of correct answers. This suggests, according to van Lambalgen and Stenning, that agents
distinguish between characterizations of descriptive rules (like the original rule from the WST) and
deontic rules (like the drinking rule). In particular, while the latter is assumed to be valid, the validity of
the former is to be established. This would prompt two different modes of logical reasoning, that is to
say two distinct logical forms for the task. The deontic form assumes the validity of the rule from the
onset, and therefore the information-processing task, here, is the inspection of those cases that could
possibly violate the norm.
2. Logic and Psychology
What do failures at the WST entail for our views about reasoning and logic? Do they mean that subjects
fail to comply with rules that ought to be observed if one wants to reason correctly? If the answer to this
question is yes, then logic is normative and human subjects fail to live by the standards of epistemic
rationality. But what would such standards be? Do logical laws exist independently from their
incarnation in human reasoning, or does logic have empirical content instead? If the latter, then logic
and cognitive psychology are inextricably, and productively, linked to one another.
In this paper, I argue that logic does have some degree of normativity, with the proviso that there is no
privileged logic to model reasoning, but rather a wealth of logical systems, each one apt for modeling
reasoning in different situations. In mathematics, we need classical or constructive reasoning; in
problem-solving, classical or non-monotonic; in knowledge representation, epistemic or dynamic; and so
on. However, before a particular deductive system is adopted, we do perform a crucial initial operation,
which is that of representing the relevant information for the scenario in which we are acting. Thus,
while modus ponens is a classically valid rule of inference regardless of whether we use it correctly
or not, context and the way subjects interpret its elements bear an influence on the kind of inferential
practices that human reasoners adopt. But if a subject's interpretation of the contextual elements is of
such great importance for reasoning, then it makes sense to enrich our notion of logic by incorporating
psychologistic elements into the modeling of reasoning, be it by appealing to ordinary language users'
intuitions or, as is more and more often the case, by resorting to the work of cognitive scientists
(see the position paper Verbrugge 2009, the recent special issues of Studia Logica, Journal of Logic,
Language, and Information, and Topoi on various aspects of logic, psychologism and the interrelation
between logic and cognitive science).
The line between abstract logical correctness and the pragmatics of human reasoning needs then to be
blurred. But to what extent? A possibility would be that of identifying logic and reasoning tout court, by
having the latter cannibalize the former. Such a radically individualistic position identifies, for each
reasoner i, the rules of reasoning by which i abides with the rules of logic (for agent i). "Logical
individualism” is of course untenable, for one of its consequences—the absolute relativity of logical
truth—defies the purpose of identifying general principles of inference that can aptly describe the
reasoning of flesh-and-blood agents. As much as we want to model agents who are epistemically rational
only in a limited, bounded sense, we do need to hold agents accountable to at least some of their
epistemic or doxastic commitments. But full relativism of the kind yielded by psychological individualism
would prevent us from obtaining that goal.
A more promising idea might be that of monitoring the practices of a relevant group of agents
and identifying the rules of correct reasoning with some average of the observable performance of such a
group. "Correct reasoning" is then given by the lowest common denominator established by a relevant
target group, or it is yielded by some kind of average in the logical practices of the target group.
Whatever the conception under which we take the performance of a (selected) group to be the
benchmark for correct reasoning, such a descriptivist position is more appealing than the individualistic
one. For one thing, truth is not purely subjective and relativized anymore. In fact, contrary to the
individualistic case, methods of reasoning and the conclusions yielded by them can be interpersonally
discussed and communicated. However, this position still leaves us with reasoning rules devoid of
normative force, besides the normativity that stems from the societal disapproval we incur when
violating generally accepted rules of reasoning.
A third way in which we can understand the relation between logic and psychology is based on
"cognitive architectures" (see Pelletier et al., 2008). According to this view, we reason the way we do
because we have evolved a cognitive module specifying the way the human brain works when it comes
to representations and inferences. The mental apparatus developed by the human species to solve
problems related to dominance, social contract enforcement, etc. ensures that there is intersubjective
agreement on logical truth and validity (since the goals for which the cognitive architecture has evolved
are the same for everyone), and the cognitive architecture constitutes a steady benchmark against
which to evaluate logical reasoning (see also Rips 1994). In fact, psychologists like Cummins, and
Cosmides and Tooby, among others, have come up with ingenious evolutionary explanations
reminiscent of Gigerenzer’s argument that humans are better at reasoning about frequencies than at
reasoning about probabilities, exactly because the former kind of ability was more adaptive than the
latter. The inferential processes supported by the adaptive module constitute a benchmark of
competence for human reasoning.
3. Reasoning About Knowledge
The basic idea behind formal representations of knowledge and belief is to give a semantics of such
notions in terms of possible worlds that may or may not be epistemically accessible to the agents. Each
world validates a certain set of atomic propositions (it is sunny, a is a possible action, etc.). An agent is
said to know a given proposition ϕ at world w1 if ϕ holds true in all possible worlds that are epistemically
accessible to that agent from w1 (in symbols: (M,w1) |= Kϕ, where M is the model containing w1). Let me
illustrate this key notion with a simple example used in a public lecture by Johan van Benthem (see van
Benthem 2008). Ann, Bob and Carol receive one card each from a set comprising one red, one white and
one blue card. They can see which card they have been dealt, but they cannot tell which cards the others
hold. We can represent the relevant worlds by the initials of the cards distributed to Ann, Bob, and
Carol, respectively. For instance BWR is the world in which Ann has the blue card, Bob the white card
and Carol the red card. Atoms have the form bAnn, meaning that Ann holds the blue card. bAnn is true in
both BWR and BRW and only in those two possible worlds. Thus, for instance, Ann cannot distinguish
between BWR and BRW. We can schematically represent the model M as follows, representing Ann’s
accessibility relation with solid lines, Bob’s with dotted lines and Carol’s with dashed lines (and omitting
reflexive arrows):
[Figure: the six worlds BRW, WRB, BWR, RWB, WBR and RBW, connected by the three agents'
accessibility relations.]
It is now easy to answer questions about what players know about each other. For instance: at BWR,
does Ann know that Bob has the white card? She does not, since both BWR and BRW are epistemically
accessible for her, and Bob has white in one state but red in the other. However, she knows that Carol
does not have blue, since both in BWR and BRW, bAnn holds. More complex propositions can be tested:
does Bob know that Ann does not know that Bob has white? From BWR, Bob accesses BWR and RWB. At
BWR, Ann doesn't know wBob. From RWB, Ann accesses RBW and RWB, so she does not know wBob
there either. Hence, at all worlds accessible for Bob from BWR, Ann doesn't know wBob, and thus Bob
knows that Ann doesn't know that Bob holds white.
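Such model-checking computations are entirely mechanical, and a compact implementation may help
the reader verify them. The following Python sketch (my encoding: a world is a string giving Ann's,
Bob's and Carol's cards in that order) evaluates knowledge claims on the model just described:

    from itertools import permutations

    # The six deals; world "BWR" means: Ann blue, Bob white, Carol red.
    WORLDS = ["".join(p) for p in permutations("BWR")]
    AGENTS = {"Ann": 0, "Bob": 1, "Carol": 2}

    def accessible(agent, w):
        # Worlds the agent cannot distinguish from w: she sees only her own card.
        i = AGENTS[agent]
        return [v for v in WORLDS if v[i] == w[i]]

    def knows(agent, prop, w):
        # (M, w) |= K_agent prop iff prop holds at every accessible world.
        return all(prop(v) for v in accessible(agent, w))

    w_bob = lambda v: v[1] == "W"    # atom wBob: Bob holds the white card
    b_carol = lambda v: v[2] == "B"  # atom bCarol: Carol holds the blue card

    print(knows("Ann", w_bob, "BWR"))                     # False
    print(knows("Ann", lambda v: not b_carol(v), "BWR"))  # True
    # Higher-order knowledge: Bob knows that Ann does not know wBob.
    print(knows("Bob", lambda v: not knows("Ann", w_bob, v), "BWR"))  # True

The three queries reproduce, respectively, the three claims argued for informally above.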
Knowledge and belief are distinguished in the language by imposing different axiomatic constraints. For
instance, it is customary to accept the veridicality axiom for knowledge: Kϕ → ϕ; that is: if an agent
knows ϕ, then ϕ must be true. Instead of veridicality, accounts of belief accept a weaker constraint: Bϕ
→ ¬B¬ϕ; that is: beliefs are consistent. Such axiomatic constraints translate into structural properties
of possible worlds semantics. For instance, veridicality entails that the accessibility relation be reflexive:
the actual world must be among the worlds the agent considers possible, so that what she knows indeed
holds at the world she is at. Veridicality is not the only property
of knowledge posited by epistemic logicians. Introspective axioms are also required, both in the positive
(Kϕ→KKϕ) and in the negative (¬Kϕ→K¬Kϕ) form. These axioms, along with a few others, ensure that the
accessibility relation on the set of possible worlds is an equivalence relation. This in turn entails that the
accessibility relation induces a partition of the space of possible worlds; in fact, it establishes the
equivalence between the possible worlds models favored by philosophers and the partitional models used in
economics to model agents' knowledge and information.
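The partitional reading, too, can be made concrete. A minimal sketch, reusing the card-deal encoding
from the previous example: since each agent's accessibility relation is an equivalence relation, collecting
the accessible sets yields her information cells.

    from itertools import permutations

    WORLDS = ["".join(p) for p in permutations("BWR")]

    def cell(i, w):
        # The information cell of agent i (0=Ann, 1=Bob, 2=Carol) at world w.
        return frozenset(v for v in WORLDS if v[i] == w[i])

    ann_partition = {cell(0, w) for w in WORLDS}
    print([sorted(c) for c in ann_partition])
    # Three two-element cells, e.g. ['BRW', 'BWR']: Ann knows her own card,
    # and nothing more.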
4. Bounded Epistemic Rationality
The possible worlds framework is a powerful tool for formally representing epistemic and doxastic
attributions. After the inception of epistemic logic in the 60s, the field received new impetus during
the 80s, when computer scientists and economists alike adopted the partitional possible worlds
framework for dealing with multi-agent systems and interactive epistemology, respectively. Soon
enough, however, it became clear that the framework is in fact too powerful. Epistemic logics model
logically omniscient agents: the deductive abilities of agents are too strong for epistemic logic to be an
adequate representation of epistemic and doxastic attributions to agents. Among the undesirable
properties: agents know all propositional tautologies, no matter how complex; they know all the logical
consequences of what they know, no matter how distant, and so on. This was apparent already to
Jaakko Hintikka in his seminal 1962 monograph Knowledge and Belief. His clever, yet partial way out was
to distinguish two senses of “knowledge”: a weak sense, in which knowledge stands simply for
information, and a strong sense, in which knowledge is information held by the agent by virtue of some
kind of justification. Thus, the semantic characterization of knowledge described in the previous section
would in fact be a characterization of information (cf. van Benthem, ms.), while items of knowledge
proper would be those items of information that are entertained by the agent in some relevant
(epistemological or perhaps psychological) sense. It is interesting to note that, though not at all based
on psychological evidence, Hintikka's argument in several places appeals to intuitions recoverable
among ordinary language users (e.g. validity was reconceived as "immunity from certain standards of
criticism").
Several authors have confronted the problem of logical omniscience. In this section I’ll review the ideas
of two scholars—Allen Newell, one of the founding fathers of artificial intelligence, and the
epistemologist Isaac Levi—to show how their approaches can fit with the solution of the problem
proposed here.
4.1 Newell’s Knowledge Level. An intelligent system, in the functional view of agency endorsed by
Newell, is embedded in an action-oriented environment. The system's activity consists in the process
from a perceptual component (that inputs task statements and information), through a representation
module (that represents tasks and information as data structures), to a goal structure (the solution to
the given task statement). In this picture, knowledge is perceived from the external world, and stored as
it is represented in data structures. Newell claims that there is a distinction between knowledge and its
representation, much like there is one between the symbolic level of a computer system and the level
of the actual physical processes supporting the symbolic manipulations. A level in a computer system
consists of a medium (which is to be processed), components together with laws of composition, a
system, and, determining the behavior of the system, laws of behavior. For example, at the symbolic
level the system is the computer, its components are symbols and their syntax, the medium consists of
memories, while the laws of behavior are given by the interpretation of logical operations. Below the
symbolic level, there is the physical level of circuits and devices. Among the properties of levels, we
notice that each level is reducible to the next lower level (e.g., logical operations in terms of switches),
but also that a level need not have a description at higher levels. Newell takes it that knowledge
constitutes a computer system level located immediately above the symbolic level.
At the knowledge level, the system is the agent; the components are goals, actions and bodies (of
knowledge); the medium is knowledge and the behavioral rule is rationality. Notice that the symbolic
level constitutes the level of representation. Hence, since every level is reducible to the next lower level,
knowledge can be represented through symbolic systems. But can we provide a description of the
knowledge level without resorting to the level of representation? It turns out that we can, although we
can do so only if we do not decouple knowledge and action. In particular, says Newell, "it is unclear in what
sense [systems lacking rationality] can be said to have knowledge" (cf. Newell 1982:100), where
"rationality" stands for "principles of action". Indeed an agent, at the knowledge level, is but a set of
actions, bodies of knowledge and a set of goals, rather independently of whether the agent has any
physical implementation. What, then, is knowledge? Knowledge, according to Newell, is whatever can
be ascribed to an agent, such that the observed behavior of the agent can be explained (that is,
computed) according to the laws of behavior encoded in the principle of rationality. The principle of
rationality appears to be unqualified: "If an agent has knowledge that one of its actions will lead to one
of its goals, then the agent will select that action" (cf. Newell 1982:102). Thus, the definition of
knowledge is a procedural one: an observer notices the action undertaken by the agent; given that the
observer is familiar with the agent's goals and its rationality, the observer can therefore infer what
knowledge the agent must possess. Knowledge is not defined structurally, for example as physical
objects, symbols standing for them and their specific properties and relations. Knowledge is rather
defined functionally as what mediates the behavior of the agent and the principle of rationality
governing the agent's actions. Can we not sever the bond between knowledge and action by providing,
for example, a characterization of knowledge in terms of a physical structure corresponding to it? As
Newell explains, “the answer in a nutshell is that knowledge of the world cannot be captured in a finite
structure. [...] Knowledge as a structure must contain at least as much variety as the set of all truths (i.e.
propositions) that the agent can respond to” (Newell 1982:107). Hence, knowledge cannot be captured
in a finite physical structure, and can only be considered in its functional relation with action.
Thus (a version of) the problem of logical omniscience presents itself when it comes to describing the
epistemic aspect of an intelligent system. Ideally (at the knowledge level), the body of knowledge an
agent is equipped with is unbounded, hence knowledge cannot be represented in a physical system.
However, recall from above how a level of interpretation of the intelligent system is reducible to the
next lower level. Knowledge should therefore be reducible to the level of symbols. This implies that the
symbolic level necessarily encompasses only a portion of the unbounded body of knowledge that the
agent possesses.
The interesting question is then: in which way does an agent extract representation from knowledge?
Or, in other terms: Given the definition of representation above, what can its theory be? Building a
theory of representation involves building a theory of access, to explain how agents manage to extract
limited, explicit knowledge (working knowledge, representation) from their unbounded implicit
knowledge. The suggestive idea is that agents do so “intelligently”, i.e. by judging what is relevant to the
task at hand. Such a judgment, in turn, depends on the principle of rationality. Hence, knowledge and
action cannot be decoupled and knowledge cannot be entirely represented at the symbolic level, since it
involves both structures and processes. Logics, as they are "one class of representations [...] uniquely
fitted to the analysis of knowledge and representation", seem to be suitable for such an endeavor. In
particular, epistemic logics in which the notion of access can be built in in order to achieve the
distinction between (unbounded) knowledge and its (limited) representation are natural candidates to
axiomatize theories of explicit knowledge representation.
4.2 Commitment and Performance. Levi illustrates (see, for instance, Levi 1980) the concept of
epistemic commitment through the following example: an agent is considering what integer stands in
the billionth decimal place in the decimal expansion of π. She is considering ten hypotheses of the form
"the integer in the billionth decimal place in the decimal expansion of π is j", where j designates one of
the ten digits 0 through 9. Exactly one of those hypotheses is consistent with the logical and mathematical
truths that, according to Levi, are part of the incorrigible core of the agent's body of knowledge.
However, it is reasonable to think that, if the agent has not performed (or has no way to perform) the
needed calculations, “there is an important sense in which [the agent] does not know which of these
hypotheses is entailed by [logical and mathematical truths] and, hence, does not know what the integer
in the billionth place in the decimal expansion of π” (Levi 1980:9-10). Levi stresses that the agent is
committed to believing the right hypothesis, but she may at the same time be unaware of what the right
hypothesis is. While the body of knowledge of an ideally situated and rational agent contains all logical
truths and their consequences, the body of knowledge of real persons or institutions does not. Epistemic
(or, as Levi prefers, doxastic) commitments are necessary constituents of knowledge, which, although
ideally sufficient to achieve knowledge, must in practice be supplemented with a further element. As
Levi puts it: "I do assume, however, that to be aware of one's commitments is to know what they are"
(Levi 1980:12).
The normative aspect of the principle of rationality regulating epistemic commitments and, hence, their
relative performances is further explored in Levi (1997). Levi maintains that the principle of rationality
in inquiry and deliberation is twofold. On the one hand, it imposes necessary, but weak, coherence
conditions on the agent's state of full belief, credal probability, and preferences. On the other, it
provides minimal conditions for the justification of changes in the agent's state of full belief, credal
probability, and preferences. As weak as the coherence constraints might be, they are demanding well
beyond the capability of any actual agent. For instance, full beliefs should be closed under logical
consequence; credal probabilities should respect the laws of probability; and preferences should be
transitive and satisfy independence conditions. Hence, such principles of rationality are not to be
thought of as descriptive (or predictive) or, for that matter, normative (since it is not sensible to impose
conditions that cannot possibly be fulfilled). They are, says Levi, prescriptive, in the sense that they do
not require compliance tout court, but rather demand that we enhance our ability to follow them.
Agents fail to comply with the principle of rationality requiring the deductive closure of their belief set,
and they do so for multiple reasons. An agent might fail to entertain a belief logically implied by other
beliefs of hers because she is lacking in attention. Or, being ignorant of the relevant deductive rules, she
may draw an incorrect conclusion or even refuse to draw a conclusion altogether. The former case,
according to Levi, can be accommodated by understanding belief as a disposition to assent upon
interrogation. In the latter, the agent needs to improve her logical abilities—by “seeking therapy”. In
both cases, however, what is observed is a discrepancy between the agent's commitment to hold an
epistemic disposition, and her epistemic performance, which fails to display the disposition she is
committed to having. The prescriptive character of the principle of rationality gives the agent an
(epistemic) obligation to fulfill the commitment to full belief. The agent is thus committed to holding
such a belief. The notion of full belief appears both as an epistemic disposition (commitment) of the
agent and as the actual performance of that disposition.
The discussion of Levi's idea of epistemic commitment provides us with three related, yet distinct
concepts involved in the description of the epistemic state of an agent. On the one hand, we have
epistemic commitments. On the other, we have commitments that the agent fulfills, that is to say, in the
terminology of Levi (1980), commitments of which the agent is aware. The latter, though, calls for a
third element, the agent's awareness of the commitment she is going to fulfill.
5. Models of (Un)awareness
The two accounts analyzed above share a common structure. In the case of Allen Newell's analysis of
agency at the knowledge level, knowledge representation (readily available to the agent) is distinguished
from knowledge tout court in that it consists of the latter plus some form of access to it. The slogan, by
Newell, is thus: Representation = Knowledge + Access. In the case of Levi, flesh-and-blood agents display
logical performances that differ more or less markedly from their logical commitments. Under the
cognitive architecture view expounded in section 2, agents' commitments would represent their logical
competence. Knowledge, thus, relies again on two factors: commitment (or competence) and access (or,
in Levi's words, awareness). How can we incorporate the issue of accessibility into the fabric of epistemic
logic? The answer comes from computer science. Fagin and Halpern (1988) put forth a logic of
awareness in which the picture of knowledge as based on access finds a precise formalization. In their
logic there are three operators pertaining to epistemic issues: an implicit knowledge operator, which
behaves in the standard way illustrated in section 3; an awareness operator, which has in its scope the
formulas to which the agent has access; and an explicit knowledge operator, which represents the
realistic notion of knowledge we are interested in and which, intuitively, applies to those items of implicit
knowledge to which the agent has access or, in Fagin and Halpern's language, of which she has
awareness.
The idea of introducing a function that sieves an agent's unbounded knowledge to distill those items of
knowledge she is in fact actively entertaining represents a first step towards the explicit introduction of
the element of context representation in logical formalisms. Technically, the intuition is that the
juncture where issues of accessibility become prominent is the linguistic representation of propositions;
hence the language distinguishes the formulas an agent is aware of (has access to) and those she is not.
At the semantic level, each world is associated with an ‘awareness set’, representing the propositions to
which the agent has access. Implicit knowledge is defined as usual, while explicit knowledge is yielded,
syntactically, by the conjunction of implicit knowledge and awareness while, semantically, the truth
conditions for an agent to explicitly know ϕ at w hold if and only if she implicitly knows ϕ and ϕ belongs
to the agent’s awareness set at w.
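To illustrate the definition, here is a minimal extension of the card-deal sketch of section 3. In Fagin and
Halpern's logic awareness sets contain arbitrary formulas; for brevity the sketch below restricts them to
atoms, and the particular awareness sets are hypothetical choices of mine:

    from itertools import permutations

    WORLDS = ["".join(p) for p in permutations("BWR")]  # deals: Ann, Bob, Carol

    def implicit_knows(i, prop, w):
        # Standard (implicit) knowledge: prop holds at all worlds agent i
        # cannot tell apart from w (same card in position i).
        return all(prop(v) for v in WORLDS if v[i] == w[i])

    # Hypothetical awareness sets: at every world, Ann has access only to wBob.
    awareness_ann = {w: {"wBob"} for w in WORLDS}

    def explicit_knows_ann(atom, prop, w):
        # Explicit knowledge = implicit knowledge + awareness.
        return implicit_knows(0, prop, w) and atom in awareness_ann[w]

    b_ann = lambda v: v[0] == "B"  # atom bAnn: Ann holds the blue card
    print(implicit_knows(0, b_ann, "BWR"))           # True: implicitly known
    print(explicit_knows_ann("bAnn", b_ann, "BWR"))  # False: unaware of bAnn

The gap between the two answers is exactly the gap between commitment and performance discussed
in section 4.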
Awareness logics are versatile, in that various properties of awareness can be axiomatically included in
the system. A kind of awareness particularly favored by economists is so-called awareness generated
by primitive propositions. In these systems, a subset of the set of atoms in the language is
specified, and agents are aware of all (and only) those formulas that mention atoms from the relevant
subset (see the sketch at the end of this section). Clearly, these systems are still a wild idealization of
agents' deductive capabilities, since it is not hard to see that, with respect to the part of the language of
which the agent is aware, the agent is fully logically omniscient. Thus, such models are perhaps better
seen as models of unawareness rather than awareness. In the economic literature, attempts to model
agents' awareness go back to Modica and Rustichini (1994, 1999). At first, following the lead of
Geanakoplos (1989, 1992), the two Italian economists attempted to model awareness in standard
structures by dropping the negative introspection axiom and positing Aϕ ↔ (Kϕ ∨ K¬Kϕ). Dekel, Lipman
and Rustichini (1998) show that
the standard approach trivializes awareness (if an agent has access to one formula, then she has
access either to all formulas or to no formulas at all), and Modica and Rustichini (1999) came up
with a non-standard approach introducing the notion that agents have access to a (possibly proper)
subset of the set of all atoms in the language. Halpern (2001) shows that their approach, although
lacking an implicit knowledge operator, is equivalent to a special case of the epistemic logic of
awareness introduced by Fagin and Halpern (1988). This author has contributed to the literature on the
logics of awareness in various publications: in Sillari (2005, 2008c), awareness logic is used to model the
distinction from Lewis (1969) between beliefs and reasons to believe; in Sillari (2008a), a predicate
system of awareness logic is defined and it is shown that it can be used to solve a problem with the
expressivity of propositional systems, namely that the latter cannot express 'awareness of unawareness',
which, however, is a fairly common epistemic phenomenon; in the same paper, it is shown that the
predicate logic has an expressive decidable fragment; in Sillari (2008b), equi-expressivity is proven
between various systems of epistemic logic meant to cope with logical omniscience and systems based
on impossible possible worlds.
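The special case of awareness generated by primitive propositions, mentioned above, is particularly
easy to implement: awareness of a formula reduces to a check on the atoms it mentions. A small sketch
(the tuple encoding of formulas is mine, for illustration):

    # Formulas as nested tuples, e.g. ("->", ("K", "p"), ("not", "q")).
    def atoms(formula):
        if isinstance(formula, str):  # an atom
            return {formula}
        _op, *subformulas = formula
        return set().union(*(atoms(sub) for sub in subformulas))

    def aware_of(formula, generating_atoms):
        # Awareness generated by primitive propositions: the agent is aware
        # of a formula iff all atoms it mentions lie in the generating set.
        return atoms(formula) <= generating_atoms

    S = {"p", "q"}
    print(aware_of(("->", ("K", "p"), ("not", "q")), S))  # True
    print(aware_of(("K", ("or", "p", "r")), S))           # False: mentions r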
6. Further Research and Conclusions
6.1 A Non-Expected Utility Model of Decision Theory. In the seminal QJE 1955 article "A Behavioral
Model of Rational Choice", where he introduced the satisficing heuristic, Herbert Simon writes: "Models
of rational behavior — both the global kinds usually constructed, and the more limited kinds to be
discussed here — generally require some or all of the following elements:[…] The subset of behavior
alternatives that the organism ‘considers’ or ‘perceives.’ That is, the organism may make its choice
within a set of alternatives more limited than the whole range objectively available to it. The
‘considered’ subset can be represented by a point set A’, with A’ included in A.” The intuition seems
captured by the semantic distinction between formulas of which an agent is aware and those of which
she is not. A model of decision theory with non-omniscient agents is due to Lipman (1999) and, to my
knowledge, is the only such model available. However, it is cast in a different framework than the one
proposed here (impossible worlds rather than awareness structures). By enriching the state space A
with a subjective state space A', we introduce a third variable, besides the two (utility and probability)
present in the traditional Bayesian framework, namely the subjective portion of the state space (see the
sketch at the end of this section). The task would be that of formulating and proving a representation
theorem in this setting. Intuitively, the model will allow for a larger number of behavioral patterns to be
represented: some of the patterns forbidden by traditional expected utility models will now be explained
by failures of logical omniscience. Some staple axioms of decision theory will have to be dropped: for
instance, the Sure-Thing principle would probably cease to hold, since, because of the subjective variable,
agents need not be able to tell that two actions induce the same outcome at each state of a given event.
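Simon's distinction between A and the 'considered' subset A' is, again, easy to make concrete. A toy
sketch (the alternatives and their utilities are invented for illustration):

    # Objective alternatives A with their utilities, and the subjective
    # subset A' that the agent actually perceives.
    A = {"a1": 5, "a2": 9, "a3": 7}
    A_considered = {"a1", "a3"}  # A', with A' included in A

    best = max(A_considered, key=A.get)
    print(best)  # 'a3': optimal relative to A', suboptimal relative to A

Choice is optimal only relative to the perceived alternatives, which is precisely the kind of behavioral
pattern that a representation theorem in this setting should accommodate.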
6.2 Intelligent Interaction. So far, I haven't considered important notions pertaining to knowledge in
groups. Common knowledge (the epistemic state in which everybody in a group of agents knows a given
proposition, everybody knows that everybody knows it, everybody knows that everybody knows that
everybody knows it, etc., cf. Vanderschraaf and Sillari 2007 for an overview) and its dual distributed
knowledge (the knowledge that can be derived by pooling together the individual items of knowledge
from a group of agents) are key notions in both economics and computer science applications. Modeling
group epistemic notions in a formalism in which different agents may lack access to different portions of
the state space is a challenge that cannot be solved apodictically, but that, rather, needs to take into
account a richer context in which agents interact. Notions of group knowledge necessarily involve
interaction and a variety of logical tasks, of which inference is only one. Evaluation and update—calling
for a dynamic setting—are crucial reasoning tasks for intelligent agency. In a dynamic setting we can
take into account how agents come to have access to (or come to forget) a given proposition. Perhaps an
agent is made aware of a proposition because she has performed an inference, or perhaps because
another agent has communicated with her. Different agents differ in significant ways (cf. Liu 2006): some
are more introspective, some are less trusting, all have different policies for revising their beliefs (and
therefore for learning) etc. Such different attitudes can be accommodated in a fuller version of the
logical formalism sketched in this paper.
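Both group notions are readily computable over the card-deal model of section 3, which may help fix
intuitions; the sketch below (again with my encoding) intersects accessibility for distributed knowledge
and takes the reflexive-transitive closure of the union of the agents' relations for common knowledge:

    from itertools import permutations

    WORLDS = ["".join(p) for p in permutations("BWR")]

    def accessible(i, w):
        return {v for v in WORLDS if v[i] == w[i]}

    def distributed_knows(agents, prop, w):
        # Pool the agents' information by intersecting their accessible sets.
        reachable = set(WORLDS)
        for i in agents:
            reachable &= accessible(i, w)
        return all(prop(v) for v in reachable)

    def common_knows(agents, prop, w):
        # prop must hold at every world reachable via the reflexive-transitive
        # closure of the union of the agents' accessibility relations.
        reachable, frontier = {w}, {w}
        while frontier:
            step = {v for u in frontier for i in agents for v in accessible(i, u)}
            frontier = step - reachable
            reachable |= step
        return all(prop(v) for v in reachable)

    # Ann (0) and Bob (1) together pin down the whole deal at BWR...
    print(distributed_knows([0, 1], lambda v: v == "BWR", "BWR"))  # True
    # ...but which card Carol holds is not common knowledge between them.
    print(common_knows([0, 1], lambda v: v[2] == "R", "BWR"))      # False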
6.3 Conclusions. Research on bounded rationality has considered theoretical and psychological aspects
of choice, discovered common and persistent biases in probabilistic reasoning, and revealed the importance
of contextual framing for adequate theories of decision. In this paper I concentrate on bounded
epistemic rationality, and characterize the way boundedly rational agents reason about their or other
agents' knowledge. The central idea is to introduce into the formalism the notion of accessibility: by
describing agents who do not entertain at once all the possible formulas of the language, but only a
subset of them, we are able to limit their deductive abilities and to take the first steps towards the formal
characterization of patterns of intelligent interaction.
Bibliography
Dekel, E., B. L. Lipman and A. Rustichini (1998) Standard State-Space Models Preclude Unawareness.
Econometrica, 66(1):159-173
Fagin R. and J. Halpern (1988) Belief, Awareness and Limited Reasoning. Artificial Intelligence, 34(1):39-76
Geanakoplos, J. (1989) Game Theory Without Partitions, and Applications To Speculation and
Consensus. Cowles Foundation Discussion Paper
Geanakoplos, J. (1992) Common Knowledge, Bayesian Learning and Market Speculation with Bounded
Rationality. Journal of Economic Perspectives 6:58-82
Halpern, J. (2001) Alternative Semantics for Unawareness. Games and Economic Behavior, 37(2):321-339
Hintikka, J. (1962) Knowledge and Belief. Cornell University Press
Levi, I. (1980) The Enterprise of Knowledge. The MIT Press
Levi, I. (1997) The Covenant of Reason. Cambridge University Press
Lipman, B. (1999) Decision Theory Without Logical Omniscience: Toward an Axiomatic Framework for
Bounded Rationality. Review of Economic Studies, April 1999
Liu, F. (2006) Diversity of Agents. In Thomas Agotnes and Natasha Alechina (eds.) Proceedings of the
Workshop on Logics for Resource-Bounded Agents, ESSLLI, Malaga, Spain 2006, pp. 88-98
Modica S. and A. Rustichini (1994) Awareness and Partitional Information Structures. Theory and
Decision, 37(1):107-124
Modica S. and A. Rustichini (1999) Unawareness and Partitional Information Structures. Games and
Economic Behavior, 27(2):265-298
Newell, A. (1982) The Knowledge Level. Artificial Intelligence, 18(1)
Pelletier, F. J., R. Elio and P. Hanson (2008) Is Logic all in our Heads? From Naturalism to Psychologism.
Studia Logica, 88:3-66, special issue on “Psychologism in Logic?”, H. Leitgeb, ed.
Rips, L. J. (1994) The Psychology of Proof: Deductive Reasoning in Human Thinking. The MIT Press
Sillari, G. (2005) A Logical Framework for Convention. Synthese 147(2):379-400
Sillari, G. (2008a) Models of Awareness. In Bonanno, G., W. van der Hoek and M. Wooldridge, eds. Logic
and the Foundations of Game and Decision Theory. Amsterdam University Press, pp. 209-240
Sillari, G. (2008b) Quantified Logic of Awareness and Impossible Possible Worlds. Review of Symbolic
Logic 1(4):1-16
Sillari, G. (2008c) Common Knowledge and Convention. Topoi 27(1):29-40
Simon, H. A. (1955) A Behavioral Model of Rational Choice. Quarterly Journal of Economics 69:99-118
Stenning, K. and M. van Lambalgen (2008) Human Reasoning and Cognitive Science. The MIT Press
van Benthem, J. F. A. K. (2008) Logic and Reasoning: Do the Facts Matter? Studia Logica, 88:67-84,
special issue on “Psychologism in Logic?”, H. Leitgeb, ed.
van Benthem, J. F. A. K. (ms.) Logical Dynamics of Information and Interaction. Manuscript
Vanderschraaf, P. and G. Sillari (2007) Common Knowledge. In Zalta, E. (ed.) Stanford Encyclopedia of
Philosophy
Verbrugge, R. (2009) Logic and Social Cognition: The Facts Matter, and So Do Computational Models.
Journal of Philosophical Logic 38:649-680
Wason, P. C. (1966) Reasoning. In B. M. Foss (ed.), New Horizons in Psychology. Penguin
Wason, P. C. and P. N. Johnson-Laird (1972) Psychology of Reasoning: Structure and Content. Harvard
University Press