How Can Psychology Help Artificial Intelligence
Alvaro del Val
Departamento de Ingeniería Informática
Universidad Autónoma de Madrid
[email protected]
http://www.ii.uam.es/~delval
March 29, 1999
Abstract
This paper discusses the relationship between Artificial Intelligence (AI) and Psychology. As an AI practitioner, I'll let psychologists evaluate the impact of AI, or more generally computational models of intelligence, perception, and agency, on their discipline. I'll focus on the impact of psychology on AI today, especially on how psychology could help AI more than it is currently doing. I will emphasize the differences in evaluation criteria for each discipline. Looking into the future, I'll discuss a number of fundamental technological hurdles faced by AI today, which could benefit from a better understanding of certain forms of human reasoning and acting. In particular, I'll suggest that cognitive psychology, in order to be useful to AI, needs to study common-sense knowledge and reasoning in realistic settings, and to focus less on errors in performance in favour of studying how people do well the things they do well.
1 Introduction
Artificial intelligence is interdisciplinary in its very nature, and has from its birth benefited from input from disciplines as diverse as mathematical logic, philosophy, psychology, linguistics, economics and other social sciences. Of course, AI is at its core a computer science discipline, which like most of computer science has a strong engineering orientation. Its main goal is the design and construction of automated systems that exhibit intelligent behavior. I won't attempt to define intelligent behavior, as it would be futile. From a system design and analysis perspective, even a thermostat can be said to react intelligently to its environment, given its goals (or, more precisely, its designer's goals). It is clear though that high-level cognitive and reasoning processes are crucial to AI, and that from this perspective AI has a lot to learn, potentially, from humans, our best model of intelligent behavior. Among other aspects, humans engage in quite sophisticated ways in goal-furthering behavior which involves knowledge about the natural and social world; the ability to react to and shape the evolving, incompletely known environments in which they are embedded; to learn from them; to correct errors, even planning for the possibility of error; to cooperate or compete with other agents in pursuing goals; etc.
There is therefore little doubt that the study of human cognitive processes is relevant to AI, and thus that psychology can play a significant role for AI. The relevant model here is not, as should be clear from the above remarks, that of an almost isolated agent, solving logic puzzles or even playing chess. Knowledge-intensive tasks, involving "common sense reasoning," embedded in a complex and changing natural and social environment, are much more relevant. I'll come back to this point later.
In this paper, I will make the following three claims:
(a) AI and psychology have very different standards of evaluation for their contributions. These differences arise for the most part from the fact that AI is an engineering discipline with quite practical goals: AI is not concerned with modeling human behavior in an empirically acceptable fashion, but only with synthesising intelligent systems by whichever means are available.

(b) The impact of psychology on current AI is relatively limited, with the area of learning perhaps the main exception. There are many AI systems inspired by "cognitive architectures" which aim at "psychological plausibility." These are respected contributions to AI, but they are not driving AI research. On the contrary, most recent successes in AI seem to point in the opposite direction.

(c) Nevertheless, there are many aspects of human behavior which could be of great help for AI if better understood. Some of them have been hinted at above, and are at the root of many of the most difficult challenges of AI today. Psychology could be of most help in understanding these phenomena.
These claims certainly have a subjective component, influenced by my (necessarily partial) perspective of AI as a field. Many will disagree with claims (a) and (b). Hopefully, however, even those who disagree strongly will see some value in (c). For even if psychology and AI have different goals and standards, they also share a lot of common ground.

My colleague Nigel Shadbolt titles another paper in the current volume with the question "Should Psychology still care about AI?" I must confess I was tempted to change the title of this paper to ask the opposite question: "Should AI still care about Psychology?" The reader can guess my answer to this latter question from claims (b) and (c). AI could learn a lot from psychological research, but at this point it is not getting what it needs.
2 Differing standards and goals
In order to get positive feedback in either direction, the first thing we must understand is that psychology and AI have quite different goals and standards of evaluation. The goal of the former is modeling psychological phenomena, while that of the latter is building intelligent systems, by whichever means are available. Psychological models are validated by their ability to empirically predict human behavior, whereas this is essentially irrelevant to AI. AI techniques are not validated by their psychological plausibility, i.e. by solving problems the way humans do. What matters is whether problems are solved correctly and efficiently, and for this we can use the designer's perspective, which gives AI access to a wider range of tools and theories. We discuss each of these points next.
Consider first the issue of correctness, or more precisely of normative adequacy. A psychological theory must be able to empirically predict human error; a system that imitates human behavior must also imitate error. AI systems, on the other hand, must achieve normatively correct behavior. Who would want a hand-held calculator which imitated human behavior, i.e. which occasionally made errors in calculations? Or consider the Turing test, so often considered as a true test of intelligence by an automated system. Recall that the test consists in fooling a human examiner into thinking that he or she is interacting with another human rather than with a machine. No truly intelligent machine could possibly pass the test (unless its goal is only to fool the examiner). It would suffice for the human examiner to pose complex calculations to his or her interlocutor; if it never fails, then it for sure is not human. This said, I do believe that "error" plays a deeply productive role in intelligence, as discussed in the last section.
Second, there is the question of efficiency versus psychological plausibility. A deep recent trend is driving AI away from "human-like" methods towards compute-intensive methods in solving a variety of problems [Ginsberg, 1996], [Kautz and Selman, 1997]. Consider for example Deep Blue, the IBM chess program which was able to defeat Kasparov by examining over 200 million moves a second. Deep Blue is often described as using "brute force" or "blind" search in the space of possible moves, as successfully opposing raw power against intelligence. This is an oversimplified statement. Deep Blue uses the quintessential AI technique of heuristic search guided by knowledge. It uses knowledge of chess in its large database of opening and ending games, and in the evaluation functions used to choose moves. The evaluation function, which considers material, position, King safety, and tempo, is crucial as a pruning mechanism to eliminate unpromising moves, because of the huge (exponential) number of ways in which a game can evolve. Nevertheless, Deep Blue's main advantage over human players does lie in raw speed, and nobody would claim that the way it reasons mimics that of human grandmasters. Using more sophisticated evaluation functions, as a grandmaster may do, simply hasn't worked as well in AI. Thus, from the point of view of AI, one of the main contributions of Deep Blue is showing that being "more intelligent" in the evaluation function does not pay off, i.e. it is better to look at more moves than to spend more time judging each possible move. But this is not exclusive of chess (though Deep Blue's developers seem not to be aware of this: they deny using AI techniques, precisely on the grounds that they do not attempt to mimic human behavior; see http://www.chess.ibm.com/meet/html/d.3.3.html). With much better hardware, and better search techniques, this is a lesson that is making its impact in various areas of AI: trying to be too smart is often too slow.
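To make concrete what "heuristic search guided by an evaluation function" amounts to, here is a minimal sketch of depth-limited search with alpha-beta pruning, in the spirit of (though vastly simpler than) what a chess program does. The game interface assumed here (legal_moves, apply, evaluate) is a hypothetical placeholder, not any real engine's API.

# Minimal sketch: depth-limited minimax with alpha-beta pruning.
# The `game` object (legal_moves, apply, evaluate) is a hypothetical
# interface; `evaluate` plays the role of the static evaluation function
# (material, position, King safety, tempo) discussed above.

def alphabeta(game, state, depth, alpha=float("-inf"), beta=float("inf"),
              maximizing=True):
    """Return the backed-up evaluation of `state`, searched to `depth`."""
    moves = game.legal_moves(state)
    if depth == 0 or not moves:
        return game.evaluate(state)          # cheap static judgment at the leaves
    if maximizing:
        value = float("-inf")
        for move in moves:
            value = max(value, alphabeta(game, game.apply(state, move),
                                         depth - 1, alpha, beta, False))
            alpha = max(alpha, value)
            if alpha >= beta:                # prune: this line is already refuted
                break
        return value
    else:
        value = float("inf")
        for move in moves:
            value = min(value, alphabeta(game, game.apply(state, move),
                                         depth - 1, alpha, beta, True))
            beta = min(beta, value)
            if beta <= alpha:
                break
        return value

def best_move(game, state, depth):
    """Pick the move with the highest backed-up evaluation."""
    return max(game.legal_moves(state),
               key=lambda m: alphabeta(game, game.apply(state, m),
                                       depth - 1, maximizing=False))

The point of the sketch is the division of labour: a cheap static evaluation at the leaves, with raw search and pruning doing most of the work.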
There are many other examples to illustrate this point. In order to achieve efficiency in problem solving, AI is moving away from human-like behavior and the use of very specialized knowledge, towards compute-intensive, generic search methods. Techniques for dealing with constraint satisfaction problems, a recent great success of AI, exemplify this trend perfectly [Puget, 1998]. For another example, most research in automated planning until not long ago would proceed in a human-like fashion, by reasoning backwards from goals; but current best planners use forward reasoning from the initial situation [Blum and Furst, 1995], or encodings of planning problems into logical formulas which are then solved by the best available methods [Kautz et al., 1996], [Kautz and Selman, 1998]. (It is perhaps worth pointing out here, for the uninitiated, that there is a quite direct mapping between general search techniques and methods for solving logical formulas.) Other examples of efficient but unhuman-like behavior in game playing, scheduling, natural language understanding, qualitative reasoning about physical systems, theorem proving, and learning can be found in [Ginsberg, 1996], [Kautz and Selman, 1997].
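As a minimal illustration of what "forward reasoning from the initial situation" means, here is a sketch of breadth-first search over STRIPS-style states. The action format and the tiny domain are invented for the example; this is not meant to model Graphplan, BLACKBOX, or any of the planners cited above.

# Minimal sketch of forward state-space planning over STRIPS-style actions.
# Actions are hypothetical tuples (name, preconditions, add_list, del_list);
# states are frozensets of ground facts.

from collections import deque

def forward_plan(initial, goal, actions):
    """Breadth-first search from `initial` until all goal facts hold."""
    start = frozenset(initial)
    frontier = deque([(start, [])])
    visited = {start}
    while frontier:
        state, plan = frontier.popleft()
        if goal <= state:                      # goal facts are a subset of the state
            return plan
        for name, pre, add, dele in actions:
            if pre <= state:                   # action applicable in this state
                succ = frozenset((state - dele) | add)
                if succ not in visited:
                    visited.add(succ)
                    frontier.append((succ, plan + [name]))
    return None

# Tiny invented domain, purely for illustration:
actions = [
    ("pick_up_key", frozenset({"at_door"}), frozenset({"has_key"}), frozenset()),
    ("open_door", frozenset({"at_door", "has_key"}),
     frozenset({"door_open"}), frozenset()),
]
print(forward_plan({"at_door"}, frozenset({"door_open"}), actions))
# -> ['pick_up_key', 'open_door']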
Another aspect of the dilemma between efficiency and psychological plausibility is that cognitive theories must be able to predict the difficulty of various reasoning tasks in humans. Thus, if a software system is built with the goal of testing the validity of a psychological theory, as suggested by [Simon, 1995] (see also the discussion of cognitive architectures below), it should be able to imitate the relative difficulty of problems for humans. AI, on the other hand, just wants to solve all problems as easily as possible.

Third, and finally, we have mentioned that AI can benefit from a designer's perspective in choosing its tools and theories, in a way that psychology cannot. I'll give three examples.
First, even if people are not good at (mathematical) logical inference, or even at formalizing knowledge in logical languages, the representation languages of choice for AI are still variants of classical logic, propositional or first order, extended or restricted to suit various tasks. The reason is simple: their formal syntax and lack of ambiguity make them suitable for easy manipulation by computers.
Second, consider decision theory, which models agents as utility maximisers. As a model of human behavior, it can be criticized on psychological and sociological grounds, especially if one wants to get theories with any predictive power, which requires being quite specific about what form the utility function should take (e.g. utility understood in monetary terms). On the other hand, decision theory is, from a mathematical point of view, extremely general, especially when qualitative utility functions are used [Doyle and Thomason, 1997]. Thus it only requires the agent to have some preferences, partially ordered (but see [Gilboa and Schmeidler, 1998] for an alternative view). We can use these preferences in any way we please as designers of intelligent systems. The modeling task is of course helped by the fact that the systems designed in AI only have some limited goals to achieve.
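To illustrate how little this asks of the designer, here is a minimal sketch in which the agent is given only a partial preference relation over options, and simply picks any option that no other option is strictly preferred to. The options and the relation are invented examples, not any particular formalism from the works cited.

# Minimal sketch: choosing with only a partial preference order.
# `prefers(a, b)` is True when option a is strictly preferred to option b;
# both the relation and the options below are hypothetical examples.

def undominated(options, prefers):
    """Return the options to which no other option is strictly preferred."""
    return [a for a in options
            if not any(prefers(b, a) for b in options if b is not a)]

options = ["coffee", "tea", "nothing"]
order = {("coffee", "nothing"), ("tea", "nothing")}   # coffee vs tea left unranked
prefers = lambda a, b: (a, b) in order

print(undominated(options, prefers))   # -> ['coffee', 'tea']

Any way of breaking the remaining ties is acceptable from the designer's point of view, which is exactly the freedom the text above alludes to.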
As a final example, take epistemic logics, which provide a number of axioms on the properties of epistemic attitudes such as knowledge and belief in a multiagent setting. Often the axioms have quite strong and "unrealistic" properties, such as making agents know all consequences of their knowledge, perfect introspection, non-contradiction, etc. Many attempts have been made to weaken these properties to make them more realistic. Nevertheless, epistemic logics in their stronger forms have proven useful in the analysis of, and design of algorithms for, systems with multiple "agents" in restricted settings, for example for distributed systems (where agents are processors) or robotics [Fagin et al., 1995]. And while the proposals to weaken these logics are in part motivated by psychological plausibility, they are only constrained by the needs of designing computational systems.
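For the uninitiated, the "strong and unrealistic" properties just mentioned correspond, in the standard modal presentation, to axiom schemes such as the following, writing $K_i\varphi$ for "agent $i$ knows $\varphi$". This is the usual S5-style list, not anything specific to the systems cited above:

  K: $K_i\varphi \wedge K_i(\varphi \rightarrow \psi) \rightarrow K_i\psi$  (closure under known implication; together with the necessitation rule, this is what makes agents know all logical consequences of their knowledge)

  T: $K_i\varphi \rightarrow \varphi$  (only true things are known)

  D: $\neg K_i\bot$  (non-contradiction)

  4: $K_i\varphi \rightarrow K_i K_i\varphi$  (positive introspection)

  5: $\neg K_i\varphi \rightarrow K_i \neg K_i\varphi$  (negative introspection)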
3 The impact of psychology on AI
In assessing the impact of psychology on AI, one must clearly distinguish the influence of psychologists on AI from that of psychology as a discipline. For there is no doubt that some renowned psychologists have had a tremendous influence on the history and current state of AI. Allen Newell and Herbert Simon helped shape the discipline of AI from its inception. In addition to writing some of the earliest influential programs in AI (Logic Theorist and the General Problem Solver, which used production rules and means-ends analysis), they introduced the physical symbol hypothesis (PSH) [Newell and Simon, 1976], which can be seen as an essential tenet of much of AI throughout its history. As they put it:
A physical symbol system consists of a set of entities, called symbols, which are physical patterns that can occur as components of another type of entity called an expression (or symbol structure). Thus a symbol structure is composed of a number of instances (or tokens) of symbols related in some physical way (such as one token being next to another). At any instant of time the system will contain a collection of these symbol structures. Besides these structures, the system also contains a collection of processes that operate on expressions to produce other expressions: processes of creation, modification, reproduction and destruction. A physical symbol system is a machine that produces through time an evolving collection of symbol structures. Such a system exists in a world of objects wider than just these symbolic expressions themselves. [...]

The Physical Symbol Hypothesis. A physical symbol system has the necessary and sufficient means for general intelligent action.
Symbolic AI can be seen as essentially guided by the PSH, at least by the "sufficiency" condition. Of course, symbols are used to represent knowledge, but equally obviously a computer does not "understand" them as we do: it just has a set of procedures to manipulate them, which we, the designers and programmers, interpret as reasoning. This separation between the "declarative" encoding of knowledge and inference procedures is essential to much of AI. Levesque [Levesque, 1986] reformulates the same basic idea as the assumption that "thinking can be usefully understood as mechanical operations on symbols" and that "intelligence is best served by explicitly representing in the data structures of a program as much as possible of what a system needs to know." Allen Newell's later, and also influential, idea of a "knowledge level", for which symbol systems would offer only concrete implementations, goes along the same lines [Newell, 1982], [Newell, 1990]. It is worth quoting [Levesque and Brachman, 1987] on the role of symbolic systems:
the symbolic structures within a knowledge-based system must play a causal role in the behavior of that system, as opposed to, say, comments in a programming language. Moreover, the influence they have on the behavior of the system should agree with our understanding of them as propositions representing knowledge. Not that the system has to be aware in any mysterious way of the interpretation of its structures and their connections to the world; but for us to call it knowledge-based, we have to be able to understand its behavior as if it believed these propositions, just as we understand the behavior of a numerical program as if it appreciated the connection between bit patterns and abstract numerical quantities.
From the point of view of psychology, the PSH has led to the development of unified theories of cognition [Newell, 1990]. From an AI perspective, on the other hand, the PSH is an empirical hypothesis that can only be confirmed by the practical success of (symbolic) AI in building intelligent systems. While the PSH has recently been challenged by neurologically-inspired models of behavior, it is here to stay. It seems clear that both symbolic and subsymbolic processes play a role in intelligent behavior, and one of the challenges is to achieve a successful theoretical and practical integration of both approaches.
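Before turning to specifically psychological contributions, a toy illustration of the declarative/procedural separation described above may help the uninitiated: knowledge is written down as data (facts and if-then rules), and a generic procedure draws the inferences. The facts and rules below are invented, and nothing here is meant to model Logic Theorist, GPS, or any real symbol system.

# Minimal sketch of the declarative-knowledge / inference-procedure split:
# knowledge is a set of facts plus if-then rules (pure data), and a generic
# procedure derives new facts by forward chaining. Facts and rules are
# invented examples, not any real system's knowledge base.

def forward_chain(facts, rules):
    """Apply rules (premises, conclusion) until no new facts are derived."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if set(premises) <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

rules = [
    (["bird(tweety)"], "has_wings(tweety)"),
    (["has_wings(tweety)"], "can_fly(tweety)"),
]
print(forward_chain({"bird(tweety)"}, rules))
# -> the facts now also include has_wings(tweety) and can_fly(tweety)

The procedure knows nothing about birds; everything it "knows" sits in the data structures, which is the sense in which we interpret its manipulations as reasoning.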
But what about specifically psychological contributions to AI? Simon, in a recent talk [Simon, 1995], cites a few contributions, such as the already cited General Problem Solver, Feigenbaum's EPAM model of human memory, and Anderson's ACT* system. These are quite old systems, and one would hardly hear of them in today's AI conferences, though GPS was certainly influential (even if the fact that GPS was psychologically inspired and validated is somewhat irrelevant, in the sense that AI only cares about the formal model of problem solving embodied by GPS), and EPAM led to some early work on decision tree learning [Nilsson, 1997]. Nevertheless, there are a few systems in use today that attempt to bring together both disciplines by developing and testing software systems based on theories of cognitive architectures, which attempt to build systems that exhibit general intelligence, rather than simply addressing some limited task [VanLehn, 1991, Wray et al., 1992]. Implementations of cognitive architectures do not always follow the constraints imposed by human cognition, e.g. Brooks's subsumption architecture [Brooks, 1987, Brooks, 1991]. Perhaps the best known systems which do aim at psychological validity are Soar [Laird et al., 1987] and Prodigy [Carbonell et al., 1991], both descendants of GPS. They have been used as testbeds for psychological theories, and also for exploring AI techniques in problems such as planning and learning. Curiously enough, in both cases the main form of learning is explanation-based learning (EBL), that is, a form of learning which stores only logical consequences of the knowledge base, and that can therefore be accounted for in purely logical terms. This logical character has helped make EBL a useful item in the repertoire of AI techniques (in particular, many forms of storing previously made inferences for later reuse can be seen as forms of EBL), but it is also the reason why it is not seen as representative of human learning capabilities, as no new knowledge can be acquired from experience through EBL.
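The parenthetical remark about storing previously made inferences can be made concrete with a minimal sketch: a backward chainer over propositional Horn rules that caches every goal it proves, so that later queries reuse the stored consequence instead of re-deriving it. The rule base is invented, and real EBL systems store generalized explanations rather than ground conclusions, so this is only the flavor of the idea.

# Minimal sketch: backward chaining over propositional Horn rules with a
# cache of proved goals. Caching derived consequences for reuse is the
# (much simplified) flavor of explanation-based learning discussed above.
# The rule base is an invented example; no cycle handling is attempted.

def prove(goal, facts, rules, cache):
    """Try to prove `goal`; remember successes in `cache` for reuse."""
    if goal in facts or goal in cache:
        return True
    for head, body in rules:
        if head == goal and all(prove(g, facts, rules, cache) for g in body):
            cache.add(goal)        # store the derived consequence
            return True
    return False

rules = [("can_fly(tweety)", ["bird(tweety)", "not_penguin(tweety)"]),
         ("not_penguin(tweety)", [])]
cache = set()
print(prove("can_fly(tweety)", {"bird(tweety)"}, rules, cache))  # True
print(cache)   # proved subgoals are now stored for later queries

Note that nothing in the cache goes beyond what already followed from the rules, which is precisely why EBL adds no genuinely new knowledge.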
Work on psychologically-based cognitive architectures has too often been driven by the goal of testing psychological theories [Simon, 1995], rather than by that of building intelligent systems. This is a legitimate goal, but one which has diminished its relevance to AI as engineering. Cognitive architectures are not present in any of the recent successes of AI, such as those cited in the discussion of efficiency in the previous section.
It is probably in the area of learning where cognitive psychology has been most influential. Early work on decision trees was influenced by psychology, as mentioned above, as is the important field of reinforcement learning [Nilsson, 1997]. But even in learning, psychology has faced strong competition from other sources of inspiration, such as purely statistical techniques and neural networks. We should also mention case-based reasoning and learning. This family of AI techniques, which has its own international conferences, has a clear psychological flavor. Here reasoning and learning proceed by analogy with previous cases known to the agent, and knowledge is stored as cases rather than as general rules. There are domains, such as the field of AI and law, where this form of reasoning has an obvious appeal; in other domains, such as planning, where case-based reasoning showed a lot of promise [Hammond, 1986], it is not competitive with current best approaches (though one should bear in mind that they often tackle different kinds of problems, which makes comparisons harder). Nevertheless, it is an area where research is pursued actively, with such interesting developments as case-based decision theory [Gilboa and Schmeidler, 1998].
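A minimal sketch of the retrieve-and-reuse flavor of case-based reasoning: cases are stored as (problem features, solution) pairs, and a new problem is answered by reusing the solution of the most similar stored case. The case base and similarity measure are invented illustrations, not CHEF or any real system, and real CBR systems also adapt the retrieved solution to the new problem.

# Minimal sketch of case-based retrieval: answer a new problem by reusing
# the solution of the most similar stored case. The case base and the
# similarity measure are invented illustrations.

def similarity(a, b):
    """Crude overlap measure between two feature sets."""
    return len(a & b) / len(a | b)

def retrieve(case_base, problem):
    """Return the solution of the stored case most similar to `problem`."""
    best_case = max(case_base, key=lambda c: similarity(c[0], problem))
    return best_case[1]

case_base = [
    (frozenset({"pasta", "vegetarian"}), "pasta primavera"),
    (frozenset({"rice", "spicy"}), "vegetable curry"),
]
print(retrieve(case_base, frozenset({"pasta", "vegetarian", "quick"})))
# -> 'pasta primavera'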
4 How can psychology help AI
When AI was born, it was thought that weak generic methods, such as GPS, were all that was needed to achieve general intelligence. AI came of age in the late 70's and 80's, with the advent of expert systems, which began to deliver practical applications to the marketplace. The slogan "knowledge is power" took hold, and it seemed as if knowledge-based systems could solve all problems requiring intelligence. Expert systems were knowledge-intensive, but weak in inference, using for the most part simple backward chaining, embellished in various ways (e.g. using multiple knowledge sources accessing a common blackboard, something which today looks rather trivial but was then thought to be at the forefront of technology). Expert systems are a well-established technology, still useful for restricted tasks, but now much less relevant from a research point of view. Among their shortcomings, their "brittleness" is often cited, i.e. their inability to degrade their performance gracefully as we move outside of their restricted domain of application. Related to this is their inability to recover from errors, to reason with assumptions, to revise their knowledge base in the face of contradictory evidence. In addition, the attempt to construct truly autonomous systems launched new paradigms in robotics, especially starting with the already cited work of Brooks, in what is now called the "new AI".
These shortcomings prompted two major undertakings during the 80's and early 90's in "good old-fashioned AI." (Nigel Shadbolt discusses new AI in detail in this volume, in a much more knowledgeable way than I possibly could, so I shall ignore new AI here.) On the one hand, it was thought that the answer to these problems was to throw in even more knowledge. Brittleness was attributed to expert systems' lack of common sense knowledge, and the CYC project, a multimillion-dollar, decade-long project, was launched to build a huge knowledge base with all the "trivial" facts of common sense [Lenat and Guha, 1990]. The idea was that, when endowed with this knowledge, expert systems would not fall prey to errors that any human can detect immediately, just by common sense. Optimistic projections were made about how this could be used for natural language understanding, and how eventually the system would be able to acquire common sense by learning, once it was bootstrapped with the basic facts that "everybody knows."
The second undertaking focused on common sense reasoning. Starting with [McCarthy, 1981] and [Doyle, 1979], over a decade of research has focused on so-called non-monotonic reasoning. Classical logic is monotonic in that adding new facts can never lead us to retract previous conclusions. This is clearly a straitjacket for reasoning even in the most mundane tasks, where we often have to revise our beliefs in the face of contradictory evidence or a changing environment, and where reasoning would be impossible without making lots of unproven assumptions (e.g. we certainly do not consider all possible ways in which our plan for going to work may fail: the car's battery is dead, or there is a potato in the tailpipe, or there is a nuclear explosion, or...). Classical logics are extremely intolerant of contradictions, and ill suited to formalize knowledge which is often uncertain and full of exceptions to the general rules.
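In symbols, and only as a reminder of standard definitions: classical consequence is monotonic, i.e.

  if $KB \vdash \varphi$ then $KB \cup \{\psi\} \vdash \varphi$, for any $\psi$,

whereas a default rule in the style of Reiter, for example

  $\dfrac{bird(x) : flies(x)}{flies(x)}$,

licenses the conclusion $flies(x)$ only when it is consistent to assume it; adding $penguin(x)$ (together with knowledge that penguins do not fly) blocks the conclusion, which is precisely what no monotonic consequence relation can do.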
Both undertakings have failed to achieve their goals. It's not easy to find a clear explanation for the failure to build a useful common sense knowledge base. Some may attribute the failure to the use of logical languages (where by "logical" I mean "formal" in a mathematical sense, e.g. probabilistic representations would fall under this label) as representation tools. But of course the question then is what alternative there is, since a computer needs formal rules for mechanical manipulation of symbols. I believe part of the reason for failure is that knowledge cannot be separated from how it is used, and common sense involves very complex patterns of reasoning. Non-monotonic logics were an attempt to solve this problem, and in fact they have been extremely successful from a formal, mathematical point of view, in capturing a wide variety of forms of reasoning such as those described above; in addition, they provided a firm foundation for logic programming, and made progress in temporal reasoning. The problem with non-monotonic logics is efficiency. The computational complexity of reasoning with any of these logics is significantly larger than that of reasoning with classical logic. Yet it was precisely a computational consideration which led to the development of these logics: humans can reason very effectively by ignoring exceptions, only retracting conclusions when the need arises, etc. This effectiveness was supposed to be captured by non-monotonic logics, yet it turned out that reasoning became much harder, not easier! There have been attempts to understand this paradox (basically, non-monotonic logics can capture the same knowledge in a much more concise form than classical logics, so, roughly, much smaller demands on memory compensate for harder reasoning [Cadoli et al., 1994]), but the bottom line remains: there are very few systems using non-monotonic reasoning which can be regarded as clear successes. (Probabilistic formalisms, in particular Bayes nets, which can also be seen as capturing non-monotonic patterns of reasoning, have achieved a reasonably good compromise between computational complexity and expressivity, i.e. they can model relatively significant problems efficiently. It is unclear how much this can be pushed towards solving the core problems of non-monotonic reasoning efficiently.)
Thus, as discussed in the previous section, AI has moved towards efficient solutions for specific application areas, solutions which are to a great extent compute-intensive rather than knowledge-intensive [Kautz and Selman, 1997]. Many of these areas are central to the field of knowledge representation and reasoning, such as planning, diagnosis, reasoning with constraints, natural language understanding, and theorem proving. It is clear that from the perspective of AI, as long as these approaches work well, all is fine. But solving these problems efficiently often involves many restrictive assumptions, and lifting these assumptions is very hard. (As an example, recent progress in automated planning refers almost exclusively to what in AI is called "classical planning," i.e. STRIPS-style planning; it is instructive to compare the current state of the art with a review of the state of the art from over a decade ago [Georgeff, 1987], when AI was feeling optimistic about its ability to go beyond classical planning.) Furthermore, the problems discussed above involving common sense knowledge and reasoning are still there. Without solving them it is almost impossible to lift those restrictions in a sufficiently general way so as to achieve human-level performance (in the areas where humans are better than computers).
It is here that I suggest psychology can be of great help. Quite a few of the successes listed above deal with problems that humans are not especially good at solving. These are problems where considering all possible cases is too hard for a human, and thus we may throw in lots of specialized knowledge in trying to simplify the problem. The lesson from recent AI is that we will be better off letting machines consider all cases (with very aggressive pruning techniques to reduce the number of cases, of course), and that machines are not helped much by the knowledge-intensive methods we humans may attempt to use. Yet building a machine with the common sense of a 4-year-old child would be a major breakthrough, and is well beyond the reach of current technology.
Humans do use lots of knowledge in the most mundane tasks; these are tasks which are inferentially shallow, i.e. they do not require deep reasoning, but they can trigger the appropriate bits of knowledge as needed in an extremely efficient way. While any specific piece of reasoning may not be knowledge-intensive, the overall adequacy of behavior does depend on a huge amount of knowledge. Furthermore, and as suggested above, we are able to reason in the presence of error, or at least allowing for the possibility of error, and to recover gracefully from it when the need arises. We also make lots of unproven assumptions that greatly simplify reasoning. This behavior seems clearly more efficient from a cognitive point of view than considering all ways in which things can go wrong, and this was one of the motivations for research in non-monotonic reasoning.
Thus I would like psychology to help me understand common-sense reasoning, and how we use knowledge in realistic tasks. How do we deal with errors and exceptions, how do we recover from them, and why are mistaken assumptions not that harmful in everyday reasoning? How do we plan, and more generally how does goal-directed behavior work? How do we reason about time and the evolution of the world around us? These are some deep problems for knowledge representation and automated reasoning, which are by no means solved.
To be honest, I'm not sure what it would take for psychology to address these problems in a way that would be useful for AI. There are many legitimate concerns of cognitive psychology that do not apply to AI. Many factors that affect human performance, e.g. limitations in working memory, do not affect a computer in the same way. Similarly, the ability to predict the relative difficulty for humans of reasoning tasks (see e.g. [Johnson-Laird and Byrne, 1991]) is likely to be of little application to computers, for which different constraints apply, and which have their own well-developed theory of relative difficulty, namely complexity theory.
According to a relatively recent review of research on the psychology of reasoning [Evans et al., 1993], much work in cognitive psychology deals with how well people carry out tasks which are trivial for a computer (e.g. the Wason selection task, or modus tollens), or with how different factors (content, degree of abstraction, context) affect whether people do certain simple tasks well. One first reaction is that computers must do well if they are to be useful (recall the discussion of normative adequacy), so people's lack of adequate performance is irrelevant to AI, to a first approximation. On further thought, what one would like to understand is how doing badly in certain tasks translates into successful action in the real world. For example, and as cited in [Evans et al., 1993], Evans argues that belief bias (by which the validity of an argument is judged more favorably if its conclusion agrees with prior beliefs) is adaptive in the real world. But how, and what are the details? Or consider the influence of content and context on reasoning tasks. How is knowledge used to produce this effect, what kind of knowledge is used, and how is this instrumental in pursuing the agent's goals? In my view, determining whether these influences facilitate or hinder the agent's performance is the least interesting bit of it.
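To make concrete the earlier remark that tasks like modus tollens are trivial for a computer: a brute-force truth-table check settles such questions immediately. The encoding below is my own illustration, not taken from any of the works cited.

# Minimal sketch: checking propositional argument forms by brute force
# over truth assignments. It illustrates why modus tollens is trivial for
# a computer, whatever difficulty it poses for human subjects.

from itertools import product

def valid(premises, conclusion, variables):
    """True iff every assignment satisfying the premises satisfies the conclusion."""
    for values in product([False, True], repeat=len(variables)):
        env = dict(zip(variables, values))
        if all(p(env) for p in premises) and not conclusion(env):
            return False
    return True

implies = lambda a, b: (not a) or b

# Modus tollens:  p -> q,  not q  |=  not p
print(valid([lambda e: implies(e["p"], e["q"]), lambda e: not e["q"]],
            lambda e: not e["p"], ["p", "q"]))          # True

# Affirming the consequent (a classic human error) is not valid:
print(valid([lambda e: implies(e["p"], e["q"]), lambda e: e["q"]],
            lambda e: e["p"], ["p", "q"]))              # False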
I agree that understanding the causes of error is a necessary goal for explaining human intelligence, and that much work, to be significant, needs to be carried out under controlled, laboratory conditions. Nevertheless, what I would like to know is how people do well the things they do well, and how they do it in realistic settings, as opposed to, say, how subjects handle multiple quantifiers. Computers already know how to do that. But computers do not have common sense.
References
[Blum and Furst, 1995] A. L. Blum and M. L. Furst. Fast planning through planning graph analysis. In IJCAI'95, Proceedings of the Fourteenth International Joint Conference on Artificial Intelligence, pages 1636-1642, 1995.

[Brooks, 1987] R. A. Brooks. Intelligence without representation. Artificial Intelligence, 47:139-159, 1987.

[Brooks, 1991] R. A. Brooks. How to build complete creatures rather than isolated cognitive simulators. In K. VanLehn, editor, Architectures for Intelligence, pages 225-239. Lawrence Erlbaum Associates, Hillsdale, NJ, 1991.

[Cadoli et al., 1994] Marco Cadoli, Francesco M. Donini, and Marco Schaerf. Is intractability of non-monotonic reasoning a real drawback? In AAAI'94, Proceedings of the Twelfth National Conference on Artificial Intelligence, pages 946-951, Menlo Park, CA, USA, 1994. AAAI Press.

[Carbonell et al., 1991] J. G. Carbonell, C. A. Knoblock, and S. Minton. PRODIGY: An integrated architecture for planning and learning. In Architectures for Intelligence, pages 241-278. Lawrence Erlbaum Associates, Hillsdale, NJ, 1991.

[Doyle and Thomason, 1997] Jon Doyle and Richmond H. Thomason, editors. Working Papers of the AAAI Spring Symposium on Qualitative Preferences in Deliberation and Practical Reasoning. American Association for Artificial Intelligence, Menlo Park, California, 1997. Available at http://www.kr.org/qdt/.

[Doyle, 1979] Jon Doyle. A truth maintenance system. Artificial Intelligence, 12:231-272, 1979.

[Evans et al., 1993] Jonathan St. B. T. Evans, Stephen E. Newstead, and Ruth M. J. Byrne. Human Reasoning: The Psychology of Deduction. Lawrence Erlbaum Associates, Potomac, Maryland, 1993.

[Fagin et al., 1995] Ronald Fagin, Joseph Y. Halpern, Yoram Moses, and Moshe Y. Vardi. Reasoning about Knowledge. MIT Press, Cambridge, MA, 1995.

[Georgeff, 1987] Michael P. Georgeff. Planning. Annual Review of Computer Science, 2:359-400, 1987.

[Gilboa and Schmeidler, 1998] Itzhak Gilboa and David Schmeidler. Case-based decision: An extended abstract. In ECAI'98, Proceedings of the Thirteenth European Conference on Artificial Intelligence, pages 706-710, 1998.

[Ginsberg, 1996] Matthew Ginsberg. Do computers need common sense? In Proceedings of the Fifth International Conference on Principles of Knowledge Representation and Reasoning, pages 40-50. Morgan Kaufmann, 1996.

[Hammond, 1986] Kristian Hammond. CHEF: A model of case-based planning. In Tom Kehler and Stan Rosenschein, editors, Proceedings of the 5th National Conference on Artificial Intelligence, Volume 1, pages 267-271, Los Altos, CA, USA, August 1986. Morgan Kaufmann.

[Johnson-Laird and Byrne, 1991] P. N. Johnson-Laird and Ruth M. J. Byrne. Deduction. Lawrence Erlbaum Associates, Potomac, Maryland, 1991.

[Kautz and Selman, 1997] Henry Kautz and Bart Selman. Compute-intensive methods in artificial intelligence. Tutorial presented at AAAI'97, Fourteenth National Conference on Artificial Intelligence, 1997.

[Kautz and Selman, 1998] Henry Kautz and Bart Selman. BLACKBOX: A new approach to the application of theorem proving to problem solving. In Workshop on Planning as Combinatorial Search, held in conjunction with AIPS-98 (Conference on Artificial Intelligence Planning Systems), 1998.

[Kautz et al., 1996] Henry Kautz, David McAllester, and Bart Selman. Encoding plans in propositional logic. In Luigia Carlucci Aiello, Jon Doyle, and Stuart Shapiro, editors, KR'96, Proceedings of the Fifth International Conference on Principles of Knowledge Representation and Reasoning, pages 374-385, San Francisco, 1996. Morgan Kaufmann.

[Laird et al., 1987] J. Laird, A. Newell, and P. Rosenbloom. Soar: An architecture for general intelligence. Artificial Intelligence, 33:1-64, 1987.

[Lenat and Guha, 1990] D. Lenat and R. V. Guha. Building Large Knowledge-Based Systems. Addison-Wesley, Reading, MA, 1990.

[Levesque and Brachman, 1987] Hector J. Levesque and Ronald J. Brachman. Expressiveness and tractability in knowledge representation and reasoning. Computational Intelligence, 3:78-93, 1987.

[Levesque, 1986] Hector J. Levesque. Knowledge representation and reasoning. Annual Review of Computer Science, 1:255-287, 1986.

[McCarthy, 1981] John McCarthy. Circumscription: A form of non-monotonic reasoning. In Matthew L. Ginsberg, editor, Readings in Non-Monotonic Reasoning (1987). Morgan Kaufmann, 1981.

[Newell and Simon, 1976] Allen Newell and Herbert Simon. Computer science as empirical inquiry: Symbols and search. Communications of the ACM, 19, 1976.

[Newell, 1982] Allen Newell. The knowledge level. Artificial Intelligence, 18:87-127, 1982.

[Newell, 1990] Allen Newell. Unified Theories of Cognition. Harvard University Press, Cambridge, Massachusetts, 1990.

[Nilsson, 1997] Nils Nilsson. Introduction to Machine Learning. Book draft, unpublished, 1997.

[Puget, 1998] Jean-Francois Puget. Constraint programming: A great AI success. In ECAI'98, Proceedings of the Thirteenth European Conference on Artificial Intelligence, pages 698-705, 1998.

[Simon, 1995] Herbert A. Simon. Explaining the ineffable: AI on the topics of intuition, insight and inspiration. In Chris S. Mellish, editor, Proceedings of the Fourteenth International Joint Conference on Artificial Intelligence, pages 939-949, San Mateo, August 20-25 1995. Morgan Kaufmann.

[VanLehn, 1991] K. VanLehn, editor. Architectures for Intelligence. Lawrence Erlbaum Associates, Hillsdale, NJ, 1991.

[Wray et al., 1992] Robert Wray, Ron Chong, Joseph Phillips, Seth Rogers, and Bill Walsh. A survey of cognitive and agent architectures. http://ai.eecs.umich.edu/cogarch0/, 1992.