Chapter 1
The Cognitive and Communicative Demands of
Cooperation
Peter Gärdenfors
Abstract I argue that the analysis of different kinds of cooperation will benefit from
an account of the cognitive and communicative functions required for that cooperation. I review different models of cooperation in game theory—reciprocal altruism,
indirect reciprocity, cooperation about future goals, and conventions—with respect
to their cognitive and communicative prerequisites. The cognitive factors considered
include recognition of individuals, memory capacity, temporal discounting, prospective cognition, and theory of mind. Whereas many forms of cooperation require no
communication or just simple communication, more advanced forms that are unique
to humans presuppose full symbolic communication.
1.1 The span of cooperation in game theory
The evolution of cooperation remains an enigma. In surprisingly many situations,
both animals and humans fail to behave as predicted by game theory. For example,
humans cooperate more in prisoner’s dilemma games than would be expected from
logical analysis. Several explanations for this mismatch have been suggested, ranging from the claim that animals and humans are not fully rational to the claim that
the kind of rationality presumed in classical game theory is not relevant to evolutionary arguments. The purpose of this chapter is to argue that the behavior of various
agents must be judged in relation to their cognitive and communicative capacities.
In my opinion, these factors have not been sufficiently considered in game theory.
There are many ways of defining cooperation. A broad definition is that it consists
of joint actions that confer mutual benefits.1
Peter Gärdenfors
Lund University Cognitive Science, Kungshuset, Lundagård S-223 50 Lund, Sweden e-mail: [email protected]
1 A stronger criterion, focused on human cooperation, is formulated by Bowles and Gintis [8]: “An individual behavior that incurs personal costs in order to engage in a joint activity that confers benefits exceeding these costs to other members of one's group.” It should be noted that ‘joint’ activity need not imply that the actions are performed simultaneously: they can be done in sequence.
A narrower definition concerns situations in which joint action poses a dilemma,
so that, in the short run, an individual would be better off not cooperating, even
though in the long run cooperation may be preferable [65, p. 358]. The typical example of such a situation is some variant on the prisoner’s dilemma game. In this
chapter, the focus will be on the narrower concept.
Conventionally, cooperation has been studied from two perspectives. First, in traditional game theory, cooperation has been assumed to take place between individuals of the species Homo economicus, who are ideally rational. Homo economicus is
presumed to have a perfect theory of mind: that is, the capacity to judge the beliefs,
desires, and intentions of the other players. In a Bayesian framework, this amounts
to assuming that all the other players are Bayesian decision makers. In the case of
Homo sapiens, that theory of mind is well developed, at least in comparison to other
animal species, but it is far from perfect [26].
Second, cooperative games have been studied from an evolutionary perspective. The classical accounts are found in Axelrod and Hamilton [2] and Maynard
Smith [51]. Here, the “players” are the genes of various animal species. The genes
are assumed to have absolutely no rationality nor any cognitive capacities. However, their strategies can slowly adapt, via the mechanisms of natural selection, over
repeated interactions and generations. In the evolutionary framework, the theory of
mind of the “players” is either not accounted for or not considered relevant.
The differences in cognitive and communicative demands between these two perspectives are seldom discussed. For example, in Lehmann and Keller’s [44] classification of models for the evolution of cooperation and altruism, only two parameters
related to cognition and communication are included: a one-period “memory” parameter m defined as the probability that “an individual knows the investment into
helping of its partner at the previous round” and a “reputation” parameter q defined
as the probability that “an individual knows the image score of its partner” [44, p.
1367].
I believe that the analysis of different kinds of cooperative games would benefit
from a richer account of the cognitive and communicative functions required for
the cooperation. The cognitive factors to be considered include recognition of individuals, memory capacity, temporal discounting, prospective cognition, and theory
of mind. The relevant communication ranges from simple signaling to full symbolic communication. I will argue that, whereas many forms of cooperation require
only very limited cognitive capacities and no communication, more advanced forms,
unique to humans, presuppose a high awareness of future needs, a rich understanding of the minds of others, and symbolic communication. For example, the reputation mechanism used in Nowak and Sigmund’s [57] model of indirect reciprocity
assumes that the individuals can recognize each other, remember previous interactions, have at least some aspects of a theory of mind, and possess a communication
system that, among other things, can refer to individuals in their absence and describe the roles of donor and receiver in interactions.
1.2 Cognitive requirements for different kinds of cooperation
In this section, I will outline different kinds of cooperation. I will take as my point
of departure the forms of cooperation that have been studied within game theory—
in particular evolutionary game theory—though I will also borrow some themes
from animal behavior. The paradigmatic game in my analysis will be that of the
prisoner’s dilemma, mainly in its iterated form, although I will occasionally refer
to other games. A reason to focus on variations of the prisoner’s dilemma is that
such games frequently arise in biological/ecological settings. Biological players often cooperate considerably more than the total defection predicted by any strictly
rationalistic analysis: a finding that generates much of the interest in studying these
games. A bio-cognitively oriented analysis faces the problem of identifying those
factors that promote cooperation. I believe that they include varying forms of cognition and communication.
I shall present the different kinds of cooperation I discuss roughly in order of
increasing cognitive demands. My list is far from exhaustive; I have merely selected
some forms of cooperation where the cognitive and communicative components can
readily be identified, at least to some degree. Finer distinctions could be made for
several of the items on my list; those distinctions are not, however, of interest here.
1.2.1 Flocking behavior
Flocking behavior is a cognitively low-level form of cooperation. Flocking is found
in many species, and its function is often to minimise predation. Simulation models
of flocking (e.g. [64, 50, 39]) show that the sophisticated behavior of the flock can
emerge from the behaviors of single individuals following simple rules. Only two
rules seem to be required for birds to fly in a flock: (a) position yourself as closely
as possible to the centre of the flock and (b) maintain a certain minimum distance
from your neighbors. Such rules require no cognitive capacities beyond basic visual
perception. (Other senses, such as echolocation, could in principle be used as well.)
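A minimal sketch of this idea is given below, with invented parameters and no claim to reproduce any of the cited simulation models: each simulated bird simply heads toward the current centre of the flock while backing away from neighbors that come too close.

```python
import random

# Minimal two-rule flocking sketch (illustrative parameters only):
# rule (a) move toward the centre of the flock,
# rule (b) keep a minimum distance from neighbours.

N, STEPS, MIN_DIST, SPEED = 20, 100, 1.0, 0.1
birds = [[random.uniform(0, 10), random.uniform(0, 10)] for _ in range(N)]

def dist(a, b):
    return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5

def centre_of(flock):
    return [sum(b[0] for b in flock) / len(flock),
            sum(b[1] for b in flock) / len(flock)]

for _ in range(STEPS):
    centre = centre_of(birds)
    new_positions = []
    for b in birds:
        # Rule (a): head toward the centre of the flock.
        dx, dy = centre[0] - b[0], centre[1] - b[1]
        # Rule (b): back away from any neighbour that is too close.
        for other in birds:
            if other is not b and dist(b, other) < MIN_DIST:
                dx += b[0] - other[0]
                dy += b[1] - other[1]
        norm = (dx ** 2 + dy ** 2) ** 0.5 or 1.0
        new_positions.append([b[0] + SPEED * dx / norm,
                              b[1] + SPEED * dy / norm])
    birds = new_positions

centre = centre_of(birds)
print("mean distance to flock centre:",
      sum(dist(b, centre) for b in birds) / N)
```

No representation of a goal, of other individuals, or of the flock as a whole enters the update rule; the coordinated motion is a by-product of purely local perception.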
1.2.2 In-group versus out-group
One simple way for a player to increase cooperation in an iterated prisoner’s
dilemma game is to divide the other individuals into an in-group and an out-group.
Then, the basic strategy is to cooperate with everybody in the in-group while defecting against everybody in the out-group. The only cognitive requirement of this
strategy is the ability to distinguish members of the in-group from members of the out-group.
In nature this is often accomplished via olfaction: for example, bees from a different
hive have a different smell and are therefore treated with aggression. Meanwhile
Dunbar [17] speculates that dialects have evolved to serve as markers of the in-group among humans: a more advanced version of the same idea. The ubiquity and strength of in-group mechanisms throughout the animal kingdom provide good reason to suspect that the mechanisms are at least partially genetically determined [42].
Consider a special case. The spatial prisoner’s dilemma games studied by, among
others, Nowak and May [56], Lindgren and Nordahl [49], Hauert [34], Brandt et
al. [9] can be seen as a form of in-group cooperation. In these games, players are
organised spatially so that one can only interact with one's neighbors. The results of
these studies show that spatial structure can promote cooperation: cooperators survive by forming clusters, creating spatially defined in-groups
that defect against outsiders.
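The following is a minimal sketch of a spatial game in this spirit; the payoff values, grid size, and best-imitation update rule are illustrative assumptions rather than a reconstruction of the cited models.

```python
import random

# Minimal sketch of a spatial prisoner's dilemma in the spirit of the lattice
# models cited above: players on a grid interact only with their neighbours
# and then imitate the best-scoring strategy in their neighbourhood.
# Payoff values are illustrative (R = 1, P = S = 0, temptation B).

SIZE, ROUNDS, B = 20, 30, 1.6
grid = [[random.choice("CD") for _ in range(SIZE)] for _ in range(SIZE)]

def neighbours(i, j):
    return [((i + di) % SIZE, (j + dj) % SIZE)
            for di in (-1, 0, 1) for dj in (-1, 0, 1) if (di, dj) != (0, 0)]

def payoff(me, other):
    if me == "C":
        return 1.0 if other == "C" else 0.0
    return B if other == "C" else 0.0

for _ in range(ROUNDS):
    score = [[sum(payoff(grid[i][j], grid[x][y]) for x, y in neighbours(i, j))
              for j in range(SIZE)] for i in range(SIZE)]
    new_grid = [row[:] for row in grid]
    for i in range(SIZE):
        for j in range(SIZE):
            # Imitate the highest-scoring player among self and neighbours.
            best = max(neighbours(i, j) + [(i, j)],
                       key=lambda p: score[p[0]][p[1]])
            new_grid[i][j] = grid[best[0]][best[1]]
    grid = new_grid

coop = sum(row.count("C") for row in grid)
print(f"cooperators after {ROUNDS} rounds: {coop}/{SIZE * SIZE}")
```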
It seems fairly clear that the evolutionary origin of in-group formation is kinship selection. In a kin group, the outcome of a prisoner’s dilemma game cannot be
judged solely by the benefits accruing to single players. This is because of the genetic relatedness of the individuals; in many cases, kinship mechanisms make cooperation the only evolutionarily stable strategy. In other words, a prisoner’s dilemma
game that is seemingly irrational or less than fully rational at the individual agent
level may be perfectly cooperative when the genes are taken as the players. Kin
groups can form kernels from which larger cooperative groups can subsequently develop [48, p. 351]. The in-group requires some form of marker, be it just physical
proximity, to distinguish the in-group from the out-group. Evolutionary biologists
(e.g. [88]) stress that such markers should be hard to fake in order to exclude free
riders from the in-group.
Another way of generating an in-group is to identify a common enemy. West
et al. [86] present empirical evidence that when a group playing an iterated prisoner’s dilemma game competes with another group, within-group cooperation will
increase. Bernhard et al. [4] present an anthropological study of two groups in
Papua New Guinea; it supports the same conclusion. The downside of an in-group-versus-out-group mechanism, as Hamilton [32] notes, is that what favors cooperation within groups will, at the same time, favor hostility between groups. A parallel
example can be found among apes. A troop of chimpanzees in the Gombe forest in
Tanzania became divided into a northern and a southern group. The chimpanzees
in the two groups, who had earlier played and groomed together, started fighting
lethally against each other [12].
1.2.3 Reciprocal altruism
In an iterated prisoner’s dilemma game, one player can retaliate against another’s
previous defection. Trivers [81] argues that this possibility can make cooperation
more attractive and lead to what he calls reciprocal altruism. In their seminal paper,
Axelrod and Hamilton [2] used computer simulations to show that cooperation can
evolve through multiple iterations of the prisoner’s dilemma game. A Nash equilibrium strategy, such as the well-known Tit-for-Tat, can lead to reciprocal cooperative
behavior among individuals who encounter each other frequently. The minimal cognitive capacities required for this strategy are the ability to recognize the individual
one interacts with and a memory of previous outcomes (see below for further specification of these criteria). Trust may be an emotional correlate of reciprocity that
strengthens reciprocal behavior. Frank [21] suggests that such an emotion may become selected for and thus genetically grounded in a species that engages in reciprocal altruism. On this point, Van Schaik and Kappeler [84, p. 17] write that “we must
expect the presence of derived psychological mechanisms that provide the proximate control mechanisms for these cooperative tendencies.... [T]he evidence suggests a critical role for unique emotions as the main mechanisms. Extreme vigilance
toward cheaters, a sense of gratitude upon receiving support, a sense of guilt upon
being detected at free riding, willingness to engage in donation of help and zeal to
dole out altruistic punishment at free riders – all these are examples of mechanisms
and the underlying emotions that are less developed in even our closest relatives.”
Meanwhile Trivers [82, p. 76] writes about his 1971 paper on reciprocity that its
most important contribution was that “it laid the foundation for understanding how
a sense of justice evolved.”
If the players in an iterated prisoner’s dilemma game choose their moves synchronously, then the size of available memory does not seem to affect which strategies are successful [1, 47, 48, 35]. However, for so-called alternating iterated prisoner’s dilemma games in which players take turns choosing their moves, it seems
that the best strategies depend on the size of memory for previous moves [22, 55].
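As a concrete illustration of how little cognition such a strategy needs, here is a minimal sketch of a synchronous iterated prisoner's dilemma in which Tit-for-Tat only has to recall its partner's previous move; the payoff values (T = 5, R = 3, P = 1, S = 0) are the conventional illustrative ones, not taken from any of the studies cited.

```python
# Minimal sketch of a synchronous iterated prisoner's dilemma, contrasting
# Tit-for-Tat with unconditional defection. Payoff values are illustrative.

PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def tit_for_tat(my_history, their_history):
    # Cooperate first, then copy the partner's previous move.
    return "C" if not their_history else their_history[-1]

def always_defect(my_history, their_history):
    return "D"

def play(strategy_a, strategy_b, rounds=100):
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        move_a = strategy_a(hist_a, hist_b)
        move_b = strategy_b(hist_b, hist_a)
        pay_a, pay_b = PAYOFF[(move_a, move_b)]
        score_a += pay_a
        score_b += pay_b
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b

print("TFT vs TFT:", play(tit_for_tat, tit_for_tat))
print("TFT vs ALLD:", play(tit_for_tat, always_defect))
```

All the strategy needs is individual recognition (to know which history is the partner's) and a one-move memory.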
Field studies of fish [63], vampire bats [87], and primates [69], among others,
have revealed the presence of reciprocal altruism. Still, the evidence for reciprocal
altruism in non-human species is debated, and the results of some laboratory experiments seem to argue against it [76]. In line with this, Stevens and Hauser [77]
argue that the cognitive demands of reciprocal altruism have been underestimated
(see also [33]). They claim that reciprocal altruism will be evolutionarily stable in
a species only if (a) the temporal discounting of future rewards is not too steep, (b)
members of the species have sufficient ability to discriminate the value of the rewards so as to judge that what an altruist receives back is comparable to what it has
given, and (c) there is sufficient memory capacity to keep track of interactions with
several individuals.
Temporal discounting refers to the tendency to discount the future value of a
reward, the further removed it is in time. Studies reveal that the rates of temporal
discounting differ drastically between species [41, 15]. Humans have the lowest
rate, a prerequisite for the prospective planning that will be discussed below. New
studies on apes reveal that, in food contexts, apes are on a par with humans [68, 60].
What factors (ecological, cognitive, neurological, etc.) explain the discounting rate
of a particular species is an interesting problem that should receive more attention
within evolutionary game theory.
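For readers who want the notion in explicit form, the two formalisations of discounting most often used are sketched below; the discount rate k is a free parameter, not an estimate for any particular species.

```latex
% Two standard formalisations of temporal discounting of a reward of value V
% delivered after a delay t; the discount rate k > 0 is illustrative.
\[
  V_{\mathrm{exp}}(t) = V e^{-kt}
  \qquad\text{(exponential discounting)}
\]
\[
  V_{\mathrm{hyp}}(t) = \frac{V}{1 + kt}
  \qquad\text{(hyperbolic discounting)}
\]
% The steeper the discounting (the larger k), the less a delayed return from
% reciprocation is worth relative to an immediate gain from defection.
```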
According to Stevens and Hauser [77], cognitive demands explain why reciprocal altruism is difficult to establish in species other than Homo sapiens. Weighing
against their argument, one may present the three types of reciprocity proposed by
de Waal and Brosnan [13, pp. 103–104]: (1) Individuals reciprocate on the
basis of symmetrical features of dyadic relationships (kinship, similarities in age,
or sameness of gender). De Waal and Brosnan say that this is the cognitively least
demanding form: “it requires no scorekeeping since reciprocation is based on preexisting features of the relationship. [. . .] All that is required is an aversion to major,
lasting imbalance in incoming and outgoing benefits.” (2) Next comes attitudinal
reciprocity, where individual willingness to cooperate depends on the attitude the
partner has recently shown. Cognitively, the “involvement of memory and scorekeeping seems rather minimal, as the critical variable is general social disposition
rather than specific costs and benefits of exchanged behavior” [13, p. 104]. (3) Finally there is calculated reciprocity, in which “individuals reciprocate on a behavioral one-on-one basis with a significant time interval.” This is the cognitively most
demanding form of reciprocity, since it “requires memory of previous events, some
degree of score-keeping, [and] partner-specific contingency between favors given
and received” [13, p. 104]. Reciprocity as discussed by Stevens and Hauser [77] is
most similar to this third form. It is not surprising that it would be the most difficult to achieve. When analyzing to what extent different species exhibit reciprocal
altruism, de Waal and Brosnan's [13] distinctions must be kept in mind.
Meanwhile, attitudinal reciprocity as defined by de Waal and Brosnan [13]
involves cognitively fairly minimal capacities: individuals must be recognized over
time and their attitude as a partner remembered. Understanding the attitude of a
partner requires some kind of theory of mind, with at least the capacity to understand
the desires of the other (see [26]).
Understanding the attitude of a partner is a good general mechanism for improving cooperation. Charness and Dufwenberg [11, p. 1580] write of “guilt aversion” as
a way to explain why individuals behave more cooperatively than predicted by game
theory in prisoner’s-dilemma-type (and similar) games. To wit: “decision makers experience guilt if they believe they let others down.” By considering the expectations
of your partner(s) and having an emotional predisposition to avoid letting a partner down, your payoff matrix in a prisoner’s dilemma game changes, and you end
up being much more cooperative than traditional game theory would predict. For
Charness and Dufwenberg (ibid.) guilt aversion requires beliefs about the beliefs
of others. However, this claim—about the theory of mind of the decision maker—
seems to be too strong for the mechanism they are trying to explain. Surely it is
sufficient that the decision maker understand the desires of the other just far enough
to realize that the other is let down. This caveat aside, the notion of guilt aversion
seems key to analyzing many forms of human cooperative behavior, and it merits
further investigations, both theoretical and experimental. The question arises as to
what extent guilt aversion can be shown to exist in other species. At the least, the
capacity to understand the desires of others should be within the cognitive reach of
many species.
1.2.4 Indirect reciprocity
Reciprocal altruism can be rendered in slogan form: “scratch my back and I’ll
scratch yours.” As we have seen, this principle makes sound evolutionary sense.
However, in humans one often finds more extreme forms of altruism: “If I help you,
somebody else will help me.” Such cooperation has been called indirect reciprocity,
and it seems to be unique to humans. Nowak and Sigmund [57] show that, at least
under certain conditions, indirect reciprocity can be given an evolutionary grounding (see also [45]). However, as we shall see, their explanation depends on very
strong assumptions concerning the communication of the interactors (which may
explain why indirect reciprocity is only found in humans).
In Nowak and Sigmund's [57] account of indirect reciprocity, any two players
are assumed to interact at most once with each other. This is, of course, an idealising
assumption; but it has the useful effect that the recipient of a defection (cheat) in a
prisoner’s dilemma game cannot retaliate. In this way, all strategies that have been
considered for the iterated prisoner’s dilemma game are excluded. So the question
becomes: under what conditions can cooperation evolve as an evolutionarily stable
strategy (ESS) by means of indirect reciprocity?
Nowak and Sigmund note that indirect reciprocity (as we saw earlier with attitudinal reciprocity) seems to require some kind of “theory of mind”. An individual
watching a second individual (the donor) helping, or failing to help, a third individual in need (the receiver) must be able to judge that the donor does
something “good” (or “bad”) to the receiver.2 The form of intersubjectivity required
for such comparisons is related closely to empathy [62, 26].
The key concept in Nowak and Sigmund's evolutionary model of indirect reciprocity is that of the reputation of an individual.3 The reputation of an individual i is
built up by members of the society observing i's behavior towards third parties and
then spreading this information to other members of the society. To wit, gossip becomes a way of achieving societal consensus about reputation [17, 73]. In this way
the reputation of i as a “selfless” helper can be known by more or less all the
members of the group. The level of i's reputation can then be used by any individual
when deciding whether or not to assist i in a situation of need.4 Reputation is not, of
course, immediately visible to others in the way that status markers such as a raised
tail among wolves are. Instead, each individual must keep a private account
of the reputation of all others with whom she interacts. Semmann et al. [70] provide
a nice experimental demonstration that building a reputation through cooperation is
valuable for future social interactions, not only within but also beyond one’s social
group.
2 This is in contrast to kinship-based altruism, where what an individual experiences as “good” can be that which improves her own reproductive fitness.
3 A precursor to this concept is Sugden's [79] notion of “good standing.”
4 The in-group behavior discussed in Section 2.2 can be understood as a limiting case of indirect
reciprocity whereby all in-group members are treated as having a good reputation and all out-group
members as having a bad one. Turned the other way around, indirect reciprocity becomes a flexible
way of determining the membership of the in-group.
It is important to distinguish in these interactions between justified and unjustified defections. If a potential receiver has defected repeatedly in the past, the donor
may be justified in defecting: e.g., as a form of punishment. However, the donor
runs a risk that his own reputation may drop. To prevent this, the donor should communicate his reason for defecting: i.e., the receiver has a bad reputation.
A range of more sophisticated strategies is available for distinguishing between
justified and unjustified defections. Nowak and Sigmund [57] call a strategy first
order if the assessment of an individual i in the group depends only on i’s own
actions. A strategy is second order if it depends on the reputation of the receiver
and third order if it depends additionally on the reputation of the donor. Nowak and
Sigmund argue that only eight of the possible strategies—all of which depend on the
distinction between justified and unjustified defection—are evolutionarily stable.
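A minimal sketch of how image scoring and justified defection interact is given below; the binary reputations, payoff values, and the single discriminator strategy are illustrative simplifications in the spirit of Nowak and Sigmund's model, not a reimplementation of it.

```python
import random

# Minimal sketch of an image-scoring donation game: reputations are binary,
# discriminators help only partners in good standing, and a second-order
# assessment rule treats refusal towards a bad-reputation receiver as
# justified. All numerical values are illustrative.

N, ROUNDS, COST, BENEFIT = 50, 20000, 1.0, 3.0
strategy = {i: "defector" if i < N // 5 else "discriminator" for i in range(N)}
reputation = {i: "good" for i in range(N)}   # publicly shared image scores
payoff = {i: 0.0 for i in range(N)}

def helps(donor, receiver):
    if strategy[donor] == "defector":
        return False
    return reputation[receiver] == "good"    # help only partners in good standing

for _ in range(ROUNDS):
    donor, receiver = random.sample(range(N), 2)
    if helps(donor, receiver):
        payoff[donor] -= COST
        payoff[receiver] += BENEFIT
        reputation[donor] = "good"           # helping is assessed as good
    elif reputation[receiver] == "good":
        reputation[donor] = "bad"            # unjustified defection damages reputation
    # Refusing a "bad" receiver is justified and leaves the donor's standing intact.

for s in ("discriminator", "defector"):
    members = [i for i in range(N) if strategy[i] == s]
    print(s, "average payoff:", sum(payoff[i] for i in members) / len(members))
```

In this toy setting the defectors quickly acquire bad reputations and are subsequently refused help, which is exactly the channel through which reputation stabilizes cooperation.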
Thus, the success of indirect reciprocity depends heavily on determining reputation. This means that indirect reciprocity, in Nowak and Sigmund’s [57] model,
requires several cognitive and communicative mechanisms. The minimal cognitive
requirements are (a) recognising the relevant individuals over time, (b) remembering
and updating the reputation scores of these individuals, and (c) judging by means of
empathy whether a particular donor action is “good” or “bad”. The minimal communicative requirements are (d) being able socially to identify individuals in their
absence, e.g. by name; (e) being able to express that “x was good to y” and “y was
bad to x”. No non-human communication system seems able to handle these requirements. It should come as no surprise that Homo sapiens is the only species
exhibiting indirect reciprocity.5 The communication system need not be a humanstyle language with full syntax; a form of protolanguage along the lines discussed by
Bickerton [5] is sufficient. The communication could be symbolic, but it is possible
that a system based on miming [16, 89] would be adequate.
The trust that is built up through reciprocal altruism is dyadic: that is, constituting
a relation between two individuals. In contrast, reputation is an emergent social
notion involving everybody in the group. Nowak and Sigmund [57, p. 1296] write:
Indirect reciprocity is situated somewhere between direct reciprocity and public goods. On
the one hand it is a game between two players, but it has to be played within a larger group.
As noted above, a potential donor should signal that a potential recipient has a
“bad” reputation before defecting, in order not to lose reputation himself. This is
another requirement on communication: the ability to form expressions of the type
“y has a bad reputation”. Nowak and Sigmund’s [57] model considers only two
kinds of reputation: “good” and “bad.” It goes without saying that, in real groups,
communication about reputation is far more nuanced.
Of course, the need for communication is dependent on the size of the group
that is facing the prisoner’s dilemma situations. In a small, tightly connected group
where everybody sees everybody else most of the time, no reputation mechanism is needed.
5 In contrast, Van Schaik and Kappeler [84, p. 15] write: “the three basic conditions for reputation are individual recognition, variation in personality traits, and curiosity about the outcome of interactions involving third parties.” In my opinion, however, these criteria are too weak, since they place no requirement on reputation to be communicated.
One can look at the way within-group ranking is established. If I observe
that x has higher rank than y and I know that y has higher rank than I do, there is
no need for (aggressive) interaction or for communication to establish the relative
ranking. However, in large and loosely connected groups, these mechanisms are
not sufficient, and some form of communication about non-present individuals is
required. This “displacement” of the message is one of the criteria that Hockett [40]
uses to identify symbolic language (see also [24]).
Still other mechanisms may exist that influence the reputation of an individual.
In many situations, people are willing to cooperate but also to punish free riders.
Punishment behavior is difficult to explain because it is costly, and the cooperative non-punisher benefits [20]. Thus, in general, punishment should decrease in
frequency.6
Barclay [3] presents evidence that the reputation of a punisher increases, and
that people are more willing to cooperate with punishers. In this way, punishment
behavior may be rewarded in the long run (see also [71, 36, 37, 54]) and can stabilize
cooperative behavior in iterated prisoner’s dilemma games [48].
For all its cost, punishment behavior seems to be something that all human populations are willing to use, to varying degrees.
Using computer simulations, Hauert et al. [36] show that, if participants have the
choice not only to cooperate, to defect, or to punish, but also to stand aside, defectors
will eventually be eliminated from the population. Surprisingly, if standing aside is
not an option, then, in the long run, the defectors take over. Hauert et al. conclude
that compulsory joint enterprises are less likely to lead to cooperation than voluntary
ones.
These results clearly show that Nowak and Sigmund’s idealizing assumption
about one-shot interactions is too strong. In real life it is important to be able, at
least sometimes, to retaliate. No evidence exists for altruistic punishment among
animals in nature [84], although de Waal and Brosnan [13] report experimentally induced refusal to cooperate that is costly for the non-cooperative individual.
1.2.5 Cooperation to achieve future goals
Cooperation often occurs over an extended timespan and requires a form of planning. One must distinguish between immediate planning that concerns present needs
and prospective planning that concerns future ones. Humans can predict that they
will be hungry tomorrow and so should save some food. They can imagine that the
winter will be cold and so should start building a shelter as early as the summer.
The cognition of most other animals appears concerned almost exclusively with the
here and now, while humans live both here and now and in the future. The central
cognitive aspect of prospective planning is the capacity to represent future needs.
Prospective thinking is sometimes also called “mental time travel” (e.g. [66]).
6 In line with Frank [21], Fehr and Gächter [20] say that negative emotions towards defectors are
the proximate causes of altruistic punishment.
Bischof [6] and Bischof-Köhler [7] argue that non-human animals cannot anticipate future needs or drive states (see also [31]). Their cognition is therefore bound
to their present motivational state (see also [78]). This so-called Bischof-Köhler hypothesis had seemed supported by the previously available evidence on planning
in non-human animals. However, recent experimental results on non-human primates [53, 66, 60], as well as recent observations of spontaneous planning behavior [58], demonstrate that the hypothesis can no longer be upheld.
For most forms of cooperation found among animals, mental representations of the goal of the cooperation are not needed. If the goal is immediately
present in the environment—for example as food to be eaten or an antagonist to be
fought—the collaborators need no mutual representation of the goal before acting.
If, on the other hand, the goal is distant in time or space, a mutual representation
of it must be produced before cooperative action can be taken. So building a shared
dwelling requires coordinated planning of how to obtain the building materials and
advanced collaboration during the construction. In general, cooperating on future
goals requires that the conceptual spaces of the individuals be coordinated.
A mutual representation of a future goal changes a situation from a cooperative
dilemma into a game where the cooperative strategy is the equilibrium solution. For
example, if my group lives in a very dry area, each individual (or each family) will
benefit from digging a well. However, if my neighbor digs a well, I may defect,
and take all my water from his well instead of digging my own. Meanwhile, if nobody digs a well, we are all worse off than if everybody does so. Now, if somebody
suggests that we cooperate in digging a communal well, then such a well, by being potentially much deeper, could yield much more water than all the individual
wells taken together. Once such cooperation is established, the incentive to defect
may either disappear or at least be significantly reduced, since everybody stands to
benefit more from supporting the common goal. In game-theoretic terms, digging
a communal well may work out to be a new equilibrium strategy. This example
shows how the capacity for sharing detached future goals can strongly enhance the
value of cooperative strategies within the group. Strategies based on future goals
may introduce new equilibria that are beneficial to all participants.
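The point can be made concrete with an invented pair of payoff matrices: in the first, free riding on a neighbor's well is the best reply, while in the second, where the communal well is assumed to yield much more water, mutual digging becomes an equilibrium.

```python
# Illustrative payoff matrices for the well-digging example (all numbers are
# invented for this sketch). In the original dilemma, free riding on a
# neighbour's well beats digging your own; once a much deeper communal well
# is proposed, joint digging pays more than free riding.

# payoffs[(row_move, col_move)] = (row_payoff, col_payoff)
private_wells = {("C", "C"): (2, 2), ("C", "D"): (0, 3),
                 ("D", "C"): (3, 0), ("D", "D"): (1, 1)}

communal_well = {("C", "C"): (5, 5), ("C", "D"): (0, 3),
                 ("D", "C"): (3, 0), ("D", "D"): (1, 1)}

def is_nash(game, profile):
    """Check that neither player gains by unilaterally deviating."""
    moves = ("C", "D")
    row, col = profile
    row_ok = all(game[(row, col)][0] >= game[(alt, col)][0] for alt in moves)
    col_ok = all(game[(row, col)][1] >= game[(row, alt)][1] for alt in moves)
    return row_ok and col_ok

for name, game in (("private wells", private_wells),
                   ("communal well", communal_well)):
    print(name, "- mutual cooperation is an equilibrium:",
          is_nash(game, ("C", "C")))
```

The shared representation of the future goal does not change the players; it changes the game they take themselves to be playing.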
The cognitive requirements for cooperating on future goals crucially involve the
capacity to represent future drives or wishes: that is, they require prospective planning. Even if water is currently plentiful, you can foresee the coming dry season.
This representation, and not any current drive state, forms the driving force behind
a wish to cooperate in digging a well.
What are the communicative requirements for such (shared) representations?
Symbolic language is the primary tool by which agents can make their personal
representations known to each other. In previous work [10, 24, 25, 28], I have proposed a strong connection between the evolution of prospective cognition and the
evolution of symbolic communication. In brief, symbolic language makes it possible to cooperate on future goals in an efficient way. Again, such communication
need not involve any complex syntactic structures; protolanguage [5] will serve perfectly
well.
An important feature of the use of symbols in cooperation is the way they set the
cooperators free from the goals that are immediately available in the environment.
This requires that present goals can be suppressed, which depends upon the executive functions of the frontal brain lobes [43]. Future goals and the means to reach
them can then be picked out and shared through symbolic communication. This
kind of sharing gives humans an enormous cooperative advantage in comparison
to other species. I propose that symbolic communication and cooperation on future
goals have co-evolved (cf. the “ratchet effect” discussed by Tomasello [80, pp. 37–40]). That said, without the prior presence of prospective cognition, the selective
pressures that resulted in symbolic communication would not have emerged.
1.2.6 Commitments and contracts
Commitments and contracts are special cases of cooperating about the future. They
rely on an advanced theory of mind that allows for joint beliefs.7 Joint beliefs form
the basis for much of human culture. They make many new forms of cooperation
possible. For example, you might promise to do something, which only means that
you intend to do it. On the other hand, when you commit to another person to perform an action, the other person normally wants you to do it and intends to check
that you do so. Furthermore, the two parties share joint beliefs concerning these
intentions and desires [18]. Unlike promises, commitments cannot arise unless the
parties have such joint beliefs and possess prospective cognition. It has been argued in several places (e.g. [80, 24, 26]) that the capacity for joint beliefs is only
found in humans. Interestingly, studies show that three-year-olds already have some
understanding of joint commitments [30].
For similar reasons, contracts cannot be established without joint beliefs and
prospective cognition. Deacon [14] argues that marriage is the first example of
cooperation where symbolic communication is needed. It should be obvious that
marriage is a form of contract. Even though I do not know of any evidence that
marriage agreements came first, I still find this idea interesting in the discussion
of early prospective cognition. Any sort of pair-bonding agreement implicitly determines which future behaviors are allowed and not allowed. These expectations
concerning future behavior involve not only the pair, but also the other members
of the social group who are supposed to support the relationship, and not to disturb
it by cheating. Anybody who breaks the agreement risks punishment by the entire
group. Thus, in order to be maintained, bonds such as marital bonds must be linked
to social sanctions.
7 For an analysis of different levels of a “theory of mind,” see Gärdenfors [24], ch. 4 and
Gärdenfors [27].
1.2.7 Cooperation based on conventions
In human societies, many forms of cooperation are based on conventions. Conventions assume common beliefs (often called common knowledge). If two cars meet
on a narrow gravel road in Australia, then both drivers know that this co-ordination
problem has been solved numerous times before by passing on the left. Both know
that both know this, both know that both know that both know this, etc., and so they
pass on the left without any hesitation [46]. Many conventions are established without explicit communication; but, of course, communication makes the presence of a
convention clearer.
Conventions function as virtual governors in a society. Successful conventions
create equilibrium points that, once established, tend to be stable. The convention
of driving on the left forces me to “get into step” and drive on the left. The same applies to language itself: a new member of any society must adjust to the predominant
language of the community. The collective meaning of all the linguistic utterances
emerges from the individual meanings of the speakers.8 There is, of course, a potentially
infinite number of possible “equilibrium” languages in a society. The point is that,
once a language has developed, it will both strongly constrain and have strong explanatory effects regarding the behavior of individuals in the society.
Similarly, the existence of money depends on a web of conventions. Money requires cooperation in the form of a mutual agreement to accept e.g. certain decorative pieces of paper as a medium of exchange for goods or labor. As a member
of a monetary society, or as a visitor to one, I must “get into step” and accept the
conventional medium of exchange as legal tender. Money is an example of “social
software” [61] that improves the cooperation within a society.
Actually, there are many similarities between language and money as tools for
cooperation. Humans have been trading goods for as long as they have existed.
When a monetary system does emerge, it makes certain transactions more efficient.
The same applies to language: hominids were communicating long before they had
a language, but language makes the exchange of information more efficient. The
analogy carries further. When money is introduced into a society, a stable system of
prices normally emerges. Similarly, when linguistic communication is introduced,
individuals come to share a relatively stable space of meanings: i.e., components
of their private worlds that, as communicators, they can publicly exchange with
one another. In this way, language fosters a common structure in the private worlds
of the individual members of a society (see [29]).
In general, the social framework we construct, in the form of e.g. our institutions,
increases the possibilities for establishing contracts, conventions, and other matters
of the social good.
8
This emergence of a social dimension to language is analysed in detail in Gärdenfors [23] and
Gärdenfors and Warglien [29].
1.2.8 The cooperation of Homo economicus
Homo economicus, the rational human, is an idealization, typically assumed to have
perfect reasoning powers. He is fully logically consistent, a perfect Bayesian when
it comes to probability judgements, and a devoted maximiser of utility. He possesses
a complete theory of mind: he can put himself fully into the shoes of his opponents
(as they can with him) and see them as fellow perfect Bayesians reasoning about
the possible equilibrium strategies in the game in which he is playing. When he
communicates, his messages have a logically unambiguous meaning. He cooperates
if and only if such cooperation maximizes his utility according to the rules of the
game.
Similar idealizations are made in analysing cooperation using epistemic and deontic logics or so-called belief-desire-intention logics. The semantics is, typically,
formulated in terms of possible worlds. This leads, of course, to unrealistic assumptions of strict logical consistency, unbounded self-reflective powers, and unlimited
recursive knowledge attribution—as Verbrugge [85, p. 657] points out. The traditional logical approaches cannot be used as realistic models of human cooperation.
Several suggestions have recently been presented for how one might generalize the
logical approach to provide such models [83, 85].
The problem with Homo economicus is that he seldom faces up to the complexities of reality, where the rules of the game are not well defined, where the possible
outcomes and available strategies may vary from minute to minute, where the utilities of the various outcomes are uncertain if not unknown, where reasoning time
and memory resources are limited, where communication is vague and prone to
misunderstandings, and where his opponents are idiosyncratic and have all manner
of cognitive shortcomings. Trivers [82, p. 79] writes: “I know of no species, humans included, that leaves any part of its biology at the laboratory door; not prior
expectations, nor natural proclivities, nor ongoing physiology, nor arms or legs, nor
whatever.”
Game theory has, almost exclusively, studied strategies within a fixed and clearly
defined game. In nature, it is more realistic to speak of strategies that can be used in
a variety of different but related games. How should one rationally act when meeting an opponent face to face, given all kinds of biological constraints, knowing that
one's opponent is not fully rational, but not knowing what the opponent's exact limitations are? Game theory without biology is esoteric at best; with biology, it becomes
as complicated as life itself (see e.g. [19]).
1.3 The effects of communication on cooperation
I want to distinguish between, on the one hand, communication that involves a
choice (conscious or not) by some individual to send some signal; and, on the other
hand, mere signaling without such intention.9 Thus a caterpillar, though its bright
colors signal that it is poisonous, is not communicating. By this definition, communication becomes, in itself, a trust-based cooperative strategy. (Of course, communication can be abused: what the sender communicates may be a deliberate lie
intended to mislead the receiver.)
When a communicator sends a message, the receiver may not, initially, be able
to interpret the intended meaning of the message. However, over iterated interactions, a message may develop a fairly fixed meaning. Lewis [46] introduced a game-theoretic account of conventions of meaning, according to which signals transmit information as a result of an evolutionary process. A variety of computer simulations
and robotic experiments (e.g. [74, 75]) have shown how a stable communication
system can emerge out of iterated interactions between (natural or) artificial agents,
even though nobody determines any rules for the communications. The greater the
number of “signalers” and “recipients” involved, the stronger the convergence of
reference and the faster the convergence is attained.
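A minimal sketch of such an emerging convention, using a simple urn-style reinforcement rule and invented parameters rather than the specific models in the cited simulations, might look as follows.

```python
import random

# Minimal sketch of a two-state Lewis signaling game with urn-style
# reinforcement learning. Parameters and the learning rule are illustrative.

STATES, SIGNALS, ACTS, ROUNDS = (0, 1), ("a", "b"), (0, 1), 5000

# Weights start uniform: no pre-established meaning for either signal.
sender = {s: {sig: 1.0 for sig in SIGNALS} for s in STATES}
receiver = {sig: {act: 1.0 for act in ACTS} for sig in SIGNALS}

def weighted_choice(weights):
    options, total = list(weights), sum(weights.values())
    r = random.uniform(0, total)
    for option in options:
        r -= weights[option]
        if r <= 0:
            return option
    return options[-1]

successes = 0
for _ in range(ROUNDS):
    state = random.choice(STATES)
    signal = weighted_choice(sender[state])
    act = weighted_choice(receiver[signal])
    if act == state:                       # communication succeeded
        sender[state][signal] += 1.0       # reinforce the successful choices
        receiver[signal][act] += 1.0
        successes += 1

print("success rate:", successes / ROUNDS)
print("sender weights:", sender)
```

Nobody legislates which signal goes with which state; the mapping that happens to be reinforced early on becomes entrenched, which is exactly the sense in which the resulting meaning is conventional.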
The addition of communication to a prisoner’s-dilemma-type game changes the
space of strategies in the game. In game theory, it has widely been assumed that
signals that “cost nothing” are of no significance to any games that are not purely of
common interest, whereas evolutionary biologists have emphasized signals that
are designed to be so costly that they (effectively) cannot be faked [88]. In contrast,
Skyrms [72] shows how, in the “stag hunt” game and in another bargaining game,
costless signals have dramatic effects on the game dynamics compared to the same
games without signals. The mere presence of signals creates new equilibria and shifts the basins of
attraction of old equilibria. To wit: even if communication is not strictly necessary
for all of the forms of cooperation listed above, adding it to a game may drastically
change the game’s structure.
Such an effect was first pointed out by Robson [67] with respect to a population
of defectors in an evolutionary setting of the prisoner’s dilemma game. A signal that
is not used by the population at large can be used by mutant members as a kind
of secret handshake to identify an in-group. Mutants can then cooperate with those
who share the secret handshake and defect against the others. They would perform
better than “pure” defectors and invade the population. Without the availability of
signaling, “pure” defection would be the evolutionarily more stable strategy. Van
Schaik and Kappeler [84, p. 9] argue that, even without anything as sophisticated as
a secret handshake, “communication about the intentions of each player before the
interactions and negotiation with them about payoffs is likely to make reciprocity
much more stable than under the conditions of prisoner’s dilemma games. Thus
communication before engaging in risky cooperation is frequently observed in primates.”
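A minimal sketch of the secret-handshake effect, with illustrative payoff values and mutant frequencies, shows why signaling mutants can invade a population of defectors.

```python
# Minimal sketch of the "secret handshake": in a population of defectors,
# rare mutants emit a costless signal, cooperate with fellow signallers and
# defect against everyone else. Payoff values (T=5, R=3, P=1, S=0) and the
# mutant shares are illustrative.

PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

def move(player, partner_signalled):
    # Mutants cooperate only with partners who also gave the handshake.
    if player == "mutant" and partner_signalled:
        return "C"
    return "D"

def expected_payoff(player, mutant_share):
    total = 0.0
    for partner, prob in (("mutant", mutant_share),
                          ("defector", 1 - mutant_share)):
        my_move = move(player, partner == "mutant")
        their_move = move(partner, player == "mutant")
        total += prob * PAYOFF[(my_move, their_move)]
    return total

for share in (0.01, 0.1, 0.5):
    print(f"mutant share {share:.2f}:",
          "mutant =", expected_payoff("mutant", share),
          "defector =", expected_payoff("defector", share))
```

However rare the mutants are, they earn the mutual-cooperation payoff whenever they meet each other and never do worse than the resident defectors, so the signal can spread.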
In traditional game theory, the moves in a game are assumed to be fixed. Either
the players cannot communicate at all, or else they have access to full linguistic communication. From an evolutionary perspective such extremes are not very realistic.
In many natural settings, the interactors can communicate in at least some limited way.
9 This is in contrast to e.g. Hauser [38], who uses “communication” to also include all forms of signals.
There is almost always the possibility of creating a new signal, which immediately multiplies the number of possible moves in the game. Taking these factors into account makes a game-theoretic analysis along traditional lines immensely more complicated. Already the simple communication strategies analysed by Skyrms [72] bear witness to this.
An animal can communicate in various ways without employing much in the way
of cognitive capacity. A system of signals can reach a communicative equilibrium
without actively exploiting any cognitive capacities of the participants. However,
as soon as one introduces the theory of mind and the strong tendency to assign
meanings to signals that one finds in humans, the game-theoretic situation becomes
yet more complex.
To give but one example of the powerful effects of communication: Mohlin and
Johannesson [52] investigated the effects of communication on a dictator variation
of the prisoner’s dilemma game, where the dictator decides how much of an allotted amount to give to a recipient. They compared one-way written communication
from an anonymous recipient to no communication, and found that communication
increases donations by more than 70 percent. In order to eliminate any “relationship
effect” from communication, another test condition was designed to allow communication from a third party. Donations were still about 40 percent higher than in the
condition with no communication, suggesting that even the impersonal content of
communication affects donations.
Charness and Dufwenberg’s [11] notion of guilt aversion provides one possible
rationale for the effects of this “endogenous communication”: the dictator suffers
from guilt if he hurts the recipients relative to what they believe that they will receive. By taking guilt into account, the dictator can be seen to exhibit some understanding of the minds of his subjects, or at least of their desires. Communication
may affect the dictator’s beliefs and, in this way, influence any allocation to the
recipient. One might reasonably expect that, on such an account, communication
from a bystander will have less effect than a direct message from a recipient, even
an anonymous one.
1.4 Conclusion
Traditional analysis of cooperation in game theory assumes fully rational and fully
intersubjective participants. In contrast, evolutionary game theory assumes no rationality and no intersubjectivity, but relies on evolutionary mechanisms for the
establishment of cooperation. However, for most forms of animal and human cooperation, the truth falls somewhere in between. I have shown how different forms
of cooperation build on different cognitive and communicative capacities. The reason why certain forms of cooperation are so far found only among humans is that
those forms of cooperation presuppose a high awareness of future needs, a rich understanding of the minds of others, and the availability of symbolic communication.
The somewhat eclectic list of forms of cooperation and their cognitive and communicative prerequisites, as presented above, is summarized in the following table:
Type of cooperation | Cognitive demands | Communicative demands
Flocking behavior | Perception | None
In-group versus out-group | Recognition of group members | None
Reciprocal altruism (attitudinal reciprocity) | Individual recognition, minimal memory, understanding the desires of others | None
Reciprocal altruism (calculated reciprocity) | Individual recognition, memory, slow temporal discounting, value comparison | None
Indirect reciprocity | Individual recognition, memory, slow temporal discounting, value comparison, minimal theory of mind (empathy) | Proto-symbolic (mimetic) communication
Cooperation about future goals | Individual recognition, memory, prospective planning, value comparison, more advanced theory of mind | Symbolic communication (protolanguage)
Commitment and contract | Individual recognition, memory, prospective planning, joint belief | Symbolic communication (protolanguage)
Cooperation based on conventions | Theory of mind that allows common knowledge | None, but enhanced by symbolic communication
The cooperation of Homo economicus | Full theory of mind, perfect rationality | Perfect symbolic communication

Table 1: The cognitive and communicative demands of different forms of cooperation.
The different kinds of cooperation presented here are ordered according to increasing cognitive demands. They certainly do not exhaust the field of possibilities, and
there is a real need to make finer divisions within the forms presented. Nevertheless,
the table clearly shows how cognitive and communicative constraints are important
factors when analysing evolutionary aspects of cooperation.
Only humans are known to clearly exhibit reciprocal altruism, indirect reciprocity, cooperation on future goals, and cooperation based on conventions.10 This correlates well with the cognitive factors that are required for these forms of cooperation.
10 As mentioned above, whether other species engage in reciprocal altruism is open to some controversy.
The analysis, or a more detailed version of it, can be used to make predictions
about the forms of cooperation to be found in different species. For example, if a
monkey species can be shown to be able to recognize members of its own troop,
and to remember and evaluate the results of previous encounters with individual
members, but two monkeys of that species cannot communicate about a
third member, then one would predict reciprocal altruism but not indirect
reciprocity among the members of the species. In this way, a causal link from
cognitive capacities to cooperative behaviors can be established. Conversely, but
perhaps less reliably, if a species does not exhibit a particular form of cooperative
behavior, it can be expected not to possess the required cognitive or communicative
capacities.
The analysis can also be turned into a tool for predictions about the evolution
of cognition in hominids. Osvath and Gärdenfors [59] argue that the first traces of
prospective planning are to be found during the Oldowan culture 2.5 million years
ago. It is reasonable to assume that the forms of cooperation toward future goals that
depend on this form of cognition had already been established by then. Conversely,
if the archaeological record of some hominid activities provides evidence for certain
types of cooperation, one can reasonably assume that the hominids had developed
the cognitive and communicative skills necessary for that form of cooperation. This
may, in turn, generate predictions concerning brain structure and other matters of
physiology.
Acknowledgements
This article is an expanded and updated version of an article by the same title that
was published electronically in the Festschrift Hommage à Wlodek for Wlodek
Rabinowicz in January 2007 (see http://www.fil.lu.se/HommageaWlodek/). It also
overlaps with Gärdenfors [27]. I would like to thank two anonymous referees as well
as Barbara Dunin-Kȩplicz, Jan van Eijck, Martin van Hees, Kristian Lindgren, Erik
Olsson, Mathias Osvath, Rohit Parikh, Erik Persson, Wlodek Rabinowicz, Robert
Sugden, Rineke Verbrugge, and Annika Wallin. I would also thank the members of
the seminars in cognitive science and philosophy at Lund University and the members of the project group on Games, Action and Social Software at the Netherlands
Institute for Advanced Studies for very helpful comments on drafts of the paper.
I also want to thank the Netherlands Institute for Advanced Studies for excellent
working conditions during my stay there in October 2006 and the Swedish Research
Council for general support that made this research possible. This paper was originally written as part of the EU project Stages in the Evolution and Development of
Sign Use (SEDSU). The updated version has been prepared with support from the
Linnaeus environment Cognition, Communication and Learning, which is financed by
the Swedish Research Council.
References
1. A, R. The Evolution of Cooperation. Basic Books, New York, 1984.
2. A, R.,  H, W. D. The evolution of cooperation. Science 211, 4489 (1981),
1390–1396.
3. B, P. Reputational benefits for altruistic punishment. Evolution and Human Behaviour
27, 5 (2006), 325–344.
4. B, H., F, U.,  F, E. Parochial altruism in humans. Nature 442, 7105
(2006), 912–915.
5. B, D. Language and Species. The University of Chicago Press, Chicago, IL, 1990.
6. B, N. On the phylogeny of human morality. In Morality as a Biological Phenomenon,
G. S. Stent, Ed. Abakon Verlagagesellschaft, Berlin, 1978. Report of the Dahlem Workshop.
7. B-K̈, D. Zur Phylogenese menschlicher Motivation. In Emotion und Reflexivität,
L. H. Eckensberger and E.-D. Lantermann, Eds. Urban & Schwarzenberg, München, Vienna,
Baltimore, 1985, pp. 3–47.
8. B, S.,  G, H. The origins of human cooperation. In The Genetic and Cultural
Origins of Cooperation, P. Hammerstein, Ed. MIT Press, Cambridge, MA, 2003, pp. 429–443.
Dahlem workshop report.
9. B, H., H, C.,  S, K. Punishment and reputation in spatial public goods
games. Proceedings of the Royal Society of London. Series B: Biological Sciences 270, 1519
(2003), 1099–1104.
10. B, I.,  G̈, P. Co-operation and communication in apes and humans. Mind
and Language 18, 5 (2003), 484–501.
11. C, G.,  D, M. Promises and partnership. Econometrica 74, 6 (2006),
1579–1601.
12. D W, F. B. M. Our Inner Ape. Riverhead, New York, NY, 2005.
13. D W, F. B. M.,  B, S. F. Simple and complex reciprocity in primates. In
Cooperation in Primates and Humans, P. M. Kappeler and C. P. van Schaik, Eds. Springer,
Berlin, 2006, pp. 85–105.
14. D, T. W. The Symbolic Species. Penguin Books, London, 1997.
15. D, N.,   E, J. Time discounting and time consistency. In Games, Actions, and
Social Software, J. van Eijck and R. Verbrugge, Eds., vol. zzz?? of Texts in Logic and Games,
LNAI. Springer Verlag, Berlin, 2011, pp. ????–???
16. D, M. Origins of the Modern Mind. Harvard UniversityPress, Cambridge, MA, 1991.
17. D, R. Grooming, Gossip and the Evolution of Language. Harvard UniversityPress,
Cambridge, MA, 1996.
18. D-K̧, B.,  V, R. A logical view on teamwork. In Games, Actions, and
Social Software, J. van Eijck and R. Verbrugge, Eds., vol. zzz?? of Texts in Logic and Games,
LNAI. Springer Verlag, Berlin, 2011, pp. xxxx??–yyy??
19. E, A.,  L, K. Cooperation in an unpredictable environment. In ICAL 2003:
Proceedings of the Eighth International Conference on Artificial Life (Cambridge, MA, USA,
2003), MIT Press, pp. 394–399.
20. F, E.,  G̈, S. Altruistic punishment in humans. Nature 415, 6868 (2002), 137–
140.
21. F, R. H. Passions within Reason: The Strategic Role of the Emotions. Norton, New York,
NY, 1988.
22. F, M. R. The prisoner’s dilemma without synchrony. Proceedings of the Royal Society of
London. Series B: Biological Sciences 257, 1348 (1994), 75–79.
23. G̈, P. The emergence of meaning. Linguistics and Philosophy 16, 3 (1993), 285–
309.
24. G̈, P. How Homo Became Sapiens. Oxford University Press, Oxford, 2003.
25. G̈, P. Cooperation and the evolution of symbolic communication. In The Evolution
of Communication Systems, K. Oller and U. Griebel, Eds. MIT Press, Cambridge, MA, 2004,
pp. 237–256.
26. G̈, P. Evolutionary and developmental aspects of intersubjectivity. In Consciousness Transitions: Phylogenetic, Ontogenetic and Physiological Aspects, H. Liljenström and
P. Arhem, Eds. Elsevier, Amsterdam, 2007, pp. 281–305.
27. G̈, P. The role of intersubjectivity in animal and human cooperation. Biological
Theory 3, 1 (2008), 51–62.
28. G̈, P.,  O, M. Prospection as a cognitive precursor to symbolic communication. In Evolution of Language: Biolinguistic Approaches, R. K. Larson, V. Déprez, and
H. Yamakido, Eds. Cambridge University Press, Cambridge, 2010, pp. 103–114.
29. G̈, P.,  W, M. Cooperation, conceptual spaces and the evolution of semantics. In Symbol Grounding and Beyond (Berlin / Heidelberg, 2006), P. Vogt, Y. Sugita,
E. Tuci, and C. Nehaniv, Eds., vol. 4211 of Lecture Notes in Computer Science, Springer,
pp. 16–30. Proceedings EELC 2006.
30. G̈, M., B, T., C, M.,  T, M. Young children’s understanding of joint commitments. Developmental Psychology 45, 5 (2009), 1430–1443.
31. G, A. The Planning of Action as a Cognitive and Biological Phenomenon. PhD thesis,
1991. Lund University Cognitive Studies 2, Lund.
32. H, W. D. Innate social aptitudes of man, an approach from evolutionary genetics. In
Biosocial Anthropology, R. Fox, Ed. Malaby Press, London, 1975, pp. 133–157.
33. H, P. Why is reciprocity so rare in social animals? A protestant appeal. In The
Genetic and Cultural Origins of Cooperation, P. Hammerstein, Ed. MIT Press, Cambridge,
MA, 2003, pp. 83–93.
34. H, C. Fundamental clusters in spatial 2x2 games. Proceedings of the Royal Society of
London. Series B: Biological Sciences 268, 1468 (2001), 761–769.
35. H, C.,  S, H. G. Effects of increasing the number of players and memory size
in the iterated prisoner’s dilemma: a numerical approach. Proceedings of the Royal Society of
London. Series B: Biological Sciences 264, 1381 (1997), 513–519.
36. H, C., T, A., B, H., N, M. A.,  S, K. Via freedom to coercion: The emergence of costly punishment. Science 316, 5833 (2007), 1905–1907.
37. H, C., T, A., D S ́ B, H., N, M. A.,  S, K. Public
goods with punishment and abstaining in finite and infinite populations. Biological Theory 3,
2 (2008), 114–122.
38. H, M. D. The Evolution of Communication. MIT Press, Cambridge, MA, 1996.
39. H, C. K.,  H, H. Self-organized shape and frontal density of fish
schools. Ethology 114, 3 (2008), 245–254.
40. H, C. F. The origin of speech. Scientific American 203, 3 (1960), 88–96.
41. K, A. The evolution of patience. In Time and Decision: Economic and Psychological
Perspectives on Intertemporal Choice, G. Loewenstein, D. Read, and R. F. Baumeister, Eds.
Russell Sage Foundation Publications, New York, NY, 2003, pp. 115–138.
42. K, T., B, J.,  F, T. Evolution in group-structured populations can resolve
the tragedy of the commons. Proceedings of the Royal Society B: Biological Sciences 273,
1593 (2006), 1477–1481.
43. K, E., O, C.,  K, F. The architecture of cognitive control in the human
prefrontal cortex. Science 302, 5648 (2003), 1181–1185.
44. L, L.,  K, L. The evolution of cooperation and altruism: A general framework
and a classification of models. Journal of Evolutionary Biology 19, 5 (2006), 1365–1376.
45. L, O.,  H, P. Evolution of cooperation through indirect reciprocity. Proceedings of the Royal Society of London. Series B: Biological Sciences 268, 1468 (2001),
745–753.
46. L, D. Convention: A Philosophical Study. Harvard University Press, Cambridge, MA,
1969.
47. L, K. Evolutionary phenomena in simple dynamics. In Artificial Life II, C. G. Langton,
J. D. Farmer, S. Rasmussen, and C. Taylor, Eds. Addison-Wesley, Redwood City, CA, 1992,
pp. 295–312.
48. L, K. Evolutionary dynamics in game-theoretic models. In The Economy as an Evolving Complex System II, B. Arthur, S. Durlauf, and D. Lane, Eds. Addison-Wesley, Reading,
MA, 1997, pp. 337–367.
49. L, K.,  N, M. G. Evolutionary dynamics of spatial games. Physica D:
Nonlinear Phenomena 75, 1-3 (1994), 292–309.
50. L, H.,  W, M. Parallel bird flocking simulation, 1993. British Computer Society
Conference on Parallel Processing for Graphics and Scientific Visualization.
51. M S, J. Evolution and the Theory of Games. Cambridge University Press, Cambridge, 1982.
52. M, E.,  J, M. Communication: Content or relationship? Journal of Economic Behavior and Organization 65, 3-4 (2008), 409–419.
53. M, N. J.,  C, J. Apes Save Tools for Future Use. Science 312, 5776 (2006),
1038–1040.
54. N, M.,  I, Y. The coevolution of altruism and punishment: Role of the selfish
punisher. Journal of Theoretical Biology 240, 3 (2006), 475–488.
55. N, D. B. Optimality under noise: Higher memory strategies for the alternating prisoner’s
dilemma. Journal of Theoretical Biology 211, 2 (2001), 159–180.
56. N, M. A.,  M, R. M. Evolutionary games and spatial chaos. Nature 359, 6398
(1992), 826–829.
57. N, M. A.,  S, K. Evolution of indirect reciprocity. Nature 437, 7063 (2005),
1291–1298.
58. O, M. Spontaneous planning for future stone throwing by a male chimpanzee. Current
biology 19, 5 (2009), R190–R191.
59. O, M.,  G̈, P. Oldowan culture and the evolution of anticipatory cognition,
2005. Lund University Cognitive Studies 122, Lund.
60. O, M.,  O, H. Chimpanzee (pan troglodytes) and orangutan (pongo abelii) forethought: self-control and pre-experience in the face of future tool use. Animal Cognition 11
(2008), 661–674.
61. P, R. Social software. Synthese 132 (2002), 187–211.
62. P, S. D.,   W, F. B. M. Empathy: Its ultimate and proximate bases. Behavioral
and Brain Sciences 25, 1 (2003), 1–72.
63. R, J. E. Fish service stations. Sea Frontiers 8 (1962), 40–47.
64. R, C. W. Flocks, herds and schools: A distributed behavioral model. Computer Graphics 21, 4 (1987), 25–34. SIGGRAPH ’87: Proceedings of the 14th annual conference on
Computer graphics and interactive techniques.
65. R, P. J., B, R. T.,  H, J. Cultural evolution of human cooperation. In Genetic and cultural evolution of cooperation, P. Hammerstein, Ed., vol. 90 of Dahlem workshop
reports. MIT Press, Cambridge, MA, 2003, pp. 357–388.
66. R, W. A. Mental time travel: Animals anticipate the future. Current Biology 17, 11
(2007), R418–R420.
67. R, A. J. Efficiency in evolutionary games: Darwin, nash and the secret handshake. Journal of Theoretical Biology 144, 3 (1990), 379–396.
68. R, A. G., S, J. R., H, B.,  H, M. D. The evolutionary origins of human
patience: Temporal preferences in chimpanzees, bonobos, and human adults. Current Biology
17, 19 (2007), 1663–1668.
69. S, G. Grooming and agonistic support: a meta-analysis of primate reciprocal altruism.
Behavioral Ecology 18, 1 (2007), 115–120.
70. S, D., K, H.-J.,  M, M. Reputation is valuable within and outside
ones own social group. Behavioral Ecology and Sociobiology 57 (2005), 611–616.
71. S, K., H, C.,  N, M. A. Reward and punishment. Proceedings of the
National Academy of Sciences of the United States of America 98, 19 (2001), 10757–10762.
72. S, B. Signals, evolution and the explanatory power of transient information. Philosophy
of Science 69, 3 (2002), 407–428.
73. S, I., M, M.,   V, E.,  V, R. A multi-agent systems
approach to gossip and the evolution of language. In Proceedings of the 31st Annual Meeting
of the Cognitive Science Society (Austin, TX, 2009), N. A. Taatgen and H. van Rijn, Eds.,
Cognitive Science Society, pp. 1609–1614.
74. S, L. The Talking Heads Experiment. Volume 1. Words and Meanings. Laboratorium,
Antwerpen, 1999.
75. S, L. Social and cultural learning in the evolution of human communication. In Evolution
of Communication Systems, D. K. Oller and U. Griebel, Eds. MIT Press, Cambridge, MA,
2004, pp. 69–90.
76. S, D. W., ML, C. M.,  S, J. R. Discounting and reciprocity in an iterated
prisoner’s dilemma. Science 298, 5601 (2002), 2216–2218.
77. S, J. R.,  H, M. D. Why be nice? Psychological constraints on the evolution
of cooperation. Trends in Cognitive Sciences 8, 2 (2004), 60–65.
78. S, T.,  C, M. C. Mental time travel and the evolution of human mind.
Genetic, Social and General Psychology Monographs 123, 2 (1997), 133–167.
79. S, R. The Economics of Rights, Co-operation and Welfare. Blackwell, Oxford, 1986.
80. T, M. The Cultural Origins of Human Cognition. Harvard University Press, Cambridge, MA, 1999.
81. T, R. L. The evolution of reciprocal altruism. The Quarterly Review of Biology 46, 1
(1971), 35–57.
82. T, R. L. Reciprocal altruism: 30 years later. In Cooperation in Primates and Humans,
P. M. Kappeler and C. P. van Schaik, Eds. Springer, Berlin, 2006, pp. 67–83.
83.  B, J. F. A. K. Logic and reasoning: Do the facts matter? Studia Logica 88 (2008),
67–84.
84.  S, C. P.,  K, P. M. Cooperation in primates and humans: Closing the gap.
In Cooperation in Primates and Humans, P. M. Kappeler and C. P. van Schaik, Eds. Springer,
Berlin, 2006, pp. 85–105.
85. V, R. Logic and social cognition: The facts matter, and so do computational models.
Journal of Philosophical Logic 38 (2009), 649–680.
86. W, S. A., G, A., S, D. M., R, T., B-C, M., S, E. M.,
G, M. A.,  G, A. S. Cooperation and the scale of competition in humans.
Current Biology 16, 11 (2006), 1103–1106.
87. W, G. S. Reciprocal food sharing in the vampire bat. Nature 308, 5955 (1984),
181–184.
88. Z, A. Mate selection: A selection for a handicap. Journal of Theoretical Biology 53, 1
(1975), 205–214.
89. Z, J., P, T.,  G̈, P. Bodily mimesis as “the missing link” in human
cognitive evolution, 2005. Lund University Cognitive Studies 121, Lund.