Complex dynamics of an agent-based model of innovation

chapter 9 – preliminary draft, not for quotation
Marco Villani, Roberto Serra, David Lane
Department of Social, Cognitive and Quantitative Sciences
Modena and Reggio Emilia University
1 Introduction
The role of the model
In this chapter we will describe a dynamical model which is based on the Lane-Maxfield
theory of innovation processes, described elsewhere in this volume.
The first question to be addressed is why should we be interested in the behaviour of such a
model: is the theory itself, integrated by accurate case studies, not sufficient to understand the
phenomenon? What does dynamical modelling add to our understanding?
There are two basic answers to this question [Lane, 1993; Arthur, 2005]. The first is that
formal modelling requires the hypotheses to be stated in a very precise way, a requirement
which is not always met by theories of social processes. On the other hand, it often turns out
that the theory is more sophisticated and articulated than the aspects which can be captured
by a model; achieving precision may therefore require simplifications which strip the
theory of some of its most significant features.
The other answer is that simulation allows one to observe the system-level unfolding of
theoretical hypotheses which concern the behaviour of single entities, or the interactions
among just a few of them. This is indeed the most important aspect in our case: the theory of
Lane and Maxfield describes the behaviour of agents at different levels, which may be either
individual human beings or organizations or scaffolding structures, and it concentrates on
interactions among a few key players which give rise to so-called generative relationships. It
is assumed that new artefact functionalities, and/or new roles for the agents, are discovered
(or invented) in these relationships. But this is in principle an endless process, since other
agents may follow the innovators, and they may in turn come to develop new
“interpretations” which promote further changes, and so on.
The case studies can follow some of these cascades of changes in detail, but specific cases
may be influenced by specific events; so, in order to appreciate what can happen when many
players interact, a model may indeed be very useful.
Which kind of model
Although equation-based models are more powerful than is sometimes assumed in the
recent literature [Serra, 2006], we will not compare here their merits and limitations with those
of agent-based models, which have been chosen in this case for reasons discussed in detail in
[Lane, Serra, Villani & Ansaloni, 2005, 2006]. The term agent-based model usually
refers to systems where the behaviour of single agents is defined by rather complicated
algorithms, together with interaction rules among the agents. It is precisely the fact that the
Lane-Maxfield theory assumes that the actors possess sophisticated information-processing
and decision-making capabilities which requires the use of agent-based models.
Trendy as it is, this choice is not without risk. In particular, it is often all too easy to devise a
model which is based on reasonable assumptions, and which gives rise to behaviours that
bear some resemblance to observed phenomena. The question of the relevance of these models
is indeed very serious. In hard sciences, the possibility to perform accurate experiments or
measurements provides strong means to test the hypotheses, but nothing similar can be found
in social sciences. In crude terms, the risk is that of developing plausible videogames which
tell us almost nothing about what really happens.
Although agent-based models (from now on, ABMs) are seemingly friendly, they are highly
nonlinear and can therefore give rise to very different behaviours. While this is well known in
the case of nonlinear differential equations, it is no less true of ABMs. Moreover, there are
often many parameters, so the parameter space is very large. Therefore, if one finds a
behaviour which is somehow reminiscent of observed phenomena, but this happens only for a
limited set of parameter values, this result is not particularly meaningful – unless one can give
reasons why the parameters should take just those values.
We therefore apply agent-based models, because they are necessary here, but with caution and
care. In particular, we think that in order to avoid the high degree of arbitrariness typical of
these models, it is important to keep a tight relationship with a theory of the social processes,
like the one by Lane and Maxfield, which has been developed so far independently of the
model. This provides firmer grounds for the model than most similar ABMs.
Another important aspect is that the model can give rise to a dialogue with the theory, not
only by forcing the adoption of very precise statements, but also by pointing to new
phenomena and posing therefore new questions and challenges to the theorists and the
scientists in the field. We will come back to this dialogue between model and theory after
examining some examples of the actual behaviour of the model.
Chapter outline
The outline of the chapter is the following. The aspects of the Lane-Maxfield theory which
are most relevant for our study are recalled in section 2, together with the main motivations
and goals of this research. The model has been extensively described elsewhere [Lane, Serra,
Villani & Ansaloni, 2005, 2006] so it will be only briefly summarized in section 3. In sections
4 and the following we will describe its behaviour, focussing on some questions which appear
particularly interesting.
• goals (section 4): in the model agents define a goal (i.e. an artefact they want to
produce) and try to achieve it. This can be compared with a version of the same
model with random creation of artefacts, thus providing clues on the role of the
persistence of goals
• agents’ styles (section 5): agents may be endowed with different properties, which are
described by suitable model parameters. How is their performance affected by their
own (and the others’) propensity to innovate?
• privileged relationships (section 6): the theory places great emphasis on the role of
interactions among agents, so how do changes in interaction patterns affect the
model outcome? Do agents which follow a “generative relationship” approach like the
one advocated by the theory have an edge over the others?
Extensive sets of experiments have been performed on these topics, but reasons of
space and the need for readability prevent us from giving a full account here. So we will
focus on some major aspects of these studies. A more complete summary [Villani et al., 2006]
and some detailed papers concerning specific issues are currently in preparation.
In the final section we will present some comments and indications for further work.
2 Motivations and goals
As discussed in the previous section, in this work we consider an agent-based model of
innovation (called I2M model) which relies upon an existing theory of the phenomenon in
order to constrain our modelling options. The theory, which is qualitative in nature, provides
the basic entities of the model and makes statements about their relationships. The model represents a
simplified universe inhabited by (some of) the theoretical entities. Simplification is necessary
in order to deal with manageable systems: we do not look for an all-encompassing model, but
we rather imagine a family of models which capture different features of the theory.
The theory which lies at the basis of this model is the innovation theory of Lane and Maxfield
(briefly, LM theory), discussed in chapter 6, and in particular their notion of generative
relationships.
In the work of Lane and Maxfield, artefacts are given meanings by the agents that interact
with them, but these meanings cannot be understood without taking into account the roles
which different agents can play. Thus, artefacts may be given different meanings by different
agents, or by the same agents at different times.
Therefore, LM theory can be seen as a theory of the interpretation of innovation. According
to LM, a new interpretation of a potential artefact functionality can be put forth in the context
of so-called generative relationships. By interacting, a few agents come to invent and share
this interpretation, based on the discovery of different perspectives and uses of existing
artefacts. The generative potential of a relationship may be assessed in terms of the following
criteria:
• heterogeneity: the agents are different from each other, with different features and
different goals; however, the heterogeneity is not so intense as to prevent communication
and interaction
• aligned directedness: the agents are all interested in operating in the same region (or in
neighbouring regions) of agent-artefact space
• mutual directedness: the agents should be interested in interacting with each other.
Moreover, Lane and Maxfield discuss two further features that deal with organizational
issues, i.e. permissions and action opportunities. The former refers to the fact that the agents
are authorized to engage in such a relationship, the latter to the possibility of moving from
“talk” to action.
Lane and Maxfield further argue that, in a situation where innovations happen at a very fast
pace, predicting the future is impossible; so a better strategy would be to identify those
relationships that have the potential for generativeness, and to foster them in order to
effectively explore the new opportunities they can give rise to. It is therefore very important
to be able to estimate the “generative potential” of the existing and prospective relationships.
The LM theory of innovation is highly sophisticated in describing the interactions between
different players in innovation processes, and it cannot be entirely mapped onto a specific
computer-based model. Therefore, the modelling activity aims at developing models that are
based on abstraction of some key aspects of this theory, which is of a qualitative nature.
The basic requirements for the model derived from the theory can be summarized as follows:
1. the meanings of artefacts must be generated within the model itself: since the LM theory
claims that new meanings are generated through interactions among agents and artefacts,
it would be inappropriate here to resort to an external oracle to decide a priori which
meanings are better than others
2. the roles of agents must also be generated within the model: indeed the LM theory claims
that also new roles are generated through interactions among agents and artefacts
3. agents must interact with artefacts and with other agents: interacting with artefacts only
would prevent the possibility of describing agent-agent relationships
4. an agent should be able to choose the other agents with whom to start a relationship; in
general, an agent will be able to handle a finite number of relationships at a time, and it
will choose a subset of the other agents as its partners. Agents must be allowed to cut a
disappointing or unsatisfactory relationship to look for a better one
5. an agent should be able to have different degrees of interaction with another agent: in this
way it will be possible to tune the intensity of these relationships
6. some agent-agent relationships should be generative in character, i.e. they should be able
to lead to new attributions (for agents and/or artefacts) and should respect the criteria for
generative potential described above.
3 Brief model outline
The model has been described in detail elsewhere [Lane, Serra, Villani & Ansaloni, 2005,
2006] so we will limit ourselves here to sketching some of its features, in order to make the chapter
readable. Many choices will appear arbitrary in this brief account, and the interested reader is
referred to the long papers for all the details.
In our model agents can “produce” artefacts, which in turn can be used by other agents to
build their own artefacts, etc. An agent can produce several artefacts for different agents (and
it can sell one type of artefact to several different customers). While this model may be
reminiscent of a production network, it should be understood at a fairly abstract level.
Each agent has a set of recipes which allow it to build new artefacts from existing ones.
Agents can try to widen the set of their recipes by applying genetic operators either to their
own recipes or, by cooperating with another agent, to the joint set of their recipes. Moreover,
each agent has a store where the products of its production activity are put, and from where its
customers can take them (therefore, contrary to what happened in the first versions of the
model, quantities matter, i.e. types of artefacts are distinct from tokens).
The meaning of artefacts is just what agents do with them, while the role of agents is defined
by which artefacts they produce, with whom, and for whom. The role of agents is also partly
defined by the social network they are embedded into. In this network, the strong ties between
two agents are mediated by a chain of artefacts. There are also weak ties between two agents
(“acquaintances”) which refer to the fact that agent A knows something about agent B (e.g. its
products).
Successful provider/customer interactions increase the value of a numerical variable (the
“vote”) which ranks the value an agent assigns to its relationships with the other agents. The
global vote also takes into account the results of joint cooperation in new projects, if any
(agents can indeed cooperate to produce new artefacts; the successes and failures in these
endeavours affect the vote).
A key point is the structure of artefact space. What is required is that the space has an
algebraic structure, and that suitable constructors can be defined to build new artefacts by
combining existing ones. For reasons discussed elsewhere, we have adopted the number
representation and the use of mathematical operators. Therefore the agents are “producers” of
numbers by combination of other numbers. The recipes are therefore sequences of arithmetic
operators.
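As a concrete illustration, artefacts-as-numbers and recipes-as-operator-sequences can be sketched as follows (the function name and the fold-style application are our own illustrative assumptions, not the actual I2M implementation):

```python
# Sketch: artefacts are numbers, and a recipe is a sequence of binary
# arithmetic operators folded over its input artefacts.
# (Illustrative only: the names and the fold-style application are
# assumptions, not the actual I2M code.)
from operator import add, mul, sub

def apply_recipe(operators, inputs):
    """With k operators and k+1 inputs, produce a new artefact:
    ((inputs[0] op0 inputs[1]) op1 inputs[2]) ..."""
    result = inputs[0]
    for op, artefact in zip(operators, inputs[1:]):
        result = op(result, artefact)
    return result

# Combining raw materials 2 and 3, then multiplying by artefact 4:
print(apply_recipe([add, mul], [2, 3, 4]))  # (2 + 3) * 4 = 20
```

In this picture, inventing a new recipe amounts to choosing a new sequence of operators and a new tuple of input artefacts.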
The network initialization procedure starts from “raw materials” (which are assumed to be
always available). The first agent which is introduced is given some recipes (sequences of
operators) and it uses some raw materials as inputs to its recipes, therefore creating other
products. The second agent can take either the raw materials or the products of the first agent
as its inputs, and it produces further products. The third agent can take either raw materials or
products of the first two agents as inputs, and so on. This gives the first agents a privileged
position, as will be discussed below – but this may hold true in real market systems as well.
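The sequential initialization described above might be sketched like this (a simplified stand-in: the raw-material values, the recipe count, and the way a product is derived from its inputs are all hypothetical):

```python
import random

def initialize_network(n_agents, raw_materials=(2, 3), recipes_per_agent=2, seed=0):
    """Sequential initialization sketch: agent i may draw its inputs from
    the raw materials and from the products of agents 0..i-1, which is
    why early agents enjoy a positional advantage."""
    rng = random.Random(seed)
    available = list(raw_materials)   # artefacts usable by the next agent
    products = {}
    for agent_id in range(n_agents):
        products[agent_id] = []
        for _ in range(recipes_per_agent):
            a, b = rng.choice(available), rng.choice(available)
            new_product = a + b       # stand-in for applying a real recipe
            products[agent_id].append(new_product)
            available.append(new_product)  # later agents can build on it
    return products
```

Because `available` only grows, the last agents face a richer input space but also depend on products whose producers were introduced before them.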
As far as innovation is concerned, let us remark that an agent can invent new recipes or
eliminate old ones. In the present version of the model no new agents are generated, while
agents can die because of lack of inputs or of customers.
In the following sections we will describe some properties of the I2M model. As has already
been noted, there are many parameters, and correspondingly many different behaviours can be
observed. For example, there are parameter choices which lead to the eventual disappearance of all the agents.
In order to explore some properties of the system, we identified a set of “basic” parameters
which give reasonably repeatable behaviour in different runs, and we considered some
modifications of this basic set. The “basic set” of parameters is given in the Appendix.
Many variables can be and are actually computed, and the most interesting plots are those
where these variables are shown versus time. For reasons of clarity we will show below only
some of these variables, which appear to be the most relevant, focusing in particular on the
following:
• diversity, i.e. the number of different types (also called names) of artefacts; an interesting
plot exploits the fact that artefacts are represented by numbers: a picture of the way in
which artefact space evolves is obtained by plotting the names present at a given time
on the y-axis, versus time on the x-axis
• diameter, i.e. the difference between the largest and the smallest number
corresponding to existing artefacts
• number of living agents
• average number of recipes per agent
• average number of agents known by an agent (including suppliers and acquaintances)
Other variables will be introduced when needed.
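Given the number representation, the first two observables can be computed directly from the multiset of existing artefacts (a trivial sketch; the function names are ours):

```python
def diversity(artefacts):
    """Number of distinct artefact types (names)."""
    return len(set(artefacts))

def diameter(artefacts):
    """Difference between the largest and smallest existing artefact."""
    return max(artefacts) - min(artefacts)

existing = [2, 3, 5, 5, 20]   # two tokens of type 5
print(diversity(existing))    # 4 distinct names
print(diameter(existing))     # 20 - 2 = 18
```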
4 Goal-oriented systems
One of the most interesting issues analyzed in the present volume is the relationship between
biological and social evolution, which is discussed in depth in chapters 2 and 3. One of the
facets of this problem which is particularly intriguing is the comparison between the different
mechanisms which drive the introduction of novelties in the systems: random changes, the
rule in biology, versus goal-oriented, i.e. intentional, changes in human systems. But what
happens in the latter case in an unpredictable world, where neither the system’s own dynamics
nor the results of our intentional behaviour can be forecast? Is there any difference
between the two cases?
We will not consider this topic in its generality, but we will rather focus on how such
questions can be addressed in the framework of the I2M model. In this model, agents have a
“goal” in artefact space, i.e. a new artifact they aim at producing, either by themselves or in
cooperation with another agent. The goal (G) itself is determined by the following method:
first, one of the existing artefacts is chosen at random (heuristically, this is a way to sample
the set of existing artefacts), let us call it the intermediate goal (IG); then this is modified by a
“jump”. Recall that artefacts are numbers: the number corresponding to the IG is multiplied
by a random number belonging to a given range R (briefly: G = J(IG)). Resorting to
randomness simulates the definition of a goal in a system where it is hard to predict whether a
particular artefact will be successful or not, and where all the properties of an artefact “in use”
cannot be established a priori, solely from engineering design considerations.
An important parameter is the goal persistence, which measures how long an agent will stick
to its goal if it has been so far impossible to reach it: we can assume that each agent has the
probability pm of maintaining its own goal, ranging from 0 (changing the goal each time the
agent has to innovate) to 1 (always keeping the same goal until it is produced). Qualitatively,
we can regard pm as an (inverse) measure of the agent’s flexibility, i.e. its propensity to change
its objectives: pm is in fact a decreasing function of “flexibility”.
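The two-step goal construction (sample an intermediate goal IG, then jump: G = J(IG)) together with the persistence probability pm can be sketched as follows (function names and the default range are our own assumptions):

```python
import random

def choose_goal(existing_artefacts, r_min=0.5, r_max=1.5, rng=random):
    """Sample an intermediate goal IG among the existing artefacts,
    then jump: G = IG * r, with r drawn uniformly from [r_min, r_max]."""
    ig = rng.choice(existing_artefacts)   # intermediate goal IG
    return ig * rng.uniform(r_min, r_max) # G = J(IG)

def next_goal(current_goal, existing_artefacts, p_m, rng=random):
    """With probability p_m keep the current (still unreached) goal,
    otherwise draw a fresh one; p_m = 1 means always persist,
    p_m = 0 means a new goal at every innovation attempt."""
    if current_goal is not None and rng.random() < p_m:
        return current_goal
    return choose_goal(existing_artefacts, rng=rng)
```

Sampling IG among existing artefacts biases goals toward densely populated zones of artefact space, which matters for the exploitation/exploration trade-off discussed in section 5.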
Figure 4-1 Diversity of artifacts present in the system (average, median, minimum and maximum out of
10 runs) as a function of the probability pm that an agent maintains its goal. If pm is equal to 1, the agents
try to realise the same goal until they succeed
Figure 4-2 Average number of recipes that the agents are able to keep active during 2000 steps of
simulation, versus pm
The effects of the different attitudes are evident from Fig. 4-1, which shows the
diversity of artifacts in the system, and Fig. 4-2, which shows the average number of recipes
that the agents are able to keep active during 2000 steps of simulation.
If the probability of changing objective is high enough, the system is able to maintain
sustained growth of diversity (Fig. 4-3); otherwise, diversity levels off and remains always
lower than in the previous case. A threshold seems to appear around pm ≈ 0.5.
Figure 4-3 artifact diversity for a system with high pm (a) and low pm (b) versus time. Note the change of
scale on the graphs
This effect is likely due to the fact that persisting in the attempt to create an artefact
which is hard or impossible to achieve with the available operators forces some agents to
continue in unsuccessful attempts and, more importantly, prevents them from
introducing other innovations.
In the following, we will therefore consider agents which are flexible enough to make the
diversity of their artefacts grow. Coming back to the question posed at the beginning of this
section, we want to compare them with a situation without goals, where novelties are
introduced at random.
This can be done by allowing the agents to produce new recipes without the constraint of
realising a goal. That is, to build a recipe an agent simply chooses at random the inputs
and the sequence of operators, among those which are available to it: the result is always
accepted and produced. This change has the effect of increasing the number of successes
(there are no failures), but on the other hand many of the new artifacts may be useless
to the system, so that many innovations may be lost.
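The no-goal (NG) variant can be sketched by drawing inputs and operators at random and accepting whatever comes out (names and the operator set are illustrative assumptions):

```python
import random
from operator import add, sub, mul

def random_recipe_product(available_artefacts, n_inputs=2, rng=random):
    """NG variant sketch: inputs and operators are chosen at random and
    the resulting artefact is always accepted and produced, whether or
    not any other agent will ever use it."""
    inputs = [rng.choice(available_artefacts) for _ in range(n_inputs)]
    operators = [rng.choice([add, sub, mul]) for _ in range(n_inputs - 1)]
    result = inputs[0]
    for op, x in zip(operators, inputs[1:]):
        result = op(result, x)
    return result
```

The contrast with the goal-oriented case is that no jump target is constructed and no attempt can fail; selection of useful artefacts is left entirely to the downstream customers.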
In Figure 4-4 one finds a comparison between the diameter of artefact space in the two cases:
a system with low-persistence goals (LPG), and a system with no goals (NG). Note that the
former quickly reaches much larger values; although slower, the NG system shows a
continuous growth of its diameter.
Figure 4-4 The diameter of artifact space in an LPG (left) and an NG (right) system
It is also interesting to consider the structure of artifact space (see Fig. 4-5): note that in the
NG case the names of artifacts are very dense, and fill nearly the entire known area.
Figure 4-5 The names present during a simulation for an LPG (left) and NG (right) system
A possible consequence of the presence of empty zones within the artifact space of
goal-oriented systems could be a greater difficulty in building new recipes: this effect could be
responsible for the limited number of recipes of these systems. This means that the structure
of the artifact space may influence the dynamical possibilities of the agents.
In order to better understand the features of the two cases, in Fig. 4-6 the mean number of
recipes per agent is shown. It is also worth recalling that in the NG system each agent quickly
comes to know all the others, while in the LPG system this is not the case.
Figure 4-6 The average number of recipes owned by the agents in the LPG (left) and NG (right) case
The difference is large: in the NG case the agents quickly reach a complete knowledge of the
other agents, and the average number of recipes per agent doesn’t saturate – the opposite
happens in the LPG system.
The effect is even more impressive if we recall a puzzling feature of goal-oriented
systems: their constant tendency toward a limitation of the number of recipes owned
by each agent. We never observed agents able to increase their own recipes indefinitely, even
when imposing complete knowledge of the world. On the contrary, systems where the agents
are not goal-oriented are able to increase the number of recipes owned by each agent
indefinitely, thereby allowing (and forcing) complete knowledge of the world.
A possible way to summarize the results of the above analysis is that the goal-oriented system
appears able to explore new regions of artefact space much more quickly than the one without
goals, but that it reaches some plateaus while the latter seems to be able to sustain a
continuous albeit slower growth. The no-goal system appears in a sense slower but more
robust.
In order to further verify this hypothesis, we considered the effect of modifications in the
number and nature of the “raw materials” which are always present in the system, and are
supposed to be continually available. In the cases discussed above, only two such materials
were available, and they were defined by two successive integer numbers. We considered the
effects of
• increasing the number of raw materials only
• separating the names of the raw materials only (thereby creating empty spaces ab initio)
• performing both changes
The effects of increasing the number of raw materials are limited, but those of separating
them are strong. Indeed, the goal-oriented system dies out in this case (i.e. all the agents are
eliminated), while the no-goal system survives; however, it no longer shows a steady increase
in the number of recipes per agent, which instead tends to a plateau, similarly to the LPG
system without separation.
Therefore, the hypothesis that goal-oriented systems are less robust than no-goal systems
finds further support in these studies. This sheds light on the issue of goal-oriented versus
undirected mechanisms of change, and provides clues for the theory: this aspect will be
discussed in the final section.
Note also that the above results clearly indicate that the introduction of empty spaces inside
the artifact space makes it harder for the agents to manage their environment. This could
explain the limited number of recipes owned by the agents living in goal-oriented systems.
Starting from sparsely filled artifact spaces, the agents are never able to fill the space they
are exploring completely; they maintain this initial characteristic, making it more and
more difficult to build new artifacts. In the end, the spatial structure which appears in
artifact space is able to sustain a certain number of recipes, but not a continuous growth
in their number.
Therefore it seems that:
• flexible agents are able to adapt to the changing environment, allowing good
performance and a better exploitation of the system, whereas too rigid agents are not
able to take advantage of the new opportunities the system presents
• goal-oriented systems are able to rapidly explore artifact space, continuously
increasing the diameter of the known artifact space
• the spatial structure of the artifact space deeply influences system performance
5 Agent heterogeneity
Agents play different roles in I2M because they develop different recipes and different social
networks, so their role is defined and modified during the system evolution. On the other
hand, there are some parameters which are common to all the agents, e.g. the propensity to
change the goal discussed in the previous section.
One of the main reasons why agent-based models are considered important in economics is
the possibility to handle heterogeneous agents, overcoming the limitations of the approach
based solely on the description of a “representative agent”. It is therefore interesting to
consider a further source of heterogeneity in our model, associated with the fact that some
parameters may take different values. We consider first the propensity to innovate of a given
agent (not to be confused with the goal persistence discussed above!). Agents in I2M are
“natural-born innovators” that try to introduce new artefacts at a certain pace, ruled by a
specific parameter. More precisely, the attempt to innovate is decided on a stochastic basis,
and this parameter rules the average rate.
Two kinds of experiments can be, and have been, devised: i) a comparison between systems
which are homogeneous (i.e. all the agents have the same value for the relevant parameters)
but these parameters take different values in different worlds and ii) a heterogeneous system
where agents with different values for the relevant parameters coexist and interact.
Recall that in order to create a new artifact an agent executes the following steps:
• with a given probability, it decides to innovate
• it creates a goal, that is:
  – it selects the name of a known artifact
  – with a given probability, it changes the selected name by multiplying it by
a number belonging to a given range
  – the result is the agent’s goal
• it tries to create a recipe that realises the goal (by recombining the recipes it already owns)
There are three main ways to characterise the agents’ propensity to innovate, by tuning:
• the innovation probability pi of each agent (i.e. the probability that it attempts to create a
new artefact)
• the jump probability pj that an agent changes its goal, by multiplying the artefact selected
at the previous step by a random number r (otherwise there is simply imitation, i.e. the
new goal is identical to the chosen artefact)
• the range [rmin, rmax] of the jump
The innovation probability, the jump probability and the range define the agent’s “style” of
innovation.
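Putting the three style parameters together, a single innovation attempt might look like this (a sketch with hypothetical names; the final recipe-search step is omitted):

```python
import random

def innovation_attempt(known_artefacts, p_i, p_j, r_min, r_max, rng=random):
    """One stochastic innovation step: with probability p_i the agent
    innovates; it imitates a known artefact, and with probability p_j
    jumps away from it by a factor drawn from [r_min, r_max].
    Returns the goal, or None if no attempt is made this step."""
    if rng.random() >= p_i:
        return None                        # no innovation this step
    goal = rng.choice(known_artefacts)     # imitation: sample a known name
    if rng.random() < p_j:                 # jump away from the imitated name
        goal *= rng.uniform(r_min, r_max)
    return goal  # the agent then tries to build a recipe realising it
```

Setting p_j = 0 reduces the agent to a pure imitator, while a large range [r_min, r_max] pushes it toward long-distance exploration of artefact space.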
Innovation probability
Let us first consider the comparison between homogeneous systems. As can be expected, a
higher innovation probability leads to more robust and more successful systems. First of all, it
can be observed that in systems with many agents (100) a certain fraction dies when the
innovation probability is low, and this fraction vanishes as the probability increases.
More importantly, the agents’ faster innovation dynamics has several consequences,
including a wider exploration of artifact space and a greater artifact diversity (see Figure 5-1).
Figure 5-1 Diameter of artifact space (a) and artifact diversity (b) for systems with 40 agents and
different innovation probabilities (blue line for pi=0.1, red line for pi=0.5, black line for pi=0.9). All the
quantities shown are normalized by dividing by the number of living agents, in order to compare steps
where the systems do not have the same size (because of the death of some agents)
Systems where agents with different innovation probability levels coexist are also interesting:
here one observes that:
• the agents with the lowest innovation probabilities have low survival probabilities
• the agents with the lowest innovation probabilities are the first to die
• the number of successful projects made by each agent is an increasing function of the
innovation probability
The only remarkable exceptions to these rules are the agents that are “central” in
network-analysis terms: there is an important “positional advantage”, related to the network
initialization mechanism, that allows these agents to survive despite low innovation
probabilities. Agents that are well known by a large part of the other agents, or that produce
artifacts necessary to many others, are not compelled to continuously generate new artifacts.
Figure 5-2 Initial and final distribution of the innovation probabilities (a) and number of successful
projects as a function of the innovation probability (b) for a 40-agent heterogeneous system. Note the
particular position of Agent1, which thanks to its central position within the initial network is able to
survive despite its low innovation propensity
Jump probability
Note first that this quantity does not coincide with the goal persistence defined in section 4.
The most evident consequence of an increasing jump probability is that the robustness of the
system decreases. In homogeneous systems with many (100) agents, when the jump
probability is high one indeed observes the disappearance of a sizeable fraction of the
agents.
Note that, in order to create a goal, the agents start by imitating existing artifacts: this
action allows the agent to sample the artifact space, thereby (probabilistically) discovering
zones with high artifact density (the higher the density, the higher the probability of finding them).
High densities often indicate profitable zones, where artifacts can be used by other agents and
therefore sold. If jumps are rare, exploitation of the information encoded in the density of the
artifact space dominates over exploration of new regions.
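The imitation-versus-jump trade-off can be illustrated with a minimal sketch, assuming a one-dimensional artifact space and multiplicative jumps; the function names, densities and numbers are our own illustrative assumptions, not the model's actual implementation.

```python
import random

def sample_by_imitation(artifacts):
    """Imitating a randomly chosen existing artifact samples the artifact
    space in proportion to local density: dense zones are hit more often."""
    return random.choice(artifacts)

def jump(position, jump_range):
    """A jump explores a new region, displacing the agent's target by a
    random multiplicative factor drawn from the jump range."""
    lo, hi = jump_range
    return position * random.uniform(lo, hi)

# Hypothetical 1-D artifact space: a dense (profitable) cluster around
# 10.0 plus a few scattered artifacts far away.
artifacts = [10.0 + random.gauss(0, 0.5) for _ in range(90)] + \
            [random.uniform(0, 100) for _ in range(10)]

# Imitation lands in the dense zone most of the time...
hits = sum(1 for _ in range(1000)
           if abs(sample_by_imitation(artifacts) - 10.0) < 2.0)
print(hits)  # roughly 900 out of 1000

# ...while a rare jump perturbs the sampled position: a narrow range
# keeps the agent near the known zone, a wide range sends it far away.
print(jump(10.0, (0.5, 1.5)))   # stays in [5, 15]
print(jump(10.0, (0.1, 20.0)))  # anywhere in [1, 200]
```

The sketch only makes the density argument concrete: frequent imitation concentrates search where artifacts already are, while the jump range controls how far exploration can stray from those zones.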
This description is confirmed by the observation that, typically, homogeneous systems with
high jump rates have larger diameters, very high artifact diversity (which however decreases in
time) and a high number of successful innovations, but a low number of recipes per agent.
As a further check we can observe what happens in non-homogeneous systems (see Fig. 5-3),
where the number of successful projects is a decreasing function of the jump probability.
Figure 5-3 Initial and final distribution of the mutation probabilities (a) and number of successful projects
as a function of the jump probability (b) for a 40-agent system.
Jump range
By increasing the jump range, an effect somewhat similar to that of increasing the jump rate is
observed. For example, the fraction of agents which die in a homogeneous system within 1000
time steps rises from 30% to 90% when there are 100 agents (when there are fewer agents the
effect is less evident, see the following table).
Number of dead agents     Mutation range     Mutation range
after 1000 steps          [0.5, 1.5]         [0.1, 20.0]
40 initial agents         18                 21
100 initial agents        30                 91
The explanation is likely similar to that of the previous paragraph: exploitation is
more effective when the jump range is low, while with a high jump range exploration
definitely prevails. Indeed, it can be observed that both the average number of recipes per agent
and the number of agents known by an agent are lower in the high-range case.
Figure 5-4 Average number of recipes owned by each agent for a system with (a) 40 agents and (b) 100
agents. Fraction of the world known by each agent for a system with a mutation range of [0.1,20.0] and (c) 40
agents or (d) 100 agents. Sold artifacts (the blue line corresponds to the range [0.5,1.5], the red line to the
range [0.1,20.0]) for a system with (e) 40 agents and (f) 100 agents. Recall that with the high
mutation range only 9 agents are alive at step 1000.
We can examine the considerations just presented in more detail by means of Fig. 5-4,
which moreover reveals the simultaneous presence of a significant size effect. The
left column of Fig. 5-4 shows the variables corresponding to a 100-agent system: Fig. 5-4a
shows the comparison between the average number of recipes owned by each agent for low
jump range (blue line) and high jump range (green line), whereas Fig. 5-4f shows the
corresponding comparison between the numbers of sold artifacts. Clearly, the system with low
jump range outperforms the system with high jump range. In fact, the low level of world
knowledge in the high-jump-range configuration (Fig. 5-4d) penalizes it,
making the construction of useful recipes very hard to achieve.
In this case, the behaviour of the heterogeneous systems appears particularly interesting. Let us
consider the case where the system is inhabited by an equal proportion of agents with low
([0.5,1.5]) and high ([0.1,20.0]) jump ranges.
The first remarkable observation is that no agent leaves the system (regardless of whether the
system is composed of 40 or 100 agents). That is, while homogeneous systems are not able
to sustain all the initial agents, a heterogeneous system composed of agents with low and high
jump ranges is able to do so. This is different from the previous two cases (innovation and jump
probabilities), where different levels of innovation or jump probability imply different
survival probabilities. In the present case, it seems that the two groups are able to collaborate,
with the final effect of enhancing the performance of the whole system: a form of
cooperation emerges.
Additionally, if we monitor the two groups, we can observe that the agents with high jump
range (previously the weakest group) now have (for most of the time) the highest
knowledge of other agents, the highest number of active recipes, the highest total number of
successful projects and the highest level of sales.
A tentative explanation is based on the artifact space structures that arise from the combined
action of the two groups. The agents with high jump range are able to explore large regions
of agent-artifact space, but the relationships that arise from this wide search link very distant
elements and are fragile. On the contrary, the agents with low jump range are able to exploit
the local possibilities more exhaustively, without however being able to rapidly extend
their area of influence. Therefore, it is possible that heterogeneous systems combine the
exploration capabilities of “long jumpers” with the stabilization effects due to agents with low
jump range.
This is a finding which may have interesting implications for the theory, a point which will be
highlighted in the final section.
6 Relationships
6.1 Introduction
A key issue of the set of theories upon which I2M is built is the agents’ ability to create
(stable) relationships that should allow the survival of the agents themselves. In conditions of
ontological uncertainty, the theory states that successful relationships should be based upon
the generative potential of the prospective partners. That is, an agent should not create
preferential relations with arbitrary agents, but only with the subset of agents that at the
moment show the highest potential for generativity.
In our model the agents can maintain two kinds of relationships:

- an agent can “know” another agent (i.e. the first agent knows the existence and the outputs
  of the second one) – this knowledge, if not subsequently reinforced by means of an artifact
  exchange, lasts only one step;
- an agent can have artifact exchanges with another agent (being one of its providers or one
  of its customers).

The second kind of relation is carried by the agent’s recipes, whose presence
guarantees the temporal stability of the relationship. Therefore, exchange of artifacts involves
stronger relations than those due to mere “acquaintance”.
We considered different situations, where the choice of the partners is either random or based
upon the agent’s past history.
The latter alternative is implemented as a vote, which each agent gives to each other agent it
knows; the vote is a function of the history of the relationships between the two. In general, it
is the sum of two terms, one related to the history of buy-and-sell relationships, the other
dependent upon the history of attempts to develop a joint project together (i.e. to share
inputs and recipes in order to reach a common goal).
Let us consider the model results when the vote is influenced only by the projects the two
agents made in the past. In this case, the vote dynamics is very simple:

V(t+1) = V(t) + α(t) − γV(t)

where α(t) = 2 if there is a successful joint project at time t, and α(t) = 0 otherwise.
Unknown agents are given V(t) = 0; occasional acquaintances (agents just known by chance)
are given an initial V(t) = δ, with δ < 1. The third term of the equation is a forgetting term,
which lowers the vote of those agents which no longer engage in partnerships. When an agent
has to choose a partner with whom to collaborate in order to realise its goal, the choice is
probabilistic, based on the votes.
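The vote update and the vote-weighted partner choice can be sketched as follows; the constant names and numerical values (reward, forgetting rate, initial acquaintance vote) are illustrative assumptions, since the chapter does not specify the model's actual parameters.

```python
import random

# Assumed illustrative parameters: ALPHA rewards a successful joint
# project, GAMMA is the forgetting rate, DELTA (< 1) is the initial
# vote given to a chance acquaintance.
ALPHA, GAMMA, DELTA = 2.0, 0.05, 0.5

def update_vote(v, successful_project):
    """One step of the vote dynamics: V(t+1) = V(t) + alpha(t) - gamma*V(t)."""
    reward = ALPHA if successful_project else 0.0
    return v + reward - GAMMA * v

def choose_partner(votes):
    """Probabilistic partner choice, weighted by the current votes."""
    agents = [a for a, v in votes.items() if v > 0]
    weights = [votes[a] for a in agents]
    return random.choices(agents, weights=weights, k=1)[0]

votes = {"agent7": DELTA, "agent11": DELTA}
# agent11 keeps collaborating successfully; agent7 is slowly forgotten.
for _ in range(20):
    votes["agent11"] = update_vote(votes["agent11"], True)
    votes["agent7"] = update_vote(votes["agent7"], False)
print(votes["agent11"] > votes["agent7"])  # True
print(choose_partner(votes))               # almost always "agent11"
```

Note the fixed point of the dynamics: under repeated success the vote saturates at ALPHA/GAMMA, while without projects it decays geometrically toward zero, which is exactly the stabilising/forgetting behaviour described above.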
Figure 6-1 (a) The vote given by agent11 to the other agents during a 2000-step run and (b) the corresponding
table of presence/absence of stable collaborations (the blue lines correspond to the numbers which identify
the collaborating agents)
The effect of this vote attribution mechanism is shown in Figure 6-1, where we can see the
vote given by an agent to the other agents and the corresponding table of presence/absence of stable
collaborations (a stable collaboration being a relationship whose vote stays above a given threshold for
hundreds of steps). Note that a stable collaboration can arise, be interrupted and later start
again.
The vote and partnership mechanisms are able to establish reciprocal trust (and reciprocally
high votes): Table 6-1 shows, for the first ten agents of the same run as Figure 6-1, the
reciprocity degree, defined as the fraction of relationships where both agents give each
other votes higher than the threshold.
Table 6-1 The reciprocity degree of the first ten agents of the same run as Figure 6-1
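A minimal sketch of the reciprocity-degree computation, assuming (our assumption, not the model's stated data structure) that votes are stored as a nested dictionary votes[i][j] holding the vote agent i gives to agent j, with a hypothetical threshold THETA:

```python
THETA = 1.0  # hypothetical threshold for counting a relationship

def reciprocity_degree(i, votes, theta=THETA):
    """Fraction of agent i's above-threshold outgoing relationships
    that are reciprocated with an above-threshold vote."""
    outgoing = [j for j, v in votes[i].items() if v > theta]
    if not outgoing:
        return 0.0
    mutual = sum(1 for j in outgoing if votes[j].get(i, 0.0) > theta)
    return mutual / len(outgoing)

# Toy example: "a" trusts "b" and "c", but only "b" trusts "a" back.
votes = {
    "a": {"b": 2.0, "c": 1.5},
    "b": {"a": 3.0},
    "c": {"a": 0.2},
}
print(reciprocity_degree("a", votes))  # 0.5
```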
We can now compare this situation with the results obtained by a random choice of the
partner. The system performances are different in terms of number of successful projects (i.e.
projects that reached the predefined goal), diameter of the system and artifacts’ diversity (see
Fig. 6-2 c and d).
Figure 6-2 Upper row: histogram of successful projects of a system with (a) and without (b) the vote
attribution rule, a successful project being a project that reached the predefined goal. Lower row:
comparison of (c) system diameter and (d) artifacts’ diversity between systems with vote (blue line)
and without vote (red line).
It is worth observing that, in accordance with the statements of the Lane-Maxfield theory, the
systems where relationships matter outperform those based on random pairing. Note that the
system with the vote mechanism outperforms the system without vote only after a transient of
hundreds of steps: this is a consistent feature, and could indicate that agents need some time to
establish a well-working vote distribution.
We also considered situations where the choice of the partner was based not on the history of
past relationships, but on a measure of how different the “profiles” of the agents are. This
heterogeneity is computed in such a way that two specialists (agents whose products fall in a
narrow interval) are considered similar, as are two generalists, while a generalist and a
specialist are highly heterogeneous (without entering into the details of the calculation, let it
suffice to mention that the measure is based on the standard deviation of the agent’s
products). The preference is for pairing with agents which differ from the one which is
making the choice.
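The heterogeneity criterion can be sketched as follows, under our own assumptions: each agent's profile is summarized by the standard deviation of its product positions in artifact space, and heterogeneity is taken as the absolute difference of those spreads (the chapter deliberately omits the exact formula, so this is only one plausible reading).

```python
import statistics

def profile_spread(products):
    """Spread of an agent's products: low for a specialist, high for a
    generalist (assumed profile measure)."""
    return statistics.pstdev(products)

def heterogeneity(products_a, products_b):
    """Two specialists (or two generalists) come out similar; a
    specialist and a generalist come out highly heterogeneous."""
    return abs(profile_spread(products_a) - profile_spread(products_b))

specialist_1 = [10.0, 10.2, 9.9, 10.1]    # products in a narrow interval
specialist_2 = [55.0, 55.3, 54.8, 55.1]   # also narrow, but elsewhere
generalist = [1.0, 40.0, 75.0, 120.0]     # products spread over the space

# Two specialists are similar even though their products differ,
# because only the spread of the profile matters here.
print(heterogeneity(specialist_1, specialist_2))  # small
print(heterogeneity(specialist_1, generalist))    # large
```

This also makes the stated property concrete: the measure ignores *where* the products lie and compares only how concentrated each agent's output is.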
These systems show a higher number of short-lasting collaborations than the previous one;
however, the performances turn out to be close (see Fig. 6-3).
Another criterion for pairing which has been tested is the closeness of the goals of the two
agents. In this case too the system performances are similar to those of the voting
mechanism. Finally, even a combination of the three criteria (vote, heterogeneity, goal
alignment), which in a sense mimics the concept of generative potential, provides similar
performances (Fig. 6-3). These results favour the hypothesis that the increase in performance
is linked to the stability of some relationships, while the criterion adopted to find the preferred
partner does not seem to differentiate among the performances of the different cases.
Figure 6-3 Comparison among systems where the choice of the partners is based on vote only,
heterogeneity only, goal distance only, and a combination of the three: (a) artifacts’ diameter, (b) artifacts’
diversity
7. Conclusions
to be completed
Note that two further sections should also be added: one on the robustness properties of the
model, another on the structures which are observed in the agent-agent, artifact-artifact and
agent-artifact networks.
References
to be completed
Appendix
to be completed