
851-0585-04L – Modeling and Simulating
Social Systems with MATLAB
Lecture 5 – Introduction to agent-based modeling
Giovanni Luca Ciampaglia, Stefano Balietti and Karsten Donnay
Chair of Sociology, in particular of
Modeling and Simulation
© ETH Zürich | 2010-10-25
Projects (last call)
 If you haven’t done it yet, send us an abstract of
your project by Wed 27th Oct.
 Don't re-invent the wheel but try to build upon
existing models.
 Start simple and extend functionality.
 Look at project reports from previous semesters.
Lecture 5 – Contents
1. Swarm intelligence
2. Human cooperation and coordination
3. How to develop a multi-agent simulator
Swarm Intelligence (1989)
 Ant colonies, bird flocking, animal herding, bacterial
growth, fish schooling…
Swarm Intelligence
Key Concepts:
 Decentralized control
 Interaction and learning
 Self-organization
Boids (Reynolds, 1986)
 A boid is a simulated individual, a particle or object moving about in a virtual space.
More information: http://www.red3d.com/cwr/boids/
 Separation: steer to avoid crowding local flockmates
 Alignment: steer towards the average heading of local flockmates
 Cohesion: steer to move toward the average position of local flockmates
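As an illustration, here is a minimal MATLAB sketch of one update step combining the three rules. All names and weights are assumptions, not part of the slides: P holds the n-by-2 positions, V the n-by-2 velocities, r the neighborhood radius.
function [P, V] = boidsStep(P, V, r, dt)
% One update step of a simple boids model (illustrative sketch).
n = size(P, 1);
Vnew = V;
for i = 1:n
    % distances from boid i to all boids
    d = sqrt(sum(bsxfun(@minus, P, P(i,:)).^2, 2));
    nb = find(d > 0 & d < r);                        % local flockmates
    if ~isempty(nb)
        coh = mean(P(nb,:), 1) - P(i,:);             % cohesion: toward average position
        ali = mean(V(nb,:), 1) - V(i,:);             % alignment: toward average heading
        sep = sum(bsxfun(@minus, P(i,:), P(nb,:)), 1);  % separation: away from neighbors
        % hypothetical weights; tune them for your flock
        Vnew(i,:) = V(i,:) + 0.01*coh + 0.1*ali + 0.05*sep;
    end
end
V = Vnew;
P = P + V*dt;    % move all boids
end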
Human Cooperation
 Prisoner’s dilemma. The payoffs below read naturally as years in prison (row player’s sentence first; lower is better):

                 Cooperate    Defect
    Cooperate     15, 15      20, 0
    Defect         0, 20      19, 19

 Defection dominates (0 < 15 and 19 < 20), yet mutual defection (19, 19) is worse for both than mutual cooperation (15, 15).
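The dominance argument can be checked in MATLAB; this sketch just re-encodes the table above (the matrix name is an assumption):
% Row player's years in prison; row = own move, column = opponent's move
% (1 = cooperate, 2 = defect). Lower is better.
Y = [15 20;
      0 19];
Y(2,1) < Y(1,1)   % true: defect is better if the opponent cooperates (0 < 15)
Y(2,2) < Y(1,2)   % true: defect is better if the opponent defects (19 < 20)
% ...yet both defecting (19 years each) beats neither player's
% outcome under mutual cooperation (15 years each).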
The Evolution of Cooperation (Axelrod, 1984)
 Tit for tat
    Be nice (cooperate first)
    Pay back immediately
    Easy to recognize
 Shadow of the future (discount parameter)
    If the probability of meeting again is large enough, it is better to be nice…
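A minimal MATLAB sketch of the tit-for-tat rule (the function and variable names are assumptions):
function move = titForTat(opponentHistory)
% opponentHistory: vector of the opponent's past moves, 1 = cooperate, 0 = defect.
if isempty(opponentHistory)
    move = 1;                      % be nice: cooperate in the first round
else
    move = opponentHistory(end);   % pay back immediately: copy the last move
end
end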
Coordination Games
 Payoff matrix (row player’s payoff first); the strategies are unlabeled on the slide, so call them A and B. Players gain only by choosing the same action:

              Action A   Action B
    Action A    1, 1       0, 0
    Action B    0, 0       1, 1
Evolutionary Learning
 The main assumption underlying evolutionary
thinking is that the entities which are more
successful at a particular time will have the best
chance of being present in the future.
 Selection
 Replication
 Mutation
Evolutionary Learning
 Selection is a discriminating force that favors some specific entities over others.
 Replication ensures that entities (or their properties) are preserved, replicated or inherited from one generation to the next.
 Selection and replication work closely together and in general tend to reduce diversity.
 The generation of new diversity is the job of the mutation mechanism.
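As an illustration, a minimal MATLAB sketch of one generation combining the three mechanisms; the binary strategy encoding, the payoffs and the mutation rate are all assumptions:
% population of n binary strategies (true/false)
n = 100;
pop = rand(1, n) < 0.5;
mu = 0.01;                       % mutation probability

% hypothetical fitness: strategy "true" earns 3, "false" earns 1
fitness = 2*pop + 1;

% selection + replication: offspring drawn proportionally to fitness
cdf = cumsum(fitness) / sum(fitness);
newPop = false(1, n);
for k = 1:n
    parent = find(rand() <= cdf, 1);   % roulette-wheel selection
    newPop(k) = pop(parent);           % replicate the parent's strategy
end

% mutation: flip each strategy with small probability mu
flip = rand(1, n) < mu;
newPop(flip) = ~newPop(flip);
pop = newPop;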
How agents can learn
 Imitation: people copy the behavior of others, especially behavior that is popular or appears to yield high payoffs.
 Reinforcement: people tend to adopt actions that yielded a high payoff in the past, and to avoid actions that yielded a low payoff.
 Best reply: people adopt actions that optimize their expected payoff given what they expect others to do. Subjects who choose best replies to the empirical frequency distribution of their opponents’ previous actions play “Fictitious Play”. Agents may also update their beliefs about others’ behavior.
More on Evolutionary Game Theory later in the course.
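For instance, the imitation rule could look like this in MATLAB (a sketch; all names are assumptions):
function action = imitateStep(i, action, payoff, neighbors)
% Agent i copies the action of its most successful neighbor,
% if that neighbor earned more than agent i itself.
[bestPay, k] = max(payoff(neighbors));
if bestPay > payoff(i)
    action(i) = action(neighbors(k));
end
end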
From Factors to Actors (Macy, 2002)
 Simple and predictable local interactions can generate familiar but enigmatic global patterns, such as the diffusion of information, emergence of norms, coordination of conventions, or participation in collective action.
 Agents are autonomous
 Agents are interdependent
 Agents follow simple rules
 Agents are adaptive and backward-looking
Programming a simulator
 Agent-based simulations
    The models simulate the simultaneous operations of multiple agents, in an attempt to re-create and predict complex phenomena.
    Agents can be pedestrians, animals, customers, internet users, etc…
Programming a simulator
 Agent-based simulations
[Diagram: a time line running from T1 to T2 in steps of dt. At time t each agent is characterized by a state (location, dead/alive, color, level of excitation); at time t + dt it holds a new state, updated according to a set of behavioral rules, possibly including the state of its neighborhood.]
Programming a simulator
 Programming an agent-based simulator often goes in five steps:
    1. Initialization: initial state; parameters; environment
    2. Time loop: processing each time step
    3. Agents loop: processing each agent
    4. Update state: updating agent i at time t
    5. Save data: for further analysis
 Schematically:
    Initialization
    Time loop
        Agents loop
            Update state
        end
    end
    Save data
Programming a simulator
 Step 1: Defining the initial state & parameters

% Initial state
t0 = 0;     % beginning of the time line
dt = 1;     % time step
T = 100;    % number of time steps

% Initial state of the agents
State1 = [0 0 0 0];
State2 = [1 1 1 1];
etc…
Programming a simulator
 Step 1: Defining the initial state & parameters

% Initial state
t0 = 0;     % beginning of the time line
dt = 1;     % time step
T = 100;    % number of time steps

% Initial state of the agents
State1 = zeros(1,50);
State2 = rand(1,50);
Programming a simulator
 Step 2: Covering each time step

% time loop
for t = t0:dt:T
    % What happens at each time step?
    % - Update environment
    % - Update agents
end
Programming a simulator
 Step 3: Covering each agent

% agent loop
for i = 1:length(State)
    % What happens for each agent?
end
Programming a simulator
 Step 4: Updating agent i at time t

% update
% Each agent has a 60% chance to switch to state 1
randomValue = rand();
if (randomValue < 0.6)
    State(i) = 1;
else
    State(i) = 0;
end

Use sub-functions for better organization!
Programming a simulator
 Step 4: Updating agent i at time t

% update
% Each agent has probability 'proba' to switch to state 1
randomValue = rand();
if (randomValue < proba)
    State(i) = 1;
else
    State(i) = 0;
end

Define parameters in the first part!
Programming a simulator
 Step 4: Updating agent i at time t

% Initial state
t0 = 0;        % beginning of the time line
dt = 1;        % time step
T = 100;       % number of time steps
proba = 0.6;   % probability to switch state

Define parameters in the first part!
Programming a simulator
 Step 5: Final processing

% Outputs and final processing
propAlive = sum(State)/length(State);

And return propAlive as the output of the simulation.
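Putting the five steps together gives a complete, runnable sketch of simulation() (save it as simulation.m; the 50 agents follow the fragment in Step 1, and resetting non-switching agents to state 0 follows Step 4):
function propAlive = simulation()
% Minimal agent-based simulator assembling the five steps above.

% Step 1: initial state & parameters
t0 = 0;                 % beginning of the time line
dt = 1;                 % time step
T = 100;                % number of time steps
proba = 0.6;            % probability to switch to state 1
State = zeros(1, 50);   % initial state of the agents

% Step 2: time loop
for t = t0:dt:T
    % Step 3: agent loop
    for i = 1:length(State)
        % Step 4: update agent i at time t
        if rand() < proba
            State(i) = 1;
        else
            State(i) = 0;
        end
    end
end

% Step 5: final processing, returned as the output of the simulation
propAlive = sum(State) / length(State);
end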
Programming a simulator
 Running the simulation

>> p=simulation()
p =
    0.54
>> p=simulation()
p =
    0.72
Programming a simulator
 Alternatively: a framework for the simulator

% Running N simulations
N = 1000;           % number of simulations
p = zeros(1, N);    % preallocate the results
for n = 1:N
    p(n) = simulation();
end

Use a framework to run N successive simulations and store the result each time.
Programming a simulator
 Alternatively: a framework for the simulator

>> hist(p)
Exercise 1
 Start thinking about how to adapt the general simulation engine seen before for your project.
 Problems:
    Workspace organization (files, functions, data)
    Parameters initialization
    Number of agents
    Organize loops
    Which data to save, how frequently, in which format
    What kind of interactions are relevant
    …