Modelling and Simulating Social Systems with MATLAB
Lesson 4 – Introduction to Agent-based simulations
A. Johansson & W. Yu
© ETH Zürich

Lesson 4 – Contents
- Swarm intelligence
- Human cooperation and coordination
- How to develop a multi-agent simulator
- Exercises

Swarm Intelligence
- Decentralization
- Interaction
- Self-organization

Boids (Reynolds, 1986)
More information: http://www.red3d.com/cwr/boids/
Each boid follows three simple steering rules, applied to its local flockmates:
- Separation: steer to avoid crowding local flockmates
- Alignment: steer towards the average heading of local flockmates
- Cohesion: steer to move toward the average position of local flockmates
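As an illustration of how these three rules can translate into code, below is a minimal MATLAB sketch (not part of the original lecture material) that computes the steering contribution for one boid. The function name, the equal weighting of the three rules, and the fixed-radius neighbourhood are assumptions made for this example.

    % Illustrative sketch (assumed helper, not from the lecture):
    % combined steering for boid i. pos and vel are N-by-2 matrices of
    % positions and velocities; radius defines the local neighbourhood.
    function steer = boidsSteer(i, pos, vel, radius)
        d  = sqrt(sum((pos - repmat(pos(i,:), size(pos,1), 1)).^2, 2));
        nb = (d < radius) & (d > 0);      % local flockmates, excluding boid i
        if ~any(nb)
            steer = [0 0];                % no neighbours: no steering
            return;
        end
        separation = sum(repmat(pos(i,:), nnz(nb), 1) - pos(nb,:), 1); % avoid crowding
        alignment  = mean(vel(nb,:), 1) - vel(i,:);  % match average heading
        cohesion   = mean(pos(nb,:), 1) - pos(i,:);  % move toward average position
        steer = separation + alignment + cohesion;   % equal weights, for simplicity
    end

In a simulator of the kind developed below, such a function would be called inside the agents loop to update each boid's velocity at every time step.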
Human Cooperation
The Prisoner's Dilemma. Payoffs (row player, column player):

                 Cooperate   Defect
    Cooperate      19,19      0,20
    Defect         20,0      15,15

Defecting is the dominant strategy for each player, yet mutual defection (15,15) leaves both players worse off than mutual cooperation (19,19); hence the dilemma.

The Evolution of Cooperation (Axelrod, 1984)
- Tit for tat
- Shadow of the future

Coordination
A coordination game. Payoffs (row player, column player):

           A      B
    A     1,1    0,0
    B     0,0    1,1

Both players are rewarded only when they choose the same action; (A,A) and (B,B) are both equilibria.

Learning
- Imitation
- Reinforcement
- Best reply

From Factors to Actors (Macy, 2002)
- Agents are autonomous
- Agents are interdependent
- Agents follow simple rules
- Agents are adaptive and backward-looking

Programming a simulator

Agent-based simulations: these models simulate the simultaneous operations of multiple agents, in an attempt to re-create and predict the behaviour of complex phenomena. Agents can be pedestrians, animals, customers, internet users, etc.

State update: each agent is characterized by one or more states (e.g. location, dead/alive, colour, level of excitation). At each step of the time line, the state at time t is updated to a new state at time t + dt, according to a set of behavioural rules that may also take the agent's neighbourhood into account.

Programming an agent-based simulator typically involves five steps:
1. Initialization: initial state, parameters, environment
2. Time loop: processing each time step
3. Agents loop: processing each agent within the current time step
4. Update: updating the state of agent i at time t
5. Save data: storing the results for further analysis

Step 1: Defining the initial state & parameters

    % Initial state
    t0 = 0;    % beginning of the time line
    dt = 1;    % time step
    T  = 100;  % number of time steps

    % Initial state of the agents
    State1 = [0 0 0 0];
    State2 = [1 1 1 1];
    etc.

For larger populations, build the state vectors with MATLAB functions instead of typing them out:

    State1 = zeros(1,50);
    State2 = rand(1,50);

Step 2: Covering each time step

    % time loop
    for t = t0:dt:T
        % What happens at each time step?
        % - Update environment
        % - Update agents
    end

Step 3: Covering each agent

    % agents loop
    for i = 1:length(states)
        % What happens for each agent?
    end

Step 4: Updating agent i at time t

    % update
    % Each agent has a 60% chance of switching to state 1
    randomValue = rand();
    if (randomValue < 0.6)
        states(i) = 1;
    else
        states(i) = 0;
    end

Use sub-functions for better organization!

Rather than hard-coding the 0.6, define it as a parameter:

    % update
    % Each agent has probability 'proba' of switching to state 1
    randomValue = rand();
    if (randomValue < proba)
        states(i) = 1;
    else
        states(i) = 0;
    end

Define parameters in the first part!

    % Initial state
    t0 = 0;       % beginning of the time line
    dt = 1;       % time step
    T  = 100;     % number of time steps
    proba = 0.6;  % probability to switch state

Step 5: Final processing

    % Outputs and final processing
    propAlive = sum(states)/length(states);

Return propAlive as the output of the simulation.

Running the simulation:

    >> p = simulation()
    p =
        0.54
    >> p = simulation()
    p =
        0.72

Alternatively: a framework for the simulator. Use a framework script to run N successive simulations and store the result each time:

    % Running N simulations
    N = 1000;  % number of simulations
    for n = 1:N
        p(n) = simulation();
    end

The distribution of the stored results can then be visualized with:

    >> hist(p)
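Assembling the five steps gives a complete, runnable simulation.m. This sketch only combines the code fragments already shown above; the choice of 50 agents, all starting in state 0, is an assumption for illustration.

    function propAlive = simulation()
    % Sketch of the full simulator assembled from the five steps above.

    % Step 1: initial state and parameters
    t0 = 0;               % beginning of the time line
    dt = 1;               % time step
    T  = 100;             % number of time steps
    proba = 0.6;          % probability to switch state
    states = zeros(1,50); % initial state of the agents (assumed: 50 agents, all 0)

    % Step 2: time loop
    for t = t0:dt:T
        % Step 3: agents loop
        for i = 1:length(states)
            % Step 4: update agent i at time t
            randomValue = rand();
            if (randomValue < proba)
                states(i) = 1;
            else
                states(i) = 0;
            end
        end
    end

    % Step 5: final processing
    propAlive = sum(states)/length(states);

Since each agent independently ends up in state 1 with probability proba, repeated runs return values fluctuating around 0.6, which matches the example outputs 0.54 and 0.72 above.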
A simple dice game

The rules: each player rolls two dice and gains a number of points equal to the sum of the two dice. If both dice show the same value, the score is doubled.

    2;5: 7 (normal score)
    4;4: 8 x 2 = 16 (bonus score)

The players roll the dice N times, in turn, and each player accumulates points. The player with the best score after N rolls wins.

The cheater: there is a cheater among the players. The cheater owns special dice that have a higher probability of showing the same value. In a multi-agent simulation, each player is an agent characterized by his score and his behaviour (cheater or fair player). We now develop this simulation.

Exercise 1
The behaviour of a fair player: write a function called fairRoll.m that returns the score of a fair player.
- Inputs: none
- Outputs: the score S
- Write the function in three steps: (1) simulate the rolling of die 1 and die 2, (2) calculate the sum, (3) check whether the two values are the same and, in that case, double the sum to get the final score.
Hints:
- The command rand()*6 returns a random value between 0 and 6.
- The function ceil(n) returns the nearest integer greater than or equal to n.
- Combine them to get a random integer between 1 and 6 (a short sketch of this combination is given after the exercises).

Exercise 2
The dice game with fair players: use the template file simulation.m to build the simulator.
- Initialization: set the variables N=100 agents and T=50 time steps, and create a vector called score of length N, used to store the score of each player (initial values set to zero).
- In the update section, use the function fairRoll.m to update the score of player n.
- Return the final vector score as an output.
Run the simulation. What is the mean score of all players at the end of the game? Use the function max(score) to determine the winner (its second output gives the index of the winning player: [bestScore, winner] = max(score)).

Exercise 3
The behaviour of a cheater: write a function called cheatRoll.m that returns the score of a cheater.
- Inputs: a variable called probaCheat, the probability of a successful cheat.
- Outputs: the final score S
Hints: write your code in four steps:
1. Determine the value of the first die.
2. Pick a random number between 0 and 1; the cheat is successful if this number is lower than probaCheat.
3. Determine the value of the second die according to step 2.
4. Calculate and return the score S.

Exercise 4
The dice game with one cheater: go back to your simulator.
- In the initialization section, add a new state vector called behaviour, with length(behaviour)=N, where behaviour(i)=1 if player i is a cheater and behaviour(i)=0 if player i is a fair player. Set player 1 as a cheater and all the others as fair players.
- Modify the update section to call either fairRoll or cheatRoll, depending on the behaviour of player n.
- Set the value of probaCheat to 0.6 (ideally, define it as a parameter in the initialization section).
Run the simulation. Who is the winner? What is the score of the cheater, and what is the average score of the fair players?

Exercise 5
A framework for the simulator: create a new script file. Use a for loop to repeat the simulation S times. At the end of each simulation, store the id of the winner (the cheater has id=1). What is the winning rate of the cheater after S=100 games? Try to change the probability of successful cheating (probaCheat) to reduce the winning rate to 75%.

Exercise 6 (optional)
Interactions and behavioural changes: in most cases, a cheater does not cheat systematically, in order not to look too suspicious. Change the update section again, so that the cheater only uses his special dice when another player has a better score than he does. Call fairRoll or cheatRoll according to the cheater's current score. The behaviour vector should be updated as well, in case of a change of strategy. What is the cheater's new winning rate?
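To make the hints from Exercise 1 concrete, here is how the two commands combine into a single fair die roll. This is a sketch of the hint only, not a full solution to the exercise.

    % rand()*6 is uniform on the open interval (0,6); ceil maps it to an
    % integer in {1,...,6}, each with probability 1/6.
    die = ceil(rand()*6);

Rolling a second die the same way and comparing the two values then completes fairRoll.m. (In newer MATLAB releases, randi(6) is an equivalent built-in.)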