A Generic Mean Field Convergence Result
for Systems of Interacting Objects
From Micro to Macro
Jean-Yves Le Boudec, EPFL
Joint work with David McDonald, U. of Ottawa
and Jochen Mundinger, EPFL
1
The full text of my talk and this slide show are available from my
web page
http://people.epfl.ch/jean-yves.leboudec
Direct access:
Full text:
http://infoscience.epfl.ch/getfile.py?recid=108827&mode=best
Slide show:
http://icawww1.epfl.ch/PS_files/mean-field-leb-vt07.ppt
2
Contents
1. Motivation
2. A Generic Model for a System of Interacting Objects
3. Convergence to the Mean Field
4. Fast Simulation
5. Full Scale Example: A Reputation System
6. Outlook
3
Motivation
Find re-usable approximations of large scale systems
Examples from my field
Performance of UWB impulse radio: many sensors, each with a MAC-layer state
Ad-Hoc networking
Reputation Systems
From microscopic description to macroscopic equations
Understand fluid approximation and mean field approximation
4
Example 1 : TCP/ECN
[Figure: N TCP connections (1 … n … N) sharing an ECN router with queue length R(t); ECN feedback received with probability q(R(t))]
TCP connection n transmits at a rate in {s_0, …, s_i, …, s_I}
Queue length at the router is R(t)
With probability q(R(t)), connection n receives an Explicit Congestion Notification (ECN) in the next time slot
When connection n does not receive an ECN, it increases its rate: if the rate is s_i, the new rate is s_{i+1} (i < I)
When it receives an ECN, it decreases its rate: if the rate is s_i, the new rate is s_{d(i)}
The question is the behaviour when N is large
5
Microscopic Description
Time is discrete
Connection n runs one Markov chain X^N_n(t)
[Figure: state-transition diagram of X^N_n(t); the rate increases when no ECN is received, decreases when an ECN is received]
The transition probabilities of the Markov chain X^N_n(t) depend on the global state R(t) (queue size)
Global state R(t) depends on the states of all connections:
let M^N_i(t) = number of connections in state i at time t, and C = service rate of the router;
the next queue length is then a deterministic function of R(t), the M^N_i(t) and C
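A minimal sketch of this microscopic model in Python. Only the increase/decrease rules and the ECN probability q(R(t)) come from the slides; the rate levels s_i, the decrease map d(i), the marking function q() and the queue update used below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
N, I, C = 1000, 7, 3000.0                         # connections, max rate index, service rate (assumed values)
s = np.array([2.0 ** i for i in range(I + 1)])    # assumed rate levels s_0 .. s_I
d = lambda i: i // 2                              # assumed decrease map d(i)
q = lambda r: min(1.0, r / 5000.0)                # assumed ECN marking probability q(r)

x = np.zeros(N, dtype=int)                        # state of each connection = rate index
R = 0.0                                           # queue length at the router
for t in range(100):
    ecn = rng.random(N) < q(R)                    # each connection gets an ECN w.p. q(R(t))
    x = np.where(ecn, d(x), np.minimum(x + 1, I)) # decrease on ECN, otherwise increase
    R = max(0.0, R + s[x].sum() - C)              # assumed queue dynamics (arrivals minus service)
```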
6
Macroscopic Description
The fluid approximation is often given as a simplification of the previous model: a deterministic recursion for the occupancy measure.
Combined with the equation for the queue length R(t), it gives a macroscopic description of the system.
In [17], Tinna. and Makowski show that it holds as a large-N asymptotic result.
7
The Mean Field Approximation
Assume we want to analyze one TCP connection in detail
We can keep the microscopic description for this TCP connection, and use the fluid approximation for the others, i.e. pretend X^N_1(t) (one connection) and R(t) (global resource) are independent
This is similar to what is called the mean field approximation in physics
We can call it fast simulation
8
Another Example:
Robot Swarm
N robots
Robot has S = 2 possible states
Transition for one robot depends
on this robot’s state + how many
other robots are in search state
[11] uses the fluid approximation for this model.
9
A few other Examples …
C. Bordenave, D. McDonald and A. Proutière, A particle system in interaction with a rapidly varying environment: Mean field limits and applications, arXiv:math/0701363v2
10
In these and other examples, some authors assume the validity of the fluid / mean field approximation and use it to do performance evaluation, parameter identification, control…
… while, in contrast, others spend most of the paper proving the derivation and validity of the approximations in their specific setting
Papers in this latter class are intimidating: the cost of proving one approximation result ≈ 1 PhD, and the proof is not re-usable ("Never again!")
11
Can we have answers of general applicability to:
When are the fluid approximation and the mean field approximation valid?
Can we write them in a sound (= mechanical) way?
12
Contents
1. Motivation
2. A Generic Model for a System of Interacting Objects
3. Convergence to the Mean Field
4. Fast Simulation
5. Full Scale Example: A Reputation System
6. Outlook
13
Mean Field Interaction Model
A Generic Model, with generic results
Does not cover all useful cases, but is a useful first step
Time is discrete
N objects
Every object has a state in a finite set of S possible states.
Informally: object n evolves depending only on
Its own state
A global resource whose evolution depends only on how many other
objects are in each state
14
Model Assumptions
X^N_n(t): state of object n at time t
M^N_i(t) = proportion of objects that are in state i
M^N is the "occupancy measure" ≈ the "mean field"
R^N(t) = global resource = "history" of the occupancy measure
Conditionally on the history up to time t, objects draw their next states independently of each other, according to a transition matrix K^N(r) evaluated at the current resource value r = R^N(t)
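A minimal sketch of this generic model in Python. The data layout, the signature g(r, m) for the resource update, and the timing convention are assumptions made for illustration; only the ingredients (N objects, occupancy measure, global resource, kernel K(r)) come from the slides.

```python
import numpy as np

def simulate(N, S, K, g, x0, r0, T, rng=np.random.default_rng()):
    """Simulate N interacting objects with S states.
    K(r): S x S transition matrix given resource value r
    g(r, m): assumed resource update from previous resource and occupancy measure
    x0: length-N integer array of initial states, r0: initial resource, T: horizon."""
    x, r = np.array(x0), r0
    history = []
    for _ in range(T):
        m = np.bincount(x, minlength=S) / N        # occupancy measure M^N(t)
        history.append((m, r))
        P = K(r)
        # conditionally on the history, objects draw their next states independently
        x = np.array([rng.choice(S, p=P[state]) for state in x])
        r = g(r, np.bincount(x, minlength=S) / N)  # update the global resource
    return history
```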
15
Two Mild Assumptions
1. Continuity of the integration function g()
2. For large N, the transition matrix K becomes independent of N and
is continuous
16
TCP/ECN Example fits in this Framework
[Figure: N TCP connections (1 … n … N) sharing an ECN router with queue length R(t); ECN feedback received with probability q(R(t))]
Intuitively, the example satisfies the conditions:
State of one connection depends only on the buffer content
Buffer content depends only on how many connections are in each state
Formally:
One object = one TCP connection
State of one object = index i of the sending rate
R^N(t) = total buffer occupancy / N
Function g(): the new value of R^N is obtained from its previous value and the occupancy measure; g() is continuous, thus Assumption 1 is satisfied
17
TCP/ECN Example fits in this Framework
[Figure: same setting as before, with N TCP connections, an ECN router with queue length R(t), and ECN feedback probability q(R(t))]
Transition matrix K:
Let q(r) = probability of negative feedback (ECN) when R == r
With probability q(r) a connection receives an ECN and decreases its rate; with probability 1 - q(r) it receives no ECN and increases its rate
K is independent of N, thus Assumption 2 is satisfied if q() is continuous
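As an illustration, a minimal sketch of the matrix K(r) for this example, assuming rate states 0 .. I, an ECN probability q(r), and a decrease map d(i); the particular values of q and d used below are placeholders.

```python
import numpy as np

def ecn_transition_matrix(q_r, I, d):
    """Transition matrix K(r) of one connection's rate index.
    q_r: probability of receiving an ECN when the queue length is r
    I:   highest rate index (states are 0 .. I)
    d:   decrease map i -> d(i)."""
    K = np.zeros((I + 1, I + 1))
    for i in range(I + 1):
        up = min(i + 1, I)            # no ECN: increase (assumed: stay at I if already maximal)
        K[i, up] += 1.0 - q_r         # no ECN received
        K[i, d(i)] += q_r             # ECN received: decrease to s_d(i)
    return K

K = ecn_transition_matrix(0.2, 7, lambda i: i // 2)   # placeholder q(r) = 0.2, d(i) = i // 2
assert np.allclose(K.sum(axis=1), 1.0)                # each row is a probability distribution
```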
18
A Multiclass Variant
Take the same TCP/ECN model as before, but introduce multiple classes:
aggressive connections, normal connections
Also fits in our framework
Mean Field does not mean all objects are exchangeable!
State of an object = (c, i)
c : class
i : sending rate
Objects may change class or not
19
Contents
1. Motivation
2. A Generic Model for a System of Interacting Objects
3. Convergence to the Mean Field
4. Fast Simulation
5. Full Scale Example: A Reputation System
6. Outlook
20
21
Practical Application: Derivation of the Fluid Approximation
The theorem replaces the stochastic system by a deterministic dynamical system
This gives a method to write and justify the fluid approximation in the large-N regime
The equation for the limiting occupancy measure can be rewritten in terms of N_i(t) = N M^N_i(t) = number of objects in state i at time t
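A minimal sketch of iterating such a deterministic recursion. The generic form m(t+1) = m(t) K(r(t)), r(t+1) = g(r(t), m(t+1)) used below is an assumption about the timing convention; the exact limiting equations are the ones given by the theorem.

```python
import numpy as np

def fluid_iterate(m0, r0, K, g, T):
    """Iterate the deterministic (fluid) recursion.
    m0: initial occupancy measure (length-S array), r0: initial resource,
    K(r): S x S transition matrix, g(r, m): assumed resource update, T: horizon."""
    m, r = np.asarray(m0, dtype=float), r0
    trajectory = [(m.copy(), r)]
    for _ in range(T):
        m = m @ K(r)       # deterministic evolution of the occupancy measure
        r = g(r, m)        # deterministic evolution of the global resource
        trajectory.append((m.copy(), r))
    return trajectory
```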
22
Proof of Theorem
Based on
The next theorem (fast simulation)
A coupling argument
An ad-hoc version of the strong law of large numbers
The Glivenko-Cantelli lemma
23
Contents
1. Motivation
2. A Generic Model for a System of Interacting Objects
3. Convergence to the Mean Field
4. Fast Simulation
5. Full Scale Example: A Reputation System
6. Outlook
24
Fast Simulation / Analysis of One Object
Assume we are interested in one object in particular
E.g. distribution of time until a TCP connection reaches maximum rate
For large N, since mean field convergence holds, one may do the
mean field approximation and replace the set of other objects by
the deterministic dynamical system
The next theorem says that, essentially, this is valid
25
Fast Simulation Algorithm
The algorithm tracks the state of one specific object and returns its next state, drawn according to the transition matrix K
The true value of the occupancy measure / resource is replaced by its deterministic limit
This is the mean field independence approximation
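A minimal sketch of this fast simulation, assuming the deterministic resource trajectory r(0 .. T) has already been computed (for instance with the fluid recursion sketched earlier); the names and signatures below are illustrative.

```python
import numpy as np

def fast_simulate_one_object(y0, r_traj, K, T, rng=np.random.default_rng()):
    """Simulate one tagged object against the deterministic limit.
    y0: initial state of the tagged object, r_traj: deterministic resource values r(0 .. T),
    K(r): S x S transition matrix, T: horizon."""
    y, states = y0, [y0]
    for t in range(T):
        probs = K(r_traj[t])[y]               # row of K for the current state,
        y = rng.choice(len(probs), p=probs)   # evaluated at the deterministic resource value
        states.append(y)
    return states
```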
26
Fast Simulation Result
27
Practical Application
This justifies the mean field approximation for the stochastic
evolution of one object in the large N regime
Gives a method for fast simulation or analysis
The state space for Y_1 has S states, instead of S^N
28
Contents
1. Motivation
2. A Generic Model for a System of Interacting Objects
3. Convergence to the Mean Field
4. Fast Simulation
5. Full Scale Example: A Reputation System
6. Outlook
29
A Reputation System
My original motivation for this work
Illustrates the complete set of steps, including a few modelling
tricks
System
N objects = N peers
Peers observe one subject and rate it
Rating is a number in (0,1)
Direct observations and spreading of reputation
Confirmation bias + forgetting
30
Operation of Reputation System: Forgetting
Z_n(t) = reputation rating held by peer n
During a direct observation, the subject is perceived as positive (with some probability θ) or negative (with probability 1 - θ)
In case of a direct positive observation, the rating moves towards 1: Z_n(t+1) = w Z_n(t) + (1 - w)
In case of a direct negative observation, the rating moves towards 0: Z_n(t+1) = w Z_n(t)
w is the forgetting factor, close to 1 (0.9 in the next slides)
31
Confirmation Bias
Peers also read other peers' ratings
If the overheard rating z is close enough to the peer's own current rating (within a threshold), the peer incorporates z into its rating in the same way as a direct observation; otherwise z is discarded
The threshold defines the confirmation bias
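A minimal sketch of one honest peer's update rules under these assumptions; the exponential-forgetting form, the symbol θ for the probability of a positive observation, and the numeric threshold value are illustrative, not taken verbatim from the slides.

```python
import random

W = 0.9        # forgetting factor (value from the slides)
THETA = 0.9    # assumed probability that a direct observation is positive
DELTA = 0.2    # assumed confirmation-bias threshold

def direct_observation(z):
    """Update rating z after a direct observation of the subject."""
    positive = random.random() < THETA
    return W * z + (1.0 - W) * (1.0 if positive else 0.0)

def overhear(z, z_other):
    """Update rating z after overhearing another peer's rating z_other."""
    if abs(z_other - z) <= DELTA:     # confirmation bias: accept only nearby ratings
        return W * z + (1.0 - W) * z_other
    return z                          # otherwise discard the overheard rating
```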
32
Liars and Honest Peers
Honest peer does as just explained
Liar tries to bring the reputation down
Uses different strategies, see later
33
Example of exact simulation: N = 100 peers, with maximal liars (always say Z = 0)
[Figure: histogram of ratings (proportion of peers vs. rating). Initially peers have Z = 0, 0.5 or 1; w = 0.9. Every time step: direct observation proba 0.01, meet a liar proba 0.30, meet an honest peer proba 0.69]
34
[Figure: rating vs. time for 3 particular peers, one of each initial type; w = 0.9]
35
Can we study the system with 10^6 users instead of 100?
36
The problem fits in our framework…
Assume discrete time
At every time step a peer
Makes a direct observation
Or overhears a liar
Or overhears some honest peer
Or does nothing
Object = honest peer
Assume first that liars use strategy 1: maximal lying (always say
Z=0)
Transition of one honest peer depends on
Own state
Distribution of states of all other peers
=> Fits in our framework with memory R = occupancy measure M
37
Different Liar Strategies
Strategy 1 (maximal lying): liars always say Z= 0
Strategy 2 (infer): liar guesses your rating based on past
experience
Transition of one honest peer depends on
Own state
Distribution of states of all other peers
What liars remember seeing in the past
=> Fits in our framework with memory R = occupancy measure of ratings
at steps t and t-1
Strategy 3 (side information): liars know your rating and lie as negatively as you will accept
not realistic but serves as benchmark (worst case)
Similar to strategy 1, memory = occupancy measure M
38
We would like to apply the mean field
convergence result to analyze very large N
But model has continuous state space
Discretize reputation ratings!
Quantize Z_n on L bits: replace Z_n by X_n = 2^L Z_n, taking integer values
Issue: small increments due to the "forgetting" coefficient w (e.g. w = 0.9) are rounded to 0
Solution: use random rounding; replace the previous update by its randomly rounded version, where RANDROUND(2.7) = 2 with proba 0.3 and 3 with proba 0.7
E(RANDROUND(x)) = x
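A minimal implementation of this random rounding; RANDROUND is as defined on the slide, and the check at the end is just for illustration.

```python
import math
import random

def randround(x):
    """Randomly round x to floor(x) or ceil(x) so that the expectation equals x.
    Example: randround(2.7) = 2 with probability 0.3 and 3 with probability 0.7."""
    lo = math.floor(x)
    frac = x - lo
    return lo + (1 if random.random() < frac else 0)

samples = [randround(2.7) for _ in range(100000)]
print(sum(samples) / len(samples))    # close to 2.7, since E[RANDROUND(x)] = x
```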
39
Transition Matrix K
The transition matrix K^N is straightforward but tedious to describe.
Unlike in the TCP/ECN example, it does depend on N.
It contains terms such as the proba that an indirect observation with an honest peer is with someone who has rating equal to k.
This proba depends on N, but for large N it converges uniformly to M^N_k(t), with no term in N.
The limiting matrix K is polynomial in the coordinates M_k(t) of the occupancy measure, thus continuous, thus Assumption 2 is satisfied.
Assumption 1 is trivially satisfied, by inspection
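For illustration, a plausible form of such a term; this is only an assumption consistent with the description (the overheard peer chosen uniformly among the N - 1 others), not the exact expression from the original slide:

```latex
\Pr[\text{overheard rating} = k \mid \text{own rating} = j]
   = \frac{N\,M^N_k(t) - \mathbf{1}_{\{j=k\}}}{N-1}
   \longrightarrow M_k(t) \quad (N \to \infty)
```

with the convergence uniform in the occupancy measure, and no residual term in N.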
40
Therefore we can apply the theorem and
derive the fluid approximation and the
mean field approximation
Both are exact in the limit N → ∞
41
[Figure: three panels comparing (a) discrete event simulation with N = 100, (b) the fluid approximation, and (c) fast simulation based on the mean field approximation; limiting reputation ratings: 0.9 and 0.1]
42
Fluid approximation
Can be written using Theorem 4.1
Is a deterministic recurrence whose state vector is the memory
The number of dimensions is 2^(L+1), where L = number of quantization bits for reputation values (e.g. L = 8)
Mean Field Approximation = Fast Simulation
Simulation of one Markov chain on a state space with 2^L states, with time-varying transition probabilities
43
Different Parameters (few liars)
[Figure: with few liars, the final ratings converge to the true value; a phase transition is observed]
44
Different Initial Conditions
45
[Figure: results for Liar Strategy 2 (infer) and Liar Strategy 3 (side information); peers starting after 512 time units]
46
Modelling Locality with Multiclass Model
We can model spatial aspects
Object = honest peer; state = (c, x) with
c = location (in a discrete set of locations)
x = rating (same as before)
This makes it possible to account for locality of interaction
47
Contents
1. Motivation
2. A Generic Model for a System of Interacting Objects
3. Convergence to the Mean Field
4. Fast Simulation
5. Full Scale Example: A Reputation System
6. Outlook
48
Outlook
I have shown how a mean field convergence result can be used to write
and validate
the fluid approximation = macroscopic description
the mean field approximation = fast simulation (or analysis)
Applies to cases where objects interact such that
Transition depends on state of this object + current and past distribution of
states of all other objects
Number of objects is large compared to number of states of one object
Extensions
birth and death of objects
transitions that affect several objects simultaneously
Continuous time limits
M. Benaïm and J.W. Weibull, Deterministic Approximations of Stochastic Evolution in Games, Econometrica 71(3), 873-903, 2003
Quasi-stationary approximations
[Bordenave, McDonald and Proutière, arXiv 2007]
Gaussian approximations (central limit theorems)
49
… thank you for your attention
50