Numerical Methods for
Stochastic Networks
Peter W. Glynn
Institute for Computational and Mathematical Engineering
Management Science and Engineering
Stanford University
Based on joint work with Jose Blanchet, Henry Lam, Denis Saure, and Assaf Zeevi
Presented at Stochastic Networks Conference, Cambridge, UK
March 23, 2010
Stochastic models:
Descriptive
Prescriptive
Predictive
Today’s Talk:
• an LP-based algorithm for computing the stationary distribution of RBM (Saure / Zeevi)
• a Lyapunov bound for stationary expectations (Zeevi)
• rare-event simulation for many-server queues (Blanchet / Lam)
Computing Steady-State Distributions for Markov Chains
One Approach
An LP Alternative
$$\sum_{x \in K_n} \pi_n(x)\, h(x) \le c$$

$$\sum_{x \in K_n^c} \pi(x) \;\le\; \frac{c}{\inf_{x \in K_n^c} h(x)} \qquad (\text{"tightness"})$$
• Where does the constraint
  $$\sum_{x \in K_n} \pi_n(x)\, h(x) \le c$$
  come from?
• We assume that we can obtain a computable bound on $E_\pi h(X)$, i.e. $E_\pi h(X) \le c$
• This will come from Lyapunov bounds (later).
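The tightness claim above is just Markov's inequality applied to the stationary distribution $\pi$ (a one-line sketch):

$$\pi(K_n^c) \;=\; \sum_{x \in K_n^c} \pi(x) \;\le\; \frac{\sum_{x \in K_n^c} \pi(x)\, h(x)}{\inf_{x \in K_n^c} h(x)} \;\le\; \frac{E_\pi h(X)}{\inf_{x \in K_n^c} h(x)} \;\le\; \frac{c}{\inf_{x \in K_n^c} h(x)},$$

so if $h(x) \to \infty$ as $x$ leaves compact sets, the stationary mass outside $K_n$ vanishes as $n \to \infty$.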
Application to RBM
For some stochastic models, the LP algorithm
is particularly natural and powerful
Reflected Brownian Motion (RBM):
State space $\mathbb{R}_+^d$, covariance matrix $(b_{ij} : 1 \le i, j \le d)$, reflection matrix $R$:

$$dX(t) = \mu\, dt + dB(t) + R\, dY(t),$$

where $Y_j$ can increase only when $X_j = 0$.

Interior generator:

$$\mathcal{L} = \sum_{i=1}^{d} \mu_i \frac{\partial}{\partial x_i} + \frac{1}{2} \sum_{i,j} b_{ij} \frac{\partial^2}{\partial x_i \partial x_j}$$

Boundary (reflection) operators:

$$\mathcal{D}_j = \sum_{i=1}^{d} R_{ij} \frac{\partial}{\partial x_i}$$
The LP (over point masses $p_k$ at interior grid points and $q_{jk}$ at grid points on the boundary faces $F_j$, with test functions $f_1, \dots, f_m$):

$$\min u$$

s/t

$$\left| \sum_{x_k \in \operatorname{int}(\mathbb{R}_+^d)} p_k\, (\mathcal{L} f_i)(x_k) \;+\; \sum_{j=1}^{d} \sum_{x_k \in F_j} q_{jk}\, (\mathcal{D}_j f_i)(x_k) \right| \;\le\; u, \qquad 1 \le i \le m,$$

$$\sum_k p_k = 1, \qquad \sum_k p_k\, h(x_k) \le c, \qquad p_k \ge 0, \quad q_{jk} \ge 0.$$

These constraints discretize the basic adjoint relationship satisfied by the stationary distribution $\pi$ and the boundary measures $\nu_j$:

$$\int_{\mathbb{R}_+^d} \pi(dx)\, (\mathcal{L} f)(x) \;+\; \sum_j \int_{F_j} \nu_j(dx)\, (\mathcal{D}_j f)(x) \;=\; 0.$$
Theorem: This algorithm converges as $m, n \to \infty$, in the sense that $\pi_n \Rightarrow \pi$ and $\nu_{j,n} \Rightarrow \nu_j$ as $n \to \infty$.
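To make the discretization concrete, here is a minimal sketch (my illustration, not the Saure/Zeevi implementation) for a one-dimensional RBM on $[0, \infty)$ with drift $\mu < 0$ and variance $\sigma^2$, using polynomial test functions $f_i(x) = x^i$ and the Lyapunov moment bound $E_\pi X \le \sigma^2/(2|\mu|)$ as the constraint constant $c$; the exact stationary distribution is exponential with rate $2|\mu|/\sigma^2$, which gives a check on the output.

import numpy as np
from scipy.optimize import linprog

# One-dimensional RBM on [0, infinity): dX(t) = mu dt + sigma dB(t) + dY(t), mu < 0.
mu, sigma = -1.0, 1.0
c = sigma**2 / (2 * abs(mu))        # Lyapunov bound: E_pi[X] <= sigma^2 / (2|mu|)

K, x_max, m = 200, 8.0, 6           # grid size, grid range, number of test functions
xs = np.linspace(0.0, x_max, K)     # interior grid points x_k
n_var = K + 2                       # variables: p_0..p_{K-1}, boundary mass q, residual u

def Lf(i, x):
    # (L f_i)(x) for f_i(x) = x^i:  mu * i x^{i-1} + (sigma^2 / 2) * i (i-1) x^{i-2}
    return mu * i * x**(i - 1) + 0.5 * sigma**2 * i * (i - 1) * x**max(i - 2, 0)

def Df(i):
    # (D f_i)(0) = f_i'(0): equals 1 for i = 1 and 0 otherwise
    return 1.0 if i == 1 else 0.0

A_ub, b_ub = [], []
for i in range(1, m + 1):
    row = np.zeros(n_var)
    row[:K] = [Lf(i, x) for x in xs]
    row[K] = Df(i)
    # |BAR residual| <= u, written as two linear inequalities
    A_ub.append(np.concatenate([row[:K + 1], [-1.0]])); b_ub.append(0.0)
    A_ub.append(np.concatenate([-row[:K + 1], [-1.0]])); b_ub.append(0.0)

moment = np.zeros(n_var); moment[:K] = xs       # sum_k p_k x_k <= c  (moment constraint)
A_ub.append(moment); b_ub.append(c)

A_eq = np.zeros((1, n_var)); A_eq[0, :K] = 1.0  # sum_k p_k = 1
cost = np.zeros(n_var); cost[-1] = 1.0          # minimize the residual u

res = linprog(cost, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
              A_eq=A_eq, b_eq=[1.0], bounds=[(0, None)] * n_var)
p = res.x[:K]
rate = 2 * abs(mu) / sigma**2
print("LP estimate of P(X <= 1):", p[xs <= 1.0].sum(), " exact:", 1 - np.exp(-rate))

With polynomial test functions the constraints only pin down the first few moments of the discretized measure; smoothness constraints (as mentioned in the figure caption below) would be needed for pointwise-accurate density estimates.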
Numerical Results

[Figure: smoothed marginal distribution estimates for the two-dimensional diffusion. The dotted line is computed via Monte Carlo simulation, and the solid line represents the algorithm estimates based on n = 50 and m = 4, incorporating smoothness constraints.]
• $X = (X(t) : t \ge 0)$ is an $S$-valued Markov process with cadlag paths
• We say $g \in \mathcal{D}(A)$, with $Ag = k$, if there exists $k$ such that
$$M(t) = g(X(t)) - \int_0^t k(X(s))\, ds$$
is a $P_x$-local martingale for each $x \in S$
• When $M$ is a true martingale, taking expectations gives
$$E_x\, g(X(h)) - g(x) = E_x \int_0^h k(X(s))\, ds$$
Computing Bounds on Stationary Expectations

• Main Theorem:
Suppose $g \in \mathcal{D}(A)$ is non-negative and satisfies
$$\sup_{x \in S} (Ag)(x) < \infty.$$
Then,
$$\int_S \pi(dx)\, (Ag)(x) \le 0.$$
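Why the inequality holds (a sketch, ignoring localization details): start the process in stationarity, $X(0) \sim \pi$. If $g$ were $\pi$-integrable and $M$ a true martingale, Dynkin's formula and stationarity would give

$$0 = E_\pi g(X(h)) - E_\pi g(X(0)) = E_\pi \int_0^h (Ag)(X(s))\, ds = h \int_S \pi(dx)\, (Ag)(x).$$

When $g$ is only known to be non-negative, localizing $M$ and applying Fatou's lemma (using $\sup_x (Ag)(x) < \infty$) yields the one-sided conclusion $\int_S \pi(dx)\, (Ag)(x) \le 0$.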
Diffusion Upper Bound

$$dX(t) = \mu(X(t))\, dt + \sigma(X(t))\, dB(t)$$

• Suppose $g \ge 0$ is $C^2$ and satisfies
$$(\mathcal{L} g)(x) \le -h(x) + c$$
for $x \in \mathbb{R}^d$, where $h \ge 0$ and
$$\mathcal{L} = \sum_{i=1}^{d} \mu_i(x) \frac{\partial}{\partial x_i} + \frac{1}{2} \sum_{i,j} b_{ij}(x) \frac{\partial^2}{\partial x_i \partial x_j},$$
where $b(x) = \sigma(x)\sigma(x)^T$. Then,
$$\int \pi(dx)\, h(x) \le c.$$
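A worked one-dimensional example (my illustration, not from the slides): for the Ornstein-Uhlenbeck process $dX(t) = -\alpha X(t)\, dt + \sigma\, dB(t)$ with $\alpha > 0$, take $g(x) = x^2$. Then

$$(\mathcal{L} g)(x) = -2\alpha x^2 + \sigma^2 = -h(x) + c, \qquad h(x) = 2\alpha x^2, \quad c = \sigma^2,$$

so the bound gives $E_\pi X^2 \le \sigma^2/(2\alpha)$, which is exactly the stationary variance in this case, i.e. the Lyapunov bound is tight here.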
Many-Server Loss Systems
Many-Server Asymptotic Regime
Simplify our model (temporarily):
• use slotted time
• eliminate Markov modulation
• restrict to discrete service-time distributions with finite support

Consider the equilibrium fraction of customers lost in the network.
Key Idea:
Many-server loss systems behave identically to infinite-server systems up to the time of the first loss.
Step 1:
Step 2:
Step 3:
Crude Monte Carlo: 3.7 days
Importance sampling (I.S.): a few seconds
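As a generic illustration of why the gap is so large (my sketch, not the Blanchet/Lam algorithm): by the key idea above, a loss-type event looks like the occupancy of the matching infinite-server system exceeding the capacity n, and such Poisson-tail probabilities can be estimated by exponentially tilted importance sampling rather than crude Monte Carlo.

import numpy as np

rng = np.random.default_rng(0)
lam, n, N = 100.0, 150, 100_000     # offered load, capacity, number of samples

# Crude Monte Carlo: indicator of {Q >= n}, Q ~ Poisson(lam); typically returns 0
# at this sample size, since the target probability is of order 1e-6.
q = rng.poisson(lam, size=N)
cmc = (q >= n).mean()

# Exponential tilting: sample from Poisson(lam * e^theta) with theta chosen so the
# tilted mean equals the rare level n, then reweight by the likelihood ratio.
theta = np.log(n / lam)                               # tilted mean = lam * e^theta = n
q_t = rng.poisson(lam * np.exp(theta), size=N)
log_lr = lam * (np.exp(theta) - 1.0) - theta * q_t    # log dP / dP_theta for Poisson
is_est = np.mean((q_t >= n) * np.exp(log_lr))

print("crude MC estimate:", cmc, "  importance sampling estimate:", is_est)

Every tilted sample lands near the rare set, so the estimator's relative error stays controlled as the event gets rarer, while crude Monte Carlo needs on the order of 1/p samples just to observe the event once.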
Network Extension
• Estimate loss at a particular station
• If the most likely path to overflow a given station does not involve upstream stations, the previous algorithm is efficient
• If an upstream station does hit its capacity constraint, we have "constrained Poisson statistics" that need to be sampled
Questions?