Stochastic Optimization for Markov Modulated
Networks with Application to
Delay Constrained Wireless Scheduling
[Title figure: arrival streams A1(t), A2(t), …, AL(t) feeding a Markov chain with States 1, 2, 3 and Control-Dependent Transition Probabilities]
Michael J. Neely
University of Southern California
http://www-rcf.usc.edu/~mjneely
Proc. 48th IEEE Conf. on Decision and Control (CDC), Dec. 2009
*Sponsored in part by the DARPA IT-MANET Program, NSF OCE-0520324, NSF Career CCF-0747525
Motivating Problem:
•Delay Constrained Opportunistic Scheduling
[Figure: K queues with arrival processes A1(t), …, AK(t) and channel states S1(t), …, SK(t)]
Status Quo:
•Lyapunov Based Max-Weight: [Georgiadis, Neely, Tassiulas F&T 2006]
•Treats stability/energy/throughput-utility with low complexity
•Cannot treat average delay constraints
•Dynamic Programming / Markov Decision (MDP) Theory:
•Curse of Dimensionality
•Need to know Traffic/Channel Probabilities
Insights for Our New Approach:
•Combine Lyapunov/Max-Weight Theory with Renewals/MDP
[Figure: Lyapunov Functions, Max-Weight Theory, Virtual Queues — combined with — Renewal Theory, Stochastic Shortest Paths, MDP Theory]
•Consider “Small” number of Control-Driven Markov States
Example: •K Queues with Avg. Delay Constraints (K “small”)
•N Queues with Stability Constraints (N arbitrarily large)
[Figure: queues A1(t)/S1(t), …, AK(t)/SK(t) are Delay Constrained; queues AK+1(t)/SK+1(t), …, AM(t)/SM(t) are Not Delay Constrained]
Key Results:
•Unify Lyapunov/Max-Weight Theory with Renewals/MDP: the "Max Weight (MW)" rule becomes a "Weighted Stochastic Shortest Path (WSSP)" rule
•Treat General Markov Decision Networks
•Use Lyapunov Analysis and Virtual Queues to Optimize
and Compute Performance Bounds
•Use Existing SSP Approx Algs (Robbins-Monro) to Implement
•For the Example Delay Problem:
•Meet all K Average Delay Constraints, Stabilize all N other queues
•Utility close to optimal, with tradeoff in delay of N other queues
•All Delays and Convergence Times are polynomial in (N+K)
•Per-Slot Complexity geometric in K
General Problem Formulation: (slotted time t = {0,1,2,…})
•Qn(t) = Collection of N queues to be stabilized, with arrivals Rn(t) and service rates μn(t)
•S(t) = Random Event (e.g. random traffic, channels)
•Z(t) = Markov State Variable (|Z| states)
•I(t) = Control Action (e.g. service, resource alloc.)
•xm(t) = Additional Penalties Incurred by action on slot t
General functions for μ(t), R(t), x(t):
μn(t) = μn(I(t), S(t), Z(t))
Rn(t) = Rn(I(t), S(t), Z(t))
xm(t) = xm(I(t), S(t), Z(t))
Control-Dependent Transition Probabilities: Pr[Z(t+1) | Z(t), I(t), S(t)]
[Figure: Markov chain over States 1, 2, 3, with the transition from Z(t) to Z(t+1) driven by I(t), S(t)]
Goal:
Minimize: x̄0
Subject to: x̄m ≤ xm^av for all m
Qn stable for all n
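To make this formulation concrete, here is a minimal one-slot simulation sketch of the dynamics above (a hedged illustration only: the arguments mu, R, penalties, and transition stand in for the general functions μn(I,S,Z), Rn(I,S,Z), xm(I,S,Z) and the control-dependent transition probabilities, none of which are specified here; the max[·,0]+arrival queue update is the standard form assumed for the queueing figure).

```python
def slot_update(Q, Z, S, I, mu, R, penalties, transition):
    """One slot of the Markov-modulated network model (illustrative sketch).

    Q: list of N queue backlogs Qn(t); Z: Markov state Z(t);
    S: random event S(t); I: control action I(t).
    mu(I,S,Z), R(I,S,Z): service and arrival vectors; penalties(I,S,Z): penalty vector;
    transition(Z,I,S): samples Z(t+1) from the control-dependent transition probabilities.
    """
    x = penalties(I, S, Z)                       # x_m(t) = x_m(I(t), S(t), Z(t))
    Q_next = [max(q - m, 0) + r                  # Q_n(t+1) = max[Q_n(t) - mu_n(t), 0] + R_n(t)
              for q, m, r in zip(Q, mu(I, S, Z), R(I, S, Z))]
    Z_next = transition(Z, I, S)                 # control-dependent Markov transition
    return Q_next, Z_next, x
```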
Applications of this Formulation:
•For K of the queues, let: Z(t) = (Q1(t), …, QK(t))
•These K have Finite Buffer: Qk(t) in {0, 1, …, Bmax}
•Cardinality of states: |Z| = (Bmax + 1)^K
Recall: Penalties have the form: xm(t) = xm(I(t), S(t), Z(t))
1) Penalty for Congestion:
Define Penalty: xk(t) = Zk(t)
Can then do one of the following (for example):
•Minimize: x̄k
•Minimize: x̄1 + … + x̄K
•Constraints: x̄k ≤ xk^av
2) Penalty for Packet Drops:
Define Penalty: xk(t) = Dropsk(t)
Can then do one of the following (for example):
•Minimize: x̄k
•Minimize: x̄1 + … + x̄K
•Constraints: x̄k ≤ xk^av
3) A Nice Trick for Average Delay Constraints:
Suppose we want: W̄k ≤ 5 slots.
Define Penalty: xk(t) = Qk(t) − 5·Arrivalsk(t)
Then by Little's Theorem:
x̄k ≤ 0
equivalent to: Q̄k − 5·λk ≤ 0
equivalent to: W̄k·λk − 5·λk ≤ 0
equivalent to: W̄k ≤ 5
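The same trick works for any delay target W^av in place of 5 (a worked restatement of the chain above; it assumes λk > 0 so the final division is valid, and uses Little's theorem Q̄k = W̄k·λk):

```latex
% Little's law trick for an arbitrary average delay target W^{av}
x_k(t) \triangleq Q_k(t) - W^{av} A_k(t)
\;\Longrightarrow\;
\overline{x}_k \le 0
\iff \overline{Q}_k - W^{av}\lambda_k \le 0
\iff \overline{W}_k \lambda_k - W^{av}\lambda_k \le 0
\iff \overline{W}_k \le W^{av}
```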
Solution to the General Problem:
Minimize: x̄0
Subject to: x̄m ≤ xm^av for all m
Qk stable for all k
•Define Virtual Queues for Each Penalty Constraint:
Ym(t+1) = max[ Ym(t) + xm(t) − xm^av, 0 ]
[Figure: virtual queue Ym(t) with input xm(t) and constant service rate xm^av]
•Define Lyapunov Function:
L(t) = Σ_k Qk(t)² + Σ_m Ym(t)²
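A minimal sketch of these bookkeeping updates (the max[·, 0] virtual-queue recursion is the standard update implied by the queue picture above; variable names are illustrative):

```python
def update_virtual_queues(Y, x, x_av):
    """Y_m(t+1) = max[Y_m(t) + x_m(t) - x_m^av, 0] for each penalty constraint m."""
    return [max(y + xm - xav, 0.0) for y, xm, xav in zip(Y, x, x_av)]

def lyapunov(Q, Y):
    """L(t) = sum_k Q_k(t)^2 + sum_m Y_m(t)^2."""
    return sum(q * q for q in Q) + sum(y * y for y in Y)
```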
Solution to the General Problem:
•Define Forced Renewals: every slot, i.i.d. with probability δ > 0, the system is forced into a renewal state
[Figure: Markov chain with States 1, 2, 3 plus an added Renewal State 0]
Example for the K Delay-Constrained Queue Problem:
Every slot, with probability δ, drop all packets in all K Delay-Constrained Queues (loss rate ≤ Bmax·δ)
Renewals “Reset” the system
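As a sketch, the forced-renewal mechanism in this example is just the following per-slot step (delta and the queue list are illustrative placeholders):

```python
import random

def forced_renewal_step(delay_queues, delta):
    """With probability delta, drop all packets in the K delay-constrained queues,
    forcing the system back to the renewal state. Since each queue holds at most
    Bmax packets, the induced loss rate is at most Bmax*delta per queue."""
    if random.random() < delta:
        return [0] * len(delay_queues), True   # renewal: the system is "reset"
    return delay_queues, False                 # no forced renewal this slot
```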
Solution to the General Problem:
•Define the Variable-Slot Lyapunov Drift over a Renewal Period:
Δ_T(Q(t), Y(t)) = E{ L(t+T) − L(t) | Q(t), Y(t) }
where T = the random renewal period duration (the interval from t to t+T)
•Control Rule: At every renewal time t, observe the queues and take actions that minimize the following over one renewal period:
Minimize: Δ_T(Q(t), Y(t)) + V·E{ Σ_{τ=t}^{t+T−1} x0(τ) | Q(t), Y(t) }
*This generalizes our previous Max-Weight (MW) rule from [F&T 2006]: minimizing this expression over a full renewal period is a Weighted Stochastic Shortest Path (WSSP) problem.
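The right-hand side that the control rule minimizes can be written as an accumulated per-slot cost, which is what makes it a weighted SSP. A hedged sketch, following the terms on the proof-sketch slides below, with the queue weights Q(t), Y(t) frozen at the start of the renewal period; all names are illustrative:

```python
def wssp_stage_cost(Q0, Y0, V, x0, mu, R, x, x_av):
    """Per-slot cost of the Weighted SSP with frozen weights Q0 = Q(t), Y0 = Y(t):
       V*x0(tau) - sum_k Q0_k*[mu_k(tau) - R_k(tau)] - sum_m Y0_m*[x_av_m - x_m(tau)]."""
    cost = V * x0
    cost -= sum(q * (m - r) for q, m, r in zip(Q0, mu, R))
    cost -= sum(y * (xav - xm) for y, xav, xm in zip(Y0, x_av, x))
    return cost

def wssp_period_cost(Q0, Y0, V, trajectory, x_av):
    """Total weighted-SSP cost over one renewal period. `trajectory` is a list of
    per-slot tuples (x0, mu_vector, R_vector, x_vector) produced by some policy;
    the control rule seeks the policy minimizing the expectation of this total."""
    return sum(wssp_stage_cost(Q0, Y0, V, x0, mu, R, x, x_av)
               for (x0, mu, R, x) in trajectory)
```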
•Suppose we implement a (C, ε)-approximate SSP, so that every renewal period we have:
Achieved Cost ≤ Optimal SSP Cost + C + ε·[ Σ_k Qk(t) + Σ_m Ym(t) + V ]
This can be achieved using approximate DP theory, Neuro-Dynamic Programming, etc. (see [Bertsekas, Tsitsiklis, Neuro-Dynamic Programming]), together with a Delayed-Queue Analysis.
Theorem: If there exists a policy that meets all constraints with “εmax slackness,” then any (C, ε)-approximate SSP implementation yields:
1) All (virtual and actual) queues are stable, with:
E{Qsum} ≤ [ (B/δ + Cδ) + V(εδ + xmax) ] / (εmax − εδ)
2) All time average constraints are satisfied ( x̄m ≤ xm^av )
3) The time average cost satisfies:
x̄0 ≤ x0(optimal) + (B/δ + Cδ)/V + εδ·(1 + xmax/εmax)
(recall that δ = the forced renewal probability)
Proof Sketch: (Consider exact SSP for simplicity)
Δ_T(Q(t), Y(t)) + V·E{ Σ_{τ=t}^{t+T−1} x0(τ) | Q(t), Y(t) }
≤ B + V·E{ Σ_{τ=t}^{t+T−1} x0(τ) | Q(t), Y(t) }
− Σ_k Qk(t)·E{ Σ_{τ=t}^{t+T−1} [μk(τ) − Rk(τ)] | Q(t), Y(t) }
− Σ_m Ym(t)·E{ Σ_{τ=t}^{t+T−1} [xm^av − xm(τ)] | Q(t), Y(t) }
[We take control actions to minimize the Right Hand Side above over the Renewal Period. This is the Weighted SSP problem of interest.]
Proof Sketch: (Consider exact SSP for simplicity)
Δ_T(Q(t), Y(t)) + V·E{ Σ_{τ=t}^{t+T−1} x0(τ) | Q(t), Y(t) }
≤ B + V·E{ Σ_{τ=t}^{t+T−1} x0*(τ) | Q(t), Y(t) }
− Σ_k Qk(t)·E{ Σ_{τ=t}^{t+T−1} [μk*(τ) − Rk*(τ)] | Q(t), Y(t) }
− Σ_m Ym(t)·E{ Σ_{τ=t}^{t+T−1} [xm^av − xm*(τ)] | Q(t), Y(t) }
[We can thus plug any alternative control policy into the Right Hand Side, including the one that yields the optimal time average subject to all time average constraints.]
Proof Sketch: (Consider exact SSP for simplicity)
Δ_T(Q(t), Y(t)) + V·E{ Σ_{τ=t}^{t+T−1} x0(τ) | Q(t), Y(t) }
≤ B + V·E{ Σ_{τ=t}^{t+T−1} x0*(τ) | Q(t), Y(t) }      →  V·x0(optimum)·E{T}
− Σ_k Qk(t)·E{ Σ_{τ=t}^{t+T−1} [μk*(τ) − Rk*(τ)] | Q(t), Y(t) }      →  ≤ 0
− Σ_m Ym(t)·E{ Σ_{τ=t}^{t+T−1} [xm^av − xm*(τ)] | Q(t), Y(t) }      →  ≤ 0
[Note: by RENEWAL THEORY, the infinite horizon time average is exactly achieved over any renewal period, so the V term evaluates to V·x0(optimum)·E{T}, and the two bracketed expectations are nonnegative so the corresponding terms can be dropped.]
Proof Sketch: (Consider exact SSP for simplicity)
Δ_T(Q(t), Y(t)) + V·E{ Σ_{τ=t}^{t+T−1} x0(τ) | Q(t), Y(t) } ≤ B + V·x0(optimum)·E{T}
[Sum the resulting telescoping series to get the utility performance bound!]
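A hedged sketch of that telescoping step, under the notation above (t_0 < t_1 < … < t_R are successive renewal times, and B is the per-period drift constant from the inequality above):

```latex
% Sum the per-period bound over R renewal periods and cancel intermediate L(t_i) terms:
\mathbb{E}\{L(t_R)\} - \mathbb{E}\{L(t_0)\}
  + V\,\mathbb{E}\Big\{\textstyle\sum_{\tau=t_0}^{t_R-1} x_0(\tau)\Big\}
  \;\le\; R\,B + V\, x_0^{\mathrm{opt}}\,\mathbb{E}\{t_R - t_0\}
% Use L(t_R) >= 0, divide by V*E{t_R - t_0} (roughly R/delta for forced renewals),
% and let R -> infinity to obtain a time-average cost within an O(1/V) gap of
% x_0(optimum), i.e. the form of bound 3) in the theorem (with C = 0, epsilon = 0).
```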
Implementation of Approximate Weighted SSP:
•Use a simple 1-step Robbins-Monro Iteration with a past history of W samples {S(t1), S(t2), …, S(tW)}.
•To avoid subtle correlations between samples and
queue weights, use a Delayed Queue Analysis.
•Algorithm requires no a-priori knowledge of statistics,
and takes roughly |Z| operations per slot to perform
Robbins-Monro. Convergence and Delay are log(|Z|).
•For K Delay-constrained queues, |Z| = Bmax^K (geometric in K). Can modify the implementation for constant per-slot complexity, but then convergence time is geometric in K. (Either way, we want K small.)
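As a rough illustration of the flavor of such an iteration, here is a generic one-sample Robbins-Monro style update for per-state value estimates of the weighted SSP. This is not the exact algorithm of the paper; the step-size rule, the stage-cost argument (e.g., the wssp_stage_cost sketch above), and the pinning of the renewal state are illustrative assumptions:

```python
def robbins_monro_update(J, z, z_next, stage_cost, n, renewal_state=0):
    """One stochastic-approximation step for the value estimate J[z] of the weighted SSP.

    Target = sampled one-step cost plus the current estimate at the next state,
    with J pinned to 0 at the renewal state (the SSP terminal state). The step size
    gamma_n = 1/n satisfies the usual Robbins-Monro conditions.
    """
    gamma = 1.0 / n
    target = stage_cost + (0.0 if z_next == renewal_state else J[z_next])
    J[z] = (1.0 - gamma) * J[z] + gamma * target
    return J
```

Each stored sample S(t_i) would supply one (z, z_next, stage_cost) observation; the Delayed-Queue Analysis mentioned above keeps the queue weights used in the stage cost statistically decoupled from these samples.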
Conclusions:
•Treat general Markov Decision Networks
•Generalize Max-Weight/Lyapunov Optimization to
Min Weighted Stochastic Shortest Path (W-SSP)
•Can solve delay constrained network problems:
•Convergence Times, Delays Polynomial in (N+K)
•Per-Slot Computation Complexity of Solving
Robbins-Monro is geometric in K. (want K small)