Two-Stage Robust Integer Programming
Wolfram Wiesemann,¹ Grani A. Hanasusanto,² Daniel Kuhn³
¹Imperial College Business School, London, UK
²Department of Computing, Imperial College London, UK
³Risk Analytics and Optimization Chair, École Polytechnique Fédérale de Lausanne, Switzerland
Two-Stage Robust Integer Programs

Two-Stage Robust Integer Program (timeline in $t$):
  here-and-now: $x \in \mathcal{X} \subseteq \mathbb{R}^N_+$  →  observe $\xi \in \Xi$  →  wait-and-see: $y \in \mathcal{Y} \subseteq \{0,1\}^M$

Mathematical Formulation:
\[
\begin{aligned}
\text{minimize}\quad & \max_{\xi \in \Xi}\; \Big[\, \xi^\top C x + \min_{y \in \mathcal{Y}} \big\{ \xi^\top Q y : T x + W y \le H \xi \big\} \Big] \\
\text{subject to}\quad & x \in \mathcal{X}
\end{aligned}
\]

Applications: operations management, investment planning, game theory.
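To make the nested min–max–min structure concrete, here is a small brute-force sketch (in Python) that evaluates the worst-case cost of a fixed here-and-now decision $x$ by enumerating $\mathcal{Y}$ and sampling $\Xi$ on a grid. All instance data below are illustrative assumptions, not taken from the talk.

```python
# Brute-force evaluation of the two-stage robust objective for a fixed x.
# Toy data only: dimensions, matrices, and the box uncertainty set are
# illustrative assumptions, not taken from the talk.
import itertools
import numpy as np

rng = np.random.default_rng(0)
N, M, P, L = 2, 3, 2, 2           # dims of x, y, xi, and constraint rows
C = rng.normal(size=(P, N))       # objective coefficient matrices
Q = rng.normal(size=(P, M))
T = rng.normal(size=(L, N))       # constraint data: T x + W y <= H xi
W = rng.normal(size=(L, M))
H = rng.normal(size=(L, P))

x = np.array([1.0, 0.5])          # a fixed here-and-now decision
Y = [np.array(y) for y in itertools.product([0, 1], repeat=M)]  # Y = {0,1}^M

def second_stage(xi):
    """min_y { xi^T Q y : T x + W y <= H xi }, +inf if no y is feasible."""
    feasible = [y for y in Y if np.all(T @ x + W @ y <= H @ xi + 1e-9)]
    return min((xi @ Q @ y for y in feasible), default=np.inf)

# Worst case over a grid approximation of a box uncertainty set Xi = [0,1]^P.
grid = [np.array(xi) for xi in itertools.product(np.linspace(0, 1, 21), repeat=P)]
worst = max(xi @ C @ x + second_stage(xi) for xi in grid)
print("approximate worst-case cost of x:", worst)
```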
Literature Review

\[
\begin{aligned}
\text{minimize}\quad & \max_{\xi \in \Xi}\; \Big[\, \xi^\top C x + \min_{y \in \mathcal{Y}} \big\{ \xi^\top Q y : T x + W y \le H \xi \big\} \Big] \\
\text{subject to}\quad & x \in \mathcal{X}
\end{aligned}
\]

1. Exact Approaches: not available
2. Sampling-Based Approximations: progressive approximation (illustrated with sampled scenarios $\xi \in \Xi$ and their recourse decisions $y(\xi) \in \{0,1\}$)
3. Space-Partitioning Approximations: exponential growth
Illustrative example:
\[
\max_{\xi \in [0,1]^2}\; \min_{y \in \{0,1\}}\; \big\{\, y\,\xi_1 - \xi_2 \;:\; y \ge \xi_1 + \xi_2 - 1 \,\big\}
\]
The K-Adaptability Problem

K-Adaptability Problem, Bertsimas and Caramanis (2010) (timeline in $t$):
  here-and-now: $x \in \mathcal{X}$ and $y^1, \dots, y^K \in \mathcal{Y}$  →  observe $\xi \in \Xi$  →  wait-and-see: choose $k \in \{1, \dots, K\}$

Example revisited with $K = 2$ candidate policies, $y^1 = 0$ and $y^2 = 1$:
\[
\max_{\xi \in [0,1]^2}\; \min_{y \in \{0,1\}}\; \big\{\, y\,\xi_1 - \xi_2 \;:\; y \ge \xi_1 + \xi_2 - 1 \,\big\}
\]
Mathematical Formulation (with $\mathcal{K} = \{1, \dots, K\}$):
\[
\begin{aligned}
\text{minimize}\quad & \max_{\xi \in \Xi}\; \Big[\, \xi^\top C x + \min_{k \in \mathcal{K}} \big\{ \xi^\top Q y^k : T x + W y^k \le H \xi \big\} \Big] \\
\text{subject to}\quad & x \in \mathcal{X},\; y^k \in \mathcal{Y},\; k \in \mathcal{K}
\end{aligned}
\]

How good is the approximation, and can we solve it?
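The same brute-force idea as before also quantifies this question on a toy instance: enumerate every $K$-element set of candidate policies and compare the best achievable worst-case cost with the fully adaptive value. The data below are illustrative assumptions and $\Xi$ is approximated by a grid, so this is a sketch rather than a faithful solution method.

```python
# Compare the exact two-stage value with the K-adaptability value for a
# fixed x by enumeration. Toy instance only; all data are illustrative.
import itertools
import numpy as np

rng = np.random.default_rng(1)
N, M, P, L = 2, 3, 2, 2
C, Q = rng.normal(size=(P, N)), rng.normal(size=(P, M))
T, W, H = rng.normal(size=(L, N)), rng.normal(size=(L, M)), rng.normal(size=(L, P))
x = np.array([1.0, 0.5])
Y = [np.array(y) for y in itertools.product([0, 1], repeat=M)]
grid = [np.array(xi) for xi in itertools.product(np.linspace(0, 1, 21), repeat=P)]

def worst_case(policies):
    """max_xi [ xi^T C x + min over feasible policies of xi^T Q y ]."""
    def inner(xi):
        feas = [y for y in policies if np.all(T @ x + W @ y <= H @ xi + 1e-9)]
        return min((xi @ Q @ y for y in feas), default=np.inf)
    return max(xi @ C @ x + inner(xi) for xi in grid)

exact = worst_case(Y)                      # all of Y available once xi is seen
for K in (1, 2, 3):
    # best worst-case value achievable with K here-and-now candidate policies
    kadapt = min(worst_case(S) for S in itertools.combinations(Y, K))
    print(f"K = {K}: K-adaptability value {kadapt:.3f} vs exact {exact:.3f}")
```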
Objective Uncertainty

The K-Adaptability Problem with Objective Uncertainty: the second-stage constraints are deterministic ($T x + W y^k \le h$), so they can be moved out of the inner minimization:
\[
\begin{aligned}
\text{minimize}\quad & \max_{\xi \in \Xi}\; \Big[\, \xi^\top C x + \min_{k \in \mathcal{K}} \xi^\top Q y^k \Big] \\
\text{subject to}\quad & x \in \mathcal{X},\; y^k \in \mathcal{Y},\; T x + W y^k \le h,\; k \in \mathcal{K}
\end{aligned}
\]
Objective Uncertainty: Approximation Quality

How good is the approximation?

Theorem (Objective Uncertainty): The K-Adaptability Problem attains the same objective value as the Two-Stage Robust Integer Program whenever $K \ge \min\{\dim \mathcal{Y}, \dim \Xi\} + 1$.
Proof Outline (for $K \ge \dim \mathcal{Y} + 1$):

1. Consider the K-Adaptability Problem with $\mathcal{K} = \{1, \dots, K\}$ and $K = |\mathcal{Y}|$:
\[
\begin{aligned}
\text{minimize}\quad & \max_{\xi \in \Xi}\; \Big[\, \xi^\top C x + \min_{k \in \mathcal{K}} \xi^\top Q y^k \Big] \\
\text{subject to}\quad & x \in \mathcal{X},\; y^k \in \mathcal{Y},\; T x + W y^k \le h,\; k \in \mathcal{K}
\end{aligned}
\]
With $K = |\mathcal{Y}|$ candidate policies every second-stage decision is available, so this problem attains the objective value of the Two-Stage Robust Integer Program.

2. $\Delta_K$-Reformulation: rewrite the inner minimum over the unit simplex,
\[
\min_{k \in \mathcal{K}} \xi^\top Q y^k \;=\; \min_{\lambda \in \Delta_K} \sum_{k \in \mathcal{K}} \lambda_k \cdot \xi^\top Q y^k,
\]
where $\Delta_K$ denotes the unit simplex in $\mathbb{R}^K$.

3. Minimax Theorem: exchange the order of $\max_{\xi \in \Xi}$ and $\min_{\lambda \in \Delta_K}$, then combine the two minimization problems and reformulate the objective function:
\[
\begin{aligned}
\text{minimize}\quad & \max_{\xi \in \Xi}\; \Big[\, \xi^\top C x + \xi^\top Q \cdot \sum_{k \in \mathcal{K}} \lambda_k\, y^k \Big] \\
\text{subject to}\quad & x \in \mathcal{X},\; y^k \in \mathcal{Y},\; T x + W y^k \le h,\; k \in \mathcal{K},\; \lambda \in \Delta_K
\end{aligned}
\]

4. Carathéodory's Theorem: the point $\sum_{k \in \mathcal{K}} \lambda_k\, y^k$ lies in the convex hull of $\mathcal{Y}$, so it can be written as a convex combination of at most $\dim \mathcal{Y} + 1$ elements of $\mathcal{Y}$. The problem above is therefore unchanged if we set $\mathcal{K} = \{1, \dots, K = \dim \mathcal{Y} + 1\}$.

5. Reversing the steps (separate the two minimizations, apply the Minimax Theorem again, and undo the $\Delta_K$-Reformulation), we recover the K-Adaptability Problem with $K = \dim \mathcal{Y} + 1$:
\[
\begin{aligned}
\text{minimize}\quad & \max_{\xi \in \Xi}\; \Big[\, \xi^\top C x + \min_{k \in \mathcal{K}} \xi^\top Q y^k \Big] \\
\text{subject to}\quad & x \in \mathcal{X},\; y^k \in \mathcal{Y},\; T x + W y^k \le h,\; k \in \mathcal{K}
\end{aligned}
\]
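As a numerical illustration of the guarantee (under the assumptions of this sketch: a fixed here-and-now decision $x$, a made-up toy instance, and a box uncertainty set $\Xi = [0,1]^P$), the fully adaptive worst case and the worst case of any fixed policy set reduce to small epigraph LPs, so the "$K = \dim \mathcal{Y} + 1$ suffices" claim can be checked directly with scipy.

```python
# Numerical sanity check of the "K >= dim Y + 1" guarantee under objective
# uncertainty, for a fixed here-and-now decision x. Toy instance only;
# Xi is the box [0,1]^P, so both the fully adaptive worst case and the
# worst case of a fixed policy set reduce to small epigraph LPs.
import itertools
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(3)
N, M, P, L = 2, 3, 2, 2
C, Q = rng.normal(size=(P, N)), rng.normal(size=(P, M))
T, W = rng.normal(size=(L, N)), rng.normal(size=(L, M))
x = np.array([1.0, 0.5])
h = T @ x + rng.uniform(0.5, 1.5, size=L)      # ensures y = 0 is feasible

# Policies satisfying the deterministic second-stage constraints T x + W y <= h.
Y_feas = [np.array(y) for y in itertools.product([0, 1], repeat=M)
          if np.all(T @ x + W @ np.array(y) <= h + 1e-9)]

def worst_case(policies):
    """max_{xi in [0,1]^P} [ xi^T C x + min_k xi^T Q y^k ]  via an epigraph LP."""
    c = -np.concatenate([C @ x, [1.0]])          # variables (xi, t); maximize
    A_ub = np.array([np.concatenate([-(Q @ y), [1.0]]) for y in policies])
    b_ub = np.zeros(len(policies))               # t <= xi^T Q y^k for all k
    bounds = [(0, 1)] * P + [(None, None)]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    return -res.fun

exact = worst_case(Y_feas)                       # all feasible policies available
K = min(M + 1, len(Y_feas))                      # dim Y + 1 from the proof outline
best_K = min(worst_case(list(S)) for S in itertools.combinations(Y_feas, K))
print(f"exact value {exact:.6f}, best {K}-policy value {best_K:.6f}")
```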
Objective Uncertainty: Tractability

The K-Adaptability Problem with Objective Uncertainty:
\[
\begin{aligned}
\text{minimize}\quad & \max_{\xi \in \Xi}\; \Big[\, \xi^\top C x + \min_{k \in \mathcal{K}} \xi^\top Q y^k \Big] \\
\text{subject to}\quad & x \in \mathcal{X},\; y^k \in \mathcal{Y},\; T x + W y^k \le h,\; k \in \mathcal{K}
\end{aligned}
\]

Can we solve the approximation?

Theorem (Objective Uncertainty): The K-Adaptability Problem has an equivalent MILP reformulation whose size scales polynomially in the problem data.
Proof Outline:

1. Epigraph Reformulation: introduce $\tau$ for the inner minimum,
\[
\begin{aligned}
\text{minimize}\quad & \max_{\xi \in \Xi,\, \tau \in \mathbb{R}}\; \Big[\, \xi^\top C x + \tau \;:\; \tau \le \xi^\top Q y^k \;\; \forall k \in \mathcal{K} \Big] \\
\text{subject to}\quad & x \in \mathcal{X},\; y^k \in \mathcal{Y},\; T x + W y^k \le h,\; k \in \mathcal{K}
\end{aligned}
\]

2. Dualize Inner Problem (strong LP duality, writing $\Xi = \{\xi : A\xi \le b\}$ with dual variables $\alpha$ for $A\xi \le b$ and $\lambda$ for the epigraph constraints):
\[
\begin{aligned}
\text{minimize}\quad & b^\top \alpha \\
\text{subject to}\quad & x \in \mathcal{X},\; y^k \in \mathcal{Y},\; T x + W y^k \le h,\; k \in \mathcal{K} \\
& \alpha \in \mathbb{R}^R_+,\; \lambda \in \mathbb{R}^K_+,\; A^\top \alpha = C x + \sum_{k \in \mathcal{K}} \lambda_k\, Q y^k,\; e^\top \lambda = 1
\end{aligned}
\]

3. Exact Linearization: linearize the bilinear terms $\lambda_k y^k$ via auxiliary variables $z^k \in \mathbb{R}^M_+$, $k \in \mathcal{K}$:
\[
z^k = \lambda_k\, y^k
\quad\Longleftrightarrow\quad
z^k \le y^k,\;\; z^k \le \lambda_k e,\;\; z^k \ge (\lambda_k - 1)\,e + y^k,
\]
so that the coupling constraint becomes the linear equation $A^\top \alpha = C x + \sum_{k \in \mathcal{K}} Q z^k$. The result is a mixed-integer linear program whose size is polynomial in the problem data.
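A minimal sketch of the resulting MILP, assuming PuLP with its bundled CBC solver is available (any MILP solver would do): it fixes the here-and-now decision $x$, builds the dualized and linearized model from the proof outline above on a made-up toy instance with $\Xi = \{\xi : A\xi \le b\}$ taken to be the box $[0,1]^P$, and optimizes over the $K$ candidate policies.

```python
# MILP reformulation of the K-adaptability problem with objective uncertainty
# (epigraph + LP dualization + exact linearization). Sketch only: instance
# data are made up, the first-stage decision x is held fixed, and PuLP/CBC is
# assumed to be available.
import itertools
import numpy as np
import pulp

rng = np.random.default_rng(2)
N, M, P, L, K = 2, 3, 2, 2, 3                  # dims and number of policies
C, Q = rng.normal(size=(P, N)), rng.normal(size=(P, M))
T, W = rng.normal(size=(L, N)), rng.normal(size=(L, M))
x = np.array([1.0, 0.5])                       # fixed here-and-now decision
h = T @ x + rng.uniform(0.5, 1.5, size=L)      # ensures y = 0 is feasible

# Uncertainty set Xi = {xi : A xi <= b}, here the box [0, 1]^P.
A = np.vstack([np.eye(P), -np.eye(P)])
b = np.concatenate([np.ones(P), np.zeros(P)])
R = A.shape[0]

prob = pulp.LpProblem("k_adaptability_objective_uncertainty", pulp.LpMinimize)
y = pulp.LpVariable.dicts("y", (range(K), range(M)), cat="Binary")
lam = pulp.LpVariable.dicts("lam", range(K), lowBound=0)
alpha = pulp.LpVariable.dicts("alpha", range(R), lowBound=0)
z = pulp.LpVariable.dicts("z", (range(K), range(M)), lowBound=0)  # z^k = lam_k * y^k

prob += pulp.lpSum(float(b[r]) * alpha[r] for r in range(R))      # min b^T alpha

prob += pulp.lpSum(lam[k] for k in range(K)) == 1                 # e^T lam = 1
for p in range(P):                                                 # A^T alpha = C x + sum_k Q z^k
    prob += (pulp.lpSum(float(A[r, p]) * alpha[r] for r in range(R))
             == float(C[p] @ x) + pulp.lpSum(float(Q[p, j]) * z[k][j]
                                             for k in range(K) for j in range(M)))
for k in range(K):                                                 # T x + W y^k <= h
    for l in range(L):
        prob += pulp.lpSum(float(W[l, j]) * y[k][j] for j in range(M)) <= float(h[l] - T[l] @ x)
for k in range(K):                                                 # exact linearization of lam_k * y^k
    for j in range(M):
        prob += z[k][j] <= y[k][j]
        prob += z[k][j] <= lam[k]
        prob += z[k][j] >= lam[k] - 1 + y[k][j]

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print("status:", pulp.LpStatus[prob.status])
print("worst-case objective:", pulp.value(prob.objective))
for k in range(K):
    print(f"policy y^{k+1}:", [int(round(y[k][j].value())) for j in range(M)])
```

The number of variables and constraints grows linearly in $K$, $M$, and the number of rows of $A$, which is the polynomial scaling claimed by the theorem.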
Summary

The K-Adaptability Problem:
\[
\begin{aligned}
\text{minimize}\quad & \max_{\xi \in \Xi}\; \Big[\, \xi^\top C x + \min_{k \in \mathcal{K}} \big\{ \xi^\top Q y^k : T x + W y^k \le H \xi \big\} \Big] \\
\text{subject to}\quad & x \in \mathcal{X},\; y^k \in \mathcal{Y},\; k \in \mathcal{K}
\end{aligned}
\]

Objective Uncertainty:
  ★ strong approximation guarantees
  ★ MILP reformulation that scales polynomially
Constraint Uncertainty

The K-Adaptability Problem with Constraint Uncertainty:
\[
\begin{aligned}
\text{minimize}\quad & \max_{\xi \in \Xi}\; \Big[\, \xi^\top C x + \min_{k \in \mathcal{K}} \big\{ \xi^\top Q y^k : T x + W y^k \le H \xi \big\} \Big] \\
\text{subject to}\quad & x \in \mathcal{X},\; y^k \in \mathcal{Y},\; k \in \mathcal{K}
\end{aligned}
\]
Constraint Uncertainty: Approximation Quality

How good is the approximation?

Theorem (Constraint Uncertainty): The K-Adaptability Problem can attain a strictly larger objective value than the Two-Stage Robust Integer Program whenever $K < |\mathcal{Y}|$.
Constraint Uncertainty: Tractability

Can we solve the approximation?

Theorem (Constraint Uncertainty): The K-Adaptability Problem has an equivalent MILP reformulation that scales exponentially in the number of policies $K$ (but polynomially in the other problem data).
Proof Outline: Consider the 2-Adaptability Problem
\[
\begin{aligned}
\text{minimize}\quad & \max_{\xi \in \Xi}\; \Big[\, \xi^\top C x + \min_{k \in \{1,2\}} \big\{ \xi^\top Q y^k : T x + W y^k \le H \xi \big\} \Big] \\
\text{subject to}\quad & x \in \mathcal{X},\; y^k \in \mathcal{Y},\; k \in \{1,2\}
\end{aligned}
\]
and move the second-stage constraints into the uncertainty set.
Partition $\Xi$ according to which of the two policies is feasible:
\[
\begin{aligned}
\Xi_{1}(y^1,y^2) &= \big\{ \xi \in \Xi : T x + W y^1 \le H\xi,\;\; T x + W y^2 \le H\xi \big\} \\
\Xi_{2}(y^1,y^2) &= \big\{ \xi \in \Xi : T x + W y^1 \le H\xi,\;\; T x + W y^2 \not\le H\xi \big\} \\
\Xi_{3}(y^1,y^2) &= \big\{ \xi \in \Xi : T x + W y^1 \not\le H\xi,\;\; T x + W y^2 \le H\xi \big\} \\
\Xi_{4}(y^1,y^2) &= \big\{ \xi \in \Xi : T x + W y^1 \not\le H\xi,\;\; T x + W y^2 \not\le H\xi \big\}
\end{aligned}
\]
so that $\Xi = \Xi_{1}(y^1,y^2) \cup \Xi_{2}(y^1,y^2) \cup \Xi_{3}(y^1,y^2) \cup \Xi_{4}(y^1,y^2)$.
The worst-case objective then decomposes over the partition:
\[
\begin{aligned}
&\max_{\xi \in \Xi}\; \Big[\, \xi^\top C x + \min_{k \in \{1,2\}} \big\{ \xi^\top Q y^k : T x + W y^k \le H\xi \big\} \Big] \\
&\quad= \max\Big\{
\max_{\xi \in \Xi_{1}(y^1,y^2)} \big[\, \xi^\top C x + \min_{k \in \{1,2\}} \xi^\top Q y^k \big],\;\;
\max_{\xi \in \Xi_{2}(y^1,y^2)} \big[\, \xi^\top C x + \xi^\top Q y^1 \big], \\
&\qquad\qquad\;\;
\max_{\xi \in \Xi_{3}(y^1,y^2)} \big[\, \xi^\top C x + \xi^\top Q y^2 \big],\;\;
\max_{\xi \in \Xi_{4}(y^1,y^2)} \big[\, \xi^\top C x + \infty \big]
\Big\}
\end{aligned}
\]
On $\Xi_{1}$ both policies are feasible; on $\Xi_{2}$ (resp. $\Xi_{3}$) only $y^1$ (resp. $y^2$) is feasible; and on $\Xi_{4}$ neither policy is feasible, so the objective evaluates to $+\infty$ there.
The resulting uncertainty sets have become non-convex. For example (with two second-stage constraint rows),
\[
\begin{aligned}
\Xi_{2}(y^1,y^2)
&= \big\{ \xi \in \Xi : T x + W y^1 \le H\xi,\;\; T x + W y^2 \not\le H\xi \big\} \\
&= \big\{ \xi \in \Xi : T x + W y^1 \le H\xi,\;\; [T x + W y^2]_1 > [H\xi]_1 \;\vee\; [T x + W y^2]_2 > [H\xi]_2 \big\} \\
&= \big\{ \xi \in \Xi : T x + W y^1 \le H\xi,\;\; [T x + W y^2]_1 > [H\xi]_1 \big\}
\;\cup\;
\big\{ \xi \in \Xi : T x + W y^1 \le H\xi,\;\; [T x + W y^2]_2 > [H\xi]_2 \big\}
\end{aligned}
\]
Each non-convex set thus splits into a union of convex pieces, one per violated constraint row. Enumerating these pieces over all policies and dualizing the maximum over each convex piece, as in the objective-uncertainty case, yields an MILP whose size grows exponentially in $K$ but polynomially in the remaining problem data.
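The exponential factor can be made tangible by enumerating the convex pieces that the proof produces: for each policy, $\xi$ either satisfies its second-stage constraints or violates one specific row, giving $(L+1)^K$ pieces for $K$ policies and $L$ constraint rows. The labelling below is introduced here purely for illustration.

```python
# Enumerate the convex pieces of the uncertainty set arising in the proof:
# for each of the K policies, xi either satisfies T x + W y^k <= H xi or
# violates a specific row of it. The labels are illustrative; the piece
# count (L + 1)^K is what drives the exponential growth in K.
import itertools

K = 2   # number of second-stage policies
L = 2   # number of rows in T x + W y <= H xi

# For each policy: None = "all rows satisfied", or the index of a violated row.
labels = [None] + list(range(L))
pieces = list(itertools.product(labels, repeat=K))

for piece in pieces:
    conds = []
    for k, tag in enumerate(piece, start=1):
        if tag is None:
            conds.append(f"T x + W y^{k} <= H xi")
        else:
            conds.append(f"[T x + W y^{k}]_{tag + 1} > [H xi]_{tag + 1}")
    print("{ xi in Xi : " + ",  ".join(conds) + " }")

print(f"number of convex pieces: {len(pieces)} = (L + 1)^K = {(L + 1) ** K}")
```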
Summary

The K-Adaptability Problem:
\[
\begin{aligned}
\text{minimize}\quad & \max_{\xi \in \Xi}\; \Big[\, \xi^\top C x + \min_{k \in \mathcal{K}} \big\{ \xi^\top Q y^k : T x + W y^k \le H \xi \big\} \Big] \\
\text{subject to}\quad & x \in \mathcal{X},\; y^k \in \mathcal{Y},\; k \in \mathcal{K}
\end{aligned}
\]

Objective Uncertainty:
  ★ strong approximation guarantees
  ★ MILP reformulation that scales polynomially

Constraint Uncertainty:
  ★ weak approximation guarantees
  ★ MILP reformulation that scales
    ✤ exponentially in K
    ✤ polynomially in the rest
Numerical Experiments

Supply Chain Design (timeline in $t$):
  here-and-now: build facilities  →  observe: customer demands  →  wait-and-see: product delivery

Can be modelled as a two-stage robust integer program with objective uncertainty!
[Figure: Improvement (%) and Optimality Gap (%) of the 2-adaptable, 3-adaptable, and 4-adaptable solutions versus the number of locations (10–40).]
Numerical Experiments

Investment Planning (timeline in $t$):
  here-and-now: early-stage investment (first-mover advantage), costs $c(\xi)^\top x$, profits $r(\xi)^\top x$
  observe: risk factors
  wait-and-see: late-stage investment (no advantage), costs $c(\xi)^\top y(\xi)$, profits $0.8 \cdot r(\xi)^\top y(\xi)$

Can be modelled as a two-stage robust integer program with constraint uncertainty!
[Figure: Improvement (%) and Optimality Gap (%) of the 2-adaptable, 3-adaptable, and 4-adaptable solutions versus the number of projects (5–30).]
References

[1] D. Bertsimas and C. Caramanis. Adaptability via sampling. In Proceedings of the 46th IEEE Conference on Decision and Control (CDC), pages 4717–4722, 2007.
[2] D. Bertsimas and C. Caramanis. Finite adaptability in multistage linear optimization. IEEE Transactions on Automatic Control, 55(12):2751–2766, 2010.
[3] D. Bertsimas and A. Georghiou. Design of near optimal decision rules in multistage adaptive mixed-integer optimization. Optimization Online, 2013.
[4] B. L. Gorissen, I. Yanikoglu, and D. den Hertog. Hints for practical robust optimization. SSRN, 2013.
[5] G. A. Hanasusanto, D. Kuhn, and W. Wiesemann. Two-stage robust integer programming. Optimization Online, 2014.
[6] P. Vayanos, D. Kuhn, and B. Rustem. Decision rules for information discovery in multi-stage stochastic programming. In Proceedings of the 50th IEEE Conference on Decision and Control and European Control Conference (CDC-ECC), pages 7368–7373, 2011.
Wolfram Wiesemann
www.doc.ic.ac.uk/~wwiesema
[email protected]