Mathematical Models of Revenue Management
Lecture 2: Static single-resource capacity control
Fedor Nikitin
Saint-Petersburg State University
The model

- Fare vector: $f = (f_1, f_2, \ldots, f_n)$, with $f_i > f_j$ for all $i < j$.
- Demand vector: $d = (d_1, d_2, \ldots, d_n)$ is a random vector with some distribution.
- Partitioned booking limits (controls): $p = (p_1, p_2, \ldots, p_n)$, $p_i \in \mathbb{Z}_+$ for all $i$, with $\sum_{i=1}^n p_i = K$, where $K$ is the capacity.
- Revenue: $R(\omega, p) = \sum_{i=1}^n f_i \min\{p_i, d_i(\omega)\}$.
- Optimization problem: $\mathbb{E}[R(\omega, p)] \to \max_{p \in P}$.
- Set of all admissible controls: $P = \left\{ p \in \mathbb{Z}_+^n \;\middle|\; \sum_{i=1}^n p_i = K \right\}$.
Littlewood’s two-class model rule

- Two classes: $n = 2$.
- Fares: $f_1 > f_2$.
- Capacity is $K$.
- The control is $u$, and the booking limits are $(u, K - u)$: $u$ seats are protected for class 1.
- Littlewood’s rule: choose the maximum $u$ such that
  $$f_1 \, \mathbb{P}(d_1(\omega) \ge u) \ge f_2.$$
- This rule is based on simple marginal analysis: the $u$-th protected seat is worth keeping for class 1 as long as its expected class-1 revenue $f_1 \mathbb{P}(d_1 \ge u)$ is at least the certain revenue $f_2$ from selling it to class 2.

Exercise: give a formal proof that this rule yields the optimal solution.
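As a sketch, Littlewood's rule can be computed numerically. The example below assumes Poisson-distributed class-1 demand; the fares, the rate, and the function names are illustrative choices, not from the lecture:

```python
from math import exp, factorial

def poisson_tail(lam, u):
    # P(d1 >= u) for d1 ~ Poisson(lam)
    return 1.0 - sum(exp(-lam) * lam**k / factorial(k) for k in range(u))

def littlewood_protection(f1, f2, tail, cap):
    # Largest u <= cap with f1 * P(d1 >= u) >= f2: the u-th marginal seat
    # is worth protecting while its expected class-1 revenue beats f2.
    u = 0
    while u < cap and f1 * tail(u + 1) >= f2:
        u += 1
    return u

u_star = littlewood_protection(200.0, 100.0, lambda u: poisson_tail(10.0, u), cap=50)
```

With these numbers the rule protects seats up to the point where the tail probability drops below $f_2 / f_1 = 0.5$, i.e. roughly the median of the demand distribution.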
Simplified model

- Fare vector: $f = (f_1, f_2, \ldots, f_n)$, with $f_i > f_j$ for all $i < j$.
- Demand vector: $d = (d_1, d_2, \ldots, d_n)$ is a random vector.
- The process occurs in $n$ stages, running from stage $n$ (with demand realization $d_n$) down to stage 1.
- At each stage $j$ we observe the realization of the demand $d_j$ and make a decision on the control $u$.
- The objective, as before, is to maximize expected revenue.
- The constraints on the controls $u_j$ are dynamic, i.e. they change at each stage and depend on the previous stages.
Dynamic Programming formulation

The problem can be solved by DP (Dynamic Programming).
The state variable $x$ is the amount of remaining capacity.
Let $V_j(x)$ be the Bellman function at stage $j$ with remaining capacity $x$.
Bellman's equation is then
$$V_j(x) = \mathbb{E}\left[ \max_{0 \le u \le \min\{d_j, x\}} \{ f_j u + V_{j-1}(x - u) \} \right],$$
with boundary condition
$$V_0(x) = 0.$$
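The recursion above can be sketched directly in code. This is a minimal illustration; representing demand as per-stage probability mass functions is an assumption made for the sketch:

```python
def solve_dp(fares, demand_pmfs, K):
    # fares[j-1] = f_j; demand_pmfs[j-1] is a dict {d: P(d_j = d)}.
    # Returns V with V[j][x] = V_j(x); boundary condition V_0(x) = 0.
    n = len(fares)
    V = [[0.0] * (K + 1)]
    for j in range(1, n + 1):
        f, pmf = fares[j - 1], demand_pmfs[j - 1]
        row = []
        for x in range(K + 1):
            ev = 0.0
            for d, prob in pmf.items():
                # Inner maximization over 0 <= u <= min{d_j, x}.
                best = max(f * u + V[j - 1][x - u] for u in range(min(d, x) + 1))
                ev += prob * best
            row.append(ev)
        V.append(row)
    return V
```

For example, with two classes, fares $(f_1, f_2) = (200, 100)$, deterministic demands $d_1 = 1$, $d_2 = 2$, and capacity $K = 2$, the recursion gives $V_2(2) = 300$: one seat is sold at each stage.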
Stidham’s lemma

Definition. A function $g : \mathbb{Z}_+ \to \mathbb{R}$ is concave if it has nonincreasing differences, i.e. the function $g(x+1) - g(x)$ is nonincreasing in $x \ge 0$.

Lemma. Suppose $g : \mathbb{Z}_+ \to \mathbb{R}$ is concave and let $f : \mathbb{Z}_+ \to \mathbb{R}$ be defined as
$$f(x) = \max_{a = 0, 1, \ldots, m(x)} \{ a p + g(x - a) \}$$
for any $p \ge 0$ and $m(x) \le x$. Then $f(x)$ is concave in $x \ge 0$.
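The lemma can be checked numerically on a small example. The sketch below uses an arbitrary concave $g$, $p = 2$, and $m(x) = \min\{3, x\}$; all of these are illustrative choices:

```python
def has_nonincreasing_differences(vals):
    # Discrete concavity: g(x+1) - g(x) is nonincreasing in x.
    diffs = [vals[x + 1] - vals[x] for x in range(len(vals) - 1)]
    return all(b <= a + 1e-9 for a, b in zip(diffs, diffs[1:]))

g = [-(x - 5) ** 2 for x in range(15)]  # a concave function on Z_+
p = 2.0
# f(x) = max over a = 0..m(x) of {a*p + g(x - a)}, with m(x) = min(3, x)
f = [max(a * p + g[x - a] for a in range(min(3, x) + 1)) for x in range(15)]
```

Both `g` and the resulting `f` pass the nonincreasing-differences check, as the lemma asserts.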
Solution of DP. Discrete case.

Suppose that demand and capacity are discrete.
By definition, the marginal value of the value function is
$$\Delta V_j(x) = V_j(x) - V_j(x - 1).$$

Proposition. The marginal value $\Delta V_j(x)$ of the value function $V_j(x)$ satisfies

- $\Delta V_j(x + 1) \le \Delta V_j(x)$;
- $\Delta V_{j+1}(x) \ge \Delta V_j(x)$.
Solution of DP. Discrete case. Proof of proposition.

Proof. Because of the boundary condition $V_0(x) = 0$ for any $x$, $V_0$ is concave.
Assuming inductively that $V_{j-1}$ is concave, for any $d_j$ the function
$$H(d_j, x) = \max_{0 \le u \le \min\{d_j, x\}} \{ f_j u + V_{j-1}(x - u) \}$$
is concave by Stidham's lemma (take $g = V_{j-1}$, $p = f_j$, $m(x) = \min\{d_j, x\}$).
Then $V_j(x) = \mathbb{E}[H(d_j, x)]$ is concave as a weighted sum, with nonnegative weights, of concave functions. This proves the first inequality.

For the second, let $u'$ attain the maximum for $V_j(x)$ and $u''$ attain the maximum for $V_j(x - 1)$; note that $u''$ is also feasible in the problem with capacity $x$. Then
$$\Delta V_j(x) = V_j(x) - V_j(x - 1) = \mathbb{E}\left[ f_j u' + V_{j-1}(x - u') \right] - \mathbb{E}\left[ f_j u'' + V_{j-1}(x - 1 - u'') \right] \ge$$
$$\ge \mathbb{E}\left[ f_j u'' + V_{j-1}(x - u'') \right] - \mathbb{E}\left[ f_j u'' + V_{j-1}(x - 1 - u'') \right] =$$
$$= \mathbb{E}\left[ \Delta V_{j-1}(x - u'') \right] \ge \Delta V_{j-1}(x),$$
where the last inequality holds because $x - u'' \le x$ and $\Delta V_{j-1}$ is nonincreasing. $\square$
Solution of DP. Discrete case. Usage of proposition.

The following holds:
$$V_{j+1}(x) = V_j(x) + \mathbb{E}\left[ \max_{0 \le u \le \min\{d_{j+1}, x\}} \sum_{z=1}^{u} \left( f_{j+1} - \Delta V_j(x + 1 - z) \right) \right].$$

By the proposition, $\Delta V_j(x)$ is nonincreasing in $x$, so the summands $f_{j+1} - \Delta V_j(x + 1 - z)$ are nonincreasing in $z$.
Hence the maximum is achieved at $u = u^*$, the last $u$ for which $f_{j+1} \ge \Delta V_j(x + 1 - u)$, or at $u = \min\{d_{j+1}, x\}$.

The optimal policy is then
$$u_j^*(d_{j+1}, x) = \min\left\{ \min\{d_{j+1}, x\},\ \max\{ u \mid f_{j+1} \ge \Delta V_j(x + 1 - u) \} \right\}.$$

In practice we do not need information on the realization of demand to calculate the control. We just set the control according to the rule
$$u_j^*(x) = \max\{ u \mid f_{j+1} \ge \Delta V_j(x + 1 - u) \},$$
since the amount actually accepted is automatically capped by the realized demand and the remaining capacity.
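Given the marginal values, this rule is a simple scan. A sketch, where the marginal-value array is an illustrative stand-in for $\Delta V_j$ as computed by the DP:

```python
def optimal_control(dV, f_next, x):
    # dV[y] = ΔV_j(y), nonincreasing in y; f_next = f_{j+1}; x = remaining capacity.
    # Accept the (u+1)-th unit while f_{j+1} >= ΔV_j(x + 1 - (u+1)) = dV[x - u].
    u = 0
    while u < x and f_next >= dV[x - u]:
        u += 1
    return u

# Illustrative marginal values ΔV_j(1..4) = 200, 150, 100, 50 (index 0 unused).
dV = [float("inf"), 200.0, 150.0, 100.0, 50.0]
u = optimal_control(dV, f_next=120.0, x=4)
```

Here two units are accepted: displacing the seats worth 50 and 100 is profitable at fare 120, but displacing the seat worth 150 is not.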
EMSR-a heuristic

EMSR stands for Expected Marginal Seat Revenue. It was invented by Peter Belobaba and generalizes Littlewood's rule.

- Compute the values $y_k^{j+1}$ based on the rule
  $$\mathbb{P}(d_k > y_k^{j+1}) = \frac{f_{j+1}}{f_k}, \quad j = 1, \ldots, n-1;\ k = 1, 2, \ldots, j.$$
- Compute the protection levels $y_j$ as
  $$y_j = \sum_{k=1}^{j} y_k^{j+1}, \quad j = 1, \ldots, n-1.$$
- With $y_0 = 0$ and $y_n = K$, the controls are
  $$u_j^* = y_j - y_{j-1}, \quad j = 1, 2, \ldots, n.$$
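The two computation steps above can be sketched for independent normally distributed class demands; the normal assumption and the numbers are illustrative, and `statistics.NormalDist` supplies the inverse CDF:

```python
from statistics import NormalDist

def emsr_a_protection_levels(fares, means, sigmas):
    # fares in decreasing order; demand of class k ~ Normal(means[k], sigmas[k]).
    n = len(fares)
    levels = []
    for j in range(1, n):  # y_j shields classes 1..j from class j+1
        y = 0.0
        for k in range(j):
            ratio = fares[j] / fares[k]  # target: P(d_k > y_k^{j+1}) = f_{j+1} / f_k
            y += NormalDist(means[k], sigmas[k]).inv_cdf(1.0 - ratio)
        levels.append(y)
    return levels
```

With fares $(200, 100)$ and class-1 demand $\mathrm{Normal}(10, 3)$, the single protection level is the median 10, since the fare ratio is $1/2$; this coincides with Littlewood's rule.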
EMSR-b heuristic

Idea: apply Littlewood's rule to a given class and a "weighted average" of the classes above it.

- Compute the aggregated demands
  $$S_j = \sum_{k=1}^{j} d_k, \quad j = 1, 2, \ldots, n-1.$$
- Compute the weighted average fares
  $$\bar{f}_j = \frac{\sum_{k=1}^{j} f_k \, \mathbb{E}[d_k]}{\sum_{k=1}^{j} \mathbb{E}[d_k]}.$$
- Calculate the protection levels $y_j$ from
  $$\mathbb{P}(S_j > y_j) = \frac{f_{j+1}}{\bar{f}_j}, \quad j = 1, \ldots, n-1.$$
- With $y_0 = 0$ and $y_n = K$, the controls are
  $$u_j^* = y_j - y_{j-1}, \quad j = 1, 2, \ldots, n.$$
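Under the same illustrative normal-demand assumption (independent classes, so the aggregate $S_j$ is normal with summed means and variances), EMSR-b can be sketched as:

```python
from math import sqrt
from statistics import NormalDist

def emsr_b_protection_levels(fares, means, sigmas):
    # fares in decreasing order; class demands independent Normal(means[k], sigmas[k]).
    n = len(fares)
    levels = []
    for j in range(1, n):
        mu = sum(means[:j])                           # E[S_j]
        sigma = sqrt(sum(s * s for s in sigmas[:j]))  # std of S_j
        f_bar = sum(f * m for f, m in zip(fares[:j], means[:j])) / sum(means[:j])
        ratio = fares[j] / f_bar                      # target: P(S_j > y_j) = f_{j+1} / f̄_j
        levels.append(NormalDist(mu, sigma).inv_cdf(1.0 - ratio))
    return levels
```

For two classes EMSR-b reduces to Littlewood's rule, since the "weighted average" over a single class is just $f_1$.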