
Appendices
Appendix A
IS A SMARTER ALLOCATION RULE FOR Pr(ACSR(k)) POSSIBLE?
The goal of this appendix is to describe one of our attempts at a smarter allocation rule for Pr(ACSR(k)). As we explain in this note, we believe that the potential efficiency gain does not weigh up against the computational effort of such an improved allocation rule. The described attempt follows the same line of proof as in [53]. For notational ease, we replace in the following $D_{\sigma_{j_k}}(k)$ by $D(k)$ and $\hat z_{n_\theta}(\theta)$ by $\hat z_{n_\theta}$.
Recall that the objective for the budget allocation problem is to maximize Pr(CS(k)) under the budget and non-negativity constraints. Define $\Delta$ as the computational budget; then the following approximated budget allocation problem is considered, where the objective uses the Bonferroni version of the lower bound for Pr(CSR(k)) from Remark 2.1:
$$\begin{aligned}
\underset{n_{\theta_1},\ldots,n_{\theta_N}}{\text{Maximize}}\quad & \sum_{\theta\in D(k)}\Bigg(1-\sum_{\theta'\in H(k)\setminus\{\theta\}}\Pr\Big(\tilde L(\theta) > \tilde L(\theta')\Big)\Bigg)\\
\text{Subject to:}\quad & \sum_{\theta\in H(k)} n_\theta = \Delta, \qquad n_\theta\in\{0,1,2,\ldots\}.
\end{aligned}$$
Using the posterior distribution based on the simulation effort, the second summation in the objective function can be rewritten as

$$\sum_{\theta'\in H(k)\setminus\{\theta\}}\Pr\Big(\tilde L(\theta) > \tilde L(\theta')\Big) = \sum_{\theta'\in H(k)\setminus\{\theta\}}\int_{\frac{\hat z_{n_{\theta'}}-\hat z_{n_\theta}}{\sqrt{\sigma^2_{\theta'}/n_{\theta'}+\sigma^2_{\theta}/n_{\theta}}}}^{\infty}\frac{1}{\sqrt{2\pi}}\,e^{-\frac{t^2}{2}}\,dt = \sum_{\theta'\in H(k)\setminus\{\theta\}}\int_{\frac{\delta_{\theta',\theta}}{\sigma_{\theta',\theta}}}^{\infty}\frac{1}{\sqrt{2\pi}}\,e^{-\frac{t^2}{2}}\,dt,$$

where in the last equality two new variables are introduced for notational ease:

$$\delta_{\theta',\theta} := \hat z_{n_{\theta'}}-\hat z_{n_\theta}, \qquad \sigma_{\theta',\theta} := \sqrt{\frac{\sigma^2_{\theta'}}{n_{\theta'}}+\frac{\sigma^2_{\theta}}{n_{\theta}}}.$$
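These tail probabilities are cheap to evaluate numerically. The following minimal Python sketch (not from the thesis; the containers z_hat, sigma2 and n are illustrative assumptions) computes the inner sum of pairwise probabilities with the standard normal survival function $1-\Phi(\cdot)$:

```python
# Illustrative sketch: sum of Pr(L(theta) > L(theta')) over theta' != theta.
# z_hat, sigma2, n are assumed dicts keyed by design; not thesis notation.
import numpy as np
from scipy.stats import norm

def tail_sum(theta, H, z_hat, sigma2, n):
    """Sum of 1 - Phi(delta/sigma) over the competitors theta' of theta."""
    total = 0.0
    for tp in H:
        if tp == theta:
            continue
        delta = z_hat[tp] - z_hat[theta]          # delta_{theta', theta}
        sigma = np.sqrt(sigma2[tp] / n[tp] + sigma2[theta] / n[theta])
        total += norm.sf(delta / sigma)           # sf(x) = 1 - Phi(x)
    return total
```

The Bonferroni objective above is then the sum of $1-\texttt{tail\_sum}(\theta,\cdot)$ over $\theta\in D(k)$.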
Furthermore, we relax the constraint that the variables $n_\theta$, $\theta\in H(k)$, should be non-negative; also, we allow the variables to be continuous in order to make the analysis
applicable. Because of the relaxation, the following problem is considered:

$$\begin{aligned}
\underset{n_{\theta_1},\ldots,n_{\theta_N}}{\text{Maximize}}\quad & \sum_{\theta\in D(k)}\Bigg(1-\sum_{\theta'\in H(k)\setminus\{\theta\}}\int_{\frac{\delta_{\theta',\theta}}{\sigma_{\theta',\theta}}}^{\infty}\frac{1}{\sqrt{2\pi}}\,e^{-\frac{t^2}{2}}\,dt\Bigg)\\
\text{Subject to:}\quad & \sum_{\theta\in H(k)} n_\theta = \Delta.
\end{aligned} \tag{A.1}$$
The Lagrangian function Λ of problem (A.1) is
$$\Lambda = \sum_{\theta\in D(k)}\Bigg(1-\sum_{\theta'\in H(k)\setminus\{\theta\}}\int_{\frac{\delta_{\theta',\theta}}{\sigma_{\theta',\theta}}}^{\infty}\frac{1}{\sqrt{2\pi}}\,e^{-\frac{t^2}{2}}\,dt\Bigg) - \lambda\Bigg(\sum_{\theta\in H(k)}n_\theta-\Delta\Bigg), \tag{A.2}$$
where $\lambda$ is the Lagrange multiplier. In order to find the critical values of $\Lambda$ we have to find the zeros of the gradient, i.e., $\nabla_{(n_\theta:\theta\in H(k)),\,\lambda}\,\Lambda = 0$. This comes down to the following first-order conditions (where we make a distinction between designs from $H(k)\setminus D(k)$ and $D(k)$, respectively, which is needed for the derivation of the partial derivatives later on):

i) $\dfrac{\partial\Lambda}{\partial n_{\theta_i}} = 0$, $\theta_i\in D(k)$;

ii) $\dfrac{\partial\Lambda}{\partial n_{\theta_j}} = 0$, $\theta_j\in H(k)\setminus D(k)$;

iii) $\dfrac{\partial\Lambda}{\partial\lambda} = 0 \iff \displaystyle\sum_{\theta\in H(k)} n_\theta = \Delta$.
Observe that the last condition is just the budget constraint from Problem (A.1). Let
us derive the partial derivatives of the first and the second condition, respectively. It
holds for $\theta_i\in D(k)$ that

$$\frac{\partial\Lambda}{\partial n_{\theta_i}} = \underbrace{-\frac{\partial}{\partial n_{\theta_i}}\sum_{\theta'\in H(k)\setminus\{\theta_i\}}\int_{\delta_{\theta',\theta_i}/\sigma_{\theta',\theta_i}}^{\infty}\frac{1}{\sqrt{2\pi}}\,e^{-\frac{t^2}{2}}\,dt}_{\text{Part I}}\;\underbrace{-\;\frac{\partial}{\partial n_{\theta_i}}\sum_{\theta'\in D(k)\setminus\{\theta_i\}}\int_{\delta_{\theta_i,\theta'}/\sigma_{\theta_i,\theta'}}^{\infty}\frac{1}{\sqrt{2\pi}}\,e^{-\frac{t^2}{2}}\,dt}_{\text{Part II}}\;-\;\lambda, \tag{A.3}$$

where Part I is obtained by choosing $\theta_i$ in the first sum and Part II follows when choosing $\theta_i$ in the second sum of the Lagrangian function in (A.2), respectively. Part I of (A.3) is equal to
$$\begin{aligned}
\text{Part I of (A.3)} &= -\frac{\partial}{\partial n_{\theta_i}}\Bigg\{\sum_{\theta'\in H(k)\setminus\{\theta_i\}}\Bigg(1-\Phi\bigg(\frac{\delta_{\theta',\theta_i}}{\sigma_{\theta',\theta_i}}\bigg)\Bigg)\Bigg\}\\
&= \sum_{\theta'\in H(k)\setminus\{\theta_i\}}\phi\bigg(\frac{\delta_{\theta',\theta_i}}{\sigma_{\theta',\theta_i}}\bigg)\,\frac{\partial\big(\delta_{\theta',\theta_i}/\sigma_{\theta',\theta_i}\big)}{\partial\sigma_{\theta',\theta_i}}\,\frac{\partial\sigma_{\theta',\theta_i}}{\partial n_{\theta_i}}\\
&= \sum_{\theta'\in H(k)\setminus\{\theta_i\}}\phi\bigg(\frac{\delta_{\theta',\theta_i}}{\sigma_{\theta',\theta_i}}\bigg)\,\frac{\delta_{\theta',\theta_i}\,\sigma^2_{\theta_i}}{2\,n^2_{\theta_i}\,\sigma^3_{\theta',\theta_i}},
\end{aligned}$$
where $\phi(\cdot)$ and $\Phi(\cdot)$ are the standard normal density and distribution function, respectively. Similarly, it holds for Part II in (A.3) that

$$\text{Part II of (A.3)} = \sum_{\theta'\in D(k)\setminus\{\theta_i\}}\phi\bigg(\frac{\delta_{\theta_i,\theta'}}{\sigma_{\theta_i,\theta'}}\bigg)\,\frac{\delta_{\theta_i,\theta'}\,\sigma^2_{\theta_i}}{2\,n^2_{\theta_i}\,\sigma^3_{\theta_i,\theta'}}.$$
For notational simplicity we define $q(\theta_1,\theta_2)$ as follows:

$$q(\theta_1,\theta_2) := \phi\bigg(\frac{\delta_{\theta_1,\theta_2}}{\sigma_{\theta_1,\theta_2}}\bigg)\,\frac{1}{2\sigma^3_{\theta_1,\theta_2}}.$$

Note that it holds that $q(\theta_1,\theta_2) = q(\theta_2,\theta_1)$, since $\phi$ is symmetric, $\delta_{\theta_1,\theta_2} = -\delta_{\theta_2,\theta_1}$, and $\sigma_{\theta_1,\theta_2} = \sigma_{\theta_2,\theta_1}$.
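Since $q(\cdot,\cdot)$ reappears in every condition below, its symmetry can also be confirmed numerically; a tiny sketch with made-up data (names are illustrative, not thesis notation):

```python
# Illustrative check that q(theta1, theta2) = q(theta2, theta1); data made up.
import numpy as np
from scipy.stats import norm

def q(t1, t2, z_hat, sigma2, n):
    delta = z_hat[t1] - z_hat[t2]                 # delta_{t1, t2}
    sigma = np.sqrt(sigma2[t1] / n[t1] + sigma2[t2] / n[t2])
    return norm.pdf(delta / sigma) / (2.0 * sigma**3)

z_hat, sigma2, n = {'a': 1.0, 'b': 1.3}, {'a': 0.5, 'b': 0.8}, {'a': 40, 'b': 25}
assert np.isclose(q('a', 'b', z_hat, sigma2, n), q('b', 'a', z_hat, sigma2, n))
```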
Using the above results and the definition of $q(\cdot,\cdot)$, the partial derivative in (A.3) becomes

$$\frac{\partial\Lambda}{\partial n_{\theta_i}} = \sum_{\theta'\in H(k)\setminus\{\theta_i\}} q(\theta',\theta_i)\,\frac{\delta_{\theta',\theta_i}\,\sigma^2_{\theta_i}}{n^2_{\theta_i}} + \sum_{\theta'\in D(k)\setminus\{\theta_i\}} q(\theta_i,\theta')\,\frac{\delta_{\theta_i,\theta'}\,\sigma^2_{\theta_i}}{n^2_{\theta_i}} - \lambda. \tag{A.4}$$
Splitting the first summation from (A.4) results in

$$\frac{\partial\Lambda}{\partial n_{\theta_i}} = \sum_{\theta'\in H(k)\setminus D(k)} q(\theta',\theta_i)\,\frac{\delta_{\theta',\theta_i}\,\sigma^2_{\theta_i}}{n^2_{\theta_i}} + \sum_{\theta'\in D(k)\setminus\{\theta_i\}} q(\theta',\theta_i)\,\frac{\delta_{\theta',\theta_i}\,\sigma^2_{\theta_i}}{n^2_{\theta_i}} + \sum_{\theta'\in D(k)\setminus\{\theta_i\}} q(\theta_i,\theta')\,\frac{\delta_{\theta_i,\theta'}\,\sigma^2_{\theta_i}}{n^2_{\theta_i}} - \lambda,$$

and since it holds that $q(\theta_1,\theta_2) = q(\theta_2,\theta_1)$ and $\delta_{\theta_1,\theta_2} = -\delta_{\theta_2,\theta_1}$, the last two summations cancel, so this result can be written as, for $\theta_i\in D(k)$,

$$\frac{\partial\Lambda}{\partial n_{\theta_i}} = \sum_{\theta'\in H(k)\setminus D(k)} q(\theta',\theta_i)\,\frac{\delta_{\theta',\theta_i}\,\sigma^2_{\theta_i}}{n^2_{\theta_i}} - \lambda. \tag{A.5}$$
Following the same calculations as above, the partial derivative of the Lagrangian function with respect to $n_{\theta_j}$, $\theta_j\in H(k)\setminus D(k)$, is

$$\frac{\partial\Lambda}{\partial n_{\theta_j}} = \sum_{\theta\in D(k)} q(\theta_j,\theta)\,\frac{\delta_{\theta_j,\theta}\,\sigma^2_{\theta_j}}{n^2_{\theta_j}} - \lambda. \tag{A.6}$$
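Derivatives (A.5) and (A.6) can be verified numerically before we use them: apart from the common $-\lambda$ term, they should match a finite-difference gradient of the Bonferroni objective. A self-contained sketch with made-up data (all names and numbers are illustrative assumptions):

```python
# Numerical check (made-up data): the analytic derivatives in (A.5)/(A.6),
# without the common -lambda term, match a central finite difference of the
# Bonferroni objective F.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
H, D = list(range(6)), [0, 1]                    # D(k) is a subset of H(k)
z = rng.normal(size=6); z[D] -= 2.0              # designs in D(k) look best
s2 = rng.uniform(0.5, 2.0, size=6)               # posterior variances sigma^2
n = rng.uniform(20.0, 60.0, size=6)              # continuous allocations

def sig(a, b, n): return np.sqrt(s2[a]/n[a] + s2[b]/n[b])
def q(a, b, n):   return norm.pdf((z[a]-z[b])/sig(a, b, n)) / (2*sig(a, b, n)**3)

def F(n):                                        # objective of Problem (A.1)
    return sum(1 - sum(norm.sf((z[tp]-z[t])/sig(tp, t, n))
                       for tp in H if tp != t) for t in D)

def dF(i, n):
    if i in D:   # (A.5): sum over theta' in H(k) \ D(k)
        return sum(q(tp, i, n)*(z[tp]-z[i])*s2[i]/n[i]**2
                   for tp in H if tp not in D)
    # (A.6): sum over theta in D(k)
    return sum(q(i, t, n)*(z[i]-z[t])*s2[i]/n[i]**2 for t in D)

for i in H:
    e = np.zeros(6); e[i] = 1e-5
    assert np.isclose(dF(i, n), (F(n + e) - F(n - e)) / 2e-5, rtol=1e-4)
```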
Using (A.6) in the second condition for the critical values of $\Lambda$, we obtain, $\forall\theta_j\in H(k)\setminus D(k)$,

$$\frac{\partial\Lambda}{\partial n_{\theta_j}} = 0 \iff \sum_{\theta\in D(k)} q(\theta_j,\theta)\,\delta_{\theta_j,\theta} = \lambda\,\frac{n^2_{\theta_j}}{\sigma^2_{\theta_j}}, \tag{A.7}$$
which becomes useful later on. The first condition for the critical values of $\Lambda$, when using (A.5), can be rewritten $\forall\theta_i\in D(k)$ as

$$\frac{\partial\Lambda}{\partial n_{\theta_i}} = 0 \iff \sum_{\theta'\in H(k)\setminus D(k)} q(\theta',\theta_i)\,\delta_{\theta',\theta_i} = \lambda\,\frac{n^2_{\theta_i}}{\sigma^2_{\theta_i}};$$

since this last result holds $\forall\theta_i\in D(k)$,

$$\iff \sum_{\theta_i\in D(k)}\;\sum_{\theta'\in H(k)\setminus D(k)} q(\theta',\theta_i)\,\delta_{\theta',\theta_i} = \lambda\sum_{\theta_i\in D(k)}\frac{n^2_{\theta_i}}{\sigma^2_{\theta_i}};$$
interchanging the order of summations and using (A.7),

$$\iff \lambda\sum_{\theta'\in H(k)\setminus D(k)}\frac{n^2_{\theta'}}{\sigma^2_{\theta'}} = \lambda\sum_{\theta_i\in D(k)}\frac{n^2_{\theta_i}}{\sigma^2_{\theta_i}};$$

rewriting gives, $\forall\theta_i\in D(k)$,

$$\iff n_{\theta_i} = \sigma_{\theta_i}\sqrt{\sum_{\theta'\in H(k)\setminus D(k)}\frac{n^2_{\theta'}}{\sigma^2_{\theta'}} - \sum_{\theta\in D(k)\setminus\{\theta_i\}}\frac{n^2_{\theta}}{\sigma^2_{\theta}}}. \tag{A.8}$$
Equation (A.8) shows how the number of simulations for a design in $D(k)$ relates to the number of simulations for the other designs in $H(k)$. Furthermore, equation (A.8) corresponds with the original OCBA rule in [53] in case $|D(k)| = 1$, where $|D(k)|$ denotes the number of elements in $D(k)$.
Let us now concentrate on the relationship between $n_{\theta_i}$ and $n_{\theta_j}$ for $\theta_i,\theta_j\in D(k)$. It follows from (A.5) that, $\forall\theta_i,\theta_j\in D(k)$,

$$\frac{\partial\Lambda}{\partial n_{\theta_i}} = \frac{\partial\Lambda}{\partial n_{\theta_j}} \iff \sum_{\theta'\in H(k)\setminus D(k)} q(\theta',\theta_i)\,\frac{\delta_{\theta',\theta_i}\,\sigma^2_{\theta_i}}{n^2_{\theta_i}} = \sum_{\theta'\in H(k)\setminus D(k)} q(\theta',\theta_j)\,\frac{\delta_{\theta',\theta_j}\,\sigma^2_{\theta_j}}{n^2_{\theta_j}},$$

which can, e.g., be rewritten as

$$\iff \frac{n_{\theta_i}}{n_{\theta_j}} = \frac{\sigma_{\theta_i}}{\sigma_{\theta_j}}\sqrt{\frac{\sum_{\theta'\in H(k)\setminus D(k)} q(\theta',\theta_i)\,\delta_{\theta',\theta_i}}{\sum_{\theta'\in H(k)\setminus D(k)} q(\theta',\theta_j)\,\delta_{\theta',\theta_j}}}. \tag{A.9}$$
Let us now focus on the relationship between $n_{\theta_i}$ and $n_{\theta_j}$ with $\theta_i,\theta_j\in H(k)\setminus D(k)$. From (A.6) it follows, $\forall\theta_i,\theta_j\in H(k)\setminus D(k)$,

$$\frac{\partial\Lambda}{\partial n_{\theta_i}} = \frac{\partial\Lambda}{\partial n_{\theta_j}} \iff \sum_{\theta\in D(k)} q(\theta_i,\theta)\,\frac{\delta_{\theta_i,\theta}\,\sigma^2_{\theta_i}}{n^2_{\theta_i}} = \sum_{\theta\in D(k)} q(\theta_j,\theta)\,\frac{\delta_{\theta_j,\theta}\,\sigma^2_{\theta_j}}{n^2_{\theta_j}}$$
$$\iff \sum_{\theta\in D(k)}\Bigg\{q(\theta_i,\theta)\,\frac{\delta_{\theta_i,\theta}\,\sigma^2_{\theta_i}}{n^2_{\theta_i}} - q(\theta_j,\theta)\,\frac{\delta_{\theta_j,\theta}\,\sigma^2_{\theta_j}}{n^2_{\theta_j}}\Bigg\} = 0. \tag{A.10}$$
Lastly, for the relationship between $n_{\theta_i}$ and $n_{\theta_j}$ with $\theta_i\in D(k)$ and $\theta_j\in H(k)\setminus D(k)$ it holds that

$$\frac{\partial\Lambda}{\partial n_{\theta_i}} = \frac{\partial\Lambda}{\partial n_{\theta_j}} \iff \sum_{\theta'\in H(k)\setminus D(k)} q(\theta',\theta_i)\,\frac{\delta_{\theta',\theta_i}\,\sigma^2_{\theta_i}}{n^2_{\theta_i}} = \sum_{\theta\in D(k)} q(\theta_j,\theta)\,\frac{\delta_{\theta_j,\theta}\,\sigma^2_{\theta_j}}{n^2_{\theta_j}}$$
$$\iff \sum_{\theta'\in H(k)\setminus D(k)} q(\theta',\theta_i)\,\frac{\delta_{\theta',\theta_i}\,\sigma^2_{\theta_i}}{n^2_{\theta_i}} - \sum_{\theta\in D(k)} q(\theta_j,\theta)\,\frac{\delta_{\theta_j,\theta}\,\sigma^2_{\theta_j}}{n^2_{\theta_j}} = 0. \tag{A.11}$$
In conclusion, stationary points of the Lagrangian function $\Lambda$ must satisfy equations (A.8), (A.9), (A.10) and (A.11). In [53] these conditions could be expressed as direct relations between $n_{\theta_i}$ and $n_{\theta_j}$, so that Problem (A.1) could be solved approximately. Many of our attempts to similarly express (A.8), (A.9), (A.10) and (A.11) as direct relations between $n_{\theta_i}$ and $n_{\theta_j}$ failed because the extra summation makes this hard. For example, consider (A.10): in [53] this expression is solved directly by some extra simplifications, which were justified by earlier calculations. The extra summation in our case prevents us from following the same procedure, and assumptions such as
$$\forall\theta\in D(k):\quad q(\theta_i,\theta)\,\frac{\delta_{\theta_i,\theta}\,\sigma^2_{\theta_i}}{n^2_{\theta_i}} = q(\theta_j,\theta)\,\frac{\delta_{\theta_j,\theta}\,\sigma^2_{\theta_j}}{n^2_{\theta_j}}$$
are needed to make the same analysis applicable. Unfortunately, these assumptions are highly unrealistic. As a second example, observe that the relation between $n_{\theta_i}$ and $n_{\theta_j}$ for $\theta_i,\theta_j\in D(k)$ from equation (A.9) is not useful, since the same values of $n_{\theta_i}$ and $n_{\theta_j}$ are trapped in the exponential of $q(\theta',\theta_i)$ and $q(\theta',\theta_j)$. In order to obtain a useful relation between $n_{\theta_i}$ and $n_{\theta_j}$ for $\theta_i,\theta_j\in D(k)$ from equation (A.9), we must have, e.g., an assumption

$$\forall\theta'\in H(k)\setminus D(k):\quad \delta_{\theta',\theta_i}\approx\delta_{\theta',\theta_j},$$

i.e., all designs in $D(k)$ are nearly the same in performance, so that equation (A.9) reduces to $\frac{n_{\theta_i}}{n_{\theta_j}} = \frac{\sigma_{\theta_i}}{\sigma_{\theta_j}}$ by furthermore assuming that the number of simulations spent on a design in $D(k)$ is significantly larger than on a design in $H(k)\setminus D(k)$. Obviously, such an assumption is also very unrealistic and would require, e.g., an extra algorithmic procedure for updating $D(k)$ to ensure that the assumptions are realistic.
In conclusion, many of our attempts failed because of the extra summation in the accelerating stopping rule. Maybe there is some ingenious way of solving Problem (A.1), but we believe that the extra research effort and the possible extra assumptions needed to find such a smarter allocation rule are not worth the small efficiency gain that it would possibly realize. In light of the arguments given in Chapter 2, we believe that the combination of the OCBA and the accelerating stopping rule is already highly efficient.
HIDDEN SUDOKU PAGE
Congratulations, you found the hidden Sudoku page to kill some time! A Sudoku is a
puzzle on a 9 × 9 grid. It is filled with nine numbers running from 1 to 9. Each number
occurs only once in each row, each column and each of the nine 3 × 3 boxes. Initially
some of the numbers are shown. The goal is to fill in all missing digits.
Level 1

[Sudoku grid]

Level PhD

[Sudoku grid]
Appendix B
PROOFS OF JACKSON NETWORK RESULTS
B.1 Proof of Theorem 3.2
(i): Assume first that all nodes are in up status ($I = \emptyset$). We start the proof by invoking the Subnetwork Argument from the proof of Theorem 7 in [172]. The Subnetwork Argument guarantees that the subnetwork $W$ develops as a Jackson network where the source and sink represent $\{0\}\cup V$. The corresponding queueing process $\tilde X := ((\tilde X_i(t) : i\in W) : t\in\mathbb R_+)$ is a Markov process of its own. The traffic equations of the described subnetwork $W$ are given by

$$\tilde\eta_i = \tilde\lambda_i + \sum_{j\in W}\tilde\eta_j\, r(j,i), \quad i\in W, \qquad \text{where } \tilde\lambda_i := \lambda_i + \sum_{j\in V}\mu_j\, r(j,i),$$

so that $\eta_i = \tilde\eta_i$ holds for all $i\in W$.
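The traffic equations form a plain linear system, so $\tilde\eta$ is obtained with one linear solve; a small sketch with made-up routing data (node sets and rates are illustrative assumptions):

```python
# Solving eta = lam_tilde + R_WW^T eta for a toy subnetwork W = {0, 1},
# where V = {2} is an infinite-supply node feeding W; all data made up.
import numpy as np

R_WW = np.array([[0.0, 0.3],       # r(j, i) for j, i in W
                 [0.2, 0.0]])
lam = np.array([1.0, 0.5])         # external arrival rates lambda_i, i in W
mu_V = np.array([2.0])             # service rates mu_j, j in V
R_VW = np.array([[0.4, 0.1]])      # r(j, i) for j in V, i in W

lam_tilde = lam + mu_V @ R_VW                    # lambda_i + sum_j mu_j r(j, i)
eta = np.linalg.solve(np.eye(2) - R_WW.T, lam_tilde)
print(eta)   # the equilibrium below requires eta_i < mu_i for all i in W
```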
According to Jackson's theorem (see [107]), $\tilde X$ has the unique stationary and limiting distribution

$$\lim_{t\to\infty} P\big(X_i(t) = n_i : i\in W\big) = \prod_{i\in W}\Bigg(1-\frac{\eta_i}{\mu_i}\Bigg)\Bigg(\frac{\eta_i}{\mu_i}\Bigg)^{n_i}, \quad \forall (n_i : i\in W)\in\mathbb N_0^{|W|}, \tag{B.1}$$

because $\eta_i < \mu_i$ holds for all $i\in W$. Thus, even if the subnetwork $V$ of nodes with infinite supply is not in equilibrium, the equilibrium on the subnetwork $W$ of nodes without infinite supply is preserved, if the initial distribution has the joint marginal (B.1).
This joint queue length process $\tilde X$ is coupled with an availability process $Y$ which only depends on the interaction of the nodes in $D\subseteq\tilde J$ but not on their load. Whenever a node in $D$ breaks down, stalling occurs, so all nodes go into a warm standby and all arrivals and services are interrupted until all nodes return to the up status. The network process $(Y,\tilde X)$ is a Markov process on the state space $\mathcal P(D)\times\mathbb N_0^{|W|}$. The balance equations for the subnetwork $W$ are, for all $(\emptyset, n_k : k\in W)\in\{\emptyset\}\times\mathbb N_0^{|W|}$, given by

$$\begin{aligned}
&\pi(\emptyset, n_k : k\in W)\Bigg[\sum_{i\in W}\Bigg(\lambda_i+\sum_{j\in V}\mu_j r(j,i)\Bigg) + \sum_{i\in W}\mu_i\big(1-r(i,i)\big)\,\mathbb 1_{\{n_i>0\}} + \sum_{\emptyset\neq I\subseteq D}\alpha(\emptyset,I)\Bigg]\\
&= \sum_{i\in W}\pi(\emptyset, n_k : k\in W\setminus\{i\}, n_i-1)\cdot\Bigg(\lambda_i+\sum_{j\in V}\mu_j r(j,i)\Bigg)\cdot\mathbb 1_{\{n_i>0\}}\\
&\quad+ \sum_{i\in W}\pi(\emptyset, n_k : k\in W\setminus\{i\}, n_i+1)\cdot\mu_i\Bigg(1-\sum_{j\in W} r(i,j)\Bigg)\\
&\quad+ \sum_{i\in W}\sum_{j\in W\setminus\{i\}}\pi(\emptyset, n_k : k\in W\setminus\{i,j\}, n_i+1, n_j-1)\cdot\mu_i r(i,j)\cdot\mathbb 1_{\{n_j>0\}}\\
&\quad+ \sum_{\emptyset\neq I\subseteq D}\pi(I, n_k : k\in W)\cdot\beta(I,\emptyset),
\end{aligned} \tag{B.2}$$
and for all $(I, n_k : k\in W)\in\mathcal P(D)\times\mathbb N_0^{|W|}$ with $I\neq\emptyset$,

$$\pi(I, n_k : k\in W)\Bigg[\sum_{I\subset H\subseteq D}\alpha(I,H) + \sum_{\emptyset\neq K\subset I}\beta(I,K)\Bigg] = \sum_{\emptyset\neq K\subset I}\pi(K, n_k : k\in W)\cdot\alpha(K,I) + \sum_{I\subset H\subseteq D}\pi(H, n_k : k\in W)\cdot\beta(H,I). \tag{B.3}$$
We have to show that (3.13) solves these equations. In the following we denote

$$\hat\pi(I, n_k : k\in W) := \frac{A(I)}{B(I)}\prod_{i\in W}\Bigg(\frac{\eta_i}{\mu_i}\Bigg)^{n_i}$$

for all $(I, n_k : k\in W)\in\mathcal P(D)\times\mathbb N_0^{|W|}$, which is (3.13) before normalization, and plug it into the above balance equations instead of $\pi(I, n_k : k\in W)$.
In the first equation (B.2) the term

$$\hat\pi(\emptyset, n_k : k\in W)\,\alpha(\emptyset,I) = \hat\pi(\emptyset, n_k : k\in W)\,A(I) = \hat\pi(I, n_k : k\in W)\,B(I)$$

on the left-hand side is equal to the term $\hat\pi(I, n_k : k\in W)\,\beta(I,\emptyset) = \hat\pi(I, n_k : k\in W)\,B(I)$ on the right-hand side for each $\emptyset\neq I\subseteq D$. The remainder of (B.2) is the global balance equation of a classical Jackson network, which has the solution (see [107])

$$\hat\pi(\emptyset, n_k : k\in W) := \hat\pi(n_k : k\in W) = \prod_{i\in W}\Bigg(\frac{\eta_i}{\mu_i}\Bigg)^{n_i}.$$
Consider the second equation (B.3) for some fixed $I\neq\emptyset$. For any $K\subset I$, $K\neq\emptyset$, the term

$$\hat\pi(I, n_k : k\in W)\,\beta(I,K) = \hat\pi(I, n_k : k\in W)\,\frac{B(I)}{B(K)} = \hat\pi(\emptyset, n_k : k\in W)\,\frac{A(I)}{B(K)}$$

on the left-hand side is equal to the term on the right-hand side

$$\hat\pi(K, n_k : k\in W)\,\alpha(K,I) = \hat\pi(K, n_k : k\in W)\,\frac{A(I)}{A(K)} = \hat\pi(\emptyset, n_k : k\in W)\,\frac{A(I)}{B(K)}.$$

Moreover, for any $I\subset H\subseteq D$ the term

$$\hat\pi(I, n_k : k\in W)\,\alpha(I,H) = \hat\pi(I, n_k : k\in W)\,\frac{A(H)}{A(I)} = \hat\pi(\emptyset, n_k : k\in W)\,\frac{A(H)}{B(I)}$$

on the left-hand side is equal to the term

$$\hat\pi(H, n_k : k\in W)\,\beta(H,I) = \hat\pi(H, n_k : k\in W)\,\frac{B(H)}{B(I)} = \hat\pi(\emptyset, n_k : k\in W)\,\frac{A(H)}{B(I)}$$

on the right-hand side. The proof of (i) is finished by normalization, which is possible because $\eta_i < \mu_i$ holds for all $i\in W$.
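The $\alpha/\beta$ bookkeeping in this term-matching can be checked mechanically. The sketch below assumes rates of the product form $\alpha(I,H) = A(H)/A(I)$ and $\beta(H,I) = B(H)/B(I)$ with $A(\emptyset) = B(\emptyset) = 1$, which is what the equalities above implicitly use; under that assumption the availability marginal $\pi(I)\propto A(I)/B(I)$ satisfies global balance over $\mathcal P(D)$:

```python
# Mechanical balance check of the availability process on toy data. Assumes
# alpha(I, H) = A(H)/A(I) and beta(H, I) = B(H)/B(I), as the term-matching uses.
from itertools import combinations

D = {1, 2}
subsets = [frozenset(c) for r in range(len(D) + 1) for c in combinations(D, r)]
A = {I: 1.0 + 0.7 * len(I) for I in subsets}     # arbitrary positive weights;
B = {I: 1.0 + 0.4 * len(I) for I in subsets}     # note A(empty) = B(empty) = 1

def alpha(I, H): return A[H] / A[I]              # breakdown I -> H, I subset of H
def beta(H, I):  return B[H] / B[I]              # repair H -> I, I subset of H

pi = {I: A[I] / B[I] for I in subsets}           # proposed unnormalized marginal
for I in subsets:
    out = (sum(alpha(I, H) for H in subsets if I < H)
           + sum(beta(I, K) for K in subsets if K < I))
    into = (sum(pi[K] * alpha(K, I) for K in subsets if K < I)
            + sum(pi[H] * beta(H, I) for H in subsets if I < H))
    assert abs(pi[I] * out - into) < 1e-12       # global balance holds exactly
```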
(ii): It is well known that ergodic Jackson networks have, in equilibrium, Poisson departure streams from node $i$ to the sink with rate $\tilde\eta_i\,\tilde r(i,0)$, see [142, Example 7.1]. From the proof of (i), we know that the subset $W$ behaves like an ergodic Jackson network with unreliable nodes of its own with $\tilde\lambda_i := \lambda_i + \sum_{j\in V}\mu_j r(j,i)$ and

$$\tilde\eta_i\,\tilde r(i,0) = \eta_i\Bigg(1-\sum_{j\in W} r(i,j)\Bigg) = \eta_i\Bigg(r(i,0)+\sum_{j\in V} r(i,j)\Bigg).$$

Hence, if the subnetwork $W$ is in equilibrium, as long as all nodes are in up status, departures to the sink from nodes $i\in W$ are Poisson streams with rate $\eta_i r(i,0)$, and departures from $i\in W$ to any node $j\in V$ are also Poisson streams with rate $\eta_i r(i,j)$, because a portion $r(i,j)/\big(r(i,0)+\sum_{j'\in V} r(i,j')\big)$ of the departure stream from node $i\in W$ is directed to $j\in V$.
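The splitting argument in (ii) can be illustrated with a short simulation (made-up rates; for simplicity $r(i,0)$ and the $r(i,j)$, $j\in V$, sum to one here): thinning a Poisson departure stream yields streams whose empirical rates approach the thinned rates.

```python
# Poisson splitting behind part (ii): empirical rates of the thinned streams
# approach eta * r(i, .); all numbers are made up.
import numpy as np

rng = np.random.default_rng(3)
eta, horizon = 1.5, 2.0e5                        # departure rate, time horizon
p = {'sink': 0.5, 'v1': 0.3, 'v2': 0.2}          # r(i, 0) and r(i, j), j in V
n_dep = rng.poisson(eta * horizon)               # departures in [0, horizon]
dest = rng.choice(list(p), size=n_dep, p=list(p.values()))
for d, prob in p.items():
    print(d, (dest == d).sum() / horizon, "expected", eta * prob)
```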
(iii): Under the condition that all nodes $j\in\tilde J$ are in up status, we start the proof by invoking the M/M/1 Argument from the proof of Theorem 13 in [172]. This argument leads to the conclusion that, if the subnetwork $W$ is in equilibrium and if $r(i,i) = 0$ holds, node $i\in V$ behaves as an M/M/1 system of its own. The corresponding queue length process $\hat X$ is a birth-death process on state space $\mathbb N_0$ with birth rates $\hat\lambda_i = \eta_i$ and death rates $\mu_i$.

This queue length process $\hat X$ is here coupled with an availability process $Y$ on $\mathcal P(D)$, $D\subseteq\tilde J$, where breakdown and repair of nodes only depend on the interaction of the nodes but not on their queue length. Whenever a node in $D$ breaks down, stalling occurs, so all nodes go into a warm standby and all arrivals and services are interrupted until all nodes return to the up status. The network process $(Y,\hat X)$ is a Markov process on the state space $\mathcal P(D)\times\mathbb N_0$. The balance equations are
$$\pi_i(\emptyset, n_i)\Bigg[\hat\lambda_i + \mu_i\,\mathbb 1_{\{n_i>0\}} + \sum_{\emptyset\neq I\subseteq D}\alpha(\emptyset,I)\Bigg] = \pi_i(\emptyset, n_i-1)\cdot\hat\lambda_i\cdot\mathbb 1_{\{n_i>0\}} + \pi_i(\emptyset, n_i+1)\cdot\mu_i + \sum_{\emptyset\neq I\subseteq D}\pi_i(I, n_i)\cdot\beta(I,\emptyset) \tag{B.4}$$
for all $(\emptyset, n_i)\in\{\emptyset\}\times\mathbb N_0$, and

$$\pi_i(I, n_i)\Bigg[\sum_{I\subset H\subseteq D}\alpha(I,H) + \sum_{\emptyset\neq K\subset I}\beta(I,K)\Bigg] = \sum_{\emptyset\neq K\subset I}\pi_i(K, n_i)\cdot\alpha(K,I) + \sum_{I\subset H\subseteq D}\pi_i(H, n_i)\cdot\beta(H,I) \tag{B.5}$$

for all $(I, n_i)\in\mathcal P(D)\times\mathbb N_0$ with $I\neq\emptyset$.

We have to show that (3.14) solves these equations. In the following we set

$$\hat\pi_i(I, n_i) := \frac{A(I)}{B(I)}\Bigg(\frac{\eta_i}{\mu_i}\Bigg)^{n_i}$$

for all $(I, n_i)\in\mathcal P(D)\times\mathbb N_0$ as the non-normalized proposed solution density.
In the first equation (B.4) the term

$$\hat\pi_i(\emptyset, n_i)\,\alpha(\emptyset,I) = \hat\pi_i(\emptyset, n_i)\,A(I) = \hat\pi_i(I, n_i)\,B(I)$$

on the left-hand side is equal to the term $\hat\pi_i(I, n_i)\,\beta(I,\emptyset) = \hat\pi_i(I, n_i)\,B(I)$ on the right-hand side for each $\emptyset\neq I\subseteq D$. The remainder of (B.4) is the global balance equation of an M/M/1 system, which has the solution

$$\hat\pi_i(\emptyset, n_i) := \hat\pi_i(n_i) = \Bigg(\frac{\eta_i}{\mu_i}\Bigg)^{n_i},$$

since $\hat\lambda_i = \eta_i$ holds.
Consider the second equation (B.5) for some fixed $I\neq\emptyset$. For any $K\subset I$, $K\neq\emptyset$, the term

$$\hat\pi_i(I, n_i)\,\beta(I,K) = \hat\pi_i(I, n_i)\,\frac{B(I)}{B(K)} = \hat\pi_i(\emptyset, n_i)\,\frac{A(I)}{B(K)}$$

on the left-hand side is equal to the term on the right-hand side

$$\hat\pi_i(K, n_i)\,\alpha(K,I) = \hat\pi_i(K, n_i)\,\frac{A(I)}{A(K)} = \hat\pi_i(\emptyset, n_i)\,\frac{A(I)}{B(K)}.$$

Moreover, for any $I\subset H\subseteq D$ the term

$$\hat\pi_i(I, n_i)\,\alpha(I,H) = \hat\pi_i(I, n_i)\,\frac{A(H)}{A(I)} = \hat\pi_i(\emptyset, n_i)\,\frac{A(H)}{B(I)}$$

on the left-hand side is equal to the term

$$\hat\pi_i(H, n_i)\,\beta(H,I) = \hat\pi_i(H, n_i)\,\frac{B(H)}{B(I)} = \hat\pi_i(\emptyset, n_i)\,\frac{A(H)}{B(I)}$$

on the right-hand side. The proof of (iii) is finished by normalization, which is possible since $\eta_i < \mu_i$ holds.
Lastly, the limiting probability (3.15) for unstable nodes with infinite supply follows from the same arguments as in the proof of Theorem 15 in [155].
B.2 Proof of Theorem 3.3
Consider the subset $W$ of nodes without infinite supply. For any subset $I\subseteq D$ of broken-down nodes, we have the following facts for the subset $W\setminus I$, which remain in force as long as $I$ is unchanged:

• All service times at all up-nodes are exponentially distributed and the service discipline at all nodes is FCFS.

• Routing of customers is Markovian: a customer completing service at node $i\in W\setminus I$ will either move to some node $j\in W\setminus I$ with probability $r^I(i,j)$ or leave the subnetwork with probability $1-\sum_{j\in W\setminus I} r^I(i,j)$.

• At each node $i\in W\setminus I$, we have external arrivals from the source, which are independent Poisson streams with rate $\lambda_i^I\geq 0$. Furthermore, all arrivals from nodes $j\in V\setminus I$ with infinite supply into nodes $i\in W\setminus I$ are independent Poisson streams at rate $\mu_j r^I(j,i)$, see Theorem 3.1. The sum of independent Poisson streams is a Poisson stream; hence the arrival stream from outside of the subset $W\setminus I$ into each node $i\in W\setminus I$ is a Poisson process with rate $\lambda_i^I+\sum_{j\in V\setminus I}\mu_j r^I(j,i)$.

• All service times and all interarrival times are independent of each other.
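To make the rerouted matrices $r^I$ concrete, the sketch below (made-up data) uses one common construction for a skipping-type regime, in which routing through the down set $I$ is resolved by absorbing $I$; this is an illustration under that assumption, not necessarily the exact definition of $r^I$ used in Chapter 3.

```python
# One possible construction of r^I (skipping-type): paths through down nodes
# are collapsed via (I - R_dd)^{-1}; toy data, illustrative only.
import numpy as np

R = np.array([[0.0, 0.4, 0.3],     # r(i, j) on nodes {0, 1, 2}; the missing
              [0.2, 0.0, 0.5],     # row mass is routing to the sink
              [0.1, 0.3, 0.0]])
up, down = [0, 1], [2]             # broken-down set I = {2}

R_uu = R[np.ix_(up, up)]
R_ud = R[np.ix_(up, down)]
R_du = R[np.ix_(down, up)]
R_dd = R[np.ix_(down, down)]
R_I = R_uu + R_ud @ np.linalg.inv(np.eye(len(down)) - R_dd) @ R_du
print(R_I)                         # candidate r^I(i, j) for up nodes i, j
```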
Let $\tilde X := ((\tilde X_i(t) : i\in W\setminus I) : t\in\mathbb R_+)$ be the queueing process of this subnetwork. The process is supplemented with a Markov process $Y = (Y(t) : t\in\mathbb R_+)$ which describes the availability status of the nodes and therefore gives information on how long the network process on the subnet $W\setminus I$ lives until it jumps to the next Markov process on some randomly chosen subnet $W\setminus K$, $K\subseteq D$. Rerouting is according to the blocking rs-rd regime (skipping, resp.). The balance equations of the joint availability-queue length process $(Y, \tilde X_i : i\in W)$ are, $\forall (I, n_i : i\in W)\in\mathcal P(D)\times\mathbb N_0^{|W|}$,

$$\begin{aligned}
&\pi(I, n_k : k\in W)\cdot\Bigg[\sum_{i\in W\setminus I}\Bigg(\lambda_i^I+\sum_{j\in V\setminus I}\mu_j r^I(j,i)\Bigg) + \sum_{i\in W\setminus I}\mu_i\big(1-r^I(i,i)\big)\cdot\mathbb 1_{\{n_i>0\}}\\
&\hspace{8em} + \sum_{I\subset H\subseteq D}\alpha(I,H) + \sum_{K\subset I\subseteq D}\beta(I,K)\Bigg]\\
&= \sum_{i\in W\setminus I}\pi(I, n_k : k\in W\setminus\{i\}, n_i-1)\cdot\Bigg(\lambda_i^I+\sum_{j\in V\setminus I}\mu_j r^I(j,i)\Bigg)\cdot\mathbb 1_{\{n_i>0\}}\\
&\quad+ \sum_{i\in W\setminus I}\pi(I, n_k : k\in W\setminus\{i\}, n_i+1)\cdot\mu_i\Bigg(1-\sum_{j\in W\setminus I} r^I(i,j)\Bigg)\\
&\quad+ \sum_{i\in W\setminus I}\sum_{j\in W\setminus I,\,j\neq i}\pi(I, n_k : k\in W\setminus\{i,j\}, n_i+1, n_j-1)\cdot\mu_i r^I(i,j)\cdot\mathbb 1_{\{n_j>0\}}\\
&\quad+ \sum_{K\subset I\subseteq D}\pi(K, n_k : k\in W)\cdot\alpha(K,I) + \sum_{I\subset H\subseteq D}\pi(H, n_k : k\in W)\cdot\beta(H,I).
\end{aligned} \tag{B.6}$$
We have to show that the distribution given by (3.16) solves equation (B.6) for all $(n_i : i\in W)\in\mathbb N_0^{|W|}$ and all $I\subseteq D$. In the following we set

$$\hat\pi(I, n_k : k\in W) := \frac{A(I)}{B(I)}\prod_{i\in W}\Bigg(\frac{\eta_i}{\mu_i}\Bigg)^{n_i}$$

for all $(n_i : i\in W)\in\mathbb N_0^{|W|}$ and all $I\subseteq D$, and consider equation (B.6) for some fixed $I\subseteq D$.
For any $K\subset I$, $K\neq\emptyset$, the term

$$\hat\pi(I, n_k : k\in W)\,\beta(I,K) = \hat\pi(I, n_k : k\in W)\,\frac{B(I)}{B(K)} = \hat\pi(\emptyset, n_k : k\in W)\,\frac{A(I)}{B(K)}$$

on the left-hand side is equal to the term on the right-hand side

$$\hat\pi(K, n_k : k\in W)\,\alpha(K,I) = \hat\pi(K, n_k : k\in W)\,\frac{A(I)}{A(K)} = \hat\pi(\emptyset, n_k : k\in W)\,\frac{A(I)}{B(K)}.$$

Moreover, for any $I\subset H\subseteq D$ the term

$$\hat\pi(I, n_k : k\in W)\,\alpha(I,H) = \hat\pi(I, n_k : k\in W)\,\frac{A(H)}{A(I)} = \hat\pi(\emptyset, n_k : k\in W)\,\frac{A(H)}{B(I)}$$

on the left-hand side is equal to the term

$$\hat\pi(H, n_k : k\in W)\,\beta(H,I) = \hat\pi(H, n_k : k\in W)\,\frac{B(H)}{B(I)} = \hat\pi(\emptyset, n_k : k\in W)\,\frac{A(H)}{B(I)}$$
on the right-hand side. The remainder of (B.6) is

$$\begin{aligned}
&\hat\pi(I, n_k : k\in W)\cdot\Bigg[\sum_{i\in W\setminus I}\Bigg(\lambda_i^I+\sum_{j\in V\setminus I}\mu_j r^I(j,i)\Bigg) + \sum_{i\in W\setminus I}\mu_i\big(1-r^I(i,i)\big)\cdot\mathbb 1_{\{n_i>0\}}\Bigg]\\
&= \sum_{i\in W\setminus I}\hat\pi(I, n_k : k\in W\setminus\{i\}, n_i-1)\cdot\Bigg(\lambda_i^I+\sum_{j\in V\setminus I}\mu_j r^I(j,i)\Bigg)\cdot\mathbb 1_{\{n_i>0\}}\\
&\quad+ \sum_{i\in W\setminus I}\hat\pi(I, n_k : k\in W\setminus\{i\}, n_i+1)\cdot\mu_i\Bigg(1-\sum_{j\in W\setminus I} r^I(i,j)\Bigg)\\
&\quad+ \sum_{i\in W\setminus I}\sum_{j\in W\setminus I,\,j\neq i}\hat\pi(I, n_k : k\in W\setminus\{i,j\}, n_i+1, n_j-1)\cdot\mu_i r^I(i,j)\cdot\mathbb 1_{\{n_j>0\}}.
\end{aligned}$$
With $\eta_i^I = \lambda_i^I + \sum_{j\in W\setminus I}\eta_j^I r^I(j,i) + \sum_{j\in V\setminus I}\mu_j r^I(j,i)$ (see (3.6)) this is equivalent to

$$\begin{aligned}
&\hat\pi(I, n_k : k\in W)\cdot\Bigg[\sum_{i\in W\setminus I}\Bigg(\eta_i^I-\sum_{j\in W\setminus I}\eta_j^I r^I(j,i)\Bigg) + \sum_{i\in W\setminus I}\mu_i\big(1-r^I(i,i)\big)\cdot\mathbb 1_{\{n_i>0\}}\Bigg]\\
&= \sum_{i\in W\setminus I}\hat\pi(I, n_k : k\in W\setminus\{i\}, n_i-1)\cdot\Bigg(\eta_i^I-\sum_{j\in W\setminus I}\eta_j^I r^I(j,i)\Bigg)\cdot\mathbb 1_{\{n_i>0\}}\\
&\quad+ \sum_{i\in W\setminus I}\hat\pi(I, n_k : k\in W\setminus\{i\}, n_i+1)\cdot\mu_i\Bigg(1-\sum_{j\in W\setminus I} r^I(i,j)\Bigg)\\
&\quad+ \sum_{i\in W\setminus I}\sum_{j\in W\setminus I,\,j\neq i}\hat\pi(I, n_k : k\in W\setminus\{i,j\}, n_i+1, n_j-1)\cdot\mu_i r^I(i,j)\cdot\mathbb 1_{\{n_j>0\}}.
\end{aligned}$$

Under the required condition of either (3.7) and (3.8) in case of blocking rs-rd, or (3.10) in case of skipping, it holds that $\eta_i = \eta_i^I$ for all $i\in W\setminus I$ and all $I\subseteq D$ for the respective reduced traffic equations. Therefore, from Lemma 3.1 or Lemma 3.2, respectively, this is equivalent to
$$\begin{aligned}
&\hat\pi(I, n_k : k\in W)\cdot\Bigg[\sum_{i\in W\setminus I}\Bigg(\eta_i-\sum_{j\in W\setminus I}\eta_j r^I(j,i)\Bigg) + \sum_{i\in W\setminus I}\mu_i\big(1-r^I(i,i)\big)\cdot\mathbb 1_{\{n_i>0\}}\Bigg]\\
&= \sum_{i\in W\setminus I}\hat\pi(I, n_k : k\in W\setminus\{i\}, n_i-1)\cdot\Bigg(\eta_i-\sum_{j\in W\setminus I}\eta_j r^I(j,i)\Bigg)\cdot\mathbb 1_{\{n_i>0\}}\\
&\quad+ \sum_{i\in W\setminus I}\hat\pi(I, n_k : k\in W\setminus\{i\}, n_i+1)\cdot\mu_i\Bigg(1-\sum_{j\in W\setminus I} r^I(i,j)\Bigg)\\
&\quad+ \sum_{i\in W\setminus I}\sum_{j\in W\setminus I,\,j\neq i}\hat\pi(I, n_k : k\in W\setminus\{i,j\}, n_i+1, n_j-1)\cdot\mu_i r^I(i,j)\cdot\mathbb 1_{\{n_j>0\}}.
\end{aligned}$$
Plugging in $\hat\pi(I, n_k : k\in W) = \frac{A(I)}{B(I)}\prod_{i\in W}\big(\frac{\eta_i}{\mu_i}\big)^{n_i}$ yields

$$\begin{aligned}
&\sum_{i\in W\setminus I}\Bigg(\eta_i-\sum_{j\in W\setminus I}\eta_j r^I(j,i)\Bigg) + \sum_{i\in W\setminus I}\mu_i\big(1-r^I(i,i)\big)\cdot\mathbb 1_{\{n_i>0\}}\\
&= \sum_{i\in W\setminus I}\frac{\mu_i}{\eta_i}\cdot\Bigg(\eta_i-\sum_{j\in W\setminus I}\eta_j r^I(j,i)\Bigg)\cdot\mathbb 1_{\{n_i>0\}} + \sum_{i\in W\setminus I}\frac{\eta_i}{\mu_i}\cdot\mu_i\Bigg(1-\sum_{j\in W\setminus I} r^I(i,j)\Bigg)\\
&\quad+ \sum_{i\in W\setminus I}\sum_{j\in W\setminus I,\,j\neq i}\frac{\eta_i\,\mu_j}{\mu_i\,\eta_j}\cdot\mu_i r^I(i,j)\cdot\mathbb 1_{\{n_j>0\}}\\
\iff 0 &= -\sum_{i\in W\setminus I}\frac{\mu_i}{\eta_i}\cdot\sum_{j\in W\setminus I,\,j\neq i}\eta_j r^I(j,i)\cdot\mathbb 1_{\{n_i>0\}} + \sum_{i\in W\setminus I}\sum_{j\in W\setminus I,\,j\neq i}\frac{\mu_j}{\eta_j}\,\eta_i r^I(i,j)\cdot\mathbb 1_{\{n_j>0\}},
\end{aligned}$$

and the two double sums cancel after interchanging the roles of $i$ and $j$. Thus $\hat\pi(I, n_k : k\in W) = \frac{A(I)}{B(I)}\prod_{i\in W}\big(\frac{\eta_i}{\mu_i}\big)^{n_i}$ solves the balance equations (B.6). The last step of proving (3.16) is by normalizing $\hat\pi$, which is possible because $\eta_i < \mu_i$ holds for all $i\in W$.
Appendix C
PROOF OF THE TABOO MATRIX REPRESENTATION OF THE DEVIATION MATRIX
The goal of this appendix is to prove, under slightly more general conditions, that the alternative deviation matrix expression (4.9) from Chapter 4 holds true. To this end we need Lemma C.1, which will be used in the proof of the alternative expression given in Theorem C.1. The proofs are based on Lemma 3.2 (p. 39) of [111].
Lemma C.1 (Based on [111]). Consider a Markov uni-chain with transition matrix $P$ and stationary distribution $\pi_P^\top$. Let $T = P - h\sigma^\top$ be a taboo matrix, where $h$ and $\sigma^\top$ are appropriately sized vectors. When there exists a matrix norm $\|\cdot\|$ such that $\|T\| < 1$ and $\pi_P^\top h \neq 0$, then it holds that

$$\sigma^\top(I-T)^{-1} = \pi_P^\top/(\pi_P^\top h). \tag{C.1}$$
Proof. We may rewrite (C.1) as

$$\text{(C.1)} \iff \sigma^\top = \pi_P^\top(I-T)/(\pi_P^\top h);$$

inserting the expression $T = P - h\sigma^\top$ and using that $\pi_P^\top P = \pi_P^\top$, resp.,

$$\iff \sigma^\top = \pi_P^\top(I-P+h\sigma^\top)/(\pi_P^\top h) \iff \sigma^\top = \pi_P^\top h\,\sigma^\top/(\pi_P^\top h) \iff \sigma^\top = \sigma^\top.$$
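Lemma C.1 is easy to check numerically. In the sketch below (an arbitrary made-up chain), $h$ is the indicator vector of state 0 and $\sigma^\top$ the corresponding row of $P$, one convenient taboo construction:

```python
# Numerical check of (C.1) for a random ergodic chain; h = e_0, sigma^T = P[0]
# is just one convenient choice of taboo construction.
import numpy as np

rng = np.random.default_rng(4)
P = rng.random((4, 4)); P /= P.sum(axis=1, keepdims=True)   # transition matrix

w, vl = np.linalg.eig(P.T)                       # stationary pi^T P = pi^T
pi = np.real(vl[:, np.argmin(np.abs(w - 1.0))]); pi /= pi.sum()

h = np.eye(4)[:, 0]                              # so pi^T h = pi_0 > 0
sigma = P[0]
T = P - np.outer(h, sigma)                       # T = P - h sigma^T
assert max(abs(np.linalg.eigvals(T))) < 1        # some induced norm of T is < 1
lhs = sigma @ np.linalg.inv(np.eye(4) - T)
assert np.allclose(lhs, pi / (pi @ h))           # (C.1)
```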
Theorem C.1 (Based on [111]). Consider a Markov uni-chain with transition matrix $P$, stationary distribution $\pi_P^\top$, and ergodic projector $\Pi_P$, for which the deviation matrix $D_P = \sum_{n=0}^\infty (P^n - \Pi_P)$ exists. Let $T = P - h\sigma^\top$ be a taboo matrix, where $h$ and $\sigma^\top$ are appropriately sized vectors, and let $\bar 1$ denote an appropriately sized vector of ones. When there exists a matrix norm $\|\cdot\|$ such that

(i) $\|T\| < 1$,

(ii) $\sigma^\top(I-T)^{-1}\bar 1 \neq 0$ (e.g., this holds true when $T$ is non-negative and $\sigma^\top$ is a stochastic vector), and

(iii) $\pi_P^\top h \neq 0$,

then it holds that

$$D_P = (I-\Pi_P)\sum_{n=0}^\infty T^n\,(I-\Pi_P);$$

note that when $\|T\| < 1$ the infinite sum is finite.
Proof. From the definition of $D_P$ it follows after some calculations that

$$D_P(I-P) = I - \Pi_P;$$

since $P = T + h\sigma^\top$,

$$\iff D_P(I-T) = I - \Pi_P + D_P h\sigma^\top;$$

using that $\|T\| < 1$ gives

$$\iff D_P = (I-\Pi_P)(I-T)^{-1} + D_P h\,\sigma^\top(I-T)^{-1}. \tag{C.2}$$

If we multiply (C.2) from the right with $\bar 1$ we get

$$D_P\bar 1 = (I-\Pi_P)(I-T)^{-1}\bar 1 + D_P h\,\sigma^\top(I-T)^{-1}\bar 1;$$

the definition of $D_P$ shows that $D_P\bar 1 = 0$, so that rewriting gives

$$\iff D_P h\,\sigma^\top(I-T)^{-1}\bar 1 = -(I-\Pi_P)(I-T)^{-1}\bar 1;$$

so, when $\sigma^\top(I-T)^{-1}\bar 1 \neq 0$ (note that this is a scalar),

$$\iff D_P h = -(I-\Pi_P)(I-T)^{-1}\bar 1\big/\big(\sigma^\top(I-T)^{-1}\bar 1\big). \tag{C.3}$$

Inserting the result from (C.3) into (C.2) gives

$$\text{(C.2)} \iff D_P = (I-\Pi_P)(I-T)^{-1} - \frac{(I-\Pi_P)(I-T)^{-1}\bar 1\,\sigma^\top(I-T)^{-1}}{\sigma^\top(I-T)^{-1}\bar 1}$$
$$\iff D_P = (I-\Pi_P)(I-T)^{-1}\Big(I - \bar 1\,\sigma^\top(I-T)^{-1}\big/\big(\sigma^\top(I-T)^{-1}\bar 1\big)\Big);$$

making use of Lemma C.1, which requires that $\pi_P^\top h \neq 0$,

$$\iff D_P = (I-\Pi_P)(I-T)^{-1}\Big[I - \bar 1\,\big(\pi_P^\top/(\pi_P^\top h)\big)\big/\big(\pi_P^\top\bar 1/(\pi_P^\top h)\big)\Big];$$

using that $\pi_P^\top\bar 1 = 1$,

$$\iff D_P = (I-\Pi_P)(I-T)^{-1}(I - \bar 1\pi_P^\top),$$

which ends the proof by noting that $\bar 1\pi_P^\top = \Pi_P$ for Markov uni-chains and that $(I-T)^{-1} = \sum_{n=0}^\infty T^n$.
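A matching numerical check of Theorem C.1 (same kind of made-up chain; the direct deviation matrix is computed via the standard fundamental-matrix identity $D_P = (I-P+\Pi_P)^{-1}-\Pi_P$):

```python
# Check D_P = (I - Pi_P)(I - T)^{-1}(I - Pi_P) against the deviation matrix
# computed directly; chain and taboo construction are made up.
import numpy as np

rng = np.random.default_rng(5)
P = rng.random((4, 4)); P /= P.sum(axis=1, keepdims=True)
w, vl = np.linalg.eig(P.T)
pi = np.real(vl[:, np.argmin(np.abs(w - 1.0))]); pi /= pi.sum()
Pi = np.outer(np.ones(4), pi)                    # ergodic projector 1 pi^T

D = np.linalg.inv(np.eye(4) - P + Pi) - Pi       # deviation matrix, directly

h, sigma = np.eye(4)[:, 0], P[0]
T = P - np.outer(h, sigma)                       # taboo matrix T = P - h sigma^T
I4 = np.eye(4)
assert np.allclose(D, (I4 - Pi) @ np.linalg.inv(I4 - T) @ (I4 - Pi))
```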
Remark C.1. It can be shown that if $P$ is a transition matrix of a Markov uni-chain which is geometrically ergodic (for a definition see, e.g., Section 5.1), the deviation matrix expression from Theorem C.1 with taboo matrix $T = P - h\sigma^\top$ holds true when $\sigma^\top\bar 1 = 1$ and when there exists a matrix norm $\|\cdot\|$ such that $\|T\| < 1$.