
Sojourn Times in Finite Markov Processes
Gerardo Rubino and Bruno Sericola
IRISA
Campus de Beaulieu, 35042 Rennes Cedex, FRANCE
Abstract
Sojourn times of Markov processes in subsets of the finite state space are considered. We give a closed form of the distribution of the nth sojourn time in a given subset of states. The asymptotic behaviour of this distribution as time goes to infinity is analyzed, both in the discrete- and the continuous-time cases. We consider the usual pseudo-aggregated Markov process canonically constructed from the previous one by collapsing the states of each subset of the partition. The relation between the limits of the moments of the sojourn time distributions in the original Markov process and the moments of the corresponding holding times of the pseudo-aggregated one is also studied.
Index Terms: Markov processes, aggregation, sojourn times, transient analysis, performance evaluation, reliability modeling
1 Introduction
Consider a homogeneous irreducible Markov process X evolving in discrete or continuous time; in the first case we will speak of a chain. We can distinguish a subset of states and then consider the random variable "time spent by process X in the given subset of the state space during its nth visit to the subset". This kind of subject is of interest in areas such as reliability or performability ([2]), when a subset of the state space corresponds, for instance, to a fixed level of performance of the system [6], [1]. Since the processes considered here are irreducible, in such a context this means that we are concerned with systems that always have recovery procedures, that is, models without absorbing classes. The analysis of these random variables is the main topic of this paper.
The sojourn time of a process X in a subset of states will be an integer-valued random variable if X is a chain, or a real-valued one in the case of a continuous-time process. The distributions of these random variables are given in Sections 2 and 3 for the discrete-time and continuous-time cases respectively. We also study the behaviour of these distributions when time goes to infinity, that is, for n → +∞ when considering the nth visit to the given subset.
Let E = {1, ..., N} be the finite state space of the given process and let B = {B(1), ..., B(M)} be a partition of E. Let F be the set of integers {1, 2, ..., M}. To the given process X we associate the aggregated stochastic process Y with values in F, defined by: Y_t = m ⟺ X_t ∈ B(m), for all values of t (t ∈ ℕ or t ∈ ℝ). We easily deduce from this definition and the irreducibility of X that the process Y obtained is also irreducible, but it need not be Markov, nor even homogeneous. In [3] and in [4], one can find conditions under which the aggregated chain Y (i.e. in the discrete-time case) is also homogeneous Markov. In any case, we can construct a homogeneous Markov process Z on the state space F canonically associated to the given process X, which will be called the pseudo-aggregated process of X with respect to the partition B. This pseudo-aggregation is briefly presented in Section 4 and the relation between the holding times of Z and the corresponding sojourn times in X is analyzed. Section 5 presents an application to a fault-tolerant multiprocessor system ([6]) and the last section contains some conclusions.
2 Sojourn times in the discrete time case
Let X = (X_n)_{n≥0} be a homogeneous irreducible Markov chain with transition probability matrix P. Denote by π its initial probability distribution and by x its equilibrium probability distribution, that is: x = xP, x > 0 and x 1^T = 1, where 1 denotes a row vector with all entries equal to the scalar 1, the dimension being defined by the context. Let B be a proper subset of the state space E and denote by B^c the complementary subset E − B. We assume for simplicity that B = {1, 2, ..., L}, 1 ≤ L < N. The partition {B, B^c} of E induces a decomposition of P into four submatrices and a decomposition of π and x into two subvectors:
P = ( P_B      P_{BB^c} )
    ( P_{B^cB} P_{B^c}  )

π = (π_B, π_{B^c}),    x = (x_B, x_{B^c}).

We will need the following two elementary lemmas.
Lemma 2.1 The matrix I − P_B is invertible.
Proof. Consider a Markov chain on the state space B ∪ {a} with transition probability matrix

P' = ( P_B  (I − P_B)1^T )
     ( 0    1            )

P being irreducible, the states of B are transient and a is absorbing. Therefore

lim_{k→+∞} P_B^k(i, j) = 0   for all i, j ∈ B,

which implies that I − P_B is invertible. □

In the same way, the matrix I − P_{B^c} is invertible.
Lemma 2.2 The vector x_B satisfies x_B = x_B U_B, where U_B = P_{BB^c}(I − P_{B^c})^{-1} P_{B^cB}(I − P_B)^{-1}.
Proof. The result follows directly from the following decomposition of the system x = xP:

x_B = x_B P_B + x_{B^c} P_{B^cB}
x_{B^c} = x_B P_{BB^c} + x_{B^c} P_{B^c}

by replacing, in the first equation, the value of the vector x_{B^c} obtained from the second one. □

Remark that, in the same way, we have x_{B^c} = x_{B^c} U_{B^c}, where U_{B^c} = P_{B^cB}(I − P_B)^{-1} P_{BB^c}(I − P_{B^c})^{-1}.
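As a quick numerical illustration (ours, not from the paper), the two lemmas can be checked on a small chain; the 4-state matrix P below and the choice B = {0, 1} (0-based indexing) are arbitrary assumptions:

```python
import numpy as np

# Arbitrary irreducible 4-state chain (0-indexed); B = {0, 1}, Bc = {2, 3}.
P = np.array([[0.2, 0.3, 0.4, 0.1],
              [0.1, 0.4, 0.2, 0.3],
              [0.3, 0.2, 0.3, 0.2],
              [0.2, 0.2, 0.3, 0.3]])
B, Bc = [0, 1], [2, 3]
PB,   PBBc = P[np.ix_(B, B)],  P[np.ix_(B, Bc)]
PBcB, PBc  = P[np.ix_(Bc, B)], P[np.ix_(Bc, Bc)]

# Stationary distribution: solve x(I - P) = 0 together with x 1^T = 1.
M = np.vstack([(np.eye(4) - P).T, np.ones(4)])
x, *_ = np.linalg.lstsq(M, np.array([0.0, 0, 0, 0, 1]), rcond=None)

I = np.eye(2)
# Lemma 2.1: I - PB and I - PBc are invertible (the blocks are substochastic).
UB = PBBc @ np.linalg.inv(I - PBc) @ PBcB @ np.linalg.inv(I - PB)
# Lemma 2.2: xB is a left fixed vector of UB.
assert np.allclose(x[B] @ UB, x[B])
```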
Definition 2.3 We call "sojourn of X in B" any sequence X_m, X_{m+1}, ..., X_{m+k}, where k ≥ 1, X_m, X_{m+1}, ..., X_{m+k−1} ∈ B, X_{m+k} ∉ B and, if m > 0, X_{m−1} ∉ B. This sojourn begins at time m and finishes at time m + k. It lasts k.
Let V_n, n ≥ 1, be the random variable "state of B in which the nth sojourn of X begins". The hypothesis of irreducibility of the Markov chain X assures the existence of an infinity of sojourns of X in B with probability 1. It is immediate to verify that (V_n)_{n≥1} is a homogeneous Markov chain on the state space B. Let G be the L × L transition probability matrix of this chain and v_n its probability distribution vector after the nth transition: v_n = (ℙ(V_n = 1), ..., ℙ(V_n = L)) (we denote by ℙ the probability measure). We have, obviously, v_n = v_1 G^{n−1}. The chain (V_n)_{n≥1} is characterized by v_1 and G, which are given in the following theorem.
Theorem 2.4 The matrix G and the vector v_1 are given by the following expressions:

(i) v_1 = π_B + π_{B^c}(I − P_{B^c})^{-1} P_{B^cB}

(ii) G = (I − P_B)^{-1} P_{BB^c}(I − P_{B^c})^{-1} P_{B^cB} = (I − P_B)^{-1} U_B (I − P_B)
Proof. (i) Let i ∈ B^c and j ∈ B. We define H(i, j) ≝ ℙ(V_1 = j | X_0 = i) and let H be the (N − L) × L matrix with entries H(i, j). The matrix H is the matrix of absorption probabilities from B^c to B, that is (see [3], page 53):

H = (I − P_{B^c})^{-1} P_{B^cB}

Conditioning on the initial state, we then obtain, for every j ∈ B:

ℙ(V_1 = j) = π(j) + Σ_{i∈B^c} ℙ(X_0 = i) H(i, j)

This gives, in matrix notation:

v_1 = π_B + π_{B^c}(I − P_{B^c})^{-1} P_{B^cB}
(ii) Let V_1^c be the random variable "state of B^c in which the first sojourn of X in B^c begins". In a similar way as previously, for every i ∈ B and j ∈ B^c, we define R(i, j) ≝ ℙ(V_1^c = j | X_0 = i). The L × (N − L) matrix R is the matrix of absorption probabilities from B to B^c, that is:

R = (I − P_B)^{-1} P_{BB^c}

So, for every i, j ∈ B, and using the Markov property of X, we have:

G(i, j) = ℙ(V_2 = j | V_1 = i) = ℙ(V_2 = j | X_0 = i)
        = Σ_{k∈B^c} ℙ(V_1^c = k | X_0 = i) ℙ(V_2 = j | V_1^c = k, X_0 = i)
        = Σ_{k∈B^c} ℙ(V_1^c = k | X_0 = i) ℙ(V_1 = j | X_0 = k)
        = Σ_{k∈B^c} R(i, k) H(k, j)

This gives, in matrix notation, G = RH. □
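On an arbitrary 4-state example of ours (0-based indexing; not from the paper), v_1 and G of Theorem 2.4 can be computed directly from the blocks of P:

```python
import numpy as np

# Arbitrary irreducible 4-state chain; B = {0, 1}, Bc = {2, 3}.
P = np.array([[0.2, 0.3, 0.4, 0.1],
              [0.1, 0.4, 0.2, 0.3],
              [0.3, 0.2, 0.3, 0.2],
              [0.2, 0.2, 0.3, 0.3]])
B, Bc = [0, 1], [2, 3]
PB,   PBBc = P[np.ix_(B, B)],  P[np.ix_(B, Bc)]
PBcB, PBc  = P[np.ix_(Bc, B)], P[np.ix_(Bc, Bc)]
pi = np.full(4, 0.25)                 # some initial distribution (assumed)

I = np.eye(2)
H = np.linalg.inv(I - PBc) @ PBcB     # absorption probabilities from Bc to B
R = np.linalg.inv(I - PB) @ PBBc      # absorption probabilities from B to Bc
v1 = pi[B] + pi[Bc] @ H               # Theorem 2.4 (i)
G  = R @ H                            # Theorem 2.4 (ii): G = RH

# v1 is a distribution on B and G is stochastic (by irreducibility,
# every sojourn is followed by another one with probability 1).
assert np.isclose(v1.sum(), 1.0)
assert np.allclose(G.sum(axis=1), 1.0)
```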
The chain (V_n)_{n≥1} contains only one recurrent class, the set B' of the states of B directly accessible from B^c: B' = {j ∈ B | ∃ i ∈ B^c, P(i, j) > 0}. Without any loss of generality, we will consider that B' = {1, ..., L'}, where 1 ≤ L' ≤ L. We then denote by B'' the set B − B'. The partition {B', B''} induces the following decomposition of the matrices G and H:
G = ( G'  0 )
    ( G'' 0 )

H = ( H'  0 )
In the same way, the partition {B', B'', B^c} induces the following decomposition of P:

P = ( P_{B'}     P_{B'B''}  P_{B'B^c}  )
    ( P_{B''B'}  P_{B''}    P_{B''B^c} )
    ( P_{B^cB'}  0          P_{B^c}    )
Now, as for G, we have the following expression for G'.
Theorem 2.5

G' = (I − P_{B'} − P_{B'B''}(I − P_{B''})^{-1} P_{B''B'})^{-1} (P_{B'B^c} + P_{B'B''}(I − P_{B''})^{-1} P_{B''B^c}) (I − P_{B^c})^{-1} P_{B^cB'}
Proof. The proof is as in (i) of Theorem 2.4; therefore, we omit it. We give only the system satisfied by the matrices G', G'' and H':

G' = P_{B'} G' + P_{B'B''} G'' + P_{B'B^c} H'
G'' = P_{B''B'} G' + P_{B''} G'' + P_{B''B^c} H'
H' = P_{B^cB'} + P_{B^c} H'

By solving this system for G' we obtain the given expression. □
Note that the expression given in the previous theorem can considerably reduce the time necessary to compute G' when L' < L: instead of inverting a matrix of size L when computing G from the formula of Theorem 2.4, we have here to invert two matrices of sizes L' and L − L'.
We denote now by N_{B,n}, for n ≥ 1, the random variable taking its values in ℕ: "time spent by X during its nth sojourn in B". We have the following explicit expression for the distribution of N_{B,n}.

Theorem 2.6 For all n ≥ 1 and all k ≥ 1:

ℙ(N_{B,n} = k) = v_n P_B^{k−1}(I − P_B) 1^T
Proof. First, we derive the distribution of N_{B,1}. Conditioning on the state in which the sojourn in B begins, we obtain, for all i ∈ B:

ℙ(N_{B,1} = 1 | V_1 = i) = Σ_{j∈B^c} P(i, j)

and, for k > 1 and i ∈ B:

ℙ(N_{B,1} = k | V_1 = i) = Σ_{j∈B} P(i, j) ℙ(N_{B,1} = k − 1 | V_1 = j)

If we define h_k ≝ (ℙ(N_{B,1} = k | V_1 = 1), ..., ℙ(N_{B,1} = k | V_1 = L)), we can rewrite these two relations as follows:

h_1^T = P_{BB^c} 1^T = (I − P_B) 1^T

and, for k > 1:

h_k^T = P_B h_{k−1}^T,

that is:

h_k^T = P_B^{k−1}(I − P_B) 1^T,   ∀ k ≥ 1,

and

ℙ(N_{B,1} = k) = v_1 P_B^{k−1}(I − P_B) 1^T,   ∀ k ≥ 1.

If we now consider the nth sojourn of X in B, we have:

ℙ(N_{B,n} = k) = Σ_{i∈B} ℙ(N_{B,n} = k | V_n = i) ℙ(V_n = i)
             = Σ_{i∈B} ℙ(N_{B,1} = k | V_1 = i) ℙ(V_n = i)
             = v_n h_k^T,

which is the explicit expression given in the statement. □
Let us now compute the moments of the random variable N_{B,n}. An elementary matrix calculation gives

𝔼(N_{B,n}) = v_n (I − P_B)^{-1} 1^T,

where 𝔼 denotes expectation. For higher-order moments it is more convenient to work with factorial moments instead of standard moments. Recall that the kth-order factorial moment of a random variable V, which will be denoted by FM_k(V), is defined by:

FM_k(V) ≝ 𝔼(V(V − 1)···(V − k + 1))

Note that FM_1(V) = 𝔼(V). The following property holds: if 𝔼(V^j) = 𝔼(W^j) for j = 1, 2, ..., k − 1, then

FM_k(V) = FM_k(W) ⟺ 𝔼(V^k) = 𝔼(W^k).

Then, by a classical matrix computation, as we did for the mean value of N_{B,n}, we have:

FM_k(N_{B,n}) = k! v_n P_B^{k−1}(I − P_B)^{-k} 1^T    (1)
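Theorem 2.6 and the factorial-moment formula (1) can be cross-checked against a direct (truncated) summation; the 4-state matrix P, the uniform initial distribution and the truncation at k < 200 below are our assumptions, not from the paper:

```python
import numpy as np

# Arbitrary irreducible 4-state chain; B = {0, 1}, Bc = {2, 3}.
P = np.array([[0.2, 0.3, 0.4, 0.1],
              [0.1, 0.4, 0.2, 0.3],
              [0.3, 0.2, 0.3, 0.2],
              [0.2, 0.2, 0.3, 0.3]])
B, Bc = [0, 1], [2, 3]
PB, PBBc = P[np.ix_(B, B)], P[np.ix_(B, Bc)]
PBcB, PBc = P[np.ix_(Bc, B)], P[np.ix_(Bc, Bc)]
I, one = np.eye(2), np.ones(2)
pi = np.full(4, 0.25)

H = np.linalg.inv(I - PBc) @ PBcB
G = np.linalg.inv(I - PB) @ PBBc @ H
v1 = pi[B] + pi[Bc] @ H
vn = v1 @ np.linalg.matrix_power(G, 2)        # distribution of V_3

# Theorem 2.6: P(N_{B,3} = k) = vn PB^(k-1) (I - PB) 1^T.
probs = [vn @ np.linalg.matrix_power(PB, k - 1) @ (I - PB) @ one
         for k in range(1, 200)]              # tail beyond k=200 is negligible
assert np.isclose(sum(probs), 1.0)

# Mean and second factorial moment vs. the closed forms (eq. (1)).
mean_direct = sum(k * p for k, p in enumerate(probs, start=1))
mean_formula = vn @ np.linalg.inv(I - PB) @ one
fm2_direct = sum(k * (k - 1) * p for k, p in enumerate(probs, start=1))
fm2_formula = 2 * vn @ PB @ np.linalg.matrix_power(np.linalg.inv(I - PB), 2) @ one
assert np.isclose(mean_direct, mean_formula)
assert np.isclose(fm2_direct, fm2_formula)
```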
Let us now consider the asymptotic behaviour of the nth sojourn time of X in B.

Corollary 2.7 For any k ≥ 1, the sequence (ℙ(N_{B,n} = k))_{n≥1} converges in the sense of Cesàro as n → ∞, and the limit is v P_B^{k−1}(I − P_B) 1^T, where v is the equilibrium probability distribution of (V_n)_{n≥1}. The vector v is given by v = (1/K) x_B(I − P_B), where K is the normalizing constant x_B(I − P_B)1^T. The convergence is ordinary (not only in the Cesàro sense) for any initial distribution of X if and only if G' is aperiodic.
Proof. The proof is an elementary consequence of general properties of Markov chains, since the sequence depends on n only through v_n and (V_n)_{n≥1} is a finite homogeneous Markov chain with only one recurrent class B'. Note that v = (v', 0) according to the partition {B', B''} of B. If we consider the block decomposition of G with respect to the partition {B', B''}, a simple recurrence on the integer n allows us to write:

G^n = ( G'^n          0 )
      ( G'' G'^{n−1}  0 )

G'^n converges in the sense of Cesàro to 1^T v' and, in the same way, G'' G'^{n−1} converges in the sense of Cesàro to 1^T v', because G'' 1^T = 1^T (recall that the dimension of the vector 1 is given by the context). The expression for v is easily checked by using Lemma 2.2. The second part of the proof is an immediate corollary of the convergence properties of Markov chains. □
Let us define the random variable N_{B,∞}, with values in ℕ, by making its distribution equal to the previous limit:

ℙ(N_{B,∞} = k) = v P_B^{k−1}(I − P_B) 1^T   for any k ≥ 1.

Then, by taking Cesàro limits in expression (1) giving the factorial moments of N_{B,n}, we obtain the limit given below, which needs no proof.
Corollary 2.8 The sequence of factorial moments (FM_k(N_{B,n}))_{n≥1} converges in the sense of Cesàro and we have:

lim_{n→∞} (1/n) Σ_{l=1}^n FM_k(N_{B,l}) = FM_k(N_{B,∞}) = k! v P_B^{k−1}(I − P_B)^{-k} 1^T

The convergence is ordinary for any initial distribution of X if and only if the matrix G' is aperiodic.
Remark: There is no relation between the periodicity of P and the periodicity of G'. That is, the four situations obtained by combining the properties of periodicity and aperiodicity of the two matrices are all possible; it is immediate to construct four examples illustrating this.
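The limiting vector v of Corollary 2.7 is easy to verify numerically; the 4-state chain below is an arbitrary example of ours (0-based indexing), not from the paper:

```python
import numpy as np

# Arbitrary irreducible 4-state chain; B = {0, 1}, Bc = {2, 3}.
P = np.array([[0.2, 0.3, 0.4, 0.1],
              [0.1, 0.4, 0.2, 0.3],
              [0.3, 0.2, 0.3, 0.2],
              [0.2, 0.2, 0.3, 0.3]])
B, Bc = [0, 1], [2, 3]
PB, PBBc = P[np.ix_(B, B)], P[np.ix_(B, Bc)]
PBcB, PBc = P[np.ix_(Bc, B)], P[np.ix_(Bc, Bc)]
I, one = np.eye(2), np.ones(2)

# Stationary distribution x of P.
M = np.vstack([(np.eye(4) - P).T, np.ones(4)])
x, *_ = np.linalg.lstsq(M, np.array([0.0, 0, 0, 0, 1]), rcond=None)

K = x[B] @ (I - PB) @ one            # normalizing constant
v = x[B] @ (I - PB) / K              # v = (1/K) xB (I - PB)
G = np.linalg.inv(I - PB) @ PBBc @ np.linalg.inv(I - PBc) @ PBcB
assert np.isclose(v.sum(), 1.0)
assert np.allclose(v @ G, v)         # v is stationary for (Vn)

# Mean of the limiting sojourn time (Corollary 2.8 with k = 1):
mean_inf = v @ np.linalg.inv(I - PB) @ one
assert mean_inf >= 1.0               # a sojourn lasts at least one step
```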
3 Sojourn times in the continuous time case
Let now X = (X_t)_{t≥0} be an irreducible homogeneous Markov process on the state space E = {1, ..., N}. Let A be the infinitesimal generator of this process, where A(i, i) ≝ −Σ_{j≠i} A(i, j), and define λ(i) ≝ −A(i, i) for all i ∈ E. That is, λ(i) represents the output rate of state i. Let P be the transition probability matrix of the uniformized chain ([5]) of X. We then have the following relation between the matrices A and P:

P = I + A/Λ,

where Λ is the rate of a Poisson process {N(t), t ≥ 0}, independent of the uniformized chain, satisfying Λ ≥ max{λ(i), i ∈ E}. We will denote respectively by π and x the initial and the stationary distributions of the process X.
Remark that x is also the stationary distribution of the uniformized chain of X (x = xP). As in the discrete-time case, we consider a subset B of E and we keep the notations for B, B', B'' and for the decompositions of P and of A with respect to the partitions {B, B^c} or {B', B'', B^c}. A sojourn of X in B is now a sequence X_{t_m}, ..., X_{t_{m+k}}, k ≥ 1, where the t_i are the instants of transition, X_{t_m}, ..., X_{t_{m+k−1}} ∈ B, X_{t_{m+k}} ∉ B and, if m > 0, X_{t_{m−1}} ∉ B. This sojourn begins at time t_m and finishes at time t_{m+k}. It lasts t_{m+k} − t_m.

Let H_{B,n} be the random variable "time spent during the nth sojourn of X in B" and let N_{B,n} be as in the previous section for the uniformized Markov chain of X. The hypothesis of irreducibility of the Markov process X assures the existence of an infinity of sojourns of X in B with probability 1. Define V_n as in the previous case for the uniformized discrete-time Markov chain.
Theorem 3.1

ℙ(H_{B,n} ≤ t) = 1 − v_n e^{A_B t} 1^T,

where v_n is given explicitly in Theorem 2.4.
Proof. Let T_n, n ≥ 1, be the nth entrance time into the subset B. We have, for every n ≥ 1:

ℙ(H_{B,n} ≤ t) = Σ_{k=1}^{+∞} ℙ(N(T_n + t) − N(T_n) = k) ℙ(H_{B,n} ≤ t | N(T_n + t) − N(T_n) = k)

Since T_n is a stopping time, we obtain:

ℙ(H_{B,n} ≤ t) = Σ_{k=1}^{+∞} e^{−Λt} (Λt)^k / k! · ℙ(N_{B,n} ≤ k) = Σ_{k=1}^{+∞} e^{−Λt} (Λt)^k / k! · (1 − v_n P_B^k 1^T)
             = 1 − v_n e^{−Λ(I − P_B)t} 1^T = 1 − v_n e^{A_B t} 1^T,

which is the explicit form of the distribution of H_{B,n}. □
Remark that v_1 and G can also be written as follows:

v_1 = π_B − π_{B^c} A_{B^c}^{-1} A_{B^cB}

G = A_B^{-1} A_{BB^c} A_{B^c}^{-1} A_{B^cB}

The moments of H_{B,n} are then easily derived. We have:

𝔼(H_{B,n}^k) = (−1)^k k! v_n A_B^{-k} 1^T   for any k ≥ 1.
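A small numerical sketch of Theorem 3.1 and of these moment formulas, on an arbitrary generator of ours built by uniformization (the matrix P, the rate Λ = 2 and the uniform initial distribution are assumptions, not from the paper):

```python
import numpy as np

# A = Lam (P - I), so that P = I + A/Lam; B = {0, 1} (0-based indexing).
Lam = 2.0
P = np.array([[0.2, 0.3, 0.4, 0.1],
              [0.1, 0.4, 0.2, 0.3],
              [0.3, 0.2, 0.3, 0.2],
              [0.2, 0.2, 0.3, 0.3]])
A = Lam * (P - np.eye(4))
B, Bc = [0, 1], [2, 3]
AB = A[np.ix_(B, B)]
PBc, PBcB = P[np.ix_(Bc, Bc)], P[np.ix_(Bc, B)]
pi = np.full(4, 0.25)
v1 = pi[B] + pi[Bc] @ np.linalg.inv(np.eye(2) - PBc) @ PBcB
one = np.ones(2)

def mat_exp(M, t):
    # e^(M t) via eigendecomposition (fine here: AB is diagonalizable)
    w, V = np.linalg.eig(M)
    return ((V * np.exp(w * t)) @ np.linalg.inv(V)).real

F = lambda t: 1.0 - v1 @ mat_exp(AB, t) @ one   # Theorem 3.1
assert np.isclose(F(0.0), 0.0)                  # no sojourn of length 0
assert F(50.0) > 1 - 1e-9                       # proper distribution

# Moments: E(H_{B,n}^k) = (-1)^k k! v_n AB^(-k) 1^T.
ABinv = np.linalg.inv(AB)
m1 = -v1 @ ABinv @ one
m2 = 2 * v1 @ ABinv @ ABinv @ one
assert m1 > 0 and m2 > m1 ** 2                  # positive mean and variance
```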
Corollary 2.7 has an equivalent in the continuous-time case, with an identical (omitted) proof.

Corollary 3.2 The sequence (ℙ(H_{B,n} ≤ t))_{n≥1} converges in the sense of Cesàro as n → ∞ and the limit is 1 − v e^{A_B t} 1^T, where v is given in Corollary 2.7. The convergence is ordinary for any initial distribution of X if and only if G' is aperiodic.
As in the discrete-time case, we define the positive real-valued random variable H_{B,∞} by ℙ(H_{B,∞} ≤ t) ≝ 1 − v e^{A_B t} 1^T. We can then derive the Cesàro limits of the kth-order moments of H_{B,n} and easily verify the following relation.

Corollary 3.3 For any k ≥ 1, the sequence (𝔼(H_{B,n}^k))_{n≥1} converges in the sense of Cesàro and we have:

lim_{n→∞} (1/n) Σ_{l=1}^n 𝔼(H_{B,l}^k) = 𝔼(H_{B,∞}^k) = (−1)^k k! v A_B^{-k} 1^T

The convergence is ordinary for any initial distribution of X if and only if the matrix G' is aperiodic.
4 Pseudo-aggregation
Define the mapping T : ℝ^N → ℝ^M by T·v = w, where w(m) = Σ_{i∈B(m)} v(i).
4.1 The pseudo-aggregated process
Let X = (X_t)_{t≥0} be a homogeneous irreducible Markov process with transition rate matrix A and equilibrium probability distribution x. We construct the pseudo-aggregated homogeneous Markov process Z = (Z_t)_{t≥0} from X with respect to the partition B by defining its transition rate matrix Â as follows:

∀ i, j ∈ F:   Â(i, j) ≝ Σ_{k∈B(i)} [x(k) / Σ_{h∈B(i)} x(h)] Σ_{l∈B(j)} A(k, l)
We have the same relation for the transition probability matrices of the uniformized Markov chains of X and Z, denoted by P and P̂ respectively. That is:

∀ i, j ∈ F:   P̂(i, j) ≝ Σ_{k∈B(i)} [x(k) / Σ_{h∈B(i)} x(h)] Σ_{l∈B(j)} P(k, l)
If we denote by π the initial distribution of X, the initial distribution of Z is T·π, where the operator T has been defined at the beginning of this section. If the initial distribution leads to a homogeneous Markov chain for Y, then Z and Y define the same homogeneous Markov chain (see [3] or [4]). The stationary distribution of Z is T·x, as the following lemma shows.
Lemma 4.1 If z ≝ T·x, then we have z P̂ = z (i.e. z Â = 0), z > 0 and z 1^T = 1.
Proof. For 1 ≤ m ≤ M, the mth entry of z P̂ is equal to:

Σ_{l=1}^M z(l) P̂(l, m) = Σ_{l=1}^M Σ_{i∈B(l)} x(i) (Σ_{j∈B(m)} P(i, j))
                      = Σ_{i=1}^N x(i) Σ_{j∈B(m)} P(i, j)
                      = Σ_{j∈B(m)} (Σ_{i=1}^N x(i) P(i, j)) = Σ_{j∈B(m)} x(j) = z(m);

the remainder of the proof is obvious. □
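Lemma 4.1 can be checked mechanically; the 4-state generator and the two-set partition below are arbitrary assumptions of ours, not from the paper:

```python
import numpy as np

# Arbitrary 4-state generator via uniformization: A = Lam (P - I);
# partition BB = {B(1), B(2)} with B(1) = {0, 1}, B(2) = {2, 3}.
Lam = 2.0
P = np.array([[0.2, 0.3, 0.4, 0.1],
              [0.1, 0.4, 0.2, 0.3],
              [0.3, 0.2, 0.3, 0.2],
              [0.2, 0.2, 0.3, 0.3]])
A = Lam * (P - np.eye(4))
parts = [[0, 1], [2, 3]]

# Stationary distribution x of P (hence of the process).
M = np.vstack([(np.eye(4) - P).T, np.ones(4)])
x, *_ = np.linalg.lstsq(M, np.array([0.0, 0, 0, 0, 1]), rcond=None)

# A_hat(i, j) = sum_{k in B(i)} [x(k) / x(B(i))] sum_{l in B(j)} A(k, l)
Ahat = np.zeros((2, 2))
for i, Bi in enumerate(parts):
    w = x[Bi] / x[Bi].sum()              # stationary weights inside B(i)
    for j, Bj in enumerate(parts):
        Ahat[i, j] = w @ A[np.ix_(Bi, Bj)].sum(axis=1)

z = np.array([x[Bi].sum() for Bi in parts])   # z = T.x
assert np.allclose(Ahat.sum(axis=1), 0.0)     # Ahat is a generator
assert np.allclose(z @ Ahat, 0.0)             # Lemma 4.1: z Ahat = 0
```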
In both the discrete- and continuous-time cases, the following property holds.

Lemma 4.2 The pseudo-aggregated process constructed from X with respect to the partition B and the pseudo-aggregated process obtained after M successive aggregations of X with respect to each B(i), i ∈ F, in any order, are equivalent.
The proof is a direct consequence of the construction of the pseudo-aggregated processes. It is for this reason that, in the sequel, we will consider only the situation where B contains only one subset having more than one state. That is, we will assume, as in the previous sections, that B = {1, ..., L} where 1 < L < N, and that B = {B, {L+1}, ..., {N}} with N ≥ 3. The state space of the pseudo-aggregated process Z will be denoted by F = {b, L+1, ..., N}. We will also denote by B^c the complementary subset {L+1, ..., N}.
4.2 Pseudo-aggregation and sojourn times
We consider the pseudo-aggregated homogeneous Markov process Z constructed from X with respect to the partition B = {B, {L+1}, ..., {N}} of E. Although the stationary distribution of X over the sets of B is equal to the stationary distribution of Z (Lemma 4.1), the same does not hold for the distribution of the nth sojourn time of X in B and the corresponding distribution of N_{b,n}, the nth holding time of Z in b, which is independent of n and will therefore be denoted by N_b. This last (geometric) distribution is given by:
ℙ(N_b = k) = [x_B(I − P_B)1^T / x_B 1^T] [x_B P_B 1^T / x_B 1^T]^{k−1},   k ≥ 1,

which is to be compared with the expression given for ℙ(N_{B,n} = k).
Observe that if there is no internal transition between different states inside B and if the geometric holding-time distributions of X in the individual states of B have the same parameter, we obviously have identity between the different distributions of N_{B,n}, N_{B,∞} and N_b. That is, if P_B = βI with 0 ≤ β < 1, we have:

ℙ(N_{B,n} = k) = ℙ(N_{B,∞} = k) = ℙ(N_b = k) = (1 − β)β^{k−1},   k ≥ 1.
More generally, if X is a lumpable Markov chain [3] over the partition B (i.e. the aggregated chain Y is also homogeneous Markov for any initial distribution of X), we have P_B 1^T = β1^T and (I − P_B)1^T = (1 − β)1^T, where β ∈ [0, 1). In this case, P_B^{k−1}(I − P_B)1^T = (1 − β)β^{k−1} 1^T, k ≥ 1, and the distributions of N_{B,n}, N_{B,∞} and N_b are identical.
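Only the condition "all rows of P_B have the same sum β" is used in this geometric argument; a chain satisfying it (but not full lumpability) is easy to exhibit. The 4-state matrix below is an arbitrary example of ours with β = 0.5:

```python
import numpy as np

# Chain whose PB rows both sum to beta = 0.5 (PB 1^T = beta 1^T);
# the chain itself is not lumpable, but the sojourn law in B = {0, 1}
# is geometric whatever the distribution vn.
P = np.array([[0.2, 0.3, 0.4, 0.1],
              [0.1, 0.4, 0.2, 0.3],
              [0.3, 0.2, 0.3, 0.2],
              [0.2, 0.2, 0.3, 0.3]])
B = [0, 1]
PB = P[np.ix_(B, B)]
I, one = np.eye(2), np.ones(2)
beta = 0.5
assert np.allclose(PB @ one, beta * one)       # PB 1^T = beta 1^T

# Any distribution vn on B gives the same law (1 - beta) beta^(k-1):
rng = np.random.default_rng(0)
vn = rng.random(2); vn /= vn.sum()
for k in range(1, 8):
    p = vn @ np.linalg.matrix_power(PB, k - 1) @ (I - PB) @ one
    assert np.isclose(p, (1 - beta) * beta ** (k - 1))
```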
We will now investigate the relation between the moments (factorial moments) of N_{B,n} and N_b. We give an algebraic necessary and sufficient condition for the equality between the Cesàro limit of the kth-order moments of the random variable N_{B,n} and the kth-order moments of N_b.
Theorem 4.3 For any k ≥ 1:

lim_{n→+∞} (1/n) Σ_{l=1}^n FM_k(N_{B,l}) = FM_k(N_b)
⟺
x_B [P_B(I − P_B)^{-1}]^{k−1} 1^T = (x_B 1^T) [x_B P_B 1^T / x_B(I − P_B)1^T]^{k−1}
Proof. For the geometric distribution of N_b, we have:

FM_k(N_b) = k! (x_B P_B 1^T)^{k−1} (x_B 1^T) / (x_B(I − P_B)1^T)^k

Let us fix k ≥ 1. Since, from Corollary 2.7,

v = x_B(I − P_B) / (x_B(I − P_B)1^T),

we have:

lim_{n→+∞} (1/n) Σ_{l=1}^n FM_k(N_{B,l}) = FM_k(N_b)
⟺ v P_B^{k−1}(I − P_B)^{-k} 1^T = (x_B 1^T)(x_B P_B 1^T)^{k−1} / (x_B(I − P_B)1^T)^k
⟺ x_B [P_B(I − P_B)^{-1}]^{k−1} 1^T = (x_B 1^T) [x_B P_B 1^T / x_B(I − P_B)1^T]^{k−1},

which is the given condition. □
In the important case k = 1, the above condition is always satisfied. The following corollary states this result. There is no need for a proof, since it is trivial to check the condition for k = 1.
Corollary 4.4

lim_{n→+∞} (1/n) Σ_{l=1}^n 𝔼(N_{B,l}) = 𝔼(N_b)
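A numerical check, on an arbitrary chain of ours whose P_B rows do not sum equally (so the chain is not lumpable over {B, {2}, {3}}), confirms both Corollary 4.4 and the failure of the Theorem 4.3 condition for k = 2:

```python
import numpy as np

# Arbitrary 4-state chain; B = {0, 1}; PB row sums are 0.7 and 0.4.
P = np.array([[0.50, 0.20, 0.20, 0.10],
              [0.10, 0.30, 0.30, 0.30],
              [0.25, 0.25, 0.25, 0.25],
              [0.30, 0.30, 0.20, 0.20]])
B = [0, 1]
PB = P[np.ix_(B, B)]
I, one = np.eye(2), np.ones(2)

M = np.vstack([(np.eye(4) - P).T, np.ones(4)])
x, *_ = np.linalg.lstsq(M, np.array([0.0, 0, 0, 0, 1]), rcond=None)
xB = x[B]
K = xB @ (I - PB) @ one
v = xB @ (I - PB) / K

# Corollary 4.4: Cesaro-limit mean sojourn time equals E(N_b).
mean_limit = v @ np.linalg.inv(I - PB) @ one
mean_Nb = (xB @ one) / K
assert np.isclose(mean_limit, mean_Nb)

# Theorem 4.3 condition for k = 2 fails here:
lhs = xB @ (PB @ np.linalg.inv(I - PB)) @ one
rhs = (xB @ one) * (xB @ PB @ one) / K
assert not np.isclose(lhs, rhs, rtol=1e-3)
```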
In continuous time we have analogous results on the relation between the properties of the sojourn times of the given process X and the corresponding holding times of the pseudo-aggregated process Z. Denote by H_b the holding-time random variable of process Z in state b; it is exponentially distributed:

ℙ(H_b ≤ t) = 1 − e^{−νt},   where ν = −x_B A_B 1^T / x_B 1^T.
The continuous-time version of Theorem 4.3 is then the following.

Theorem 4.5 For any k ≥ 1:

lim_{n→+∞} (1/n) Σ_{l=1}^n 𝔼(H_{B,l}^k) = 𝔼(H_b^k)
⟺
(x_B(I − P_B)1^T)^{k−1} x_B(I − P_B)^{-k+1} 1^T = (x_B 1^T)^k
Proof. The kth-order moment of the holding time H_b of Z in b can be written:

𝔼(H_b^k) = k!/ν^k = k! [x_B 1^T / (−x_B A_B 1^T)]^k = (k!/Λ^k) [x_B 1^T / x_B(I − P_B)1^T]^k

(recall that A = Λ(P − I)). Then we have:

lim_{n→+∞} (1/n) Σ_{l=1}^n 𝔼(H_{B,l}^k) = 𝔼(H_b^k)
⟺ (−1)^k k! v A_B^{-k} 1^T = (k!/Λ^k) [x_B 1^T / x_B(I − P_B)1^T]^k
⟺ v (I − P_B)^{-k} 1^T = [x_B 1^T / x_B(I − P_B)1^T]^k
⟺ (x_B(I − P_B)1^T)^{k−1} x_B(I − P_B)^{-k+1} 1^T = (x_B 1^T)^k,

thanks to the expression of v given in Corollary 2.7. □
As in the discrete-time case, the given condition is always satisfied for first-order moments. The following result corresponds to Corollary 4.4, with an identical trivial verification.

Corollary 4.6

lim_{n→+∞} (1/n) Σ_{l=1}^n 𝔼(H_{B,l}) = 𝔼(H_{B,∞}) = 𝔼(H_b)
5 Application
In [6], the authors consider a fault-tolerant multiprocessor system with k buffer stages. Processors fail independently at rate λ and are repaired singly at rate μ. Buffer stages fail independently at rate γ and are repaired singly at rate τ. A processor failure causes a graceful degradation of the system (the number of processors is decreased by one). The system is in a failed state when all processors have failed or when any buffer stage has failed. No additional processor failures are assumed to occur when the system is in a failed state. The model is represented by a homogeneous Markov process with the state transition diagram shown in Figure 1. At any time the state of the system is (i, j), where 0 ≤ i ≤ 2 is the number of non-failed processors, and j is zero if any of the buffer stages is failed and one otherwise.

We assume that the system starts with all its processors and buffers operational, i.e., the initial state is (2, 1) with probability 1. We are interested in the states of the set B =
[Figure 1 (diagram): an upper row of states (2,1), (1,1), (0,1) linked by processor failures at rates 2λ and λ and repairs at rate μ; each state (i,1) is linked to the corresponding failed state (i,0) in the lower row by buffer failures at rate kγ and repairs at rate τ; the subset B = {(2,1), (1,1)} is marked.]

Figure 1: Model of a fault-tolerant multiprocessor system
{(2,1), (1,1)}, in which the system is operational. We take the following values for the different rates: λ = 6·10^{-5}, μ = τ = 1/6, γ = 10^{-2}; time is in hours ([6]). Suppose the user wishes the different sojourns in B to be, on average, as "long" as possible, and that the only free parameter is k, the memory size. It is clear that the sequence 𝔼(H_{B,n}) decreases when k increases, so the best value for k is k = 1. Since, on the other hand, the system needs enough memory to perform, we will look for the highest value of k such that the first mean sojourn times are, say, at least 24 hours. Assume also that k must be a power of two. If we compute the expectations 𝔼(H_{B,1}), 𝔼(H_{B,2}), ..., for k = 4, we observe that they are all between 24.9 hours and 25 hours. The reader can verify that G' = G is aperiodic, so the convergence of 𝔼(H_{B,n}) is ordinary. Moreover, v_1 ≈ v (‖v − v_1‖_1 = 253431/392266256 ≈ 0.000646), which is why the sequence is almost stationary. If the same computations are carried out for k = 8, we get 𝔼(H_{B,n}) ≈ 12.5 hours for each n. The
distribution of H_{B,n}, when k = 4, is

ℙ(H_{B,n} ≤ t) = 1 − q_n (e^{ut} − e^{vt}) − (1/2)(e^{ut} + e^{vt}),

where u and v (the two eigenvalues of A_B) and q_n are given by

u = (−9259 + 50√15670) / 75000,    v = (−9259 − 50√15670) / 75000,

q_n = (2280879√15670 / 1229362446304000) (10559625/108626189)^{n−1} + 4910386711729√15670 / 1229362446304000.

It can also be verified that the condition of Theorem 4.5 is not satisfied but, for instance, the relative deviation between the two respective 2nd-order moments is about 1.8·10^{-5} per cent. All the computations have been done with the MACSYMA package.
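The first mean sojourn time reported in this section can be reproduced from the B-block of the generator alone. The block below rebuilds A_B for B = {(2,1), (1,1)}; our reading of Figure 1 (exit from (2,1) at rates 2λ and kγ; exit from (1,1) at rates μ, λ and kγ, with μ the repair transition back to (2,1)) is an assumption, as is v_1 = (1, 0), which follows from X_0 = (2,1):

```python
import numpy as np

lam, mu, gamma = 6e-5, 1 / 6, 1e-2   # rates from the text; time in hours
k = 4                                 # number of buffer stages (memory size)

# B-block of the generator for B = {(2,1), (1,1)}, under our assumed
# reading of Figure 1: leave (2,1) by a processor failure (rate 2*lam)
# or a buffer failure (rate k*gamma); leave (1,1) by a repair back to
# (2,1) (rate mu), a processor failure (lam) or a buffer failure (k*gamma).
AB = np.array([[-(2 * lam + k * gamma),            2 * lam],
               [mu,                   -(mu + lam + k * gamma)]])

v1 = np.array([1.0, 0.0])             # X0 = (2,1) with probability 1
# E(H_{B,1}) = -v1 AB^{-1} 1^T (first-moment formula of Section 3).
mean_sojourn = -v1 @ np.linalg.inv(AB) @ np.ones(2)
assert 24.9 < mean_sojourn < 25.0     # the range reported in the text
```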
6 Conclusions
In this paper we investigate the sojourn times of a homogeneous finite Markov process in a given subset of the state space, in both the discrete- and continuous-time cases. In particular, analytical expressions of the distributions of the nth sojourn or visit of the process to the subset are derived and their asymptotic behaviour is analyzed.

When the system analyst is interested only in steady-state behaviour, it is usual to replace the original model by a "pseudo-aggregation" where the chosen subset is collapsed into a single state. This is done, for instance, when the state space has too many elements. This pseudo-aggregation is constructed in such a way that the Markov property is conserved even if the real aggregation of the original process is not Markov. The interest of this procedure is that the steady-state probability that the original process is in the given subset is equal to the steady-state probability that the pseudo-aggregated one is in the corresponding individual state. It is then natural to look at the relations between sojourn times in the first process and the corresponding holding times in the pseudo-aggregation. We show that we have equality between the limit, in the sense of Cesàro, of the successive sojourn-time expectations and the corresponding mean holding time in the associated pseudo-aggregated process, and that this equality is no longer valid when higher-order moments are considered.
References

[1] E. de Souza e Silva and H. R. Gail. Calculating cumulative operational time distributions of repairable computer systems. IEEE Transactions on Computers, C-35:322-332, April 1986.

[2] J. F. Meyer. On evaluating the performability of degradable computing systems. IEEE Transactions on Computers, C-29(8):720-731, August 1980.

[3] J. G. Kemeny and J. L. Snell. Finite Markov Chains. Springer-Verlag, New York Heidelberg Berlin, 1976.

[4] G. Rubino and B. Sericola. On weak lumpability in Markov chains. Technical Report 387, IRISA, Campus de Beaulieu, 35042 Rennes Cedex, France, 1987.

[5] S. M. Ross. Stochastic Processes. John Wiley and Sons, 1983.

[6] V. G. Kulkarni, V. F. Nicola, R. M. Smith, and K. S. Trivedi. Numerical evaluation of performability and job completion time in repairable fault-tolerant systems. In Proceedings of the 16th IEEE Fault-Tolerant Computing Symposium, Vienna, Austria, July 1986.