ON THE MOMENTS OF MARKOV
RENEWAL PROCESSES
by
Jeffrey J. Hunter
University of North Carolina
Institute of Statistics Mimeo Series No. 589
August 1968
This research was supported by the National Science
Foundation under Grant No. GU-2059.
DEPARTMENT OF STATISTICS
UNIVERSITY OF NORTH CAROLINA
CHAPEL HILL, N. C.
SUMMARY
Recently Kshirsagar and Gupta [5] obtained expressions for the asymptotic values of the first two moments of a Markov renewal process. The method they employed involved formal inversion of matrices of Laplace-Stieltjes transforms. Their method also required the imposition of a non-singularity condition. In this paper we derive the asymptotic values using known renewal theoretic results. This method of approach utilizes the fundamental matrix of the imbedded ergodic Markov chain. Although our results differ in form from those obtained by Kshirsagar and Gupta [5], we show that ours reduce to theirs under the added non-singularity condition. As a by-product of the derivation we find explicit expressions for the moments of the first passage time distributions in the associated semi-Markov process, generalising the results of Kemeny and Snell [4] obtained for Markov chains.
I. INTRODUCTION
1.1
Outline
This paper is primarily concerned with the derivation of matrix expres-
sions for the asymptotic values of the first two moments of a Markov renewal
process.
This first section deals with the definitions, notation, and known
results of semi-Markov processes, Markov renewal processes and general renewal
processes.
In section 2 we derive matrix expressions for the moments of the
first passage time distributions of a semi-Markov process.
These results are
utilized in section 3 in determining the matrix expressions for the first two
moments of a Markov renewal process.
In the final section the results obtained
recently by Kshirsagar and Gupta [5] are compared with those obtained in this
paper.
1.2
Semi-Markov processes and Markov renewal processes
Consider a stochastic process which moves from one to another of a finite number of states $A_1, A_2, \ldots, A_m$, with successive states forming a Markov chain whose transition matrix is given by $P = [p_{ij}]$. Furthermore, the process stays in a given state a random length of time, "the wait", the distribution function $Q_{ij}(\cdot)$ of which depends on the initial state $A_i$ as well as the one to be visited next, $A_j$. Let us write $Z_t$ for the state occupied at time $t$; then $\{Z_t;\ t \ge 0\}$ is called a semi-Markov process. Associated with this process is a Markov renewal process which records at each time $t$ the number of times the $Z_t$ process has visited each of the possible states up to time $t$.
This descriptive definition can be formalised, and for a more detailed and extensive treatment the reader is referred to the papers by W. L. Smith ([11], [12]) and R. Pyke ([7], [8]).

Let us define a (possibly defective) probability distribution $F_{ij}(\cdot)$ by

$$ F_{ij}(t) = p_{ij} Q_{ij}(t), \qquad i,j = 1,2,\ldots,m. $$

Let $\underline{F}(\cdot) = [F_{ij}(\cdot)]$. Then $\underline{F}(\cdot)$ is a matrix valued function on $(-\infty,\infty)$ with the following properties:

(i) $F_{ij}(t) = 0$ for $t < 0$;

(ii) $\sum_{j=1}^{m} F_{ij}(+\infty) = 1$ for $1 \le i \le m$.
Property (i) differs from that given by Pyke [7]. In this paper we permit "instantaneous" transitions from state to state, but we do make the additional restriction that the "wait" random variables are not all zero with probability one.
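As a concrete illustration of these definitions, the following sketch (a modern Python aside, not part of the original development) simulates a two-state semi-Markov process. The transition matrix and the exponential wait distributions $Q_{ij}$ are arbitrary assumptions made only for the example.

import numpy as np

rng = np.random.default_rng(0)
P = np.array([[0.3, 0.7],
              [0.6, 0.4]])            # imbedded chain transition matrix (assumed)
rates = np.array([[1.0, 2.0],
                  [0.5, 1.5]])        # Q_ij taken exponential with these rates (assumed)

def simulate(z0, horizon):
    """Return jump times and states of {Z_t} up to `horizon`."""
    t, z, times, states = 0.0, z0, [0.0], [z0]
    while t < horizon:
        j = rng.choice(2, p=P[z])                 # next state A_j, drawn from row z of P
        t += rng.exponential(1.0 / rates[z, j])   # the "wait" in state z before moving to j
        z = j
        times.append(t); states.append(z)
    return np.array(times), np.array(states)

times, states = simulate(0, 50.0)     # one sample path of the semi-Markov process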
1.3
General renewal processes
Let $\{X_n\}$, $n = 0,1,2,\ldots$ be an infinite sequence of independent, non-negative random variables which are not all zero with probability one. We assume that $X_0$ has a distribution function $K(\cdot)$ and that each $X_n$, for $n \ge 1$, has a distribution function $F(\cdot)$ which is not necessarily identical with $K(\cdot)$. $\{X_n\}$, $n = 0,1,2,\ldots$ is called a general renewal process.

Let $S_{-1} = 0$; $S_k = X_0 + X_1 + \cdots + X_k$ $(k = 0,1,\ldots)$, and for all $t \ge 0$ define the random variable $N_t$ as the greatest integer $k$ such that $S_{k-1} \le t$. Thus $N_t$ is the number of renewals that will have occurred up to and including time $t$ in such a general renewal process.

Let $\nu_r = E X_0^r$ $(r = 1,2,\ldots)$ and $\mu_r = E X_1^r$ $(r = 1,2,\ldots)$, provided these moments exist.
V. K. Murthy, [6], generalising the results of W. L. Smith, [13], showed that, under the assumption that $F(\cdot)$ belongs to a class of distribution functions for which, for some finite $k$, its $k$th convolution has an absolutely continuous component, the following results hold.

(i) If $\mu_2 < \infty$ and $\nu_1 < \infty$, then as $t \to +\infty$,

$$ E N_t = \frac{t}{\mu_1} + \Big( \frac{\mu_2}{2\mu_1^2} - \frac{\nu_1}{\mu_1} \Big) + o(1). \qquad (1.1) $$

(ii) If $\mu_3 < \infty$ and $\nu_2 < \infty$, then as $t \to +\infty$,

$$ \operatorname{var} N_t = \Big( \frac{\mu_2 - \mu_1^2}{\mu_1^3} \Big) t + \frac{5\mu_2^2}{4\mu_1^4} - \frac{2\mu_3}{3\mu_1^3} - \frac{\mu_2 \nu_1}{\mu_1^3} + \frac{\nu_2}{\mu_1^2} - \frac{\mu_2}{2\mu_1^2} + \frac{\nu_1}{\mu_1} - \frac{\nu_1^2}{\mu_1^2} + o(1). \qquad (1.2) $$

Murthy actually finds expressions for the first eight cumulants of $N_t$ for a general renewal process, but there are some errors in his calculations. Instead of using equation (1.2) we shall find it more convenient in the latter parts of this paper to consider the second factorial moment. From Murthy [6], we have the following additional result.

(iii) If $\mu_3 < \infty$ and $\nu_2 < \infty$, then as $t \to +\infty$,

$$ E N_t(N_t + 1) = \frac{t^2}{\mu_1^2} + 2\Big[ \frac{\mu_2}{\mu_1^3} - \frac{\nu_1}{\mu_1^2} \Big] t + \frac{3\mu_2^2}{2\mu_1^4} - \frac{2\mu_3}{3\mu_1^3} - \frac{2\mu_2\nu_1}{\mu_1^3} + \frac{\nu_2}{\mu_1^2} + o(1). \qquad (1.3) $$

Note that when $K(x) = F(x)$ we have $\nu_i = \mu_i$, and the results above reduce to those for an ordinary renewal process, namely,

$$ E N_t = \frac{t}{\mu_1} + \frac{\mu_2}{2\mu_1^2} - 1 + o(1), \qquad (1.4) $$

$$ \operatorname{var} N_t = \Big( \frac{\mu_2 - \mu_1^2}{\mu_1^3} \Big) t + \frac{5\mu_2^2}{4\mu_1^4} - \frac{2\mu_3}{3\mu_1^3} - \frac{\mu_2}{2\mu_1^2} + o(1). \qquad (1.5) $$
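Equation (1.4) is easy to check by simulation. The following sketch is an illustrative aside; the gamma inter-renewal distribution is an arbitrary choice. It compares the empirical mean of $N_t$ with $t/\mu_1 + \mu_2/(2\mu_1^2) - 1$.

import numpy as np

rng = np.random.default_rng(1)
shape, scale, t, reps = 2.0, 1.0, 200.0, 20000
mu1 = shape * scale                        # mu_1 = E X = 2
mu2 = shape * (shape + 1) * scale**2       # mu_2 = E X^2 = 6

counts = np.empty(reps)
for r in range(reps):
    s, n = 0.0, 0
    while True:
        s += rng.gamma(shape, scale)       # next inter-renewal time
        if s > t:
            break
        n += 1
    counts[r] = n                          # N_t = number of partial sums <= t

print(counts.mean())                       # empirical E N_t
print(t / mu1 + mu2 / (2 * mu1**2) - 1)    # asymptotic value from (1.4)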
1.4
Markov renewal processes
Let $J_n$ denote the state of the system after the $n$th transition ($J_0$ being the initial state of the system). Thus $J_n$ is one of $1,2,\ldots,m$. Let $X_n$ denote the time spent in $J_{n-1}$ before transition into $J_n$. We define $X_0 = 0$. The two dimensional stochastic process $\{(J_n, X_n),\ n \ge 0\}$, defined on a complete probability space such that

$$ P\{J_n = k,\ X_n \le t \mid (J_0, X_0), (J_1, X_1), \ldots, (J_{n-1}, X_{n-1})\} = F_{J_{n-1}, k}(t) $$

for all $t \in (-\infty,\infty)$ and $1 \le k \le m$, is called a semi-Markov sequence. The process $\{J_n,\ n \ge 0\}$ is a Markov chain with transition matrix $[p_{ij}] = [F_{ij}(+\infty)]$.
Let $S_n = \sum_{i=0}^{n} X_i$. We define integer valued stochastic processes $\{N_t,\ t > 0\}$ and $\{N_i(t),\ t > 0\}$, where $N_t = \sup\{n \ge 0 : S_n \le t\}$ and $N_i(t)$ is the number of times $J_n = i$ for $0 < n \le N_t$. The vector stochastic process $\underline{N}(t) = \{N_1(t), N_2(t), \ldots, N_m(t)\}$ is called a Markov renewal process, and the process $\{Z_t,\ t \ge 0\}$, where $Z_t = J_{N_t}$, is called a semi-Markov process.
Let $M_{ij}(t) = E\{N_j(t) \mid Z_0 = i\}$ for $t \ge 0$ and zero elsewhere, and let $\underline{M}(t) = [M_{ij}(t)]$. The functions $M_{ij}(t)$ are called the renewal functions, and $\underline{M}(t)$ the renewal function of the process (Pyke [8]); i.e., $M_{ij}(t)$ is the expected number of visits of the process $\{Z_t,\ t \ge 0\}$ to the state $A_j$ up to time $t$, given the process started in state $A_i$.
We shall also be interested in $V_{ij}(t) = \operatorname{var}\{N_j(t) \mid Z_0 = i\}$ for $t > 0$ and zero elsewhere. Let $\underline{V}(t) = [V_{ij}(t)]$. Let $G(t) = [G_{ij}(t)]$, where $G_{ij}(t) = P\{N_j(t) > 0 \mid Z_0 = i\}$ for $t > 0$ and zero elsewhere. Thus $G_{ij}(t)$ is the distribution function of the time that the process $\{Z_t,\ t \ge 0\}$ visits state $A_j$ for the first time, starting in state $A_i$.
We shall consider only those processes for which $G_{ij}(+\infty) = 1$ for all $i,j = 1,2,\ldots,m$. This ensures that "return to state $A_j$" is a recurrent event and hence gives rise to a renewal process. Equivalently, we assume that the imbedded Markov chain, $\{J_n,\ n \ge 0\}$, is irreducible (i.e., each state in the chain can be reached from every other state). This means that we restrict attention to ergodic chains. We shall make the additional assumption of aperiodicity. However, all the results presented in this paper hold for periodic chains with, in some cases, minor modifications.
The connection between the $G_{ij}(t)$ and the $F_{ij}(t)$ is given by the following lemma.

Lemma 1.1:

$$ G_{ij}(t) = F_{ij}(t) + \sum_{k \ne j} \int_0^t G_{kj}(t-u)\, dF_{ik}(u). \qquad (1.6) $$

Proof: Pyke [8], p. 1245.
II.
MOMENTS OF THE FIRST PASSAGE TIME DISTRIBUTIONS
FOR A SEMI-MARKOV PROCESS
2.1
General results
Let

$$ m_{ij}^{(r)} = \int_{0-}^{\infty} x^r\, dG_{ij}(x), \qquad \mu_{ij}^{(r)} = \int_{0-}^{\infty} x^r\, dF_{ij}(x), \qquad r = 0,1,2,\ldots, $$

whenever these moments exist. Note that $m_{ij}^{(0)} = 1$ and that $\mu_{ij}^{(0)} = p_{ij}$. We shall write $m_{ij} = m_{ij}^{(1)}$ and $\mu_{ij} = \mu_{ij}^{(1)}$. It is sometimes convenient to write

$$ \mu_i^{(r)} = \sum_{j=1}^{m} \mu_{ij}^{(r)}, $$

with the convention that $\mu_i^{(0)} = 1$, and to let $\mu_i = \mu_i^{(1)}$.

The following lemma connects the $m_{ij}^{(r)}$ (the $r$th moment of the first passage time from state $A_i$ to state $A_j$) with the $\mu_{ij}^{(r)}$ ($p_{ij}$ times the $r$th moment of the distribution of the wait in state $A_i$ before transition to state $A_j$ at the next step).
Lemma 2.1:

$$ m_{ij}^{(r)} = \mu_i^{(r)} + \sum_{s=1}^{r} \binom{r}{s} \sum_{k \ne j} \mu_{ik}^{(r-s)} m_{kj}^{(s)}, \qquad (2.1) $$

provided the moments exist.

Proof: From Lemma 1.1 we obtain

$$ \int_0^\infty x^r\, dG_{ij}(x) = \int_0^\infty x^r\, dF_{ij}(x) + \sum_{k \ne j} \int_0^\infty x^r \int_0^x d_u G_{kj}(u)\, d_x F_{ik}(x-u). $$

Changing the order of integration,

$$ m_{ij}^{(r)} = \mu_{ij}^{(r)} + \sum_{k \ne j} \int_0^\infty \Big( \int_u^\infty x^r\, d_x F_{ik}(x-u) \Big) d_u G_{kj}(u) $$
$$ = \mu_{ij}^{(r)} + \sum_{k \ne j} \int_0^\infty \Big( \int_0^\infty (u+t)^r\, dF_{ik}(t) \Big) dG_{kj}(u) $$
$$ = \mu_{ij}^{(r)} + \sum_{k \ne j} \int_0^\infty \Big( \int_0^\infty \sum_{s=0}^{r} \binom{r}{s} u^s t^{r-s}\, dF_{ik}(t) \Big) dG_{kj}(u) $$
$$ = \mu_{ij}^{(r)} + \sum_{s=0}^{r} \binom{r}{s} \sum_{k \ne j} \mu_{ik}^{(r-s)} m_{kj}^{(s)}, $$

and the result follows upon simplification (the $s = 0$ term, together with $\mu_{ij}^{(r)}$, gives $\mu_i^{(r)}$, since $m_{kj}^{(0)} = 1$).
Cor. 2.1.1:

(a) $\displaystyle m_{ij} = \mu_i + \sum_{k \ne j} p_{ik} m_{kj}$. \qquad (2.2)

(b) $\displaystyle m_{ij}^{(2)} = \mu_i^{(2)} + \sum_{k \ne j} p_{ik} m_{kj}^{(2)} + 2 \sum_{k \ne j} \mu_{ik} m_{kj}$. \qquad (2.3)

Let us write $\underline{m}_a^{(r)\prime} = (m_{1a}^{(r)}, \ldots, m_{ma}^{(r)})$ for $a = 1, \ldots, m$. In particular let $\underline{m}_a' = \underline{m}_a^{(1)\prime}$. Let $P^{(r)} = [\mu_{ij}^{(r)}]$ for $r = 0,1,\ldots$, with the convention that $P^{(0)} = P = [p_{ij}]$. Furthermore, define

$$ P_a^{(r)} = [(1 - \delta_{ja})\, \mu_{ij}^{(r)}], \quad r = 1,2,\ldots; \qquad P_a = [(1 - \delta_{ja})\, p_{ij}]. $$

Thus $P_a$ is the transition matrix $P$ with the $a$th column replaced by zeros. Also let $\underline{\mu}^{(r)\prime} = (\mu_1^{(r)}, \ldots, \mu_m^{(r)})$.
Theorem 2.2:

$$ (I - P_j)\,\underline{m}_j^{(r)} = \sum_{s=1}^{r-1} \binom{r}{s} P_j^{(r-s)}\, \underline{m}_j^{(s)} + \underline{\mu}^{(r)}. \qquad (2.4) $$

Proof: This result follows directly from equation (2.1) using the notation established above.

Cor. 2.2.1:

(a) $(I - P_j)\,\underline{m}_j = \underline{\mu}^{(1)}$. \qquad (2.5)

(b) $(I - P_j)\,\underline{m}_j^{(2)} = 2 P_j^{(1)} \underline{m}_j + \underline{\mu}^{(2)}$. \qquad (2.6)

Let us define $M^{(r)} = [m_{ij}^{(r)}]$, with $M = M^{(1)}$, and $\Lambda^{(r)} = \operatorname{diag}(\mu_1^{(r)}, \ldots, \mu_m^{(r)})$.

If $A$ is a matrix $[a_{ij}]$, we denote by $dA$ the diagonal matrix $[\delta_{ij} a_{jj}]$. We shall also use the convention that $I$ is the identity matrix, $J$ is the matrix whose elements are all unity, and $0$ is the null matrix (all square matrices of order $m$). Note that $\Lambda^{(r)} J = P^{(r)} J$.

Theorem 2.3:

$$ M^{(r)} = P[M^{(r)} - dM^{(r)}] + \sum_{s=1}^{r-1} \binom{r}{s} P^{(r-s)} [M^{(s)} - dM^{(s)}] + P^{(r)} J. \qquad (2.7) $$

Proof: This result follows directly from equation (2.1).

Cor. 2.3.1:

(a) $M = P[M - dM] + P^{(1)} J$. \qquad (2.8)

(b) $M^{(2)} = P[M^{(2)} - dM^{(2)}] + 2 P^{(1)}[M - dM] + P^{(2)} J$. \qquad (2.9)

2.2 Solving for the moments using the theory of determinants
In this section we outline a method for obtaining the $m_{ij}^{(r)}$ using equation (2.4). We earlier made the assumption that $P = [p_{ij}]$ is the transition matrix of an ergodic Markov chain. Under this assumption we can produce a solution vector $\underline{u}' = (u_1, \ldots, u_m)$ such that $\underline{u}' P = \underline{u}'$, in terms of the subdeterminants of $I - P$. Let $D_i$ denote the determinant formed by striking out the $i$th row and $i$th column of $I - P$.

Theorem 2.4: A Markov chain, with transition matrix $P$, is ergodic if and only if $D_i > 0$ for $i = 1,2,\ldots,m$. If $\underline{u}' = (D_1, \ldots, D_m)$, then $\underline{u}' P = \underline{u}'$.

Proof: This theorem is due to Mihoc and can be found in Frechet [2]. It is also stated in Barlow and Proschan [1], p. 129.

Let $\underline{\xi}' = (1, 1, \ldots, 1)$, and note that $\underline{\xi}\, \underline{u}'$ is a matrix.
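Theorem 2.4 is easily illustrated numerically: strike out the $i$th row and column of $I - P$, take determinants, and check that the resulting vector is a left fixed vector of $P$. The sketch below is an illustrative aside, and the $3 \times 3$ chain is an arbitrary example.

import numpy as np

P = np.array([[0.1, 0.6, 0.3],
              [0.4, 0.2, 0.4],
              [0.5, 0.3, 0.2]])   # an arbitrary ergodic transition matrix
A = np.eye(3) - P

# D_i = determinant of I - P with the i-th row and i-th column struck out
D = np.array([np.linalg.det(np.delete(np.delete(A, i, 0), i, 1))
              for i in range(3)])
print(D > 0)          # all True, as Theorem 2.4 requires for ergodicity
print(D @ P, D)       # the two vectors agree: u'P = u'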
Theorem 2.5: If $P$ is an ergodic transition matrix with period $d$, and $\underline{u}_1' = (u_1, \ldots, u_m)$ where $u_i = D_i / \sum_{j=1}^{m} D_j$, then

(a) $L = \lim_{n \to \infty} \frac{1}{d}\,(P^{n+1} + P^{n+2} + \cdots + P^{n+d})$ exists and $L = \underline{\xi}\, \underline{u}_1'$;

(b) $PL = LP = L$;

(c) $\underline{u}_1'$ is the unique probability vector satisfying $\underline{u}_1' P = \underline{u}_1'$.

Proof: Barlow and Proschan, [1], p. 129.
Theorem 2.6: If $P$ is an ergodic transition matrix, then

(a) $\underline{m}_j = (I - P_j)^{-1} \underline{\mu}^{(1)}$; \qquad (2.10)

(b) $\underline{m}_j^{(2)} = (I - P_j)^{-1} [2 P_j^{(1)} \underline{m}_j + \underline{\mu}^{(2)}]$. \qquad (2.11)

Proof: For $(I - P_j)^{-1}$ to exist we require $\det(I - P_j) \ne 0$. But evaluating this determinant by considering the cofactors of the $j$th column, it is easily seen that $\det(I - P_j) = D_j > 0$ (by ergodicity); and hence the results follow from equations (2.5) and (2.6).
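Equations (2.10) and (2.11) give a direct computational recipe: zero the $j$th column of $P$ (and of $P^{(1)}$) and solve the resulting non-singular systems. A sketch follows, with hypothetical wait moments assumed purely for illustration.

import numpy as np

P = np.array([[0.1, 0.6, 0.3],
              [0.4, 0.2, 0.4],
              [0.5, 0.3, 0.2]])
P1 = P * np.array([[1.0, 0.5, 2.0],      # P^(1) = [mu_ij]; the conditional first
                   [1.5, 1.0, 0.5],      # moments of the waits are assumed values
                   [2.0, 1.0, 1.0]])
P2 = P * np.array([[2.0, 0.5, 8.0],      # P^(2) = [mu_ij^(2)], also assumed
                   [4.0, 2.0, 0.5],
                   [8.0, 2.0, 2.0]])
mu1, mu2 = P1.sum(axis=1), P2.sum(axis=1)    # mu_i^(1), mu_i^(2)
I = np.eye(3)

m1, m2 = np.zeros((3, 3)), np.zeros((3, 3))
for j in range(3):
    Pj = P.copy(); Pj[:, j] = 0.0            # P with j-th column replaced by zeros
    P1j = P1.copy(); P1j[:, j] = 0.0         # likewise for P^(1)
    m1[:, j] = np.linalg.solve(I - Pj, mu1)                       # equation (2.10)
    m2[:, j] = np.linalg.solve(I - Pj, 2 * P1j @ m1[:, j] + mu2)  # equation (2.11)
print(m1); print(m2)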
Let us examine equation (2.10) in a little more detail so as to find explicit expressions for the $m_{ij}$ in terms of the cofactors of $I - P$ and $I - P_j$. Firstly, consider an arbitrary matrix $A = [a_{ij}]$. Let $A_{ij}$ be the cofactor of the $(i,j)$th element of $A$ (i.e., the signed determinant obtained by striking out the $i$th row and the $j$th column of $A$). Let $A^+ = \operatorname{adj} A = [A_{ji}]$; then $A^+ A = A A^+ = I \det A$. From this above relation we have, for each $i,j = 1,2,\ldots,m$,

$$ \sum_{k=1}^{m} a_{kj} A_{ki} = \sum_{k=1}^{m} a_{ik} A_{jk} = \delta_{ij} \det A. $$

Now, let $B_{ij}$ be the cofactor of the $(i,j)$th element of $I - P$, and let $B_{ij}^{r}$ be the cofactor of the $(i,j)$th element of $I - P_r$.
Lemma 2.7: $\det(I - P_j) = B_{jj} = D_j > 0$.

Proof:

$$ \det(I - P_j) = \sum_{k=1}^{m} [I - P_j]_{kj}\, (\text{cofactor of } [I - P_j]_{kj}) = \sum_{k=1}^{m} \delta_{kj} B_{kj}^{j} = B_{jj}^{j} = B_{jj}. $$
Theorem 2.8:

$$ m_{ij} = \frac{1}{B_{jj}} \sum_{k=1}^{m} B_{ki}^{j}\, \mu_k. \qquad (2.12) $$

Proof: From (2.10) and the definition of $(I - P_j)^{-1}$,

$$ \underline{m}_j = \frac{1}{B_{jj}}\, \operatorname{adj}(I - P_j)\, \underline{\mu}^{(1)}. $$

The result follows by considering the $i$th row of $\underline{m}_j$ and noting that $[\operatorname{adj}(I - P_j)]_{ik} = B_{ki}^{j}$.

It is possible to carry this approach further and express the $B_{ki}^{j}$ in terms of the second order minors of $I - P$; but, since the method presented in the next section, 2.3, gives us the solution in a compact form, we shall not proceed any further with this above method.
Let $B = [B_{ji}] = \operatorname{adj}(I - P)$. Since $\det(I - P) = 0$ we can conclude that

$$ B(I - P) = (I - P)B = 0, \qquad \text{i.e.,} \qquad BP = B = PB. $$

Comparison of this result with statement (b) of Theorem 2.5 implies that $B = k_0 L$, where $k_0$ is a constant determined so that the row sums of $L$ are unity.
Let $L = [l_{ij}]$; then, since $L = \underline{\xi}\, \underline{u}_1'$, $l_{ij} = u_j$, and $\sum_{j=1}^{m} u_j = 1$. If we make the additional assumption of aperiodicity, then the mean recurrence time, $\ell_j$, of state $A_j$ in the ergodic Markov chain with transition matrix $P$ is finite for all $j$, and

$$ u_j = \frac{1}{\ell_j} > 0. $$

Theorem 2.9:

$$ m_{jj} = \ell_j \sum_{k=1}^{m} \frac{\mu_k}{\ell_k}. $$

Proof: From Theorem 2.8,

$$ m_{jj} = \frac{1}{B_{jj}} \sum_{k=1}^{m} B_{kj}^{j}\, \mu_k, $$

and the result follows, noting that $B_{kj}^{j} = B_{kj} = k_0 u_k$, $B_{jj} = k_0 u_j$, and $u_j = 1/\ell_j$.
2.3 Solving for the moments using generalised inverses

In this section we obtain explicit expressions for $M$ and $M^{(2)}$ using equations (2.8) and (2.9). Throughout this section we make the assumption that $P$ is the transition matrix of an aperiodic, ergodic Markov chain, and hence the remarks made in Section 2.2 concerning the matrix $L$ still hold.

Define

$$ A_r = \sum_{i=1}^{m} \sum_{j=1}^{m} u_i\, \mu_{ij}^{(r)} = \sum_{i=1}^{m} \frac{\mu_i^{(r)}}{\ell_i}. \qquad (2.13) $$

The constant $A_1$ has been termed the "asymptotic mean increment" by Keilson and Wishart, [3].

Lemma 2.10:

(a) $L P^{(r)} L = A_r L$; \qquad (2.14)

(b) $L P^{(r)} J = A_r J$. \qquad (2.15)
The result presented in Theorem 2.9 can be obtained by an alternative, more direct, proof as follows.

Theorem 2.11:

(a) $m_{jj} = A_1 \ell_j$. \qquad (2.16)

(b) $\displaystyle m_{jj}^{(2)} = \ell_j \Big[ A_2 + 2 \sum_{s=1}^{m} \frac{1}{\ell_s} \sum_{k \ne j} \mu_{sk}^{(1)} m_{kj} \Big]$. \qquad (2.17)

Proof: Premultiply equation (2.8) by $L$ to obtain

$$ LM = LM - L\, dM + L P^{(1)} J. $$

Using equation (2.15), this reduces to

$$ L\, dM = A_1 J. \qquad (2.18) $$

Taking the $(i,j)$th element of equation (2.18),

$$ \frac{1}{\ell_j}\, m_{jj} = A_1, $$

and hence equation (2.16) follows. Similarly, for equation (2.17), from equation (2.9),

$$ L\, dM^{(2)} = 2 L P^{(1)} [M - dM] + A_2 J. \qquad (2.19) $$

Taking the $(i,j)$th element of equation (2.19),

$$ \frac{1}{\ell_j}\, m_{jj}^{(2)} = 2 \sum_{s} \frac{1}{\ell_s} \sum_{k} \mu_{sk}^{(1)} (m_{kj} - \delta_{kj} m_{jj}) + A_2, $$

and equation (2.17) follows.

From Theorem 2.11 we can conclude that the diagonal elements of $M$ are easily obtained. Furthermore, once $M$ is found, $dM^{(2)}$ will be easily determined.

Let us put $dM = D$, where $D$ is a diagonal matrix with diagonal elements $A_1 \ell_j$. Thus equation (2.8) can be rewritten as

$$ (I - P) M = P^{(1)} J - P D. $$
Note that $\det(I - P) = 0$, and hence $I - P$ is a singular matrix. To solve this equation for $M$ by matrix methods we resort to the use of generalised inverses. We denote a generalised inverse of a matrix $A$ by $A^-$. ($A^-$ is, in general, not unique.) C. R. Rao ([9], [10]) discusses the properties of generalised inverses extensively. The following lemma contains a selection of Rao's results, sufficient for our needs.

Lemma 2.12:

(a) $A^-$ is a generalised inverse of $A$ if and only if $A A^- A = A$.

(b) A necessary and sufficient condition for the equation $A X B = C$ to have a solution is $A A^- C B^- B = C$, in which case the general solution is

$$ X = A^- C B^- + W - A^- A W B B^-, $$

where $W$ is arbitrary.

Proof: (a) Rao [9], p. 24; (b) Rao [10], p. 269.
Cor. 2.12.1: A necessary and sufficient condition for the equation $A X = C$ to have a solution is $A A^- C = C$, in which case the general solution is

$$ X = A^- C + (I - A^- A) W, $$

where $W$ is arbitrary.

Proof: Put $B = I$ (in which case $B^- = I$) in Lemma 2.12(b).

To apply this lemma, to obtain the solution of equation (2.8), we need to find a generalised inverse of $I - P$.
Theorem 2.13: If $P$ is an ergodic transition matrix, then the matrix $Z = [I - (P - L)]^{-1}$ exists, and

(a) $PZ = ZP$;

(b) $Z \underline{\xi} = \underline{\xi}$;

(c) $\underline{u}_1' Z = \underline{u}_1'$;

(d) $(I - P) Z = I - L$.

Proof: Kemeny and Snell, [4], p. 100.

Cor. 2.13.1:

(a) $ZJ = J$;

(b) $LZ = ZL = L$.

Proof: Result (a) follows directly from (b) of Theorem 2.13, while result (b) follows from (b) and (c) of the same theorem.
Kemeny and Snell refer to this matrix $Z$ as the "fundamental matrix" for the Markov chain determined by $P$. However, it appears that its full importance has not hitherto been realised. We show that it is a generalised inverse of $I - P$, with the additional property that it has full rank.

Theorem 2.14: $Z = [I - (P - L)]^{-1}$ is a generalised inverse of $I - P$.

Proof: Let $(I - P)^- = Z$; then

$$ (I-P)(I-P)^-(I-P) = (I-P) Z (I-P) = (I-L)(I-P) \quad (\text{Theorem 2.13(d)}) $$
$$ = I - P \quad (\text{Theorem 2.5(b)}). $$

Thus $Z$ satisfies the criterion to be a generalised inverse of $I - P$, as given by Lemma 2.12(a).
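Numerically, $Z$ is immediate once the stationary vector is known. The following sketch (same arbitrary chain as before) mirrors the statements of Theorems 2.13 and 2.14.

import numpy as np

P = np.array([[0.1, 0.6, 0.3],
              [0.4, 0.2, 0.4],
              [0.5, 0.3, 0.2]])
m = P.shape[0]; I = np.eye(m)
w, v = np.linalg.eig(P.T)                     # stationary vector from u'P = u'
u = np.real(v[:, np.argmin(np.abs(w - 1))]); u /= u.sum()
L = np.outer(np.ones(m), u)                   # L = xi u', identical rows u'
Z = np.linalg.inv(I - (P - L))                # the fundamental matrix

print(np.allclose(P @ Z, Z @ P))                  # Theorem 2.13(a)
print(np.allclose(Z @ np.ones(m), np.ones(m)))    # (b): Z xi = xi
print(np.allclose(u @ Z, u))                      # (c): u_1' Z = u_1'
print(np.allclose((I - P) @ Z, I - L))            # (d)
print(np.allclose((I - P) @ Z @ (I - P), I - P))  # Theorem 2.14: a g-inverse of I - P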
We shall make repeated use of the following easily proved results.

Lemma 2.15: Let $\Delta$ be any diagonal matrix and let $X$ be an arbitrary square matrix; then

(a) $d(X\Delta) = (dX)\Delta$;

(b) $d(XJ) = \frac{1}{A_1}\, d(XL)\, D$;

(c) $d(J\Delta) = \Delta$;

(d) $J\, dL = L$.
Theorem 2.16: If the imbedded Markov chain of the semi-Markov process is ergodic, then

$$ M = \Big[ \frac{1}{A_1} \{Z P^{(1)} L - J\, d(Z P^{(1)} L)\} + I - Z + J\, dZ \Big] D, \qquad (2.20) $$

where $D$ is the diagonal matrix with diagonal elements $A_1 \ell_j$.

Proof: Using Cor. 2.12.1, the general solution of

$$ (I - P) M = P^{(1)} J - P D $$

is given by

$$ M = Z P^{(1)} J - Z P D + [I - Z(I - P)] W, \qquad (2.21) $$

where $W$ is an arbitrary matrix, provided the consistency restriction

$$ (I - P) Z (P^{(1)} J - P D) = P^{(1)} J - P D $$

is satisfied. Now

$$ (I - P) Z (P^{(1)} J - P D) = (I - L)(P^{(1)} J - P D) \quad (\text{Theorem 2.13(d)}) $$
$$ = P^{(1)} J - P D - A_1 J + L\, dM \quad (\text{Lemma 2.10(b)}) $$
$$ = P^{(1)} J - P D \quad (\text{by (2.18)}), $$

and thus the required consistency condition is satisfied.

Furthermore, using the results of Theorem 2.13, it is easily seen that

$$ I - Z(I - P) = L, $$

and hence equation (2.21) becomes

$$ M = Z P^{(1)} J - Z P D + L W. $$

Let $W = [w_{ij}]$; then $[LW]_{ij} = \sum_k u_k w_{kj} = b_{jj}$, say; i.e., an expression independent of $i$. Let $B_1 = \operatorname{diag}(b_{11}, \ldots, b_{mm})$; then

$$ M = Z P^{(1)} J - Z P D + J B_1. \qquad (2.22) $$

Thus, instead of having $m^2$ arbitrary elements of $W$ there are only $m$, the diagonal elements of $B_1$. These can be found using the results for $dM$, i.e., $D$. From (2.22),

$$ dM = d(Z P^{(1)} J) - d(Z P D) + d(J B_1). $$

Using the results of Lemma 2.15 we have

$$ D = \frac{1}{A_1}\, d(Z P^{(1)} L)\, D - d(ZP)\, D + B_1. $$

From this equation we have an expression for $B_1$. Substituting for $B_1$ in equation (2.22) we obtain

$$ M = Z P^{(1)} J - Z P D + J \Big[ I + d(ZP) - \frac{1}{A_1}\, d(Z P^{(1)} L) \Big] D. $$

Using Lemma 2.15(d) and equation (2.18), this above equation can be written as

$$ M = \Big[ \frac{1}{A_1} \{Z P^{(1)} L - J\, d(Z P^{(1)} L)\} - ZP + J\, d(ZP) + J \Big] D. $$

Using Theorem 2.13(d) in conjunction with Lemma 2.15(d), equation (2.20) follows.
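The closed form (2.20) can be checked against the column-by-column solution of Theorem 2.6. The sketch below does so with the same arbitrary chain and hypothetical wait moments used earlier.

import numpy as np

P = np.array([[0.1, 0.6, 0.3],
              [0.4, 0.2, 0.4],
              [0.5, 0.3, 0.2]])
P1 = P * np.array([[1.0, 0.5, 2.0],
                   [1.5, 1.0, 0.5],
                   [2.0, 1.0, 1.0]])          # hypothetical P^(1) = [mu_ij]
n = 3; I = np.eye(n); J = np.ones((n, n))
w, v = np.linalg.eig(P.T)
u = np.real(v[:, np.argmin(np.abs(w - 1))]); u /= u.sum()
L = np.outer(np.ones(n), u)
Z = np.linalg.inv(I - P + L)
d = lambda X: np.diag(np.diag(X))             # the d-operator of Lemma 2.15

A1 = u @ P1.sum(axis=1)                       # A_1, equation (2.13)
D = A1 * np.diag(1.0 / u)                     # D = diag(A_1 l_j)
M = ((Z @ P1 @ L - J @ d(Z @ P1 @ L)) / A1 + I - Z + J @ d(Z)) @ D   # (2.20)

mu1 = P1.sum(axis=1)
for j in range(n):                            # compare with Theorem 2.6, column j
    Pj = P.copy(); Pj[:, j] = 0.0
    assert np.allclose(M[:, j], np.linalg.solve(I - Pj, mu1))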
Theorem 2.16 presents a new result. It generalises the results obtained by Kemeny and Snell, [4], (p. 79) for the mean first passage matrix of a Markov chain. The technique used by Kemeny and Snell is different from the one presented here, since they do not at any stage utilize the generalised inverse method of solving linear equations. The following corollary shows that if all the $Q_{ij}(\cdot)$ degenerate at one for all $i,j$, i.e., the semi-Markov process degenerates to a Markov chain, then (2.20) gives the results obtained by Kemeny and Snell.
Cor. 2.16.1: If the semi-Markov process degenerates to a Markov chain, then the mean first passage matrix, $M_1$, for this ergodic Markov chain is given by

$$ M_1 = [I - Z + J\, dZ]\, D_1, \qquad (2.23) $$

where $D_1$ is a diagonal matrix with diagonal elements $\ell_j$.

Proof: For a Markov chain, $\mu_{ij} = p_{ij}$, and hence $P^{(1)} = P$. Also, since $\mu_i = 1$, it is easily seen that $A_1 = 1$. Therefore $D = D_1$ and

$$ \frac{1}{A_1} \big[ Z P^{(1)} L - J\, d(Z P^{(1)} L) \big] = ZPL - J\, d(ZPL) = ZL - J\, d(ZL) = L - J\, dL = 0, $$

and thus equation (2.20) becomes (2.23).
Theorem 2.17: If the imbedded Markov chain of the semi-Markov process is ergodic, then

$$ dM^{(2)} = \frac{1}{A_1} \Big[ 2 \Big\{ \frac{1}{A_1}\, d(L P^{(1)} Z P^{(1)} L) - d(Z P^{(1)} L) - d(L P^{(1)} Z) + A_1\, dZ \Big\} D + A_2 I \Big] D. \qquad (2.24) $$
Proof: From equation (2.19), substituting for $M - dM$,

$$ L\, dM^{(2)} = 2 L P^{(1)} \Big[ \frac{1}{A_1} Z P^{(1)} L - \frac{1}{A_1} J\, d(Z P^{(1)} L) - Z + J\, dZ \Big] D + A_2 J. $$

Since, from equation (2.18), $L = A_1 J (dM)^{-1}$, we have

$$ A_1 J\, dM^{(2)} = \Big[ 2 L P^{(1)} \Big\{ \frac{1}{A_1} Z P^{(1)} L - \frac{1}{A_1} J\, d(Z P^{(1)} L) - Z + J\, dZ \Big\} D + A_2 J \Big] D. $$

Taking into consideration only the diagonal elements we have, with the aid of Lemma 2.15, that

$$ A_1\, dM^{(2)} = \Big[ 2 \Big\{ \frac{1}{A_1}\, d(L P^{(1)} Z P^{(1)} L) - \frac{1}{A_1}\, d(L P^{(1)} J)\, d(Z P^{(1)} L) - d(L P^{(1)} Z) + d(L P^{(1)} J)\, dZ \Big\} D + A_2 I \Big] D. $$

From Lemma 2.10, $L P^{(1)} J = A_1 J$, and hence $d(L P^{(1)} J) = A_1 I$. Substitution of this result gives equation (2.24).
Cor. 2.17.1: If the semi-Markov process degenerates to a Markov chain, and if $M_1^{(2)}$ is the second moment first passage matrix for this ergodic chain, then

$$ dM_1^{(2)} = [2 (dZ) D_1 - I] D_1. \qquad (2.25) $$

Proof: For a Markov chain $P^{(2)} = P^{(1)} = P$ and $A_2 = A_1 = 1$. Also

$$ Z P^{(1)} L = ZPL = ZL = L, \qquad L P^{(1)} Z = LPZ = LZ = L, \qquad L P^{(1)} Z P^{(1)} L = L. $$

Thus equation (2.24) becomes

$$ dM_1^{(2)} = [2 \{dL - dL - dL + dZ\} D_1 + I] D_1 = [I - 2 (dL) D_1 + 2 (dZ) D_1] D_1, $$

which becomes equation (2.25) upon observing that $(dL) D_1 = I$.

This result for ergodic Markov chains was found by Kemeny and Snell, [4], p. 83.
Theorem 2.18:

$$ M^{(2)} = [I - Z + J\, dZ]\, dM^{(2)} + 2[J\, d(Z P^{(1)}) - Z P^{(1)}] D + 2[Z P^{(1)} M - J\, d(Z P^{(1)} M)] + Z P^{(2)} J - J\, d(Z P^{(2)} J). \qquad (2.26) $$

Proof: We parallel the method used in Theorem 2.16. Starting with equation (2.9), the required solution is given by

$$ M^{(2)} = 2 Z P^{(1)} [M - dM] + Z P^{(2)} J - Z P\, dM^{(2)} + J B_2, \qquad (2.27) $$

where $B_2$ is an arbitrary diagonal matrix which can easily be found since $dM^{(2)}$ has already been determined. From equation (2.27),

$$ dM^{(2)} = 2\, d(Z P^{(1)} M) - 2\, d(Z P^{(1)})\, D + d(Z P^{(2)} J) - d(ZP)\, dM^{(2)} + B_2. $$

$B_2$ is now determined by this above equation, and hence substitution for $B_2$ in equation (2.27) gives, upon simplification,

$$ M^{(2)} = [J - ZP + J\, d(ZP)]\, dM^{(2)} + 2[J\, d(Z P^{(1)}) - Z P^{(1)}] D + 2[Z P^{(1)} M - J\, d(Z P^{(1)} M)] + Z P^{(2)} J - J\, d(Z P^{(2)} J), $$

and the result follows as in the proof of Theorem 2.16.

Further simplification is possible in the case of a Markov chain, and it can be shown that equation (2.26) leads to the corresponding result found by Kemeny and Snell, [4], p. 83.
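As with (2.20), equations (2.24) and (2.26) can be verified against the column-by-column solution (2.11). The sketch below (same arbitrary data as before, with a hypothetical $P^{(2)}$) does exactly that.

import numpy as np

P = np.array([[0.1, 0.6, 0.3],
              [0.4, 0.2, 0.4],
              [0.5, 0.3, 0.2]])
P1 = P * np.array([[1.0, 0.5, 2.0],
                   [1.5, 1.0, 0.5],
                   [2.0, 1.0, 1.0]])          # hypothetical [mu_ij^(1)]
P2 = P * np.array([[2.0, 0.5, 8.0],
                   [4.0, 2.0, 0.5],
                   [8.0, 2.0, 2.0]])          # hypothetical [mu_ij^(2)]
n = 3; I = np.eye(n); J = np.ones((n, n))
w, v = np.linalg.eig(P.T)
u = np.real(v[:, np.argmin(np.abs(w - 1))]); u /= u.sum()
L = np.outer(np.ones(n), u)
Z = np.linalg.inv(I - P + L)
d = lambda X: np.diag(np.diag(X))
A1, A2 = u @ P1.sum(1), u @ P2.sum(1)
D = A1 * np.diag(1.0 / u)
M = ((Z @ P1 @ L - J @ d(Z @ P1 @ L)) / A1 + I - Z + J @ d(Z)) @ D           # (2.20)

dM2 = ((2 * (d(L @ P1 @ Z @ P1 @ L) / A1 - d(Z @ P1 @ L) - d(L @ P1 @ Z)
             + A1 * d(Z)) @ D + A2 * I) @ D) / A1                            # (2.24)
M2 = ((I - Z + J @ d(Z)) @ dM2 + 2 * (J @ d(Z @ P1) - Z @ P1) @ D
      + 2 * (Z @ P1 @ M - J @ d(Z @ P1 @ M)) + Z @ P2 @ J - J @ d(Z @ P2 @ J))  # (2.26)

mu2 = P2.sum(1)
for j in range(n):                            # compare with equation (2.11)
    Pj = P.copy(); Pj[:, j] = 0.0
    P1j = P1.copy(); P1j[:, j] = 0.0
    rhs = 2 * P1j @ M[:, j] + mu2
    assert np.allclose(M2[:, j], np.linalg.solve(I - Pj, rhs))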
III. MOMENTS OF MARKOV RENEWAL PROCESSES

3.1 The asymptotic value of the first moment

In this section we are interested in obtaining the asymptotic value of the matrix $\underline{M}(t) = [M_{ij}(t)]$, where the $M_{ij}(t)$ are the renewal functions of the Markov renewal process.

Since we have assumed that the imbedded Markov chain is irreducible, we have that "return to state $A_j$, given the process started in $A_i$" is a recurrent event. We shall make the additional assumption that this Markov chain is aperiodic.

Under these assumptions we are able to make use of the results for general renewal processes. In particular, from equation (1.1) with $\mu_r = m_{jj}^{(r)}$ and $\nu_r = m_{ij}^{(r)}$, we obtain

$$ M_{ij}(t) = \frac{t}{m_{jj}} + \Big[ \frac{m_{jj}^{(2)}}{2 m_{jj}^2} - \frac{m_{ij}}{m_{jj}} \Big] + o(1). \qquad (3.1) $$

Kshirsagar and Gupta, [5], in their paper on finding the asymptotic value of $\underline{M}(t)$, do not use this approach but instead use the Laplace-Stieltjes transforms of the first passage time distributions $G_{ij}(t)$ and apply the standard Tauberian arguments. They found it necessary to consider many special cases depending on the form of the matrices $P^{(1)}$ and $P$. We shall show later that the results obtained in this paper agree with those obtained by Kshirsagar and Gupta.

Using the same matrix notation as earlier introduced, equation (3.1) can be written as

$$ \underline{M}(t) = t\, J (dM)^{-1} + \tfrac{1}{2} J\, dM^{(2)} \{(dM)^{-1}\}^2 - M (dM)^{-1} + o(1). \qquad (3.2) $$
The following lemmas collect together some useful results.

Lemma 3.1:

(a) $(dM)^{-1} = \frac{1}{A_1}\, dL$;

(b) $J (dM)^{-1} = \frac{1}{A_1} L$.

Proof: Result (a) follows directly from equation (2.16), while result (b) follows from equation (2.18).

Lemma 3.2: If $X$ is an arbitrary square matrix, then

(a) $J\, d(LX) = LX$, \quad $L\, d(LX) = LX\, dL$;

(b) $J\, d(XL) = JX\, dL$, \quad $L\, d(XL) = JX (dL)^2$.
Theorem 3.3:

$$ \underline{M}(t) = \frac{t}{A_1} L + \frac{1}{A_1^2} L P^{(1)} Z P^{(1)} L + \frac{A_2}{2 A_1^2} L - \frac{1}{A_1} Z P^{(1)} L - \frac{1}{A_1} L P^{(1)} Z + Z - I + o(1). \qquad (3.3) $$

Proof: Using the result of Lemma 3.1(b), equation (3.2) becomes

$$ \underline{M}(t) = \frac{t}{A_1} L + \frac{1}{2 A_1} L\, dM^{(2)} (dM)^{-1} - M (dM)^{-1} + o(1). $$

Substituting for $L\, dM^{(2)}$ from equation (2.19) we obtain

$$ \underline{M}(t) = \frac{t}{A_1} L + \frac{1}{2 A_1} \Big\{ 2 L P^{(1)} M (dM)^{-1} - 2 L P^{(1)} + \frac{A_2}{A_1} L \Big\} - M (dM)^{-1} + o(1). \qquad (3.4) $$

Theorem 2.16 implies that

$$ M (dM)^{-1} = \frac{1}{A_1} Z P^{(1)} L - \frac{1}{A_1} J\, d(Z P^{(1)} L) + I - Z + J\, dZ. \qquad (3.5) $$

Substitution of (3.5) into (3.4) gives

$$ \underline{M}(t) = \frac{t}{A_1} L + \frac{1}{A_1^2} L P^{(1)} Z P^{(1)} L - \frac{1}{A_1^2} L P^{(1)} J\, d(Z P^{(1)} L) - \frac{1}{A_1} L P^{(1)} Z + \frac{1}{A_1} L P^{(1)} J\, dZ $$
$$ + \frac{A_2}{2 A_1^2} L - \frac{1}{A_1} Z P^{(1)} L + \frac{1}{A_1} J\, d(Z P^{(1)} L) - I + Z - J\, dZ + o(1). $$

Application of Lemma 2.10, namely that $L P^{(1)} J = A_1 J$, and simplification yields the required result.

Computationally, the calculation of $\underline{M}(t)$ using equation (3.3) appears rather long. However, this equation can be simplified by factorisation to yield

$$ \underline{M}(t) = \frac{t}{A_1} L + \Big( \frac{1}{A_1} L P^{(1)} - I \Big) Z \Big( \frac{1}{A_1} P^{(1)} L - I \Big) + \frac{A_2}{2 A_1^2} L - I + o(1). $$
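That the four cross terms of (3.3) do collapse into this product form is easily confirmed numerically (illustrative sketch, same arbitrary data as before).

import numpy as np

P = np.array([[0.1, 0.6, 0.3],
              [0.4, 0.2, 0.4],
              [0.5, 0.3, 0.2]])
P1 = P * np.array([[1.0, 0.5, 2.0],
                   [1.5, 1.0, 0.5],
                   [2.0, 1.0, 1.0]])          # hypothetical P^(1)
n = 3; I = np.eye(n)
w, v = np.linalg.eig(P.T)
u = np.real(v[:, np.argmin(np.abs(w - 1))]); u /= u.sum()
L = np.outer(np.ones(n), u)
Z = np.linalg.inv(I - P + L)
A1 = u @ P1.sum(1)

factored = (L @ P1 / A1 - I) @ Z @ (P1 @ L / A1 - I)
expanded = (L @ P1 @ Z @ P1 @ L / A1**2 - L @ P1 @ Z / A1
            - Z @ P1 @ L / A1 + Z)
print(np.allclose(factored, expanded))        # True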
3.2 The asymptotic value of the second moment

Rather than consider the second moment directly, we shall focus attention on

$$ W_{ij}(t) = E\{N_j(t)(N_j(t) + 1) \mid Z_0 = i\}. $$

Should one be interested in $V_{ij}(t) = \operatorname{var}\{N_j(t) \mid Z_0 = i\}$ or the second moment $E\{N_j^2(t) \mid Z_0 = i\}$, the results of this section together with those of 3.1 will suffice.

To determine $W_{ij}(t)$ we apply equation (1.3) with $\mu_r = m_{jj}^{(r)}$ and $\nu_r = m_{ij}^{(r)}$:

$$ W_{ij}(t) = \frac{t^2}{m_{jj}^2} + 2\Big[ \frac{m_{jj}^{(2)}}{m_{jj}^3} - \frac{m_{ij}}{m_{jj}^2} \Big] t + \frac{3 (m_{jj}^{(2)})^2}{2 m_{jj}^4} - \frac{2 m_{jj}^{(3)}}{3 m_{jj}^3} - \frac{2 m_{ij} m_{jj}^{(2)}}{m_{jj}^3} + \frac{m_{ij}^{(2)}}{m_{jj}^2} + o(1). \qquad (3.6) $$

Theoretically, to find an explicit expression for $\underline{W}(t) = [W_{ij}(t)]$, all we need is $dM^{(r)}$ $(r = 1,2,3)$ and $M^{(r)}$ $(r = 1,2)$. In fact,

$$ \underline{W}(t) = t^2 J \{(dM)^{-1}\}^2 + 2t \big[ J\, dM^{(2)} \{(dM)^{-1}\}^3 - M \{(dM)^{-1}\}^2 \big] $$
$$ + \tfrac{3}{2} J (dM^{(2)})^2 \{(dM)^{-1}\}^4 - \tfrac{2}{3} J\, dM^{(3)} \{(dM)^{-1}\}^3 - 2 M\, dM^{(2)} \{(dM)^{-1}\}^3 + M^{(2)} \{(dM)^{-1}\}^2 + o(1). \qquad (3.7) $$
We shall determine explicit expressions for the coefficients of $t^2$ and $t$, say $\mathcal{A}_1$ and $\mathcal{A}_2$, and indicate how to find a similar expression for the constant term $\mathcal{A}_3$, so that $\underline{W}(t) = t^2 \mathcal{A}_1 + t \mathcal{A}_2 + \mathcal{A}_3 + o(1)$.

Theorem 3.4:

$$ \mathcal{A}_1 = \frac{1}{A_1^2} L\, dL, \qquad (3.8) $$

$$ \mathcal{A}_2 = \Big[ \frac{4}{A_1^2} L P^{(1)} Z P^{(1)} L - \frac{2}{A_1} Z P^{(1)} L - \frac{4}{A_1} L P^{(1)} Z + 2Z - 2I + \frac{2 A_2}{A_1^2} L \Big] \frac{1}{A_1}\, dL - \frac{2}{A_1^2} L\, d(Z P^{(1)} L) + \frac{2}{A_1} L\, dZ. \qquad (3.9) $$

Proof: From equation (3.7),

$$ \mathcal{A}_1 = J \{(dM)^{-1}\}^2 = \frac{1}{A_1} L (dM)^{-1} \quad (\text{by Lemma 3.1(b)}) = \frac{1}{A_1^2} L\, dL \quad (\text{by Lemma 3.1(a)}). $$

Similarly, from equation (3.7),

$$ \mathcal{A}_2 = 2 J\, dM^{(2)} \{(dM)^{-1}\}^3 - 2 M \{(dM)^{-1}\}^2 $$
$$ = \Big[ \frac{2}{A_1} \Big\{ 2 L P^{(1)} M (dM)^{-1} - 2 L P^{(1)} + \frac{A_2}{A_1} L \Big\} - 2 M (dM)^{-1} \Big] \frac{1}{A_1}\, dL $$

(by equation (2.19) and Lemma 3.1). Using equation (3.5) for $M (dM)^{-1}$, this expression can be simplified, with the aid of Lemma 2.10 and Lemma 2.15(d), to give the required expression (3.9) for $\mathcal{A}_2$.
The expression for $\mathcal{A}_3$ involves a considerable amount of computation and is omitted. From equation (3.7) it is evident that an expression for $dM^{(3)}$ is required. Actually it is sufficient to find only $L\, dM^{(3)}$, and this is easily expressed in terms of $M$ and $M^{(2)}$ by premultiplying equation (2.7) by $L$ with $r = 3$; i.e.,

$$ L\, dM^{(3)} = 3 L P^{(1)} [M^{(2)} - dM^{(2)}] + 3 L P^{(2)} [M - dM] + A_3 J. $$

Simplification of the expression for $\mathcal{A}_3$ can now be effected making use of Theorems 2.16, 2.17, and 2.18.
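Equation (3.8) admits a one-line numerical check (illustrative sketch, same arbitrary data as before).

import numpy as np

P = np.array([[0.1, 0.6, 0.3],
              [0.4, 0.2, 0.4],
              [0.5, 0.3, 0.2]])
P1 = P * np.array([[1.0, 0.5, 2.0],
                   [1.5, 1.0, 0.5],
                   [2.0, 1.0, 1.0]])          # hypothetical P^(1)
n = 3
w, v = np.linalg.eig(P.T)
u = np.real(v[:, np.argmin(np.abs(w - 1))]); u /= u.sum()
L = np.outer(np.ones(n), u)
A1 = u @ P1.sum(1)

dM_inv = np.diag(u) / A1                       # Lemma 3.1(a): (dM)^{-1} = dL/A_1
lhs = np.ones((n, n)) @ dM_inv @ dM_inv        # J{(dM)^{-1}}^2, from (3.7)
rhs = L @ np.diag(u) / A1**2                   # (1/A_1^2) L dL, equation (3.8)
print(np.allclose(lhs, rhs))                   # True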
IV. COMPARISON OF RESULTS WITH THOSE OBTAINED BY KSHIRSAGAR AND GUPTA

4.1 Summary of earlier results

Kshirsagar and Gupta, [5], showed that

$$ \underline{M}(t) = \alpha t H_0 + \alpha H_1 + \alpha (\tfrac{1}{2} \alpha k_2 - a_1) H_0 - I + o(1), \qquad (4.1) $$

$$ \underline{W}(t) = t^2 G_1 + t G_2 + G_3 + o(1), \qquad (4.2) $$

where

(i) $\operatorname{adj}(I - P + s P^{(1)}) = H_0 + s H_1 + s^2 H_2 + \cdots + s^{m-1} H_{m-1}$; \qquad (4.3)

(ii) under the assumption that $P^{(1)}$ is non-singular,

$$ \det(I - P + s P^{(1)}) = (\det P^{(1)})\, s (\delta_1 + s)(\delta_2 + s) \cdots (\delta_{m-1} + s), \qquad (4.4) $$

where $\delta_1, \delta_2, \ldots, \delta_{m-1}, 0$ are the latent roots of $[P^{(1)}]^{-1}(I - P)$; \qquad (4.5)

(iii)

$$ a_1 = \sum_i \frac{1}{\delta_i}, \qquad (4.6) $$
$$ a_2 = \sum_{i > j} \frac{1}{\delta_i \delta_j}, \qquad (4.7) $$
$$ a_{m-1} = \frac{1}{\delta_1 \delta_2 \cdots \delta_{m-1}}, \qquad (4.8) $$
$$ \alpha = \frac{a_{m-1}}{\det P^{(1)}}; \qquad (4.9) $$

(iv)

$$ H_0 P^{(r)} H_0 = k_r H_0 \qquad (r = 2, 3, \ldots); \qquad (4.10) $$

(v)

$$ G_1 = \alpha^2 H_0\, dH_0, \qquad (4.11) $$
$$ G_2 = 2 \alpha^2 \{\Sigma_1 + (\alpha k_2 - 2 a_1) H_0\, dH_0\} - 2 \alpha\, dH_0, \qquad (4.12) $$
$$ G_3 = 2 \alpha^2 \{ (\tfrac{1}{2} \alpha k_2 - 2 a_1)\, dH_0 + \Sigma_2 + \tfrac{1}{2} \alpha \Sigma_3 + (3 a_1^2 - 2 a_2 - 3 \alpha a_1 k_2 - \tfrac{1}{3} \alpha k_3 + \tfrac{3}{4} \alpha^2 k_2^2) H_0\, dH_0 \} - 2 \alpha \{ (\tfrac{1}{2} \alpha k_2 - a_1)\, dH_0 + dH_1 \}, \qquad (4.13) $$

and

$$ \Sigma_1 = H_1\, dH_0 + H_0\, dH_1, \qquad (4.14) $$
$$ \Sigma_2 = [H_0 P^{(2)} H_1]\, dH_0 + H_0\, d[H_0 P^{(2)} H_1], \qquad (4.15) $$
$$ \Sigma_3 = [H_0 P^{(2)} H_0]\, dH_1 + H_1\, d[H_0 P^{(2)} H_0]. \qquad (4.16) $$
4.2 Comparison of results

We show in this section that, under the same assumption that $P^{(1)}$ be non-singular, the results presented in Section 4.1 are the same as those presented in Sections 3.1 and 3.2 of this paper.

Since, for any matrix $A$, $A (\operatorname{adj} A) = (\operatorname{adj} A) A = I \det A$, from equation (4.3) we obtain

$$ (I - P + s P^{(1)})(H_0 + s H_1 + \cdots + s^{m-1} H_{m-1}) = (H_0 + s H_1 + \cdots + s^{m-1} H_{m-1})(I - P + s P^{(1)}) = I \det(I - P + s P^{(1)}). $$

Let us write

$$ \det(I - P + s P^{(1)}) = b_0 + b_1 s + b_2 s^2 + \cdots + b_m s^m. \qquad (4.17) $$

Equating coefficients of $s^r$ we have

$$ H_0 - P H_0 = H_0 - H_0 P = b_0 I, \qquad (4.18) $$
$$ H_r - P H_r + P^{(1)} H_{r-1} = H_r - H_r P + H_{r-1} P^{(1)} = b_r I \qquad (r = 1, \ldots, m-1), \qquad (4.19) $$
$$ P^{(1)} H_{m-1} = H_{m-1} P^{(1)} = b_m I. \qquad (4.20) $$

From equations (4.17) and (4.18) we note that $b_0 = \det(I - P) = 0$, and hence $H_0 = P H_0 = H_0 P$. From this result we can conclude that, for some constant $k_0$,

$$ H_0 = k_0 L. \qquad (4.21) $$
From equations (4.19) and (4.20), pre- and post-multiplication of each equation by $L$ gives

$$ L P^{(1)} H_{r-1} = H_{r-1} P^{(1)} L = b_r L \qquad (r = 1, \ldots, m). \qquad (4.22) $$

In particular, when $r = 1$, $k_0 L P^{(1)} L = b_1 L$.

The following theorem gives a recurrence relation that enables us to determine $H_r$ once $H_{r-1}$ is known.

Theorem 4.1: For $r = 1, 2, \ldots, m-1$,

$$ H_r = b_r Z - Z P^{(1)} H_{r-1} + \frac{k_0 b_{r+1}}{b_1} L + \frac{k_0}{b_1} L P^{(1)} Z P^{(1)} H_{r-1} - \frac{k_0 b_r}{b_1} L P^{(1)} Z. \qquad (4.23) $$

Proof: From equation (4.19), for any $r = 1, 2, \ldots, m-1$ we have

$$ (I - P) H_r = b_r I - P^{(1)} H_{r-1}, \qquad (4.24) $$
$$ H_r (I - P) = b_r I - H_{r-1} P^{(1)}. \qquad (4.25) $$

Let us assume that $H_{r-1}$ is determined. We shall derive equation (4.23) for $H_r$ using equation (4.24) together with Cor. 2.12.1. (Alternatively, we could use equation (4.25) with an appropriate form of Lemma 2.12 to obtain the same result.)

Since $Z$ is a generalised inverse of $I - P$, we have, upon application of Cor. 2.12.1, that

$$ H_r = Z(b_r I - P^{(1)} H_{r-1}) + [I - Z(I - P)] W_r, $$

i.e.,

$$ H_r = b_r Z - Z P^{(1)} H_{r-1} + L W_r. \qquad (4.26) $$

We eliminate the arbitrary matrix $W_r$ from equation (4.26) by premultiplication by $L P^{(1)}$, to obtain

$$ L P^{(1)} H_r = b_r L P^{(1)} Z - L P^{(1)} Z P^{(1)} H_{r-1} + \frac{b_1}{k_0} L W_r $$

(from equation (4.22) with $r = 1$), while $L P^{(1)} H_r = b_{r+1} L$ (from equation (4.22)). Thus

$$ L W_r = \frac{k_0 b_{r+1}}{b_1} L + \frac{k_0}{b_1} L P^{(1)} Z P^{(1)} H_{r-1} - \frac{k_0 b_r}{b_1} L P^{(1)} Z. $$

Substitution in equation (4.26) gives the required result.

Cor. 4.1.1:

$$ H_1 = b_1 Z - k_0 Z P^{(1)} L + \frac{k_0^2}{b_1} L P^{(1)} Z P^{(1)} L - k_0 L P^{(1)} Z + \frac{k_0 b_2}{b_1} L. \qquad (4.27) $$

Proof: Put $r = 1$ in equation (4.23) and use $H_0 = k_0 L$.
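The identifications (4.21) and (4.27) can be checked numerically by reading off the coefficient matrices $H_r$ of $\operatorname{adj}(I - P + s P^{(1)})$ with a polynomial fit, the adjugate being computed as determinant times inverse at sample points $s > 0$. The data are the same arbitrary matrices used earlier, and the relations $b_1 = k_0 A_1$ and $a_1 = b_2/b_1$ anticipated in (4.28)-(4.29) are used to rewrite (4.27).

import numpy as np

P = np.array([[0.1, 0.6, 0.3],
              [0.4, 0.2, 0.4],
              [0.5, 0.3, 0.2]])
P1 = P * np.array([[1.0, 0.5, 2.0],
                   [1.5, 1.0, 0.5],
                   [2.0, 1.0, 1.0]])          # hypothetical P^(1)
n = 3; I = np.eye(n)
w, v = np.linalg.eig(P.T)
u = np.real(v[:, np.argmin(np.abs(w - 1))]); u /= u.sum()
L = np.outer(np.ones(n), u)
Z = np.linalg.inv(I - P + L)
A1 = u @ P1.sum(1)

# entries of adj(I - P + sP^(1)) are polynomials of degree m-1 = 2 in s:
# recover H_0 and H_1 from three sample points
ss = np.array([0.5, 1.0, 1.5])
adjs = [np.linalg.det(I - P + s * P1) * np.linalg.inv(I - P + s * P1) for s in ss]
V = np.vander(ss, 3, increasing=True)          # columns 1, s, s^2
coef = np.linalg.solve(V, np.stack([a.ravel() for a in adjs]))
H0, H1 = coef[0].reshape(n, n), coef[1].reshape(n, n)

# det(I - P + sP^(1)) = b_1 s + b_2 s^2 + b_3 s^3, since b_0 = 0
dets = np.array([np.linalg.det(I - P + s * P1) for s in ss])
b1, b2, b3 = np.linalg.solve(V, dets / ss)
k0 = H0[0, 0] / L[0, 0]
a1 = b2 / b1

print(np.allclose(H0, k0 * L))                 # (4.21): H_0 = k_0 L
print(np.isclose(b1, k0 * A1))                 # result (iii) below: b_1 = k_0 A_1
H1f = k0 * (A1 * Z - Z @ P1 @ L + L @ P1 @ Z @ P1 @ L / A1 - L @ P1 @ Z + a1 * L)
print(np.allclose(H1, H1f))                    # (4.27), rewritten via (4.28)-(4.29)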
We have now expressed $H_0$ and $H_1$ in terms of our known matrices $L$, $P^{(1)}$, and $Z$. We now find relationships between $k_0, b_1, b_2, b_3, k_2, \alpha, a_1, a_2$ and the $A_1, A_2$.

(i) From Lemma 2.10, $L P^{(r)} L = A_r L$ $(r = 1, 2, \ldots)$; thus, since $L = \frac{1}{k_0} H_0$,

$$ H_0 P^{(r)} H_0 = A_r k_0 H_0. $$

Comparison with equation (4.10) shows that $k_r = A_r k_0$ for $r = 2, 3, \ldots$.

(ii) From equations (4.4) and (4.5) we have that

$$ \det(I - P + s P^{(1)}) = s (\det P^{(1)})\, \delta_1 \delta_2 \cdots \delta_{m-1} \Big( 1 + \frac{s}{\delta_1} \Big) \Big( 1 + \frac{s}{\delta_2} \Big) \cdots \Big( 1 + \frac{s}{\delta_{m-1}} \Big) $$
$$ = s\, \frac{\det P^{(1)}}{a_{m-1}} \Big[ 1 + s \Big( \sum_i \frac{1}{\delta_i} \Big) + s^2 \Big( \sum_{i > j} \frac{1}{\delta_i \delta_j} \Big) + \cdots \Big]. $$

Thus the coefficient of $s$ in $\det(I - P + s P^{(1)})$ is

$$ \frac{\det P^{(1)}}{a_{m-1}} = \frac{1}{\alpha} \quad (\text{from equation (4.9)}) = b_1 \quad (\text{from equation (4.17)}). $$

Also, the coefficient of $s^2$ is $a_1/\alpha = b_2$, and the coefficient of $s^3$ is $a_2/\alpha = b_3$ (from equations (4.6), (4.7), (4.9) and (4.17)).

(iii) From equation (4.22) when $r = 1$ we have $k_0 L P^{(1)} L = b_1 L$; but, by Lemma 2.10, $L P^{(1)} L = A_1 L$, and thus $k_0 A_1 = b_1$.

From these above results (i), (ii), and (iii) we have that

$$ A_1 = \frac{b_1}{k_0} = \frac{1}{\alpha k_0}, \qquad A_2 = \frac{k_2}{k_0}, \qquad (4.28) $$

$$ a_1 = \frac{b_2}{b_1}, \qquad a_2 = \frac{b_3}{b_1}. \qquad (4.29) $$
Lemma 4.2: The expression (4.1) for $\underline{M}(t)$ is the same as that given by Theorem 3.3.

Proof: Equation (4.1) states that

$$ \underline{M}(t) = \alpha t H_0 + \alpha H_1 + \alpha (\tfrac{1}{2} \alpha k_2 - a_1) H_0 - I + o(1). $$

The coefficient of $t$ is

$$ \alpha H_0 = \alpha k_0 L = \frac{1}{A_1} L \quad (\text{by equation (4.28)}). $$

The constant term is

$$ \alpha H_1 + \alpha (\tfrac{1}{2} \alpha k_2 - a_1) H_0 - I \quad (\text{from Cor. 4.1.1}), $$

which reduces to the required form using equations (4.28) and (4.29).

Lemma 4.3: The expression (4.2) for $\underline{W}(t)$ is the same as that given by Theorem 3.4.
Proof: We only verify that $G_1 = \mathcal{A}_1$ and $G_2 = \mathcal{A}_2$. Firstly,

$$ G_1 = \alpha^2 H_0\, dH_0 = (\alpha k_0)^2 L\, dL = \frac{1}{A_1^2} L\, dL = \mathcal{A}_1. $$

Now, from Cor. 4.1.1 and equations (4.28) and (4.29),

$$ H_1 = k_0 \Big[ \frac{1}{A_1} L P^{(1)} Z P^{(1)} L - Z P^{(1)} L - L P^{(1)} Z + a_1 L + A_1 Z \Big]. $$

Thus

$$ G_2 = 2 \alpha^2 k_0^2 \Big[ \frac{1}{A_1} L P^{(1)} Z P^{(1)} L\, dL - Z P^{(1)} L\, dL - L P^{(1)} Z\, dL + a_1 L\, dL + A_1 Z\, dL $$
$$ + \frac{1}{A_1} L\, d(L P^{(1)} Z P^{(1)} L) - L\, d(Z P^{(1)} L) - L\, d(L P^{(1)} Z) + a_1 L\, dL + A_1 L\, dZ $$
$$ + \frac{A_2}{A_1} L\, dL - 2 a_1 L\, dL \Big] - 2 \alpha k_0\, dL. $$

Carrying out the simplification using Lemma 3.2 (namely $L\, d(LX) = LX\, dL$), we obtain

$$ G_2 = \frac{2}{A_1^2} \Big[ \frac{2}{A_1} L P^{(1)} Z P^{(1)} L\, dL - Z P^{(1)} L\, dL - 2 L P^{(1)} Z\, dL - L\, d(Z P^{(1)} L) + A_1 Z\, dL + A_1 L\, dZ + \frac{A_2}{A_1} L\, dL \Big] - \frac{2}{A_1}\, dL, $$

as required.
The notation used by Kshirsagar and Gupta can be used to simplify some of our earlier results, as shown in the following lemma. Computationally, however, the expressions developed in sections 2 and 3 are easier to handle.

Lemma 4.4:

$$ dM^{(2)} = \Big[ \frac{2}{A_1 k_0}\, (dH_1)\, D - 2 a_1 I + \frac{A_2}{A_1} I \Big] D. $$

Proof: This is easily verified by substitution for $dH_1$ (obtained from equation (4.27)) in equation (2.24).
REFERENCES

[1] Barlow, R. E. and Proschan, F. (1965). Mathematical Theory of Reliability. New York: Wiley.

[2] Frechet, M. (1950). Recherches Theoriques Modernes sur le Calcul des Probabilites (Vol. 2, Second Edition). Paris: Gauthier-Villars.

[3] Keilson, J. and Wishart, D. M. G. (1964). A central limit theorem for processes defined on a finite Markov chain. Proc. Camb. Phil. Soc., 60, 547-567.

[4] Kemeny, J. G. and Snell, J. L. (1960). Finite Markov Chains. Princeton, N. J.: Van Nostrand.

[5] Kshirsagar, A. M. and Gupta, Y. P. (1967). Asymptotic values of the first two moments in Markov renewal processes. Biometrika, 54, 597-603.

[6] Murthy, V. K. (1961). On the general renewal process. Institute of Statistics Mimeo Series No. 293, University of North Carolina.

[7] Pyke, R. (1961). Markov renewal processes: definitions and preliminary properties. Ann. Math. Statist., 32, 1231-1242.

[8] Pyke, R. (1961). Markov renewal processes with finitely many states. Ann. Math. Statist., 32, 1243-1259.

[9] Rao, C. R. (1965). Linear Statistical Inference and its Applications. New York: Wiley.

[10] Rao, C. R. (1966). Generalised inverse for matrices and its applications in mathematical statistics. Research Papers in Statistics, Festschrift for J. Neyman. New York: Wiley.

[11] Smith, W. L. (1955). Regenerative stochastic processes. Proc. Roy. Soc. A, 232, 6-31.

[12] Smith, W. L. (1958). Renewal theory and its ramifications. J. R. Statist. Soc. B, 20, 243-302.

[13] Smith, W. L. (1959). On the cumulants of renewal processes. Biometrika, 46, 1-29.