RENEWAL THEORY FOR MARKOV CHAINS ON THE REAL LINE
by
Robert W. Keener
Massachusetts Institute of Technology
and
University of North Carolina at Chapel Hill
ABSTRACT
Standard renewal theory is concerned with expectations related to
sums of positive i.i.d. variables,
$$S_n = \sum_{i=1}^{n} Z_i .$$
We generalize this theory to the case where $\{S_i\}$ is a Markov chain on
the real line with stationary transition probabilities satisfying a
drift condition. The expectations we are concerned with satisfy
generalized renewal equations, and in our main theorems, we show that
these expectations are the unique solutions of the equations they
satisfy.
1970 Subject Classifications: Primary 62L05; Secondary 62K20.
Key Words and Phrases: renewal theory, Markov chains, random walks.
This research was supported by a National Science Foundation
Fellowship, National Science Foundation Grants MCS78-0118 and MCS78-01240,
and Air Force Office of Scientific Research Grant AFOSR-75-2796.
1. Introduction. One method of describing renewal theory is as
follows. Let $\{S_i\}_{i\ge 0}$ be a random walk with initial position $S_0 = s$.
For a function h, define the function
$$R(s) = E_s\Big[\sum_{i=0}^{\infty} h(S_i)\Big] .$$
Then R(s) satisfies the renewal equation $R(s) = h(s) + E_s[R(S_1)]$.
In our generalization of renewal theory we let the sequence $\{S_i\}_{i\ge 0}$
be a Markov chain on the real line with stationary transition probabilities.
This type of process will be called a generalized random walk, or GRW,
to distinguish it from an ordinary random walk, in which the increments,
or steps, $Z_i = S_i - S_{i-1}$, are i.i.d. random variables. The starting
position of the GRW will be called s.

For a continuation set C, we define an extended Markov stopping
time
$$N = \inf\{i : S_i \notin C\} .$$
The stopping set, S, will be the complement of C. For a function h, we
will define the function R as
$$R(s) = E_s\Big[\sum_{i=0}^{N} h(S_i)\Big] .$$
If R(s) exists, we can condition on the value of $S_1$ and obtain the
following generalized renewal equation
$$(1.1) \qquad R(s) = h(s) + 1_C(s)\,E_s[R(S_1)] .$$
To ensure that R(s) exists we restrict our study to GRW's satisfying
one of the following two conditions.
C1: There exist positive constants a and b such that for all
starting positions $s \in \mathbb{R}$ we have
$$E_s[e^{-aZ_1}] \le e^{-b} .$$

C2: For some constant $k \ge 2$ there exist positive constants $\mu_0$
and M such that for all starting positions $s \in \mathbb{R}$ we have
$$E_s[Z_1] \ge \mu_0 \qquad \text{and} \qquad E_s[|Z_1 - E_s Z_1|^k] \le M^k .$$
In Section 2 we study GRW's satisfying C1 and in Section 3 we study
GRW's satisfying C2. Our main results in these sections are theorems
which give conditions under which R(s) exists and is finite, and show
that R(s) is the unique solution of the renewal equation (1.1) in an
appropriate class of functions.

To obtain our main result in Section 3, we prove several theorems
of independent interest, especially Theorem 3.1, which generalizes the
work of Brillinger (1962).

The generalization of renewal theory we develop has a statistical
application to the sequential design of experiments with two states of
nature. In that problem, the sequence of log likelihood ratios behaves
as a GRW under either state of nature, and the expected sample numbers
and operating characteristics are solutions of renewal equations. For
more details see Keener (1979, 1980).
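To make the objects above concrete, here is a small simulation sketch (an illustration, not part of the paper): it estimates $R(s) = E_s[\sum_{i=0}^N h(S_i)]$ by Monte Carlo for a hypothetical GRW with i.i.d. N(1,1) increments, continuation set $C = (0,\infty)$, and $h(x) = e^{-|x|}$. All of these modeling choices are assumptions made for the example.

```python
import math
import random

def estimate_R(s, h, in_C, step, n_paths=2000, max_steps=10000, seed=1):
    """Monte Carlo estimate of R(s) = E_s[ sum_{i=0}^N h(S_i) ], where
    N = inf{i : S_i not in C} is the exit time of the continuation set C."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_paths):
        x = s
        acc = h(x)                      # the i = 0 term
        for _ in range(max_steps):
            if not in_C(x):             # stopped: the chain has left C
                break
            x = step(x, rng)            # one stationary Markov transition
            acc += h(x)
        total += acc
    return total / n_paths

# Illustrative (hypothetical) choices: drift-1 Gaussian steps, C = (0, inf).
h = lambda x: math.exp(-abs(x))
in_C = lambda x: x > 0
step = lambda x, rng: x + rng.gauss(1.0, 1.0)
```

When s lies in the stopping set the loop never runs and the estimate is exactly h(s), matching N = 0 in the definition above; for s inside C the positive drift keeps the stopped sum finite.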
2. First drift condition. Conditions C1 and C2 both imply that
$\{S_i\}$ drifts to $+\infty$. To verify this for C1 we have the following lemma.

LEMMA 2.1. If the GRW satisfies C1 then
$$E_s[\#\{i : S_i \le A\}] \le \begin{cases} \frac{a}{b}(A-s) + (1-e^{-b})^{-1} & \text{if } A \ge s \\ e^{a(A-s)}(1-e^{-b})^{-1} & \text{if } A < s . \end{cases}$$

PROOF. We can assume that s = 0. By induction, C1 implies that
$$(2.1) \qquad E_s[e^{-aS_j}] \le e^{-bj} \quad \text{for all } j \in \mathbb{N} ,$$
and from this it follows that
$$P_s(S_j \le A) \le e^{-bj+aA} .$$
Monotone convergence now implies
$$E_s[\#\{i : S_i \le A\}] = \sum_{j=0}^{\infty} P_s(S_j \le A) \le \sum_{j=0}^{\infty} e^{-bj+aA} .$$
If A < 0, the sum may be taken from 1 to $\infty$, giving the desired result.
If $A \ge 0$, the sum is bounded by
$$\lceil aA/b \rceil + \exp\big(-\lceil aA/b \rceil\, b + aA\big)\,/\,(1-e^{-b}) ,$$
where $\lceil x \rceil$ is the ceiling of x, i.e. the least integer $\ge x$. This
expression is less than the desired result.
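As a numerical illustration (not from the paper): for i.i.d. N(1,1) increments, $E[e^{-aZ}] = e^{-a+a^2/2}$, so C1 holds with a = 1 and b = 1/2, and the expected number of visits to $(-\infty, A]$ from s = 0 can be compared against the geometric bound $e^{aA}(1-e^{-b})^{-1}$. The walk and the constants are assumptions for this sketch.

```python
import math
import random

def mean_visits_below(A, n_paths=4000, horizon=200, seed=2):
    """Average of #{i >= 1 : S_i <= A} for a walk from 0 with N(1,1) steps.
    These steps satisfy C1 with a = 1, b = 1/2 since E[e^{-Z}] = e^{-1/2}."""
    rng = random.Random(seed)
    total = 0
    for _ in range(n_paths):
        s = 0.0
        for _ in range(horizon):        # drift 1: far above A after `horizon` steps
            s += rng.gauss(1.0, 1.0)
            total += (s <= A)
    return total / n_paths

A = -1.0
bound = math.exp(A) / (1.0 - math.exp(-0.5))   # e^{aA} (1 - e^{-b})^{-1}
```

The empirical mean should sit well below the bound, which is conservative for Gaussian steps.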
An immediate consequence of Lemma 2.1 is the following corollary.

COROLLARY 2.1. If the GRW satisfies C1 and if J is an interval of
length $\Delta$, then
$$E_s[\#\{i : S_i \in J\}] \le \big[\tfrac{a}{b}\Delta + (1-e^{-b})^{-1}\big]\,P_s[\exists i \text{ s.t. } S_i \in J] .$$
PROOF. By Lemma 2.1, the result is obvious if $s \in J$. If $s \notin J$,
we use the Markov property and condition on the first time the GRW
enters J to obtain the desired result.
The following theorem gives conditions under which R(s) is finite
and shows that R(s) is the only solution of the renewal equation (1.1)
which is bounded on finite intervals and has reasonable behavior as
$s \to \pm\infty$.

THEOREM 2.1. Let the GRW satisfy C1 and let $\{e^{-\lambda S_j}\}$ be a
supermartingale for some $\lambda \ge a$. Let h be a non-negative function such
that $1_S(x)h(x)/(1+e^{-\lambda x})$ is bounded and $1_C(x)h(x)/(1+e^{-\lambda x})$ is directly
Riemann integrable.¹ Then R(s) is finite for all $s \in \mathbb{R}$. If $1_C(x)$ has
a limit as x approaches $+\infty$ and $-\infty$, then R(s) is the only solution of
the renewal equation (1.1) which is bounded on finite intervals and
satisfies
$$(2.2) \qquad \lim_{s\to\pm\infty} 1_C(s)R(s)/(1+e^{-\lambda s}) = 0 .$$
To facilitate the proof of this theorem, we have the following
technical lemma.

LEMMA 2.2. Let $\{m_i\}_{i\in\mathbb{Z}}$ be a sequence of positive constants
satisfying $\sum_{i\in\mathbb{Z}} m_i < \infty$, and let $\{n_i\}_{i\in\mathbb{Z}}$ be positive random variables
depending on a parameter s. If there exist positive constants $\lambda$ and K
such that
$$E_s(n_i) \le K \ \text{ for } i \ge s \qquad \text{and} \qquad E_s(n_i) \le K e^{\lambda(i-s)} \ \text{ for } i < s ,$$
then
$$(2.3) \qquad E_s\Big[\sum_{i\in\mathbb{Z}} n_i m_i (1+e^{-\lambda(i-1)})\Big] < \infty ,$$
and
$$(2.4) \qquad \lim_{s\to\pm\infty} \frac{E_s\big[\sum_{i\in\mathbb{Z}} n_i m_i (1+e^{-\lambda(i-1)})\big]}{1+e^{-\lambda s}} = 0 .$$

PROOF OF LEMMA 2.2. From the bounds on $E[n_i]$ we have
$$E_s[n_i(1+e^{-\lambda(i-1)})] \le K(1+e^{-\lambda(s-1)}) .$$
Hence
$$\sum_{i\in\mathbb{Z}} m_i E_s[n_i(1+e^{-\lambda(i-1)})] \le K(1+e^{-\lambda(s-1)}) \sum_{i\in\mathbb{Z}} m_i < \infty ,$$
and (2.3) follows. To prove (2.4) we note that
$$\lim_{s\to\pm\infty} \frac{E_s[n_i]}{1+e^{-\lambda s}} = 0 \quad \text{for all } i \in \mathbb{Z} .$$
Applying dominated convergence for sums, the limit of the sum is equal
to the sum of the limits and (2.4) is established.

¹See page 362 of Feller (1966) for a discussion of direct Riemann
integrability as it relates to renewal theory.
PROOF OF THEOREM 2.1. For $k \in \mathbb{Z}$ define
$$J_k = [k-1, k) \qquad \text{and} \qquad n_k = \#\{j : S_j \in J_k\} .$$
Define the extended stopping times $T_k = \inf\{i : S_i \in J_k\}$.
Since $\{e^{-\lambda S_j}\}$ is a positive supermartingale, an optional stopping
theorem for positive supermartingales (page 267 of Karlin and Taylor
(1975)) implies that
$$e^{-\lambda s} \ge E_s[e^{-\lambda S_{T_k}}\,1\{T_k < \infty\}] \ge e^{-k\lambda}\, P_s(\exists i : S_i \in J_k) .$$
Corollary 2.1 now implies that
$$(2.5) \qquad E[n_k] \le \begin{cases} \frac{a}{b} + (1-e^{-b})^{-1} & \text{if } k \ge s \\ \big[\frac{a}{b} + (1-e^{-b})^{-1}\big] e^{\lambda(k-s)} & \text{if } k < s . \end{cases}$$
We now define
$$m_k = \sup_{x\in J_k} 1_C(x)h(x)/(1+e^{-\lambda x}) ,$$
which implies that
$$\sup_{x\in J_k} 1_C(x)h(x) \le m_k(1+e^{-\lambda(k-1)}) .$$
By the integrability condition, $\sum_{i\in\mathbb{Z}} m_i < \infty$, and using (2.5) we see
that the conditions of Lemma 2.2 are satisfied. We now observe that
$$(2.6) \qquad \sum_{i=0}^{N} h(S_i) \le \sum_{k\in\mathbb{Z}} n_k m_k(1+e^{-\lambda(k-1)}) + h(S_N)\,1\{N<\infty\} .$$
R(s) will be finite if the expectation of the right-hand side of this
equation is finite. Using Equation (2.3) in Lemma 2.2, we only need
show that $E_s[h(S_N)1\{N<\infty\}] < \infty$. Let
$$M = \sup_{x\in S} h(x)/(1+e^{-\lambda x}) .$$
Using the same optional stopping theorem for supermartingales, we have
$$E_s[h(S_N)1\{N<\infty\}] \le M\, E_s[(1+e^{-\lambda S_N})1\{N<\infty\}] \le M(1+e^{-\lambda s}) ,$$
and hence R(s) is finite.
We now show that R(s) satisfies (2.2). Using (2.4) in Lemma 2.2
and Equation (2.6), it is sufficient to show that
$$(2.7) \qquad \lim_{s\to\pm\infty} 1_C(s)\,E_s[h(S_N)1\{N<\infty\}]/(1+e^{-\lambda s}) = 0 .$$
We deal first with the limit as $s \to +\infty$. If $\lim_{s\to+\infty} 1_C(s) = 0$, then the
result is obvious. If $\lim_{s\to+\infty} 1_C(s) = 1$, then the conditions imposed on h
imply that there exists a constant M' such that when $N < \infty$,
$h(S_N) \le M' e^{-\lambda S_N}$. By optional stopping we have
$$\lim_{s\to+\infty} E_s[h(S_N)1\{N<\infty\}] \le \lim_{s\to+\infty} M' E_s[1\{N<\infty\}\,e^{-\lambda S_N}] \le \lim_{s\to+\infty} M' e^{-\lambda s} = 0 .$$
To verify (2.7) as $s \to -\infty$, we note that the result is obvious if
$\lim_{s\to-\infty} 1_C(s) = 0$. If $\lim_{s\to-\infty} 1_C(s) = 1$ then h is bounded on S, which implies
(2.7) and completes our proof that R satisfies (2.2). To complete our
proof, we need to show uniqueness. Since (1.1) and (2.2) are linear in
R, we can assume without loss of generality that h = 0. Let G be an
arbitrary function which is bounded on finite intervals and such that
$$(2.8) \qquad G(s) = 1_C(s)\,E_s[G(S_1)]$$
and
$$(2.9) \qquad \lim_{s\to\pm\infty} 1_C(s)G(s)/(1+e^{-\lambda s}) = 0 .$$
We must show that G = 0. Equations (2.8) and (2.9) imply that
$$|G(s)| \le K(1+e^{-\lambda s}) .$$
Iterating (2.8) gives
$$|G(s)| \le E_s[|G(S_k)|] .$$
We now partition the line into the intervals $(-\infty,-A)$, $[-A,A]$ and $(A,\infty)$
and get
$$|G(s)| \le E_s[|G(S_k)|1\{S_k < -A\}] + E_s[|G(S_k)|1\{|S_k| \le A\}] + \sup_{x>A}|G(x)| .$$
Equation (2.1) now gives
$$|G(s)| \le e^{-\lambda s}\sup_{x<-A} e^{\lambda x}|G(x)| + K\big(e^{a(A-s)-bk} + e^{\lambda A + a(A-s)-bk}\big) + \sup_{x>A}|G(x)| .$$
If we now take $A = \sqrt{k}$ and let $k \to \infty$, we have G(s) = 0, which completes
the proof.
We will close this section by deriving bounds for the magnitude of
R(s) for certain functions h. If we define the renewal measure for our
GRW as
$$U_s(A) = E_s[\#\{j \le N : S_j \in A\}] ,$$
then R(s) can be expressed as the integral
$$R(s) = \int h(x)\,dU_s(x) .$$
From Corollary 2.1, we know that if J is an interval of length $\Delta$, then
$$U_s(J) \le \tfrac{a}{b}\Delta + (1-e^{-b})^{-1} .$$
Using this we can construct the following bounds for R(s).
THEOREM 2.2. Under the conditions of Theorem 2.1, if h(x) = 0 for
x < A and h(x) is integrable and non-increasing for $x \ge A$, then for all s
$$(2.10) \qquad R(s) \le h(A)(1-e^{-b})^{-1} + \tfrac{a}{b}\int_A^{\infty} h(x)\,dx .$$
If h(x) = 0 for x > A and h(x) is integrable and non-decreasing for
$x \le A$, then for all s
$$(2.11) \qquad R(s) \le h(A)(1-e^{-b})^{-1} + \tfrac{a}{b}\int_{-\infty}^{A} h(x)\,dx .$$

PROOF. To establish (2.10), we use the fact that h(x) = 0 for
x < A and integration by parts to get
$$R(s) = \int h(x)\,d\big(U_s([A,x])\big) = -\int U_s([A,x])\,dh(x)
\le -\int \big(\tfrac{a}{b}(x-A) + (1-e^{-b})^{-1}\big)\,dh(x)
= h(A)(1-e^{-b})^{-1} + \tfrac{a}{b}\int_A^{\infty} h(x)\,dx .$$
Equation (2.11) can be established the same way.
Corollary 2.1 and Lemma 2.1 can be used to construct sharper
bounds for $U_s(J)$. These bounds can be used to construct an improved
bound on R(s) for a given s, but will not improve our global bounds.
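A quick check of (2.10) by simulation (illustrative assumptions throughout: the same N(1,1) walk, which satisfies C1 with a = 1 and b = 1/2; no stopping, i.e. C taken to be the whole line; and $h(x) = e^{-x}$ for $x \ge 0$, $h(x) = 0$ for $x < 0$, so A = 0):

```python
import math
import random

def estimate_R_at_zero(n_paths=4000, horizon=200, seed=3):
    """Monte Carlo estimate of R(0) = E_0[ sum_i h(S_i) ] for N(1,1) steps,
    with h(x) = e^{-x} for x >= 0 and h(x) = 0 for x < 0 (so A = 0)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_paths):
        s = 0.0
        total += 1.0                       # h(S_0) = h(0) = 1
        for _ in range(horizon):           # tail beyond `horizon` is negligible
            s += rng.gauss(1.0, 1.0)
            if s >= 0.0:
                total += math.exp(-s)
    return total / n_paths

# Right-hand side of (2.10): h(A)(1-e^{-b})^{-1} + (a/b) * integral_A^inf h
bound = 1.0 / (1.0 - math.exp(-0.5)) + 2.0
```

For this walk $E\,e^{-S_n} = e^{-n/2}$, so the true value of R(0) is close to $(1-e^{-1/2})^{-1} \approx 2.54$, comfortably below the bound $\approx 4.54$.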
3. Second drift condition. Our main goal in this section will be
to prove a theorem similar to Theorem 2.1 for GRW's satisfying C2
instead of C1. This result is useful because C2 is often a weaker
condition than C1. Unfortunately, C2 is more difficult to work with
than C1, and we need several preliminary results before we can prove
our main theorem. The following lemma will be used in many of the
proofs to follow.
LEMMA 3.1. If $E|X|^k \le M^k$ for some k > 2 and EX = 0, then
$$E|1+X|^k \le (1+M)^k - kM \le 1 + \tfrac{k(k-1)}{2}\,M^2(1+M)^{k-2} .$$

PROOF. By Taylor's theorem with remainder, and Hölder's inequality,
we have
$$E|1+X|^k = E\Big[1 + kX + \int_0^1 k(k-1)X^2\,|1+X-Xy|^{k-2}\,y\,dy\Big]
\le 1 + \int_0^1 k(k-1)M^2\big(E|1+X-Xy|^k\big)^{(k-2)/k}\,y\,dy .$$
Applying the Minkowski inequality,
$$E|1+X|^k \le 1 + \int_0^1 k(k-1)M^2\big(1+(1-y)M\big)^{k-2}\,y\,dy = (1+M)^k - kM .$$
The last inequality in the statement of Lemma 3.1 follows from Taylor's
theorem with Lagrange's form of the remainder.
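As a quick numerical check of the two bounds (an illustration, not from the paper), take X = ±m with probability 1/2 each, so that EX = 0 and $E|X|^k = m^k$ (i.e. M = m):

```python
def lemma_3_1_bounds(m, k):
    """Exact E|1+X|^k for X = +-m w.p. 1/2 (so EX = 0, E|X|^k = m^k, M = m),
    together with the bounds (1+M)^k - kM and 1 + (k(k-1)/2) M^2 (1+M)^(k-2)."""
    lhs = 0.5 * (abs(1.0 + m) ** k + abs(1.0 - m) ** k)
    mid = (1.0 + m) ** k - k * m
    rhs = 1.0 + 0.5 * k * (k - 1) * m ** 2 * (1.0 + m) ** (k - 2)
    return lhs, mid, rhs
```

For k = 3 and m = 1, for example, this gives 4 ≤ 5 ≤ 7.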
Using Lemma 3.1 we now have the following result which bounds the
magnitude of kth absolute moments of terms in a martingale. This
result generalizes a theorem due to Brillinger (1962).
THEOREM 3.1. Let $\{S_n, F_n\}_{n\ge 0}$ be a martingale satisfying
$$E[|S_{n+1} - S_n|^k \mid F_n] \le M^k$$
for some k > 2. Then
$$E|S_n - S_0|^k \le (M\gamma\sqrt{n})^k ,$$
where $\gamma = \big((k-1)e^{k-2}\big)^{1/2}$.

PROOF. We assume without loss of generality that M = 1 and
proceed by induction on n. The first step is obvious because $\gamma \ge 1$.
Let $Z = S_{n+1} - S_n$ and $S = S_n - S_0$. Using Lemma 3.1 we have
$$E|S_{n+1} - S_0|^k = E\big[E[|S+Z|^k \mid F_n]\big]
\le E\big[|S|^k + \tfrac{k(k-1)}{2}(1+|S|)^{k-2}\big]
\le E|S|^k + \tfrac{k(k-1)}{2}\big(1+(E|S|^{k-2})^{1/(k-2)}\big)^{k-2}
\le E|S|^k + \tfrac{k(k-1)}{2}\big(1+(E|S|^k)^{1/k}\big)^{k-2} .$$
To complete the proof by induction we must show that
$$(\gamma\sqrt{n})^k + \tfrac{k(k-1)}{2}(1+\gamma\sqrt{n})^{k-2} \le (\gamma\sqrt{n+1})^k .$$
To accomplish this we note that
$$0 = (k-2) + \ln(k-1) - 2\ln\gamma
\ge \ln(k-1) + (k-2)\big(\ln\gamma + \tfrac{1}{\gamma}\big) - k\ln\gamma
\ge \ln(k-1) + (k-2)\ln(1+\gamma) - k\ln\gamma .$$
Exponentiating this equation gives
$$(k-1)(1+\gamma)^{k-2}/\gamma^k \le 1 .$$
Now
$$\Big(1+\frac{1}{n}\Big)^{k/2} \ge 1 + \frac{k}{2n} \ge 1 + \frac{k(k-1)}{2}\cdot\frac{(\gamma+1/\sqrt{n})^{k-2}}{\gamma^k n} .$$
This implies
$$(\gamma\sqrt{n+1})^k \ge (\gamma\sqrt{n})^k + \frac{k(k-1)}{2}(1+\gamma\sqrt{n})^{k-2} ,$$
which completes the proof.
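For a concrete instance (illustrative, not from the paper), consider the simple ±1 random walk, a martingale with M = 1; its fourth moment is known exactly, $E S_n^4 = 3n^2 - 2n$, and can be compared with the bound $(M\gamma\sqrt{n})^k$ for k = 4. The constant $\gamma = ((k-1)e^{k-2})^{1/2}$ used below is an assumption of this sketch.

```python
import math

def exact_fourth_moment(n):
    """Exact E|S_n - S_0|^4 for i.i.d. +-1 (Rademacher) increments:
    E S_n^4 = n + 3n(n-1) = 3n^2 - 2n."""
    return 3 * n ** 2 - 2 * n

def theorem_3_1_bound(n, k=4, M=1.0):
    gamma = math.sqrt((k - 1) * math.exp(k - 2))   # illustrative constant
    return (M * gamma * math.sqrt(n)) ** k
```

The bound holds with a large margin here ($\gamma^4 = 9e^4 \approx 491$), reflecting that the theorem aims at general, not optimal, constants.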
To state our next two theorems in their proper generality we will
replace C2 by the following condition for processes $\{S_i\}_{i\ge 0}$ where $S_i$ is
measurable with respect to $F_i$ and $\{F_i\}_{i\ge 0}$ is an increasing family of
$\sigma$-algebras.

C3: For constants $k \ge 2$, $\mu_0 > 0$ and M > 0, we have for all i
$$E(Z_{i+1} \mid F_i) \ge \mu_0 \qquad \text{and} \qquad E\big[|Z_{i+1} - E(Z_{i+1} \mid F_i)|^k \,\big|\, F_i\big] \le M^k ,$$
where $Z_i = S_i - S_{i-1}$.

Our next result bounds the probability that a process satisfying
C3 drifts a given distance to the left.
THEOREM 3.2. If $\{S_j\}$ satisfies C3 and $S_0 = s > 0$ then
$$P\big[\inf_{j\ge 0} S_j \le 0\big] \le (1+\alpha s)^{-k/2} ,$$
where
$$\alpha = \frac{1}{M}\Big[\Big(1 + \frac{2\mu_0}{(k+1)M}\Big)^{1/(k-1)} - 1\Big] > 0 .$$

PROOF. This theorem is a generalization of a result in Karlin and
Taylor (1975, p. 275). We assume without loss of generality that M = 1.
Define f(x) as
$$f(x) = \begin{cases} (1+\alpha x)^{-1} & \text{for } x \ge 0 \\ 1 & \text{for } x \le 0 . \end{cases}$$
A change of notation in Equation (4.1) of Karlin and Taylor (1975)
leads to
$$(3.1) \qquad f(x) - f(y) \le \alpha f^2(y)\big(y - x + \alpha(x-y)^2\big) \quad \text{for all } x \in \mathbb{R} ,$$
provided y > 0. This can be checked directly after noting that the
right-hand side has negative derivative for x < 0. We now let Z be any
random variable satisfying
$$EZ = \mu \ge \mu_0 \qquad \text{and} \qquad E|Z - \mu|^k \le 1 .$$
Using (3.1) and Lemma 3.1 we see that for positive x
15
qf(x+Z)k/2]
Ik / 2 ]
<;
l:[\f(x+j)) + af 2 (x+p) (p_z+a(Z_j))2)
<;
f(x+p)k/2{(EI1+af(x+p) (p-Z) \k/2])2/k
+ (E[la 2f(x+p)(Z_U)2\k/2])2/k}k/2
s f(x+p)k/2{(E[11+af(x+p) (p-Z) ,k])l/k + a 2f(x+p)}k/2
<;
f(x+p)k/2{[1 + k(k-1) a 2f 2(x+p)(1+af(x+p))k-2]1/k
2
2
k/2
+ a f(x+p)}
<;
f(x+j))
k/2{ 1 + ---2-k-l a 2 f 2 (x+p) (l+af(x+p)) k-2
2
k/2
+ a f(x+j))}
<;
f (x+j))
k/2
k-1
2
{I + --2- a f (x+j)) (1 +a)
k-2
2
k/2
+ a f (x+p) }
{f(x+p)
k-1 a (l+a) k-2 + a]}k/2
fl (x+p) [(-2-)
=
{f(x+p)
k-1 [ (l+a) k-l - (1+a)k-2] + a] }k/2
fl (x+p) [ (T)
<;
k-1
k-1
k-1 _ 1]}k/2
{ f ( x+ j) ) - f' (x+ 1-1) [(-2-)
(( 1+ a)
- 1) + (1 + a)
For negative x, this result follows trivially, and applying "these
results it easily follows that {f(S.)k/2}. a is a non-negative
J
supermartingale.
which S
n
<;
O.
J2
We now define the Markov time T as the first n for
By an optional stopping theorem for non-negative
supermartingales (Karlin and Taylor (1975), p. 267) we have
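A simulation sanity check (illustrative assumptions throughout): take increments Z = 1/2 + U with U uniform on [-1,1], so that $\mu_0 = 1/2$ and $E|Z-\mu|^4 = 1/5 \le 1$, giving M = 1 and k = 4. The constant α below is part of the illustration, not a value taken from the paper.

```python
import random

def ruin_probability(s, n_paths=4000, horizon=500, seed=6):
    """Monte Carlo estimate of P[inf_j S_j <= 0] for S_0 = s and increments
    Z = 0.5 + Uniform(-1, 1): drift mu_0 = 0.5, centered 4th moment 1/5 (M = 1)."""
    rng = random.Random(seed)
    ruined = 0
    for _ in range(n_paths):
        x = s
        for _ in range(horizon):         # drift decides ruin early; cap the path
            x += 0.5 + rng.uniform(-1.0, 1.0)
            if x <= 0.0:
                ruined += 1
                break
    return ruined / n_paths

k, M, mu0 = 4, 1.0, 0.5
alpha = (1.0 + 2.0 * mu0 / ((k + 1) * M)) ** (1.0 / (k - 1)) - 1.0  # illustrative
s = 4.0
bound = (1.0 + alpha * s) ** (-k / 2)
```

From s = 4 the walk must lose four units against a drift of 1/2 with steps bounded by 1, so the empirical ruin frequency is essentially zero, far below the bound.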
The next lemma is needed to make use of Theorem 3.1.

LEMMA 3.2. If $EX = \mu \ge 0$ and $E|X - \mu|^k \le 1$, then for $\lambda > 0$
$$P(X \le -\lambda) \le (\lambda+\mu)^{-k} .$$

PROOF. We have
$$1 \ge E|X-\mu|^k \ge P\big(X-\mu \le -(\lambda+\mu)\big)(\lambda+\mu)^k = P(X \le -\lambda)(\lambda+\mu)^k .$$

Using this lemma we have the following bound for the expected number of
steps a process satisfying C3 takes from an interval of length $\mu_0$.

LEMMA 3.3. If $\{S_i\}$ satisfies C3 and $S_0 = s$, and if we define
$J = [A-\mu_0, A]$ and
$$K = 2 + \sum_{n=1}^{\infty} \min\Big(1,\ \Big(\frac{M\gamma\sqrt{n}}{(n-1)\mu_0}\Big)^{k}\Big) < \infty ,$$
then
$$E[\#\{i : S_i \in J\}] \le \begin{cases} K & \text{if } s \le A \\ K\big(1+\alpha(s-A)\big)^{-k/2} & \text{if } s > A , \end{cases}$$
where $\alpha$ is defined as in Theorem 3.2.
PROOF. We begin by showing that
$$(3.2) \qquad E_s[\#\{i : S_i \in J\}] \le K \quad \text{for } s \in J .$$
If we define
$$\Theta_n = s + \sum_{i=1}^{n} E[Z_i \mid F_{i-1}] ,$$
then $\{S_n - \Theta_n\}$ is a martingale satisfying the conditions of Theorem 3.1.
Application of the theorem gives
$$E[|S_n - \Theta_n|^k] \le (M\gamma\sqrt{n})^k .$$
By condition C3 we know that $\Theta_n \ge s + n\mu_0$, and using Lemma 3.2, we
have for $n \ge 1$
$$P(S_n \le s+\mu_0) \le P\big(S_n - \Theta_n \le -(n-1)\mu_0\big) \le \min\Big(1,\ \Big(\frac{M\gamma\sqrt{n}}{(n-1)\mu_0}\Big)^{k}\Big) .$$
Equation (3.2) now follows by monotone convergence, and the lemma is
true if $s \in J$. If $s \notin J$ we condition on the time the process first
enters J and obtain
$$E[\#\{i : S_i \in J\}] \le K\,P[\exists i \text{ s.t. } S_i \in J] .$$
Application of Theorem 3.2 now finishes the proof.
Let $C_k$ be the set of non-negative measurable functions on $\mathbb{R}$ which
are bounded on $[-1,\infty)$ and satisfy
$$\int_{-\infty}^{-1} \frac{dy}{|y|}\ \sup_{x\le y}\ \frac{h(x)}{|x|^{k-1}} < \infty .$$

THEOREM 3.3. If $h \in C_k$ then there exists a measurable function
$g \ge h$ such that if $\{S_i, F_i\}$ satisfies C3, $\{g(S_i), F_i\}$ is a supermartingale.
If h(x) = 0 for sufficiently large x, then g can be chosen so that
$$\lim_{x\to\infty} g(x) = 0 .$$
To facilitate the proof of this theorem we have

LEMMA 3.4. Let $g(x,\alpha) = (x^-)^{\alpha}$ and let Z be an arbitrary random
variable such that
$$(3.3) \qquad EZ = \mu \ge \mu_0$$
and
$$(3.4) \qquad E|Z - \mu|^k \le M^k .$$
Then for $1 < \alpha < k$ and k > 2,
$$Eg(s+Z,\alpha) - g(s,\alpha)\ \begin{cases} \le c_1(s+\mu_0)^{\alpha-k} & \text{for } s \ge 0 & (3.5) \\ \le c_2 & \text{for } -c_3 < s < 0 & (3.6) \\ \le 0 & \text{for } s \le -c_3 , & (3.7) \end{cases}$$
where $c_1$, $c_2$ and $c_3$ depend on $\mu_0$, M and k.

PROOF. By ordinary calculus, for $x \in \mathbb{R}$ and y > 0,
$$g(x+y,\alpha) \le |x|^k\,y^{\alpha-k} .$$
Choosing $x = Z - \mu$ and $y = s + \mu$ and taking expectations gives (3.5).
To prove (3.7), we note that
$$(3.8) \qquad Eg(s+Z,\alpha) \le Eg(s+\mu_0+Z-\mu,\alpha) .$$
Using Lemma 3.1, if $s+\mu_0$ is sufficiently negative (depending on $\mu_0$, M
and k) we have $Eg(s+\mu_0+Z-\mu,\alpha) \le g(s,\alpha)$, which proves (3.7). Finally,
(3.6) holds because (3.8) bounds $Eg(s+Z,\alpha) - g(s,\alpha)$ on the bounded
interval $-c_3 < s < 0$.
PROOF OF THEOREM 3.3. We assume without loss of generality that
$h(x)/|x|^{k-1}$ is non-decreasing on $(-\infty,-1)$ and that h(x) = 0 for $x \ge -1$.
Because $h(x)/|x|^{k-1}$ is non-decreasing there, we have for t < -1
$$(3.9) \qquad h(t) \le 32\,|t|^{k-1} \int_0^{\infty} dx \int_{1/x}^{\infty} dy\ h(-y)\,|ty|^{-x}\,y^{-k} .$$
Equation (3.9) implies that for t < -1
$$(3.10) \qquad h(t) \le c\,t^- + A(t) ,$$
where
$$c = 32 \int_0^{\infty} dx \int_{1/x}^{\infty} dy\ h(-y)\,y^{-x-k}$$
and
$$A(t) = 32 \int_0^{k-2} dx\,(t^-)^{k-1-x} \int_{1/x}^{\infty} dy\ y^{-x-k}\,h(-y) .$$
Since A and c are non-negative, (3.10) holds for all t. Let Z be any
random variable satisfying (3.3) and (3.4). Using Lemma 3.4 we see
that
$$E\big[c\,(t+Z)^- + A(t+Z) - c\,t^- - A(t)\big] \le B(t) ,$$
where, for constants depending on $\mu_0$, M and k,
$$B(t) = \begin{cases} 0 & \text{for } t \le -c_3 \\ c_2 & \text{for } -c_3 < t < 0 \\ \dfrac{c}{(t+\mu_0)^{k-1}} + 32\displaystyle\int_0^{k-2}(t+\mu_0)^{-1-x}\,dx \int_{1/x}^{\infty} y^{-x-k}\,h(-y)\,dy & \text{for } t \ge 0 . \end{cases}$$
Observing that
$$\int_0^{\infty} B(t)\,dt \le 32\,(1+\mu_0)^{2-k}\int_0^{\infty} x\,dx \int_{1/x}^{\infty} dy\ y^{-x-k}\,h(-y)\ +\ c_2 c_3 + \frac{c}{(k-2)\mu_0^{k-2}} < \infty ,$$
we see that B is directly Riemann integrable. We now let f be defined
as in Theorem 3.2. Choosing $\delta = \mu_0/3$ and using Lemma 3.2 we see that
there exists $\epsilon > 0$ such that for any random variable Z satisfying
$EZ = \mu \ge \mu_0$ and $E|Z - \mu|^k \le M^k$ we have
$$(3.11) \qquad E[f(s+Z) - f(s)] < -\epsilon \quad \text{for } -\delta \le s \le \delta .$$
Since B(t) is directly Riemann integrable, there exist constants $a_i$
such that
$$(3.12) \qquad B(t) < a_i \ \text{ for } i\delta \le t \le (i+1)\delta \qquad \text{and} \qquad \sum_i a_i < \infty .$$
Let
$$C(t) = \sum_i \frac{a_i}{\epsilon}\,f\big(t-(i+1)\delta\big) .$$
C is positive and (3.11), (3.12) imply that
$$E\big(C(t+Z) - C(t)\big) < -B(t) .$$
Hence $\{c\,S_i^- + A(S_i) + C(S_i),\ F_i\}$ is a positive supermartingale
whenever $\{S_i, F_i\}$ satisfies C3, and we are done.
After so many preliminaries we can now establish our main result.

THEOREM 3.4. Let $\{S_i\}$ be a GRW satisfying C2 and let h be a
non-negative measurable function such that $1_S(x)h(x) \in C_k$ and
$1_C(x)h(x)/\big(1+(x^-)^{k/2}\big)$ is directly Riemann integrable. Then R(s) is
finite for all $s \in \mathbb{R}$. If $1_C(x)$ has a limit as x approaches $+\infty$ and $-\infty$,
then R(s) is the only solution of the renewal equation (1.1) which is
bounded on finite intervals and satisfies
$$(3.13) \qquad \lim_{s\to\pm\infty} 1_C(s)R(s)/\big(1+(s^-)^{k/2}\big) = 0 .$$

PROOF. This proof is similar to the proof of Theorem 2.1. To
simplify notation we will assume without loss of generality that $\mu_0 = 1$.
We define
$$J_i = [i-1, i) , \qquad n_i = \#\{j : S_j \in J_i\} , \qquad \text{and} \qquad
m_i = \sup_{x\in J_i} 1_C(x)h(x)/\big(1+(x^-)^{k/2}\big) .$$
It follows that
$$\sup_{x\in J_i} 1_C(x)h(x) \le m_i\big(1+((i-1)^-)^{k/2}\big) .$$
By the integrability condition, $\sum_{i\in\mathbb{Z}} m_i < \infty$. We now observe that
$$(3.14) \qquad \sum_{i=0}^{N} h(S_i) \le \sum_{i\in\mathbb{Z}} n_i m_i\big(1+((i-1)^-)^{k/2}\big) + h(S_N)\,1\{N<\infty\} .$$
Now by Lemma 3.3,
$$E[n_i] \le \begin{cases} K & \text{if } i \ge s \\ K\big(1+\alpha(s-i)\big)^{-k/2} & \text{if } i < s . \end{cases}$$
From this it follows that $E[n_i]\big(1+((i-1)^-)^{k/2}\big)/\big(1+(s^-)^{k/2}\big)$ is a bounded
function of i and s which approaches zero as $s \to \pm\infty$. Hence
$$E_s\Big[\sum_{i\in\mathbb{Z}} n_i m_i\big(1+((i-1)^-)^{k/2}\big)\Big] < \infty
\qquad \text{and} \qquad
\lim_{s\to\pm\infty} \frac{E_s\big[\sum_{i\in\mathbb{Z}} n_i m_i\big(1+((i-1)^-)^{k/2}\big)\big]}{1+(s^-)^{k/2}} = 0 .$$
Using (3.14), we see that R(s) will be finite provided
$E_s[h(S_N)1\{N<\infty\}] < \infty$, and R(s) will satisfy (3.13) provided
$$(3.15) \qquad \lim_{s\to\pm\infty} 1_C(s)\,E_s[h(S_N)1\{N<\infty\}]/\big(1+(s^-)^{k/2}\big) = 0 .$$
Using Theorem 3.3 we choose a function $g(s) \ge h(s)1_S(s)$ such that
$\{g(S_i)\}$ is a non-negative supermartingale. By optional stopping
$$E_s[h(S_N)1\{N<\infty\}] \le E_s[g(S_N)1\{N<\infty\}] \le g(s) < \infty .$$
If $\lim_{s\to+\infty} 1_C(s) = 1$, then g can be chosen so that $\lim_{s\to+\infty} g(s) = 0$. Hence
(3.15) holds as $s \to +\infty$. If $\lim_{s\to-\infty} 1_C(s) = 1$, (3.15) holds as $s \to -\infty$
because $1_S(s)h(s)$ is bounded.

To complete our proof we need to show uniqueness. We can assume
without loss of generality that h = 0. Let G be an arbitrary function
which is bounded on finite intervals and satisfies
$$(3.16) \qquad G(s) = 1_C(s)\,E_s[G(S_1)]$$
and
$$(3.17) \qquad \lim_{s\to\pm\infty} 1_C(s)G(s)/\big(1+(s^-)^{k/2}\big) = 0 .$$
We must show that G(s) = 0. Iterating (3.16) gives
$$(3.18) \qquad |G(s)| \le E_s[|G(S_n)|] .$$
Since h is zero and G is bounded on finite intervals, (3.17) implies
that there exists a constant K such that
$$(3.19) \qquad |G(s)| \le K\big(1+(s^-)^{k/2}\big) .$$
Equations (3.18) and (3.19) now give, for $\lambda > 0$,
$$|G(s)| \le E_s\big[K\big(1+(S_n^-)^{k/2}\big)1_{[-\lambda,\lambda]}(S_n)\big] + \sup_{x>\lambda}|G(x)|
+ \sup_{x<-\lambda}\big(|G(x)|\,|x|^{-k/2}\big)\,E_s\big[|S_n|^{k/2}1_{(-\infty,-\lambda)}(S_n)\big]$$
$$(3.20) \qquad \le K(1+\lambda^{k/2})\,P_s(S_n \le \lambda) + \sup_{x>\lambda}|G(x)|
+ \sup_{x<-\lambda}\big(|G(x)|\,|x|^{-k/2}\big)\,E_s\big[|S_n|^{k/2}1_{(-\infty,-\lambda)}(S_n)\big] .$$
We define $\Theta_n$ as
$$\Theta_n = s + \sum_{i=1}^{n} E[Z_i \mid S_{i-1}] .$$
By condition C2, $\Theta_n \ge s + n$, and Theorem 3.1 gives
$$(3.21) \qquad E|S_n - \Theta_n|^k \le (M\gamma\sqrt{n})^k .$$
We now take n large enough that $\Theta_n \ge 0$. Then for $S_n < 0$ we will have
$|S_n| \le |S_n - \Theta_n|$, implying
$$E_s\big[|S_n|^{k/2}1_{(-\infty,-\lambda)}(S_n)\big] \le \big\{(M\gamma\sqrt{n})^k\,P_s(S_n - \Theta_n \le -\lambda - n - s)\big\}^{1/2} .$$
If we let $\lambda = \sqrt{n}$ in (3.20) we obtain
$$|G(s)| \le K(1+n^{k/4})\,P_s(S_n - \Theta_n \le \sqrt{n} - n - s) + \sup_{x>\sqrt{n}}|G(x)|
+ \sup_{x<-\sqrt{n}}\big(|G(x)|\,|x|^{-k/2}\big)\big\{(M\gamma\sqrt{n})^k\,P_s(S_n - \Theta_n \le -\sqrt{n} - n - s)\big\}^{1/2} .$$
Letting $n \to \infty$ in this expression, we can use Lemma 3.2 and (3.21) to
conclude that G(s) = 0, and our proof is complete.
REFERENCES

BRILLINGER, D. (1962). A note on the rate of convergence of a mean.
Biometrika 49 574-576.

FELLER, W. (1966). An Introduction to Probability Theory and Its
Applications, 2, John Wiley and Sons, New York.

KARLIN, S. AND TAYLOR, H.M. (1975). A First Course in Stochastic
Processes, 2nd ed. Academic Press, New York.

KEENER, R.W. (1979). Renewal theory and the sequential design of
experiments with two states of nature. Submitted to Communications.

KEENER, R.W. (1980). The solution of a renewal equation applied to
sequential analysis. Technical Report 12, Dept. of Mathematics, M.I.T.
REPORT DOCUMENTATION (DD Form 1473): Renewal Theory for Markov Chains
on the Real Line. Robert W. Keener. Technical report, Mimeo Series
No. 1281, February 1980, 24 pages. Supported by NSF Grants MCS78-0118
and MCS78-01240 and AFOSR Grant AFOSR-75-2796. Air Force Office of
Scientific Research, Bolling Air Force Base, Washington, DC 20332.
Approved for Public Release -- Distribution Unlimited. UNCLASSIFIED.