A UNIFIED LIMIT THEORY VIA BOOTSTRAP FOR
BRANCHING PROCESSES WITH
IMMIGRATION
SOMNATH DATTA and T.N. SRIRAM*
Department of Statistics
University of Georgia
Athens, GA 30602 U.S.A.
ABSTRACT
In this paper we consider bootstrap approximation to the sampling distribution of the
maximum likelihood estimator (m.l.e.) of the offspring mean m in a branching process
with immigration. A clever modification of the standard parametric bootstrap procedure
is shown to eliminate the invalidity of the standard bootstrap for the case m = 1, as reported in Sriram (1992). Furthermore, the modified bootstrap is shown to provide valid approximations for other values of m (≠ 1) as well. Thus, in this example, the modified
bootstrap provides a unified solution while the form of the limit distribution of the m.l.e.
via classical asymptotic theory depends on m. It is argued that similar modifications will
be useful more generally.
AMS (1990) Subject Classifications: Primary 60J80, 62G09.
Keywords and phrases: Bootstrap, branching processes with immigration, asymptotic validity, limit theory.
*A major portion of the work was done while the second author was visiting the Department of Statistics, University of North Carolina at Chapel Hill.
1. Introduction.
Consider a branching process with immigration which can be defined recursively by the
following equation:
\[ Z_i = \sum_{k=1}^{Z_{i-1}} \xi_{i-1,k} + Y_i, \qquad i = 1, 2, \ldots, \tag{1.1} \]
with $Z_0 = 1$. Here $Z_i$ is the size of the $i$-th generation of a population, $\xi_{i-1,k}$ is the offspring size of the $k$-th individual belonging to the $(i-1)$-th generation and $Y_i$ is the number of immigrants contributing to the population's $i$-th generation. Throughout the paper, we assume that $\{\xi_{i-1,k}\}$ and $\{Y_i\}$ in (1.1) are independent sequences of i.i.d. nonnegative, integer-valued random variables (r.v.'s) with finite means $m$ and $\lambda$, and finite and positive variances $\sigma^2$ and $b^2$, respectively.
Suppose that a sample $\{(Z_i, Y_i),\ i = 1, \ldots, n\}$ is available. Then natural estimators of the offspring mean $m$ and the immigration mean $\lambda$ are given respectively by
\[ \hat{m}_n = \Big(\sum_{i=1}^{n} Z_{i-1}\Big)^{-1} \sum_{i=1}^{n} (Z_i - Y_i) \quad\text{and}\quad \hat{\lambda}_n = n^{-1} \sum_{i=1}^{n} Y_i. \tag{1.2} \]
It is also possible to estimate $m$ and $\lambda$ based only on the partial information in $\{Z_i\}$ and study their properties; see Heyde and Seneta (1972, 1974), Wei and Winnicki (1990) and the references therein.
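To fix ideas, the following is a minimal simulation sketch of the model (1.1) and the estimators (1.2). It is not part of the paper: it assumes Poisson offspring and immigration laws (a convenient power series family, anticipating Section 2), and the function and variable names are ours.

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def simulate_bpi(m, lam, n):
    """Generate Z_0,...,Z_n and Y_1,...,Y_n from (1.1) with Z_0 = 1."""
    Z = np.zeros(n + 1, dtype=np.int64)
    Y = np.zeros(n + 1, dtype=np.int64)
    Z[0] = 1
    for i in range(1, n + 1):
        offspring = rng.poisson(m, size=Z[i - 1]).sum()  # sum over xi_{i-1,k}
        Y[i] = rng.poisson(lam)                          # immigration Y_i
        Z[i] = offspring + Y[i]
    return Z, Y

def mle(Z, Y):
    """The estimators (1.2): hat m_n and hat lambda_n."""
    m_hat = (Z[1:] - Y[1:]).sum() / Z[:-1].sum()
    lam_hat = Y[1:].mean()
    return m_hat, lam_hat

Z, Y = simulate_bpi(m=1.0, lam=2.0, n=500)   # a critical (m = 1) path
print(mle(Z, Y))
\end{verbatim}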
For statistical inference about the parameter $m$, one may consider the pivot given by
\[ V_n = \Big(\sum_{i=1}^{n} Z_{i-1}\Big)^{1/2} (\hat{m}_n - m). \tag{1.3} \]
Generally, knowing the distribution of the pivot permits forming confidence intervals, setting up tests of hypotheses about $m$, etc. However, here the form of the limit distribution of $V_n$ depends on $m$. More specifically, it is known that, as $n \to \infty$,
\[ V_n \xrightarrow{D} V = \begin{cases} N(0, \sigma^2) & \text{if } m \neq 1, \\[2pt] \{X(1) - \lambda\}\big/\big\{\int_0^1 X(t)\,dt\big\}^{1/2} & \text{if } m = 1, \end{cases} \tag{1.4} \]
where $\{X(t)\}$ is a nonnegative diffusion process with generator $Ah(x) = \lambda h'(x) + (\tfrac{1}{2}) x \sigma^2 h''(x)$, for $h \in C_0^{\infty}[0, \infty)$, and is obtained as a weak limit of the process $Z_{[nt]}/n$, as $n \to \infty$. Here $C_0^{\infty}[0, \infty)$ is the space of all infinitely differentiable functions on $[0, \infty)$ which have compact supports, and $'$ and $''$ denote the first and the second derivative, respectively. For the result (1.4), see Sriram, Basawa and Huggins (1991) for the cases $m < 1$ (subcritical) and $m = 1$ (critical), and Wei and Winnicki (1989) for $m > 1$ (supercritical).
In an attempt to approximate the sampling distribution of $V_n$, Sriram (1992) considered the standard parametric bootstrap. However, it was shown that it does not lead to an asymptotically valid approximation for the case $m = 1$; see Sriram (1992) for details. Because of the failure of the standard parametric bootstrap at $m = 1$, the investigation of the same for other values of $m$ (namely $m \neq 1$) was not carried out in Sriram (1992).
In this paper, we propose a clever modification of the standard parametric bootstrap and show that the modified procedure provides an asymptotically valid approximation to the sampling distribution of $V_n$, not only for the case $m = 1$ but also for the case $m \neq 1$. Thus, in this example, the modified parametric bootstrap serves as a unifying inference tool.
The basic idea of modifying a standard bootstrap so as to provide a unified solution can be used in other critical cases known in the literature as well. For example, it has been shown by Basawa et al. (1991) and Datta (1992) that for autoregression a similar invalidity results from the use of the standard bootstrap when the autoregressive parameter is $\pm 1$. It is possible to propose a similar modified bootstrap scheme for autoregression as done here and establish its asymptotic validity for all values of the autoregressive parameter; this will be considered in a forthcoming article.
Bootstrap methods have received considerable attention since the pioneering work of Efron (1979). A good survey of results for the i.i.d. setup is provided in a review article by Babu (1989); one may also consult the recent book by Hall (1992). Bootstrap schemes for various dependent models have been proposed by Bose (1988), Künsch (1989), Basawa et al. (1989), Lahiri (1991), Athreya and Fuh (1992), Liu and Singh (1992), Politis and Romano (1992), and Datta and McCormick (1992, 1993), among others.
2. The Bootstrap and Summary of Results.
For the purpose of the bootstrap, we assume that the offspring and the immigration r.v.'s have power series distributions with respective probability mass functions (p.m.f.'s) given by
\[ P_\theta[\xi = u] = a(u)\theta^u / A(\theta), \quad u = 0, 1, \ldots, \quad\text{and}\quad Q_\phi[Y = y] = b(y)\phi^y / B(\phi), \quad y = 0, 1, \ldots. \tag{2.1} \]
Here, $\{a(u)\}$ and $\{b(y)\}$ are known nonnegative sequences, $A(\theta) = \sum_{u=0}^{\infty} a(u)\theta^u$ for $0 < \theta < \theta^*$ and $B(\phi) = \sum_{y=0}^{\infty} b(y)\phi^y$ for $0 < \phi < \phi^*$, where $\theta^*$ and $\phi^*$ are the radii of convergence of the two power series. Note that, under the parametric model (2.1), $\hat{m}_n$ and $\hat{\lambda}_n$ given in (1.2) are the maximum likelihood estimators of $m$ and $\lambda$, respectively; see, for instance, Bhat and Adke (1981).
It can be easily shown that $m = E_\theta(\xi) = \theta A'(\theta)/A(\theta)$, $\lambda = E_\phi(Y) = \phi B'(\phi)/B(\phi)$, and the variances $\sigma^2 = V_\theta(\xi) = \theta(\partial m/\partial\theta)$ and $b^2 = V_\phi(Y) = \phi(\partial\lambda/\partial\phi)$. Here $\partial$ denotes a partial derivative. Since $\theta$, $\phi$, $\sigma^2$ and $b^2$ are all assumed to be positive, we have that $m$ and $\lambda$ are strictly increasing functions of $\theta$ and $\phi$, respectively. Let $m = f(\theta)$ and $\lambda = g(\phi)$, where $f$ and $g$ are known functions.
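As a concrete illustration (ours, not from the paper), Poisson offspring and immigration laws form power series families with $a(u) = 1/u!$ and $b(y) = 1/y!$, so that $A(\theta) = e^{\theta}$ and $B(\phi) = e^{\phi}$. Then
\[ m = \theta A'(\theta)/A(\theta) = \theta, \qquad \sigma^2 = \theta(\partial m/\partial\theta) = \theta, \qquad \lambda = \phi, \qquad b^2 = \phi, \]
so that $f$ and $g$ are the identity maps and $\theta^* = \phi^* = \infty$.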
A (parametric) bootstrap procedure to approximate the sampling distribution of $V_n$ in (1.3) can be described as follows. Given a sample $\mathcal{X}_n = \{(Z_i, Y_i),\ i = 1, \ldots, n\}$, estimate the offspring mean $m$ by some estimator $\dot{m}_n$ based on $\mathcal{X}_n$ and the immigration mean $\lambda$ by $\hat{\lambda}_n$ defined in (1.2). Replace $\theta$ and $\phi$ in (2.1) by their respective estimates $\dot{\theta}_n = f^{-1}(\dot{m}_n)$ and $\hat{\phi}_n = g^{-1}(\hat{\lambda}_n)$. Conditional on $\mathcal{X}_n$, let $\{\xi^*_{i,j}\}$ be a sequence of i.i.d. r.v.'s having p.m.f. $P_{\dot{\theta}_n}$ and $\{Y^*_i\}$ be a sequence of i.i.d. r.v.'s having p.m.f. $Q_{\hat{\phi}_n}$. The bootstrap sample $\mathcal{X}^*_n = \{(Z^*_i, Y^*_i),\ i = 1, \ldots, n\}$ is then obtained recursively from
\[ Z^*_i = \sum_{k=1}^{Z^*_{i-1}} \xi^*_{i-1,k} + Y^*_i, \qquad i = 1, 2, \ldots, \tag{2.2} \]
with $Z^*_0 = 1$.
Define the bootstrap analogues of $\hat{m}_n$ and $V_n$ by
\[ \hat{m}^*_n = \Big(\sum_{i=1}^{n} Z^*_{i-1}\Big)^{-1} \sum_{i=1}^{n} (Z^*_i - Y^*_i) \tag{2.3} \]
and
\[ V^*_n = \Big(\sum_{i=1}^{n} Z^*_{i-1}\Big)^{1/2} (\hat{m}^*_n - \dot{m}_n), \tag{2.4} \]
respectively. The (conditional) distribution of $V^*_n$ in (2.4) (given the original sample $\mathcal{X}_n$) constitutes a bootstrap approximation to the distribution of $V_n$.
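The following sketch (ours, continuing the Poisson special case of Section 1, in which $f$ and $g$ are identities so that $\dot{\theta}_n = \dot{m}_n$ and $\hat{\phi}_n = \hat{\lambda}_n$) indicates how the conditional law of $V^*_n$ would be approximated by Monte Carlo; simulate_bpi and mle are the hypothetical helpers sketched in Section 1.

\begin{verbatim}
def bootstrap_pivots(Z, Y, m_dot, n_boot=999):
    """Monte Carlo draws from the conditional law of V*_n in (2.4)."""
    n = len(Z) - 1
    lam_hat = Y[1:].mean()                  # hat lambda_n in (1.2)
    pivots = np.empty(n_boot)
    for b in range(n_boot):
        Zs, Ys = simulate_bpi(m_dot, lam_hat, n)   # bootstrap sample (2.2)
        m_star, _ = mle(Zs, Ys)                    # hat m*_n in (2.3)
        pivots[b] = np.sqrt(Zs[:-1].sum()) * (m_star - m_dot)  # V*_n in (2.4)
    return pivots
\end{verbatim}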
Note that, so far, we have left the selection of the estimator $\dot{m}_n$ for the parametric bootstrap quite arbitrary. Clearly, the natural choice for it is the m.l.e. $\hat{m}_n$ in (1.2) itself. This choice corresponds to the standard bootstrap mentioned in the introduction. However, as mentioned earlier, with $\dot{m}_n = \hat{m}_n$, Sriram (1992) showed that the conditional limit distribution of the bootstrap pivot $V^*_n$ does not coincide with the limit distribution of $V_n$ in (1.4) when $m = 1$. In other words, asymptotic validity does not hold for the standard parametric bootstrap. A deeper analysis shows that the main reason for its failure is that, when $m = 1$, the estimated distribution $P_{f^{-1}(\hat{m}_n)}$ no longer serves as a good choice for the bootstrap population, since $\hat{m}_n$ does not converge to $m$ fast enough.
In this paper we propose the following selection of $\dot{m}_n$, which converges sufficiently fast to $m$ when $m = 1$ and still remains consistent for other values of $m$. The idea behind it is that of an adaptive (data-dependent) shrinkage towards $m = 1$ (similar in spirit to the Hodges estimator; see LeCam (1953)). In order to describe our selection of $\dot{m}_n$, let $\{\eta_n\}$ be a non-random sequence satisfying $n\eta_n \to 0$. Let $\delta_n = h(\sum_{i=1}^{n} Z_{i-1})$, where $h$ is a positive function on $[0, \infty)$ such that $\lim_{z \to \infty} (z/\log\log z)^{1/2}\, h(z) = \infty$ and $\lim_{z \to \infty} h(z) = 0$; e.g., $h(z) = z^{-1/4}$. For $\hat{m}_n$ in (1.2), define
\[ \dot{m}_n = \begin{cases} 1 - \eta_n & \text{if } 1 - \delta_n \le \hat{m}_n \le 1 - \eta_n, \\ 1 + \eta_n & \text{if } 1 + \eta_n \le \hat{m}_n \le 1 + \delta_n, \\ \hat{m}_n & \text{otherwise.} \end{cases} \tag{2.5} \]
The role of $\delta_n$ and $\eta_n$ in the definition of $\dot{m}_n$ is explained further in Remark 2.1 below.
The main result of this paper is that, with the above selection of $\dot{m}_n$, the (conditional) distribution of the bootstrap pivot $V^*_n$ in (2.4) is asymptotically close to the distribution of the pivot $V_n$ given in (1.3). In order to describe it formally, we let $P^*$ denote the probability corresponding to the bootstrap sample $\mathcal{X}^*_n$, conditional on the original sample $\mathcal{X}_n$.

Theorem 2.1. Consider the model given in (1.1) and assume (2.1). For $V_n$ in (1.3) and $V^*_n$ in (2.4), where $\dot{m}_n$ is given by (2.5), we have
\[ \sup_x |P\{V_n \le x\} - P^*\{V^*_n \le x\}| \to 0 \quad \text{as } n \to \infty, \tag{2.6} \]
almost surely (a.s.), for all $m > 0$.
Remark 2.1. The construction of $\dot{m}_n$ ensures that it is within $\eta_n$ distance of 1 whenever $\hat{m}_n$ is within $\delta_n$ distance of 1 (an indication that the true $m$ is one or very close to one). A possible, perhaps the simplest, choice of $\eta_n$ is zero. However, use of a non-zero $\eta_n$, e.g., $\eta_n = (n\log n)^{-1}$, may be better for the small sample properties of the bootstrap when $m$ is very close to one.
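In the same hypothetical code as before, the selection rule (2.5) may be sketched with the simplest choices $\eta_n = 0$ and $h(z) = z^{-1/4}$ mentioned above:

\begin{verbatim}
def m_dot_modified(Z, Y, eta_n=0.0):
    """The shrinkage estimator dot m_n of (2.5)."""
    m_hat, _ = mle(Z, Y)
    delta_n = Z[:-1].sum() ** (-0.25)   # delta_n = h(sum of Z_{i-1})
    if 1 - delta_n <= m_hat <= 1 - eta_n:
        return 1 - eta_n                # shrink towards criticality
    if 1 + eta_n <= m_hat <= 1 + delta_n:
        return 1 + eta_n
    return m_hat                        # keep the m.l.e. otherwise
\end{verbatim}

Passing this estimator as m_dot to the bootstrap sketch above gives the modified bootstrap whose validity is asserted in Theorem 2.1.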
In order to prove Theorem 2.1 we obtain a series of results which are of independent interest as well. Different techniques are used for the three cases $m < 1$, $m = 1$ and $m > 1$. In all the cases, properties of the estimator $\dot{m}_n$ in (2.5) (which in turn uses those of $\hat{m}_n$) are required. These are presented in the next section. Necessary limit theorems for arrays of branching processes for the three cases are obtained in Section 4. Finally, Theorem 2.1 is proved in Section 5 using the results obtained in Sections 3 and 4.
3. Properties of the Modified Estimator.
Let $\dot{m}_n$ be the estimator given in (2.5). In this section we obtain a number of properties of $\dot{m}_n$ which will be useful in proving Theorem 2.1. First, we state and prove a law of the iterated logarithm (LIL) for $\hat{m}_n$ in (1.2), when $m = 1$. The LIL result will be used in the proposition following the proof of the lemma. Note that for Lemma 3.1 and Proposition 3.1 below we do not require the distributional assumption (2.1); hence we state the necessary moment conditions.
Lemma 3.1. Suppose $m = 1$ in model (1.1) and $E|\xi_{1,1}|^{2+s} < \infty$ for some $s > 0$. Then, for $\hat{m}_n$ defined in (1.2),
\[ \hat{m}_n - 1 = O\Big( \big(\log\log \textstyle\sum_{i=1}^{n} Z_{i-1}\big)^{1/2} \Big/ \big(\textstyle\sum_{i=1}^{n} Z_{i-1}\big)^{1/2} \Big) \quad \text{a.s., as } n \to \infty. \tag{3.1} \]
Proof. It will be shown below that conclusion (3.1) follows from Lemma 2 (result (2.4)) of Wei (1985). To this end, let
\[ W_i = (Z_i - Z_{i-1} - Y_i) = \sum_{k=1}^{Z_{i-1}} (\xi_{i-1,k} - 1), \tag{3.2} \]
where $u_i = \sqrt{Z_{i-1}}$, and $\epsilon_i = W_i/u_i$ if $u_i \neq 0$, and $0$ otherwise. Clearly, $\{\epsilon_i\}$ is a sequence of martingale differences with respect to the $\sigma$-fields $\mathcal{F}_n = \sigma\{\xi_{i-1,k}, Y_i, 1 \le i \le n, k \ge 1\}$. Moreover, by Lemma 2.1 of Lai and Wei (1983), there is a constant $C$ such that
\[ \sup_{i \ge 1} E(|\epsilon_i|^{2+s} \mid \mathcal{F}_{i-1}) \le C \quad \text{a.s.} \tag{3.3} \]
Hence, condition (1.2) of Lemma 2 in Wei (1985) holds. It only remains to check condition (2.3) of Lemma 2 in Wei (1985), which amounts to showing that for some $0 < c < 1$,
\[ Z_{n-1} = o\Big(\big(\textstyle\sum_{i=1}^{n} Z_{i-1}\big)^{c}\Big) \quad \text{a.s., as } n \to \infty. \tag{3.4} \]
It can be shown, using arguments similar to the proof of Lemma A of Sriram, Basawa and Huggins (1991), that for $\gamma > 3/4$,
\[ \lim_{k \to \infty} P\Big(Z_n > \delta \big(\textstyle\sum_{i=1}^{n} Z_{i-1}\big)^{\gamma} \ \text{for some } n \ge k\Big) = 0 \tag{3.5} \]
for any $\delta > 0$. This implies that for $\gamma > 3/4$,
\[ Z_n = o\Big(\big(\textstyle\sum_{i=1}^{n} Z_{i-1}\big)^{\gamma}\Big) \quad \text{a.s., as } n \to \infty. \tag{3.6} \]
Since $\sum_{i=1}^{n} Z_{i-1} \le \sum_{i=1}^{n+1} Z_{i-1}$, we have from (3.6) that, for $\gamma > 3/4$, $Z_n = o\big(\big(\sum_{i=1}^{n+1} Z_{i-1}\big)^{\gamma}\big)$ a.s. as $n \to \infty$. This yields (3.4). Hence, as $n \to \infty$,
\[ \sum_{i=1}^{n} W_i = O\Big(\big(\textstyle\sum_{i=1}^{n} Z_{i-1}\,\log\log \textstyle\sum_{i=1}^{n} Z_{i-1}\big)^{1/2}\Big) \quad \text{a.s.}, \]
which by the definition of $\hat{m}_n$ yields the required result. •
Proposition 3.1. Consider the model (1.1). The following hold for the estimator $\dot{m}_n$ in (2.5):

Case $m \neq 1$:
\[ \dot{m}_n \to m \quad \text{a.s., as } n \to \infty. \tag{3.7} \]
Case $m = 1$: Suppose for some $s > 0$, $E|\xi_{1,1}|^{2+s} < \infty$. Then
\[ n(\dot{m}_n - 1) \to 0 \quad \text{a.s., as } n \to \infty. \tag{3.8} \]
Case $m > 1$:
\[ n(\dot{m}_n - m) \to 0 \quad \text{a.s., as } n \to \infty, \tag{3.9} \]
and hence
\[ (\dot{m}_n/m)^n \to 1 \quad \text{a.s., as } n \to \infty. \tag{3.10} \]

Proof. Let $m \neq 1$. Note that (3.7) follows readily if we show for $\hat{m}_n$ in (1.2) that
\[ \hat{m}_n \to m \quad \text{a.s., as } n \to \infty, \tag{3.11} \]
and
\[ P(\dot{m}_n \neq \hat{m}_n \ \text{i.o.}) = 0. \tag{3.12} \]
For (3.11), note that
\[ \hat{m}_n - m = \sum_{i=1}^{n} (Z_i - mZ_{i-1} - Y_i) \Big/ \sum_{i=1}^{n} Z_{i-1} \to 0 \quad \text{a.s., as } n \to \infty, \tag{3.13} \]
by the strong law of large numbers for martingales (Hall and Heyde (1980), [HH] hereafter, Theorem 2.18) applied to the martingale sequence $\{\sum_{i=1}^{n} (Z_i - mZ_{i-1} - Y_i), \mathcal{F}_n\}$, for $\mathcal{F}_n$ defined in Lemma 3.1. As for (3.12),
\[ P(\dot{m}_n \neq \hat{m}_n \ \text{i.o.}) \le P(|\hat{m}_n - 1| \le \delta_n \ \text{i.o.}) \le P(|\hat{m}_n - m| \ge |m - 1| - \delta_n \ \text{i.o.}) = 0, \]
because $m \neq 1$, $\delta_n \downarrow 0$ a.s. and (3.13) holds. Hence the case $m \neq 1$.

Let $m = 1$ and $\eta > 0$. Then there exists $N$ such that $n\eta_n < \eta$ for all $n \ge N$. Therefore
\begin{align*}
P(n|\dot{m}_n - 1| > \eta \ \text{i.o.}) &\le P(|\hat{m}_n - 1| > \delta_n \ \text{i.o.}) \\
&= P\bigg( \frac{(\sum_{i=1}^{n} Z_{i-1})^{1/2}\, |\hat{m}_n - 1|}{(\log\log \sum_{i=1}^{n} Z_{i-1})^{1/2}} > \frac{\delta_n\, (\sum_{i=1}^{n} Z_{i-1})^{1/2}}{(\log\log \sum_{i=1}^{n} Z_{i-1})^{1/2}} \ \text{i.o.} \bigg) = 0,
\end{align*}
by the assumptions on $\delta_n$ (see (2.5)) and Lemma 3.1. Hence the case $m = 1$.

Finally, let $m > 1$. Observe that (3.9) follows from (3.12) provided
\[ n(\hat{m}_n - m) \to 0 \quad \text{a.s., as } n \to \infty. \tag{3.14} \]
For $\epsilon \in (0, \tfrac{1}{2})$, write using the definition of $\hat{m}_n$ that
\[ n(\hat{m}_n - m) = \bigg[ \frac{n}{(\sum_{i=1}^{n} Z_{i-1})^{\frac{1}{2}-\epsilon}} \bigg] \bigg[ \frac{\sum_{i=1}^{n} (Z_i - mZ_{i-1} - Y_i)}{(\sum_{i=1}^{n} Z_{i-1})^{\frac{1}{2}+\epsilon}} \bigg]. \tag{3.15} \]
Now (3.14) follows since the strong law for martingales (see [HH], Theorem 2.18) implies that $\sum_{i=1}^{n} (Z_i - mZ_{i-1} - Y_i)/(\sum_{i=1}^{n} Z_{i-1})^{\frac{1}{2}+\epsilon} \to 0$ a.s., and a result of Seneta (1970) [also see Heyde (1970), Theorem 3] implies that
\[ m^{-n} \sum_{i=1}^{n} Z_{i-1} \to (m-1)^{-1} W \quad \text{a.s., as } n \to \infty, \tag{3.16} \]
where $W$ is a positive random variable. It is easy to see that (3.9) implies (3.10). Hence the proposition. •
4. Array of Branching Processes.

Consider a general array of branching processes $\{Z_i^{(n)}\}$ given by
\[ Z_i^{(n)} = \sum_{k=1}^{Z_{i-1}^{(n)}} \xi_{i-1,k}^{(n)} + Y_i^{(n)}, \qquad i = 1, 2, \ldots, \tag{4.1} \]
where, for each $n$, $Z_0^{(n)} = 1$, $\{\xi_{i,j}^{(n)}\}$ is a sequence of i.i.d. r.v.'s with mean $\mu_n$ and variance $\sigma_n^2$, and $\{Y_i^{(n)}\}$ is a sequence of i.i.d. r.v.'s with mean $\lambda_n$ and variance $b_n^2$; also, assume that $\{\xi_{i,j}^{(n)}\}$ and $\{Y_i^{(n)}\}$ are independent. The following condition is assumed throughout this section: For the model defined in (4.1),
\[ \text{(C-1)} \qquad \mu_n \to m, \quad \lambda_n \to \lambda, \quad \sigma_n^2 \to \sigma^2, \tag{4.2} \]
where $m$, $\lambda$, and $\sigma^2$ are all positive and finite.
Define
\[ \hat{\mu}_n = \Big(\sum_{i=1}^{n} Z_{i-1}^{(n)}\Big)^{-1} \sum_{i=1}^{n} (Z_i^{(n)} - Y_i^{(n)}) \tag{4.3} \]
and the pivot
\[ \bar{V}_n = \Big(\sum_{i=1}^{n} Z_{i-1}^{(n)}\Big)^{1/2} (\hat{\mu}_n - \mu_n). \tag{4.4} \]
In this section, we derive the limit distribution of $\bar{V}_n$ for the cases $m < 1$, $m = 1$ and $m > 1$ under (C-1) and other regularity conditions. Incidentally, the limit distribution of $\bar{V}_n$ when $m = 1$ in (C-1) has been derived by Sriram (1992) and it is stated next.
Theorem 4.1. For model (4.1), assume condition (C-1) in (4.2) with $m = 1$. Furthermore, assume that $b_n^2 \to b^2 \in (0, \infty)$ and, for a real number $\alpha$,
\[ n(\mu_n - 1) \to \alpha. \tag{4.5} \]
Suppose, for any sequence $\{x_n\}$ such that $x_n \to x \in (0, \infty)$,
\[ E\big\{ (\xi_{1,1}^{(n)} - \mu_n)^2\, I\big(|\xi_{1,1}^{(n)} - \mu_n| > \epsilon (n x_n)^{1/2}\big) \big\} \to 0 \tag{4.6} \]
for all $\epsilon > 0$. Then, for $\bar{V}_n$ defined in (4.4),
\[ \bar{V}_n \xrightarrow{D} v(\alpha, \lambda, \sigma^2), \tag{4.7} \]
where
\[ v(\alpha, \lambda, \sigma^2) = \Big\{\int_0^1 X_\alpha(t)\,dt\Big\}^{-1/2} \{X_\alpha(1) - \lambda\} - \alpha \Big\{\int_0^1 X_\alpha(t)\,dt\Big\}^{1/2} \tag{4.8} \]
and $\{X_\alpha(t)\}$ is a nonnegative diffusion process with generator
\[ A_\alpha h(x) = \alpha x h'(x) + \lambda h'(x) + (1/2)\, x \sigma^2 h''(x), \qquad h \in C_0^{\infty}[0, \infty). \tag{4.9} \]
Proof. See Sriram (1992), Corollary 3.1. •
Next, we consider the case when $\mu_n \to m < 1$. First, we state (without proof) an $L_r$ convergence result for martingale arrays, which will be used in the theorem proved below.

Lemma 4.1. For each $n \ge 1$, let $\{U_{ni}, \mathcal{G}_{ni}, 1 \le i \le n\}$ define a martingale difference sequence. If
\[ \lim_{M \to \infty} \sup_{n \ge 1} \max_{1 \le i \le n} E\{|U_{ni}|\, I(|U_{ni}| > M)\} = 0, \tag{4.10} \]
then
\[ n^{-1} E\Big|\sum_{i=1}^{n} U_{ni}\Big| \to 0. \tag{4.11} \]
The above lemma can be proved using exactly the same arguments as in the proof of Theorem 2.22 of [HH] with $p = 1$, even though the theorem in [HH] does not consider an array.

For the process in (4.1), define a sequence of $\sigma$-fields $\mathcal{F}_{ni}$, for each $n \ge 1$, by
\[ \mathcal{F}_{ni} = \sigma\{\xi_{l-1,j}^{(n)}, Y_l^{(n)},\ 1 \le l \le i,\ j \ge 1\}. \tag{4.12} \]
Theorem 4.2. For model (4.1), assume condition (C-1) in (4.2) with $m < 1$ and also that $b_n^2 \to b^2 \in (0, \infty)$. Furthermore, assume that for some $\delta > 0$, $E|\xi_{1,1}^{(n)}|^{2+\delta} \le B$ for all $n \ge 1$. Then, for $\bar{V}_n$ defined in (4.4),
\[ \bar{V}_n \xrightarrow{D} N(0, \sigma^2). \tag{4.13} \]
Proof. Use (4.3) and (4.4) to write
\begin{align*}
\bar{V}_n &= \Big(\sum_{i=1}^{n} Z_{i-1}^{(n)}\Big)^{-1/2} \sum_{i=1}^{n} (Z_i^{(n)} - \mu_n Z_{i-1}^{(n)} - Y_i^{(n)}) \\
&= \Big(n^{-1} \sum_{i=1}^{n} Z_{i-1}^{(n)}\Big)^{-1/2} n^{-1/2} \sum_{i=1}^{n} (Z_i^{(n)} - \mu_n Z_{i-1}^{(n)} - Y_i^{(n)}). \tag{4.14}
\end{align*}
Let
\[ X_{ni} = n^{-1/2} (Z_i^{(n)} - \mu_n Z_{i-1}^{(n)} - Y_i^{(n)}) \quad\text{and}\quad S_{nk} = \sum_{i=1}^{k} X_{ni}. \tag{4.15} \]
Then, for each $n \ge 1$, $\{S_{nk}, \mathcal{F}_{nk}, k \ge 1\}$ defines a martingale sequence, so that $\{S_{nk}, \mathcal{F}_{nk}, k \ge 1, n \ge 1\}$ defines a martingale array with the conditional variance given by
\[ V_{nn}^2 = \sum_{i=1}^{n} E(X_{ni}^2 \mid \mathcal{F}_{n,i-1}) = \sigma_n^2\, n^{-1} \sum_{i=1}^{n} Z_{i-1}^{(n)}. \tag{4.16} \]
Use (4.15) and (4.16) to write $\bar{V}_n = \sigma_n (S_{nn}/V_{nn})$. Then, we will use the martingale array CLT given in Corollary 3.2 of [HH] to prove (4.13). To this end, it suffices to check the conditions of Corollary 3.2 of [HH] (also see Corollary 3.1 and Theorem 3.2 of [HH]). First we show that $V_{nn}^2$ converges in probability. For this, observe that
\begin{align*}
n^{-1} \sum_{i=1}^{n} (Z_i^{(n)} - \mu_n Z_{i-1}^{(n)} - \lambda_n) &= n^{-1} \sum_{i=1}^{n} \{(1 - \mu_n) Z_i^{(n)} + \mu_n (Z_i^{(n)} - Z_{i-1}^{(n)}) - \lambda_n\} \\
&= (1 - \mu_n)\, n^{-1} \sum_{i=1}^{n} Z_i^{(n)} + \mu_n n^{-1} (Z_n^{(n)} - Z_0^{(n)}) - \lambda_n. \tag{4.17}
\end{align*}
n
Let Uni = (Zf ) - PnZf~~ - An), then {Uni, Fni' i :2: I} defines a martingale difference
sequence for each n :2: 1. Furthermore,
EIUnil2 = E
(~(el~L _I"n) + (Yj(n) _ An))
2
1=1
2 by conditl'oning on
= 0'2E(Z~n»)
n
1-1 + bn'
J""n,l-l,
= O'~{p~-1 + An (1 + Pn + ... + p~-2)} + b~
'r::'.
(4.18)
by repeated conditioning and the fact that Zan) = 1. Now use (C-1) in (4.2) and that
Pn -+
m
< 1 to get
sup sup EIUn il 2
n
< 00.
(4.19)
l:Si:Sn
This implies condition (4.10) of Lemma 4.1. Hence, by Lemma 4.1,
\[ n^{-1} \sum_{i=1}^{n} (Z_i^{(n)} - \mu_n Z_{i-1}^{(n)} - \lambda_n) \xrightarrow{L_1} 0 \quad \text{as } n \to \infty, \tag{4.20} \]
and therefore (4.20) holds in probability as well. Also, since $E(Z_n^{(n)}) = \mu_n^n + \lambda_n(1 + \mu_n + \cdots + \mu_n^{n-1})$ is bounded in $n$, we have by (C-1) that
\[ \mu_n n^{-1} (Z_n^{(n)} - Z_0^{(n)}) \xrightarrow{P} 0. \tag{4.21} \]
Using (C-1) once more, (4.17), (4.20) and (4.21), we get
\[ n^{-1} \sum_{i=1}^{n} Z_i^{(n)} \xrightarrow{P} \lambda/(1 - m) \quad \text{as } n \to \infty. \tag{4.22} \]
Hence, from (4.16), (4.21) and (4.22),
\[ V_{nn}^2 \xrightarrow{P} \sigma^2 \lambda/(1 - m). \tag{4.23} \]
Let $\eta^2 = \sigma^2 \lambda/(1 - m)$. Since $\eta^2$ is a constant, condition (3.21) of Theorem 3.2 in [HH] can be dropped (see the Remarks after Corollary 3.1 in [HH]). Now, it only remains to show for $X_{ni}$ defined in (4.15) that, for all $\epsilon > 0$,
\[ \sum_{i=1}^{n} E\{X_{ni}^2\, I(|X_{ni}| > \epsilon) \mid \mathcal{F}_{n,i-1}\} \xrightarrow{P} 0 \quad \text{as } n \to \infty. \tag{4.24} \]
For this, it suffices to show that
\[ \sum_{i=1}^{n} E\{|X_{ni}|^{2+\delta} \mid \mathcal{F}_{n,i-1}\} \xrightarrow{P} 0 \quad \text{as } n \to \infty, \tag{4.25} \]
where $\delta$ is as in the hypothesis of the theorem and $\delta \in (0, 2)$. But, by a result of Chow and Teicher (1978) [see Corollary 10.3.2, p. 357], there exists a constant $K_\delta > 0$ such that
\begin{align*}
E\{|Z_i^{(n)} - \mu_n Z_{i-1}^{(n)} - Y_i^{(n)}|^{2+\delta} \mid \mathcal{F}_{n,i-1}\} &= E\Big\{\Big|\sum_{k=1}^{Z_{i-1}^{(n)}} (\xi_{i-1,k}^{(n)} - \mu_n)\Big|^{2+\delta} \,\Big|\, \mathcal{F}_{n,i-1}\Big\} \\
&\le K_\delta\, |Z_{i-1}^{(n)}|^{1+\delta/2}\, E|\xi_{1,1}^{(n)} - \mu_n|^{2+\delta}.
\end{align*}
Hence, by the assumption that $E|\xi_{1,1}^{(n)}|^{2+\delta} \le B$ for all $n \ge 1$, and $\mu_n \to m < 1$, we have for some constant $B_1$ that
\[ \sum_{i=1}^{n} E\{|X_{ni}|^{2+\delta} \mid \mathcal{F}_{n,i-1}\} \le B_1 n^{-(1+\delta/2)} \sum_{i=1}^{n} |Z_{i-1}^{(n)}|^{1+\delta/2} \xrightarrow{P} 0 \quad \text{as } n \to \infty, \tag{4.26} \]
because, for $\delta \in (0, 2)$, $E|Z_{i-1}^{(n)}|^{1+\delta/2} \le E(Z_{i-1}^{(n)})^2$ and it is possible to show, using (C-1) in (4.2) and the assumption $m < 1$, that $\sum_{i=1}^{n} E(Z_{i-1}^{(n)})^2 = O(n)$ as $n \to \infty$. Hence, the theorem follows from Corollary 3.2 of [HH], (4.23) and (4.26). •
Finally, consider the case when $\mu_n \to m > 1$. For this case, assume further that $\{\xi_{i,j}^{(n)}\}$ and $\{Y_i^{(n)}\}$ defined in (4.1) satisfy the following: For each $n \ge 1$,
\[ \text{the r.v. } \xi_{1,1}^{(n)} \text{ has a power series distribution with p.m.f. } P_{\theta_n}, \text{ and } Y_1^{(n)} \text{ has a power series distribution with p.m.f. } Q_{\phi_n}, \tag{4.27} \]
where $P_{\theta_n}$ and $Q_{\phi_n}$ are as defined in (2.1) with $\theta$ and $\phi$ replaced by $\theta_n$ and $\phi_n$, respectively. Let $F_{\theta_n}$ and $G_{\phi_n}$ be the distribution functions associated with $P_{\theta_n}$ and $Q_{\phi_n}$, respectively. Also, let $F_\theta$ and $G_\phi$ be the distribution functions associated with $P_\theta$ and $Q_\phi$ defined in (2.1).

Theorem 4.3. For the model (4.1), assume condition (C-1) in (4.2) with $m > 1$. Furthermore, assume that (4.27) holds and
\[ n(\mu_n - m) \to 0 \quad \text{as } n \to \infty. \tag{4.28} \]
Then, for $\bar{V}_n$ defined in (4.4),
\[ \bar{V}_n \xrightarrow{D} N(0, \sigma^2). \]
For the proof of Theorem 4.3, we construct an array process $\{\tilde{Z}_i^{(n)}\}$ and a process $\{\tilde{Z}_i\}$ on a common probability space, in the following way. Let $\{U_{i,j}\}$ and $\{V_i\}$ be two sequences of uniform $(0,1)$ r.v.'s, all of which are independent and defined on a common probability space. Define, for $F_{\theta_n}$, $G_{\phi_n}$, $F_\theta$ and $G_\phi$ above,
\[ \tilde{\xi}_{i,j}^{(n)} = F_{\theta_n}^{-1}(U_{i,j}), \quad \tilde{Y}_i^{(n)} = G_{\phi_n}^{-1}(V_i), \quad \tilde{\xi}_{i,j} = F_\theta^{-1}(U_{i,j}), \quad \tilde{Y}_i = G_\phi^{-1}(V_i), \tag{4.29} \]
for $i, j \ge 1$. Here, for a distribution function $H$, $H^{-1}(w) = \inf\{x : H(x) \ge w\}$, for $0 < w < 1$.
Now, define $\tilde{Z}_i^{(n)}$ by
\[ \tilde{Z}_i^{(n)} = \sum_{k=1}^{\tilde{Z}_{i-1}^{(n)}} \tilde{\xi}_{i-1,k}^{(n)} + \tilde{Y}_i^{(n)}, \qquad i = 1, 2, \ldots, \tag{4.30} \]
with $\tilde{Z}_0^{(n)} = 1$. Also, define $\tilde{Z}_i$ by
\[ \tilde{Z}_i = \sum_{k=1}^{\tilde{Z}_{i-1}} \tilde{\xi}_{i-1,k} + \tilde{Y}_i, \qquad i = 1, 2, \ldots, \tag{4.31} \]
with $\tilde{Z}_0 = 1$. Observe that, by (4.27) and the above construction, $\{\tilde{Z}_i^{(n)}\}$ has the same distribution as $\{Z_i^{(n)}\}$ defined in (4.1) for each $n \ge 1$, and $\{\tilde{Z}_i\}$ has the same distribution as $\{Z_i\}$ defined in (1.1) (with the assumption (2.1)). Define a sequence of $\sigma$-fields by
\[ \tilde{\mathcal{G}}_i = \sigma\{U_{l,k}, V_l,\ 1 \le l \le i,\ k \ge 1\}. \tag{4.32} \]
By a result of Seneta (1970), there exists a positive r.v. $W_1$ such that
\[ m^{-n} \tilde{Z}_n \to W_1 \quad \text{a.s., as } n \to \infty, \tag{4.33} \]
for $\tilde{Z}_n$ defined in (4.31).
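The inverse-c.d.f. construction (4.29) is a standard quantile coupling: the same uniforms drive both processes, so the coupled variables differ only through their parameters. A minimal illustration (ours), for two Poisson laws and assuming scipy is available, with poisson.ppf playing the role of $H^{-1}$:

\begin{verbatim}
import numpy as np
from scipy.stats import poisson

rng = np.random.default_rng(1)
U = rng.uniform(size=100_000)
xi_n = poisson.ppf(U, mu=1.05)   # tilde xi^{(n)} = F_{theta_n}^{-1}(U)
xi = poisson.ppf(U, mu=1.00)     # tilde xi = F_theta^{-1}(U)
print(np.abs(xi_n - xi).mean())  # approximates d_n = E|xi^{(n)} - xi|, shown
                                 # in Section 6 to be O(|mu_n - m|)
\end{verbatim}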
Next, we state two lemmas for the array $\{\tilde{Z}_i^{(n)}\}$ in (4.30) which will be used in the proof of Theorem 4.3 below.

Lemma 4.2. For the process $\{\tilde{Z}_i^{(n)}\}$ in (4.30), assume that $\mu_n \to m > 1$ and $\lambda_n \to \lambda$ as $n \to \infty$. Furthermore, assume that (4.28) holds. Then, for $W_1$ in (4.33),
\[ \mu_n^{-n} \tilde{Z}_n^{(n)} \xrightarrow{P} W_1 \quad \text{as } n \to \infty. \tag{4.34} \]
Moreover,
\[ \mu_n^{-n} \sum_{i=1}^{n} \tilde{Z}_{i-1}^{(n)} \xrightarrow{P} (m - 1)^{-1} W_1 \quad \text{as } n \to \infty. \tag{4.35} \]

Lemma 4.3. For the process $\{\tilde{Z}_i^{(n)}\}$ in (4.30), assume that condition (C-1) in (4.2) holds with $m > 1$. Then, as $n \to \infty$,
\[ (m-1)^{1/2} \sum_{j=1}^{n} m^{-j/2} \big[ (\tilde{Z}_{n-j+1}^{(n)} - \mu_n \tilde{Z}_{n-j}^{(n)} - \tilde{Y}_{n-j+1}^{(n)}) \big/ (\tilde{Z}_{n-j}^{(n)} + 1)^{1/2} \big] \xrightarrow{D} N(0, \sigma^2). \]
Lemma 4.2 shows that a result similar to (4.33) holds for the array $\{\tilde{Z}_i^{(n)}\}$ in (4.30) as well. The proof of Lemma 4.2 is quite non-trivial and is given in Section 6. Lemma 4.3 is an array version of Corollary 3.3 of Wei and Winnicki (1989) and can be proved using the same arguments as in their paper. Hence we omit its proof.

The proof of Theorem 4.3 is given next. The method of proof is similar to that of Theorem 3.5 of Wei and Winnicki (1989) for the model (1.1), although here one needs to work harder because of the array structure.
Proof of Theorem 4.3. For the process $\{\tilde{Z}_i^{(n)}\}$ in (4.30), let
\[ \tilde{\mu}_n = \Big(\sum_{i=1}^{n} \tilde{Z}_{i-1}^{(n)}\Big)^{-1} \sum_{i=1}^{n} (\tilde{Z}_i^{(n)} - \tilde{Y}_i^{(n)}) \quad\text{and}\quad \tilde{V}_n = \Big(\sum_{i=1}^{n} \tilde{Z}_{i-1}^{(n)}\Big)^{1/2} (\tilde{\mu}_n - \mu_n). \tag{4.36} \]
Since $\{\tilde{Z}_i^{(n)}\}$ and $\{Z_i^{(n)}\}$ defined in (4.1) have the same distribution for each $n \ge 1$, we have that $\tilde{V}_n$ has the same distribution as $\bar{V}_n$ defined in (4.4). Therefore, it suffices to show that
\[ \tilde{V}_n \xrightarrow{D} N(0, \sigma^2). \tag{4.37} \]
To this end, let $\tilde{\epsilon}_i^{(n)} = (\tilde{Z}_i^{(n)} - \mu_n \tilde{Z}_{i-1}^{(n)} - \tilde{Y}_i^{(n)})$. Use (4.36) to write
\begin{align*}
\tilde{V}_n &= \Big(\sum_{i=1}^{n} \tilde{Z}_{i-1}^{(n)}\Big)^{-1/2} \sum_{i=1}^{n} \tilde{\epsilon}_i^{(n)} \\
&= \Big(\sum_{i=1}^{n} \tilde{Z}_{i-1}^{(n)}\Big)^{-1/2} \Big\{ \sum_{i=1}^{n} \big[ (\tilde{Z}_{i-1}^{(n)} + 1)^{1/2} - (m^{i-1} W_1)^{1/2} \big]\, \tilde{\epsilon}_i^{(n)} / (\tilde{Z}_{i-1}^{(n)} + 1)^{1/2} \\
&\qquad\qquad + W_1^{1/2} \sum_{i=1}^{n} m^{(i-1)/2}\, \tilde{\epsilon}_i^{(n)} / (\tilde{Z}_{i-1}^{(n)} + 1)^{1/2} \Big\}, \tag{4.38}
\end{align*}
where $W_1$ is the r.v. defined in Lemma 4.2 above. Observe that (4.28) implies
\[ (\mu_n/m)^n \to 1 \quad \text{as } n \to \infty. \tag{4.39} \]
Therefore, by (4.39), (4.35) and (4.38), it suffices to show that
\[ \sum_{i=1}^{n} \big[ (\tilde{Z}_{i-1}^{(n)} + 1)^{1/2} - (m^{i-1} W_1)^{1/2} \big]\, \tilde{\epsilon}_i^{(n)} / (\tilde{Z}_{i-1}^{(n)} + 1)^{1/2} = o_p(m^{n/2}) \tag{4.40} \]
and
\[ (m - 1)^{1/2} \sum_{i=1}^{n} m^{-(n-i+1)/2}\, \tilde{\epsilon}_i^{(n)} / (\tilde{Z}_{i-1}^{(n)} + 1)^{1/2} \xrightarrow{D} N(0, \sigma^2). \tag{4.41} \]
But (4.41) follows from Lemma 4.3 by setting $n - i + 1 = j$ and summing over $j$. As for (4.40), use $\tilde{Z}_i$ in (4.31) to rewrite the left side of (4.40) as
\begin{align*}
&\sum_{i=1}^{n} m^{(i-1)/2} \big\{ [(\tilde{Z}_{i-1}^{(n)} + 1)/m^{i-1}]^{1/2} - [(\tilde{Z}_{i-1} + 1)/m^{i-1}]^{1/2} \big\}\, \tilde{\epsilon}_i^{(n)} / (\tilde{Z}_{i-1}^{(n)} + 1)^{1/2} \\
&\quad + \sum_{i=1}^{n} m^{(i-1)/2} \big\{ [(\tilde{Z}_{i-1} + 1)/m^{i-1}]^{1/2} - W_1^{1/2} \big\}\, \tilde{\epsilon}_i^{(n)} / (\tilde{Z}_{i-1}^{(n)} + 1)^{1/2} \\
&= (\mathrm{I}) + (\mathrm{II}), \quad\text{say.} \tag{4.42}
\end{align*}
Consider (II) first. Apply the Cauchy-Schwarz inequality to get
\[ |(\mathrm{II})| \le A_n^{1/2} B_n^{1/2}, \tag{4.43} \]
where
\[ A_n = \sum_{i=1}^{n} m^{(i-1)/2} \big\{ [(\tilde{Z}_{i-1} + 1)/m^{i-1}]^{1/2} - W_1^{1/2} \big\}^2 \quad\text{and}\quad B_n = \sum_{i=1}^{n} m^{(i-1)/2}\, [\tilde{\epsilon}_i^{(n)}]^2 / (\tilde{Z}_{i-1}^{(n)} + 1). \tag{4.44} \]
By (4.33) [or Heyde (1970), Theorem 3] we have that
\[ A_n = o\Big(\sum_{i=1}^{n} m^{(i-1)/2}\Big) = o(m^{n/2}) \quad \text{a.s.} \tag{4.45} \]
Moreover, from the definition of $\tilde{\epsilon}_i^{(n)}$ and (4.30), we get by conditioning w.r.t. $\tilde{\mathcal{G}}_{i-1}$ in (4.32) that
\[ E(B_n) \le \sigma_n^2 \sum_{i=1}^{n} m^{(i-1)/2} = O(m^{n/2}), \tag{4.46} \]
since $\sigma_n^2 \to \sigma^2$. Hence
\[ B_n = O_p(m^{n/2}). \tag{4.47} \]
From (4.43), (4.45) and (4.47),
\[ |(\mathrm{II})| = o_p(m^{n/2}). \tag{4.48} \]
As for (I) in (4.42), use the Cauchy-Schwarz inequality once again to get
\[ |(\mathrm{I})| \le C_n^{1/2} B_n^{1/2}, \tag{4.49} \]
where $B_n$ is as in (4.44) and
\[ C_n = \sum_{i=1}^{n} m^{(i-1)/2} \big\{ [(\tilde{Z}_{i-1}^{(n)} + 1)/m^{i-1}]^{1/2} - [(\tilde{Z}_{i-1} + 1)/m^{i-1}]^{1/2} \big\}^2. \tag{4.50} \]
Now, choose $n$ large enough such that $\mu_n > 1$. By the inequality $|\sqrt{x+1} - \sqrt{y+1}|^2 \le |x - y|$ for $x, y \ge 0$, arguments similar to (6.3) in Section 6, and the fact that $E(\tilde{Z}_i) \le Km^i$ for some $K > 0$,
\begin{align*}
E|C_n| &\le \sum_{i=1}^{n} m^{-(i-1)/2}\, E|\tilde{Z}_{i-1}^{(n)} - \tilde{Z}_{i-1}| \\
&\le \sum_{i=1}^{n} m^{-(i-1)/2} \Big\{ c_n \Big(\sum_{j=0}^{i-2} \mu_n^j\Big) + d_n \Big(\sum_{j=1}^{i-1} \mu_n^{j-1} E(\tilde{Z}_{i-j-1})\Big) \Big\} \\
&\le c_n (\mu_n - 1)^{-1} [(\mu_n/m) \vee 1]^{n-1} \sum_{i=1}^{n} m^{(i-1)/2} + K m^{-1} d_n\, n\, [(\mu_n/m) \vee 1]^{n-1} \sum_{i=1}^{n} m^{(i-1)/2} \\
&= o(m^{n/2}), \tag{4.51}
\end{align*}
because (4.39) holds, $c_n \to 0$ and $n d_n \to 0$ (see (6.6) in Section 6); here $c_n$ and $d_n$ are defined in (6.2) and $x \vee y$ denotes $\max(x, y)$. Therefore, by (4.49), (4.47) and (4.51), we have
\[ |(\mathrm{I})| = o_p(m^{n/2}). \tag{4.52} \]
Assertion (4.40) now follows from (4.42), (4.48) and (4.52). From (4.40), (4.41) and (4.38), we have that (4.37) holds and hence the conclusion of the theorem. •
5. Asymptotic Validity of Bootstrap.
Proof of Theorem 2.1. Given a sample realization of $\{(Z_i, Y_i),\ i = 1, 2, \ldots, n\}$, observe that the bootstrap process $\{Z^*_i\}$ defined in (2.2) is an array process as defined in (4.1), with
\[ \mu_n = \dot{m}_n, \quad \lambda_n = \hat{\lambda}_n, \quad \sigma_n^2 = \mathrm{Var}^*(\xi^*_{1,1}) \quad\text{and}\quad b_n^2 = \mathrm{Var}^*(Y^*_1), \tag{5.1} \]
where $\dot{m}_n$ is defined by (2.5) and $\hat{\lambda}_n$ is as in (1.2). For the power series distributions in (2.1), we have already seen that $m = f(\theta)$, $\lambda = g(\phi)$, $\sigma^2 = \theta f'(\theta)$ and $b^2 = \phi g'(\phi)$, where $f$ and $g$ are appropriately defined strictly increasing and smooth functions (see the paragraph below (2.1)). From this, since $\dot{m}_n \to m$ a.s. for all $m > 0$ (see Proposition 3.1, displays (3.7) and (3.8)) and $\hat{\lambda}_n \to \lambda$ a.s., we have that, for each $\omega \in \{\dot{m}_n \to m \text{ and } \hat{\lambda}_n \to \lambda\}$,
\[ \text{(C-1) in (4.2) is satisfied for } \{Z^*_i\} \text{ in (2.2), for all } m > 0. \tag{5.2} \]
The rest of the proof is divided into two cases.
Case $m = 1$: For this case, we will apply Theorem 4.1. By (3.8) of Proposition 3.1 we have that $n(\dot{m}_n - 1) \to 0$ a.s. Let $\Omega = \{\dot{m}_n \to 1,\ \hat{\lambda}_n \to \lambda, \text{ and } n(\dot{m}_n - 1) \to 0\}$. Then, clearly, $P(\Omega) = 1$. For each $\omega \in \Omega$, the first two conditions of Theorem 4.1 and condition (4.5) (with $\alpha = 0$) are easily satisfied. Moreover, it is not hard to show that $n^{-1/2} E^*|\xi^*_{1,1} - \dot{m}_n(\omega)|^3 \to 0$ as $n \to \infty$, which implies condition (4.6) of Theorem 4.1 (see Sriram (1992), display (4.7), for instance). Therefore, by Theorem 4.1, for each $\omega \in \Omega$,
\[ \sup_{-\infty < x < \infty} |P^*(V^*_n \le x) - P(v(0, \lambda, \sigma^2) \le x)| \to 0 \quad \text{as } n \to \infty, \tag{5.3} \]
where $v$ is defined in (4.8). But $v(0, \lambda, \sigma^2)$ has the same distribution as $V$ defined in (1.4). Hence, (2.6) follows from (1.4) and (5.3) for the case $m = 1$.

Case $m \neq 1$:
Here, we will apply Theorem 4.2 for the case $m < 1$ and Theorem 4.3 for the case $m > 1$. For the case $m < 1$, let $\Omega_1 = \{\dot{m}_n \to m,\ \hat{\lambda}_n \to \lambda\}$. Clearly, $P(\Omega_1) = 1$ and, as before, all the conditions of Theorem 4.2 are satisfied for each $\omega \in \Omega_1$ (note that it is easy to show that $E^*(\xi^*_{1,1})^3 \le B$, for some $B > 0$, for all $n \ge 1$). Hence, (2.6) follows easily from Theorem 4.2 for the case $m < 1$. For the case $m > 1$, let $\Omega_2 = \{\dot{m}_n \to m,\ \hat{\lambda}_n \to \lambda \text{ and } n(\dot{m}_n - m) \to 0\}$. Note that from (3.9) we have that $n(\dot{m}_n - m) \to 0$ if $m > 1$ (see Proposition 3.1). For $\omega \in \Omega_2$, argue as before and use Theorem 4.3 to conclude that (2.6) follows for the case $m > 1$. Hence Theorem 2.1. •
6. Proof of Lemma 4.2.
Consider the processes defined in (4.30) and (4.31). In view of (4.33), for the assertion (4.34) it suffices to show that
\[ E\big|\mu_n^{-n} \tilde{Z}_n^{(n)} - m^{-n} \tilde{Z}_n\big| \to 0 \quad \text{as } n \to \infty. \tag{6.1} \]
Recall that $E\tilde{\xi}_{1,1}^{(n)} = \mu_n$, $E\tilde{\xi}_{1,1} = m$, and let
\[ c_n = E|\tilde{Y}_1^{(n)} - \tilde{Y}_1| \quad\text{and}\quad d_n = E|\tilde{\xi}_{1,1}^{(n)} - \tilde{\xi}_{1,1}|. \tag{6.2} \]
Now use (4.30), (4.31), conditioning w.r.t. $\{\tilde{\mathcal{G}}_i\}$ defined in (4.32), and (6.2) to write
\begin{align*}
E|\tilde{Z}_n^{(n)} - \tilde{Z}_n| &\le c_n \Big(\sum_{j=0}^{n-1} \mu_n^j\Big) + \mu_n^n E|\tilde{Z}_0^{(n)} - \tilde{Z}_0| + d_n \sum_{j=0}^{n-1} \mu_n^j E(\tilde{Z}_{n-j-1}) \\
&= c_n \Big(\sum_{j=0}^{n-1} \mu_n^j\Big) + d_n \sum_{j=1}^{n} \mu_n^{j-1} E(\tilde{Z}_{n-j}), \tag{6.3}
\end{align*}
where we used $\tilde{Z}_0^{(n)} = \tilde{Z}_0 = 1$.
By (4.39) (recall that (4.28) implies (4.39)) and $E\tilde{Z}_n = O(m^n)$, we have that
\begin{align*}
E\big|(\tilde{Z}_n^{(n)}/\mu_n^n) - (\tilde{Z}_n/m^n)\big| &\le \mu_n^{-n} E|\tilde{Z}_n^{(n)} - \tilde{Z}_n| + |1 - (m/\mu_n)^n|\, E(\tilde{Z}_n/m^n) \\
&= \mu_n^{-n} E|\tilde{Z}_n^{(n)} - \tilde{Z}_n| + o(1). \tag{6.4}
\end{align*}
Since $\mu_n \to m > 1$, let $n$ be large enough such that $\mu_n > 1$. But, from (6.3) and since $E\tilde{Z}_{n-j} \le K_1 m^{n-j}$ for some $K_1 > 0$,
\begin{align*}
\mu_n^{-n} E|\tilde{Z}_n^{(n)} - \tilde{Z}_n| &\le c_n \sum_{j=1}^{n} \mu_n^{-j} + d_n \mu_n^{-n} \sum_{j=1}^{n} \mu_n^{j-1} E(\tilde{Z}_{n-j}) \\
&\le c_n \sum_{j=1}^{n} \mu_n^{-j} + d_n K_1 (m/\mu_n)^n m^{-1} \sum_{j=1}^{n} (\mu_n/m)^{j-1} \\
&\le (\mu_n - 1)^{-1} c_n + K_1 (m/\mu_n)^n m^{-1} [(\mu_n/m) \vee 1]^n (n d_n) \\
&\to 0, \tag{6.5}
\end{align*}
by $\mu_n \to m > 1$, $(\mu_n/m)^n \to 1$, and provided we show that
\[ c_n \to 0 \quad\text{and}\quad n d_n \to 0 \quad \text{as } n \to \infty. \tag{6.6} \]
We will show below that $n d_n \to 0$ as $n \to \infty$. Similar, but simpler, arguments show that $c_n \to 0$ as $n \to \infty$. To this end, observe that by (4.28) it suffices to show that
\[ d_n = O(|\mu_n - m|) \quad \text{as } n \to \infty. \tag{6.7} \]
Recall the definitions of $\tilde{\xi}_{1,1}^{(n)}$ and $\tilde{\xi}_{1,1}$ from (4.30) and (4.31), respectively. Let $P_{\theta_n}[\tilde{\xi}_{1,1}^{(n)} = i] = a(i)\theta_n^i/A(\theta_n) = p_i(\theta_n)$ and $P_\theta[\tilde{\xi}_{1,1} = i] = a(i)\theta^i/A(\theta) = p_i(\theta)$ for $i \ge 0$. Also, for $i \ge 0$, let
\[ q_i(\theta_n) = \sum_{j=0}^{i} p_j(\theta_n), \qquad q_i(\theta) = \sum_{j=0}^{i} p_j(\theta), \tag{6.8} \]
and $r_i(\theta_n) = 1 - q_i(\theta_n)$, $r_i(\theta) = 1 - q_i(\theta)$. Then, by the definition of $F_{\theta_n}^{-1}$ and $F_\theta^{-1}$ in (4.29), we have that
\[ d_n = \sum_{i=0}^{\infty} |q_i(\theta_n) - q_i(\theta)| = \sum_{i=0}^{\infty} |r_i(\theta_n) - r_i(\theta)|. \tag{6.9} \]
Since $\mu_n = E\tilde{\xi}_{1,1}^{(n)} = f(\theta_n)$ and $m = E\tilde{\xi}_{1,1} = f(\theta)$, where $f$ is a strictly increasing and smooth function (see Section 2), in order to show (6.7) it suffices to show that
\[ \limsup_{n \to \infty}\, |\theta_n - \theta|^{-1} \sum_{i=0}^{\infty} |r_i(\theta_n) - r_i(\theta)| < \infty. \tag{6.10} \]
Now, as $n \to \infty$, $\theta_n \to \theta$, since $\mu_n \to m$. Let $\bar\delta$ be a small positive number such that $I_{\bar\delta} = (\theta - \bar\delta, \theta + \bar\delta) \subset (0, \theta^*)$. Recall that $\theta^*$ is the radius of convergence of $\sum_{u=0}^{\infty} a(u)\theta^u$. Then there exists $N$ such that $\theta_n \in I_{\bar\delta}$ for all $n \ge N$. Let $n \ge N$. Since $r_i(\theta)$ defined in (6.8) is a smooth function of $\theta$, by the mean value theorem,
\[ |r_i(\theta_n) - r_i(\theta)| \le |\theta_n - \theta| \max_{\theta_0 \in [\theta - \bar\delta,\, \theta + \bar\delta]} |r_i'(\theta_0)|. \tag{6.11} \]
Now, since $r_i(\theta) = \sum_{j=i+1}^{\infty} p_j(\theta)$, we have that
\begin{align*}
r_i'(\theta) &= \sum_{j=i+1}^{\infty} [A(\theta)]^{-2} \{A(\theta)\, j\theta^{j-1} a(j) - A'(\theta)\, \theta^j a(j)\} \\
&= [A(\theta)]^{-1} \sum_{j=i+1}^{\infty} j\theta^{j-1} a(j) - [A(\theta)]^{-2} A'(\theta) \sum_{j=i+1}^{\infty} \theta^j a(j),
\end{align*}
which implies that, with $\bar\theta = \theta + \bar\delta$,
\[ A^* = \max_{\theta_0 \in [\theta - \bar\delta,\, \theta + \bar\delta]} |A(\theta_0)|^{-1} \quad\text{and}\quad B^* = \max_{\theta_0 \in [\theta - \bar\delta,\, \theta + \bar\delta]} |A'(\theta_0)/A^2(\theta_0)|, \]
\[ \max_{\theta_0 \in [\theta - \bar\delta,\, \theta + \bar\delta]} |r_i'(\theta_0)| \le A^* \sum_{j=i+1}^{\infty} j\bar\theta^{j-1} a(j) + B^* \sum_{j=i+1}^{\infty} \bar\theta^j a(j). \tag{6.12} \]
Therefore, from (6.12) and (6.11),
\begin{align*}
|\theta_n - \theta|^{-1} \sum_{i=0}^{\infty} |r_i(\theta_n) - r_i(\theta)| &\le A^* \sum_{i=0}^{\infty} \sum_{j=i+1}^{\infty} j\bar\theta^{j-1} a(j) + B^* \sum_{i=0}^{\infty} \sum_{j=i+1}^{\infty} \bar\theta^j a(j) \\
&= A^* \sum_{j=1}^{\infty} j^2 a(j)\bar\theta^{j-1} + B^* A(\bar\theta)\, E_{\bar\theta}(\tilde{\xi}_{1,1}) < \infty.
\end{align*}
The assertion (6.10) now follows from the above arguments. Hence, $n d_n \to 0$ as $n \to \infty$. Thus, (4.34) follows from (6.1), (6.4), (6.5) and (6.6).
As for assertion (4.35), write
\[ \mu_n^{-n} \sum_{i=1}^{n} \tilde{Z}_{i-1}^{(n)} = \mu_n^{-n} \Big[\sum_{i=0}^{n-1} (\tilde{Z}_i^{(n)} - \tilde{Z}_i)\Big] + (m/\mu_n)^n\, m^{-n} \sum_{i=0}^{n-1} \tilde{Z}_i. \tag{6.13} \]
Then, by (4.34) and since $(m/\mu_n)^n \to 1$, it suffices to show that
\[ \mu_n^{-n} \sum_{i=0}^{n-1} E|\tilde{Z}_i^{(n)} - \tilde{Z}_i| \to 0. \tag{6.14} \]
For each $1 \le i \le n$, argue as in (6.3) to get
\begin{align*}
\mu_n^{-n} \sum_{i=1}^{n} E|\tilde{Z}_i^{(n)} - \tilde{Z}_i| &\le c_n \mu_n^{-n} \sum_{i=1}^{n} \Big(\sum_{j=0}^{i-1} \mu_n^j\Big) + d_n \mu_n^{-n} \sum_{i=1}^{n} \sum_{j=1}^{i} \mu_n^{j-1} E(\tilde{Z}_{i-j}) \\
&\to 0, \tag{6.15}
\end{align*}
by arguments similar to (6.5) and (6.6). Hence the assertion (4.35) and the lemma. •
Acknowledgements
The second author would like to thank the Department of Statistics, University of North
Carolina at Chapel Hill for providing financial support during his visit there in Fall 1992.
REFERENCES
Athreya, K.B. and Fuh, C.D. (1992). Bootstrapping Markov chains. In Exploring the Limits of Bootstrap, R. LePage and L. Billard, Eds., 49-64, Wiley.
Babu, G.J. (1989). Applications of Edgeworth expansions to bootstrap - a review. In Statistical Data Analysis and Inference, Y. Dodge, Ed., 223-237, North Holland.
Basawa, I.V., Mallik, A.K., McCormick, W.P. and Taylor, R.L. (1989). Bootstrapping explosive autoregressive processes. Ann. Statist. 17, 1479-1486.
Basawa, I.V., Mallik, A.K., McCormick, W.P., Reeves, J.H. and Taylor, R.L. (1991). Bootstrapping unstable first order autoregressive processes. Ann. Statist. 19, 1098-1101.
Bhat, B.R. and Adke, S.R. (1981). Maximum likelihood estimation for branching processes with immigration. Adv. Appl. Prob. 13, 498-509.
Bose, A. (1988). Edgeworth correction by bootstrap in autoregression. Ann. Statist.
16, 1709-1722.
Chow, Y.S. and Teicher, H. (1978). Probability Theory: Independence, Interchangeability, Martingales. Springer, New York.
Datta, S. (1992). On asymptotic properties of bootstrap for unstable AR(1) processes. Tech. Report 92-29, Dept. of Statist., Univ. of Georgia, Athens.
Datta, S. and McCormick, W.P. (1992). Bootstrap for a finite state Markov chain based on i.i.d. resampling. In Exploring the Limits of Bootstrap, R. LePage and L. Billard, Eds., 77-97, Wiley.
Datta, S. and McCormick, W.P. (1993). Regeneration based bootstrap for Markov
chains. Canadian J. Statist. (to appear).
Efron, B. (1979). Bootstrap methods: another look at the jackknife. Ann. Statist. 7,
1-26.
Hall, P. (1992). The Bootstrap and Edgeworth Expansion. Springer, New York.
Hall, P. and Heyde, C.C. (1980). Martingale Limit Theory and Its Application. Academic, New York.
Heyde, C.C. (1970). Extension of a result of Seneta for the super-critical Galton-Watson
process. Ann. Math. Statist. 41, 739-742.
Heyde, C.C. and Seneta, E. (1972). Estimation theory for growth and immigration rates
in a multiplicative process. J. Appl. Prob. 9, 235-256.
Heyde, C.C. and Seneta, E. (1974). Notes on "Estimation theory for growth and immigration rates in a multiplicative process." J. Appl. Prob. 11, 572-577.
Künsch, H.R. (1989). The jackknife and the bootstrap for general stationary observations. Ann. Statist. 17, 1217-1241.
Lahiri, S.N. (1991). Second order optimality of stationary bootstrap. Statist. Probab. Lett. 11, 335-341.
Lai, T.L. and Wei, C.Z. (1983). Lacunary and generalized linear processes. Stochastic Process. Appl. 14, 187-199.
LeCam, L. (1953). On some asymptotic properties of maximum likelihood estimates
and related Bayes estimates. Univ. of California Publications in Statist. 1, 277-330.
Liu, R.Y. and Singh, K. (1992). Moving blocks jackknife and bootstrap capture weak
dependence. In Exploring the Limits of Bootstrap, R. LePage and L. Billard, Eds, 225-248,
Wiley.
Politis, D.N. and Romano, J.P. (1992). A general resampling scheme for triangular arrays of α-mixing random variables with application to the problem of spectral density estimation. Ann. Statist. 20, 1985-2007.
Seneta, E. (1970). A note on the supercritical Galton-Watson process with immigration,
Math. Biosc. 6, 305-311.
Sriram, T.N. (1992). Invalidity of bootstrap for critical branching processes with immigration. Under revision.
Sriram, T.N., Basawa, I.V. and Huggins, R.M. (1991). Sequential estimation for branching processes with immigration. Ann. Statist. 19, 2232-2243.
Wei, C.Z. (1985). Asymptotic properties of least-squares estimates in stochastic regression models. Ann. Statist., 13, 1498-1508.
Wei, C.Z. and Winnicki, J. (1989). Some asymptotic results for the branching process with immigration. Stochastic Process. Appl. 31, 261-282.
Wei, C.Z. and Winnicki, J. (1990). Estimation of the means in the branching process
with immigration. Ann. Statist. 18, 1757-1773.