Sen, Pranab Kumar (1976). "Invariance Principles for the Coupon Collector's Problem: A Martingale Approach."

INVARIANCE PRINCIPLES FOR THE COUPON COLLECTOR'S PROBLEM:
A MARTINGALE APPROACH*

by

Pranab Kumar Sen
Department of Biostatistics
University of North Carolina at Chapel Hill

Institute of Statistics Mimeo Series No. 1097
December 1976

ABSTRACT

For the coupon collector's problem, invariance principles for the partial sequence of bonus sums after $n$ coupons, as well as for the waiting times to obtain the bonus sum $t$, are studied through the construction of a triangular array of martingales related to these sequences and the verification of the invariance principles for these martingales. This approach provides simpler and neater proofs than those given in Rosen (1969, 1970) and strengthens his convergence of finite-dimensional distributions to weak invariance principles.

AMS 1970 Classification Nos: 60F05, 60G45, 62D05.

Key Words and Phrases: Bonus sums, coupon collector's situation, finite-dimensional distributions, Gaussian functions, invariance principles, martingales, tightness, waiting times.

*Work partially supported by the Air Force Office of Scientific Research, A.F.S.C., U.S.A.F., Contract No. AFOSR-74-2736.
1. INTRODUCTION

Consider a sequence $\{\pi_N,\ N \ge 1\}$ of coupon collector's situations

(1.1)  $\pi_N = \{(a_N(s), p_N(s));\ 1 \le s \le N\}$,  $N \ge 1$,

where the $a_N(s)$ and $p_N(s)\ (>0)$ are real numbers and $\sum_{s=1}^N p_N(s) = 1$. Consider also a (double) sequence $\{I_{Nk},\ k \ge 1\}$ of (row-wise) independent and identically distributed random variables (i.i.d.r.v.), where, for each $N$,

(1.2)  $P\{I_{Nk} = s\} = p_N(s)$,  $1 \le s \le N$,  $k \ge 1$.

Let then

(1.3)  $Y_{N0} = 0$;  $Y_{Nk} = a_N(I_{Nk})$ if $I_{Nk} \notin \{I_{N1}, \dots, I_{N,k-1}\}$, and $Y_{Nk} = 0$ otherwise, $k \ge 1$;

(1.4)  $Z_{Nn} = \sum_{k=0}^{n} Y_{Nk}$,  $n \ge 0$.

Then, $Z_{Nn}$ is termed the bonus sum after $n\ (\ge 0)$ coupons in the collector's situation $\pi_N$. If the $a_N(s)$ are all non-negative, $Z_{Nn}$ is non-decreasing in $n$, and, for every $t \ge 0$, let

(1.5)  $U_N(t) = \min\{n:\ Z_{Nn} \ge t\}$,  $t \ge 0$.

Then, $U_N(t)$ is termed the waiting time to obtain the bonus sum $t$ in the coupon collector's situation $\pi_N$.
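The quantities in (1.1)-(1.5) are straightforward to simulate, which is convenient for checking the later asymptotic statements numerically. The sketch below is our own illustration (the function names and the use of Python are not from the paper): coupons are drawn i.i.d. with probabilities $p_N(s)$, and coupon $s$ contributes its bonus $a_N(s)$ only at its first appearance.

```python
import random

def bonus_sums(a, p, n, rng):
    """Z_{N0},...,Z_{Nn} of (1.3)-(1.4): i.i.d. draws I_{Nk} with
    P{I_{Nk} = s} = p[s]; coupon s contributes a[s] only at its first draw."""
    seen, total, Z = set(), 0.0, [0.0]
    for _ in range(n):
        s = rng.choices(range(len(p)), weights=p)[0]
        if s not in seen:
            seen.add(s)
            total += a[s]
        Z.append(total)
    return Z

def waiting_time(Z, t):
    """U_N(t) of (1.5) along one realized path: the smallest n with
    Z_{Nn} >= t (requires all a[s] >= 0, so that Z is non-decreasing)."""
    for n, z in enumerate(Z):
        if z >= t:
            return n
    return None  # bonus sum t not reached on this path
```

With $a_N(s) \equiv 1$ and $p_N(s) = 1/N$, $Z_{Nn}$ is simply the number of distinct coupons among the first $n$ draws, and $U_N(t)$ is the classical waiting time until $t$ distinct coupons are collected.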
For an arbitrary positive integer $b$ and positive integers $\{n_{1N} < \cdots < n_{bN}\}$ satisfying $0 < \liminf_{N\to\infty} N^{-1} n_{1N} \le \limsup_{N\to\infty} N^{-1} n_{bN} < \infty$, under certain regularity conditions, Rosen (1969) has established (by very elaborate analysis) the asymptotic (multi)normality of (the standardized form of) $\{Z_{Nn_{1N}}, \dots, Z_{Nn_{bN}}\}$; in a
follow-up ([6]), he has studied parallel results for $\{U_N(t)\}$. The object of the present investigation is to propose and formulate an alternative approach to this problem, based on the weak convergence of a suitably constructed martingale sequence associated with the $Z_{Nn}$. This provides a simpler, shorter and neater proof of the aforesaid normality, and also an invariance principle for the partial sequence $\{Z_{Nk},\ k \le n\}$ as well as for the corresponding sequence of waiting times. The basic regularity conditions are outlined in Section 2. Asymptotic normality of $Z_{Nn}$ is established in Section 3 with the aid of some recently developed martingale central limit theorems, and the multi-normality extension is presented in Section 4. Section 5 is devoted to the (weak) invariance principle for $\{Z_{Nk},\ k \le n\}$, and allied results for the sequence $\{U_N(t),\ t \ge 0\}$ are studied in Section 6. A few remarks are made in the concluding section.
2. PRELIMINARY NOTIONS

Note that, by (1.2)-(1.4), for every $N(\ge 1)$ and $k \ge 1$,

(2.1)  $E\,Y_{N0} = 0$,  $E\,Y_{Nk} = \sum_{s=1}^N a_N(s)\,p_N(s)\,\{1 - p_N(s)\}^{k-1}$,

so that

(2.2)  $\phi^*_{Nn} = E\,Z_{Nn} = \sum_{s=1}^N a_N(s)\,[1 - \{1 - p_N(s)\}^{n}]$,  $n \ge 0$.

Let us denote by

(2.3)  $\phi_{Nn} = \sum_{s=1}^N a_N(s)\,(1 - e^{-n p_N(s)})$,  $n \ge 0$;

(2.4)  $d^2_{Nn} = \sum_{s=1}^N a^2_N(s)\,e^{-n p_N(s)}(1 - e^{-n p_N(s)}) - n\Big[\sum_{s=1}^N a_N(s)\,p_N(s)\,e^{-n p_N(s)}\Big]^2$,  $n \ge 0$;

(2.5)  $A_{Nr} = N^{-1} \sum_{s=1}^N |a_N(s)|^r$,  for $r = 1, 2, 3, 4$.
We assume that

(2.6)  $N \max_{1\le s\le N} p_N(s) \le M_1 < \infty$,  for every $N \ge 1$;

(2.7)  $\max_{1\le s\le N} a^2_N(s) = o(N A_{N2})$,  as $N \to \infty$;

(2.8)  $\liminf_{N\to\infty}\ A^{-1}_{N2} \sum_{s=1}^N a^2_N(s)\,p_N(s) > 0$.
Note that $x e^{-x} \le e^{-1}$, ∀ $x > 0$, and, for $0 \le x \le 1$, $0 \le e^{-nx} - (1-x)^n \le n x^2 e^{-nx}$. Hence, from (2.2) and (2.3), we have

(2.9)  $|\phi^*_{Nn} - \phi_{Nn}| = \Big|\sum_{s=1}^N a_N(s)\,[e^{-n p_N(s)} - \{1 - p_N(s)\}^n]\Big| \le \sum_{s=1}^N |a_N(s)|\,p_N(s)\,\{n p_N(s)\,e^{-n p_N(s)}\} \le e^{-1} M_1 A_{N1}$,  ∀ $n \ge 0$, $N \ge 1$.

In fact, if the $a_N(s)$ are all non-negative, then $\phi^*_{Nn} \ge \phi_{Nn}$. Also, noting that $e^{-x}(1 - e^{-x}) \le x$, ∀ $x \ge 0$, we obtain from (2.4) that

(2.10)  $d^2_{Nn} \le \sum_{s=1}^N a^2_N(s)\,e^{-n p_N(s)}(1 - e^{-n p_N(s)}) \le n \sum_{s=1}^N a^2_N(s)\,p_N(s) \le n M_1 A_{N2} = O(n A_{N2})$.

Further, using the fact that, for $0 < x \le 1$, $1 - e^{-nx} = (1 - e^{-x}) \sum_{k=0}^{n-1} e^{-kx}$ and $1 - e^{-x} \ge x(1 - \tfrac12 x)$, we obtain that, for every $n \ge 1$ and $N \ge M_1$,

(2.11)  $\sum_{s=1}^N a^2_N(s)\,e^{-n p_N(s)}(1 - e^{-n p_N(s)}) \ge n\,\big(1 - \tfrac12 M_1 N^{-1}\big) \sum_{s=1}^N a^2_N(s)\,p_N(s)\,e^{-2n p_N(s)}$.
Now, by (2.6)-(2.8), $A^{-1}_{N2} \sum_{\{s:\ p_N(s) > \epsilon/N\}} a^2_N(s)\,p_N(s) \ge A^{-1}_{N2} \sum_{s=1}^N a^2_N(s)\,p_N(s) - \epsilon$, ∀ $\epsilon > 0$, and, noting that, for $p_N(s) > \epsilon/N$ and $n/N \ge \eta > 0$, $1 - e^{-(n+1)p_N(s)} \ge c(\epsilon, \eta) > 0$, we obtain from (2.10)-(2.11) that, if

(2.12)  $0 < \liminf_{N\to\infty} N^{-1} n \le \limsup_{N\to\infty} N^{-1} n < \infty$,

then

(2.13)  $0 < \liminf_{N\to\infty}\ (N A_{N2})^{-1} d^2_{Nn} \le \limsup_{N\to\infty}\ (N A_{N2})^{-1} d^2_{Nn} < \infty$,

when (2.6)-(2.8) and (2.12) hold.
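The two elementary bounds behind (2.9), $x e^{-x} \le e^{-1}$ for $x > 0$ and $0 \le e^{-nx} - (1-x)^n \le n x^2 e^{-nx}$ for $0 \le x \le 1$, make the gap between $\phi^*_{Nn}$ and $\phi_{Nn}$ small uniformly in $n$. A quick numerical sketch (our own check, with hypothetical choices of the $a_N(s)$ and $p_N(s)$, not an example from the paper) confirms the uniform bound $e^{-1} M_1 A_{N1}$:

```python
import math

def phi_star(a, p, n):
    """phi*_{Nn} = E Z_{Nn} = sum_s a(s)[1 - (1-p(s))^n], cf. (2.2)."""
    return sum(ai * (1.0 - (1.0 - pi) ** n) for ai, pi in zip(a, p))

def phi(a, p, n):
    """phi_{Nn} = sum_s a(s)[1 - exp(-n p(s))], the approximation (2.3)."""
    return sum(ai * (1.0 - math.exp(-n * pi)) for ai, pi in zip(a, p))

def gap_bound(a, p):
    """e^{-1} M_1 A_{N1}, with M_1 = N max_s p(s) and A_{N1} = N^{-1} sum_s |a(s)|,
    the uniform-in-n bound of (2.9)."""
    N = len(p)
    return math.exp(-1.0) * (N * max(p)) * (sum(abs(ai) for ai in a) / N)
```

For non-negative bonuses the inequality $\phi^*_{Nn} \ge \phi_{Nn}$ noted after (2.9) also holds term by term, since $(1-p)^n \le e^{-np}$.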
We are primarily concerned here with the limiting behavior of the partial sequence $d^{-1}_{Nn}(Z_{Nk} - \phi^*_{Nk};\ k \le n)$. Since the $d^{-1}_{Nn} a_N(s)$, $1 \le s \le N$, remain invariant under any scalar multiplication of the $a_N(s)$, we may, without any loss of generality, set

(2.14)  $A_{N2} = 1$,  for every $N$.

Then, by (2.9), (2.13) and (2.14), we have $d^{-1}_{Nn}\{\max_{1\le k\le n} |\phi^*_{Nk} - \phi_{Nk}|\} \to 0$, so that we may equivalently consider the partial sequence $d^{-1}_{Nn}(Z_{Nk} - \phi_{Nk};\ k \le n)$. In the remainder of this section, we consider a basic lemma, to be used repeatedly in subsequent sections.
Let $Q_{Nk} = p_N(I_{Nk})$, $k \ge 1$, and let $g_{Nu}(Y_{Nk}, Q_{Nk})$, $u = 1, \dots, p\ (\ge 2)$, satisfy the bounds (2.15)-(2.16), controlling the $g_{Nu}(a_N(s), p_N(s))$ in terms of quantities $M_{N,3}$ and $M_{N,4}$ with $\sup_N M_{N,3} < \infty$. Note that (2.15) and (2.16) insure a parallel bound (2.17), with $M_{N,5} \le M_{N,3} M_{N,4}$, for the products $g_{N1} g_{N2}$.

Lemma 2.1. Under (2.6), (2.15) and (2.16), for every $p(\ge 2)$ and $0 = v_0 < v_1 < \cdots < v_p \le n$,

(2.18)  $E \prod_{u=1}^{p} g_{Nu}(Y_{Nv_u}, Q_{Nv_u}) = \prod_{u=1}^{p} E\, g_{Nu}(Y_{Nv_u}, Q_{Nv_u}) + O\big(N^{-1} M_{N,4}\,[M_{N,3} \vee N^{-1} M_{N,4}]\big)$,

while (2.19) and (2.20) provide parallel bounds, in terms of $M_{N,4}$ and $M_{N,5}$, for the covariances of the $g_{Nu}(Y_{Nv}, Q_{Nv})$ and for $V[g_{N1}(Y_{Nv_1}, Q_{Nv_1})] = O\big([N^{-1} M_{N,5}] \wedge [(N^{-1} M_{N,4})^2]\big)$.
Proof. We shall only prove (2.18); the others follow by similar arguments. Note that, by (2.6) and (2.16), each $E\,g_{Nu}(Y_{Nv_u}, Q_{Nv_u})$ admits an explicit expansion (2.21) over the coupon values, and, similarly, for each $u(=1,\dots,p)$, the left-hand side of (2.18) admits an expansion (2.22) over the values of $(I_{Nv_1}, \dots, I_{Nv_p})$, where, by (2.6) and (2.16), the first term on the right-hand side of (2.22) is $O(N^{-1} M_{N,4})$. The product of the $p$ factors of the first term in (2.22) involves $N^p$ terms, whereas (2.21) involves $N^{[p]} = N(N-1)\cdots(N-p+1)$ terms; by (2.15) and (2.16), the contribution of the remaining $N^p - N^{[p]}$ terms is covered by the remainder in (2.18). Hence, the proof of (2.18) follows from (2.21)-(2.22). For $p = 2$, $N^p - N^{[p]} = N$, and $\sum_{s=1}^N g_{N1}(a_N(s), p_N(s))\,g_{N2}(a_N(s), p_N(s))\,p^2_N(s)\,e^{-(v_1+v_2) p_N(s)} = O\big(N^{-2} \sum_{s=1}^N |g_{N1}(a_N(s), p_N(s))\,g_{N2}(a_N(s), p_N(s))|\big) = O(N^{-1} M_{N,5})$, so that (2.19) follows on parallel lines. Q.E.D.
3. ASYMPTOTIC NORMALITY OF $Z_{Nn}$

The main theorem of this section is the following.

Theorem 3.1. Under (2.6)-(2.8) and (2.12),

(3.1)  $\mathcal{L}\big(d^{-1}_{Nn}(Z_{Nn} - \phi_{Nn})\big) \to N(0, 1)$,  as $N \to \infty$.
Proof. Unlike the approaches of Baum and Billingsley (1965) and Rosen (1969, 1970), our proof rests on a construction of a (triangular array of) martingales related to $\{Z_{Nn}\}$. Let $B_{Nk}$ be the sigma-field generated by $\{Y_{Nj},\ j \le k\}$, $k \ge 1$, and let $B_{N0}$ be the trivial sigma-field. Then, for every $N$, $B_{Nk}$ is non-decreasing in $k$. For every $N$ and $n(\ge 1)$, with $Q_{Nk} = p_N(I_{Nk})$, $k \ge 1$, we define

(3.2)  $X^{(n)}_{Nk} = Y_{Nk}\,(1 + Q_{Nk})^{k-1}\,e^{-n Q_{Nk}}$, $k \ge 1$;  $X^{(n)}_{N0} = 0$;

(3.3)  $\xi^{(n)}_{Nk} = \sum_{s=1}^N a_N(s)\,p_N(s)\,e^{-n p_N(s)}\,[1 + p_N(s)]^{k-1}$, $k \ge 1$;  $\xi^{(n)}_{N0} = 0$;

and consider the sequence

(3.4)  $\hat X^{(n)}_{Nk} = X^{(n)}_{Nk} - E\big(X^{(n)}_{Nk} \mid B_{Nk-1}\big) = \big(X^{(n)}_{Nk} - \xi^{(n)}_{Nk}\big) + \sum_{v=1}^{k-1} X^{(n)}_{Nv}\,Q_{Nv}\,(1 + Q_{Nv})^{k-v}$, $k \ge 1$;  $\hat X^{(n)}_{N0} = 0$.

Then, on denoting by

(3.5)  $\hat S^{(n)}_{Nk} = \sum_{j=0}^{k} \hat X^{(n)}_{Nj}$  and  $\bar\xi^{(n)}_{Nk} = \sum_{j=0}^{k} \xi^{(n)}_{Nj}$,  $k \ge 0$,

we obtain from (3.2)-(3.5) the identities (3.6)-(3.8) connecting $\hat S^{(n)}_{Nk}$ with $Z_{Nk}$, so that, for every $N$ and $n(\ge 1)$, $\{\hat S^{(n)}_{Nk},\ B_{Nk};\ k \ge 0\}$ is a martingale.
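The centering in (3.4) can be checked by exact enumeration in a tiny coupon collector situation: conditionally on the first $k-1$ draws, $E(X^{(n)}_{Nk} \mid B_{Nk-1})$ must equal $\xi^{(n)}_{Nk}$ minus the correction sum $\sum_{v<k} X^{(n)}_{Nv} Q_{Nv}(1+Q_{Nv})^{k-v}$, so that $\hat X^{(n)}_{Nk}$ has conditional mean zero. The sketch below is our own verification of this identity (not a computation from the paper):

```python
import math
from itertools import product

def xhat_cond_mean(a, p, n, k, history):
    """E( Xhat^{(n)}_{Nk} | I_{N1},...,I_{N,k-1} = history ), with
    X^{(n)}_{Nk} = Y_{Nk}(1+Q_{Nk})^{k-1} e^{-n Q_{Nk}} as in (3.2) and
    Xhat^{(n)}_{Nk} as in (3.4); it should vanish for every history."""
    N = len(p)
    w = lambda s: a[s] * p[s] * (1 + p[s]) ** (k - 1) * math.exp(-n * p[s])
    cond = sum(w(s) for s in range(N) if s not in set(history))  # E(X_k | past)
    xi = sum(w(s) for s in range(N))                             # xi^{(n)}_{Nk}
    corr, seen = 0.0, set()
    for v, s in enumerate(history, start=1):
        if s not in seen:  # Y_{Nv} = a[s] only at the first appearance of s
            corr += (a[s] * (1 + p[s]) ** (v - 1) * math.exp(-n * p[s])
                     * p[s] * (1 + p[s]) ** (k - v))
            seen.add(s)
    return cond - (xi - corr)
```

Enumerating every possible history of length $k-1$ and checking that the difference is numerically zero amounts to verifying the martingale property of (3.4)-(3.5) exactly, rather than by simulation.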
From
bounded, with probability one.
\s(n)_z +¢ I is
Nn
Nn Nn
Thus, by (2.13), (2,14) and the above, we
conclude that for every e: > 0,
there exists an
so that under (2.6) and (2.12), we have from (3.7) that
NO (e:) ,
such that under
(2.6)-(2.8) and (2.12),
(3.10)
P{d,""l Is(n)_z +¢ I >e:} = 0, V N~NO(e:) .
Nn
Nn
Nn Nn
-9-
Consequently, i t suffices to show that under (2,6) - (2.8) and (2,12),
L(.d-Nn1'S(n))
Nn
(3.11)
+ N(O 1)
.,.
Now, for the martingale-difference array $\{d^{-1}_{Nn}\hat X^{(n)}_{Nk};\ k \le n\}$, by (3.4),

(3.12)  $|\hat X^{(n)}_{Nk}| \le |Y_{Nk}| + \xi^{(n)}_{Nk} + \sum_{v=1}^{k-1} |X^{(n)}_{Nv}|\,Q_{Nv}\,(1 + Q_{Nv})^{k-v}$,  $1 \le k \le n$,

where $|Y_{Nk}| \le \max_{1\le s\le N} |a_N(s)| = o(N^{1/2}) = o(d_{Nn})$, by (2.7), (2.13) and (2.14). Also, by (2.6) and (2.14), $\xi^{(n)}_{Nk} \le \sum_{s=1}^N |a_N(s)|\,p_N(s) \le M_1 A^{1/2}_{N2} = M_1$ and, likewise, $\sum_{v=1}^{k-1} |X^{(n)}_{Nv}|\,Q_{Nv}(1 + Q_{Nv})^{k-v} \le \sum_{s=1}^N |a_N(s)|\,p_N(s) \le M_1$, ∀ $k \ge 1$. Hence, for every $\epsilon > 0$, there exists an $N_0(\epsilon)$ such that, for $N \ge N_0(\epsilon)$,

(3.13)  $P\big\{\max_{1\le k\le n} d^{-1}_{Nn}\,|\hat X^{(n)}_{Nk}| > \epsilon\big\} = 0$;

the above also insures that

(3.14)  $\max_{1\le k\le n} d^{-1}_{Nn}\,|\hat X^{(n)}_{Nk}|$  is uniformly bounded in the $L_2$ norm.

Further, (3.4)-(3.8) and some routine steps lead us to

(3.15)  $d^{-2}_{Nn} \sum_{k=1}^{n} E\big(\hat X^{(n)2}_{Nk} \mid B_{Nk-1}\big) \overset{P}{\to} 1$.

Thus, by virtue of (3.13)-(3.15) and (3.8), we are in a position to use the recently developed central limit theorems of Dvoretzky (1972), Scott (1973) and McLeish (1974); to prove (3.11), it then suffices to show that

(3.16)  $d^{-2}_{Nn}\Big\{\sum_{k=1}^{n} \big[\ell^{(n)}_{Nk} - E\,\ell^{(n)}_{Nk}\big]\Big\} \overset{P}{\to} 0$,

where

(3.17)  $\ell^{(n)}_{Nk} = E\big(\hat X^{(n)2}_{Nk} \mid B_{Nk-1}\big)$,  $k \ge 1$.
By steps similar to those after (3.12), it follows that the $\ell^{(n)}_{Nk}$ are all bounded with probability 1, while, by (2.13)-(2.14), the individual terms $d^{-2}_{Nn}[\ell^{(n)}_{Nk} - E\,\ell^{(n)}_{Nk}]$ are $O(n^{-1})$. Hence, to prove (3.16), it suffices to show that

(3.18)  $\max_{1\le k<q\le n} \big|\mathrm{Cov}\big(\ell^{(n)}_{Nk},\ \ell^{(n)}_{Nq}\big)\big| \to 0$,  as $N \to \infty$.

Defining the $g_{Nu}(Y_{Nv}, Q_{Nv})$ as $Y^2_{Nv}\,Q_{Nv}\,e^{-2n Q_{Nv}}(1 + Q_{Nv})^{2(k-1)}$ (or as the analogous cross-product terms), it readily follows that (2.15)-(2.16) hold with $M_{N,3} = O(1)$ and $M_{N,4} = O(n)$, and hence (3.18) can be proved directly by repeated use of (2.18)-(2.20) for the individual terms in the expansion of $\ell^{(n)}_{Nk}\,\ell^{(n)}_{Nq}$ by (3.17). Q.E.D.
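In the special case $a_N(s) \equiv 1$, $p_N(s) = 1/N$, the bonus sum $Z_{Nn}$ is the number of distinct coupons among the first $n$ draws, whose exact mean and variance follow from inclusion-exclusion over coupons and pairs of coupons. A seeded Monte Carlo sketch (our own illustration of the setting of Theorem 3.1, not a computation from the paper) can then be compared against these exact moments:

```python
import random

def distinct_moments(N, n):
    """Exact mean/variance of D_n = #distinct coupons in n uniform draws from N:
    E D = N(1-q^n), E D^2 = N(1-q^n) + N(N-1)(1 - 2 q^n + r^n),
    with q = 1-1/N (one coupon missed) and r = 1-2/N (two coupons missed)."""
    q = (1.0 - 1.0 / N) ** n
    r = (1.0 - 2.0 / N) ** n
    mean = N * (1.0 - q)
    ed2 = N * (1.0 - q) + N * (N - 1) * (1.0 - 2.0 * q + r)
    return mean, ed2 - mean * mean

def sample_distinct(N, n, rng):
    """One realization of D_n."""
    return len({rng.randrange(N) for _ in range(n)})
```

With these exact moments, the standardized quantity $(D_n - E D_n)/(\mathrm{Var}\,D_n)^{1/2}$ is exactly the object whose limiting $N(0,1)$ law Theorem 3.1 asserts (here $\phi_{Nn}$ and $d^2_{Nn}$ are the corresponding exponential approximations).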
We may remark that, intuitively, one may attempt to work with the alternative construction $\tilde Y_{Nk} = Y_{Nk} - E(Y_{Nk} \mid B_{Nk-1})$, $k \ge 1$; $\tilde Y_{N0} = 0$. Then, one would have

(3.19)  $Z_{Nn} - E\,Z_{Nn} = \sum_{k=1}^{n} \tilde Y_{Nk} + \sum_{k=1}^{n} \big[E(Y_{Nk} \mid B_{Nk-1}) - E\,Y_{Nk}\big]$.

Whereas the asymptotic normality of $\tilde S_{Nn} = \sum_{k=1}^n \tilde Y_{Nk}$ may be proved along the same lines as in Theorem 3.1, the second term on the right-hand side of (3.19) is not generally $o_p(n^{1/2})$, so that this particular construction may not be very helpful for the desired normality of $d^{-1}_{Nn}(Z_{Nn} - \phi_{Nn})$.
4. CONVERGENCE OF FINITE-DIMENSIONAL DISTRIBUTIONS

For an arbitrary positive integer $b$, consider a sequence of positive integers $\{n_{1N} < \cdots < n_{bN}\}$ where

(4.1)  $0 < \liminf_{N\to\infty} N^{-1} n_{1N} \le \limsup_{N\to\infty} N^{-1} n_{bN} < \infty$.

We are basically interested in the asymptotic multinormality of

(4.2)  $N^{-1/2}\big(Z_{N n_{jN}} - \phi_{N n_{jN}};\ 1 \le j \le b\big)$

when (2.6)-(2.8) and (4.1) hold, and we impose (2.14) without any loss of generality. Let us define $\Sigma_N = ((\sigma_{kq,N}))_{k,q=1,\dots,b}$ by

(4.3)  $\sigma_{kq,N} = N^{-1} d^2_{N n_{kN}}$, for $k = q\ (= 1, \dots, b)$, and, for $1 \le k < q \le b$,
  $\sigma_{kq,N} = \sigma_{qk,N} = N^{-1} \sum_{s=1}^N a^2_N(s)\,e^{-n_{qN} p_N(s)}\big(1 - e^{-n_{kN} p_N(s)}\big) - N^{-1} n_{kN}\Big(\sum_{s=1}^N a_N(s)\,p_N(s)\,e^{-n_{kN} p_N(s)}\Big)\Big(\sum_{s=1}^N a_N(s)\,p_N(s)\,e^{-n_{qN} p_N(s)}\Big)$.

Recalling that a vector $T_N$ is asymptotically $N_b(\mu_N, \Sigma_N)$ if, for every $\lambda \ne 0$, $\lambda' T_N$ is asymptotically $N_1(\lambda'\mu_N,\ \lambda'\Sigma_N\lambda)$, we have the following.

Theorem 4.1. Under (2.6)-(2.8) and (4.1),

(4.4)  $N^{-1/2}\big[Z_{N n_{jN}} - \phi_{N n_{jN}},\ 1 \le j \le b\big]$  is asymptotically  $N_b(0, \Sigma_N)$.
-12-
Defl"ne
Proof.
•
X(k) ,
Nk
"(n)
(4.5)
X
=
Nk
{ 0,
s~~) = s~~)
Note that
ceeding as in
1
"'x(n) "'s(n) k>O
Nk' Nk~-~
as in Section 3.
Let then
O:::;k:::;n,
S(n)
Nk
X
= \k
(n)
l.v=O N
'
k >0
-
•
k>n;
for
k:::; n
(3.9)-(3.10)~
s~~) = s~~)
and
for
k > n.
Thus, pro-
under (4.1), it suffices to show that
(n" N)
N-~(§N J , 1:::; j :::; b) is '" Nb(£,kN) " For this purpose, consider an
n
jN
arbitrary linear compound (where .6 ~ 0)
-~" (n jN ) _ -~ n bN
b
Lj =l Aj N
(4.6)
* _ b
where
X
Nk
SNn
jN
,,(n jN )
-Ij=lA/Nk
E(X~kIBNk_1) = 0, 'I k ~ 1,
b
,,(n jN ) _ -~ n bW'*
- N Lk=l Lj=l Aj XNk
) - N Lk=l XNk'
k ~ 1.
Note that by (3.2), (3.4) and (4.5),
and, we may virtually repeat the proof of Theorem
3.1 [viz., (3.12)-(3.18)] with
are omitted.
say,
"'(n)
X
Nk
being replaced by
"*
XNk;
the details
Q.E.D"
The prescribed martingale approach provides a solution, considerably
shorter and neater than the one given in Rosen (1969).
5. AN INVARIANCE PRINCIPLE FOR THE BONUS SUM PROCESS

For an arbitrary $T\ (0 < T < \infty)$, let $J = [0, T]$, and, for $0 \le x \le y \le T$, define

(5.1)  $\gamma_N(x, y) = N^{-1} \sum_{s=1}^N a^2_N(s)\,e^{-N y\,p_N(s)}\big(1 - e^{-N x\,p_N(s)}\big) - x\Big(\sum_{s=1}^N a_N(s)\,p_N(s)\,e^{-N x\,p_N(s)}\Big)\Big(\sum_{s=1}^N a_N(s)\,p_N(s)\,e^{-N y\,p_N(s)}\Big)$,

and let $\gamma_N(x, y) = \gamma_N(y, x)$ for $0 \le y \le x \le T$. Proceeding as in (2.10)-(2.11), it can be shown that $\gamma_N(x, y)$ is (uniformly in $N$) a continuous function of $(x, y) \in J^2$. Let us then define

(5.2)  $\gamma^0_N(x, y) = \gamma_N(x, x) + \gamma_N(y, y) - 2\gamma_N(x, y)$,  $(x, y) \in J^2$.

It follows from (5.1) and (5.2) that $\gamma_N(x, y) \ge 0$ and, moreover, that there exists a positive constant $K(<\infty)$, depending only on $T$ and the constants in (2.6)-(2.8), such that

(5.3)  $\gamma^0_N(x, y) \le K|y - x|$,  ∀ $(x, y) \in J^2$.

Let $\{\xi_N = [\xi_N(x),\ x \in J]\}$ be a sequence of (independent) Gaussian functions on $J$, where $E\,\xi_N(x) = 0$ and $E\,\xi_N(x)\,\xi_N(y) = \gamma_N(x, y)$, ∀ $(x, y) \in J^2$. Then, by using (2.6)-(2.8) and (2.14), it follows by some standard steps that

$E[\xi_N(y) - \xi_N(x)]^4 = 3\big(E[\xi_N(y) - \xi_N(x)]^2\big)^2 = 3[\gamma^0_N(x, y)]^2 \le 3K^2(y - x)^2$,  ∀ $(x, y) \in J^2$,

so that, by the Kolmogorov existence theorem, for every $N$, $\xi_N$ belongs to the space $C[J]$, with probability 1. Now, for every $N$, consider the sample process $W_N = \{W_N(x),\ x \in J\}$, where

(5.4)  $W_N(x) = N^{-1/2}\big(Z_{N[Nx]} - \phi_{N[Nx]}\big)$,  $x \in J$,

$[s]$ being the largest integer $\le s$. Then, $W_N$ belongs to the space $D[J]$, endowed with the Skorokhod $J_1$-topology.

Theorem 5.1. Under (2.6)-(2.8) and (2.14), $\{W_N\}$ and $\{\xi_N\}$ are convergent-equivalent in law, in the $J_1$-topology on $D[J]$.
Proof. We need to show that (a) the finite-dimensional distributions of $\{W_N\}$ are convergent-equivalent to the corresponding ones of $\{\xi_N\}$, and (b) that $\{W_N\}$ is tight. Now, (a) follows readily from Theorem 4.1. Also, by (1.4) and (5.4), $W_N(0) = 0$ with probability 1, ∀ $N$. Hence, to prove (b), it suffices to show that, for every $\epsilon > 0$ and $\eta > 0$, there exist a $\delta$: $0 < \delta < T$ and an integer $N_0$, such that, for every $x \in J$ and $N \ge N_0$,

(5.5)  $P\big\{\sup\big[|W_N(y) - W_N(x)|:\ x \le y \le (x + \delta) \wedge T\big] > \epsilon\big\} < \eta\delta/T$.

Suppose that, in (5.4), we replace $Z_{N[Nx]} - \phi_{N[Nx]}$ by $\hat S^{([Nx])}_{N[Nx]}$, and denote the resulting process by $\hat W_N$. Then, proceeding as in (3.9)-(3.10), it follows that, for every $\epsilon > 0$,

(5.6)  $\lim_{N\to\infty} P\big\{\sup_{x\in J} |W_N(x) - \hat W_N(x)| > \epsilon\big\} = 0$.

Hence, it suffices to prove (5.5) with $\hat W_N$ replacing $W_N$. Towards this, note that, since $\{\hat S^{(n)}_{Nk},\ B_{Nk};\ k \ge 1\}$ is a martingale, by (3.12), (3.14) and (3.16) [insuring that $\big(\sum_{k=1}^{[Nx]}[\ell^{(n)}_{Nk} - E\,\ell^{(n)}_{Nk}]\big)/d^2_{N[Nx]} \overset{P}{\to} 0$, ∀ $x \in J$], we are in a position to use Theorem 2 of Scott (1973), and this, along with (2.13) and (2.14), implies that, for every $\epsilon > 0$ and $\eta > 0$, there exist a $\delta$: $0 < \delta < T$ and an $N_0$, such that, for $N \ge N_0$ and $n = [Nx]$, $x \in J$,

(5.8)  $P\Big\{\max_{n - \delta N \le k \le n} N^{-1/2}\,|\hat S^{(n)}_{Nn} - \hat S^{(n)}_{Nk}| > \tfrac12\epsilon\Big\} < \frac{\eta\delta}{2T}$.

Also, if we choose $\delta\ (>0)$ so small that $\delta M_1 < 1$, then, for $n - k \le \delta N$, $(n-k)\{\max_{1\le s\le N} p_N(s)\} \le \delta M_1 < 1$, by (2.6). Hence, for $n \ge k \ge (n - \delta N) \vee 0$, we obtain from (3.7) that

(5.9)  $N^{-1/2}\big(\hat S^{(k)}_{Nk} - \hat S^{(n)}_{Nk}\big) = N^{-1/2} \sum_{i=1}^{k} \big[g_{n,k}(Y_{Ni}, Q_{Ni}) - E\,g_{n,k}(Y_{Ni}, Q_{Ni})\big]$,

where the $g_{n,k}$ are as in (2.15), $1 \le i \le k \le n \le NT$. Note that, by (2.6)-(2.7), for every $n \le NT$, (2.15)-(2.17) hold with

(5.10)  $M_{N,3} = O\big(N^{-1}(n-k)\cdot N^{-1/2}\max_{1\le s\le N}|a_N(s)|\big) = [o(1)][(n-k)/N]$,  $M_{N,4} = N^{-1}(n-k)\,O(1)$  and  $M_{N,5} = N^{-2}(n-k)^2\,O(1)$.

Hence, by (2.19)-(2.20) and (5.9)-(5.10), we have

(5.11)  $E\big[N^{-1/2}\big(\hat S^{(k)}_{Nk} - \hat S^{(n)}_{Nk}\big)\big]^2 \le M^{*}\,(n-k)^2/N^2$,  ∀ $k$: $(n - \delta N) \vee 0 \le k \le n \le TN$,

where $M^{*}(<\infty)$ does not depend on $n$ and $\delta$. By (5.11) and Theorem 12.2 of Billingsley (1968, p. 94), we conclude that, for every $\epsilon > 0$ and $\eta > 0$,

(5.12)  $P\Big\{\max_{n - \delta N \le k \le n} N^{-1/2}\,|\hat S^{(k)}_{Nk} - \hat S^{(n)}_{Nk}| > \tfrac12\epsilon\Big\} \le K^{*}\delta^2/\epsilon^2$,

where $K^{*}(<\infty)$ does not depend on $\delta$ and $\epsilon$. For every $\epsilon > 0$ and $\eta > 0$, we choose $\delta$: $0 < \delta \le \eta\epsilon^2/(2K^{*}T)$, so that the right-hand side of (5.12) is $\le \eta\delta/(2T)$. From (5.8) and (5.12), we obtain (5.5) (for $\hat W_N$), and this completes the proof. Q.E.D.
6. INVARIANCE PRINCIPLE FOR THE WAITING TIME PROCESS

We define the waiting times $U_N(t)$, $t \ge 0$, as in (1.5). Here also, we make the assumptions (2.6)-(2.8) as well as the convention (2.14), i.e., $A_{N2} = 1$. Note that here all the $a_N(s)$ are assumed to be non-negative, so that $Z_{Nk}$ is non-decreasing in $k(\ge 0)$; also, by (2.7) and (2.14), the $Y_{Nk}$ are all uniformly asymptotically (as $N \to \infty$) negligible relative to $N^{1/2}$. By definition, $Z_{Nn} \le \sum_{s=1}^N a_N(s) = N A_{N1}$, ∀ $n \ge 1$, and $A_{N1} \le A^{1/2}_{N2} = 1$, so that, in (1.5), we are only interested in the domain $0 \le t \le N A_{N1}$. We assume that

(6.1)  $\liminf_{N\to\infty} A_{N1} \ge A_1 > 0$.

Let us define $\phi_N$ as in (2.3), extended to a continuous argument $x$ ($\phi_N(x) = \sum_{s=1}^N a_N(s)(1 - e^{-x p_N(s)})$, $x \ge 0$), and introduce the sequence $\alpha_N = \{\alpha_N(x):\ 0 \le x < N A_{N1}\}$ by letting

(6.2)  $\phi_N(\alpha_N(x)) = x$,  $0 \le x < N A_{N1}$.

Note that, by (2.3) and (6.2), $\alpha_N(x)$ is non-decreasing in $x$ and

(6.3)  $\frac{d}{dx}\,\alpha_N(x) = \Big[\sum_{s=1}^N a_N(s)\,p_N(s)\,e^{-\alpha_N(x)\,p_N(s)}\Big]^{-1}$.

Thus, by steps similar to those in (2.10)-(2.11), it can be shown that, under (2.6)-(2.8), (2.14) and (6.1), for $x = Nu$, $0 < u < A_1$, $N^{-1}\alpha_N(Nu)$ and $\alpha'_N(Nu)$ are continuous functions of $u$ and, further,

(6.4)  $x_N(Nu) = N^{-1}\alpha_N(Nu) = O_e(1)$  and

(6.5)  $\alpha'_N(Nu) = O_e(1)$;  $O_e$ = exact order.

Let us also define $\gamma_N(x, y)$ as in (5.1), and set

(6.6)  $s_N(Nu) = \alpha'_N(Nu)\,\gamma^{1/2}_N\big(x_N(Nu),\ x_N(Nu)\big)$.
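The inverse function $\alpha_N$ of (6.2) is rarely available in closed form, but, since $\phi_N(x) = \sum_s a_N(s)(1 - e^{-x p_N(s)})$ is continuous and strictly increasing when the $a_N(s)$ are positive, it can be computed by bisection. The sketch below is our own numerical illustration (not from the paper); it verifies the defining identity $\phi_N(\alpha_N(t)) = t$.

```python
import math

def phi_cont(a, p, x):
    """phi_N of (2.3) extended to a continuous argument x >= 0."""
    return sum(ai * (1.0 - math.exp(-x * pi)) for ai, pi in zip(a, p))

def alpha(a, p, t):
    """alpha_N(t) of (6.2): the solution x of phi_N(x) = t, found by
    bisection; requires 0 <= t < sum(a) and all a(s) > 0."""
    lo, hi = 0.0, 1.0
    while phi_cont(a, p, hi) < t:   # bracket the root
        hi *= 2.0
    for _ in range(200):            # bisect down to machine precision
        mid = 0.5 * (lo + hi)
        if phi_cont(a, p, mid) < t:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

Since $\phi_N$ is strictly increasing, the computed $\alpha_N$ is automatically non-decreasing in $t$, matching the monotonicity noted after (6.2).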
Then, note that, by (6.5), for every $u \in (0, A_1)$ and $v\ (|v| \le K)$, as $N \to \infty$,

(6.7)  $N^{-1/2}\big[\alpha_N(Nu + N^{1/2}v) - \alpha_N(Nu)\big] = v\,\alpha'_N(Nu) + o(1)$.

Further, by (6.2) and (1.5), for every $v$ and $u \in (0, A_1)$,

(6.8)  $P\{U_N(Nu) > \alpha_N(Nu + N^{1/2}v)\} = P\{Z_{N\alpha_N(Nu + N^{1/2}v)} < Nu\}$
  $= P\big\{N^{-1/2}\big[Z_{N\alpha_N(Nu + N^{1/2}v)} - \phi_N\big(\alpha_N(Nu + N^{1/2}v)\big)\big] < N^{-1/2}\big[Nu - (Nu + N^{1/2}v)\big]\big\}$
  $= P\big\{W_N\big(N^{-1}\alpha_N(Nu + N^{1/2}v)\big) \le -v\big\}$,

by (6.2), where $W_N(\cdot)$ is defined by (5.4); and, by (6.7), $N^{-1}\big[\alpha_N(Nu + N^{1/2}v) - \alpha_N(Nu)\big] \to 0$. Hence, by Theorem 5.1 [viz., (5.5)], the right-hand side of (6.8) is convergent-equivalent to

(6.9)  $P\big\{\xi_N\big(N^{-1}\alpha_N(Nu)\big) \le -v\big\}$.

On the other hand, for every finite $v$ and $u \in (0, A_1)$, by (6.7) and (6.8),

(6.10)  $P\big\{N^{-1/2}[U_N(Nu) - \alpha_N(Nu)]/s_N(Nu) > v\big\} = P\big\{U_N(Nu) > \alpha_N(Nu) + N^{1/2} s_N(Nu)\,v\big\}$
  $= P\big\{U_N(Nu) > \alpha_N\big(Nu + N^{1/2}[s_N(Nu)\,v/\alpha'_N(Nu) + o(1)]\big)\big\}$
  $\simeq P\big\{W_N\big(N^{-1}\alpha_N\big(Nu + N^{1/2} s_N(Nu)\,v/\alpha'_N(Nu) + o(N^{1/2})\big)\big) \le -v\,s_N(Nu)/\alpha'_N(Nu) + o(1)\big\}$.

Hence, by the convergence equivalence of (6.8)-(6.9), the right-hand side of (6.10) is convergent-equivalent to

(6.11)  $P\big\{\xi_N\big(N^{-1}\alpha_N(Nu)\big)\big/\gamma^{1/2}_N\big(x_N(Nu),\,x_N(Nu)\big) \le -v\big\} = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{-v} \exp\big(-\tfrac12 t^2\big)\,dt$,

by Theorem 5.1. Hence, we conclude that

(6.12)  $\mathcal{L}\big(N^{-1/2}[U_N(Nu) - \alpha_N(Nu)]/s_N(Nu)\big) \to N(0, 1)$,  ∀ $u \in (0, A_1)$.
Let $J^{*} = [0, T^{*}]$ for some $0 < T^{*} < A_1$, and consider a sequence of stochastic processes $W^{*}_N = \{W^{*}_N(u),\ u \in J^{*}\}$, $N \ge 1$, where

(6.13)  $W^{*}_N(u) = N^{-1/2}\,\big[\alpha_N(Nu) - U_N(Nu)\big]\big/\alpha'_N(Nu)$,  $u \in J^{*}$.

Let us define $\gamma_N(\cdot,\cdot)$ as in (5.1) and $\alpha_N(\cdot)$ as in (6.2), and let

(6.14)  $\gamma^{*}_N(u, v) = \gamma_N\big(x_N(Nu),\ x_N(Nv)\big)$,  $u, v \in J^{*}$.

Since $\alpha_N(x)$ is a monotonic transformation of $x$, defining $\{\xi_N\}$ as after (5.3), by transformation of the time-parameter, we obtain a sequence $\{\xi^{*}_N\}$ of Gaussian functions $\xi^{*}_N = \{\xi^{*}_N(u),\ u \in J^{*}\}$ with $E\,\xi^{*}_N(u) = 0$ and $E\,\xi^{*}_N(u)\,\xi^{*}_N(v) = \gamma^{*}_N(u, v)$, ∀ $u, v \in J^{*}$.

Theorem 6.1. Under (2.6)-(2.8), (2.14) and (6.1), for every $T^{*} < A_1$, $\{W^{*}_N\}$ and $\{\xi^{*}_N\}$ are convergent-equivalent in law.

Proof. For every given $m(\ge 1)$ and $0 \le u_1 < \cdots < u_m \le T^{*}$, virtually repeating the steps (6.8)-(6.11), but working with the vector case, it follows that

(6.15)  the finite-dimensional distributions of $\{W^{*}_N\}$ and $\{\xi^{*}_N\}$ are convergent-equivalent.
Also, recall that, by (1.5),

(6.16)  $Z_{N U_N(t)} \le t < Z_{N U_N(t)} + Y_{N U_N(t)+1}$,  ∀ $t \ge 0$,

where, by (2.7) and (2.14), $\max_{k}|Y_{Nk}| \le \max_{1\le s\le N}|a_N(s)| = o(N^{1/2})$. Hence,

(6.17)  $\sup_{u \in J^{*}} N^{-1/2}\,\big|Z_{N U_N(Nu)} - Nu\big| \overset{P}{\to} 0$,  as $N \to \infty$.

On the other hand, by Theorem 5.1 and (6.2),

(6.18)  $\sup_{u \in J^{*}} \big|N^{-1} Z_{N\alpha_N(Nu)} - u\big| \overset{P}{\to} 0$,  as $N \to \infty$.

By (6.5), (6.17) and (6.18), it follows by some standard steps that (6.19) holds, reducing the tightness of $\{W^{*}_N\}$ to that of $\{W_N\}$. Having proved this, we may proceed along the lines of the proof of Theorem 17.3 of Billingsley (1968, p. 149) and show that the weak convergent equivalence of $\{W_N\}$ and $\{\xi_N\}$ in Theorem 5.1 implies the same for $\{W^{*}_N\}$ and $\{\xi^{*}_N\}$. Q.E.D.
7. SOME CONCLUDING REMARKS

Parallel to (6.12), Rosen (1970) has obtained the asymptotic normality of $U_N(Nu)$, along with the asymptotic equivalence of $\alpha_N(Nu)$ and $E\,U_N(Nu)$, as well as of $s^2_N(Nu)$ and $\mathrm{Var}[U_N(Nu)]$. However, his conditions (2.7)-(2.8) are somewhat more restrictive than ours, and our Theorem 5.1 provides us with Theorem 6.1 under no extra conditions; Rosen's approach presumably encounters considerable difficulties in this respect. Secondly, in his Theorems 2 and 3, Rosen (1969) has studied the convergence of f.d.d.'s of $\{W_N\}$ to those of $\{\xi_N\}$ under additional conditions [viz., his (5.4), (5.10)] which insure that $\gamma_N(x, y) \to \gamma(x, y)$ as $N \to \infty$, ∀ $(x, y) \in J^2$. Under his setup, using his Lemma 3.17, one can also prove the tightness of $\{W_N\}$ (by using Theorem 12.2 of Billingsley (1968)), provided one assumes (in our notations) that, for some $r > 2$, $A_{Nr}/A^{r/2}_{N2} = O(1)$, ∀ $N \ge N_0$. In our approach, these additional conditions do not appear to be necessary. Moreover, contrasted with his Hilbert-space approach, the present one appears to be more elementary too. Finally, through the pioneering work of Rosen (1972), the coupon collector's problem occupies a prominent place in the development of the asymptotic theory of finite population sampling (without replacement) with varying probabilities. It is hoped that the results of the present paper will lead to more work in this potential area.
REFERENCES

[1] Baum, L.E., and Billingsley, P. (1965). Asymptotic distributions for the coupon collector's problem. Ann. Math. Statist. 36, 1835-1839.

[2] Billingsley, P. (1968). Convergence of Probability Measures. John Wiley, New York.

[3] Dvoretzky, A. (1972). Central limit theorems for dependent random variables. Proc. 6th Berkeley Symp. Math. Stat. Prob. 2, 513-535.

[4] McLeish, D.L. (1974). Dependent central limit theorems and invariance principles. Ann. Probability 2, 620-628.

[5] Rosen, B. (1969). Asymptotic normality in a coupon collector's problem. Zeit. Wahrsch. Verw. Geb. 13, 256-279.

[6] Rosen, B. (1970). On the coupon collector's waiting time. Ann. Math. Statist. 41, 1952-1969.

[7] Rosen, B. (1972). Asymptotic theory of successive sampling with varying probabilities without replacement, I and II. Ann. Math. Statist. 43, 373-397; 748-776.

[8] Scott, D.J. (1973). Central limit theorems for martingales and for processes with stationary increments using a Skorokhod representation approach. Adv. Appl. Prob. 5, 119-137.