
Open Math. 2015; 13: 571–576
Open Mathematics
Open Access
Research Article
André Adler*

Laws of large numbers for ratios of uniform random variables

DOI 10.1515/math-2015-0054
Received May 25, 2015; accepted September 3, 2015.

*Corresponding Author: André Adler, Department of Mathematics, Illinois Institute of Technology, Chicago, Illinois 60616, USA, E-mail: [email protected]

© 2015 André Adler, licensee De Gruyter Open. This work is licensed under the Creative Commons Attribution-NonCommercial-NoDerivs 3.0 License.
Abstract: Let $\{X_n, n \ge 1\}$ and $\{Y_n, n \ge 1\}$ be two sequences of uniform random variables. We obtain various strong and weak laws of large numbers for the ratio of these two sequences. Even though these are uniform and naturally bounded random variables, the ratios are not bounded and have an unusual behaviour, creating Exact Strong Laws.
Keywords: Almost sure convergence, Strong law of large numbers, Weak law of large numbers, Slow variation
MSC: 60F05, 60F15
1 Introduction
In this paper we examine laws of large numbers for ratios of uniform random variables. It turns out that the ratios of these uniform random variables are not integrable. Moreover, they are barely without a finite first moment, much like the St. Petersburg distribution, see [4] and [6]. In this paper we will show how to establish strong and weak laws of large numbers for these types of ratios.

The two sequences of uniform random variables are $\{X_n, n \ge 1\}$ and $\{Y_n, n \ge 1\}$. In Section 2, the random variables $\{X_n, Y_n, n \ge 1\}$ whose ratios $X_n/Y_n$ we consider are i.i.d. In Section 3, we examine order statistics from a sample of size two from the same uniform distribution. Then, in Section 4 we examine two different types of uniform random variables. The surprising twist is that in every case the distribution of these ratios belongs to a Pareto family, where these ratios are not integrable, causing classical strong laws to fail while the classical weak laws aren't affected at all. This is why we need to obtain our Exact Strong Laws, see [1]. An Exact Strong Law is an almost sure limit of normalized weighted sums of random variables that have either mean zero or no mean at all. In certain situations we can make that almost sure limit a nonzero constant.
As usual, we define $\lg x = \log(\max\{e, x\})$ and $\lg_2 x = \lg(\lg x)$. Also, we use the symbol $C$ to denote a generic positive real number that is not necessarily the same in each appearance.
2 U(0,p) vs. U(0,p)
Let $\{X_n, n \ge 1\}$ and $\{Y_n, n \ge 1\}$ be independent sequences of $U(0,p)$ random variables. Here we let $R_n = X_n/Y_n$. In order to obtain the density of $R$, let $Z = Y$. Then the density $f_{XY}(x,y) = p^{-2}I(0 < x, y < p)$ transforms to $f_{RZ}(r,z) = zp^{-2}$, where $0 < rz < p$ and $0 < z < p$. Therefore, if $0 < r \le 1$, then $f_R(r) = \int_0^p zp^{-2}\,dz = 1/2$, and if $r > 1$, then $f_R(r) = \int_0^{p/r} zp^{-2}\,dz = r^{-2}/2$. These random variables do not have a finite first moment, hence the strong laws associated with them are not typical. Here we must obtain weighted strong laws in order to obtain a finite nonzero limit. This is surprising given the fact that both $\{X_n, n \ge 1\}$ and $\{Y_n, n \ge 1\}$ are sequences of bounded random variables.
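As a quick numerical sanity check (not part of the original argument), the Pareto tail implied by this density, $P(R > r) = 1/(2r)$ for $r \ge 1$, can be compared with simulation. The following is a minimal sketch in Python/NumPy; the sample size and the value of $p$ are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)
p = 3.0           # any p > 0; the distribution of R = X/Y does not depend on p
size = 1_000_000  # arbitrary sample size for the check

x = rng.uniform(0, p, size)
y = rng.uniform(0, p, size)
r = x / y

for t in (1, 2, 5, 10):
    # empirical tail versus the theoretical tail 1/(2t), valid for t >= 1
    print(t, np.mean(r > t), 1 / (2 * t))
```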
Theorem 2.1. If $X_n$ and $Y_n$ are i.i.d. $U(0,p)$ random variables, then for all $\alpha > -2$
$$\lim_{N\to\infty}\frac{\sum_{n=1}^N \frac{(\lg n)^{\alpha} X_n}{n Y_n}}{(\lg N)^{\alpha+2}} = \frac{1}{2(\alpha+2)} \quad \text{almost surely.}$$
Proof. Let $a_n = (\lg n)^{\alpha}/n$, $b_n = (\lg n)^{\alpha+2}$, $c_n = b_n/a_n = n(\lg n)^2$ and $R_n = X_n/Y_n$. We use the usual Khintchine-Kolmogorov Convergence Theorem argument, see [3], page 113. We partition our sum into the following three terms:
$$\frac{\sum_{n=1}^N a_n R_n}{b_N} = \frac{\sum_{n=1}^N a_n\big[R_n I(R_n \le c_n) - E R_n I(R_n \le c_n)\big]}{b_N} + \frac{\sum_{n=1}^N a_n R_n I(R_n > c_n)}{b_N} + \frac{\sum_{n=1}^N a_n E R_n I(R_n \le c_n)}{b_N}.$$
The first term converges to zero almost surely, using Kronecker's lemma, since
$$\sum_{n=1}^{\infty} c_n^{-2} E R_n^2 I(R_n \le c_n) = \sum_{n=1}^{\infty} c_n^{-2}\left[\frac{1}{2}\int_0^1 r^2\,dr + \frac{1}{2}\int_1^{c_n} dr\right] = \sum_{n=1}^{\infty} c_n^{-2}\left[\frac{1}{6} + \frac{c_n-1}{2}\right] \le C\sum_{n=1}^{\infty} c_n^{-1} = C\sum_{n=1}^{\infty}\frac{1}{n(\lg n)^2} < \infty.$$
The second term converges to zero almost surely, using the Borel-Cantelli lemma, since
$$\sum_{n=1}^{\infty} P\{R_n > c_n\} \le C\sum_{n=1}^{\infty}\int_{c_n}^{\infty} r^{-2}\,dr = C\sum_{n=1}^{\infty} c_n^{-1} = C\sum_{n=1}^{\infty}\frac{1}{n(\lg n)^2} < \infty.$$
As for the third term,
$$E R_n I(R_n \le c_n) = \frac{1}{2}\int_0^1 r\,dr + \frac{1}{2}\int_1^{c_n} r^{-1}\,dr = \frac{1}{4} + \frac{1}{2}\lg c_n \sim \frac{1}{2}\lg n.$$
Thus
$$\frac{\sum_{n=1}^N a_n E R_n I(R_n \le c_n)}{b_N} \sim \frac{\frac{1}{2}\sum_{n=1}^N \frac{(\lg n)^{\alpha+1}}{n}}{(\lg N)^{\alpha+2}} \to \frac{1}{2(\alpha+2)}$$
concluding the proof.
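Because of the $(\lg N)^{\alpha+2}$ norming, the convergence in Theorem 2.1 is extremely slow, but the statement can still be illustrated numerically. The sketch below (not part of the proof) uses arbitrary choices of $\alpha$ and $p$; any $\alpha > -2$ would do.

```python
import numpy as np

rng = np.random.default_rng(1)
p, alpha, N = 1.0, 0.0, 10**6   # arbitrary parameters; Theorem 2.1 needs alpha > -2

n = np.arange(1, N + 1)
lg_n = np.log(np.maximum(np.e, n))       # lg x = log(max(e, x)) as in the paper
x = rng.uniform(0, p, N)
y = rng.uniform(0, p, N)

weighted_sum = np.sum(lg_n**alpha * x / (n * y))
normed = weighted_sum / np.log(max(np.e, N))**(alpha + 2)
print(normed, 1 / (2 * (alpha + 2)))     # same order of magnitude; convergence is slow
```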
While the only strong law that we can establish for our random variables is unusual, our weak law has a lot more freedom. Even though these random variables don't have a finite first moment, we can still use the Degenerate Convergence Theorem, see [3]. In fact, we include a slowly varying function as a multiplicative factor in both the summands and the norming sequence. We let $L(x)$ be any slowly varying function, see [8]. Similar weak laws can be found in [2].
Theorem 2.2. If $L(x)$ is any slowly varying function, then for all $\alpha > -1$
$$\frac{\sum_{n=1}^N \frac{n^{\alpha}L(n) X_n}{Y_n}}{N^{\alpha+1}L(N)\lg N} \stackrel{P}{\to} \frac{1}{2(\alpha+1)} \quad \text{as } N\to\infty.$$
Proof. Here, we let $a_n = n^{\alpha}L(n)$, $b_n = n^{\alpha+1}L(n)\lg n$ and $R_n = X_n/Y_n$. From the Degenerate Convergence Theorem, which can be found on page 356 of [3], we have for all $\varepsilon > 0$
$$\sum_{n=1}^N P\{a_n R_n > \varepsilon b_N\} \le C\sum_{n=1}^N \int_{\varepsilon b_N/a_n}^{\infty} r^{-2}\,dr = \frac{C\sum_{n=1}^N a_n}{\varepsilon b_N} \le \frac{C\sum_{n=1}^N n^{\alpha}L(n)}{N^{\alpha+1}L(N)\lg N} \le \frac{CN^{\alpha+1}L(N)}{N^{\alpha+1}L(N)\lg N} = \frac{C}{\lg N} \to 0,$$
where we used a theorem that applies to sums containing slowly varying functions, which can be found on page 281 of [5]. Similarly, the variance term in the Degenerate Convergence Theorem is bounded above by
$$\frac{\sum_{n=1}^N a_n^2\,E R_n^2 I(R_n \le b_N/a_n)}{b_N^2} = \frac{1}{b_N^2}\sum_{n=1}^N a_n^2\left[\frac{1}{2}\int_0^1 r^2\,dr + \frac{1}{2}\int_1^{b_N/a_n} dr\right] \le \frac{C}{b_N^2}\sum_{n=1}^N a_n^2\,\frac{b_N}{a_n} = \frac{C\sum_{n=1}^N a_n}{b_N} \le \frac{CN^{\alpha+1}L(N)}{N^{\alpha+1}L(N)\lg N} = \frac{C}{\lg N} \to 0.$$
Our truncated first moment is
$$E R_n I(R_n \le b_N/a_n) = \frac{1}{2}\int_0^1 r\,dr + \frac{1}{2}\int_1^{b_N/a_n} r^{-1}\,dr = \frac{1}{4} + \frac{1}{2}\big[\lg(b_N) - \lg(a_n)\big]$$
$$= \frac{1}{4} + \frac{1}{2}\big[(\alpha+1)\lg N + \lg L(N) + \lg_2 N - \alpha\lg n - \lg L(n)\big].$$
Let's now examine the six terms of $\frac{1}{b_N}\sum_{n=1}^N a_n E R_n I(R_n \le b_N/a_n)$:
$$\frac{1}{4b_N}\sum_{n=1}^N a_n = \frac{\sum_{n=1}^N n^{\alpha}L(n)}{4N^{\alpha+1}L(N)\lg N} \le \frac{CN^{\alpha+1}L(N)}{N^{\alpha+1}L(N)\lg N} = \frac{C}{\lg N} \to 0,$$
$$\frac{(\alpha+1)\lg N}{2b_N}\sum_{n=1}^N a_n = \frac{\alpha+1}{2N^{\alpha+1}L(N)}\sum_{n=1}^N n^{\alpha}L(n) \sim \frac{\alpha+1}{2N^{\alpha+1}L(N)}\cdot\frac{N^{\alpha+1}L(N)}{\alpha+1} = \frac{1}{2},$$
$$\frac{\lg L(N)}{2b_N}\sum_{n=1}^N a_n = \frac{\lg L(N)}{2N^{\alpha+1}L(N)\lg N}\sum_{n=1}^N n^{\alpha}L(n) \le \frac{C\lg L(N)}{\lg N} \to 0,$$
$$\frac{\lg_2 N}{2b_N}\sum_{n=1}^N a_n = \frac{\lg_2 N}{2N^{\alpha+1}L(N)\lg N}\sum_{n=1}^N n^{\alpha}L(n) \le \frac{C\lg_2 N}{\lg N} \to 0,$$
$$\frac{\alpha}{2b_N}\sum_{n=1}^N a_n\lg n = \frac{\alpha}{2N^{\alpha+1}L(N)\lg N}\sum_{n=1}^N n^{\alpha}L(n)\lg n \sim \frac{\alpha N^{\alpha+1}L(N)\lg N}{2(\alpha+1)N^{\alpha+1}L(N)\lg N} = \frac{\alpha}{2(\alpha+1)},$$
and finally
$$\frac{1}{2b_N}\sum_{n=1}^N a_n\lg L(n) = \frac{\sum_{n=1}^N n^{\alpha}L(n)\lg L(n)}{2N^{\alpha+1}L(N)\lg N} \sim \frac{N^{\alpha+1}L(N)\lg L(N)}{2(\alpha+1)N^{\alpha+1}L(N)\lg N} = \frac{\lg L(N)}{2(\alpha+1)\lg N} \to 0.$$
Therefore
$$\frac{1}{b_N}\sum_{n=1}^N a_n E\big(R_n I(R_n \le b_N/a_n)\big) \to \frac{1}{2} - \frac{\alpha}{2(\alpha+1)} = \frac{1}{2(\alpha+1)},$$
concluding this proof.
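Since Theorem 2.2 only asserts convergence in probability, a single large sample already gives a rough illustration. The sketch below (again, not part of the proof) takes $L(x) = \lg x$ as one concrete slowly varying function and an arbitrary $\alpha > -1$; because the summands are heavy tailed, individual runs can still fluctuate noticeably.

```python
import numpy as np

def lg(t):
    """lg x = log(max(e, x)), as defined in the paper."""
    return np.log(np.maximum(np.e, t))

rng = np.random.default_rng(2)
p, alpha, N = 1.0, 1.0, 10**6       # arbitrary; Theorem 2.2 needs alpha > -1
L = lg                              # example slowly varying function L(x) = lg x

n = np.arange(1, N + 1)
x = rng.uniform(0, p, N)
y = rng.uniform(0, p, N)

num = np.sum(n**alpha * L(n) * x / y)
den = N**(alpha + 1) * L(N) * lg(N)
print(num / den, 1 / (2 * (alpha + 1)))   # compare with the claimed limit 1/(2(alpha+1))
```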
3 First two order statistics from U(0,p)
Let $X_{(1)}$ and $X_{(2)}$ be the two order statistics from a sample of size two from $U(0,p)$. So $X_{(1)}$ is the minimum of these two and $X_{(2)}$ is the maximum. Define $S = X_{(2)}/X_{(1)}$. In order to obtain the density of $S$, let $Z = X_{(1)}$. The joint density of $X_{(1)}$ and $X_{(2)}$ is $f_{X_{(1)}X_{(2)}}(x_1,x_2) = 2p^{-2}I(0 < x_1 < x_2 < p)$. This transforms to $f_{SZ}(s,z) = 2zp^{-2}$, where $0 < z < zs < p$. Therefore $f_S(s) = \int_0^{p/s} 2zp^{-2}\,dz = s^{-2}I(s > 1)$. Next, we keep picking pairs of independent random variables from $U(0,p)$ and taking the ratio of their two order statistics. We call these sequences $X_{n(1)}$ and $X_{n(2)}$.
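The Pareto form of $f_S$ is again easy to check by simulation; the minimal sketch below (not part of the paper) compares the empirical tail of $S$ with $P(S > s) = 1/s$ for $s \ge 1$, with the sample size and $p$ chosen arbitrarily.

```python
import numpy as np

rng = np.random.default_rng(3)
p, size = 2.0, 1_000_000            # arbitrary; the distribution of S does not depend on p

u = rng.uniform(0, p, (size, 2))    # independent pairs from U(0, p)
s = u.max(axis=1) / u.min(axis=1)   # S = X_(2) / X_(1)

for t in (1.5, 2, 5, 10):
    # empirical tail versus the theoretical tail 1/t, valid for t >= 1
    print(t, np.mean(s > t), 1 / t)
```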
Theorem 3.1. If $X_{n(1)}$ and $X_{n(2)}$ are independent pairs of order statistics of a size two random sample from a $U(0,p)$ distribution, then for all $\alpha > -2$
$$\lim_{N\to\infty}\frac{\sum_{n=1}^N \frac{(\lg n)^{\alpha} X_{n(2)}}{n X_{n(1)}}}{(\lg N)^{\alpha+2}} = \frac{1}{\alpha+2} \quad \text{almost surely.}$$
Proof. Let $a_n = (\lg n)^{\alpha}/n$, $b_n = (\lg n)^{\alpha+2}$, $c_n = b_n/a_n = n(\lg n)^2$ and $S_n = X_{n(2)}/X_{n(1)}$. As in the proof of Theorem 2.1, we are using the Khintchine-Kolmogorov Convergence Theorem, the Kronecker lemma and the Borel-Cantelli lemma. The partition in this case is
$$\frac{\sum_{n=1}^N a_n S_n}{b_N} = \frac{\sum_{n=1}^N a_n\big[S_n I(1\le S_n\le c_n) - E S_n I(1\le S_n\le c_n)\big]}{b_N} + \frac{\sum_{n=1}^N a_n S_n I(S_n > c_n)}{b_N} + \frac{\sum_{n=1}^N a_n E S_n I(1\le S_n\le c_n)}{b_N}.$$
The first term converges to zero almost surely since
$$\sum_{n=1}^{\infty} c_n^{-2} E S_n^2 I(1\le S_n\le c_n) = \sum_{n=1}^{\infty} c_n^{-2}\int_1^{c_n} ds \le \sum_{n=1}^{\infty} c_n^{-1} = \sum_{n=1}^{\infty}\frac{1}{n(\lg n)^2} < \infty.$$
The second term converges to zero almost surely since
$$\sum_{n=1}^{\infty} P\{S_n > c_n\} = \sum_{n=1}^{\infty}\int_{c_n}^{\infty} s^{-2}\,ds = \sum_{n=1}^{\infty} c_n^{-1} = \sum_{n=1}^{\infty}\frac{1}{n(\lg n)^2} < \infty.$$
As for the third term,
$$E S_n I(1\le S_n\le c_n) = \int_1^{c_n} s^{-1}\,ds = \lg c_n \sim \lg n.$$
Thus
$$\frac{\sum_{n=1}^N a_n E S_n I(1\le S_n\le c_n)}{b_N} \sim \frac{\sum_{n=1}^N \frac{(\lg n)^{\alpha+1}}{n}}{(\lg N)^{\alpha+2}} \to \frac{1}{\alpha+2}$$
concluding the proof.
We follow this up with a weak law that is comparable to our Theorem 2.2. In all of our weak laws, i.e., Theorems 2.2, 3.2 and 4.2, the corresponding strong law fails. Hence these theorems are optimal: we only have convergence in probability. Almost sure convergence fails in each of those theorems.
Theorem 3.2. If $L(x)$ is any slowly varying function, then for all $\alpha > -1$
$$\frac{\sum_{n=1}^N \frac{n^{\alpha}L(n) X_{n(2)}}{X_{n(1)}}}{N^{\alpha+1}L(N)\lg N} \stackrel{P}{\to} \frac{1}{\alpha+1} \quad \text{as } N\to\infty.$$
Proof. Here, we let $a_n = n^{\alpha}L(n)$, $b_n = n^{\alpha+1}L(n)\lg n$ and $S_n = X_{n(2)}/X_{n(1)}$. Once again we are using the Degenerate Convergence Theorem, which can be found on page 356 of [3]. So, for all $\varepsilon > 0$
$$\sum_{n=1}^N P\{a_n S_n > \varepsilon b_N\} = \sum_{n=1}^N\int_{\varepsilon b_N/a_n}^{\infty} s^{-2}\,ds \le \frac{C\sum_{n=1}^N a_n}{b_N} = \frac{C\sum_{n=1}^N n^{\alpha}L(n)}{N^{\alpha+1}L(N)\lg N} \le \frac{CN^{\alpha+1}L(N)}{N^{\alpha+1}L(N)\lg N} = \frac{C}{\lg N} \to 0.$$
The variance term is bounded above by
$$\frac{\sum_{n=1}^N a_n^2\,E S_n^2 I(1\le S_n\le b_N/a_n)}{b_N^2} = \frac{1}{b_N^2}\sum_{n=1}^N a_n^2\int_1^{b_N/a_n} ds \le \frac{1}{b_N}\sum_{n=1}^N a_n = \frac{\sum_{n=1}^N n^{\alpha}L(n)}{N^{\alpha+1}L(N)\lg N} \le \frac{CN^{\alpha+1}L(N)}{N^{\alpha+1}L(N)\lg N} = \frac{C}{\lg N} \to 0.$$
So our truncated first moment is
$$E S_n I(1\le S_n\le b_N/a_n) = \int_1^{b_N/a_n} s^{-1}\,ds = \lg(b_N) - \lg(a_n) = (\alpha+1)\lg N + \lg L(N) + \lg_2 N - \alpha\lg n - \lg L(n).$$
Of these five terms, the two that have nonzero limits, when combined with our sequences $a_n$ and $b_N$, are
$$\frac{(\alpha+1)\lg N}{b_N}\sum_{n=1}^N a_n = \frac{\alpha+1}{N^{\alpha+1}L(N)}\sum_{n=1}^N n^{\alpha}L(n) \sim \frac{(\alpha+1)N^{\alpha+1}L(N)}{(\alpha+1)N^{\alpha+1}L(N)} = 1$$
and
$$\frac{\alpha}{b_N}\sum_{n=1}^N a_n\lg n = \frac{\alpha}{N^{\alpha+1}L(N)\lg N}\sum_{n=1}^N n^{\alpha}L(n)\lg n \sim \frac{\alpha N^{\alpha+1}L(N)\lg N}{(\alpha+1)N^{\alpha+1}L(N)\lg N} = \frac{\alpha}{\alpha+1}.$$
Therefore
$$\frac{1}{b_N}\sum_{n=1}^N a_n E\big(S_n I(1\le S_n\le b_N/a_n)\big) \to 1 - \frac{\alpha}{\alpha+1} = \frac{1}{\alpha+1},$$
concluding this proof.
4 U(0,p) vs. U(0,q)
Let $\{X_n, n \ge 1\}$ be i.i.d. $U(0,p)$ random variables and $\{W_n, n \ge 1\}$ be i.i.d. $U(0,q)$ random variables. By letting $Y_n = pW_n/q$ and noting that $\{Y_n, n \ge 1\}$ are i.i.d. $U(0,p)$ random variables, we can apply Theorems 2.1 and 2.2 to obtain our last two results.
Theorem 4.1. If $X_n \sim U(0,p)$ and $W_n \sim U(0,q)$ are independent, then for all $\alpha > -2$
$$\lim_{N\to\infty}\frac{\sum_{n=1}^N \frac{(\lg n)^{\alpha} X_n}{n W_n}}{(\lg N)^{\alpha+2}} = \frac{p}{2q(\alpha+2)} \quad \text{almost surely.}$$
Proof. Observing that
$$\frac{\sum_{n=1}^N \frac{(\lg n)^{\alpha} X_n}{n W_n}}{(\lg N)^{\alpha+2}} = \frac{\sum_{n=1}^N \frac{(\lg n)^{\alpha}\,p\,X_n}{q\,n\,Y_n}}{(\lg N)^{\alpha+2}},$$
the result follows from Theorem 2.1.
Likewise, we conclude with our weak law in the same setting. For further comparisons of weak and strong laws one
should see [7].
Theorem 4.2. If $L(x)$ is any slowly varying function, then for all $\alpha > -1$
$$\frac{\sum_{n=1}^N \frac{n^{\alpha}L(n) X_n}{W_n}}{N^{\alpha+1}L(N)\lg N} \stackrel{P}{\to} \frac{p}{2q(\alpha+1)} \quad \text{as } N\to\infty.$$
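As a final numerical illustration (not part of the paper), Theorem 4.2 can be checked the same way as Theorem 2.2, again with $L(x) = \lg x$ as an example slowly varying function and arbitrary $p$, $q$ and $\alpha > -1$; the reduction $Y_n = pW_n/q$ used above is exactly why the limit picks up the factor $p/q$.

```python
import numpy as np

def lg(t):
    """lg x = log(max(e, x)), as defined in the paper."""
    return np.log(np.maximum(np.e, t))

rng = np.random.default_rng(4)
p, q, alpha, N = 2.0, 5.0, 0.5, 10**6   # arbitrary; Theorem 4.2 needs alpha > -1
L = lg                                  # example slowly varying function

n = np.arange(1, N + 1)
x = rng.uniform(0, p, N)
w = rng.uniform(0, q, N)

num = np.sum(n**alpha * L(n) * x / w)
den = N**(alpha + 1) * L(N) * lg(N)
print(num / den, p / (2 * q * (alpha + 1)))   # compare with the claimed limit p/(2q(alpha+1))
```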
References
[1] Adler A., Exact strong laws, Bulletin of the Institute of Mathematics Academia Sinica, 2000, 28, 141-166
[2] Adler A., Rosalsky A., Volodin A., Weak laws with random indices for arrays of random elements in Rademacher type p Banach spaces, Journal of Theoretical Probability, 1997, 10, 605-623
[3] Chow Y.S., Teicher H., Probability Theory: Independence, Interchangeability, Martingales, 3rd ed., Springer-Verlag, New York, 1997
[4] Feller W., An Introduction to Probability Theory and Its Applications, Vol. 1, 3rd ed., John Wiley, New York, 1968
[5] Feller W., An Introduction to Probability Theory and Its Applications, Vol. 2, 2nd ed., John Wiley, New York, 1971
[6] Klass M., Teicher H., Iterated logarithmic laws for asymmetric random variables barely with or without finite mean, Annals of Probability, 1977, 5, 861-874
[7] Rosalsky A., Taylor R.L., Some strong and weak limit theorems for weighted sums of i.i.d. Banach space valued random elements with slowly varying weights, Stochastic Analysis and Applications, 2004, 22, 1111-1120
[8] Seneta E., Regularly Varying Functions, Lecture Notes in Mathematics No. 508, Springer-Verlag, New York, 1976