Csörgő, Sándor and Dodunekova, Rossitza (1989). "Limit Theorems for the Petersburg Game."

No. 1799
8/1989

LIMIT THEOREMS FOR THE PETERSBURG GAME

SÁNDOR CSÖRGŐ^1 and ROSSITZA DODUNEKOVA^2
We determine all possible subsequences {n_k}_{k=1}^∞ of the positive integers for which the suitably centered and normalized total gain S_{n_k} in n_k Petersburg games has an asymptotic distribution as k → ∞, and identify the corresponding set of limiting distributions. We also solve all the companion problems for lightly, moderately, and heavily trimmed versions of the sum S_{n_k} and for the respective sums of extreme values in S_{n_k}.
1. INTRODUCTION
A virtually complete and unified theory of the asymptotic distribution of sums of order statistics has recently been worked out in the three papers [3,4,5], with [2] augmenting and rounding off the study in [3]. (See, however, [7], and also the survey [6].) Although a number of ad hoc examples are scattered in these papers to illustrate various interesting phenomena, we felt that there was a need for a didactic type of paper that would illustrate all the limit theorems in the above articles on a single, sufficiently complex but still manageable example which, most importantly, is interesting in its own right. Our attention was drawn to the famous Petersburg game by an interesting paper of Martin-Löf [10]. Since the "time-honored" Petersburg paradox, as Feller ([8], X.4) describes it, has been around for 276 years (since the publication of the second edition of Montmort's book in 1713), it is clearly of sufficient interest in itself, and it turns out that it also satisfies the other criteria.
^1 Partially supported by the Hungarian National Foundation for Scientific Research, Grants 1808/86 and 457/88.

^2 Partially supported by the Ministry of Culture, Science and Education of Bulgaria, Contract No. 16848-F3.
The 'paradox' was originally posed by Nicolaus Bernoulli to Montmort in 1713, and the English translation of the nice account of it by Daniel Bernoulli in 1738 is given by Martin-Löf [10]. For a recent historical account and references see Shafer [11]. Following Feller and Martin-Löf, and hence doubling the gain of the Bernoullis, the Petersburg game consists in tossing a fair coin until it falls heads; if this occurs at the k-th throw the player receives 2^k ducats. Hence if X is the gain at a single trial we have P{X = 2^k} = 2^{-k}, k = 1, 2, ..., and

(1.1)    F(x) = P{X ≤ x} = 0  for x < 2,   F(x) = 1 - 2^{-⌊Log x⌋}  for x ≥ 2,

where, here and throughout this paper, Log stands for the logarithm to base 2, ⌊y⌋ = max{j : j integer, j ≤ y} is the usual integer part function, and we shall need ⌈y⌉ = min{j : j integer, j ≥ y}, y ∈ ℝ.
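The single-game gain and its distribution function (1.1) are straightforward to simulate; the following sketch (our own illustration, not part of the original paper) encodes both:

```python
import math
import random

def petersburg_gain(rng=random):
    """One Petersburg game: toss a fair coin until it falls heads;
    heads at the k-th throw pays 2**k ducats."""
    k = 1
    while rng.random() < 0.5:  # tails, with probability 1/2
        k += 1
    return 2 ** k

def F(x):
    """The distribution function (1.1): P{X <= x} = 1 - 2**(-floor(Log x))
    for x >= 2, and 0 for x < 2, with Log the base-2 logarithm."""
    if x < 2:
        return 0.0
    return 1.0 - 2.0 ** (-math.floor(math.log2(x)))
```

For instance, F(2) = 1/2 and F(7) = 3/4, since ⌊Log 7⌋ = 2.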
Let X_1, X_2, ... be independent copies of X, the gains at the first, second, ... trials, so that S_n = X_1 + ... + X_n is the total gain in n consecutive Petersburg games. Feller proved that

(1.2)    S_n/(n Log n) →_P 1  as  n → ∞,

where →_P denotes convergence in probability and, denoting by →_D convergence in distribution, Martin-Löf proved that

(1.3)    S_{2^k}/2^k - k →_D V  as  k → ∞,

where, if ε_r, r = 0, ±1, ±2, ..., are independent Poisson random variables with E ε_r = 2^{-r},

    V = Σ_{r=0}^{-∞} (ε_r - 2^{-r}) 2^r + Σ_{r=1}^{∞} ε_r 2^r,

and hence, with i denoting the imaginary unit,

(1.4)    E exp(itV) = exp{ Σ_{r=0}^{-∞} 2^{-r}(e^{it2^r} - 1 - it2^r) + Σ_{r=1}^{∞} 2^{-r}(e^{it2^r} - 1) }

for any t ∈ ℝ. The limit theorem (1.3) then forms a basis for Martin-Löf to develop a premium formula "which clarifies the 'Petersburg paradox'", at least when the game is played in blocks of size 2^k, k = 1, 2, ..., and is very favorable for the casino.
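The source of the paradox, and of Feller's normalization in (1.2), can be seen in a two-line computation: truncating the gain at 2^r contributes exactly one ducat per level, so the truncated expectation is r while the full expectation diverges. A small sketch (our own, with an illustrative Monte Carlo run; nothing here is from the paper itself):

```python
import math
import random

def truncated_mean(r):
    """E[X; X <= 2**r] = sum_{k=1}^r 2**k * 2**(-k) = r: each level adds one
    ducat, so E X diverges, which is the classical paradox."""
    return sum((2 ** k) * (2.0 ** -k) for k in range(1, r + 1))

def total_gain(n, rng):
    """S_n, the total gain in n consecutive Petersburg games."""
    s = 0
    for _ in range(n):
        k = 1
        while rng.random() < 0.5:
            k += 1
        s += 2 ** k
    return s

# Feller's (1.2): S_n/(n Log n) is near 1 in probability for large n.
rng = random.Random(12345)
n = 10_000
ratio = total_gain(n, rng) / (n * math.log2(n))
```

The ratio fluctuates noticeably from run to run; this is exactly the slow, merely-in-probability convergence that (1.2) asserts and that the distributional refinements (1.3) and Theorem 2.2 below quantify.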
At this point two questions arise. The result in (1.3) means that the Petersburg distribution is in the domain of partial attraction of the infinitely divisible distribution of V. Since V is obviously not a stable random variable, it follows from (1.3) and Theorem 10 in [2] that F belongs to the domain of partial attraction of continuum many different types of infinitely divisible distributions, and hence there are just as many principal clarifications of the paradox. How different are these from one another? The second question is more practical: what to do if n_k games are played, where 2^{k-1} < n_k < 2^k? Some qualitative answers to these questions are indicated by the results for the whole sums S_{n_k} in Section 2. It turns out, in particular, that the two questions are more or less the same.
Since the Petersburg game is exciting exactly because of the occasional large single gains in the total gain S_{n_k}, it is interesting to see the influence of these large gains on the limiting distribution. Let X_{1,n} ≤ ... ≤ X_{n,n} be the order statistics pertaining to the sample X_1, ..., X_n. Together with the full sums S_{n_k}, in Section 2 we also investigate the lightly trimmed sums S_{n_k}(m) = Σ_{j=1}^{n_k-m} X_{j,n_k}, where m ≥ 1 is a fixed integer, while in Section 3 we deal with moderately trimmed sums S_{n_k}(m_k), where m_k → ∞ but m_k/n_k → 0 as k → ∞, and heavily trimmed sums S_n(n - ⌊βn⌋), where 1/2 < β < 1. But then it is also interesting to see what happens to the extreme sums S_{n_k} - S_{n_k}(m_k). This is done in Section 4.
The unified method in [3, 4, 5] and [2] is generally based on the quantile function Q(s) = inf{x : F(x) ≥ s}, 0 < s < 1, which from (1.1) presently is

(1.5)    Q(s) = 2^r  if  1 - 1/2^{r-1} < s ≤ 1 - 1/2^r,  r = 1, 2, ...,

and we may put Q(0) = Q(0+) = 2, or

(1.6)    Q(1 - u) = 2^{⌈Log(1/u)⌉},  0 < u ≤ 1.
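The quantile function (1.5)-(1.6) gives an inverse-transform sampler for the Petersburg gain; a minimal sketch (ours, not from the paper), using the convention Q(0) = 2:

```python
import math
import random

def Q(s):
    """Quantile function from (1.6): Q(1 - u) = 2**ceil(Log(1/u)), 0 < u <= 1,
    with Q(0) = Q(0+) = 2; here 0 <= s < 1."""
    u = 1.0 - s
    return max(2, 2 ** math.ceil(math.log2(1.0 / u)))

def sample_gain(rng=random):
    """Inverse-transform sampling: Q(U) with U uniform on [0, 1) has df (1.1)."""
    return Q(rng.random())
```

For example Q(0.5) = 2, Q(0.75) = 4, and Q(0.9) = 16, matching F(2) = 1/2, F(4) = 3/4, and F(16) = 15/16 ≥ 0.9 while F(8) = 7/8 < 0.9.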
In each of the following three sections we first state all the results of that section, with some
discussion when appropriate, and the proofs occupy the remaining part of the section.
2. FULL AND LIGHTLY TRIMMED SUMS
The first result restricts the choice of meaningful normalizing and centering sequences. We generally deal with lightly trimmed sums S_{n_k}(m) = Σ_{j=1}^{n_k-m} X_{j,n_k}, where m ≥ 0 is a fixed integer. Since S_n = Σ_{j=1}^{n} X_{j,n} for each n, we have S_n(0) = S_n, so the special case m = 0 always corresponds to total gains. Subsequences {n_k}_{k=1}^∞ of the positive integers are always assumed to go to infinity.
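In code, the trimmed sum S_n(m) is simply the sum of all but the m largest observations; a tiny helper (our own illustration) makes the definition concrete:

```python
def trimmed_sum(gains, m):
    """S_n(m) = X_{1,n} + ... + X_{n-m,n}: drop the m largest order statistics.
    With m = 0 this is the full sum S_n."""
    return sum(sorted(gains)[:len(gains) - m])
```

E.g. trimmed_sum([2, 8, 2, 4], 1) removes the single largest gain 8 and returns 8.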
THEOREM 2.1. If for some integer m_0 ≥ 0, a subsequence {n_k}_{k=1}^∞ of the positive integers, and some constants A_{n_k} > 0 and C_{n_k} ∈ ℝ the sequence (S_{n_k}(m_0) - C_{n_k})/A_{n_k} converges in distribution to a non-degenerate random variable W(m_0), then for each subsequence {n'_k}_{k=1}^∞ ⊂ {n_k}_{k=1}^∞ and each integer m ≥ 0 there exist a further subsequence {n''_k}_{k=1}^∞ ⊂ {n'_k}_{k=1}^∞ and a constant a = a_{{n''_k}}, 0 < a < ∞, such that n''_k/A_{n''_k} → a and

    W_{n''_k}(m) := S_{n''_k}(m)/n''_k - ⌈Log(n''_k/(m+1))⌉ - 1 + ((m+1)/n''_k)·2^{⌈Log n''_k⌉} →_D (1/a)·W(m) + c(m)

as k → ∞, where c(m) is some constant, W(m_0) is as before, and none of the other random variables W(m) is degenerate.
The theorem shows that using the normalizing sequence {n_k} and the indicated centering sequence we can achieve all possible limiting types, and hence we can restrict attention to these sequences without loss of generality when answering, in particular, the questions posed in the introduction concerning total gains.
Define W_{n_k} = W_{n_k}(0) as above, replacing n''_k by n_k everywhere. Let us agree that in the dyadic expansion of a number γ, 1/2 < γ ≤ 1, given by

(2.1)    γ = Σ_{j=1}^∞ a_j/2^j = lim_{k→∞} s_k,    s_k = Σ_{j=1}^{k} a_j/2^j,  k ≥ 1,

where a_1 = 1 and a_j = 0 or 1 for j ≥ 2, we exclude all the sequences (1, a_2, a_3, ...) which contain only a finite number of non-zero elements. Let E_1, E_2, ... be independent exponential random variables with the common expectation 1 and, with the Gamma(j) random variables Y_j = E_1 + ... + E_j as jump-points, consider the standard left-continuous Poisson process

(2.2)    N(s) = Σ_{j=1}^∞ I(Y_j < s),  s ≥ 0,

where I(·) is the indicator function. Also, consider the independent increments

(2.3)

where 1/2 ≤ γ ≤ 1 is a fixed number.
THEOREM 2.2. For a given subsequence {n_k}_{k=1}^∞ of the positive integers the sequence W_{n_k} converges in distribution as k → ∞ if and only if

(2.4)    n_k/2^{⌈Log n_k⌉} → γ  for some  1/2 ≤ γ ≤ 1.

In this case, for any integer m ≥ 0,

(2.5)    S_{n_k}(m)/n_k - ⌈Log(n_k/(m+1))⌉ →_D W_γ(m)  as  k → ∞,

where W_γ(m) is expressed through the function φ*_γ given in

(2.6)

In particular, if m = 0 then under (2.4),

(2.7)    W_{n_k} →_D W_γ = (V_γ - γ Log γ)/γ,
"'I
'where V,.., = \'.)(0) and we have
v,.., = .!.V,.., = .!.
,
(2.8)
{joo(N(S) - s)diP;(s) + r"" N(s)diP;(s) + ,}
',..,
=
10
~ {fUVh2k) - ,2 k )2- k + f:: Nh2- k )2 k + ,}
k=O
k=l
=
~ {~(~,h)
-
-yT')2'
+ ~ ~,h)2'}.
Furthermore, if γ = 1/2, then the sequence n_k = n_k(1/2) = 2^{k-1} + 1, k = 1, 2, ..., satisfies (2.4), while if 1/2 < γ ≤ 1, then the sequence

(2.9)    n_k = n_k(γ) = 2^{k-1} + a_2·2^{k-2} + ... + a_{k-1}·2 + a_k + 1,  k = 1, 2, ...,

satisfies (2.4), where a_2, a_3, ... are the binary digits in the dyadic expansion (2.1).
Martin-Löf's result in (1.3) is obtained by choosing n_k = n_k(1) = 2^k in (2.9) and using the third representation of V = W_1 = V_1 in (2.8).
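Assuming, as the constructions in and above (2.9) indicate, that condition (2.4) is the requirement n_k/2^{⌈Log n_k⌉} → γ, the sequence n_k(γ) can be generated numerically; for non-terminating expansions it coincides with ⌈γ·2^k⌉, which the following sketch (ours, for illustration only) uses:

```python
import math

def nk(gamma, k):
    """An integer in (2**(k-1), 2**k], so that ceil(Log nk) = k and
    nk/2**k -> gamma; for 1/2 < gamma <= 1 this realizes condition (2.4)."""
    assert 0.5 < gamma <= 1.0
    return math.ceil(gamma * 2 ** k)
```

Here nk(1, k) = 2^k recovers Martin-Löf's subsequence, while e.g. nk(0.75, 4) = 12 with 12/2^4 = 0.75.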
It is interesting to note that (2.7) and the general approach in [3], within the framework of which we work in the proofs, at once imply that if {n_k^{(l)}}_{k=1}^∞, l = 1, ..., r, are r different sequences satisfying (2.4) with γ = γ_l, l = 1, ..., r, respectively, then

(2.10)    W_{n_k^{(l)}} →_D W_{γ_l},  l = 1, ..., r,

where W_γ = (V_γ - γ Log γ)/γ for any of γ = γ_l, l = 1, ..., r, with V_γ given by any of the three representations in (2.8) in terms of the same Poisson process. For any t ∈ ℝ, we have E exp(itV_γ) = (E exp(itV))^γ, with the expression in (1.4) substituted. We see that W_γ, 1/2 ≤ γ ≤ 1, determined by (2.10) and explicitly given in (2.8), is a segment of an independent-increment or Lévy process. In fact, V_γ, 1/2 ≤ γ ≤ 1, is a segment of a special semi-stable process as described by Lévy [9; Section 58] and we will demonstrate below that

(2.11)
The above results show that the continuum many infinitely divisible types of distributions that partially attract the Petersburg distribution are given by the set {G_γ : 1/2 ≤ γ ≤ 1}, where G_γ(x) = P{W_γ ≤ x}, x ∈ ℝ, and the essential role of the parameter γ is in fact to connect the gaps in the special subsequence {2^k}_{k=1}^∞ of Martin-Löf. We plan to return to this problem, by providing a general premium formula that makes successive Petersburg games asymptotically fair, in a special subsequent note.
In order to get some additional feeling concerning the limiting distribution function G_γ of W_γ in (2.7), for each n = 1, 2, ... let L_n(γ) be the Lévy distance between G_γ and the approximating distribution function

Note that, in addition to (2.10), the general probabilistic approach in [3] and the details of the proofs below imply that if {n_k} satisfies (2.4) then for any integers l, m ≥ 0,

(2.12)    X_{n_k - j + 1, n_k}/n_k →_D 2^{-⌊Log(Y_j/γ)⌋}/γ,  jointly in j,

as k → ∞, and so the sum

    Σ_{j=1}^{m} 2^{-⌊Log(Y_j/γ)⌋}/γ

may be looked upon as the asymptotic contribution of the largest m gains to the limiting distribution of the total gain S_{n_k} in (2.5). The following result is a corollary to the Theorem in [1] after some straightforward but lengthier calculations not reproduced here.

THEOREM 2.3. For any γ > 0 and any positive number ρ < 1, L_n(γ) = o(n^{-ρ/2}) as n → ∞.
Now we turn to the proofs of the first two theorems. Throughout the paper, all the asymptotic relations are understood to take place as k → ∞ if not specified otherwise.
Proof of Theorem 2.1. According to the results in [3,2], the limiting behavior of centered and normalized finitely trimmed sums is by and large determined by the asymptotic behavior of the function

(2.13)    σ²(s,t) = ∫_s^{1-t} ∫_s^{1-t} (u ∧ v - uv) dQ(u) dQ(v)
                  = sQ²(s) + tQ²(1-t) + ∫_s^{1-t} Q²(u) du - { sQ(s) + tQ(1-t) + ∫_s^{1-t} Q(u) du }²

as s, t ↓ 0, where u ∧ v = min(u, v), and this well-known equality holds for any 0 < s < 1 - t < 1. Presently we need the function σ²(s) = σ²(s, s), 0 < s < 1/2, for which, substituting (1.5) and (1.6) into (2.13), we obtain by simple calculation that
    σ²(s) = 3·2^r - 2 - (r + 1)²  if  1/2^r ≤ s < 1/2^{r-1},  r ≥ 2,

or, what is the same,

    σ²(s) = 3·2^{⌈Log(1/s)⌉} - 2 - (⌈Log(1/s)⌉ + 1)²,  0 < s < 1/2.
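At the dyadic points s = 2^{-r} this closed form can be checked by elementary computation: only the upper Winsorization matters (the lower one replaces nothing, since Q(s) = 2 for s ≤ 1/2), and the Winsorized first and second moments are r + 1 and 3·2^r - 2. A sketch of the check (our own verification, not in the paper):

```python
def winsorized_var(r):
    """Variance of the Petersburg gain Winsorized at the upper tail of mass
    s = 2**(-r): values 2**j with probability 2**(-j) for j < r, and the cap
    2**r carrying the lumped tail mass 2**(1-r)."""
    dist = [(2 ** j, 2.0 ** -j) for j in range(1, r)] + [(2 ** r, 2.0 ** (1 - r))]
    m1 = sum(v * p for v, p in dist)       # equals r + 1
    m2 = sum(v * v * p for v, p in dist)   # equals 3 * 2**r - 2
    return m2 - m1 * m1

def sigma2(r):
    """The claimed closed form of sigma^2(s) at s = 2**(-r)."""
    return 3 * 2 ** r - 2 - (r + 1) ** 2
```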
Hence the asymptotic equality

(2.14)    σ(s) ~ √3 · 2^{⌈Log(1/s)⌉/2}  as  s ↓ 0

holds, and from the same calculation we also have

(2.15)    ∫_0^{1-s} Q(u) du = ⌈Log(1/s)⌉ + 1 - s·2^{⌈Log(1/s)⌉},  0 < s ≤ 1.

Thus for the sequence

(2.16)    a(n) = n^{1/2} σ(1/n),  n = 1, 2, ...,

we have by (2.14),

(2.17)    a(n_k)/n_k ~ (3·2^{⌈Log n_k⌉}/n_k)^{1/2}.
Introducing

(2.18)    α_k = α_{n_k} = ⌈Log n_k⌉ - Log n_k  and  β_s = Log s - ⌊Log s⌋,

where, obviously, 0 ≤ α_k, β_s < 1 for any k ≥ 1 and s > 0, we have

    ⌈Log(n_k/s)⌉ = ⌈Log n_k⌉ - ⌊Log s⌋      if 0 ≤ α_k + β_s < 1,
    ⌈Log(n_k/s)⌉ = ⌈Log n_k⌉ - ⌊Log s⌋ - 1  if 1 ≤ α_k + β_s < 2.

Whence and from (1.6) and (2.17),

(2.19)

for any fixed s > 0, where this function φ_{n_k}(s) plays the corresponding role in [3,2].
The assumption of the theorem and Theorem 5 in [3] imply that for each subsequence {n'_k} ⊂ {n_k} there is a further subsequence {n''_k} ⊂ {n'_k} such that both sequences Q(s/n''_k)/A_{n''_k} and a(n''_k)φ_{n''_k}(s)/A_{n''_k} of functions converge weakly (denoted in what follows by ⇒) on (0, ∞) to some finite functions φ_1 and φ_2, respectively, and a(n''_k)/A_{n''_k} → δ, where 0 ≤ δ < ∞. Since, necessarily, A_{n_k} → ∞, it follows from (1.5) that φ_1 ≡ 0. But, since

(2.20)

it is clear that if δ were zero then we would have φ_2 ≡ 0, and then by the same Theorem 5 in [3] the limiting random variable W(m_0) would be degenerate. Hence δ > 0 and, necessarily, φ_2 ≢ 0. Therefore, φ_{n''_k}(·) ⇒ δ^{-1}φ_2(·), and thus by (2.20) there is a further subsequence {n'''_k} ⊂ {n''_k} such that a(n'''_k)φ_{n'''_k}(·)/n'''_k ⇒ aφ_2(·) for some a > 0. If we now define W̌_{n'''_k}(m) as W_{n'''_k}(m) in the theorem upon replacing ⌈Log n'''_k⌉ by ⌈Log(n'''_k/(m+1))⌉, then by Theorem 4 in [3] this W̌_{n'''_k}(m) converges in distribution to a^{-1}W(m) + c(m) for each m ≥ 0, where in fact the non-degenerate W(m) is represented in a special form, taking into account (2.15) and the fact that ∫_0^{1/n_k} Q(u) du = 2/n_k → 0. Now by (2.20),

and hence for each m ≥ 0 we can choose a further subsequence {n''''_k} ⊂ {n'''_k} and a constant c(m), both depending on m, such that W_{n''''_k}(m) converges as stated. ∎
We are a little bit sloppy in the definition of φ_{n_k}(·) in (2.19). Let 0 < p_{n_k} < 1 be such that p_{n_k} ↓ 0 and n_k p_{n_k} → 0, and define φ_{n_k}(s) by (2.19) if 0 < s ≤ n_k - n_k p_{n_k}, while if s > n_k - n_k p_{n_k} then set φ_{n_k}(s) = -Q(p_{n_k})/a(n_k). Now φ_{n_k}(·) is a negative, non-decreasing function on the whole (0, ∞) with φ_{n_k}(0) = φ_{n_k}(0+) = -∞.
LEMMA 2.4. For a given subsequence {n_k}_{k=1}^∞ the functions φ_{n_k}(·) converge weakly to some finite function given on (0, ∞) if and only if condition (2.4) holds. In this case,

(2.21)    φ_{n_k}(·) ⇒ φ*_γ(·)

with the right-continuous function φ*_γ(·) taken from (2.6).
Proof. In the notation of (2.18), condition (2.4) is equivalent to

(2.22)    α_{n_k} → α := -Log γ,

in which case

(2.23)    2^{⌈Log n_k⌉}/n_k = 2^{α_{n_k}} → 2^α = 1/γ.

First we prove the necessity of (2.22). Suppose that weak convergence takes place but (2.22) is not true. Then for two subsequences {n'_k} and {n''_k} we have α'_k = α_{n'_k} → α̲ = lim inf α_k and α''_k = α_{n''_k} → ᾱ = lim sup α_k, where 0 ≤ α̲ < ᾱ ≤ 1. Clearly, we can find a continuity point s > 0 of the limiting function such that for β_s defined in (2.18) we have α̲ < 1 - β_s < ᾱ, and hence for all k large enough α'_k + β_s < 1 and α''_k + β_s > 1. Thus from (2.19) the two subsequences produce two different limits at s; equating the two limits we arrive at a relation between α̲ and ᾱ that cannot hold, which is impossible.

Now we consider sufficiency. Using (2.4), (2.22), and (2.23), from (2.19) we obtain the pointwise limit of φ_{n_k}(s): if 0 < α < 1, or equivalently, 1/2 < γ < 1, then the limit takes one form when β_s < 1 + Log γ and another when β_s > 1 + Log γ; if γ = 1, then a single formula results; and if γ = 1/2, then the two cases are β_s = 0 and β_s > 0. When γ = 1 or γ = 1/2, it is obvious that the limit can be written in the form claimed in (2.21). An elementary argument shows that this is also true when 1/2 < γ < 1. (Of course, if 1/2 < γ < 1 then there is no convergence in general at those s > 0 for which β_s = 1 + Log γ. These are the jump-points s = γ2^k, k = 0, ±1, ±2, .... See also (2.21).) ∎
Proof of Theorem 2.2. Before starting the actual proof, we note that for any sequence {n_k} by (2.14) and (2.20) we have

    σ²(h/n_k)/σ²(1/n_k) ~ 2^{⌈Log(n_k/h)⌉}/2^{⌈Log n_k⌉} ≤ 2/h,  h > 0,

and hence

(2.24)    lim_{h→∞} lim sup_{k→∞} σ(h/n_k)/σ(1/n_k) = 0.

Also, whatever the sequence {n_k} is like, it follows from (2.20) that any subsequence {n'_k} ⊂ {n_k} contains a further subsequence {n''_k} ⊂ {n'_k} such that (2.4) is satisfied along {n''_k} with some γ = γ_{{n''_k}}, 1/2 ≤ γ ≤ 1. Then by (2.17), a(n''_k)/n''_k → (3/γ)^{1/2}, and hence Lemma 2.4 applies to φ̄_{n_k}(·) = a(n_k)φ_{n_k}(·)/n_k. Since Q((s/n_k)+)/n_k → 0 for all s > 0, this last weak convergence, (2.24) and (2.15) imply, by an application of Theorem 1 in [2] (which is an augmented form of Theorem 1 in [3]), that for any m ≥ 0 in W_{n''_k}(m) of Theorem 2.1 we have

    W_{n''_k}(m) →_D (1/γ)V_{0,m}(0, φ*_γ, 0)
        := (1/γ){ ∫_{Y_{m+1}}^∞ (N(s) - s) dφ*_γ(s) - ∫_1^{Y_{m+1}} s dφ*_γ(s) + m φ*_γ(Y_{m+1}) - ∫_1^{m+1} φ*_γ(s) ds - φ*_γ(1) }.
Now if W_{n_k} = W_{n_k}(0) converges in distribution, then its subsequential limits are all of the form (1/γ)V_{0,0}(0, φ*_γ, 0), having an infinitely divisible distribution. Since this representation, being a special case of the general representation in Theorem 3 in [3], is unique, it follows that γ must be the same for all the above subsequences {n''_k}, and hence (2.4) is necessarily satisfied.

Suppose now (2.4). Then for any m ≥ 0, the above convergence of W_{n_k}(m) takes place along the original {n_k}. It follows then that the left side of (2.5) converges in distribution to

(2.25)    (1/γ)V_{0,m}(0, φ*_γ, 0) + 1 - (m+1)/γ,

and since by easy manipulation one can see that

(in fact, regardless of a meaningful function φ replacing the present φ*_γ) and, separating the cases γ = 1/2 and 1/2 < γ ≤ 1, that

(see (2.27) below), it is clear that the limit in (2.25) is the same as W_γ(m) in (2.5). Thus (2.5) holds, and since under (2.4),

(2.26)

the convergence in (2.7) and the first representation in (2.8) also follow. Furthermore, since the function φ*_γ in (2.6) can be written as

(2.27)

the second representation in (2.8) follows from the first by carrying out the integration.
To prove the last equation in (2.8), note that by simple computation the j-th partial sums of the second representation can be rearranged, by summation by parts, into corresponding partial sums over the independent increments [N(γ2^{k+1}) - N(γ2^k)] - γ2^k and [N(γ2^{-k+1}) - N(γ2^{-k})], together with the two boundary terms [N(γ2^j) - γ2^j]2^{-j} and N(γ2^{-j})2^j, for each j ≥ 1. Since the first boundary term goes to zero almost surely by the law of large numbers as j → ∞ and the second is almost surely zero for all j large enough, the last line of (2.8) follows upon letting j → ∞.

Finally, it is trivial that n_k(1/2) = 2^{k-1} + 1 satisfies (2.4) with γ = 1/2. Also, if 1/2 < γ ≤ 1, then it is plain that ⌈Log n_k(γ)⌉ = k for the sequence n_k(γ) in (2.9) and hence, in the notation of (2.1), n_k(γ)/2^k = s_k + 2^{-k} → γ. The theorem is completely proved. ∎
Proof of (2.11). From the third representation in (2.8) with γ = 1/2 we obtain a series for V_{1/2} in terms of the independent increments [N(2^k) - N(2^{k-1})], k = 0, ±1, ±2, ..., of the same Poisson process, and rearranging this series term by term gives exactly the right side of (2.11). ∎
3. MODERATELY AND HEAVILY TRIMMED SUMS

Let {n_k}_{k=1}^∞ and {m_{n_k}}_{k=1}^∞ be two subsequences of the positive integers such that

(3.1)    m_{n_k} → ∞  and  m_{n_k}/n_k → 0  as  k → ∞,

and consider the moderately trimmed sums

    S_{n_k}(m_{n_k}) = Σ_{j=1}^{n_k - m_{n_k}} X_{j,n_k}.

The first result is an analogue of Theorem 2.1 in the present setting.
THEOREM 3.1. If (3.1) is satisfied and for some constants A_{n_k} > 0 and C_{n_k} ∈ ℝ the sequence (S_{n_k}(m_{n_k}) - C_{n_k})/A_{n_k} converges in distribution to a non-degenerate random variable W, then for each subsequence {n'_k}_{k=1}^∞ ⊂ {n_k}_{k=1}^∞ there exist a further subsequence {n''_k}_{k=1}^∞ ⊂ {n'_k}_{k=1}^∞ and a constant a = a_{{n''_k}}, 0 < a < ∞, such that (n''_k/√(m_{n''_k}))/A_{n''_k} → a and

    W_{n''_k} →_D (1/a)W + c

as k → ∞, where c is some constant.

Again we see that it is enough to deal with centering and norming sequences of the given special form. Define now W_{n_k} as in the above theorem, replacing n''_k by n_k. Set ℓ_{n_k} = ⌈Log(n_k/m_{n_k})⌉ - Log(n_k/m_{n_k}).
THEOREM 3.2. For a given subsequence {n_k}_{k=1}^∞ of the positive integers and a sequence {m_{n_k}}_{k=1}^∞ satisfying (3.1) the sequence W_{n_k} converges in distribution as k → ∞ if and only if one of the following three mutually exclusive conditions holds:

(3.2)    √(m_{n_k})(2^{1-ℓ_{n_k}} - 1) → ∞  and  √(m_{n_k})(1 - 2^{-ℓ_{n_k}}) → ∞;

(3.3)    √(m_{n_k})(1 - 2^{1-ℓ_{n_k}}) → v  as  k → ∞,  for some  -∞ < v ≤ 0;

(3.4)    √(m_{n_k})(1 - 2^{-ℓ_{n_k}}) → u  as  k → ∞,  for some  0 ≤ u < ∞.

If (3.2) holds then

(3.5)

as k → ∞, where Z is a standard normal random variable. If (3.3) holds then, as k → ∞,

(3.6)

where (Z, Z_2) is a bivariate normal vector with mean vector zero, EZ² = EZ_2² = 1, and EZZ_2 = -√(2/3). If (3.4) holds then, as k → ∞,

(3.7)

where (Z, Z_2) is a bivariate normal vector with mean vector zero, EZ² = EZ_2² = 1, and EZZ_2 = -1/√3.
Furthermore, the pair of sequences (n_k, m_{n_k}), where m_{n_k} is an arbitrary sequence of positive integers such that m_{n_k} → ∞, m_{n_k}/n_k → 0 as k → ∞, and

where a_2, a_3, ... are the binary digits in the dyadic expansion (2.1) of any fixed number 1/2 < γ < 1, satisfies (3.2).

The pair (n_k, m_{n_k}) given for any k = 1, 2, ... by

where q_k and r_k are arbitrary positive integers such that

(3.8)    q_k → ∞,  r_k → ∞,  and  q_k - r_k → ∞  as  k → ∞,

and v_1, v_2, ... are the binary digits in the dyadic expansion of -v/√2 - ⌊-v/√2⌋, satisfies (3.3) with v < 0, and the pair (n_k, m_{n_k}) given by

    k = 1, 2, ...,

with q_k and r_k satisfying (3.8), satisfies (3.3) with v = 0.

Finally, with q_k and r_k satisfying (3.8), the pair (n_k, m_{n_k}) given for any k = 1, 2, ... by

where u_1, u_2, ... are the binary digits in the dyadic expansion of u - ⌊u⌋, satisfies (3.4) with u > 0, while the pair (n_k, m_{n_k}) given for any k = 1, 2, ... by

satisfies (3.4) with u = 0.

We note that when talking about a binary expansion of a number in (0, 1) in the above theorem we always use the convention following (2.1), but we also assume that all the binary digits of zero are zero.

It is interesting to point out the fact that the sequence B(n_k) in (3.5), for which by (2.20) we have

(3.9)

does not in general converge under (3.2). In fact, if

    0 < lim inf_{k→∞} ℓ_{n_k} ≤ lim sup_{k→∞} ℓ_{n_k} < 1,

then we always have (3.2).
The last result of this section is for heavily trimmed sums

    S_n(n - ⌊βn⌋) = Σ_{j=1}^{⌊βn⌋} X_{j,n},  where  1/2 < β < 1.

It is a special case of a half-sided version of Theorem 5 in [4], easily stated for an underlying distribution that is concentrated on the positive half line.
THEOREM 3.3. (i) If -Log(1-β) is not an integer, then

    n^{-1/2}{ Σ_{j=1}^{⌊βn⌋} X_{j,n} - nμ(β) } →_D σ_1(0,β) Z

as n → ∞, where Z is a standard normal random variable,

    μ(β) = 2 - ⌈Log(1-β)⌉ - 2(1-β)·2^{-⌈Log(1-β)⌉} > 0

and

    σ_1(0,β) = ( 6{2^{-⌈Log(1-β)⌉} - 1} + 2⌈Log(1-β)⌉ - ⌈Log(1-β)⌉² )^{1/2}.

(ii) If -Log(1-β) is an integer, then the same normalized sum converges in distribution, with a limiting random variable built from Z and max(0, -Z_2), where (Z, Z_2) is a bivariate normal vector with mean vector zero, EZ² = 1, EZ_2² = β, and

    EZZ_2 = -{2β + (1-β)Log(1-β)}/{1 + 2β - (1-β)(1 - Log(1-β))²}^{1/2},

and where σ_2(0,β) = ({1 + 2β - (1-β)(1 - Log(1-β))²}/(1-β))^{1/2}.
Part (i) of this theorem leads to an interesting modification of the Petersburg game. Suppose the casino and the player agree to play n consecutive Petersburg games so that the largest n - ⌊βn⌋ principal gains will not be paid to the player, where -Log(1-β) is not an integer. Since we have

    P{ Σ_{j=1}^{⌊βn⌋} X_{j,n} > nμ(β) } → 1/2  and  P{ Σ_{j=1}^{⌊βn⌋} X_{j,n} < nμ(β) } → 1/2

as n → ∞, the fair premium for the player to pay to the casino for playing this sequence of n games is nμ(β) if n is large enough. Of course the casino can also determine a different premium formula from part (i) of the theorem to raise unfairly its fair chance 1/2.
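Under the assumptions of part (i), the fair premium per game μ(β) is explicitly computable; a small sketch (our own, following the displayed formula for μ(β)):

```python
import math

def mu(beta):
    """mu(beta) = 2 - ceil(Log(1-beta)) - 2(1-beta)*2**(-ceil(Log(1-beta))),
    the fair per-game premium of Theorem 3.3(i); it requires 1/2 < beta < 1
    with -Log(1-beta) not an integer."""
    c = math.ceil(math.log2(1.0 - beta))  # a non-positive integer
    assert c != math.log2(1.0 - beta), "-Log(1-beta) must not be an integer"
    return 2 - c - 2 * (1 - beta) * 2.0 ** (-c)
```

For example μ(0.7) = 3 - 1.2 = 1.8 ducats per game: if the largest 30% of the gains are withheld, a stake of 1.8n is asymptotically fair over n games.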
Proof of Theorem 3.1. Presently we need the norming sequence a_{n_k}(m_{n_k}) instead of the one in (2.16), for which by (2.14),

(3.10)

Also, we use ℓ_k = ℓ_{n_k} in the formulation of Theorem 3.2, that is,

(3.11)    ℓ_k = ℓ_{n_k} = ⌈Log(n_k/m_{n_k})⌉ - Log(n_k/m_{n_k}),

and introduce the functions ψ_{n_k}, which play the role of ψ_{2,n_k} in [4], defined for |x| ≤ √(m_{n_k})/2 and kept constant for x > √(m_{n_k})/2. We can separate the values of these functions according to the three alternatives d_k(x) ≥ 1, 0 ≤ d_k(x) < 1, and d_k(x) < 0, where, with ℓ_k from (3.11),

    d_k(x) = ℓ_k + Log(1 - x/√(m_{n_k})),

and after showing that

    d_k(x) ≥ 1  if and only if  x < 0  and  √(m_{n_k})(1 - 2^{1-ℓ_k}) ≥ x,

and

    d_k(x) < 0  if and only if  x > 0  and  x > √(m_{n_k})(1 - 2^{-ℓ_k}),

we obtain by (1.6) and (2.14) the asymptotic form of ψ_{n_k}(x) for any fixed x ∈ ℝ:

(3.12) gives one expression when x < 0 and √(m_{n_k})(1 - 2^{1-ℓ_k}) ≥ x, another when x > 0 and √(m_{n_k})(1 - 2^{-ℓ_k}) < x, and 0 otherwise, where B(n_k) is as in (3.5), satisfying (3.9).
Using the assumption of the theorem and the converse half of Theorem 3 in [4] (as formulated in Section 3 of [6]; see also the end of Section 1 in [5]), we see that for each {n'_k} ⊂ {n_k} there is a further {n''_k} ⊂ {n'_k} such that ψ_{n''_k}(·) ⇒ ψ(·) for some finite function ψ(·) on ℝ and

    a_{n''_k}(m_{n''_k})/A_{n''_k} → δ,  0 < δ < ∞.

(The case δ = 0 can be ruled out again because then the limit would be degenerate.) Thus, using also (3.10), (3.9), (2.15), Theorem 1 in [4], and the convergence of types theorem, it is now routine to see that there exist a subsequence {n'''_k} ⊂ {n''_k} and constants 0 < a < ∞ and c ∈ ℝ such that all the statements of the theorem hold true. ∎
Again, before the proof of Theorem 3.2 we need a technical lemma, parallel to Lemma 2.4. The proof of this lemma uses ideas very similar to those in the proof of Lemma 2.4 but, of necessity, is lengthier and more complicated. In the interest of saving space, it is omitted here.
LEMMA 3.4. For given {n_k}_{k=1}^∞ and {m_{n_k}}_{k=1}^∞ satisfying (3.1), the functions ψ_{n_k} converge weakly to some non-decreasing left-continuous function ψ on ℝ, satisfying ψ(0) ≤ 0 and ψ(0+) ≥ 0, if and only if one of the conditions (3.2), (3.3) or (3.4) holds. If (3.2) holds then

(3.13)    ψ_{n_k}(x) → 0  for every  x ∈ ℝ.

If (3.3) holds then

(3.14)    ψ_{n_k} ⇒ ψ_v,

where ψ_v(x) is given by one expression for x ≤ v and by another for x > v. If (3.4) holds then

(3.15)    ψ_{n_k} ⇒ ψ_u,

where ψ_u(x) is given by one expression for x ≤ u and by another for x > u.
Proof of Theorem 3.2. First we consider the three sufficiency statements. If (3.2) holds, then the statement follows by a direct application of Theorem 1 in [4] on account of (3.13) of Lemma 3.4, using (3.10).

If (3.3) holds then by Lemma 3.4 we have (3.14). Using the norming factor a*_{n_k} = n_k/√(m_{n_k}) instead of a_{n_k}(m_{n_k}) ~ √6·n_k/√(m_{n_k}), the latter obtained from (3.10) and (3.14), the left side of (3.6) converges in distribution to

    √6 { Z + max(0, v + Z_2) }

by Theorem 1 in [4], where the covariance EZZ_2 is obtained by the fact that presently

(3.16)

which follows by integrating by parts and using formulae (1.6) and (2.15), and hence by (3.14) we obtain EZZ_2 = -√(2/3) in the limit.

Finally, if (3.4) holds then the proof is exactly the same as above, replacing the use of (3.14) by that of (3.15) and noting that presently we use a*_{n_k} instead of a_{n_k}(m_{n_k}) ~ √3·n_k/√(m_{n_k}), that the limit of the left side of (3.7) is, again by Theorem 1 in [4], now the corresponding expression with u in place of v, and that (3.16) is still true with √6 replaced by √3; hence by (3.15) we now obtain EZZ_2 = -1/√3.
Now we turn to necessity. Using (3.10), (3.9), and the converse result referred to in the proof of Theorem 3.1, it follows that if W_{n_k} converges in distribution then for any subsequence {n'_k} ⊂ {n_k} there is a further subsequence {n''_k} ⊂ {n'_k} such that for ψ_{n_k} in (3.12) we have ψ_{n''_k}(·) ⇒ ψ(·) on ℝ for some appropriate limiting function ψ. But then by Lemma 3.4 we must have exactly one of (3.2), (3.3), and (3.4) along {n''_k}. Applying now the already proved sufficiency results along {n''_k}, we get one of the three possible limiting behaviors. Since, obviously, no two of the three limiting random variables in (3.5), (3.6), and (3.7) can be equal in distribution for any choice of v ≤ 0 and u ≥ 0, we see that these subsequential limits must be the same, exactly one of those in (3.5), (3.6), or (3.7). Since the subsequence {n'_k} ⊂ {n_k} was arbitrary, the necessity statement follows.

The proof of the last statements of the theorem, that is, that the stated constructions for (3.2), (3.3), and (3.4) to hold are indeed valid, is elementary but altogether extremely tedious and lengthy. We challenge the interested reader to check what we state. ∎
4. EXTREME SUMS

Again and throughout this last section, {n_k}_{k=1}^∞ and {m_{n_k}}_{k=1}^∞ will be two subsequences of the positive integers satisfying (3.1), and we are interested in the extreme sums

    E_{n_k}(m_{n_k}) = S_{n_k} - S_{n_k}(m_{n_k}) = Σ_{j=n_k-m_{n_k}+1}^{n_k} X_{j,n_k} = Σ_{j=1}^{m_{n_k}} X_{n_k+1-j, n_k},

the sums of the largest m_{n_k} gains. Following the pattern of the preceding two sections, first we state the following.
THEOREM 4.1. If for some constants A_{n_k} > 0 and C_{n_k} ∈ ℝ the sequence (E_{n_k}(m_{n_k}) - C_{n_k})/A_{n_k} converges in distribution to a non-degenerate random variable R, then for each subsequence {n'_k}_{k=1}^∞ ⊂ {n_k}_{k=1}^∞ there exist a further subsequence {n''_k}_{k=1}^∞ ⊂ {n'_k}_{k=1}^∞ and a constant a = a_{{n''_k}}, 0 < a < ∞, such that n''_k/A_{n''_k} → a and

    R_{n''_k} := E_{n''_k}(m_{n''_k})/n''_k - T_{n''_k}(m_{n''_k}) →_D (1/a)R + c

as k → ∞, where

    T_{n_k}(m_{n_k}) = (2^{2⌈Log n_k⌉ - ⌊Log n_k⌋ - 1} - 2^{⌈Log n_k⌉ - 1})/n_k + ⌈Log n_k⌉ - ⌈Log(n_k/m_{n_k})⌉ + 2^{⌈Log(n_k/m_{n_k})⌉}·m_{n_k}/n_k

and c is some constant.
Again, let R_{n_k} be as above, replacing n''_k by n_k. Our last result is the following.

THEOREM 4.2. For a given subsequence {n_k}_{k=1}^∞ of positive integers and an arbitrary subsequence {m_{n_k}}_{k=1}^∞ of positive integers satisfying (3.1) the sequence R_{n_k} converges in distribution if and only if the sequence {n_k}_{k=1}^∞ satisfies condition (2.4) of Theorem 2.2 with some 1/2 ≤ γ ≤ 1. In this case the limit is W_γ, the limiting random variable appearing in (2.7).

Of course, the special constructions of {n_k} in and above (2.9) for condition (2.4) are still valid, and we emphasize that {m_{n_k}} satisfying (3.1) is completely arbitrary. The normalizing sequence {n_k} is the same for full sums and extreme sums. This is of course entirely natural having (2.12). The results in Section 2 show that the Petersburg distribution function F is stochastically compact (cf. [3]) and we also see that the largest gain X_{n,n} is also stochastically compact, with all possible limiting distributions given in (2.12). So the Petersburg game exhibits the phenomenon discussed in Corollary 12 in [3].

It is interesting to observe the generally non-convergent oscillatory term in the centering sequence in Theorem 4.2; if log stands for the natural logarithm, its exact behavior is easy to describe.
Proof of Theorem 4.1. Recalling the notation in (2.13), we now need to know the asymptotic behavior of σ²(1 - m_{n_k}/n_k, 1/n_k). Using (1.5), (1.6), (2.13), and both formulae in (2.15) yields

(4.1)

for any m_{n_k} satisfying (3.1). Hence, instead of the special norming sequence √(n_k)·σ(1 - m_{n_k}/n_k, 1/n_k) designed for extreme sums in [5], we can use a(n_k) belonging to whole sums, since by (2.17),

(4.2)

According to [5], here we need the following variants of the functions φ_{n_k} and ψ_{n_k} in the proofs of Theorems 2.1 and 3.1:

    φ̄_{n_k}(s) = { -Q((1 - s/n_k)-) + Q((1 - 1/n_k)-) }/a(n_k)  for 0 < s ≤ n_k - n_k p_{n_k},
    φ̄_{n_k}(s) = { -Q(p_{n_k}-) + Q((1 - 1/n_k)-) }/a(n_k)      for s > n_k - n_k p_{n_k},

where p_{n_k} is the sequence introduced before Lemma 2.4, and

    ψ̄_{n_k}(x) = (√(m_{n_k})/a(n_k)){ Q((1 - m_{n_k}/n_k)-) - Q((1 - m_{n_k}/n_k - x√(m_{n_k})/n_k)-) }  for |x| ≤ √(m_{n_k})/2,

with ψ̄_{n_k}(x) = ψ̄_{n_k}(-√(m_{n_k})/2) for x < -√(m_{n_k})/2 and ψ̄_{n_k}(x) = ψ̄_{n_k}(√(m_{n_k})/2) for x > √(m_{n_k})/2.

By (2.17), (2.20), (3.10), (3.12), and (4.2) it is obvious that there is a constant C > 0 such that |ψ̄_{n_k}(x)| ≤ C/√(m_{n_k}) for any x ∈ ℝ, and hence by (3.1),

(4.3)    ψ̄_{n_k}(x) → 0,  x ∈ ℝ.

On the other hand, with φ_{n_k} given in (2.19), we have

(4.4)

Also, by a computation similar to that leading to (4.1) we obtain that for any positive numbers l_{n_k} such that l_{n_k} → ∞ and l_{n_k}/m_{n_k} → 0 we have

(4.5)

and from (2.15) and (1.6),

(4.6)    (1/n_k)Q(1 - 1/n_k) + ∫_{1-m_{n_k}/n_k}^{1-1/n_k} Q(u) du = T_{n_k}(m_{n_k}).
After all these preliminaries we are ready now to prove the theorem. Using the assumption of it, (4.2), (4.3), (4.4), (2.19), and (2.20), an application of Theorem 2 in [5] shows that for any {n'_k} ⊂ {n_k} there is a further subsequence {n''_k} ⊂ {n'_k} such that

    ψ̄_{n''_k}(x) → 0,  x ∈ ℝ,    φ̄_{n''_k}(·) ⇒ φ(·)  on (0, ∞),    a(n''_k)/A_{n''_k} → δ,

where φ is a finite function and δ > 0. Hence an application of Theorem 1 in [5] and the convergence of types theorem provide a subsequence {n'''_k} ⊂ {n''_k} along which the statements hold. ∎
Proof of Theorem 4.2. First we note that by (4.1) and (4.5),

(4.7)

for any sequences {n_k} and {m_{n_k}} satisfying (3.1) and for any sequence {l_{n_k}} of positive numbers such that l_{n_k} → ∞ and l_{n_k}/m_{n_k} → 0. Also, for each fixed s > 0, by (4.4) and (2.19) we have that φ̄_{n_k}(·) converges weakly on (0, ∞) to some finite function if and only if condition (2.4) holds. In particular,

(4.8)    φ̄_{n_k}(·) ⇒ φ̄*_γ(·),

where φ̄*_γ is the left-continuous version of φ*_γ given in (2.6).
Assume now that R_{n_k} converges in distribution along the given {n_k}. Consider an arbitrary subsequence {n'_k} ⊂ {n_k}. Then, obviously, there is a further subsequence {n''_k} ⊂ {n'_k} such that condition (2.4) is satisfied for some 1/2 ≤ γ ≤ 1 along this {n''_k}. But then, using (4.3), (4.7), and (4.8) along {n''_k}, it follows from Theorem 1 in [5] that

(4.9)    R_{n''_k} →_D (1/γ)V_{0,0}(0, φ̄*_γ, 0) := (1/γ){ ∫_{Y_1}^∞ (N̄(s) - s) dφ̄*_γ(s) - ∫_1^{Y_1} s dφ̄*_γ(s) - φ̄*_γ(1) },

where

    N̄(s) = Σ_{j=1}^∞ I(Y_j ≤ s),  s ≥ 0,

is the right-continuous version of the left-continuous Poisson process given in (2.2), and where we use the convention that the symbol ∫_x^y means ∫_{[x,y)} whenever we integrate with respect to a left-continuous function, just as the convention that ∫_x^y means ∫_{(x,y]} whenever we integrate with respect to a right-continuous function has been tacitly used throughout the paper. Using these conventions it is easy to see that the two versions of the integrals coincide, and hence, using the special case m = 0 of the formula right below (2.25), we see that the limit in (4.9) is the same as (V_γ - γ Log γ)/γ = W_γ, where V_γ is as in (2.7) and (2.8). At the same time, just as in the proof of Theorem 2.2, we see that γ must be unique for all these subsequences {n''_k}, and since {n'_k} was an arbitrary subsequence of {n_k}, condition (2.4) must hold along {n_k}, and hence we also have

(4.10)

Conversely, if condition (2.4) is satisfied then, by (4.3), (4.7), and (4.8), Theorem 1 in [5] implies (4.10). But under (2.4) the first term of T_{n_k}(m_{n_k}), given in Theorem 4.1, converges to zero, and hence by (2.26) we obtain

which by adding and subtracting Log m_{n_k} in the centering sequence is clearly equivalent to the stated convergence in the theorem. ∎
REFERENCES

[1] S. CSÖRGŐ, An extreme-sum approximation to infinitely divisible laws without a normal component. In: Probability on Vector Spaces IV (S. Cambanis and A. Weron, eds.), pp. 000-000. Lecture Notes in Mathematics 0000, Springer, Berlin, 1989.

[2] S. CSÖRGŐ, A probabilistic approach to domains of partial attraction. Adv. in Appl. Math. To appear.

[3] S. CSÖRGŐ, E. HAEUSLER, and D. M. MASON, A probabilistic approach to the asymptotic distribution of sums of independent, identically distributed random variables. Adv. in Appl. Math. 9(1988), 259-333.

[4] S. CSÖRGŐ, E. HAEUSLER, and D. M. MASON, The asymptotic distribution of trimmed sums. Ann. Probab. 16(1988), 672-699.

[5] S. CSÖRGŐ, E. HAEUSLER, and D. M. MASON, The asymptotic distribution of extreme sums. Ann. Probab. To appear.

[6] S. CSÖRGŐ, E. HAEUSLER, and D. M. MASON, The quantile-transform-empirical-process approach to limit theorems for sums of order statistics. In: Sums, Trimmed Sums, and Extremes (M. G. Hahn, ed.), pp. 000-000. Birkhäuser, Basel, 1990. To appear.

[7] S. CSÖRGŐ and D. M. MASON, Intermediate sums and stochastic compactness of maxima. In preparation.

[8] W. FELLER, An Introduction to Probability Theory and its Applications, Vol. 1. Wiley, New York, 1950.

[9] P. LÉVY, Théorie de l'addition des variables aléatoires, 2nd ed. Gauthier-Villars, Paris, 1954.

[10] A. MARTIN-LÖF, A limit theorem which clarifies the 'Petersburg paradox'. J. Appl. Probab. 22(1985), 634-643.

[11] G. SHAFER, The St. Petersburg paradox. In: Encyclopedia of Statistical Sciences, Vol. 8 (S. Kotz, N. L. Johnson, and C. B. Read, eds.), pp. 865-870. Wiley, New York, 1988.
Sándor Csörgő
Bolyai Institute
University of Szeged
Aradi vértanúk tere 1
H-6720 Szeged
Hungary

Rossitza Dodunekova
Department of Probability and Statistics
University of Sofia
P. O. Box 979
1090 Sofia
Bulgaria