Journal of Economic Theory 84, 145–195 (1999)
Article ID jeth.1998.2479, available online at http://www.idealibrary.com
Limit Laws for Non-additive Probabilities and
Their Frequentist Interpretation*
Massimo Marinacci
Dipartimento di Scienze Economiche, Universita di Bologna, Piazza Scaravilli, 2,
40126 Bologna, Italy
marinacc@economia.unibo.it
Received March 17, 1997; revised September 15, 1998
In this paper we prove several limit laws for non-additive probabilities. In
particular, we prove that, under a multiplicative notion of independence and a
regularity condition, if the elements of a sequence {X_k}_{k≥1} are i.i.d. random
variables relative to a totally monotone and continuous capacity ν, then

ν({ω : ∫X_1 dν ≤ lim inf_n (1/n)∑_{k=1}^n X_k ≤ lim sup_n (1/n)∑_{k=1}^n X_k ≤ −∫(−X_1) dν}) = 1.

Since in the additive case ∫X_1 dν = −∫(−X_1) dν, this is an extension of the classic
Kolmogorov Strong Law of Large Numbers to the non-additive case. We argue
that this result suggests a frequentist perspective on non-additive probabilities.
Journal of Economic Literature Classification Numbers: C60, D81. © 1999 Academic Press
1. INTRODUCTION
1. INTRODUCTION
The literature on limit theorems in Probability Theory has always considered additive probabilities. In both objective and subjective settings,
additivity has been generally considered a fairly natural assumption. In a
frequentist set-up, the dominant objective interpretation of probability,
additivity is obviously compelling. But, in the subjective interpretation as
well, additivity has been for a long time an uncontroversial assumption,
justified as a rationality requirement (e.g., de Finetti's notion of coherence;
see de Finetti [2]).
However, in some areas this apparently quite natural property has been
abandoned in favor of non-additive probabilities. In subjective probability
* This paper was written while I was with the Economics Department of the University of
Toronto, Canada. I thank Bruno Bassan, Larry Epstein, Paolo Ghirardato, and, especially,
Itzhak Gilboa for helpful comments. An associate editor provided several suggestions that
substantially improved the exposition. I also gratefully acknowledge the financial support of
the Social Sciences and Humanities Research Council of Canada.
0022-0531/99 $30.00
Copyright 1999 by Academic Press
All rights of reproduction in any form reserved.
this happened because additivity prevents an effective analysis of the degree
of confidence that decision makers have in their probability assessments.
A number of papers in artificial intelligence, mathematical economics, and
statistics have used non-additive probabilities to study the implications
of this crucial feature of subjective assessments (see, e.g., Dempster [4, 5],
Gilboa [13], Huber [18, 19], Shafer [30], Schmeidler [26, 28], Wasserman
and Kadane [36], Seidenfeld and Wasserman [29], Wakker [33], and
Walley [34]).
A bit more surprisingly, non-additive probabilities have also been used
in a frequentist setting, especially in Quantum Mechanics. Indeed, as a consequence of the famous wave-particle duality, the probabilities that
describe quantum phenomena are generally non-additive, even though a
frequentist interpretation is usually attached to them (cf. Feynman and
Hibbs [11]). This has often been regarded as a puzzling feature of the
subject.
Our study of limit laws for non-additive probabilities has been motivated
by their importance in both subjective and objective settings. Here we focus
on the subjective side. The standard example used to explain the importance of non-additivity in subjective probability is the classic Ellsberg
Paradox, due to Ellsberg [10]. There are two urns: urn I, whose composition is known, say, 50 red and 50 black balls; and
urn II, in which the proportion of red and black balls is not known. In urn
I it is natural to consider as equiprobable both the drawing of a red ball
(event R_I) and that of a black ball (event B_I). For urn II, too,
considerations of symmetry suggest that the events R_II and B_II are equally
likely. In classic subjective probability, for both urns we would end up
with

P(R_I) = P(B_I) = P(R_II) = P(B_II) = 1/2.
However, it is natural to expect that the confidence which the decision
maker (DM) has in the two assignments is different, as he might well prefer
to base a decision on the probabilities P(R_I) and P(B_I) rather than on
P(R_II) and P(B_II). With non-additive probabilities it is possible to take
care of this problem in a rather simple way. For example, the decision
maker may retain the symmetry-based equiprobability judgment he has for
urn II, so that P(R_II) = P(B_II); however, he can now put P(R_II) = P(B_II)
= 0.3, and the difference 1 − P(R_II) − P(B_II) = 0.4 expresses the degree of
confidence he has in his own probability judgment. Additive beliefs represent the special case of full confidence. For instance, this is the case for
urn I, in which the assignment P(R_I) = P(B_I) = 1/2 not only expresses an
equiprobability judgement, but also a full confidence in it.
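On the two-outcome algebra {∅, {R}, {B}, {R, B}} this assignment is small enough to write out explicitly. The following sketch is our own illustration (the function names and the convexity check are not from the paper); it encodes the urn II numbers and verifies the 0.4 confidence gap:

```python
# Hypothetical encoding of the urn II capacity above: P(R_II) = P(B_II) = 0.3,
# with 1 - P(R_II) - P(B_II) = 0.4 left as the confidence gap.
from itertools import chain, combinations

omega = frozenset({"R", "B"})           # draw a red or a black ball

def capacity(A):
    """Non-additive belief over urn II (the numbers of the example above)."""
    A = frozenset(A)
    if A == frozenset():
        return 0.0
    if A == omega:
        return 1.0
    return 0.3                          # each single-color event gets 0.3

def is_convex(nu, events):
    """nu(A) + nu(B) <= nu(A | B) + nu(A & B) for all pairs of events."""
    return all(nu(A) + nu(B) <= nu(A | B) + nu(A & B) + 1e-12
               for A in events for B in events)

events = [frozenset(s) for s in chain.from_iterable(
    combinations(omega, r) for r in range(len(omega) + 1))]

assert is_convex(capacity, events)
print(round(1 - capacity({"R"}) - capacity({"B"}), 10))   # 0.4
```

Here the Möbius masses are m({R}) = m({B}) = 0.3 and m({R, B}) = 0.4, all nonnegative, so this ν is in fact totally monotone (a belief function in the sense of Section 2 below).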
1.1. Independence and the Frequentist Interpretation of Subjective
Probabilities
We adopt a multiplicative notion of independence, that is, two events A
and B are independent whenever P(A ∩ B) = P(A)P(B), and we consider
sequences of random variables that are i.i.d. according to such a rule.^1
Both in the standard theory and in this paper, the most important class
of phenomena that the i.i.d. case models are those exemplified by drawings
from a sequence of urns with the same balls' composition. In the standard
setting, the DM has an unambiguous (i.e., additive) subjective assessment
over the balls' proportions in the urns. In particular, in the i.i.d. case such
an assessment is identical for all urns. For example, in a sequence of urns
containing black and red balls, a possible probability assignment is
P(R) = 1/3 and P(B) = 2/3 for all urns.
In our more general setting, however, the DM may have different levels
of confidence in his beliefs over the balls' proportions, so that his overall
subjective assessment is represented by a non-additive probability. Again,
such an assessment is identical for all urns in the i.i.d. case. For instance,
in the above sequence of urns with black and red balls, a possible
non-additive probability assignment is P(R) = 1/4 and P(B) = 1/2 for all urns.
For our more general setting, we prove that, in contrast to the additive
case, the DM believes that the limit frequency of red balls belongs to some
interval, but is not able to pin down its value to a single number.
Specifically, let us consider an i.i.d. sequence {X_n}_{n≥1} of {0, 1}-valued
random variables, where {X_n = 0} and {X_n = 1} are, respectively, the
events ``draw a red ball from urn n'' and ``draw a black ball from urn n.''
Suppose P({X_n = 0}) = P({X_n = 1}) = 1/4 for all n ≥ 1. We prove that:
P(1/4 ≤ lim inf_n (1/n)∑_{k=1}^n X_k ≤ lim sup_n (1/n)∑_{k=1}^n X_k ≤ 3/4) = 1.
This shows that the DM has an unambiguous probability 1 belief that
the proportion of times a black ball is drawn is in the interval [1/4, 3/4]. However, unlike the additive case, the DM is not able to further pin down its
value.^2
^1 Parallel results can be proved if P̄(A ∩ B) = P̄(A)P̄(B), where P̄ is the dual probability
defined by P̄(A) = 1 − P(A^c).
^2 We will prove that, under additional conditions, it also holds that

P(1/4 < lim inf_n (1/n)∑_{k=1}^n X_k ≤ lim sup_n (1/n)∑_{k=1}^n X_k < 3/4) = 0.
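The displayed interval can be illustrated numerically under the multiple-priors reading developed in Section 7.1: the assignment P({X_n = 0}) = P({X_n = 1}) = 1/4 is compatible with any additive law giving the black ball a probability p ∈ [1/4, 3/4]. The following sketch is our own illustration, not part of the theorem's argument; the generating values of p, the seed, and the sample size are assumptions of the example:

```python
# Hypothetical illustration: whichever compatible additive law (p in [1/4, 3/4])
# actually generates the draws, the long-run frequency of black balls settles
# inside the interval [1/4, 3/4] of the displayed limit law.
import random

random.seed(0)

def sample_frequency(p, n=100_000):
    """Frequency of ones (black draws) in n i.i.d. Bernoulli(p) draws."""
    return sum(random.random() < p for _ in range(n)) / n

for p in (0.25, 0.4, 0.6, 0.75):      # a few compatible additive laws
    freq = sample_frequency(p)
    assert 0.25 - 0.01 <= freq <= 0.75 + 0.01
    print(p, round(freq, 3))
```

Under the capacity itself, of course, no single p is singled out; the theorem pins the limit frequency down only to the interval.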
This is a natural and fairly neat extension of the standard limit laws. In
fact, if the DM were fully confident in his beliefs, say P({X_n = 0}) =
P({X_n = 1}) = 1/2 for all n ≥ 1, then we would get back to the standard result

P(lim inf_n S_n = lim sup_n S_n = 1/2) = 1.
In standard additive subjective probability, an important feature of limit
laws is to provide a frequentist perspective on subjective probabilities.
Specifically, let us consider drawings from a sequence of urns with the same
balls' composition and suppose that the DM has unambiguous (i.e.,
additive) subjective probability assessments over the balls' proportions in
the urns. In particular, suppose that these assessments are identical for all
urns, so that we are in an i.i.d. setting. Then, by the classic limit laws, such
DM's subjective probability assessments are equal to his subjective assessment of the long run frequency with which the different balls' colors will
appear in a sequence of drawings, one for each urn in the sequence. Consequently, one can think of his subjective probabilities as his assessment of
such long run frequencies. This gives an interesting connection between two
different views of probability (cf., e.g., Kreps [21] chap. 11).
In a similar vein, our limit laws for non-additive probabilities suggest a
frequentist perspective on subjective non-additive probabilities. Let us consider a sequence of Ellsberg urns containing red and black balls and
assume that the DM has the same non-additive subjective probability
assessment P(R) on all urns. The limit laws we will prove in the paper
suggest that the non-additive probabilities P(R) and P̄(R) = 1 − P(B) (the
dual of P(R)) may be thought of as the DM's assessment of, respectively,
the lim inf and lim sup of the long run frequencies with which the red balls
will appear in a sequence of independent drawings from the given sequence
of urns.
As non-additive probabilities have often been seen as a further departure
of subjective probability from the frequentist tradition, this connection
seems especially valuable.
1.2. Related Literature and Organization
Related papers on limit laws for non-additive probabilities include
Walley and Fine [35] and Dow and Werlang [7]. In particular, a frequentist twist of non-additive probabilities was already noticed by Walley
and Fine [35]. However, our results are more general. Partly, this is the
case because our main results (Sections 5 and 6) are based on a much
weaker notion of independence, which only requires multiplicativity of the
marginals. Indeed, in the non-additive case different probabilities on the
product space may correspond to a given set of independent marginals,
and we therefore have an indeterminacy problem (see Section 7.1). Both
Walley and Fine [35] and Dow and Werlang [7] consider stronger
notions of independence that select a particular probability on the product
space for a given set of independent marginals.^3 In contrast, our definition
of independence requires only that the probability of a product set equals
the product of the marginal probabilities of the sets.
The paper is organized as follows. Section 2 contains some preliminaries.
Section 3 introduces the notion of independence used in our main results.
Section 4 presents the result on which our derivation hinges (Theorem 6).
Section 5 contains a non-additive version of Kolmogorov's strong law. This
is the main result of the paper. Section 6 presents a weak law, while Section
7 discusses some extensions. In particular, it shows which form our results
take for the stronger notion of independence based on the representation
of non-additive probabilities as lower envelopes of sets of additive
probabilities. The proofs and the more technical material are in the
appendix.
2. PRELIMINARIES
Let S be an algebra of subsets of a space Ω. A real-valued function
X: Ω → ℝ is said to be S-measurable if for every α ∈ ℝ the upper sets
{ω : X(ω) ≥ α} and {ω : X(ω) > α} belong to S. If S is a σ-algebra, this is
the standard notion of measurability. We denote by M the set of all
S-measurable functions.
A set function ν: S → [0, 1] is called a capacity if it satisfies the following
three properties:
1. ν(∅) = 0,
2. ν(A) ≤ ν(B) whenever A ⊆ B and A, B ∈ S,
3. ν(Ω) = 1.
A capacity ν is inner continuous if
4. lim_n ν(A_n) = ν(A) for every nondecreasing monotone sequence
{A_n}_{n=1}^∞ in S with ∪_{n=1}^∞ A_n = A.
It is outer continuous if
5. lim_n ν(A_n) = ν(A) for every nonincreasing monotone sequence
{A_n}_{n=1}^∞ in S with ∩_{n=1}^∞ A_n = A.
A capacity which is both inner and outer continuous is called
continuous.
^3 One of these notions will be discussed in Section 7.1.
A capacity ν is convex if
6. ν(A) + ν(B) ≤ ν(A ∪ B) + ν(A ∩ B) for all A, B ∈ S.
Convex capacities are often used in mathematical economics to model
decision makers who do not like ambiguity (uncertainty aversion; cf.
Schmeidler [28]).
A capacity ν is totally monotone if
7. ν(A_1 ∪ ⋯ ∪ A_n) ≥ ∑_{∅ ≠ I ⊆ {1, …, n}} (−1)^{|I|+1} ν(∩_{i∈I} A_i) for every
collection of subsets {A_1, …, A_n} ⊆ S.
Totally monotone capacities are often called belief functions and play an
important role in the theory of non-additive beliefs.^4 Inner measures are an
example of inner continuous and totally monotone capacities. In the
appendix we will provide an example of a continuous and totally
monotone capacity.
A capacity ν is a mass if
8. ν(A ∪ B) = ν(A) + ν(B) for any A, B ∈ S such that A ∩ B = ∅.
Every mass is a totally monotone capacity, with

ν(A_1 ∪ ⋯ ∪ A_n) = ∑_{∅ ≠ I ⊆ {1, …, n}} (−1)^{|I|+1} ν(∩_{i∈I} A_i)

for every collection of subsets {A_1, …, A_n} ⊆ S (inclusion-exclusion
formula).
A capacity ν is a measure if
9. ν(∪_{n=1}^∞ A_n) = ∑_{n=1}^∞ ν(A_n) for any sequence {A_n}_{n=1}^∞ in S of
pairwise disjoint sets such that ∪_{n=1}^∞ A_n ∈ S.
As is well-known, a set function ν: S → [0, 1] is a measure if and only
if it is a continuous mass.
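As a quick numerical check of the inclusion-exclusion formula, both sides can be evaluated for a mass on a small finite space. The three-point space and the point weights below are our own hypothetical choices:

```python
# Illustrative check that an additive mass satisfies the inclusion-exclusion
# formula with equality, on a hypothetical three-point space.
from itertools import combinations

weights = {0: 0.2, 1: 0.3, 2: 0.5}            # hypothetical point masses

def mass(A):
    return sum(weights[w] for w in A)

sets = [frozenset({0}), frozenset({0, 1}), frozenset({1, 2})]
union = frozenset().union(*sets)

# sum over nonempty I of (-1)^(|I|+1) * mass(intersection of the A_i, i in I)
incl_excl = sum((-1) ** (len(I) + 1) * mass(frozenset.intersection(*I))
                for r in range(1, len(sets) + 1)
                for I in combinations(sets, r))

assert abs(mass(union) - incl_excl) < 1e-12
print(round(incl_excl, 10))                   # 1.0, the mass of the union
```

For a totally monotone capacity that is not a mass, property 7 only requires ≥ here; equality for all collections is what characterizes additivity.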
Finally, a capacity ν is null-additive if
10. ν(A ∪ B) = ν(A) for any A, B ∈ S such that A ∩ B = ∅ and
ν(B) = 0.
It is worth noting that a convex capacity is null-additive if and only if
ν(A) = 0 implies ν(A^c) = 1 for every A ∈ S.
Null-additive capacities are an important class of capacities and Pap
[24] is mostly devoted to their study. From a decision-theoretic standpoint
null-additivity is closely connected to Savage-nullity.
^4 See Dempster [4, 5], Shafer [30]. For recent work on totally monotone capacities, see
Gilboa and Schmeidler [15], Marinacci [22], as well as the references they contain.
In the choice model based on non-additive probabilities axiomatized by Schmeidler [28], a set
A is Savage-null if and only if ν(A ∪ B) = ν(B) for all B ∈ S such that
A ∩ B = ∅ (see Schmeidler [28], p. 586). It is then easy to check that a
capacity is null-additive if and only if all its null sets are Savage-null.
We now introduce the core, an important notion in the theory of
capacities. The core C(ν) of a capacity ν: S → [0, 1] is the set

{μ: S → [0, 1] : μ is a mass and μ(A) ≥ ν(A) for all A ∈ S}.

In other words, C(ν) consists of all masses that dominate setwise the
capacity ν.
For convex capacities we always have C(ν) ≠ ∅ (see Kelley [20] and
Shapley [31]). Moreover, for capacities with nonempty cores there is a
simple characterization of continuity. In fact, it is easy to see that a
capacity ν: S → [0, 1] with C(ν) ≠ ∅ is continuous if and only if
lim_n ν(A_n) = ν(Ω) for any nondecreasing monotone sequence {A_n}_{n=1}^∞ in S
with ∪_{n=1}^∞ A_n = Ω.
The following known result will play an important role in what follows
(see Delbaen [3]).
Proposition 1. Let S be an algebra of subsets of a given space Ω, and
ν a convex capacity defined on S. Let C be a chain of subsets in S. There
exists a mass μ ∈ C(ν) such that μ(A) = ν(A) for all A ∈ C.
We close this section by introducing the notion of integral which we will
use in the sequel, developed in the seminal contribution of Choquet [1].
For the special case of inner and outer measures this integral was studied
by Vitali [32] (see Marinacci [23]).
Definition 2. Let ν: S → [0, 1] be a capacity, and let X ∈ M. The
Choquet integral of X relative to ν is defined as follows:

∫X dν = ∫_0^∞ ν({X ≥ α}) dα + ∫_{−∞}^0 [ν({X ≥ α}) − 1] dα.
If ν is additive, ∫X dν coincides with standard notions of integral.
A few papers have recently studied the Choquet integral.^5 In the sequel,
however, we only need the following elementary properties.
^5 See, e.g., Greco [16, 17], Schmeidler [27], and Zhou [37].
Proposition 3. Let S be an algebra of subsets of a given space Ω, ν a
capacity on S, and X_n, X, Y elements of M. Then
1. if X(ω) ≤ Y(ω) for all ω ∈ Ω, then ∫X dν ≤ ∫Y dν;
2. if α ≥ 0, then ∫αX dν = α∫X dν;
3. if ν is convex, then ∫(X + Y) dν ≥ ∫X dν + ∫Y dν.
Finally, Denneberg [6] provides a comprehensive introduction to non-additive set functions and integrals, and we refer the interested reader to
this book for more material on the subject.
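On a finite space, the integral of Definition 2 reduces to a finite sum along the decreasing rearrangement of X. The sketch below is our own illustration, not taken from the paper; it reuses the urn II belief function of the Introduction (ν({R}) = ν({B}) = 0.3) as the example capacity:

```python
# A minimal sketch of the Choquet integral of Definition 2 on a finite space.
# `nu` maps frozensets of outcomes to [0, 1] and is only assumed to be a
# capacity: monotone, nu(empty set) = 0, nu(Omega) = 1.

def choquet(X, nu):
    """Discrete Choquet integral: sum_i x_(i) * [nu(A_i) - nu(A_{i-1})],
    where x_(1) >= x_(2) >= ... and A_i collects the i largest outcomes."""
    order = sorted(X, key=X.get, reverse=True)
    total, prev, top = 0.0, 0.0, set()
    for w in order:
        top.add(w)
        level = nu(frozenset(top))
        total += X[w] * (level - prev)
        prev = level
    return total

# Example: the urn II belief function with nu({R}) = nu({B}) = 0.3.
nu = {frozenset(): 0.0, frozenset({"R"}): 0.3,
      frozenset({"B"}): 0.3, frozenset({"R", "B"}): 1.0}.get
X = {"R": 1.0, "B": 0.0}                      # indicator of a red draw
print(choquet(X, nu))                         # 0.3, the lower expectation
print(-choquet({w: -x for w, x in X.items()}, nu))   # 0.7 = 1 - nu({B})
```

The two printed values are exactly the ∫X dν and −∫(−X) dν that bracket the limit frequencies in the theorems below.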
3. INDEPENDENCE AND REGULARITY
In this section we will introduce independence in our set-up and present
a technical condition, regularity, used in the sequel.
Let S be an algebra. The function X is called a random variable if it is
S-measurable, i.e., X ∈ M. We denote by B_0 the algebra generated by the
open sets of the real line. We set

A(X) = {{ω : X(ω) ∈ B} : B ∈ B_0},

that is, A(X) is the smallest algebra in Ω that makes X a random variable.
Next we introduce independence in our set-up.
Definition 4. Let {X_k}_{k=1}^n be a finite sequence in M, and let ν be a
capacity on S. We say that the random variables {X_k}_{k=1}^n are independent
relative to ν if

ν(∩_{k=1}^n A_k) = ∏_{k=1}^n ν(A_k)

whenever A_k ∈ A(X_k) for 1 ≤ k ≤ n. Moreover, if {X_k}_{k≥1} is an infinite
sequence of random variables, we say that the random variables {X_k}_{k≥1} are
independent if every finite subclass of {X_k}_{k≥1} consists of independent
random variables.
In the sequel we will often need the following technical assumption:
Definition 5. Let {X_k}_{k≥1} be a sequence in M, and ν a capacity on S.
Let ν_k be the restriction of ν to A(X_k). The random variables {X_k}_{k≥1} are
regular relative to ν if, for any α, β ∈ ℝ with α < β, we have

ν_k({ω : X_k(ω) ≥ β}) = ν_k({ω : X_k(ω) ≥ α})

whenever ν_k({ω : α ≤ X_k(ω) < β}) = 0, and

ν_k({ω : X_k(ω) ≥ α}) = ν_k({ω : X_k(ω) > α})

whenever ν_k({ω : X_k(ω) = α}) = 0.
If each ν_k were additive, then every sequence of random variables would
be regular. It is easy to check that this is also the case if each ν_k were null-additive, a property introduced in Section 2. This is a very convenient case
because regularity is, in general, a condition that has to be checked for each
given sequence of random variables. It is also worth noting that monotonicity of ν_k already implies that
ν_k({ω : X_k(ω) ≥ α}) ≠ ν_k({ω : X_k(ω) > α}) for at most a countable number
of α ∈ ℝ.
The fact that regularity always holds whenever the marginals ν_k are null-additive capacities is especially interesting. Capacities that are strictly
positive on every non-empty subset of A(X_k), i.e., ν_k(A) > 0 for every
nonempty A ∈ A(X_k), are a simple, but important, example of null-additive capacities
ν_k. For instance, in Bernoulli trials we have A(X_k) = {∅, A, A^c, Ω}, where
A = {ω : X_k(ω) = 0} and A^c = {ω : X_k(ω) = 1}. Here it is natural to assume
that ν_k(A) > 0 and ν_k(A^c) > 0. Such a ν_k is null-additive. Distortions are
another example of null-additive capacities. In fact, let μ_k be a mass defined
on A(X_k) and f: [0, 1] → [0, 1] a strictly increasing function such that
f(0) = 0 and f(1) = 1. Suppose that ν on S is such that ν_k(A) = f(μ_k(A))
for all A ∈ A(X_k) and all k ≥ 1 (e.g., this is the case if ν itself is defined by
f(μ(A)) for all A ∈ S, where μ is a mass on S). It is easy to see that ν_k is
null-additive.
Finally, it is easy to check that on finite algebras every convex capacity
can be approximated setwise, as closely as wanted, with a convex null-additive capacity. This shows that regularity is a mild condition on finite
algebras (for example, A(X_k) is finite whenever X_k is finite-valued).
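The distortion construction just described is easy to check on a finite algebra. In the sketch below (our own illustration), μ is the uniform mass on a hypothetical four-point space and f(t) = t², which is strictly increasing with f(0) = 0 and f(1) = 1:

```python
# Hypothetical distortion capacity nu(A) = f(mu(A)) on a four-point space,
# with mu uniform and f(t) = t**2 strictly increasing, f(0) = 0, f(1) = 1.
from itertools import chain, combinations

omega = frozenset(range(4))

def mu(A):                              # uniform additive mass
    return len(A) / len(omega)

def nu(A):                              # distorted (non-additive) capacity
    return mu(frozenset(A)) ** 2

events = [frozenset(s) for s in chain.from_iterable(
    combinations(omega, r) for r in range(len(omega) + 1))]

# Null-additivity: nu(B) = 0 and A, B disjoint imply nu(A | B) = nu(A).
# Since f is strictly increasing, only the empty set is nu-null here.
for A in events:
    for B in events:
        if nu(B) == 0 and not (A & B):
            assert nu(A | B) == nu(A)
print("null-additivity verified on all", len(events), "events")
```

This ν also happens to be convex, since the distortion f is convex (a standard fact for distortions of additive measures); null-additivity, however, needs only that f be strictly increasing.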
4. A KEY THEOREM
The next theorem will be a key ingredient in the proofs of our limit
results and it is also of interest in itself. It says that for any sequence of
regular and independent random variables, we can always find an additive
set function that preserves independence and expected values.
This result is important for our purposes because through it one may
hope to obtain in a non-additive setting several results from classical probability theory, such as the limit laws proved later. Let σ(X_k) denote the
σ-algebra {{ω : X_k(ω) ∈ B} : B ∈ B}, where B is the Borel σ-algebra of the
real line. That is, σ(X_k) is the smallest σ-algebra in Ω that makes X_k a
random variable.
Theorem 6. Let Ω be a compact space, and ν a convex and continuous
capacity on S. Let {X_k}_{k≥1} be a sequence of regular and independent
random variables relative to ν. Suppose each random variable is continuous
on Ω. Then there exists a measure μ defined on σ(X_1, …, X_k, …) such that
(i) {X_k}_{k≥1} is a sequence of independent random variables relative to μ;
(ii) ∫X_k dν = ∫X_k dμ for all k ≥ 1.
Remark. We need the topological assumptions in order to establish the
countable additivity of μ. In the appendix we prove a weaker form of
Theorem 6 in which no topological assumptions are made, but μ is then
only finitely additive (Lemma 18).
4.1. Example: Bernoullian Sequences
In this subsection we study an interesting case in which the topological
assumptions of Theorem 6 become superfluous. To this end we introduce
the following definition, in which we consider only finite-valued random
variables (that is, the range R(X_k) of each X_k is finite).
Definition 7. A sequence {X_k}_{k≥1} of finite-valued, independent random
variables on (Ω, S, ν) is called Bernoullian if for each sequence {α_k}_{k≥1},
with α_k ∈ R(X_k) for all k ≥ 1, we have

∩_{k≥1} {ω : X_k(ω) = α_k} ≠ ∅.

This assumption just requires that each sequence of values that the
random variables {X_k}_{k≥1} may take on is possible. For example, let
{X_k}_{k≥1} be a sequence of two-valued random variables, with X_k(ω) ∈
{0, 1} for every ω ∈ Ω. For i_k ∈ {0, 1}, set A_k^{i_k} = {ω : X_k(ω) = i_k}. According
to this notation, A_k^0 = {ω : X_k(ω) = 0} and A_k^1 = {ω : X_k(ω) = 1}. The
algebra A(X_k) consists of {∅, A_k^0, A_k^1, Ω}. The sequence is Bernoullian if
∩_{k=1}^∞ A_k^{i_k} ≠ ∅ for each infinite sequence {i_k}_{k≥1} ∈ {0, 1}^ℕ. In other words,
every sequence of successes and failures is possible.
For Bernoullian sequences the following nontopological version of
Theorem 6 holds.^6
^6 The continuity of the capacity ν is not needed here because Bernoullian sequences consist
of finite-valued random variables. A similar observation applies to Corollary 12 below.
Corollary 8. Let ν be a convex capacity on S. Let {X_k}_{k≥1} be a
Bernoullian sequence of regular and independent random variables relative
to ν. Then there exists a measure μ defined on σ(X_1, …, X_k, …) such that
(i) {X_k}_{k≥1} is a sequence of independent random variables relative to μ;
(ii) ∫X_k dν = ∫X_k dμ for all k ≥ 1.
This result is a corollary of Theorem 6. This might seem a bit surprising
because no topology is involved in Corollary 8. However, we can endow
the space Ω with an auxiliary compact topology under which every random
variable in a Bernoullian sequence is continuous. This topology, denoted
by τ(A), has as basis the algebra A(X_1, …, X_n, …) itself, that is, τ(A) consists of all sets that are unions (finite or infinite) of subsets belonging to
A(X_1, …, X_n, …). Using τ(A) we can prove the following interesting
lemma. Corollary 8 is then a simple consequence of it and Theorem 6.
Lemma 9. Let {X_k}_{k≥1} be a Bernoullian sequence in (Ω, S, ν). The
topology τ(A) is compact and every random variable in the sequence is
continuous with respect to τ(A).
5. STRONG LAW OF LARGE NUMBERS
Theorem 6 is a powerful result. Using it we can obtain in the non-additive case results analogous to the standard limit theorems for independent random variables. Among them, we look at Kolmogorov's Strong Law of
Large Numbers, the most famous limit law.
Definition 10. Let ν be a capacity defined on a σ-algebra S. A sequence
{X_k}_{k≥1} is a sequence of r.i.i.d. random variables relative to ν if they are
independent, identically distributed, and if both {X_k}_{k≥1} and {−X_k}_{k≥1}
are regular relative to ν.
It is easy to check that if each ν_k is null-additive on A(X_k) then both
{X_k}_{k≥1} and {−X_k}_{k≥1} are regular relative to ν.
We can now state our main result.
Theorem 11. Let Ω be a compact space, and ν a totally monotone and
continuous capacity on a σ-algebra S of subsets of Ω. Let {X_k}_{k≥1} be a
sequence of continuous r.i.i.d. random variables relative to ν such that both
E_ν(X_1) and E_ν(−X_1) exist. Set S_n = (1/n)∑_{i=1}^n X_i. Then
(i) ν({ω : E_ν(X_1) ≤ lim inf_n S_n(ω) ≤ lim sup_n S_n(ω) ≤ −E_ν(−X_1)}) = 1,
(ii) ν({ω : E_ν(X_1) > lim inf_n S_n(ω)}) = 0,
(iii) ν({ω : −E_ν(−X_1) < lim sup_n S_n(ω)}) = 0.
Remark. In the additive case E_ν(X_1) = −E_ν(−X_1), and every sequence
of random variables is regular.
The important case of Bernoullian sequences is considered in the following corollary, whose simple proof is omitted. For this case the topological
assumptions are superfluous.
Corollary 12. Let ν be a totally monotone capacity on a σ-algebra S of
subsets of Ω. Let {X_k}_{k≥1} be a Bernoullian sequence of r.i.i.d. random
variables relative to ν such that both E_ν(X_1) and E_ν(−X_1) exist. Then
(i) ν({ω : E_ν(X_1) ≤ lim inf_n S_n(ω) ≤ lim sup_n S_n(ω) ≤ −E_ν(−X_1)}) = 1,
(ii) ν({ω : E_ν(X_1) > lim inf_n S_n(ω)}) = 0,
(iii) ν({ω : −E_ν(−X_1) < lim sup_n S_n(ω)}) = 0.
6. A WEAK LAW
We now prove a version of the weak law of large numbers. This weaker
result requires a weaker condition on ν: we no longer need total
monotonicity; convexity is enough.
Theorem 13. Let Ω be a compact space, and ν a convex and continuous
capacity on a σ-algebra S of subsets of Ω. Let {X_k}_{k≥1} be a sequence of
continuous r.i.i.d. random variables relative to ν such that both E_ν(X_1) and
E_ν(−X_1) exist. Then, for every ε > 0 we have
(i) lim_n ν({ω : E_ν(X_1) − ε ≤ S_n(ω) ≤ −E_ν(−X_1) + ε}) = 1,
(ii) lim_n ν({ω : S_n(ω) ≥ E_ν(X_1) + ε}) = 0,
(iii) lim_n ν({ω : S_n(ω) ≤ −E_ν(−X_1) − ε}) = 0.
7. EXTENSIONS
7.1. Controlled Capacities
The notion of independence we have been using is rather weak. To see
why, let us consider two algebras A(X_k) and A(X_{k'}). Our independence
condition simply requires multiplicativity on the sets A_k ∩ A_{k'}, i.e.,
ν(A_k ∩ A_{k'}) = ν(A_k)ν(A_{k'}) for A_k ∈ A(X_k) and A_{k'} ∈ A(X_{k'}). We do not ask
anything about the sets of A(X_k, X_{k'}) which are not of the form A_k ∩ A_{k'}.
This is an especially attractive feature of our results. In fact, while in the
additive case there is a unique mass compatible with given values of
ν(A_k ∩ A_{k'}), this is no longer true for capacities. In other words, in the
additive case, once we know the values of ν on A(X_k) and A(X_{k'}) (i.e., its
marginals on A(X_k) and A(X_{k'})), by independence it is then possible to
determine uniquely its value on the entire algebra A(X_k, X_{k'}). This is not
true for capacities, where different values of ν on A(X_k, X_{k'}) may be
compatible with given independent marginals on A(X_k) and A(X_{k'}). Thus,
an important feature of our results is their robustness to this indeterminacy.
Still, one might wonder if we can strengthen our results by imposing
some specific link between the marginals and the overall capacity on
A(X_k, X_{k'}). In the literature several possible links have been discussed.^7
Among them, we focus on a quite popular alternative, motivated by the
characterization of convex capacities as lower envelopes of their cores.
Specifically, for every convex capacity ν defined on an algebra it holds
that

ν(A) = min_{μ ∈ C(ν)} μ(A)    (1)
for all sets A in the given algebra. This result provides an interesting
perspective on &. In fact, as we discussed in the introduction, capacities are
used to model ambiguity in beliefs. Equality (1) shows that we can also
model ambiguity using sets of additive probabilities. The difference between
a standard decision maker and one that perceives ambiguity in his beliefs
is that while the former represents his beliefs with a single additive probability, the latter uses a set of them. In particular, the more ambiguous his
beliefs are, the larger such a set is. This view has been axiomatized by
Gilboa and Schmeidler [14] in the so-called multiple priors model.
As to the indeterminacy issue we were discussing, the multiple priors
approach suggests looking at all uniquely determined masses μ generated
by independent marginals μ_k ∈ C(ν_k) and μ_{k'} ∈ C(ν_{k'}) and then requiring
that the value assigned by ν to a set A ∈ A(X_k, X_{k'}) be equal to their lower
envelope. Formally,
Definition 14. Let ν be a capacity on S, and {X_k}_{k≥1} a sequence of
random variables on S. Let

Γ = {μ : μ is a mass on S such that μ(∩_{k∈I} A_k) = ∏_{k∈I} μ_k(A_k) for all finite
index sets I, where μ_k ∈ C(ν_k) and A_k ∈ A(X_k) for each k ∈ I}.    (2)

The capacity ν is said to be controlled if for each A ∈ S we have

ν(A) = min_{μ ∈ Γ} μ(A).

^7 See Eichberger and Kelsey [9] and Ghirardato [12] for recent work on this and related
subjects, as well as the references they contain.
This is, for example, the approach taken by Walley and Fine [35], who
mostly study controlled capacities.
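On a small product space, the lower envelope in Definition 14 can be approximated directly. The sketch below is our own illustration (the grid search and the marginal capacities ν_k({0}) = ν_k({1}) = 1/4 are assumptions of the example, not constructions from the paper):

```python
# Hypothetical worked example of Definition 14 with two Bernoulli coordinates.
# Each marginal capacity has nu_k({0}) = nu_k({1}) = 1/4, so its core is the
# set of additive laws with p = Pr(1) in [1/4, 3/4]; Gamma consists of the
# product masses, and the controlled capacity is their lower envelope.
from itertools import product

outcomes = list(product((0, 1), (0, 1)))       # values of (X_1, X_2)

def product_mass(p, q):
    return {(i, j): (p if i else 1 - p) * (q if j else 1 - q)
            for i, j in outcomes}

def controlled(A, grid=101):
    """min over p, q in [1/4, 3/4] of mu_{p,q}(A) (grid-search sketch)."""
    ps = [0.25 + 0.5 * t / (grid - 1) for t in range(grid)]
    best = 1.0
    for p in ps:
        for q in ps:
            m = product_mass(p, q)
            best = min(best, sum(m[w] for w in A))
    return best

A = [(1, 0), (1, 1)]                           # the event {X_1 = 1}
B = [(0, 1), (1, 1)]                           # the event {X_2 = 1}
AB = [(1, 1)]                                  # {X_1 = 1} & {X_2 = 1}
# On product sets the lower envelope is multiplicative, as in Definition 4:
print(round(controlled(A), 6), round(controlled(B), 6),
      round(controlled(AB), 6))                # 0.25 0.25 0.0625
```

On non-product sets, by contrast, the marginals alone leave the value indeterminate; the controlled condition resolves the indeterminacy by selecting the lower envelope there as well.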
With controlled capacities it is possible to obtain interesting
strengthenings of our results. For example, the strong law of large numbers
takes the following form.
Theorem 15. Let Ω be a compact space, and ν a convex, continuous, and
controlled capacity on a σ-algebra S of subsets of Ω. Let {X_k}_{k≥1} be a
sequence of continuous r.i.i.d. random variables relative to ν such that
both E_ν(X_1) and E_ν(−X_1) exist. Then
(i) ν({ω : E_ν(X_1) ≤ lim inf_n S_n(ω) ≤ lim sup_n S_n(ω) ≤ −E_ν(−X_1)}) = 1;
(ii) ν({ω : lim inf_n S_n(ω) < E_ν(X_1)}) = ν({ω : −E_ν(−X_1) < lim sup_n S_n(ω)}) = 0;
(iii) ν({ω : E_ν(X_1) < lim inf_n S_n(ω) ≤ lim sup_n S_n(ω) < −E_ν(−X_1)}) = 0;
(iv) ν({ω : E_ν(X_1) ≠ lim inf_n S_n(ω)}) = 0 and ν({ω : −E_ν(−X_1) ≠ lim sup_n S_n(ω)}) = 0;
(v) if ν is null-additive, then
ν({ω : E_ν(X_1) = lim inf_n S_n(ω)}) = 1 and ν({ω : −E_ν(−X_1) = lim sup_n S_n(ω)}) = 1.
The novel parts relative to Theorem 11 are (iii)–(v). It is worth noting
that only convexity is used; total monotonicity is no longer needed.
More interestingly, however, with controlled capacities it is possible to
establish a central limit theorem. To state and prove it we need some notation. In the standard additive set-up, we look at the limit behavior of the
random variable (nS_n − nE_μ(X_n))/σn^{1/2}, where σ is the standard deviation.
In our context, we set:
1. γ_n² = E_ν(X_n²) − E_ν(X_n)²,
2. ρ_n² = −E_ν(−X_n²) − E_ν(−X_n)²,
3. Y_n = (nS_n − nE_ν(X_n))/γ_n n^{1/2},
4. Z_n = (nS_n − n(−E_ν(−X_n)))/ρ_n n^{1/2}.
Notice that if ν were additive, then γ_n² = ρ_n² = Var_ν(X_n), E_ν(X_n) =
−E_ν(−X_n), and so Y_n = Z_n = (nS_n − nE_ν(X_n))/σn^{1/2}. Furthermore, here in
general γ_n² ≠ Var_ν(X_n).
We can now state the theorem. N(·) is the cumulative distribution
function of a standard normal distribution.
Theorem 16. Let Ω be a compact space, and ν a convex, continuous, and
controlled capacity on a σ-algebra S of subsets of Ω. Let {X_k}_{k≥1} be a
sequence of continuous r.i.i.d. nonnegative random variables relative to ν
such that both E_ν(X_1²) and E_ν(−X_1²) exist. Then
(i) lim_n ν({ω : Z_n ≤ α}) = N(α),
(ii) lim_n ν({ω : Y_n > α}) = 1 − N(α).
Remark. If ν were additive, this result would reduce to a standard
Central Limit Theorem. However, the result is weaker than its additive
counterpart because, given a random variable X, the knowledge of the
value of ν on the sets {ω : X(ω) ≤ α} and {ω : X(ω) > α} is not enough to
characterize the distribution of ν on the entire σ-algebra σ(X).
We close this subsection with an interesting property of sums and products of random variables that are independent relative to a controlled capacity. We omit the proof of this result, which is a simple consequence of Lemma 22.

Proposition 17. Let $\nu$ be a controlled capacity on a $\sigma$-algebra $S$ of subsets of $\Omega$, and $\{X_k\}_{k=1}^n$ a finite sequence of bounded, independent, and regular random variables relative to $\nu$. If each $\nu_k$ is convex on $A(X_k)$, then

(i) $E_\nu(X_1 \cdots X_n) = E_\nu(X_1) \cdots E_\nu(X_n)$,

(ii) $E_\nu(X_1 + \cdots + X_n) = E_\nu(X_1) + \cdots + E_\nu(X_n)$.

Notice, in particular, how additivity arises under independence, which is the polar case of comonotonicity. Interestingly, Ghirardato [12] has shown that a similar result can be derived in a rather different approach.
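The comonotonic pole of this contrast is easy to check numerically: Choquet expectations are additive on comonotone variables such as $X$ and $X^2$ for any capacity. A small sketch (our own example, with $\nu = \mu^2$ on a four-point space; nothing here is from the paper itself):

```python
def choquet(values, capacity, states):
    # Discrete Choquet integral via the chain of upper level sets.
    order = sorted(states, key=lambda s: values[s])
    xs = [values[s] for s in order]
    total = xs[0]
    for i in range(1, len(order)):
        total += (xs[i] - xs[i - 1]) * capacity(frozenset(order[i:]))
    return total

states = [0, 1, 2, 3]
nu = lambda A: (len(A) / 4) ** 2        # nu = mu^2 with mu uniform
X = {0: 1.0, 1: 2.0, 2: 3.0, 3: 4.0}
X2 = {s: X[s] ** 2 for s in states}     # comonotone with X

lhs = choquet({s: X[s] + X2[s] for s in states}, nu, states)
rhs = choquet(X, nu, states) + choquet(X2, nu, states)
print(lhs, rhs)   # 6.25 6.25: Choquet is comonotonically additive
```

Proposition 17 says the same additivity reappears at the opposite pole, under independence and the stated convexity conditions.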
7.2. Product Spaces
In this paper we have been working in a setting with a fixed probability
space, whose underlying probability was non-additive. Walley and Fine
[35] and Dow and Werlang [7] have, in contrast, used an alternative setting in which one starts from sequences of random variables defined on distinct probability spaces. We preferred to work with a given probability
space because it is, in our opinion, a setting better suited to obtain general
results that hold regardless of which particular procedure one may choose
to deal with the indeterminacy problem mentioned before.
In any event, all the results we have proved in our setting with a fixed
probability space hold in the alternative setting with distinct probability
spaces. We omit their statements in this different setting as they are basically translations of the original results we stated before. The only significant difference is that, in the setting with distinct probability spaces, the
assumption of regularity becomes superfluous.
8. APPENDIX: PROOFS AND RELATED ANALYSIS
8.1. Section 4
In order to prove Theorem 6, we first prove the following lemma which
is also of interest in itself. Recall that by a mass we mean a finitely additive
set function.
Lemma 18. Let $\nu$ be a convex capacity on $S$. Let $\{X_k\}_{k \ge 1}$ be a sequence of regular and independent random variables relative to $\nu$. Then there exists a mass $\mu$ defined on $A(X_1, \ldots, X_k, \ldots)$ such that

(i) $\{X_k\}_{k \ge 1}$ is a sequence of independent random variables relative to $\mu$;

(ii) $\int X_k \, d\nu = \int X_k \, d\mu$ for all $k \ge 1$.

Remark. For this theorem we need only the first part of the definition of regularity, i.e., that for any $\alpha, \beta \in \mathbb{R}$ with $\alpha < \beta$ we have

$$\nu_k(\{\omega : X_k(\omega) \ge \beta\}) = \nu_k(\{\omega : X_k(\omega) \ge \alpha\})$$

whenever $\nu_k(\{\omega : \alpha \le X_k(\omega) < \beta\}) = 0$.
Proof. We prove (i) and (ii) for a given finite sequence $\{X_1, \ldots, X_n\}$. For $k \ge 1$, let $C_k$ be the chain formed by the upper sets $[X_k \ge \alpha]$, where $\alpha \in \mathbb{R}$. The algebra $A(X_k)$ coincides with the algebra generated by $C_k$; this can be proved with a kind of $\pi$-$\lambda$ argument. Let $\nu_k$ be the restriction of $\nu$ on $A(X_k)$. By Proposition 1, there exists a mass $\mu_k \in C(\nu_k)$ such that $\mu_k(A) = \nu(A)$ for all $A \in C_k$. Define a set function $\mu'$ on $A(X_1) \cup A(X_2) \cup \cdots \cup A(X_n)$ as follows:

$$\mu'(A) = \mu_k(A) \quad \text{if } A \in A(X_k).$$

We now show that $\mu'$ is well defined. Let $A \subseteq B$ with $A \in A(X_k)$ and $B \in A(X_{k'})$, where $1 \le k, k' \le n$. By independence, $\nu(A) = \nu(A \cap B) = \nu(A)\nu(B)$. Therefore, $\nu(A) = 0$ or $\nu(B) = 1$, or both. Suppose $\nu(B) = 1$; then $\mu_{k'}(B) = 1$ because $\mu_{k'} \in C(\nu)$. In turn, this implies $\mu'(B) = 1$, so that $\mu'(A) \le \mu'(B)$, as desired. Suppose $\nu(A) = 0$. Since $A \in A(X_k)$ we can write $A = \bigcup_{i=1}^n (E_i \cap F_i)$, where $E_i \in C_k$, $F_i^c \in C_k$, and $E_i \supseteq F_i^c$. The sets $(E_i \cap F_i)$ are pairwise disjoint. Since $\nu(A) = 0$, we have $\nu(E_i \cap F_i) = 0$ for each $1 \le i \le n$. By construction, $\mu_k(E_i \cap F_i) = \nu(E_i) - \nu(F_i^c)$. Since $\nu(E_i \cap F_i) = 0$, regularity implies $\nu(E_i) = \nu(F_i^c)$, so that $\mu_k(E_i \cap F_i) = 0$. Since $\mu_k(A) = \sum_{i=1}^n \mu_k(E_i \cap F_i) = 0$, we conclude $\mu'(A) \le \mu'(B)$, as desired. This proves that $\mu'$ is a well defined mass on $A(X_1) \cup A(X_2) \cup \cdots \cup A(X_n)$.

Let $A \in A(X_1, \ldots, X_n)$. It is easy to check that $A$ has the form $\bigcup_{i=1}^m (A_i^1 \cap \cdots \cap A_i^n)$, where $A_i^k \in A(X_k)$. Moreover, the sets $(A_i^1 \cap \cdots \cap A_i^n)$ can be taken pairwise disjoint. Define $\mu$ on $A(X_1, \ldots, X_n)$ as follows:

$$\mu(A) = \sum_{i=1}^m \mu'(A_i^1) \cdots \mu'(A_i^n).$$
We now check that $\mu$ is well defined. Suppose

$$A = \bigcup_{i=1}^m (A_i^1 \cap \cdots \cap A_i^n) = \bigcup_{j=1}^{m'} (B_j^1 \cap \cdots \cap B_j^n),$$

where $B_j^k \in A(X_k)$ and the sets $(B_j^1 \cap \cdots \cap B_j^n)$ are pairwise disjoint. Let $\Omega^n = \Omega_1 \times \cdots \times \Omega_n$, where the index in $\Omega_k$ denotes only the position in the Cartesian product. Let $\pi_k$ be a map from $A(X_k)$ to the power set of $\Omega_k$ defined as follows:

$$\pi_k(A) = A \quad \text{for each } A \in A(X_k).$$

Let $A^*(X_k)$ be the projection of $A(X_k)$ through $\pi_k$. Let $A^*(X_1, \ldots, X_n)$ denote the algebra generated by the measurable rectangles of the algebras $A^*(X_1), \ldots, A^*(X_n)$. Define a map $\pi_{1,\ldots,n}$ from $A(X_1, \ldots, X_n)$ to $A^*(X_1, \ldots, X_n)$ as

$$\pi_{1,\ldots,n}\Big(\bigcup_{i=1}^m (A_i^1 \cap \cdots \cap A_i^n)\Big) = \bigcup_{i=1}^m (A_i^1 \times \cdots \times A_i^n)$$

for each $A = \bigcup_{i=1}^m (A_i^1 \cap \cdots \cap A_i^n)$. Set $\mu_k^*(\pi_k(A)) = \mu_k'(A)$ for each $A \in A(X_k)$. Since each $\mu_k^*$ is additive on $A^*(X_k)$, it is known that there exists a mass $\mu_{1,\ldots,n}^*$ on $A^*(X_1, \ldots, X_n)$ defined as

$$\mu_{1,\ldots,n}^*(A) = \sum_{i=1}^m [\mu_1^*(A_i^1) \cdots \mu_n^*(A_i^n)]$$
for each $A \in A^*(X_1, \ldots, X_n)$. Its projection through $\pi_{1,\ldots,n}$ is

$$\mu_{1,\ldots,n}(A) = \sum_{i=1}^m [\mu_1(A_i^1) \cdots \mu_n(A_i^n)]$$

for each $A \in A(X_1, \ldots, X_n)$. Clearly, $\mu_{1,\ldots,n}(A) = \mu(A)$ for each $A \in A(X_1, \ldots, X_n)$. Since $\mu_{1,\ldots,n}^*$ is well defined, if $\bigcup_{i=1}^m (A_i^1 \cap \cdots \cap A_i^n) = \bigcup_{j=1}^{m'} (B_j^1 \cap \cdots \cap B_j^n) \neq \emptyset$ we get:

$$\mu_{1,\ldots,n}^*\Big(\pi_{1,\ldots,n}\Big(\bigcup_{i=1}^m (A_i^1 \cap \cdots \cap A_i^n)\Big)\Big) = \sum_{i=1}^m [\mu_1^*(A_i^1) \cdots \mu_n^*(A_i^n)] = \sum_{j=1}^{m'} [\mu_1^*(B_j^1) \cdots \mu_n^*(B_j^n)] = \mu_{1,\ldots,n}^*\Big(\pi_{1,\ldots,n}\Big(\bigcup_{j=1}^{m'} (B_j^1 \cap \cdots \cap B_j^n)\Big)\Big).$$

Therefore, $\sum_{i=1}^m \mu'(A_i^1) \cdots \mu'(A_i^n) = \sum_{j=1}^{m'} \mu'(B_j^1) \cdots \mu'(B_j^n)$, and this proves that $\mu$ is well defined. Clearly, under $\mu$ the random variables $\{X_1, \ldots, X_n\}$ are independent. Since, by construction, $\mu(A) = \nu(A)$ for $A \in C_1 \cup \cdots \cup C_n$, it follows that $\int X_k \, d\nu = \int X_k \, d\mu$ for all $1 \le k \le n$.
Let $I$ be a finite subset of $\{1, \ldots, n, \ldots\}$, and let $\{X_k\}_{k \in I} \subseteq \{X_k\}_{k \ge 1}$. By what has been proved above, there exists a mass $\mu_I$ that satisfies (i) and (ii) on $\{X_k\}_{k \in I}$. Let $I'$ be another finite subset of $\{1, \ldots, n, \ldots\}$ such that $I \subseteq I'$. If $A \in A(\{X_k\}_{k \in I})$, then

$$\mu_I(A) = \mu_{I'}(A). \tag{2}$$

Equality (2) is easy to check. For, if $A \in A(\{X_k\}_{k \in I})$, then $A = \bigcup_{i=1}^m \bigcap_{k \in I} A_i^k$ for some $A_i^k \in A(X_k)$. Consequently, by construction $\mu_I(A) = \sum_{i=1}^m \prod_{k \in I} \mu_k(A_i^k) = \mu_{I'}(A)$.

Suppose now that $I'$ neither contains nor is contained in $I$. Set $I'' = I' \cup I$. Using (2) on $A(\{X_k\}_{k \in I''})$ it is easy to check that $\mu_I(A) = \mu_{I''}(A) = \mu_{I'}(A)$ if $A \in A(\{X_k\}_{k \in I}) \cap A(\{X_k\}_{k \in I'})$. This allows us to define a set function $\mu$ on $A(X_1, \ldots, X_k, \ldots)$ as follows: $\mu(A) = \mu_I(A)$ if $A \in A(\{X_k\}_{k \in I})$. The set function $\mu$ is well defined by what was proved above. Moreover, since $A(X_1, \ldots, X_k, \ldots) = \bigcup_I A(\{X_k\}_{k \in I})$, if $A, B$ are disjoint subsets of $A(X_1, \ldots, X_k, \ldots)$, with $A \in A(\{X_k\}_{k \in I})$ and $B \in A(\{X_k\}_{k \in I'})$, then there exists a finite subset $I^*$ such that $A \cup B \in A(\{X_k\}_{k \in I^*})$. Set $I'' = I \cup I' \cup I^*$. Using equality (2) on $A(\{X_k\}_{k \in I''})$ it is then easy to check additivity of $\mu$. This completes the proof of parts (i) and (ii). ∎
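The key device in this proof — a mass that agrees with $\nu$ on the chain of upper sets has the same integral as $\nu$ — can be seen concretely on a finite space. In the sketch below (our own construction, with $\nu = \mu^2$ for a uniform $\mu$), the measure that puts mass $\nu([X \ge x_{(i)}]) - \nu([X \ge x_{(i+1)}])$ on the level set $[X = x_{(i)}]$ agrees with $\nu$ on the chain, and its ordinary expectation equals the Choquet integral $\int X \, d\nu$:

```python
def choquet(values, capacity, states):
    # Discrete Choquet integral via the chain of upper level sets.
    order = sorted(states, key=lambda s: values[s])
    xs = [values[s] for s in order]
    total = xs[0]  # capacity of the whole space is 1
    for i in range(1, len(order)):
        total += (xs[i] - xs[i - 1]) * capacity(frozenset(order[i:]))
    return total

states = [0, 1, 2, 3]
nu = lambda A: (len(A) / 4) ** 2   # convex capacity nu = mu^2, mu uniform
X = {0: 1.0, 1: 2.0, 2: 3.0, 3: 4.0}

# Build the mass on each level set from successive chain values of nu.
desc = sorted(states, key=lambda s: -X[s])
mass, prev = {}, 0.0
for i, s in enumerate(desc):
    upper = frozenset(desc[:i + 1])    # the upper set [X >= X[s]]
    mass[s] = nu(upper) - prev
    prev = nu(upper)

E_mass = sum(X[s] * mass[s] for s in states)  # ordinary expectation
E_nu = choquet(X, nu, states)                 # Choquet integral of X
print(E_mass, E_nu)   # 1.875 1.875: the two integrals coincide
```

This is exactly why condition (ii) of Lemma 18 holds: the Choquet integral of $X_k$ depends on $\nu$ only through its values on the chain $C_k$.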
Proof of Theorem 6. Before starting, we recall that a set function $\mu$ is regular on a collection of sets $\mathcal{F}$ if $\mu(A) = \sup\{\mu(F) : F \subseteq A,\ F \in \mathcal{F},\ F \text{ closed}\}$. Let $C_k$ be the chain in $S$ formed by the sets $[\omega : X_k(\omega) \ge \alpha]$ and $[\omega : X_k(\omega) > \alpha]$ for each $\alpha \in \mathbb{R}$. Let $\mu_k$ be the mass on $A(X_k)$ such that $\nu(A) = \mu_k(A)$ for every $A \in C_k$. Since $\mu_k$ is in the core of $\nu_k$, $\mu_k$ is a measure on $A(X_k)$. Regularity of $\{X_k\}_{k \ge 1}$ implies the following equalities for any $\alpha, \beta \in \mathbb{R}$ with $\alpha < \beta$:

$$\nu_k[\omega : X_k(\omega) > \beta] = \nu_k[\omega : X_k(\omega) \ge \alpha] \quad \text{if } \nu_k[\omega : \alpha \le X_k(\omega) \le \beta] = 0,$$
$$\nu_k[\omega : X_k(\omega) > \beta] = \nu_k[\omega : X_k(\omega) > \alpha] \quad \text{if } \nu_k[\omega : \alpha < X_k(\omega) \le \beta] = 0,$$
$$\nu_k[\omega : X_k(\omega) \ge \beta] = \nu_k[\omega : X_k(\omega) > \alpha] \quad \text{if } \nu_k[\omega : \alpha < X_k(\omega) < \beta] = 0.$$

We prove the first equality. From $\nu_k[\omega : \alpha \le X_k(\omega) \le \beta] = 0$ it follows that $\nu_k[\omega : \alpha \le X_k(\omega) < \beta] = 0$, and so $\nu_k[\omega : X_k(\omega) \ge \beta] = \nu_k[\omega : X_k(\omega) \ge \alpha]$ by regularity. Since $\nu_k[\omega : X_k(\omega) = \beta] = 0$, by regularity $\nu_k[\omega : X_k(\omega) \ge \beta] = \nu_k[\omega : X_k(\omega) > \beta]$, and this proves the equality. As to the second one, $\nu_k[\omega : \alpha < X_k(\omega) \le \beta] = 0$ implies $\nu_k[\omega : \alpha + 1/n \le X_k(\omega) \le \beta] = 0$ for all $n$ large enough. Then, by the previous equality, $\nu_k[\omega : X_k(\omega) > \beta] = \nu_k[\omega : X_k(\omega) \ge \alpha + 1/n]$ for all $n$ large enough. Since $(\alpha, \infty) = \bigcup_n [\alpha + 1/n, \infty)$, continuity implies

$$\nu_k[\omega : X_k(\omega) > \alpha] = \lim_n \nu_k\Big[\omega : X_k(\omega) \ge \alpha + \frac{1}{n}\Big] = \nu_k[\omega : X_k(\omega) > \beta].$$

Finally, let us consider the last equality. From $\nu_k[\omega : \alpha < X_k(\omega) < \beta] = 0$ it follows that $\nu_k[\omega : \alpha + 1/n \le X_k(\omega) < \beta] = 0$ for $n$ large enough. Then, by regularity, $\nu_k[\omega : X_k(\omega) \ge \alpha + 1/n] = \nu_k[\omega : X_k(\omega) \ge \beta]$ for all $n$ large enough, so that continuity of $\nu$ implies $\nu_k[\omega : X_k(\omega) \ge \beta] = \nu_k[\omega : X_k(\omega) > \alpha]$.
We can now proceed as in the proof of Lemma 18 to construct a mass $\mu$ on $A(X_1, \ldots, X_k, \ldots)$ whose restriction on $A(X_k)$ coincides with $\mu_k$ and which satisfies points (i) and (ii) in Lemma 18. We want to show that $\mu_k$ is countably additive. Continuity of $\mu_k$ on $A(X_k)$ implies

$$\mu_k[\omega : X_k(\omega) \ge \alpha] = \mu_k\Big(\bigcap_n \Big[\omega : X_k(\omega) > \alpha - \frac{1}{n}\Big]\Big) = \lim_n \mu_k\Big[\omega : X_k(\omega) > \alpha - \frac{1}{n}\Big],$$
$$\mu_k[\omega : X_k(\omega) > \alpha] = \mu_k\Big(\bigcup_n \Big[\omega : X_k(\omega) \ge \alpha + \frac{1}{n}\Big]\Big) = \lim_n \mu_k\Big[\omega : X_k(\omega) \ge \alpha + \frac{1}{n}\Big].$$

Since $X_k$ is continuous, the sets $[\omega : X_k(\omega) \ge \alpha]$ and $[\omega : X_k(\omega) > \alpha]$ are, respectively, closed and open for each $\alpha \in \mathbb{R}$. For each $A \in C_k$ this implies

$$\mu_k(A) = \sup\{\mu_k(F) : F \subseteq A,\ F \in C_k,\ F \text{ closed}\},$$
$$\mu_k(A) = \inf\{\mu_k(G) : G \supseteq A,\ G \in C_k,\ G \text{ open}\}.$$

Let $A \in A(X_k)$. Then we can write $A = \bigcup_{i=1}^n (E_i \cap F_i)$, where $E_i \in C_k$ and $F_i^c \in C_k$. The sets $E_i \cap F_i$ are pairwise disjoint. A standard procedure ensures that $\mu_k$ is also regular on the entire $A(X_k)$.
We now prove that $\mu$ is regular on each $A(X_{t_1}, \ldots, X_{t_n})$. W.l.o.g., we consider $A(X_1, \ldots, X_n)$. The mass $\mu$ has been defined as

$$\mu(A) = \sum_{i=1}^m \mu_1(A_i^1) \cdots \mu_n(A_i^n),$$

where $A \in A(X_1, \ldots, X_n)$ has the form $\bigcup_{i=1}^m (A_i^1 \cap \cdots \cap A_i^n)$, with $A_i^k \in A(X_k)$. The sets $(A_i^1 \cap \cdots \cap A_i^n)$ are pairwise disjoint. Let $A = \bigcup_{i=1}^m (A_i^1 \cap \cdots \cap A_i^n)$ and $\varepsilon > 0$. Let $F_i^j \in A(X_j)$ for $1 \le j \le n$ be closed subsets such that $\mu(A_i^j) - \mu(F_i^j) \le \delta$ for $1 \le j \le n$, where $\delta$ is a given positive quantity. Then

$$\sum_{i=1}^m \mu(A_i^1) \cdots \mu(A_i^n) - \sum_{i=1}^m \mu(F_i^1) \cdots \mu(F_i^n)$$
$$= \sum_{i=1}^m \mu(A_i^1) \cdots \mu(A_i^n) - \sum_{i=1}^m \mu(A_i^1) \cdots \mu(A_i^{n-1})\mu(F_i^n) + \sum_{i=1}^m \mu(A_i^1) \cdots \mu(A_i^{n-1})\mu(F_i^n) - \cdots + \sum_{i=1}^m \mu(A_i^1)\mu(F_i^2) \cdots \mu(F_i^n) - \sum_{i=1}^m \mu(F_i^1) \cdots \mu(F_i^n)$$
$$= \sum_{i=1}^m \mu(A_i^1) \cdots \mu(A_i^{n-1})[\mu(A_i^n) - \mu(F_i^n)] + \cdots + \sum_{i=1}^m \mu(F_i^2) \cdots \mu(F_i^n)[\mu(A_i^1) - \mu(F_i^1)]$$
$$\le \sum_{i=1}^m \big[[\mu(A_i^n) - \mu(F_i^n)] + \cdots + [\mu(A_i^1) - \mu(F_i^1)]\big] \le mn\delta.$$

As $n$ and $m$ are fixed for a given $A$, taking $\delta$ small enough we get

$$\sum_{i=1}^m \mu(A_i^1) \cdots \mu(A_i^n) - \sum_{i=1}^m \mu(F_i^1) \cdots \mu(F_i^n) \le \varepsilon.$$

Since $F = \bigcup_{i=1}^m (F_i^1 \cap \cdots \cap F_i^n)$ is a closed subset of $A$ belonging to $A(X_1, \ldots, X_n)$, we are done. We conclude that $\mu$ is regular on $A(X_1, \ldots, X_n)$. Since $\Omega$ is compact, the closed subsets in $A(X_1, \ldots, X_n)$ form a compact class. By a well-known result, $\mu$ is then countably additive on $A(X_1, \ldots, X_n)$.
Let us denote by $\mu_{t_1, \ldots, t_n}$ the regular measure on $A(X_{t_1}, \ldots, X_{t_n})$ whose existence we have just proved. As we know from the proof of Lemma 18, $\mu$ is defined on $\bigcup_n A(X_1, \ldots, X_n)$ as follows:

$$\mu(A) = \mu_{t_1, \ldots, t_n}(A) \quad \text{if } A \in A(X_{t_1}, \ldots, X_{t_n}).$$

The mass $\mu$ is then regular on $\bigcup_n A(X_1, \ldots, X_n)$. Since $\Omega$ is compact, it follows that $\mu$ is countably additive (see, e.g., Dunford and Schwartz [8], p. 138). Therefore, $\mu$ is a measure on $\bigcup_n A(X_1, \ldots, X_n)$. On the other hand, $\sigma(X_1, \ldots, X_n, \ldots)$ is the $\sigma$-algebra generated by $\bigcup_n A(X_1, \ldots, X_n)$. A standard application of the Carathéodory extension theorem then ensures the existence of a unique measure on $\sigma(X_1, \ldots, X_n, \ldots)$ that extends $\mu$. This is the measure on $\sigma(X_1, \ldots, X_n, \ldots)$ we were looking for. ∎
Proof of Lemma 9. Let $p$ be a filter in $A(X_1, \ldots, X_n, \ldots)$. Let us consider the set $B = \bigcap \{A : A \in p\}$. Fix $k \ge 1$, and let $\{A_i^k\}$ be the collection of all subsets in $p$ such that each $A_i^k \in A(X_k)$. Since $p$ is a filter, and the collection $\{A_i^k\}$ is finite, we have $\bigcap_i A_i^k \neq \emptyset$ and $\bigcap_i A_i^k \in A(X_k)$. Since each $X_k$ is finite-valued, the algebra $A(X_k)$ is generated by the finite partition $\{[\omega : X_k(\omega) = \alpha]\}_{\alpha \in \mathbb{R}}$. Therefore, there exists a real number $\alpha_k$ such that $[\omega : X_k(\omega) = \alpha_k] \subseteq \bigcap_i A_i^k$. Consequently,

$$\bigcap_{k \ge 1} [\omega : X_k(\omega) = \alpha_k] \subseteq \bigcap_{k \ge 1} \Big(\bigcap_i A_i^k\Big) = \bigcap \{A : A \in p\},$$

which implies

$$\bigcap \{A : A \in p\} \neq \emptyset \tag{3}$$

because the sequence is Bernoullian.

Now let $\{G_i\}_{i \in I}$ be an open cover of $\Omega$, i.e., each $G_i \in \tau(A)$ and $\bigcup_{i \in I} G_i = \Omega$. W.l.o.g., suppose each $G_i \in A(X_1, \ldots, X_n, \ldots)$. Suppose $\tau(A)$ is not a compact topology. Then $\bigcap_{i \in I'} G_i^c \neq \emptyset$ for each finite subset $I'$ of $I$, and this implies the existence of a proper filter $p$ in $A(X_1, \ldots, X_n, \ldots)$ generated by the sequence $\{G_i^c\}_{i \in I}$. By (3), there exists a set $\emptyset \neq B \subseteq \Omega$ such that $p \subseteq \{A \in A(X_1, \ldots, X_n, \ldots) : B \subseteq A\}$. Then $B \subseteq G_i^c$ for each $i \in I$, so that $B^c \supseteq G_i$ for each $i \in I$. But this implies $\bigcup_{i \in I} G_i \subseteq B^c \subsetneq \Omega$, which contradicts $\bigcup_{i \in I} G_i = \Omega$.

Finally, it is clear that each random variable is continuous relative to $\tau(A)$. ∎
8.2. Section 5
The proof of Theorem 11 rests on the following lemma. A nontopological
version of the lemma holds for Bernoullian sequences.
Lemma 19. Let $\Omega$ be a compact space, and $\nu$ a convex and continuous capacity on a $\sigma$-algebra $S$ of subsets of $\Omega$. Let $\{X_k\}_{k \ge 1}$ be a sequence of regular, independent and continuous random variables relative to $\nu$. Set $S_n = \frac{1}{n}\sum_{i=1}^n X_i$, and let $\mu$ be the measure provided by Theorem 6. We have

$$\mu(\omega : S_n \ge \alpha) \le \nu(\omega : S_n \ge \alpha) \quad \text{for all } \alpha \in \mathbb{R}.$$

Moreover, if $\nu$ is totally monotone we also have

$$\mu(\omega : \liminf_n S_n \ge \alpha) \le \nu(\omega : \liminf_n S_n \ge \alpha) \quad \text{for all } \alpha \in \mathbb{R}.$$

Proof. As is well known, we have

$$[\omega : X_1(\omega) + X_2(\omega) \ge \alpha] = \bigcup_r [\omega : X_1(\omega) \ge r] \cap [\omega : X_2(\omega) \ge \alpha - r],$$
where $r$ goes over the rational numbers. Let $\{r^1, \ldots, r^n, \ldots\}$ be a ranking of the rational numbers, unrelated to the natural ordering $\ge$. Consider $I = \{r^1, \ldots, r^n\}$. For convenience, set

$$A_r = [\omega : X_1(\omega) \ge r] \quad \text{and} \quad A_{\alpha - r} = [\omega : X_2(\omega) \ge \alpha - r].$$

For convenience, suppose $r^1 \le \cdots \le r^n$. This is without loss of generality because the set $\{r^1, \ldots, r^n\}$ is always linearly ordered. Consequently, we have

$$A_{r(1)} \supseteq \cdots \supseteq A_{r(n)} \quad \text{and} \quad A_{\alpha - r(1)} \subseteq \cdots \subseteq A_{\alpha - r(n)}.$$

We claim that

$$\nu\Big(\bigcup_{r \in I} (A_r \cap A_{\alpha - r})\Big) \ge \nu(A_{r(1)} \cap A_{\alpha - r(1)}) + \sum_{i=2}^n [\nu(A_{r(i)} \cap A_{\alpha - r(i)}) - \nu(A_{r(i)} \cap A_{\alpha - r(i-1)})].$$

We prove it by induction. For the first step, using convexity we get:

$$\nu((A_{r(1)} \cap A_{\alpha - r(1)}) \cup (A_{r(2)} \cap A_{\alpha - r(2)}) \cup (A_{r(3)} \cap A_{\alpha - r(3)}))$$
$$\ge \nu((A_{r(1)} \cap A_{\alpha - r(1)}) \cup (A_{r(2)} \cap A_{\alpha - r(2)})) + \nu(A_{r(3)} \cap A_{\alpha - r(3)}) - \nu[((A_{r(1)} \cap A_{\alpha - r(1)}) \cap (A_{r(3)} \cap A_{\alpha - r(3)})) \cup ((A_{r(2)} \cap A_{\alpha - r(2)}) \cap (A_{r(3)} \cap A_{\alpha - r(3)}))]$$
$$= \nu((A_{r(1)} \cap A_{\alpha - r(1)}) \cup (A_{r(2)} \cap A_{\alpha - r(2)})) + \nu(A_{r(3)} \cap A_{\alpha - r(3)}) - \nu((A_{r(3)} \cap A_{\alpha - r(1)}) \cup (A_{r(3)} \cap A_{\alpha - r(2)}))$$
$$= \nu((A_{r(1)} \cap A_{\alpha - r(1)}) \cup (A_{r(2)} \cap A_{\alpha - r(2)})) + \nu(A_{r(3)} \cap A_{\alpha - r(3)}) - \nu(A_{r(3)} \cap A_{\alpha - r(2)})$$
$$\ge \nu(A_{r(1)} \cap A_{\alpha - r(1)}) + \nu(A_{r(2)} \cap A_{\alpha - r(2)}) - \nu(A_{r(2)} \cap A_{\alpha - r(1)}) + \nu(A_{r(3)} \cap A_{\alpha - r(3)}) - \nu(A_{r(3)} \cap A_{\alpha - r(2)}).$$
Suppose the result is true for $n-1$. Then, using convexity and the induction hypothesis, we get

$$\nu\Big(\bigcup_{i=1}^n (A_{r(i)} \cap A_{\alpha - r(i)})\Big)$$
$$\ge \nu\Big(\bigcup_{i=1}^{n-1} (A_{r(i)} \cap A_{\alpha - r(i)})\Big) + \nu(A_{r(n)} \cap A_{\alpha - r(n)}) - \nu\Big(\Big(\bigcup_{i=1}^{n-1} (A_{r(i)} \cap A_{\alpha - r(i)})\Big) \cap (A_{r(n)} \cap A_{\alpha - r(n)})\Big)$$
$$= \nu\Big(\bigcup_{i=1}^{n-1} (A_{r(i)} \cap A_{\alpha - r(i)})\Big) + \nu(A_{r(n)} \cap A_{\alpha - r(n)}) - \nu\Big(\bigcup_{i=1}^{n-1} (A_{r(n)} \cap A_{\alpha - r(i)})\Big)$$
$$= \nu\Big(\bigcup_{i=1}^{n-1} (A_{r(i)} \cap A_{\alpha - r(i)})\Big) + \nu(A_{r(n)} \cap A_{\alpha - r(n)}) - \nu(A_{r(n)} \cap A_{\alpha - r(n-1)})$$
$$\ge \nu(A_{r(1)} \cap A_{\alpha - r(1)}) + \sum_{i=2}^{n-1} [\nu(A_{r(i)} \cap A_{\alpha - r(i)}) - \nu(A_{r(i)} \cap A_{\alpha - r(i-1)})] + \nu(A_{r(n)} \cap A_{\alpha - r(n)}) - \nu(A_{r(n)} \cap A_{\alpha - r(n-1)})$$
$$= \nu(A_{r(1)} \cap A_{\alpha - r(1)}) + \sum_{i=2}^n [\nu(A_{r(i)} \cap A_{\alpha - r(i)}) - \nu(A_{r(i)} \cap A_{\alpha - r(i-1)})].$$
This proves the claim. It is clear from the proof of the claim that for an additive set function like $\mu$ equality holds:

$$\mu\Big(\bigcup_{r \in I} (A_r \cap A_{\alpha - r})\Big) = \mu(A_{r(1)} \cap A_{\alpha - r(1)}) + \sum_{i=2}^n [\mu(A_{r(i)} \cap A_{\alpha - r(i)}) - \mu(A_{r(i)} \cap A_{\alpha - r(i-1)})].$$

On the other hand, using independence and the fact that $A_{r(i)} \in C_1$ and $A_{\alpha - r(i)} \in C_2$, we obtain

$$\nu(A_{r(i)} \cap A_{\alpha - r(i)}) = \nu(A_{r(i)})\nu(A_{\alpha - r(i)}) = \mu(A_{r(i)})\mu(A_{\alpha - r(i)}) = \mu(A_{r(i)} \cap A_{\alpha - r(i)}).$$

Similarly, $\nu(A_{r(i)} \cap A_{\alpha - r(i-1)}) = \mu(A_{r(i)} \cap A_{\alpha - r(i-1)})$. We conclude that

$$\nu\Big(\bigcup_{r \in I} (A_r \cap A_{\alpha - r})\Big) \ge \mu\Big(\bigcup_{r \in I} (A_r \cap A_{\alpha - r})\Big). \tag{4}$$
Set $A(I_{1,\ldots,n}) = \bigcup_{r \in I} (A_r \cap A_{\alpha - r})$. Following the ranking $\{r^1, \ldots, r^n, \ldots\}$, we get a nondecreasing sequence $\{A(I_{1,\ldots,n})\}_{n \ge 1}$ with, by (4), $\nu[A(I_{1,\ldots,n})] \ge \mu[A(I_{1,\ldots,n})]$ for all $n \ge 1$. Since the continuity of $\nu$ implies that of $\mu$, from

$$\lim_n A(I_{1,\ldots,n}) = [\omega : X_1(\omega) + X_2(\omega) \ge \alpha]$$

it follows that

$$\nu[\omega : X_1(\omega) + X_2(\omega) \ge \alpha] \ge \mu[\omega : X_1(\omega) + X_2(\omega) \ge \alpha].$$

We now extend the previous argument to the case $[\omega : \sum_{i=1}^n X_i(\omega) \ge \alpha]$. We have

$$\Big[\sum_{i=1}^n X_i \ge \alpha\Big] = \bigcup_{r_1} \cdots \bigcup_{r_{n-1}} [X_1 \ge r_1] \cap \cdots \cap [X_{n-1} \ge r_{n-1}] \cap \Big[X_n \ge \alpha - \sum_{j=1}^{n-1} r_j\Big].$$
Let $I = \{r_1(i_1)\}_{i_1 \in \{1,\ldots,k\}} \cup \cdots \cup \{r_{n-1}(i_{n-1})\}_{i_{n-1} \in \{1,\ldots,k\}}$. W.l.o.g. suppose $r_j(i_j) \le r_j(i_j + 1)$ for all $1 \le j \le n-1$. Therefore,

$$[X_j \ge r_j(1)] \supseteq \cdots \supseteq [X_j \ge r_j(k)] \quad \text{for all } 1 \le j \le n-1 \tag{5}$$

and

$$[X_n \ge \alpha - r_1(1) - \cdots - r_{n-1}(1)] \subseteq [X_n \ge \alpha - r_1(2) - \cdots - r_{n-1}(1)] \subseteq \cdots \subseteq [X_n \ge \alpha - r_1(k-1) - \cdots - r_{n-1}(k-1)].$$

We can write

$$\bigcup_{i_1=1}^k \cdots \bigcup_{i_{n-1}=1}^k [X_1 \ge r_1(i_1)] \cap \cdots \cap [X_{n-1} \ge r_{n-1}(i_{n-1})] \cap \Big[X_n \ge \alpha - \sum_{j=1}^{n-1} r_j(i_j)\Big] \tag{6}$$
$$= \Big[[X_1 \ge r_1(1)] \cap \cdots \cap [X_{n-1} \ge r_{n-1}(1)] \cap \Big[X_n \ge \alpha - \sum_{j=1}^{n-1} r_j(1)\Big]\Big]$$
$$\cup \Big[[X_1 \ge r_1(1)] \cap \cdots \cap [X_{n-1} \ge r_{n-1}(2)] \cap \Big[X_n \ge \alpha - \sum_{j=1}^{n-2} r_j(1) - r_{n-1}(2)\Big]\Big]$$
$$\cup \cdots \cup \Big[[X_1 \ge r_1(k)] \cap \cdots \cap [X_{n-1} \ge r_{n-1}(k)] \cap \Big[X_n \ge \alpha - \sum_{j=1}^{n-1} r_j(k)\Big]\Big].$$
Using convexity, (5), and (6), and proceeding exactly as in the two-variable claim, we get that

$$\nu\Big\{\bigcup_{i_1=1}^k \cdots \bigcup_{i_{n-1}=1}^k [X_1 \ge r_1(i_1)] \cap \cdots \cap [X_{n-1} \ge r_{n-1}(i_{n-1})] \cap \Big[X_n \ge \alpha - \sum_{j=1}^{n-1} r_j(i_j)\Big]\Big\}$$

is larger than

$$\nu\Big[[X_1 \ge r_1(1)] \cap \cdots \cap [X_{n-1} \ge r_{n-1}(1)] \cap \Big[X_n \ge \alpha - \sum_{j=1}^{n-1} r_j(1)\Big]\Big]$$
$$+ \nu\Big[[X_1 \ge r_1(1)] \cap \cdots \cap [X_{n-1} \ge r_{n-1}(2)] \cap \Big[X_n \ge \alpha - \sum_{j=1}^{n-2} r_j(1) - r_{n-1}(2)\Big]\Big]$$
$$- \nu\Big[[X_1 \ge r_1(1)] \cap \cdots \cap [X_{n-1} \ge r_{n-1}(2)] \cap \Big[X_n \ge \alpha - \sum_{j=1}^{n-1} r_j(1)\Big]\Big] + \cdots$$

Now, repeat the same argument, using convexity, (5), and (6), for the sets

$$[r_1(k-2), r_2(k-1), \ldots, r_{n-1}(k-1)],\quad [r_1(k-2), r_2(k-2), \ldots, r_{n-1}(k-1)],$$

and so on until $[r_1(1), r_2(1), \ldots, r_{n-1}(2)]$. We eventually get that

$$\nu\Big\{\bigcup_{i_1=1}^k \cdots \bigcup_{i_{n-1}=1}^k [X_1 \ge r_1(i_1)] \cap \cdots \cap [X_{n-1} \ge r_{n-1}(i_{n-1})] \cap \Big[X_n \ge \alpha - \sum_{j=1}^{n-1} r_j(i_j)\Big]\Big\}$$

is larger than an expression whose components are positive or negative elements of the form

$$\nu\big[[X_1 \ge r_1(i)] \cap \cdots \cap [X_{n-1} \ge r_{n-1}(i')] \cap [X_n \ge \alpha - \beta]\big],$$

where $\beta$ is a sum of elements of the set $I$, like for instance $\sum_{j=1}^{n-1} r_j(1)$. As before, we have equality if we consider the additive $\mu$. Using independence we conclude that

$$\nu\Big(\bigcup_{i_1=1}^k \cdots \bigcup_{i_{n-1}=1}^k [X_1 \ge r_1(i_1)] \cap \cdots \cap \Big[X_n \ge \alpha - \sum_{j=1}^{n-1} r_j(i_j)\Big]\Big) \ge \mu\Big(\bigcup_{i_1=1}^k \cdots \bigcup_{i_{n-1}=1}^k [X_1 \ge r_1(i_1)] \cap \cdots \cap \Big[X_n \ge \alpha - \sum_{j=1}^{n-1} r_j(i_j)\Big]\Big).$$

Continuity of $\mu$ leads to

$$\nu\Big[\omega : \sum_{i=1}^n X_i(\omega) \ge \alpha\Big] \ge \mu\Big[\omega : \sum_{i=1}^n X_i(\omega) \ge \alpha\Big].$$

In particular, since the argument does not depend on $n$, this implies

$$\nu[\omega : S_n \ge \alpha] \ge \mu[\omega : S_n \ge \alpha].$$

This completes the proof of part (i).
As to part (ii), set $S_n^- = \inf_{k \ge n} S_k$. We have $[\omega : S_n^- \ge \alpha] = \bigcap_{k \ge n} [\omega : S_k \ge \alpha]$. We want to show that $\nu[\omega : S_n^- \ge \alpha] \ge \mu[\omega : S_n^- \ge \alpha]$. By continuity of $\nu$, it suffices to prove that for any $[\omega : S_{i_1} \ge \alpha, \ldots, S_{i_n} \ge \alpha]$ of arbitrary finite length it holds that $\nu[\omega : S_{i_1} \ge \alpha, \ldots, S_{i_n} \ge \alpha] \ge \mu[\omega : S_{i_1} \ge \alpha, \ldots, S_{i_n} \ge \alpha]$. Suppose $m > n$. Then

$$S_m = \frac{1}{m} \sum_{i=n+1}^m X_i + \frac{n}{m} S_n.$$
Set $\gamma_{n,m} = n/m$. We can write

$$[\omega : S_m \ge \alpha,\ S_n \ge \alpha] = [\omega : S_m \ge \alpha] \cap [\omega : S_n \ge \alpha]$$
$$= \Big[\omega : \frac{1}{m}\sum_{i=n+1}^m X_i + \gamma_{n,m} S_n \ge \alpha\Big] \cap [\omega : S_n \ge \alpha]$$
$$= \bigcup_r \Big(\Big[\omega : \frac{1}{m\gamma_{n,m}}\sum_{i=n+1}^m X_i \ge \frac{\alpha}{\gamma_{n,m}} - r\Big] \cap [\omega : S_n \ge r]\Big) \cap [\omega : S_n \ge \alpha]$$
$$= \Big(\bigcup_{r \le \alpha} \Big[\omega : \frac{1}{m\gamma_{n,m}}\sum_{i=n+1}^m X_i \ge \frac{\alpha}{\gamma_{n,m}} - r\Big] \cap [\omega : S_n \ge \alpha]\Big) \cup \Big(\bigcup_{r > \alpha} \Big[\omega : \frac{1}{m\gamma_{n,m}}\sum_{i=n+1}^m X_i \ge \frac{\alpha}{\gamma_{n,m}} - r\Big] \cap [\omega : S_n \ge r]\Big).$$

We know that

$$\Big[\frac{1}{m\gamma_{n,m}}\sum_{i=n+1}^m X_i \ge \frac{\alpha}{\gamma_{n,m}} - r\Big] = \bigcup_{s_1} \cdots \bigcup_{s_{m-n-1}} [Y_1 \ge s_1] \cap \cdots \cap [Y_{m-n-1} \ge s_{m-n-1}] \cap \Big[Y_{m-n} \ge \frac{\alpha}{\gamma_{n,m}} - r - \sum_{j=1}^{m-n-1} s_j\Big]$$

and

$$[S_n \ge r] = \bigcup_{t_1} \cdots \bigcup_{t_{n-1}} [Z_1 \ge t_1] \cap \cdots \cap [Z_{n-1} \ge t_{n-1}] \cap \Big[Z_n \ge r - \sum_{j=1}^{n-1} t_j\Big],$$

where both the $s_i$ and the $t_i$ range over the rational numbers, and we have set $Y_i = (1/m\gamma_{n,m}) X_{i+n}$ and $Z_i = (1/n) X_i$. Then
$$\Big(\bigcup_{r \le \alpha} \Big[\omega : \frac{1}{m\gamma_{n,m}}\sum_{i=n+1}^m X_i \ge \frac{\alpha}{\gamma_{n,m}} - r\Big] \cap [S_n \ge \alpha]\Big) \cup \Big(\bigcup_{r > \alpha} \Big[\omega : \frac{1}{m\gamma_{n,m}}\sum_{i=n+1}^m X_i \ge \frac{\alpha}{\gamma_{n,m}} - r\Big] \cap [S_n \ge r]\Big)$$
$$= \bigcup_{r \le \alpha} \bigcup_{s_1} \cdots \bigcup_{s_{m-n-1}} \bigcup_{t_1} \cdots \bigcup_{t_{n-1}} [Y_1 \ge s_1] \cap \cdots \cap [Y_{m-n-1} \ge s_{m-n-1}] \cap \Big[Y_{m-n} \ge \frac{\alpha}{\gamma_{n,m}} - r - \sum_{j=1}^{m-n-1} s_j\Big] \cap [Z_1 \ge t_1] \cap \cdots \cap [Z_{n-1} \ge t_{n-1}] \cap \Big[Z_n \ge \alpha - \sum_{j=1}^{n-1} t_j\Big]$$
$$\cup \bigcup_{r > \alpha} \bigcup_{s_1} \cdots \bigcup_{s_{m-n-1}} \bigcup_{t_1} \cdots \bigcup_{t_{n-1}} [Y_1 \ge s_1] \cap \cdots \cap [Y_{m-n-1} \ge s_{m-n-1}] \cap \Big[Y_{m-n} \ge \frac{\alpha}{\gamma_{n,m}} - r - \sum_{j=1}^{m-n-1} s_j\Big] \cap [Z_1 \ge t_1] \cap \cdots \cap [Z_{n-1} \ge t_{n-1}] \cap \Big[Z_n \ge r - \sum_{j=1}^{n-1} t_j\Big].$$
Set

$$I = \{r(i)\}_{i \in \{1,\ldots,k\}} \cup \{s_1(i_1)\}_{i_1 \in \{1,\ldots,k\}} \cup \cdots \cup \{s_{m-n-1}(i_{m-n-1})\}_{i_{m-n-1} \in \{1,\ldots,k\}} \cup \{t_1(i_1')\}_{i_1' \in \{1,\ldots,k\}} \cup \cdots \cup \{t_{n-1}(i_{n-1}')\}_{i_{n-1}' \in \{1,\ldots,k\}}.$$

W.l.o.g. suppose

$$s_j(i_j) \le s_j(i_j + 1) \quad \text{for all } 1 \le j \le m-n-1,$$
$$t_j(i) \le t_j(i+1) \quad \text{for all } 1 \le j \le n-1,$$
$$\alpha \le r(i) \le r(i+1).$$

Therefore,

$$[Y_j \ge s_j(1)] \supseteq \cdots \supseteq [Y_j \ge s_j(k)] \quad \text{for all } 1 \le j \le m-n-1,$$
$$[Z_j \ge t_j(1)] \supseteq \cdots \supseteq [Z_j \ge t_j(k)] \quad \text{for all } 1 \le j \le n-1,$$
$$[Y_{m-n} \ge \alpha - s_1(1) - \cdots - s_{n-1}(1) - r(1)] \subseteq [Y_{m-n} \ge \alpha - s_1(2) - \cdots - s_{n-1}(1) - r(1)] \subseteq \cdots \subseteq [Y_{m-n} \ge \alpha - s_1(k-1) - \cdots - s_{n-1}(k-1) - r(k-1)],$$
$$[Z_n \ge r - t_1(1) - \cdots - t_{n-1}(1)] \subseteq [Z_n \ge r - t_1(2) - \cdots - t_{n-1}(1)] \subseteq \cdots \subseteq [Z_n \ge r - t_1(k-1) - \cdots - t_{n-1}(k-1)].$$

Moreover,

$$\nu[Y_j \ge s_j(i)] = \mu[Y_j \ge s_j(i)] \quad \text{for all } 1 \le i \le k \text{ and } 1 \le j \le m-n-1,$$
$$\nu[Z_j \ge t_j(i)] = \mu[Z_j \ge t_j(i)] \quad \text{for all } 1 \le i \le k \text{ and } 1 \le j \le n-1, \tag{7}$$

and

$$\nu[Y_{m-n} \ge \alpha - s_1(1) - \cdots - s_{n-1}(1) - r(1)] = \mu[Y_{m-n} \ge \alpha - s_1(1) - \cdots - s_{n-1}(1) - r(1)];$$
$$\nu[Y_{m-n} \ge \alpha - s_1(2) - \cdots - s_{n-1}(1) - r(1)] = \mu[Y_{m-n} \ge \alpha - s_1(2) - \cdots - s_{n-1}(1) - r(1)];$$

and so forth until

$$\nu[Y_{m-n} \ge \alpha - s_1(k-1) - \cdots - s_{n-1}(k-1) - r(k-1)] = \mu[Y_{m-n} \ge \alpha - s_1(k-1) - \cdots - s_{n-1}(k-1) - r(k-1)];$$
$$\nu[Z_n \ge r - t_1(1) - \cdots - t_{n-1}(1)] = \mu[Z_n \ge r - t_1(1) - \cdots - t_{n-1}(1)];$$

and so forth until

$$\nu[Z_n \ge r - t_1(k-1) - \cdots - t_{n-1}(k-1)] = \mu[Z_n \ge r - t_1(k-1) - \cdots - t_{n-1}(k-1)].$$
Using the fact that $\nu$ is totally monotone, (7), the above equalities, and independence, we get

$$\nu\Big(\bigcup_{r \in I} \bigcup_{t_1 \in I} \cdots \bigcup_{t_{n-1} \in I} \bigcup_{s_1 \in I} \cdots \bigcup_{s_{m-n-1} \in I} [Z_1 \ge t_1] \cap \cdots \cap [Z_{n-1} \ge t_{n-1}] \cap \Big[Z_n \ge r - \sum_{j=1}^{n-1} t_j\Big] \cap [Y_1 \ge s_1] \cap \cdots \cap [Y_{m-n-1} \ge s_{m-n-1}] \cap \Big[Y_{m-n} \ge \frac{\alpha}{\gamma_{n,m}} - r - \sum_{j=1}^{m-n-1} s_j\Big]\Big)$$
$$\ge \mu\Big(\bigcup_{r \in I} \bigcup_{t_1 \in I} \cdots \bigcup_{t_{n-1} \in I} \bigcup_{s_1 \in I} \cdots \bigcup_{s_{m-n-1} \in I} [Z_1 \ge t_1] \cap \cdots \cap [Z_{n-1} \ge t_{n-1}] \cap \Big[Z_n \ge r - \sum_{j=1}^{n-1} t_j\Big] \cap [Y_1 \ge s_1] \cap \cdots \cap [Y_{m-n-1} \ge s_{m-n-1}] \cap \Big[Y_{m-n} \ge \frac{\alpha}{\gamma_{n,m}} - r - \sum_{j=1}^{m-n-1} s_j\Big]\Big).$$

Finally, by continuity, we get

$$\nu[\omega : S_m \ge \alpha,\ S_n \ge \alpha] \ge \mu[\omega : S_m \ge \alpha,\ S_n \ge \alpha].$$
Let $k < n$. We can write

$$[\omega : S_m \ge \alpha,\ S_n \ge \alpha,\ S_k \ge \alpha]$$
$$= \Big[\Big(\bigcup_{r \le \alpha} \Big[\omega : \frac{1}{m\gamma_{n,m}}\sum_{i=n+1}^m X_i \ge \frac{\alpha}{\gamma_{n,m}} - r\Big] \cap [S_n \ge \alpha]\Big) \cup \Big(\bigcup_{r > \alpha} \Big[\omega : \frac{1}{m\gamma_{n,m}}\sum_{i=n+1}^m X_i \ge \frac{\alpha}{\gamma_{n,m}} - r\Big] \cap [S_n \ge r]\Big)\Big] \cap [\omega : S_k \ge \alpha].$$

On the other hand,

$$[\omega : S_n \ge r,\ S_k \ge \alpha] = \Big(\bigcup_{t \le \alpha} \Big[\omega : \frac{1}{n\gamma_{k,n}}\sum_{i=k+1}^n X_i \ge \frac{r}{\gamma_{k,n}} - t\Big] \cap [S_k \ge \alpha]\Big) \cup \Big(\bigcup_{t > \alpha} \Big[\omega : \frac{1}{n\gamma_{k,n}}\sum_{i=k+1}^n X_i \ge \frac{r}{\gamma_{k,n}} - t\Big] \cap [S_k \ge t]\Big).$$
We finally get the following equality, which we denote by (*):

$$[\omega : S_m \ge \alpha,\ S_n \ge \alpha,\ S_k \ge \alpha]$$
$$= \bigcup_{r \le \alpha} \bigcup_{t \le \alpha} \Big[\omega : \frac{1}{m\gamma_{n,m}}\sum_{i=n+1}^m X_i \ge \frac{\alpha}{\gamma_{n,m}} - r\Big] \cap \Big[\omega : \frac{1}{n\gamma_{k,n}}\sum_{i=k+1}^n X_i \ge \frac{\alpha}{\gamma_{k,n}} - t\Big] \cap [S_k \ge \alpha]$$
$$\cup \bigcup_{r \le \alpha} \bigcup_{t > \alpha} \Big[\omega : \frac{1}{m\gamma_{n,m}}\sum_{i=n+1}^m X_i \ge \frac{\alpha}{\gamma_{n,m}} - r\Big] \cap \Big[\omega : \frac{1}{n\gamma_{k,n}}\sum_{i=k+1}^n X_i \ge \frac{\alpha}{\gamma_{k,n}} - t\Big] \cap [S_k \ge t]$$
$$\cup \bigcup_{r > \alpha} \bigcup_{t \le \alpha} \Big[\omega : \frac{1}{m\gamma_{n,m}}\sum_{i=n+1}^m X_i \ge \frac{\alpha}{\gamma_{n,m}} - r\Big] \cap \Big[\omega : \frac{1}{n\gamma_{k,n}}\sum_{i=k+1}^n X_i \ge \frac{r}{\gamma_{k,n}} - t\Big] \cap [S_k \ge \alpha]$$
$$\cup \bigcup_{r > \alpha} \bigcup_{t > \alpha} \Big[\omega : \frac{1}{m\gamma_{n,m}}\sum_{i=n+1}^m X_i \ge \frac{\alpha}{\gamma_{n,m}} - r\Big] \cap \Big[\omega : \frac{1}{n\gamma_{k,n}}\sum_{i=k+1}^n X_i \ge \frac{r}{\gamma_{k,n}} - t\Big] \cap [S_k \ge t].$$
We know that

$$\Big[\omega : \frac{1}{m\gamma_{n,m}}\sum_{i=n+1}^m X_i \ge \frac{\alpha}{\gamma_{n,m}} - t\Big] = \bigcup_{s_1} \cdots \bigcup_{s_{m-n-1}} [Y_1 \ge s_1] \cap \cdots \cap [Y_{m-n-1} \ge s_{m-n-1}] \cap \Big[Y_{m-n} \ge \frac{\alpha}{\gamma_{n,m}} - t - \sum_{j=1}^{m-n-1} s_j\Big];$$
$$\Big[\omega : \frac{1}{n\gamma_{k,n}}\sum_{i=k+1}^n X_i \ge \frac{r}{\gamma_{k,n}} - t\Big] = \bigcup_{h_1} \cdots \bigcup_{h_{n-k-1}} [T_1 \ge h_1] \cap \cdots \cap [T_{n-k-1} \ge h_{n-k-1}] \cap \Big[T_{n-k} \ge \frac{r}{\gamma_{k,n}} - t - \sum_{j=1}^{n-k-1} h_j\Big];$$
$$[S_k \ge t] = \bigcup_{d_1} \cdots \bigcup_{d_{k-1}} [Z_1 \ge d_1] \cap \cdots \cap [Z_{k-1} \ge d_{k-1}] \cap \Big[Z_k \ge t - \sum_{j=1}^{k-1} d_j\Big],$$

where $Y_i = (1/m\gamma_{n,m}) X_{i+n}$, $Z_i = (1/k) X_i$, and $T_i = (1/n\gamma_{k,n}) X_{i+k}$. Putting these equalities in (*), we get:
the representation of $[\omega : S_m \ge \alpha,\ S_n \ge \alpha,\ S_k \ge \alpha]$ as a union, over the four regions $\{r \le \alpha, t \le \alpha\}$, $\{r \le \alpha, t > \alpha\}$, $\{r > \alpha, t \le \alpha\}$, $\{r > \alpha, t > \alpha\}$ and over the rationals $s_1, \ldots, s_{m-n-1}$, $h_1, \ldots, h_{n-k-1}$, $d_1, \ldots, d_{k-1}$, of sets of the form

$$[Y_1 \ge s_1] \cap \cdots \cap [Y_{m-n-1} \ge s_{m-n-1}] \cap \Big[Y_{m-n} \ge \frac{\alpha}{\gamma_{n,m}} - r - \sum_{j=1}^{m-n-1} s_j\Big]$$
$$\cap\ [T_1 \ge h_1] \cap \cdots \cap [T_{n-k-1} \ge h_{n-k-1}] \cap \Big[T_{n-k} \ge \frac{r}{\gamma_{k,n}} - t - \sum_{j=1}^{n-k-1} h_j\Big]$$
$$\cap\ [Z_1 \ge d_1] \cap \cdots \cap [Z_{k-1} \ge d_{k-1}] \cap \Big[Z_k \ge t - \sum_{j=1}^{k-1} d_j\Big],$$

with $r$ replaced by $\alpha$ in the threshold of $T_{n-k}$ on the regions where $r \le \alpha$, and $t$ replaced by $\alpha$ in the threshold of $Z_k$ on the regions where $t \le \alpha$.

Now we can take a finite index set $I$ as we did before to prove part (i). Proceeding then in the same way, by using total monotonicity of $\nu$, independence, and this last set of equalities, we get that $\nu$ is larger than $\mu$ when we consider the above unions with indexes in $I$. By continuity of $\nu$, we finally obtain

$$\nu[\omega : S_m \ge \alpha,\ S_n \ge \alpha,\ S_k \ge \alpha] \ge \mu[\omega : S_m \ge \alpha,\ S_n \ge \alpha,\ S_k \ge \alpha] \tag{8}$$
as wanted. Now we complete the proof by induction. By (8), there are sets $A_1 \subseteq \mathbb{R}, \ldots, A_n \subseteq \mathbb{R}$ such that

$$[\omega : S_{i_1} \ge \alpha, \ldots, S_{i_n} \ge \alpha] = \bigcup_{\alpha_{i_1} \in A_{i_1}} \cdots \bigcup_{\alpha_{i_n} \in A_{i_n}} (X_{i_1} \ge \alpha_{i_1}) \cap \cdots \cap (X_{i_n} \ge \alpha_{i_n}).$$

Therefore

$$[\omega : S_{i_1} \ge \alpha, \ldots, S_{i_n} \ge \alpha,\ S_{i_{n+1}} \ge \alpha]$$
$$= \Big[\bigcup_{\alpha_{i_1}} \cdots \bigcup_{\alpha_{i_n}} (X_{i_1} \ge \alpha_{i_1}) \cap \cdots \cap (X_{i_n} \ge \alpha_{i_n})\Big] \cap [S_{i_{n+1}} \ge \alpha]$$
$$= \Big[\bigcup_{\alpha_{i_1}} \cdots \bigcup_{\alpha_{i_n}} (X_{i_1} \ge \alpha_{i_1}) \cap \cdots \cap (X_{i_n} \ge \alpha_{i_n})\Big] \cap \Big[\bigcup_{r_1} \cdots \bigcup_{r_n} (X_{i_1} \ge r_1) \cap \cdots \cap (X_{i_n} \ge r_n) \cap \Big(X_{i_{n+1}} \ge \alpha - \sum_j r_j\Big)\Big].$$

Consequently, there exist sets $A_1^* \subseteq \mathbb{R}, \ldots, A_n^* \subseteq \mathbb{R}, A_{n+1}^* \subseteq \mathbb{R}$ such that

$$[\omega : S_{i_1} \ge \alpha, \ldots, S_{i_n} \ge \alpha,\ S_{i_{n+1}} \ge \alpha] = \bigcup_{\alpha_{i_1} \in A_{i_1}^*} \cdots \bigcup_{\alpha_{i_n} \in A_{i_n}^*} \bigcup_{\alpha_{i_{n+1}} \in A_{i_{n+1}}^*} (X_{i_1} \ge \alpha_{i_1}) \cap \cdots \cap (X_{i_n} \ge \alpha_{i_n}) \cap (X_{i_{n+1}} \ge \alpha_{i_{n+1}}).$$

Using this equality, independence, and total monotonicity, we conclude that

$$\nu[\omega : S_{i_1} \ge \alpha, \ldots, S_{i_n} \ge \alpha] \ge \mu[\omega : S_{i_1} \ge \alpha, \ldots, S_{i_n} \ge \alpha]$$

as wanted.

Set $S^- = \liminf_n S_n$, $A_n = [\omega : S_n^-(\omega) \ge \alpha]$, and $A = [\omega : S^-(\omega) \ge \alpha]$. The sequence $\{A_n\}_{n \ge 1}$ is nondecreasing because the $S_n^-$ form a nondecreasing sequence. Moreover, $\bigcup_n A_n = A$. Since we have already proved that $\nu(A_n) \ge \mu(A_n)$ for each $n$, we get $\nu(A) \ge \mu(A)$ from the continuity of $\nu$ and $\mu$. This completes the proof of part (ii). ∎
Proof of Theorem 11. By Lemma 19(ii), we have

$$\mu(\omega : \liminf_n S_n \ge \alpha) \le \nu(\omega : \liminf_n S_n \ge \alpha).$$

On the other hand, the sequence $\{-X_k\}_{k \ge 1}$ is also made of independent and identically distributed random variables. Since we have also assumed that each $-X_k$ is regular, there exists a measure $m$ (in general, $\mu \neq m$) on $\sigma(X_1, \ldots, X_k, \ldots)$ such that under $m$ the sequence $\{-X_k\}_{k \ge 1}$ is made up of independent and identically distributed random variables, with $E_m(-X_k) = E_\nu(-X_k)$. By Lemma 19(ii), we have

$$m(\omega : \liminf_n (-S_n) \ge \alpha) \le \nu(\omega : \liminf_n (-S_n) \ge \alpha).$$

By the classic Kolmogorov Strong Law of Large Numbers, we have

$$\mu(\omega : \liminf_n S_n \ge E_\nu(X_1)) = m(\omega : \liminf_n (-S_n) \ge E_\nu(-X_1)) = 1.$$

Together with

$$[\omega : \liminf_n (-S_n) \ge E_\nu(-X_1)] = [\omega : \limsup_n S_n \le -E_\nu(-X_1)]$$

this implies

$$\nu(\omega : \liminf_n S_n \ge E_\nu(X_1)) = \nu(\omega : \limsup_n S_n \le -E_\nu(-X_1)) = 1.$$

Convexity of $\nu$ implies

$$\nu(\{\omega : E_\nu(X_1) \le \liminf_n S_n(\omega) \le \limsup_n S_n(\omega) \le -E_\nu(-X_1)\}) = 1$$

as wanted. ∎
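The frequentist content of Theorem 11 can be illustrated numerically. In the sketch below (our own toy example, not from the paper), $\nu = \mu^2$ on a four-point space is totally monotone, the uniform $\mu$ lies in the core of $\nu$ (since $\mu(A) \ge \mu(A)^2$), and the long-run average of i.i.d. draws under $\mu$ lands in the interval $[E_\nu(X_1), -E_\nu(-X_1)]$:

```python
import random

def choquet(values, capacity, states):
    # Discrete Choquet integral via the chain of upper level sets.
    order = sorted(states, key=lambda s: values[s])
    xs = [values[s] for s in order]
    total = xs[0]
    for i in range(1, len(order)):
        total += (xs[i] - xs[i - 1]) * capacity(frozenset(order[i:]))
    return total

states = [0, 1, 2, 3]
nu = lambda A: (len(A) / 4) ** 2          # nu = mu^2, mu uniform: totally monotone
X = {0: 1.0, 1: 2.0, 2: 3.0, 3: 4.0}

lower = choquet(X, nu, states)                            # E_nu(X1)
upper = -choquet({s: -X[s] for s in states}, nu, states)  # -E_nu(-X1)

random.seed(0)
n = 100_000
avg = sum(X[random.randrange(4)] for _ in range(n)) / n   # i.i.d. draws under mu
print(lower, upper)           # 1.875 3.125
assert lower <= avg <= upper  # the sample average lands in the interval
```

Here the interval $[1.875, 3.125]$ is a genuine interval rather than a single limit point, which is exactly the non-additive relaxation of Kolmogorov's law.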
Theorem 11 holds for a continuous and totally monotone capacity. For example, if $\mu$ is a measure, then $\mu^2$ is a simple example of a continuous and totally monotone capacity. We now provide a more interesting example.
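Total monotonicity of $\mu^2$ can be verified on a finite space through the Möbius transform: a capacity is totally monotone exactly when its Möbius inverse is nonnegative. A small sketch (the three-point space and the specific probability weights are our own choices):

```python
from itertools import combinations

def mobius(capacity, states):
    # Moebius inverse m(A) = sum over B subset of A of (-1)^{|A|-|B|} nu(B);
    # nu is totally monotone iff every m(A) is nonnegative.
    subsets = [frozenset(c) for r in range(len(states) + 1)
               for c in combinations(states, r)]
    return {A: sum((-1) ** (len(A) - len(B)) * capacity(B)
                   for B in subsets if B <= A)
            for A in subsets}

states = [0, 1, 2]
p = {0: 0.5, 1: 0.3, 2: 0.2}
nu = lambda A: sum(p[s] for s in A) ** 2     # nu = mu^2

m = mobius(nu, states)
print(all(v >= -1e-12 for v in m.values()))  # True
```

For $\mu^2$ the Möbius mass of a pair $\{i, j\}$ works out to $2 p_i p_j \ge 0$, which is why the check succeeds for any underlying $\mu$.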
Example. Let $\Omega_1$ and $\Omega_2$ be two infinite spaces, and $\Gamma : \Omega_1 \to 2^{\Omega_2}$ a correspondence mapping points of $\Omega_1$ to subsets of $\Omega_2$. For each $A \subseteq \Omega_2$, set $A^* = \{\omega \in \Omega_1 : \Gamma(\omega) \subseteq A\}$. Let $\mu$ be a measure on the power set $2^{\Omega_1}$. Define a set function $\nu$ on $2^{\Omega_2}$ as follows:

$$\nu(A) = \mu(A^*) \quad \text{for all } A \subseteq \Omega_2.$$

As is well known, $\nu$ is totally monotone (cf. Dempster [4]). We prove that if $|\Gamma(\omega)|$ is finite for all $\omega \in \Omega_1$, then $\nu$ is continuous. Let $\{A_n\}_{n \ge 1}$ be a nondecreasing sequence in $\Omega_2$ such that $\bigcup_{n \ge 1} A_n = \Omega_2$. It is easy to check that the sequence $\{A_n^*\}_{n \ge 1}$ is nondecreasing in $\Omega_1$. Moreover, for each $\omega \in \Omega_1$ there exists $n_\omega$ such that $\Gamma(\omega) \subseteq A_n$ for all $n \ge n_\omega$. For, suppose to the contrary that for all $n \ge 1$ it holds $\Gamma(\omega) \not\subseteq A_n$ (this is the correct negation because $A_n \subseteq A_{n+1}$ for all $n \ge 1$). Since $|\Gamma(\omega)|$ is finite, there exists at least one $\hat{\omega} \in \Gamma(\omega)$ such that for all $n \ge 1$ there exists $\hat{n}_n \ge n$ with $\hat{\omega} \notin A_{\hat{n}_n}$. Otherwise for all $\omega' \in \Gamma(\omega)$ we would eventually have $\omega' \in A_n$. All this implies the existence of a subsequence $\{A_{n_j}\}_{j \ge 1} \subseteq \{A_n\}_{n \ge 1}$ and an $\hat{\omega} \in \Gamma(\omega)$ such that $\bigcup_{j \ge 1} A_{n_j} = \Omega_2$ and $\hat{\omega} \notin A_{n_j}$ for all $j \ge 1$. But this is impossible. Therefore, for each $\omega \in \Omega_1$ we have eventually $\Gamma(\omega) \subseteq A_n$. This implies $\bigcup_{n \ge 1} A_n^* = \Omega_1$. Using the continuity of $\mu$ we conclude

$$\lim_n \nu(A_n) = \lim_n \mu(A_n^*) = \mu(\Omega_1) = 1$$

as wanted.
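The construction in the example is easy to experiment with on finite spaces. The sketch below (our own toy data, with finite $\Omega_1$ and $\Omega_2$) builds $\nu(A) = \mu(A^*)$ from a correspondence $\Gamma$ and verifies convexity (2-monotonicity), a consequence of the total monotonicity cited above:

```python
from itertools import combinations

# Belief function from a correspondence (Dempster's construction):
# nu(A) = mu({w in Omega1 : Gamma(w) is contained in A}).
omega1 = [0, 1, 2]
mu = {0: 0.5, 1: 0.3, 2: 0.2}
gamma = {0: frozenset({'a'}), 1: frozenset({'a', 'b'}), 2: frozenset({'b', 'c'})}
omega2 = ['a', 'b', 'c']

def nu(A):
    A = frozenset(A)
    return sum(mu[w] for w in omega1 if gamma[w] <= A)

subsets = [frozenset(c) for r in range(len(omega2) + 1)
           for c in combinations(omega2, r)]
# Convexity (2-monotonicity): nu(A u B) + nu(A n B) >= nu(A) + nu(B).
convex = all(nu(A | B) + nu(A & B) >= nu(A) + nu(B) - 1e-12
             for A in subsets for B in subsets)
print(convex)  # True
```

Continuity is automatic here because the spaces are finite; the finiteness of each $\Gamma(\omega)$ is what the example above needs when $\Omega_2$ is infinite.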
Remark. In Sections 4 and 5 we have assumed continuity of the capacity $\nu$ on $S$, except when all random variables are finite-valued. However, all our results can be proved under the following somewhat weaker condition.

Definition 20. Let $\{X_k\}_{k \ge 1}$ be a sequence of random variables defined on a $\sigma$-algebra $S$. An $A(X_1, \ldots, X_n, \ldots)$-capacity is an outer continuous capacity $\nu$ on $S$ such that for every nondecreasing sequence $\{A_n\}_{n \ge 1}$ of subsets of $A(X_k)$ with $\bigcup_{n \ge 1} A_n = \Omega$ we have $\lim_n \nu(A_n) = \nu(\Omega)$.

Here we ask full continuity only on the algebras $A(X_k)$ and only outer continuity on the whole $\sigma$-algebra $S$.
8.3. Section 6

We prove Theorem 13. Notation is as in the proof of Theorem 11. By the standard weak law, for every $\varepsilon > 0$ it holds
$$\lim_n \mu(\omega : S_n \geq E_\nu(X_1) - \varepsilon) = 1 \quad \text{and} \quad \lim_n m(\omega : -S_n \geq E_\nu(-X_1) - \varepsilon) = 1.$$
By Lemma 19,
$$\mu(\omega : S_n \geq E_\nu(X_1) - \varepsilon) \leq \nu(\omega : S_n \geq E_\nu(X_1) - \varepsilon),$$
$$m(\omega : -S_n \geq E_\nu(-X_1) - \varepsilon) \leq \nu(\omega : -S_n \geq E_\nu(-X_1) - \varepsilon).$$
Therefore
$$\lim_n \nu(\omega : S_n \geq E_\nu(X_1) - \varepsilon) = \lim_n \nu(\omega : -S_n \geq E_\nu(-X_1) - \varepsilon) = 1$$
and, by monotonicity,
$$\lim_n \nu(\{\omega : E_\nu(X_1) - \varepsilon \leq S_n(\omega)\} \cup \{\omega : S_n(\omega) \leq -E_\nu(-X_1) + \varepsilon\}) = 1.$$
Convexity of $\nu$ then implies
$$\begin{aligned}
1 &\geq \lim_n \nu(\{\omega : E_\nu(X_1) - \varepsilon \leq S_n(\omega) \leq -E_\nu(-X_1) + \varepsilon\}) \\
&\geq \lim_n \nu(\omega : S_n \geq E_\nu(X_1) - \varepsilon) + \lim_n \nu(\omega : S_n \leq -E_\nu(-X_1) + \varepsilon) \\
&\quad - \lim_n \nu(\{\omega : E_\nu(X_1) - \varepsilon \leq S_n(\omega)\} \cup \{\omega : S_n(\omega) \leq -E_\nu(-X_1) + \varepsilon\}) \\
&= 1 + 1 - 1 = 1,
\end{aligned}$$
as wanted. Parts (i) and (ii) are proved in a similar way. $\blacksquare$
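As a numerical illustration of the statement just proved (ours, not part of the paper's argument), one can take $\nu$ to be the lower envelope of finitely many product measures; then $E_\nu(X_1)$ and $-E_\nu(-X_1)$ are the smallest and largest means in the family, and the $\nu$-probability of the band $[E_\nu(X_1) - \varepsilon,\, -E_\nu(-X_1) + \varepsilon]$ is the worst case over the family. All numbers below are invented.

```python
import random

# Sketch: nu = lower envelope of two i.i.d. Bernoulli product measures.
# Then E_nu(X1) = min p, -E_nu(-X1) = max p, and the weak law says the
# nu-probability of the band [min p - eps, max p + eps] tends to 1.
random.seed(0)
probs = [0.4, 0.6]                # the two measures defining the envelope
lo, hi = min(probs), max(probs)   # E_nu(X1) and -E_nu(-X1)

def band_freq(p, n, eps, trials=1000):
    """Empirical P_p(lo - eps <= S_n <= hi + eps)."""
    hits = 0
    for _ in range(trials):
        s = sum(random.random() < p for _ in range(n)) / n
        hits += (lo - eps <= s <= hi + eps)
    return hits / trials

eps, n = 0.05, 1000
nu_band = min(band_freq(p, n, eps) for p in probs)  # lower envelope
print(nu_band > 0.99)
```

The worst-case frequency over the family is already essentially 1 at this sample size, as the theorem predicts.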
8.4. Section 7.1

In order to prove Theorem 15, we first derive the following simple, but important, property of controlled capacities.

Proposition 21. Let $\nu$ be a controlled capacity on $\Sigma$, and $\{X_k\}_{k \geq 1}$ a sequence of regular and independent random variables on $\Sigma$. Let $\mu$ be the mass provided by Lemma 18. For every finite subclass $\{X_k\}_{k \in I}$ we have $\mu(A) \leq \nu(A)$ for each $A \in \mathcal{A}(\{X_k\}_{k \in I})$.
185
LIMIT LAWS
Proof. Using the notation of the proof of Lemma 18, we have $\mu_k \in C(\nu_k)$, where $\nu_k$ is the restriction of $\nu$ to $\mathcal{A}(X_k)$. Recall that for each set $A \in \mathcal{A}(\{X_k\}_{k \in I})$ there exists an integer $n$ such that $A = \bigcup_{i=1}^n \bigcap_{k \in I} A_k^i$, where $A_k^i \in \mathcal{A}(X_k)$ for each $k \in I$, and the sets $\bigcap_{k \in I} A_k^i$ are pairwise disjoint. Let $m \in ba(\Omega, \Sigma)$, and let $m_k$ be the restriction of $m$ to $\mathcal{A}(X_k)$. By Definition 14,
$$\nu\Big(\bigcup_{i=1}^n \bigcap_{k \in I} A_k^i\Big) = \min\Big\{\sum_{i=1}^n \prod_{k \in I} m_k(A_k^i) : m \in ba(\Omega, \Sigma)\Big\}.$$
The result is now an easy consequence of the definition of $\mu$. $\blacksquare$
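The minimum-of-products representation in the display can be made concrete in a toy finite setting. In the sketch below (ours, for illustration only) the convex cores $C(\nu_k)$ are replaced by two hand-picked extreme points each, and $\nu$ of a rectangle is the minimum of the product probabilities.

```python
from itertools import product

# Toy illustration of the minimum-of-products idea: nu of a rectangle
# A1 x A2 is the minimum of m1(A1) * m2(A2) over measures m_k drawn from
# the core of each marginal. The cores below are finite stand-ins
# (invented numbers) for what are actually convex sets of measures.
core1 = [{"H": 0.25, "T": 0.75}, {"H": 0.75, "T": 0.25}]
core2 = [{"H": 0.5, "T": 0.5}, {"H": 0.25, "T": 0.75}]

def nu_rect(A1, A2):
    """Lower envelope of the product measures on the rectangle A1 x A2."""
    return min(sum(m1[x] for x in A1) * sum(m2[y] for y in A2)
               for m1, m2 in product(core1, core2))

print(nu_rect({"H"}, {"H"}))  # min over the four products: 0.25 * 0.25 = 0.0625
```

Because the minimum runs over product measures, the marginal random variables are automatically "independent" in the multiplicative sense used in the paper.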
We now state and prove a version of Lemma 19 for controlled capacities. The improvement is twofold. On the one hand, we get equalities where before we had only inequalities. On the other hand, to prove (iii) below we do not need total monotonicity; for controlled capacities the convexity of the restrictions $\nu_k$ is enough.
Lemma 22. Let $\Omega$ be a compact space, and $\nu$ a continuous capacity on the $\sigma$-algebra $\Sigma$ of subsets of $\Omega$. Let $\{X_k\}_{k \geq 1}$ be a sequence of continuous, regular, and independent random variables relative to $\nu$. Set $S_n = (1/n) \sum_{i=1}^n X_i$, and let $\mu$ be the measure provided by Theorem 6. If $\nu$ is controlled and if each $\nu_k$ is convex on $\mathcal{A}(X_k)$, then

(i) $\mu(\omega : S_n(\omega) \geq \alpha) = \nu(\omega : S_n(\omega) \geq \alpha)$ for all $\alpha \in \mathbb{R}$,

(ii) $\mu(\omega : S_n(\omega) > \alpha) = \nu(\omega : S_n(\omega) > \alpha)$ for all $\alpha \in \mathbb{R}$,

(iii) $\mu(\omega : \liminf_n S_n \geq \alpha) = \nu(\omega : \liminf_n S_n \geq \alpha)$ for all $\alpha \in \mathbb{R}$.
Proof. Consider the index set $I = \{r_1, \ldots, r_n\}$. As in Lemma 19, set
$$A_r = \{\omega : X_1(\omega) \geq r\} \quad \text{and} \quad A_{\alpha - r} = \{\omega : X_2(\omega) \geq \alpha - r\}.$$
Without loss, suppose $r_1 \leq \cdots \leq r_n$. We have
$$A_{r(1)} \supseteq \cdots \supseteq A_{r(n)} \quad \text{and} \quad A_{\alpha - r(1)} \subseteq \cdots \subseteq A_{\alpha - r(n)}.$$
This implies that for any $T \subseteq I$ the set $\bigcap_{r \in T} (A_r \cap A_{\alpha - r})$ has the form $A_{r'} \cap A_{\alpha - r}$ for some $r, r' \in I$, so that $\nu(\bigcap_{r \in T} (A_r \cap A_{\alpha - r})) = \mu(\bigcap_{r \in T} (A_r \cap A_{\alpha - r}))$. Therefore,
$$\begin{aligned}
\nu\Big(\bigcup_{r \in I} (A_r \cap A_{\alpha - r})\Big)
&= \int 1_{\bigcup_{r \in I} (A_r \cap A_{\alpha - r})} \, d\nu
= \int \sum_{\{T : \varnothing \neq T \subseteq I\}} (-1)^{|T|+1} 1_{\bigcap_{r \in T} (A_r \cap A_{\alpha - r})} \, d\nu \\
&= \min\Big\{ \int \sum_{\{T : \varnothing \neq T \subseteq I\}} (-1)^{|T|+1} 1_{\bigcap_{r \in T} (A_r \cap A_{\alpha - r})} \, d\mu :
\mu(A_{r(i)}) = \mu_1(A_{r(i)}),\ \mu(A_{\alpha - r(i)}) = \mu_1(A_{\alpha - r(i)}) \text{ for all } i \in I \Big\} \\
&= \sum_{\{T : \varnothing \neq T \subseteq I\}} (-1)^{|T|+1} \mu\Big(\bigcap_{r \in T} (A_r \cap A_{\alpha - r})\Big)
= \sum_{\{T : \varnothing \neq T \subseteq I\}} (-1)^{|T|+1} \nu\Big(\bigcap_{r \in T} (A_r \cap A_{\alpha - r})\Big).
\end{aligned}$$
Therefore
$$\nu\Big(\bigcup_{r \in I} (A_r \cap A_{\alpha - r})\Big) = \sum_{\{T : \varnothing \neq T \subseteq I\}} (-1)^{|T|+1} \nu\Big(\bigcap_{r \in T} (A_r \cap A_{\alpha - r})\Big) \tag{9}$$
and
$$\nu\Big(\bigcup_{r \in I} (A_r \cap A_{\alpha - r})\Big) = \mu\Big(\bigcup_{r \in I} (A_r \cap A_{\alpha - r})\Big).$$
Continuity of $\mu$ and $\nu$ then implies $\nu(\{\omega : X_1 + X_2 \geq \alpha\}) = \mu(\{\omega : X_1 + X_2 \geq \alpha\})$.

The above argument can be extended to the sum of $n$ random variables proceeding, mutatis mutandis, as we did in Lemma 19. The proof of point (ii) is similar. It suffices to consider the chain of upper sets $\{[X_k > \alpha] : \alpha \in \mathbb{R}\}$.
We finally prove (iii). Suppose $m > n$. We claim that for any $\alpha, \beta \in \mathbb{R}$ it holds
$$\nu\Big\{\omega : \sum_{i=1}^n X_i(\omega) \geq \alpha,\ \sum_{i=n+1}^m X_i(\omega) \geq \beta\Big\} = \mu\Big\{\omega : \sum_{i=1}^n X_i(\omega) \geq \alpha,\ \sum_{i=n+1}^m X_i(\omega) \geq \beta\Big\}.$$
We have
$$\Big\{\omega : \sum_{i=1}^n X_i \geq \alpha\Big\} = \bigcup_{t_1} \cdots \bigcup_{t_{n-1}} [X_1 \geq t_1] \cap \cdots \cap [X_{n-1} \geq t_{n-1}] \cap \Big[X_n \geq \alpha - \sum_{j=1}^{n-1} t_j\Big]$$
and
$$\Big\{\omega : \sum_{i=n+1}^m X_i \geq \beta\Big\} = \bigcup_{s_{n+1}} \cdots \bigcup_{s_{m-1}} [X_{n+1} \geq s_{n+1}] \cap \cdots \cap [X_{m-1} \geq s_{m-1}] \cap \Big[X_m \geq \beta - \sum_{j=n+1}^{m-1} s_j\Big],$$
where both the $s_i$ and the $t_i$ range over the rational numbers. Then
$$\begin{aligned}
&\Big\{\omega : \sum_{i=1}^n X_i(\omega) \geq \alpha,\ \sum_{i=n+1}^m X_i(\omega) \geq \beta\Big\} \\
&\quad = \Big[\bigcup_{t_1} \cdots \bigcup_{t_{n-1}} [X_1 \geq t_1] \cap \cdots \cap [X_{n-1} \geq t_{n-1}] \cap \Big[X_n \geq \alpha - \sum_{j=1}^{n-1} t_j\Big]\Big] \\
&\qquad \cap \Big[\bigcup_{s_{n+1}} \cdots \bigcup_{s_{m-1}} [X_{n+1} \geq s_{n+1}] \cap \cdots \cap [X_{m-1} \geq s_{m-1}] \cap \Big[X_m \geq \beta - \sum_{j=n+1}^{m-1} s_j\Big]\Big] \\
&\quad = \bigcup_{t_1} \cdots \bigcup_{t_{n-1}} \bigcup_{s_{n+1}} \cdots \bigcup_{s_{m-1}} [X_1 \geq t_1] \cap \cdots \cap [X_{n-1} \geq t_{n-1}] \cap \Big[X_n \geq \alpha - \sum_{j=1}^{n-1} t_j\Big] \\
&\qquad \cap [X_{n+1} \geq s_{n+1}] \cap \cdots \cap [X_{m-1} \geq s_{m-1}] \cap \Big[X_m \geq \beta - \sum_{j=n+1}^{m-1} s_j\Big].
\end{aligned}$$
Set
$$I = \{t_1(i_1)\}_{i_1=1}^k \cup \cdots \cup \{t_{n-1}(i_{n-1})\}_{i_{n-1}=1}^k \cup \{s_{n+1}(i_{n+1})\}_{i_{n+1}=1}^k \cup \cdots \cup \{s_{m-1}(i_{m-1})\}_{i_{m-1}=1}^k.$$
W.l.o.g. suppose $s_j(i_j) \leq s_j(i_j + 1)$ for all $n+1 \leq j \leq m-1$, and $t_j(i_j) \leq t_j(i_j + 1)$ for all $1 \leq j \leq n-1$. Therefore:
$$[X_j \geq s_j(1)] \supseteq \cdots \supseteq [X_j \geq s_j(k)] \quad \text{for all } n+1 \leq j \leq m-1,$$
$$[X_j \geq t_j(1)] \supseteq \cdots \supseteq [X_j \geq t_j(k)] \quad \text{for all } 1 \leq j \leq n-1,$$
$$[X_m \geq \beta - s_{n+1}(1) - \cdots - s_{m-1}(1)] \subseteq [X_m \geq \beta - s_{n+1}(2) - \cdots - s_{m-1}(1)] \subseteq \cdots \subseteq [X_m \geq \beta - s_{n+1}(k-1) - \cdots - s_{m-1}(k-1)],$$
$$[X_n \geq \alpha - t_1(1) - \cdots - t_{n-1}(1)] \subseteq [X_n \geq \alpha - t_1(2) - \cdots - t_{n-1}(1)] \subseteq \cdots \subseteq [X_n \geq \alpha - t_1(k-1) - \cdots - t_{n-1}(k-1)].$$
Now, using the last set of inclusions, we can proceed as we did in part (i) to get equality (9). We then get
$$\begin{aligned}
&\nu\Big(\bigcup_{t_1 \in I} \cdots \bigcup_{t_{n-1} \in I}\ \bigcup_{s_{n+1} \in I} \cdots \bigcup_{s_{m-1} \in I} [X_1 \geq t_1] \cap \cdots \cap [X_{n-1} \geq t_{n-1}] \cap \Big[X_n \geq \alpha - \sum_{j=1}^{n-1} t_j\Big] \\
&\hspace{6em} \cap [X_{n+1} \geq s_{n+1}] \cap \cdots \cap [X_{m-1} \geq s_{m-1}] \cap \Big[X_m \geq \beta - \sum_{j=n+1}^{m-1} s_j\Big]\Big) \\
&\quad = \mu\Big(\bigcup_{t_1 \in I} \cdots \bigcup_{t_{n-1} \in I}\ \bigcup_{s_{n+1} \in I} \cdots \bigcup_{s_{m-1} \in I} [X_1 \geq t_1] \cap \cdots \cap [X_{n-1} \geq t_{n-1}] \cap \Big[X_n \geq \alpha - \sum_{j=1}^{n-1} t_j\Big] \\
&\hspace{6em} \cap [X_{n+1} \geq s_{n+1}] \cap \cdots \cap [X_{m-1} \geq s_{m-1}] \cap \Big[X_m \geq \beta - \sum_{j=n+1}^{m-1} s_j\Big]\Big).
\end{aligned}$$
By continuity we finally obtain
$$\nu\Big\{\omega : \sum_{i=1}^n X_i(\omega) \geq \alpha,\ \sum_{i=n+1}^m X_i(\omega) \geq \beta\Big\} = \mu\Big\{\omega : \sum_{i=1}^n X_i(\omega) \geq \alpha,\ \sum_{i=n+1}^m X_i(\omega) \geq \beta\Big\}.$$
Since $\nu$ is controlled, this implies the claim. Then
$$\begin{aligned}
\nu\Big\{\omega : \sum_{i=1}^n X_i(\omega) \geq \alpha,\ \sum_{i=n+1}^m X_i(\omega) \geq \beta\Big\}
&= \mu\Big\{\omega : \sum_{i=1}^n X_i(\omega) \geq \alpha,\ \sum_{i=n+1}^m X_i(\omega) \geq \beta\Big\} \\
&= \mu\Big\{\omega : \sum_{i=1}^n X_i(\omega) \geq \alpha\Big\}\, \mu\Big\{\omega : \sum_{i=n+1}^m X_i(\omega) \geq \beta\Big\} \\
&= \nu\Big\{\omega : \sum_{i=1}^n X_i(\omega) \geq \alpha\Big\}\, \nu\Big\{\omega : \sum_{i=n+1}^m X_i(\omega) \geq \beta\Big\}.
\end{aligned} \tag{10}$$
In part (ii) of Lemma 19 we have proved that
$$\begin{aligned}
[\omega : S_m \geq \alpha,\ S_n \geq \beta]
&= \bigcup_{r \leq \beta} \Big[\Big\{\omega : \frac{1}{n} \sum_{i=n+1}^m X_i \geq \frac{m}{n}\,\alpha - r\Big\} \cap [\omega : S_n \geq \beta]\Big] \\
&\quad \cup \bigcup_{r > \beta} \Big[\Big\{\omega : \frac{1}{n} \sum_{i=n+1}^m X_i \geq \frac{m}{n}\,\alpha - r\Big\} \cap [\omega : S_n \geq r]\Big],
\end{aligned}$$
where $r$ ranges over the rationals.
Let $I = \{r_1, \ldots, r_n\}$ be a finite sequence of rationals. W.l.o.g., suppose $r_1 \leq \cdots \leq r_n$. Moreover, for convenience, suppose $\alpha \leq r_1 \leq \cdots \leq r_n$. Set
$$A_r = \begin{cases} [\omega : S_n \geq r] & \text{if } r > \beta, \\ [\omega : S_n \geq \beta] & \text{if } r \leq \beta, \end{cases}
\qquad
A_{\alpha - r} = \Big\{\omega : \frac{1}{n} \sum_{i=n+1}^m X_i \geq \frac{m}{n}\,\alpha - r\Big\}.$$
By the claim we have just proved, we have
$$\nu(A_r \cap A_{\alpha - r}) = \mu(A_r \cap A_{\alpha - r}). \tag{11}$$
Moreover,
$$A_{r(1)} \supseteq \cdots \supseteq A_{r(n)} \quad \text{and} \quad A_{\alpha - r(1)} \subseteq \cdots \subseteq A_{\alpha - r(n)},$$
and
$$\nu(A_r) = \mu(A_r) \quad \text{and} \quad \nu(A_{\alpha - r}) = \mu(A_{\alpha - r}).$$
Therefore, proceeding as we did in part (i) we get
$$\begin{aligned}
\nu\Big(\bigcup_{r \in I} (A_r \cap A_{\alpha - r})\Big)
&= \sum_{\{T : \varnothing \neq T \subseteq I\}} (-1)^{|T|+1} \nu\Big(\bigcap_{r \in T} (A_r \cap A_{\alpha - r})\Big) \\
&= \sum_{\{T : \varnothing \neq T \subseteq I\}} (-1)^{|T|+1} \mu\Big(\bigcap_{r \in T} (A_r \cap A_{\alpha - r})\Big)
= \mu\Big(\bigcup_{r \in I} (A_r \cap A_{\alpha - r})\Big).
\end{aligned}$$
Using continuity of $\nu$ and $\mu$, we conclude that
$$\nu[\omega : S_m \geq \alpha,\ S_n \geq \beta] = \mu[\omega : S_m \geq \alpha,\ S_n \geq \beta].$$
Before going on we prove the following claim: Suppose (i) $A \in C_{n+1}$, (ii) $B \in \mathcal{A}(X_1, \ldots, X_n)$, and (iii) $\nu(B) = \mu(B)$; then $\nu(A \cap B) = \mu(A \cap B)$. In fact, we have $\nu(A) = \mu(A)$. Moreover, since $\nu$ is controlled, it holds
$$\nu(A \cap B) = \min\{\mu'(A \cap B) : \mu'_k \in C(\nu_k)\} = \min\{\mu'(A)\,\mu'(B) : \mu'_k \in C(\nu_k)\} = \mu(A)\,\mu(B) = \mu(A \cap B).$$
The second equality follows from the definition of controlled capacity, and from (i) and (ii). The third one follows from (iii) and $\nu(A) = \mu(A)$. We now prove by induction that
$$\nu[\omega : S_1 \geq \alpha_1, \ldots, S_n \geq \alpha_n] = \mu[\omega : S_1 \geq \alpha_1, \ldots, S_n \geq \alpha_n].$$
The first step of the induction has already been proved. We can write
$$\begin{aligned}
\{\omega : S_1 \geq \alpha_1, \ldots, S_n \geq \alpha_n\}
&= \Big\{\omega : S_1 \geq \alpha_1, \ldots, S_{n-1} \geq \alpha_{n-1},\ \frac{1}{n} \sum_{i=1}^n X_i \geq \alpha_n\Big\} \\
&= \Big\{\omega : S_1 \geq \alpha_1, \ldots, S_{n-1} \geq \alpha_{n-1},\ \frac{n-1}{n} S_{n-1} + \frac{1}{n} X_n \geq \alpha_n\Big\} \\
&= \{\omega : S_1 \geq \alpha_1, \ldots, S_{n-1} \geq \alpha_{n-1}\} \cap \bigcup_r \Big[\Big(\frac{n-1}{n} S_{n-1} \geq r\Big) \cap \Big(\omega : \frac{1}{n} X_n \geq \alpha_n - r\Big)\Big] \\
&= [\omega : S_1 \geq \alpha_1] \cap \cdots \cap [\omega : S_{n-2} \geq \alpha_{n-2}] \\
&\quad \cap \Big[\bigcup_{r > \frac{n-1}{n}\alpha_{n-1}} \Big(\Big\{\frac{n-1}{n} S_{n-1} \geq r\Big\} \cap \Big\{\omega : \frac{1}{n} X_n \geq \alpha_n - r\Big\}\Big) \\
&\qquad \cup \bigcup_{r \leq \frac{n-1}{n}\alpha_{n-1}} \Big(\{\omega : S_{n-1} \geq \alpha_{n-1}\} \cap \Big\{\omega : \frac{1}{n} X_n \geq \alpha_n - r\Big\}\Big)\Big],
\end{aligned}$$
where $r$ ranges over the rationals.
Let $I = \{r_1, \ldots, r_n\}$ be a finite sequence of rationals. W.l.o.g. suppose $r_1 \leq \cdots \leq r_n$. Set
$$A_{\alpha - r} = \Big\{\omega : \frac{1}{n} X_n \geq \alpha_n - r\Big\},$$
$$B = (\omega : S_1 \geq \alpha_1) \cap \cdots \cap (\omega : S_{n-2} \geq \alpha_{n-2}),$$
$$A_r = \begin{cases} \Big\{\frac{n-1}{n} S_{n-1} \geq r\Big\} \cap B & \text{if } r \geq \frac{n-1}{n}\alpha_{n-1}, \\ \{S_{n-1} \geq \alpha_{n-1}\} \cap B & \text{if } r < \frac{n-1}{n}\alpha_{n-1}. \end{cases}$$
We have
$$A_{r(1)} \supseteq \cdots \supseteq A_{r(n)} \quad \text{and} \quad A_{\alpha - r(1)} \subseteq \cdots \subseteq A_{\alpha - r(n)}. \tag{12}$$
The induction hypothesis says that $\nu[\omega : S_1 \geq \beta_1, \ldots, S_{n-1} \geq \beta_{n-1}]$ is equal to $\mu[\omega : S_1 \geq \beta_1, \ldots, S_{n-1} \geq \beta_{n-1}]$ for any finite sequence $\{\beta_i\}_{i=1}^{n-1}$ of real numbers. Hence $\nu(A_r) = \mu(A_r)$. Clearly, $\nu(A_{\alpha - r}) = \mu(A_{\alpha - r})$. Therefore, by the claim we have proved, $\nu(A_r \cap A_{\alpha - r}) = \mu(A_r \cap A_{\alpha - r})$. Hence
$$\begin{aligned}
\nu(A_{r(i)} \cap A_{\alpha - r(i)}) &= \mu(A_{r(i)} \cap A_{\alpha - r(i)}) = \mu(A_{r(i)})\,\mu(A_{\alpha - r(i)}) = \nu(A_{r(i)})\,\nu(A_{\alpha - r(i)}), \\
\nu(A_{r(i)} \cap A_{\alpha - r(i-1)}) &= \mu(A_{r(i)} \cap A_{\alpha - r(i-1)}) = \mu(A_{r(i)})\,\mu(A_{\alpha - r(i-1)}) = \nu(A_{r(i)})\,\nu(A_{\alpha - r(i-1)})
\end{aligned}$$
for all $1 \leq i \leq n$. Consequently,
$$\begin{aligned}
\nu\Big(\bigcup_{r \in I} (A_r \cap A_{\alpha - r})\Big)
&= \sum_{\{T : \varnothing \neq T \subseteq I\}} (-1)^{|T|+1} \nu\Big(\bigcap_{r \in T} (A_r \cap A_{\alpha - r})\Big) \\
&= \sum_{\{T : \varnothing \neq T \subseteq I\}} (-1)^{|T|+1} \mu\Big(\bigcap_{r \in T} (A_r \cap A_{\alpha - r})\Big)
= \mu\Big(\bigcup_{r \in I} (A_r \cap A_{\alpha - r})\Big).
\end{aligned}$$
Using continuity of $\nu$ and $\mu$, we conclude that
$$\nu[\omega : S_1 \geq \alpha_1, \ldots, S_n \geq \alpha_n] = \mu[\omega : S_1 \geq \alpha_1, \ldots, S_n \geq \alpha_n].$$
To complete the proof it suffices to use continuity of $\nu$ and $\mu$. $\blacksquare$
Proof of Theorem 15. It is easy to see that
$$\int_0^\infty \nu[X \geq \alpha] \, d\alpha + \int_{-\infty}^0 [\nu[X \geq \alpha] - 1] \, d\alpha = \int_0^\infty \nu[X > \alpha] \, d\alpha + \int_{-\infty}^0 [\nu[X > \alpha] - 1] \, d\alpha.$$
By Lemma 22,
$$\mu(\omega : S_n > \alpha) = \nu(\omega : S_n > \alpha) \quad \text{for all } \alpha \in \mathbb{R}.$$
By the classic Kolmogorov Law of Large Numbers,
$$\lim_n \mu(\omega : S_n > E_\nu(X_1)) = 0.$$
Therefore,
$$\lim_n \nu(\omega : S_n > E_\nu(X_1)) = 0,$$
which implies, by continuity of $\nu$,
$$\nu(\omega : \liminf_n S_n(\omega) > E_\nu(X_1)) = 0.$$
A similar argument for the sequence $\{-X_k\}_{k \geq 1}$ implies
$$\nu(\omega : \liminf_n S_n(\omega) < -E_\nu(-X_1)) = 0.$$
This completes the proof. $\blacksquare$
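The two integrals in the first display above are the Choquet integral of $X$ with respect to $\nu$. For a nonnegative random variable on a finite space the integral reduces to a finite sum over upper level sets. The following sketch is ours (the data and the names `choquet` and `nu` are invented for illustration); it also shows that for a lower-envelope capacity the Choquet integral picks out the minimal expectation.

```python
# Discrete Choquet integral: for nonnegative X on a finite space,
# E_nu(X) = sum_i (x_i - x_{i-1}) * nu(X >= x_i) over the sorted values x_i.
def choquet(X, nu):
    vals = sorted(set(X.values()))
    total, prev = 0.0, 0.0
    for v in vals:
        upper = {w for w, x in X.items() if x >= v}
        total += (v - prev) * nu(upper)
        prev = v
    return total

# nu = lower envelope of two measures on {"a", "b", "c"} (invented numbers).
measures = [{"a": 0.25, "b": 0.25, "c": 0.5}, {"a": 0.5, "b": 0.25, "c": 0.25}]
def nu(A):
    return min(sum(m[w] for w in A) for m in measures)

X = {"a": 1.0, "b": 2.0, "c": 4.0}
# For this nu the Choquet integral equals the minimal expectation:
# min(0.25*1 + 0.25*2 + 0.5*4, 0.5*1 + 0.25*2 + 0.25*4) = 2.0
print(choquet(X, nu))  # 2.0
```

Replacing `>=` with `>` in the upper level sets leaves the value unchanged here, mirroring the identity with which the proof opens.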
8.5. Section 7.2

In this subsection we prove the classic Central Limit Theorem for capacities. We first need a stronger version of Lemma 18, in which the mass $\mu$ preserves all moments $E_\nu(X_k^n)$ of the random variables, and not just the means $E_\nu(X_k)$.

Lemma 23. Let $\nu$ be a convex capacity on $\Sigma$. Let $\{X_k\}_{k \geq 1}$ be a sequence of regular, independent, and nonnegative random variables relative to $\nu$. Then there exists a mass $\mu$ defined on $\mathcal{A}(X_1, \ldots, X_k, \ldots)$ such that

(i) $\{X_k\}_{k \geq 1}$ is a sequence of independent random variables relative to $\mu$;

(ii) $E_\nu(X_k^n) = E_\mu(X_k^n)$ for all $k \geq 1$ and $n > 0$.

Proof. Since the random variables are nonnegative, we have $[\omega : X_k^n \geq \alpha] = [\omega : X_k \geq \alpha^{1/n}]$ for each $\alpha \geq 0$. Therefore,
$$\nu[\omega : X_k^n \geq \alpha] = \nu[\omega : X_k \geq \alpha^{1/n}] = \mu[\omega : X_k \geq \alpha^{1/n}] = \mu[\omega : X_k^n \geq \alpha].$$
This implies $E_\nu(X_k^n) = E_\mu(X_k^n)$, as wanted. $\blacksquare$
The following lemma, whose simple proof is omitted, is the version of Theorem 6 that we need for the sequel.

Lemma 24. Let $\Omega$ be a compact space, and $\nu$ a convex and continuous capacity on the $\sigma$-algebra $\Sigma$ of subsets of $\Omega$. Let $\{X_k\}_{k \geq 1}$ be a sequence of continuous, regular, independent, and nonnegative random variables relative to $\nu$. Then there exists a measure $\mu$ defined on $\sigma(X_1, \ldots, X_k, \ldots)$ such that

(1) $\{X_k\}_{k \geq 1}$ is a sequence of independent random variables relative to $\mu$;

(2) $E_\nu(X_k^n) = E_\mu(X_k^n)$ for all $k \geq 1$ and $n > 0$.
Proof of Theorem 16. Let $m$ be the measure that by Lemma 24 can be associated to the sequence $\{-X_k\}_{k \geq 1}$. Since $\nu$ is controlled, a slight modification of the proof of Lemma 22 leads to
$$\nu([\omega : Z_n \leq \alpha]) = m([\omega : Z_n \leq \alpha]).$$
By the classic Central Limit Theorem,
$$\lim_n m([\omega : Z_n \leq \alpha]) = N(\alpha),$$
and we conclude that $\lim_n \nu([\omega : Z_n \leq \alpha]) = N(\alpha)$, as wanted.

Let $\mu$ be the measure that by Lemma 24 can be associated to the sequence $\{X_k\}_{k \geq 1}$. Reasoning as before, we get
$$\nu([\omega : Y_n > \alpha]) = \mu([\omega : Y_n > \alpha]).$$
By the classic Central Limit Theorem, $\lim_n \mu([\omega : Y_n \leq \alpha]) = N(\alpha)$. Consequently, $\lim_n \mu([\omega : Y_n > \alpha]) = 1 - N(\alpha)$, so that $\lim_n \nu([\omega : Y_n > \alpha]) = 1 - N(\alpha)$, as wanted. $\blacksquare$
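As an illustration only (not part of the proof), in the additive special case the statement above reduces to the classic CLT, which is easy to check by simulation; the Bernoulli setup and all numbers below are ours.

```python
import random
import statistics

# Monte Carlo sketch: empirical P(Z_n <= a) for a normalized Bernoulli sum
# should be close to N(a), the standard normal c.d.f., for large n.
random.seed(1)
norm = statistics.NormalDist()

def z_n(n, p=0.5):
    """Normalized sum (S - np) / sqrt(np(1-p)) of n Bernoulli(p) draws."""
    s = sum(random.random() < p for _ in range(n))
    return (s - n * p) / (n * p * (1 - p)) ** 0.5

a, n, trials = 1.0, 400, 4000
freq = sum(z_n(n) <= a for _ in range(trials)) / trials
print(abs(freq - norm.cdf(a)) < 0.05)  # frequency close to N(1.0) ~ 0.8413
```

In the non-additive case the theorem says the same limit holds for $\nu$ itself, because $\nu$ and the associated measure agree on the relevant upper sets.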
REFERENCES

1. G. Choquet, Theory of capacities, Ann. Inst. Fourier 5 (1953), 131–295.
2. B. de Finetti, "Teoria della Probabilità," Einaudi, Turin, 1970. English translation: "Theory of Probability," Wiley, New York, 1974.
3. F. Delbaen, Convex games and extreme points, J. Math. Anal. Appl. 45 (1974), 210–233.
4. A. Dempster, Upper and lower probabilities induced from a multivalued mapping, Ann. Math. Statist. 38 (1967), 325–339.
5. A. Dempster, A generalization of Bayesian inference, J. Roy. Statist. Soc. Ser. B 30 (1968), 205–247.
6. D. Denneberg, "Non-Additive Measure and Integral," Kluwer, Dordrecht, 1994.
7. J. Dow and S. R. C. Werlang, Laws of large numbers for non-additive probabilities, mimeo, LBS and EPGE-FGV, 1994.
8. N. Dunford and J. T. Schwartz, "Linear Operators, Part I," Interscience, New York, 1957.
9. J. Eichberger and D. Kelsey, Uncertainty aversion and preference for randomization, J. Econ. Theory 71 (1996), 370–381.
10. D. Ellsberg, Risk, ambiguity, and the Savage axioms, Quart. J. Econ. 75 (1961), 643–669.
11. R. Feynman and A. R. Hibbs, "Quantum Mechanics and Path Integrals," McGraw-Hill, New York, 1965.
12. P. Ghirardato, On independence for non-additive measures, with a Fubini theorem, J. Econ. Theory 73 (1997), 261–291.
13. I. Gilboa, Expected utility with purely subjective non-additive probabilities, J. Math. Econ. 16 (1987), 65–88.
14. I. Gilboa and D. Schmeidler, Maxmin expected utility with a non-unique prior, J. Math. Econ. 18 (1989), 141–153.
15. I. Gilboa and D. Schmeidler, Canonical representation of set functions, Math. Oper. Res. 20 (1995), 197–212.
16. G. Greco, Sur la mesurabilité d'une fonction numérique par rapport à une famille d'ensembles, Rend. Sem. Mat. Univ. Padova 65 (1981), 163–176.
17. G. Greco, Sulla rappresentazione di funzionali mediante integrali, Rend. Sem. Mat. Univ. Padova 66 (1982), 21–41.
18. P. J. Huber, The use of Choquet capacities in statistics, Bull. Inst. Internat. Statist. 45 (1973), 181–191.
19. P. J. Huber, "Robust Statistical Procedures," SIAM, Philadelphia, 1996.
20. J. L. Kelley, Measures on Boolean algebras, Pacific J. Math. 9 (1959), 1165–1177.
21. D. Kreps, "Notes on the Theory of Choice," Westview Press, Boulder, 1988.
22. M. Marinacci, Decomposition and representation of coalitional games, Math. Oper. Res. 21 (1996), 1000–1015.
23. M. Marinacci, Vitali's early contribution to non-additive integration, Riv. Mat. Sci. Econom. Social., to appear.
24. E. Pap, "Null-Additive Set Functions," Kluwer, Dordrecht, 1995.
25. L. J. Savage, "The Foundations of Statistics," Wiley, New York, 1954.
26. D. Schmeidler, Subjective probability without additivity, mimeo, Foerder Institute for Economic Research, Tel Aviv University, 1982.
27. D. Schmeidler, Integral representation without additivity, Proc. Amer. Math. Soc. 97 (1986), 253–261.
28. D. Schmeidler, Subjective probability and expected utility without additivity, Econometrica 57 (1989), 571–587.
29. T. Seidenfeld and L. Wasserman, Dilation for sets of probabilities, Ann. Statist. 21 (1993), 1139–1154.
30. G. Shafer, "A Mathematical Theory of Evidence," Princeton Univ. Press, Princeton, 1976.
31. L. S. Shapley, Cores of convex games, Internat. J. Game Theory 1 (1971), 11–26.
32. G. Vitali, Sulla definizione d'integrale delle funzioni di una variabile reale, Ann. Mat. Pura Appl. (IV) 2 (1925), 111–121.
33. P. Wakker, "Additive Representations of Preferences," Kluwer, Dordrecht, 1989.
34. P. Walley, "Statistical Reasoning with Imprecise Probabilities," Chapman and Hall, London, 1991.
35. P. Walley and T. L. Fine, Towards a frequentist theory of upper and lower probability, Ann. Statist. 10 (1982), 741–761.
36. L. Wasserman and J. Kadane, Bayes' theorem for Choquet capacities, Ann. Statist. 18 (1990), 1328–1339.
37. L. Zhou, Integral representation of continuous comonotonically additive set functions, Trans. Amer. Math. Soc. 350 (1998), 1811–1822.