
Probab. Theory Relat. Fields
DOI 10.1007/s00440-010-0269-8
Central limit theorem for a class of one-dimensional
kinetic equations
Federico Bassetti · Lucia Ladelli · Daniel Matthes
Received: 23 October 2008 / Revised: 29 January 2010
© Springer-Verlag 2010
Abstract We introduce a class of kinetic-type equations on the real line, which
constitute extensions of the classical Kac caricature. The collisional gain operators are
defined by smoothing transformations with rather general properties. By establishing
a connection to the central limit problem, we are able to prove long-time convergence
of the equation’s solutions toward a limit distribution. For example, we prove that
if the initial condition belongs to the domain of normal attraction of a certain stable
law να , then the limit is a scale mixture of να . Under some additional assumptions,
explicit exponential rates for the convergence to equilibrium in Wasserstein metrics
are calculated, and strong convergence of the probability densities is shown.
Keywords Central limit theorem · Domain of normal attraction · Stable law ·
Kac model · Smoothing transformations
Mathematics Subject Classification (2000) 60F05 · 82C40
F.B.’s research was partially supported by Ministero dell’Istruzione, dell’Università e della Ricerca
(MIUR grant 2006/134526). L.L.’s research was partially supported by CNR-IMATI Milano (Italy). D.M.
acknowledges support from the Italian MIUR, project “Kinetic and hydrodynamic equations of complex
collisional systems”, and from the Deutsche Forschungsgemeinschaft, grant JU 359/7.
F. Bassetti
Università degli Studi di Pavia, via Ferrata 1, 27100 Pavia, Italy
e-mail: [email protected]
L. Ladelli (B)
Politecnico di Milano, Piazza Leonardo da Vinci 32, 20133 Milano, Italy
e-mail: [email protected]
D. Matthes
Technische Universität Wien, Wiedner Hauptstraße 8-10/E101, 1040 Wien, Austria
e-mail: [email protected]
1 Introduction
This paper is concerned with the following kinetic-type evolution equation for a time-dependent probability measure μ_t on the real line R,
∂_t μ_t + μ_t = Q^+(μ_t)  (t ≥ 0).  (1)
Here Q^+ is a generalized Wild convolution, and the probability measure Q^+(μ_t) is defined in terms of its Fourier–Stieltjes transform,

Q^+(μ_t)^∧(ξ) = E[φ(t; Lξ) φ(t; Rξ)]  (ξ ∈ R),  (2)

where φ(t; ξ) = μ̂_t(ξ) = ∫_R e^{iξv} μ_t(dv) denotes the Fourier–Stieltjes transform of μ_t. Above, (L, R) is a random vector defined on a probability space (Ω, F, P), and E denotes the expectation with respect to P. Our fundamental assumption on (L, R) is that there exists an α in (0, 2] such that

E[|L|^α + |R|^α] = 1.  (3)
Equations (1)–(3) constitute a generalization of previously studied kinetic models
in one spatial dimension: The celebrated Kac equation is obtained for the particular
choice L = cos Θ and R = sin Θ with a random angle Θ that is uniformly distributed on [0, 2π). Indeed, since L² + R² = cos² Θ + sin² Θ = 1 a.s., (3) holds
with α = 2. Moreover, the class of inelastic Kac models, introduced by Toscani and
Pulvirenti in [21], fits into the framework of (1)–(3), letting L = |cos Θ|^{p−1} cos Θ and R = |sin Θ|^{p−1} sin Θ, where p > 1 is the parameter of elasticity. With α = 2/p, one has

|L|^α + |R|^α = 1 a.s.,  (4)

and thus also (3) holds.
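Both special cases can be checked numerically. The following sketch is our own illustration (not part of the original analysis): it samples the random angle Θ and verifies the pointwise identity (4) for the classical Kac pair and for the inelastic Kac pair with an assumed elasticity parameter p = 1.5.

```python
import math
import random

random.seed(0)

def kac_pair(theta):
    # Classical Kac model: L = cos(theta), R = sin(theta); alpha = 2.
    return math.cos(theta), math.sin(theta)

def inelastic_kac_pair(theta, p):
    # Inelastic Kac model: L = |cos|^(p-1) cos, R = |sin|^(p-1) sin; alpha = 2/p.
    c, s = math.cos(theta), math.sin(theta)
    return abs(c) ** (p - 1) * c, abs(s) ** (p - 1) * s

for _ in range(1000):
    theta = random.uniform(0.0, 2.0 * math.pi)

    # Classical case: |L|^2 + |R|^2 = 1 pointwise, so (3) holds with alpha = 2.
    L, R = kac_pair(theta)
    assert abs(abs(L) ** 2 + abs(R) ** 2 - 1.0) < 1e-12

    # Inelastic case: |L|^alpha + |R|^alpha = 1 pointwise with alpha = 2/p.
    p = 1.5
    alpha = 2.0 / p
    L, R = inelastic_kac_pair(theta, p)
    assert abs(abs(L) ** alpha + abs(R) ** alpha - 1.0) < 1e-12
```

Since the identity holds almost surely, taking expectations recovers condition (3) in both cases.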
The extension from the (inelastic) Kac equation satisfying (4) to the more general
class of evolution equations satisfying (3) originates from special recent applications
of kinetic theory. One such application is an approach to model the temporal distribution of wealth, represented by µt , in a simplified economy by means of (1)–(3) with
α = 1; see [17] and references therein. The relaxed condition (3) is a key element
of the modeling, as it takes into account stochastic gains and losses due to the trade
with risky investments. Indeed, wealth distributions with a Pareto tail are consistent
with certain models satisfying (3), but are excluded under the stricter condition (4) of
deterministic trading. Furthermore, we mention that a multi-dimensional extension of
(1)–(3) with α = 2 has been used recently by [6] to model a homogeneous gas with
velocity distributions µt under the influence of a background heat bath.
The goal of this paper is to study the asymptotic behavior of solutions to (1) under
the assumption (3) by means of probabilistic methods based on central limit theorems. The general idea to represent solutions to Kac-like equations in a probabilistic
way dates back at least to McKean [18]; this approach has been fully formalized and
employed in the derivation of various analytical results in the last decade. For the
original Kac equation, probabilistic methods have been used to estimate the approximation error of truncated Wild sums in [3], to study necessary and sufficient conditions
for the convergence to a steady state in [12], to study the blow-up behavior of solutions
of infinite energy in [4], to obtain rates of convergence to equilibrium of the solutions
both in strong and weak metrics [7,8,13]. Also the inelastic Kac model has been
studied by probabilistic methods, see [2].
In this paper, the aforementioned probabilistic methods are adapted and extended
to the setting (1)–(3). Our main result is the proof of long-time convergence (both
weak and strong) of the solutions µt to a steady state, and a precise characterization of
the latter. Indeed, a striking difference that we observe between the previously studied
models—obeying the strict condition (4)—and the more general class of models introduced here—which are subject to (3) only—is that the stationary solutions are stable
laws in the first case, and scale mixtures of stable laws in the second. For example, for α = 2, one can easily define arbitrarily small perturbations of the Kac equation that obey (3), violate (4), and possess steady states μ_∞ with fat tails instead of Gaussians; a specific example is given after Theorem 3.
The qualitative properties of the steady state, such as the fatness of its tails, are
determined by the mixing distribution for the stable laws. We prove that the mixing
distribution is a fixed point of a suitable smoothing transformation related to Q̂ + and
α, see (17). Then we apply results of Durrett and Liggett [9] and of Liu [15,16], where
the existence and properties of these fixed points have been investigated.
The paper is organized as follows. In Sect. 2, we derive the stochastic representation
of solutions to (1)–(3). Section 3 contains the statements of our main theorems. The
results are classified into those on convergence in distribution (Sect. 3.1), convergence
in Wasserstein metrics at quantitative rates (Sect. 3.2) and strong convergence of the
probability densities (Sect. 3.3). All proofs are collected in Sect. 4.
2 Preliminary results
Throughout the paper, (L , R) is a vector of non-negative random variables with given
distribution. We emphasize that the restriction to non-negative L and R is mainly
made to simplify the presentation and to avoid further case distinctions. Most of the
convergence results remain valid if no sign restrictions are imposed on L and R, for
instance if the solution is symmetric.
We assume that (3) holds for some α ∈ (0, 2], which, since L and R are non-negative, becomes

E[L^α + R^α] = 1,  (5)

and we will frequently refer to the stricter condition

L^α + R^α = 1 a.s.,  (6)
which may or may not be satisfied. For later reference, introduce the convex function S : [0, ∞) → [−1, ∞] by

S(s) = E[L^s + R^s] − 1,  (7)

with the convention that 0^0 = 0. Clearly, S(α) = 0.
A probability measure µ0 on R is prescribed as initial condition for (1)–(2). Denote
by F0 its probability distribution function, by φ0 = µ̂0 its Fourier–Stieltjes transform,
and by X 0 some random variable (independent of everything else) with law µ0 . Let
µt be the corresponding solution to (1)–(2), which is shown to be unique and global
in time below. Finally, recall that φ(t; ·) denotes the Fourier–Stieltjes transform of
µt .
2.1 Probabilistic representation of the solution
A semi-explicit expression for the Fourier–Stieltjes transform of the solution of (1) is
given by the Wild sum
φ(t; ξ) = Σ_{n=0}^{∞} e^{−t} (1 − e^{−t})^n q̂_n(ξ)  (t ≥ 0, ξ ∈ R),  (8)

where q̂_n is recursively defined by

q̂_0(ξ) := φ_0(ξ),  q̂_n(ξ) := (1/n) Σ_{j=0}^{n−1} E[q̂_j(Lξ) q̂_{n−1−j}(Rξ)]  (n = 1, 2, . . .).  (9)
Originally, the series in (8) has been derived in [25] for the solution of the Kac equation. Following the ideas of McKean [18,19] and of Gabetta and Regazzini [12], the
Wild sum is now rephrased in a probabilistic way. To this end, let the following be
given on a sufficiently large probability space (Ω, F, P):
– a sequence (X n )n∈N of i.i.d. random variables with distribution function F0 ;
– a sequence ((L n , Rn ))n∈N of i.i.d. random vectors, distributed as (L , R);
– a sequence (In )n∈N of independent integer random variables, each In being uniformly distributed on the indices {1, 2, . . . , n};
– a stochastic process (νt )t≥0 with νt ∈ N and P{νt = n} = e−t (1 − e−t )n−1 .
We assume further that (In )n∈N , (L n , Rn )n∈N , (X n )n∈N and (νt )t≥0 are stochastically
independent. Define a random array of weights [β j,n : j = 1, . . . , n]n≥1 recursively:
Let β_{1,1} := 1, (β_{1,2}, β_{2,2}) := (L_1, R_1) and, for any n ≥ 2,

(β_{1,n+1}, . . . , β_{n+1,n+1}) := (β_{1,n}, . . . , β_{I_n−1,n}, L_n β_{I_n,n}, R_n β_{I_n,n}, β_{I_n+1,n}, . . . , β_{n,n}).  (10)
Fig. 1 Two four-leafed McKean trees, with associated weights β_{j,4}: the left tree is generated by I_4 = (1, 1, 3) and its weights are β_{1,4} = L_1 L_2, β_{2,4} = L_1 R_2, β_{3,4} = R_1 L_3, β_{4,4} = R_1 R_3; the right tree is generated by I_4 = (1, 1, 2) and its weights are β_{1,4} = L_1 L_2, β_{2,4} = L_1 R_2 L_3, β_{3,4} = L_1 R_2 R_3, β_{4,4} = R_1
Remark 1 In [18,19], the Wild series is related to a random walk on a class of binary
trees. For an introduction to the theory of the so-called McKean trees, we refer to the
article of Carlen, Carvalho and Gabetta [3]. The construction above relates to this as
follows. Each finite sequence In = (I1 , I2 , . . . , In−1 ) corresponds to a McKean tree
with n leaves. The tree associated to In+1 is obtained from the tree associated to In
upon replacing the In th leaf (counting from the left) by a binary branching with two
new leaves. The left of the new branches is labeled with L n , and the right one with
Rn . The weights β j,n are associated to the leaves of the In -tree: β j,n is the product of
the labels assigned to the branches along the ascending path connecting the jth leaf
to the root. See Fig. 1 for an illustration.
Finally, set

W_n := Σ_{j=1}^{n} β_{j,n} X_j  and  V_t := W_{ν_t} = Σ_{j=1}^{ν_t} β_{j,ν_t} X_j.  (11)
Proposition 1 (Probabilistic representation) The law of Vt is the unique and global
solution µt to equation (1)–(2) with initial condition µ0 .
A representation of the form (11) has already been successfully employed in studying
the long-time behavior of solutions to the classical and the inelastic Kac equation, for
instance, in [2,12], respectively.
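The recursion (10) translates directly into code. The sketch below is our own illustration: it builds one realization of the weight array (β_{1,n}, . . . , β_{n,n}) and a sample of W_n. The non-negative pair L = |cos Θ|, R = |sin Θ| is an assumed choice for which condition (6) holds with α = 2, so that M_n^{(2)} = Σ_j β_{j,n}² = 1 almost surely.

```python
import math
import random

random.seed(1)

def sample_weights(n, draw_LR):
    """Build (beta_{1,n}, ..., beta_{n,n}) by the recursion (10): at each
    step pick a uniform index I and replace beta_I by (L*beta_I, R*beta_I)."""
    beta = [1.0]
    while len(beta) < n:
        i = random.randrange(len(beta))   # I_m, uniform over the current leaves
        L, R = draw_LR()
        beta[i:i + 1] = [L * beta[i], R * beta[i]]
    return beta

def draw_kac():
    # Non-negative variant of the Kac pair; satisfies (6) with alpha = 2.
    theta = random.uniform(0.0, 2.0 * math.pi)
    return abs(math.cos(theta)), abs(math.sin(theta))

beta = sample_weights(200, draw_kac)

# Under (6) with alpha = 2: M_n^(2) = sum_j beta_{j,n}^2 = 1 almost surely.
assert abs(sum(b * b for b in beta) - 1.0) < 1e-9

# One sample of W_n = sum_j beta_{j,n} X_j, with standard Gaussian initial data.
W_n = sum(b * random.gauss(0.0, 1.0) for b in beta)
```

Each pass through the loop corresponds to replacing one leaf of the McKean tree by a binary branching, exactly as described in Remark 1.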
2.2 Stable laws
Some further notations need to be introduced. Recall that a probability distribution
gα is said to be a centered stable law of exponent α (with 0 < α ≤ 2) if its Fourier–
Stieltjes transform is of the form
ĝ_α(ξ) = exp{−k|ξ|^α (1 − iη tan(πα/2) sign ξ)}  if α ∈ (0, 1) ∪ (1, 2),
ĝ_α(ξ) = exp{−k|ξ| (1 + (2iη/π) log|ξ| sign ξ)}  if α = 1,  (12)
ĝ_α(ξ) = exp{−σ² |ξ|² / 2}  if α = 2,

where k > 0 and |η| ≤ 1.
By definition, a distribution function F belongs to the domain of normal attraction of a stable law of exponent α if for any sequence of independent and identically distributed real-valued random variables (X_n)_{n≥1} with common distribution function F, there exists a sequence of real numbers (c_n)_{n≥1} such that the law of n^{−1/α} Σ_{i=1}^{n} X_i − c_n converges weakly to a stable law of exponent α.
It is well-known that, provided α ≠ 2, a distribution function F belongs to the domain of normal attraction of an α-stable law if and only if F satisfies

lim_{x→+∞} x^α (1 − F(x)) = c^+ < +∞,  lim_{x→−∞} |x|^α F(x) = c^− < +∞.  (13)
Typically, one also requires that c+ + c− > 0 in order to exclude convergence to
the probability measure concentrated in 0, but here we shall include the situation
c+ = c− = 0 as a special case. The parameters k and η of the associated stable law
in (12) are identified from c+ and c− by
k = (c^+ + c^−) π / (2 Γ(α) sin(πα/2)),  η = (c^+ − c^−) / (c^+ + c^−),  (14)
with the convention that η = 0 if c+ + c− = 0. In contrast, if α = 2, F belongs to
the domain of normal attraction of a Gaussian law if and only if it has finite variance
σ². See, for example, Chapter 17 of [10].
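For 0 < α < 2, formula (14) is elementary to evaluate. The helper below is our own illustration of the map from the tail constants of (13) to the stable parameters of (12); it is not taken from the paper.

```python
import math

def stable_parameters(c_plus, c_minus, alpha):
    """Formula (14): identify (k, eta) of the stable law (12) from the tail
    constants c+ and c- in (13); valid for 0 < alpha < 2."""
    if c_plus + c_minus == 0.0:
        return 0.0, 0.0            # convention: eta = 0 if c+ + c- = 0
    k = (c_plus + c_minus) * math.pi / (
        2.0 * math.gamma(alpha) * math.sin(math.pi * alpha / 2.0))
    eta = (c_plus - c_minus) / (c_plus + c_minus)
    return k, eta

# Symmetric tails (c+ = c-) give eta = 0; a purely one-sided tail gives eta = 1.
k, eta = stable_parameters(1.0, 1.0, 1.5)
assert eta == 0.0 and k > 0.0
k1, eta1 = stable_parameters(1.0, 0.0, 0.5)
assert eta1 == 1.0
```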
2.3 Definition of the mixing distribution
From Proposition 1 it is clear that the behavior of µt , as t → +∞, is determined by
the behavior of the law of Wn as n → +∞. Apropos of this we note that a direct
application of the central limit theorem is not suitable to investigate the weak limit of
Wn , since the weights in (11) are not independent. However, one can apply the central
limit theorem to study the conditional law of Wn , given the array of weights β j,n . To
this end we shall prove that

M_n^{(α)} := Σ_{j=1}^{n} β_{j,n}^α  (15)

converges a.s. to a limit M_∞^{(α)} as n → +∞, and that max_{j=1,...,n} β_{j,n} converges to zero in probability. This allows us to apply a central limit theorem for triangular arrays to the conditional law of W_n and to prove that the latter converges weakly to an α-stable law rescaled by (M_∞^{(α)})^{1/α}. Hence one obtains that the limit law of W_n is a scale mixture of α-stable laws.
The origin of the model's richness in steady states under the milder condition (5) is now easily understood. Condition (6) implies M_n^{(α)} = 1 a.s., and thus the mixing distribution is degenerate. In contrast, condition (5) only implies E[M_n^{(α)}] = 1, and if (6) is violated, then the law of M_∞^{(α)} is no longer concentrated, according to the following result.
Proposition 2 Under condition (5),

E[M_n^{(α)}] = E[M_{ν_t}^{(α)}] = 1 for all n ≥ 1 and t > 0,  (16)

and M_n^{(α)} converges almost surely to a non-negative random variable M_∞^{(α)}. In particular, recalling the definition of S in (7),
– if (6) holds, then S(s) ≥ 0 for every s < α and S(s) ≤ 0 for every s > α. Moreover, M_n^{(α)} = M_∞^{(α)} = 1 almost surely;
– if (6) does not hold, and if S(γ) < 0 for some 0 < γ < α, then M_∞^{(α)} = 0 almost surely;
– if (6) does not hold, and if S(γ) < 0 for some γ > α, then M_∞^{(α)} is a non-degenerate random variable with E[M_∞^{(α)}] = 1 and E[(M_∞^{(α)})^{γ/α}] < +∞. Moreover, the characteristic function ψ of M_∞^{(α)} is the unique solution of

ψ(ξ) = E[ψ(ξ L^α) ψ(ξ R^α)]  (ξ ∈ R)  (17)

with −iψ′(0) = 1. Finally, for any p > α, the moment E[(M_∞^{(α)})^{p/α}] is finite if and only if S(p) < 0.
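The trichotomy of Proposition 2 can be illustrated by simulation. In the sketch below (our own illustration), L and R are taken i.i.d. uniform on (0, 1), an assumed choice for which S(s) = 2/(s + 1) − 1: condition (5) holds with α = 1, condition (6) fails, and S(γ) < 0 for every γ > 1, which places the model in the third case of the proposition, so M_∞^{(1)} should be non-degenerate with mean 1.

```python
import random

random.seed(2)

def M_n(n, alpha=1.0):
    # Recursion (10) with L, R i.i.d. uniform(0,1): E[L + R] = 1, so (5)
    # holds with alpha = 1, while (6) fails since L + R != 1 a.s.
    beta = [1.0]
    while len(beta) < n:
        i = random.randrange(len(beta))
        L, R = random.random(), random.random()
        beta[i:i + 1] = [L * beta[i], R * beta[i]]
    return sum(b ** alpha for b in beta)

samples = [M_n(64) for _ in range(2000)]
mean = sum(samples) / len(samples)
var = sum((m - mean) ** 2 for m in samples) / len(samples)

assert abs(mean - 1.0) < 0.1   # martingale property (16): E[M_n^(1)] = 1
assert var > 0.01              # non-degenerate mixing law when (6) fails
```

Replacing the uniform pair by any pair satisfying (6), for example (|cos Θ|², |sin Θ|²) with α = 1, collapses the sample variance to zero, in line with the first case of the proposition.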
3 Statement of the main results
3.1 Convergence in distribution
We recall that V_t is a time-dependent random variable with law μ_t. The latter constitutes the solution to (1)–(2). Recall also that M_∞^{(α)} has been defined in Proposition 2.
Theorem 1 Assume that (5) holds with α ∈ (0, 1)∪(1, 2) and that S(γ ) < 0 for some
γ > 0. Moreover, let condition (13) be satisfied for F = F0 and let X 0 be centered
if α > 1. Then Vt converges in distribution, as t → +∞, to a random variable V∞
with the following characteristic function
φ_∞(ξ) = E[exp(iξ V_∞)] = E[exp{−|ξ|^α k M_∞^{(α)} (1 − iη tan(πα/2) sign ξ)}]  (18)

(ξ ∈ R), where the parameters k and η are defined in (14). In particular, the law of V_∞ is α-stable if and only if (6) holds, and V_∞ = 0 a.s. if and only if c^+ = c^− = 0 or γ < α. In all other cases, E[|V_∞|^p] < +∞ if and only if p < α.
A consequence of Theorem 1 is that if E[|X_0|^α] < ∞, then the limit V_∞ is zero almost surely, since c^+ = c^− = 0. The situation is different in the cases α = 1 and α = 2, where V_∞ is non-trivial provided that the first respectively the second moment of X_0 is finite.
Theorem 2 Assume that (5) holds with α = 1 and that S(γ ) < 0 for some γ > 0. If
the initial condition possesses a finite first moment m_0 = E[X_0], then V_t converges in distribution, as t → +∞, to V_∞ := m_0 M_∞^{(1)}. In particular, V_∞ = m_0 a.s. if and only if (6) holds, and V_∞ = 0 a.s. if and only if γ < 1 or m_0 = 0. In all other cases, E[|V_∞|^p] < +∞ for p > 1 if and only if S(p) < 0.
We remark that under the hypotheses of the previous theorem, the first moment of the
solution is preserved in time. Indeed one has

E[V_t] = E[E[Σ_{j=1}^{ν_t} β_{j,ν_t} X_j | ν_t, β_{1,ν_t}, . . . , β_{ν_t,ν_t}]] = m_0 E[M_{ν_t}^{(1)}] = m_0,

where the last equality follows from (16).
Theorem 2 above is the most natural generalization of the results in [17], where the additional condition E[|X_0|^{1+ε}] < ∞ for some ε > 0 has been assumed. The respective statement for α = 2 reads as follows.
Theorem 3 Assume that (5) holds with α = 2 and that S(γ ) < 0 for some γ > 0. If
E[X 0 ] = 0 and σ 2 = E[X 02 ] < +∞, then Vt converges in distribution, as t → +∞,
to a random variable V∞ with characteristic function
φ_∞(ξ) = E[exp(iξ V_∞)] = E[exp(−ξ² σ² M_∞^{(2)} / 2)]  (ξ ∈ R).  (19)
In particular, V∞ is Gaussian if and only if (6) holds, and V∞ = 0 a.s. if γ < 2 or
σ = 0. In all other cases, E[|V∞ | p ] < +∞ for p > 2 if and only if S( p) < 0.
Under the hypotheses of the theorem, E[Vt ] = 0 for t ≥ 0 and, moreover, taking into
account that the X i are independent and centered,
E[V_t²] = E[E[Σ_{j,k=1}^{ν_t} β_{j,ν_t} β_{k,ν_t} X_j X_k | ν_t, β_{1,ν_t}, . . . , β_{ν_t,ν_t}]] = σ² E[M_{ν_t}^{(2)}] = σ²,
where we have used (16) in the last step.
Example 1 For illustration of the applicability of Theorem 3 above, we introduce a
family of (arbitrarily small) perturbations of the Kac equation, which exhibit stationary solutions with heavy tails; recall that the stationary states of the Kac model are
Gaussians.
Define L = ((1 − ε) + εz²)^{1/2} |cos Θ| and R = ((1 − ε) + εz²)^{1/2} |sin Θ|, with a random angle Θ that is uniformly distributed on [0, 2π) and an independent positive random variable z satisfying E[z²] = 1, E[z^γ] < ∞, and E[z^ζ] = +∞ for some real numbers ζ > γ > 2. One verifies by direct calculations that (5) holds with α = 2, and that S(γ) < 0 for all ε > 0 sufficiently small, while S(ζ) = +∞. Thus, the hypotheses of Theorem 3 are met, and it follows that E[V_∞^γ] < ∞ and E[V_∞^ζ] = +∞.
In fact, the models so constructed possess heavy-tailed steady states for arbitrary ε > 0 even if z is a bounded random variable; it suffices that E[z^s] grows exponentially as s → ∞.
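The claims of the example can be checked numerically. The sketch below is our own illustration, based on our reading of the construction, L = ((1 − ε) + εz²)^{1/2} |cos Θ|, with z² taking the values 1/2 and 3/2 with equal probability (so E[z²] = 1) as an assumed concrete choice; S(s) then factors into a radial and an angular expectation.

```python
import math

def S(s, eps, n_theta=20000):
    # Angular factor E[|cos Theta|^s + |sin Theta|^s], Theta ~ U[0, 2*pi),
    # computed by midpoint quadrature.
    tot = 0.0
    for i in range(n_theta):
        t = 2.0 * math.pi * (i + 0.5) / n_theta
        tot += abs(math.cos(t)) ** s + abs(math.sin(t)) ** s
    angular = tot / n_theta
    # Radial factor E[((1 - eps) + eps*z^2)^(s/2)] for z^2 in {0.5, 1.5}.
    radial = 0.5 * ((1.0 - 0.5 * eps) ** (s / 2.0)
                    + (1.0 + 0.5 * eps) ** (s / 2.0))
    return radial * angular - 1.0

eps = 0.1
assert abs(S(2.0, eps)) < 1e-6   # condition (5) holds with alpha = 2
assert S(4.0, eps) < 0.0         # S(gamma) < 0 for gamma = 4
```

With this bounded choice of z all moments of the steady state are finite, so the check only illustrates the sign pattern of S; the heavy tails of the example require E[z^s] to grow fast in s.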
The technically most difficult result concerns the situation α = 1: when µ0 belongs to
the domain of normal attraction of a 1-stable distribution with c+ + c− > 0, then its
first moment is infinite and Theorem 2 does not apply. Theorem 1 can be generalized
to this situation as follows.
Theorem 4 Assume that (5) holds with α = 1 and that S(γ ) < 0 for some γ > 0.
Moreover, let the condition (13) be satisfied for F = F0 . Then the random variable
V_t^* := V_t − Σ_{j=1}^{ν_t} q_{j,ν_t},  where q_{j,n} := ∫_R sin(β_{j,n} x) dF_0(x),  (20)

converges in distribution to a limit V_∞^* with characteristic function

φ_∞(ξ) = E[exp(iξ V_∞^*)] = E[exp{−|ξ| k M_∞^{(1)} (1 + (2iη/π) log|ξ| sign ξ)}]  (21)

(ξ ∈ R), where the parameters k and η are defined in (14). In particular, the law of V_∞^* is 1-stable if and only if (6) holds, and V_∞^* = 0 a.s. if and only if c^+ = c^− = 0 or γ < 1. In all other cases, E[|V_∞^*|^p] < +∞ if and only if p < 1.
3.2 Rates of convergence in Wasserstein metrics
Theorems 1–3 above provide weak convergence of the solution Vt to a limit V∞ as
t → ∞. The (exponential) rate at which this convergence takes place can be quantified
in suitable Wasserstein metrics.
Recall that the Wasserstein distance of order γ > 0 between two random variables
X and Y is defined by
W_γ(X, Y) := inf_{(X′,Y′)} (E|X′ − Y′|^γ)^{1/max(γ,1)}.  (22)

The infimum is taken over all pairs (X′, Y′) of real random variables whose marginal distribution functions are the same as those of X and Y, respectively. In general, the
infimum in (22) may be infinite; a sufficient (but not necessary) condition for finite
distance is that both E[|X |γ ] < +∞ and E[|Y |γ ] < +∞. For more information on
Wasserstein distances see, for example, [22].
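On the real line the infimum in (22) is attained by the quantile coupling, i.e., by matching sorted samples, which makes empirical Wasserstein distances cheap to compute. The sketch below is our own illustration, not part of the paper's argument.

```python
import random

random.seed(3)

def wasserstein_empirical(xs, ys, gamma):
    """Empirical Wasserstein distance (22) between two equal-size samples.
    On R the optimal coupling matches the sorted samples (quantile coupling)."""
    assert len(xs) == len(ys)
    xs, ys = sorted(xs), sorted(ys)
    m = sum(abs(x - y) ** gamma for x, y in zip(xs, ys)) / len(xs)
    return m ** (1.0 / max(gamma, 1.0))

a = [random.gauss(0.0, 1.0) for _ in range(5000)]
b = [random.gauss(0.0, 1.0) for _ in range(5000)]
c = [random.gauss(3.0, 1.0) for _ in range(5000)]

# Two samples of the same law are close; a shift by 3 gives W_1 close to 3.
assert wasserstein_empirical(a, b, 1.0) < 0.2
assert abs(wasserstein_empirical(a, c, 1.0) - 3.0) < 0.2
```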
Theorem 5 Assume (5) and S(γ) < 0, for some γ with 1 ≤ α < γ ≤ 2 or α < γ ≤ 1. Assume further that (13) holds if α ≠ 1, or that E[|X_0|^γ] < +∞ if α = 1, respectively. Then

W_γ(V_t, V_∞) ≤ A W_γ(X_0, V_∞) e^{−Bt|S(γ)|},  (23)

with A = B = 1 if γ ≤ 1, or A = 2^{1/γ} and B = 1/γ otherwise.
Clearly, (23) is meaningful only if W_γ(X_0, V_∞) < +∞. This is guaranteed for α = 1 by the hypothesis E[|X_0|^γ] < +∞. In all other cases, the requirement W_γ(X_0, V_∞) < +∞ is non-trivial, since by Theorem 1, either V_∞ = 0 or E[|V_∞|^α] = +∞. The following lemma provides a sufficient criterion tailored to the situation at hand.
Lemma 1 Assume, in addition to the hypotheses of Theorem 5, that γ < 2α and that F_0 satisfies hypothesis (13) in the more restrictive sense that there exist a constant K > 0 and some 0 < ε < 1 with

|1 − c^+ x^{−α} − F_0(x)| < K x^{−(α+ε)} for x > 0,  (24)
|F_0(x) − c^− (−x)^{−α}| < K (−x)^{−(α+ε)} for x < 0.  (25)

Provided that γ < α/(1 − ε), it follows that W_γ(X_0, V_∞) < +∞.
3.3 Strong convergence of densities
Under suitable hypotheses, the probability densities of µt exist and converge strongly
in the Lebesgue spaces L 1 (R) and L 2 (R).
Theorem 6 For given α ∈ (0, 1) ∪ (1, 2], let the hypotheses of Theorem 1 or 3 hold with γ > α. Assume further that (13) holds with c^− + c^+ > 0 if α < 2, so that V_t converges in distribution, as t → +∞, to a non-degenerate limit V_∞. Moreover, assume also that
(H1) L^r + R^r ≥ 1 a.s. for some r > 0, and
(H2) X_0 has a density f_0 with finite Linnik–Fisher functional, i.e., h := √f_0 ∈ H^1(R); that is, its Fourier transform ĥ satisfies ∫_R |ξ|² |ĥ(ξ)|² dξ < +∞.
Then the random variable V_t possesses a density f(t) for all t ≥ 0, V_∞ has a density f_∞, and f(t) converges, as t → +∞, to f_∞ in any L^p(R) with 1 ≤ p ≤ 2.
Remark 2 Notice that, since S(α) = 0, condition (H1) can be satisfied only if r < α.
4 Proofs
4.1 Probabilistic representation (Propositions 1 and 2)
The proof of Proposition 1 is inspired by the respective proof for the Kac case from
[12].
Proof of Proposition 1 First of all it is easy to prove, following [25] and [18], that
formulas (8) and (9) produce the unique solution to problem (1); see also [23]. From
the definition of V_t in (11), it easily follows that

E[e^{iξ V_t}] = Σ_{n=0}^{∞} e^{−t} (1 − e^{−t})^n E[e^{iξ W_{n+1}}]  (t > 0, ξ ∈ R).  (26)
Hence, comparing the latter with the Wild sum representation (8), it obviously suffices to prove that

q̂_{ℓ−1}(ξ) = E[e^{iξ W_ℓ}],  (27)

which we will show by induction on ℓ ≥ 1. First, note that E[exp(iξ W_1)] = E[exp(iξ X_1)] = φ_0(ξ) = q̂_0(ξ) and E[e^{iξ W_2}] = E[e^{iξ(L_1 X_1 + R_1 X_2)}] = q̂_1(ξ), which shows (27) for ℓ = 1 and ℓ = 2. Let n ≥ 3, and assume that (27) holds for all 1 ≤ ℓ < n; we prove (27) for ℓ = n.
Recall that the weights β_{j,n} are products of random variables L_i and R_i. Define the random index K_n < n such that all products β_{j,n} with j ≤ K_n contain L_1 as a factor, while the β_{j,n} with K_n + 1 ≤ j ≤ n contain R_1. By induction it is easily seen that P{K_n = i} = 1/(n − 1) for i = 1, . . . , n − 1; cf. Lemma 2.1 in [3]. Now,
A_{K_n} := Σ_{j=1}^{K_n} (β_{j,n}/L_1) X_j,  B_{K_n} := Σ_{j=K_n+1}^{n} (β_{j,n}/R_1) X_j  and  (L_1, R_1)
are conditionally independent given K_n. By the recursive definition of the weights β_{j,n} in (10), the following is easily deduced: the conditional distribution of A_{K_n}, given {K_n = k}, is the same as the (unconditional) distribution of Σ_{j=1}^{k} β_{j,k} X_j, which clearly is the same distribution as that of W_k. Analogously, the conditional distribution of B_{K_n}, given {K_n = k}, equals the distribution of Σ_{j=1}^{n−k} β_{j,n−k} X_j, which further equals the distribution of W_{n−k}. Hence,
E[e^{iξ W_n}] = (1/(n−1)) Σ_{k=1}^{n−1} E[e^{iξ(L_1 A_k + R_1 B_k)} | {K_n = k}]
= (1/(n−1)) Σ_{k=1}^{n−1} E[E[e^{iξ L_1 W_k} | L_1, R_1] E[e^{iξ R_1 W_{n−k}} | L_1, R_1]]
= (1/(n−1)) Σ_{j=0}^{n−2} E[q̂_{n−2−j}(L_1 ξ) q̂_j(R_1 ξ)],

which is q̂_{n−1} by the recursive definition in (9).
Before proving Proposition 2, we generalize a result from [11]. Denote by Gn the
σ -algebra generated by Ii , L i and Ri for i = 1, . . . , n − 1.
Lemma 2 For any s > 0 with S(s) < +∞, one has

E[M_n^{(s)}] = E[Σ_{j=1}^{n} β_{j,n}^s] = Γ(n + S(s)) / (Γ(n) Γ(S(s) + 1))
and

E[M_{ν_t}^{(s)}] = Σ_{n≥1} e^{−t} (1 − e^{−t})^{n−1} E[M_n^{(s)}] = e^{tS(s)}.

If in addition S(s) = 0, then M_n^{(s)} is a martingale with respect to (G_n)_{n≥1}.
Proof Recall that (β_{1,1}, β_{1,2}, β_{2,2}, . . . , β_{n,n}) is G_n-measurable, see (10). We first prove that E[M_{n+1}^{(s)} | G_n] = M_n^{(s)} (1 + S(s)/n), which implies that M_n^{(s)} is a (G_n)_n-martingale whenever S(s) = 0, since M_n^{(s)} ≥ 0 and, as we will see, E[M_n^{(s)}] < +∞ for every n ≥ 1. To prove the claim write
E[M_{n+1}^{(s)} | G_n] = E[Σ_{i=1}^{n} I{I_n = i} (Σ_{j=1,...,n+1; j≠i,i+1} β_{j,n+1}^s + β_{i,n+1}^s + β_{i+1,n+1}^s) | G_n]
= E[Σ_{i=1}^{n} I{I_n = i} (Σ_{j=1}^{n} β_{j,n}^s + β_{i,n}^s (L_n^s + R_n^s − 1)) | G_n]
= M_n^{(s)} + S(s) Σ_{i=1}^{n} β_{i,n}^s E[I{I_n = i}] = M_n^{(s)} (1 + S(s)/n).
Taking the expectation of both sides gives E[M_{n+1}^{(s)}] = E[M_n^{(s)}] (1 + S(s)/n). Since E[M_2^{(s)}] = S(s) + 1, it follows easily that E[M_n^{(s)}] = Π_{i=1}^{n−1} (1 + S(s)/i) = Γ(n + S(s)) (Γ(n) Γ(S(s) + 1))^{−1}. To conclude the proof, use formula 5.2.12.17 in [20].
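Lemma 2 is easy to test numerically. The sketch below is our own illustration, again with the assumed choice of L, R i.i.d. uniform on (0, 1), for which S(s) = 2/(s + 1) − 1; it checks the closed form for E[M_n^{(s)}] against both the telescoping product and a Monte Carlo average over the recursion (10).

```python
import math
import random

random.seed(4)

def S_uniform(s):
    # For L, R i.i.d. uniform(0,1): S(s) = E[L^s + R^s] - 1 = 2/(s+1) - 1.
    return 2.0 / (s + 1.0) - 1.0

def expected_Ms(n, S):
    # Closed form of Lemma 2: E[M_n^(s)] = Gamma(n+S) / (Gamma(n) Gamma(S+1)).
    return math.gamma(n + S) / (math.gamma(n) * math.gamma(S + 1.0))

n, s = 12, 1.5
S = S_uniform(s)

# The closed form agrees with the telescoping product prod_{i<n} (1 + S/i).
prod = 1.0
for i in range(1, n):
    prod *= 1.0 + S / i
assert abs(prod - expected_Ms(n, S)) < 1e-10

# Monte Carlo check of E[M_n^(s)] via the weight recursion (10).
def M_n_s(n, s):
    beta = [1.0]
    while len(beta) < n:
        i = random.randrange(len(beta))
        L, R = random.random(), random.random()
        beta[i:i + 1] = [L * beta[i], R * beta[i]]
    return sum(b ** s for b in beta)

est = sum(M_n_s(n, s) for _ in range(4000)) / 4000
assert abs(est - expected_Ms(n, S)) < 0.05
```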
Lemma 3 If S(γ) < 0 for some γ > 0, then

β_{(n)} := max_{1≤j≤n} β_{j,n}

converges to zero in probability as n → +∞.

Proof Observe that β_{(n)}^γ ≤ Σ_{j=1}^{n} β_{j,n}^γ; hence for every ε > 0, by Markov's inequality and Lemma 2, one gets
P{β_{(n)} > ε} ≤ P{Σ_{j=1}^{n} β_{j,n}^γ ≥ ε^γ} ≤ ε^{−γ} E[M_n^{(γ)}] ≤ C ε^{−γ} n^{S(γ)}.

The last expression tends to zero as n → ∞ because S(γ) < 0.
Proof of Proposition 2 Since S(α) = 0, the random variables M_n^{(α)} form a positive martingale with respect to (G_n)_n by Lemma 2. By the martingale convergence theorem, see e.g. Theorem 19 in Chapter 24 of [10], it converges a.s. to a positive random variable M_∞^{(α)} with E[M_∞^{(α)}] ≤ E[M_1^{(α)}] = 1. The goal of the following is to determine the law of M_∞^{(α)} in the different cases we consider.
First, suppose that L^α + R^α = 1 a.s. It follows that L^α ≤ 1 and R^α ≤ 1 a.s., and hence S(s) ≤ S(α) = 0 for all s > α. Moreover, it is plain to check that M_n^{(α)} = 1 a.s. for every n, and hence M_∞^{(α)} = 1 a.s.
Next, assume that S(γ) < 0 for γ < α. Minkowski's inequality and Lemma 2 give

E[(M_n^{(α)})^{γ/α}] ≤ E[Σ_{j=1}^{n} β_{j,n}^γ] = Γ(n + S(γ)) / (Γ(n) Γ(S(γ) + 1)) ≤ C n^{S(γ)}.

Hence, M_n^{(α)} converges a.s. to 0.
It remains to treat the case with S(γ) < 0 and γ > α. Since S(·) is a convex function satisfying S(α) = 0 and S(γ) < 0 with γ > α, it is clear that S′(α) < 0; also, we can assume without loss of generality that γ < 2α. Further, by hypothesis, E[(L^α + R^α)^{1+(γ/α−1)}] ≤ 2^{γ/α−1} E[L^γ + R^γ] < +∞. Hence, one can resort to Theorem 2(a) of [9]—see also Corollaries 1.1, 1.4 and 1.5 in [15]—which provides existence and uniqueness of a probability distribution ν_∞ ≠ δ_0 on R_+, whose Fourier–Stieltjes transform ψ is a solution of Eq. (17), with ∫_{R_+} x ν_∞(dx) = 1. Moreover, Theorem 2.1 in [16] ensures that ∫_{R_+} x^{γ/α} ν_∞(dx) < +∞ and, more generally, that ∫_{R_+} x^{p/α} ν_∞(dx) < +∞ for some p > α if and only if S(p) < 0.
Consequently, our goal is to prove that the law of M_∞^{(α)} is ν_∞. Let (M_j)_{j≥1} be a sequence of independent random variables with common characteristic function ψ, such that (M_j)_{j≥1} and (G_n)_{n≥1} are independent. Recalling that ψ is a solution of (17), it follows that, for every n ≥ 2,
E[exp{iξ Σ_{j=1}^{n} β_{j,n}^α M_j}]
= (1/(n−1)) Σ_{k=1}^{n−1} E[exp{iξ (Σ_{j=1}^{k−1} β_{j,n−1}^α M_j + β_{k,n−1}^α (L_{n−1}^α M_k + R_{n−1}^α M_{k+1}) + Σ_{j=k+1}^{n−1} β_{j,n−1}^α M_{j+1})}]
= E[exp{iξ Σ_{j=1}^{n−1} β_{j,n−1}^α M_j}],

where the middle step uses that L_{n−1}^α M_k + R_{n−1}^α M_{k+1} has the same distribution as M_k.
By induction on n ≥ 2, this shows that Σ_{j=1}^{n} M_j β_{j,n}^α has the same law as M_1, which is ν_∞. Hence

W_{γ/α}^{γ/α}(M_n^{(α)}, M_1) ≤ E[E[|Σ_{j=1}^{n} (1 − M_j) β_{j,n}^α|^{γ/α} | G_n]].
We shall now employ the following result from [24]. Let 1 < η ≤ 2, and assume that Z_1, . . . , Z_n are independent, centered random variables with E|Z_j|^η < +∞. Then

E|Σ_{j=1}^{n} Z_j|^η ≤ 2 Σ_{j=1}^{n} E|Z_j|^η.  (28)
We apply this result with η = γ/α and Z_j = β_{j,n}^α (1 − M_j), showing that

E[|Σ_{j=1}^{n} (1 − M_j) β_{j,n}^α|^{γ/α} | G_n] ≤ 2 Σ_{j=1}^{n} β_{j,n}^γ E|1 − M_1|^{γ/α}

almost surely. In consequence, using also Lemma 2,

W_{γ/α}^{γ/α}(M_n^{(α)}, M_1) ≤ 2 E[Σ_{j=1}^{n} β_{j,n}^γ] E|1 − M_1|^{γ/α} ≤ C n^{S(γ)}.
This proves that the law of M_n^{(α)} converges with respect to the W_{γ/α} metric—and then also weakly—to the law of M_1. Hence, M_∞^{(α)} has law ν_∞. The fact that M_∞^{(α)} is non-degenerate, provided L^α + R^α = 1 does not hold a.s., follows immediately.

4.2 Proof of convergence for α ≠ 1 (Theorems 1 and 3)
Denote by B the σ-algebra generated by {β_{j,n} : n ≥ 1, j = 1, . . . , n}. The proof of Theorems 1 and 3 is essentially an application of the central limit theorem to the conditional law of W_n := Σ_{j=1}^{n} β_{j,n} X_j given B. Set Q_{j,n}(x) := F_0(β_{j,n}^{−1} x), where, by convention, F_0(·/0) := I_{[0,+∞)}(·). In this subsection we will use the functions

ζ_n(x) := I{x < 0} Σ_{j=1}^{n} Q_{j,n}(x) + I{x > 0} Σ_{j=1}^{n} (1 − Q_{j,n}(x))  (x ∈ R),

σ_n²(ε) := Σ_{j=1}^{n} {∫_{(−ε,+ε]} x² dQ_{j,n}(x) − (∫_{(−ε,+ε]} x dQ_{j,n}(x))²}  (ε > 0),

η_n := Σ_{j=1}^{n} {1 − Q_{j,n}(1) − Q_{j,n}(−1) + ∫_{(−1,1]} x dQ_{j,n}(x)}.
In terms of Q_{j,n}, the conditional distribution function F_n of W_n given B is the convolution F_n = Q_{1,n} ∗ · · · ∗ Q_{n,n}. To start with, we show that the Q_{j,n} satisfy the uniform asymptotic negligibility (UAN) assumption (30) below.
Lemma 4 Let the assumptions of Theorems 3 or 4 be in force. Then, for every divergent sequence (n′) of integer numbers, there exists a divergent subsequence (n′′) ⊂ (n′) and a set Ω_0 of probability one such that

lim_{n′′→+∞} M_{n′′}^{(α)}(ω) = M_∞^{(α)}(ω) < ∞,  lim_{n′′→+∞} β_{(n′′)}(ω) = 0,  (29)

holds for every ω ∈ Ω_0. Moreover, for every ω ∈ Ω_0 and for every ε > 0,

lim_{n′′→+∞} max_{1≤j≤n′′} (1 − Q_{j,n′′}(ε) + Q_{j,n′′}(−ε)) = 0.  (30)

Proof The existence of a subsequence (n′′) and a set Ω_0 satisfying (29) is a direct consequence of Lemma 3 and Proposition 2. To prove (30) note that, for 0 < α ≤ 2 and α ≠ 1,

max_{1≤j≤n′′} (1 − Q_{j,n′′}(ε) + Q_{j,n′′}(−ε)) ≤ 1 − F_0(ε/β_{(n′′)}) + F_0(−ε/β_{(n′′)}).

The claim hence follows from (29).
Lemma 5 Let the assumptions of Theorem 1 be in force. Then for every divergent sequence (n′) of integer numbers there exists a divergent subsequence (n′′) ⊂ (n′) and a measurable set Ω_0 of probability one such that

lim_{n′′→+∞} E[e^{iξ W_{n′′}} | B](ω) = exp{−|ξ|^α k M_∞^{(α)}(ω) (1 − iη tan(πα/2) sign ξ)}  (31)

for every ξ ∈ R and for every ω in Ω_0.

Proof Let (n′′) and Ω_0 be the same as in Lemma 4. To prove (31), we apply the central limit theorem for every ω ∈ Ω_0 to the conditional law of W_{n′′} given B.
For every ω in Ω_0, we know that F_{n′′} is a convolution of probability distribution functions satisfying the asymptotic negligibility assumption (30). Here, we shall use the general version of the central limit theorem as presented, e.g., in Theorem 30 in Section 16.9 and in Proposition 11 in Section 17.3 of [10]. According to these results, the claim (31) follows if, for every ω ∈ Ω_0,

lim_{n′′→+∞} ζ_{n′′}(x) = c^+ M_∞^{(α)} / x^α  (x > 0),  (32)
lim_{n′′→+∞} ζ_{n′′}(x) = c^− M_∞^{(α)} / |x|^α  (x < 0),  (33)
lim_{ε→0+} lim sup_{n′′→+∞} σ_{n′′}²(ε) = 0,  (34)
lim_{n′′→+∞} η_{n′′} = (c^+ − c^−)/(1 − α) · M_∞^{(α)}  (35)

are simultaneously satisfied.
In what follows we assume that $P\{L=0\} = P\{R=0\} = 0$, which yields that $\beta_{j,n} > 0$ almost surely. The general case can be treated with minor modifications. In order to prove (32), fix some $x>0$, and observe that

$$\zeta_{n''}(x) = \sum_{j=1}^{n''}\left(1 - F_0(\beta_{j,n''}^{-1}x)\right) = \sum_{j=1}^{n''}\left(1 - F_0(\beta_{j,n''}^{-1}x)\right)\left(\beta_{j,n''}^{-1}x\right)^\alpha \frac{\beta_{j,n''}^\alpha}{x^\alpha}.$$

Since $\lim_{y\to+\infty}(1 - F_0(y))\,y^\alpha = c^+$ by assumption (13), for every $\epsilon > 0$ there exists a $\bar y = \bar y(\epsilon)$ such that if $y > \bar y$, then $|(1-F_0(y))\,y^\alpha - c^+| \le \epsilon$. Hence if $x > \beta_{(n'')}\bar y$, then

$$x^{-\alpha}(c^+ - \epsilon)\,M^{(\alpha)}_{n''} \le \sum_{j=1}^{n''}\left(1 - F_0(\beta_{j,n''}^{-1}x)\right) \le x^{-\alpha}(c^+ + \epsilon)\,M^{(\alpha)}_{n''}. \qquad(36)$$

In view of (29), the claim (32) follows immediately. Relation (33) is proved in a completely analogous way.
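The scaling mechanism behind (32) can be checked by hand in a toy case: if $F_0$ is a Pareto law with $1-F_0(y)=y^{-\alpha}$ for $y\ge1$ (so that $c^+=1$), then $1-F_0(\beta_{j,n}^{-1}x) = \beta_{j,n}^\alpha x^{-\alpha}$ exactly whenever $x\ge\beta_{j,n}$, and $\zeta_n(x) = M_n^{(\alpha)}x^{-\alpha}$ holds without any limit. A minimal numerical sketch; the weights $\beta_{j,n}$ below are an arbitrary illustrative choice, not taken from the model:

```python
import numpy as np

# Pareto initial condition: 1 - F0(y) = y**(-alpha) for y >= 1, so c+ = 1.
alpha = 1.5

def tail_F0(y):
    return np.where(y >= 1.0, y ** (-alpha), 1.0)

# Arbitrary (illustrative) weights beta_{j,n} for one fixed n.
beta = np.array([0.5, 0.3, 0.2, 0.1])
M_alpha = np.sum(beta ** alpha)          # M_n^{(alpha)}

x = 2.0                                  # any x larger than max(beta)
zeta = np.sum(tail_F0(x / beta))         # zeta_n(x) = sum_j (1 - F0(x / beta_j))

# For the Pareto tail the identity zeta_n(x) = M_n^{(alpha)} / x**alpha is exact.
assert abs(zeta - M_alpha / x ** alpha) < 1e-12
```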
In order to prove (34), it is clearly sufficient to show that for every $\epsilon > 0$

$$\limsup_{n''\to+\infty}\ \sum_{j=1}^{n''}\int_{(-\epsilon,+\epsilon]} x^2\, dQ_{j,n''}(x) \le C\, M^{(\alpha)}_\infty\, \epsilon^{2-\alpha} \qquad(37)$$

with some constant $C$ independent of $\epsilon$. Recalling the definition of $Q_{j,n}$, an integration by parts gives

$$\int_{(0,\epsilon]} x^2\, dF_0(\beta_{j,n}^{-1}x) = -\epsilon^2\left(1 - F_0(\beta_{j,n}^{-1}\epsilon)\right) + 2\int_0^\epsilon x\left(1 - F_0(\beta_{j,n}^{-1}x)\right) dx,$$

and similarly for the integral from $-\epsilon$ to zero. With

$$K := \sup_{x>0}\, x^\alpha\left[1 - F_0(x)\right] + \sup_{x<0}\,(-x)^\alpha F_0(x), \qquad(38)$$

which is finite by hypothesis (13), it follows that

$$\int_{(-\epsilon,+\epsilon]} x^2\, dF_0(\beta_{j,n}^{-1}x) \le \frac{2K\epsilon^2}{(\beta_{j,n}^{-1}\epsilon)^\alpha} + 4K\beta_{j,n}^\alpha\int_0^\epsilon x^{1-\alpha}\, dx \le 2K\,\frac{4-\alpha}{2-\alpha}\,\beta_{j,n}^\alpha\,\epsilon^{2-\alpha}.$$

To conclude (37), it suffices to recall that $\sum_j \beta_{j,n''}^\alpha = M^{(\alpha)}_{n''} \to M^{(\alpha)}_\infty$ by (29).
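The integration-by-parts identity above is elementary but easy to get wrong by a boundary term; it can be sanity-checked numerically for a concrete distribution function. The sketch below uses the exponential law $F(x)=1-e^{-x}$ (an illustrative choice, unrelated to the hypotheses of the paper) and compares both sides of $\int_{(0,\epsilon]}x^2\,dF(x) = -\epsilon^2(1-F(\epsilon)) + 2\int_0^\epsilon x(1-F(x))\,dx$ by quadrature:

```python
import numpy as np

def trap(y, x):
    # simple trapezoidal rule (avoids relying on np.trapz / np.trapezoid naming)
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

eps = 0.7
x = np.linspace(0.0, eps, 200001)
density = np.exp(-x)                      # F'(x) for F(x) = 1 - exp(-x)
lhs = trap(x ** 2 * density, x)           # int_{(0,eps]} x^2 dF(x)
tail = np.exp(-x)                         # 1 - F(x)
rhs = -eps ** 2 * np.exp(-eps) + 2.0 * trap(x * tail, x)
assert abs(lhs - rhs) < 1e-8
```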
In order to prove (35), we need to distinguish if $0<\alpha<1$, or if $1<\alpha<2$. In the former case, integration by parts in the definition of $\eta_{n''}$ reveals

$$\eta_{n''} = \int_{-1}^{1} \zeta_{n''}(x)\, dx.$$

Having already shown (32) and (33), we know that the integrand converges pointwise with respect to $x$. The dominated convergence theorem applies since, by hypothesis (13),

$$|\zeta_{n''}(x)| \le K\,|x|^{-\alpha}\,\sup_{n''} M^{(\alpha)}_{n''} \qquad(39)$$

with the constant $K$ defined in (38); observe that $|x|^{-\alpha}$ is integrable on $(-1,1]$ since we have assumed $0<\alpha<1$. Consequently,

$$\lim_{n''\to\infty}\eta_{n''} = -\,c^- M^{(\alpha)}_\infty\int_{-1}^0 |x|^{-\alpha}\,dx + c^+ M^{(\alpha)}_\infty\int_0^1 |x|^{-\alpha}\,dx = \frac{c^+ - c^-}{1-\alpha}\,M^{(\alpha)}_\infty.$$

It remains to check (35) for $1<\alpha<2$. Since $\int_{\mathbb{R}} x\, dQ_{j,n}(x) = 0$, one can write

$$\eta_{n''} = -\sum_{j=1}^{n''}\int_{(-\infty,-1]}(1+x)\, dQ_{j,n''}(x) - \sum_{j=1}^{n''}\int_{(1,+\infty)}(x-1)\, dQ_{j,n''}(x).$$

Similarly as for $0<\alpha<1$, integration by parts reveals that

$$\eta_{n''} = \int_{\{|x|>1\}}\zeta_{n''}(x)\, dx. \qquad(40)$$

From this point on, the argument is the same as in the previous case: (32) and (33) provide pointwise convergence of the integrand; hypothesis (13) leads to (39), which guarantees that the dominated convergence theorem applies, since $|x|^{-\alpha}$ is integrable on the set $\{|x|>1\}$. It is straightforward to verify that the integral of the pointwise limit indeed yields the right-hand side of (35).
Proof of Theorem 1 By Lemma 5 and the dominated convergence theorem, every divergent sequence $(n')$ of integer numbers contains a divergent subsequence $(n'')\subset(n')$ for which

$$\lim_{n''\to+\infty} E\left[e^{i\xi W_{n''}}\right] = E\left[\exp\left\{-|\xi|^\alpha k M^{(\alpha)}_\infty\left(1 - i\eta\tan(\pi\alpha/2)\,\mathrm{sign}\,\xi\right)\right\}\right], \qquad(41)$$

where the limit is pointwise in $\xi\in\mathbb{R}$. Since the limiting function is independent of the arbitrarily chosen sequence $(n')$, a classical argument shows that (41) is true with $n\to+\infty$ in place of $n''\to+\infty$. In view of (26), the stated convergence follows.

By Proposition 2, the assertion about (non-)degeneracy of $V_\infty$ follows immediately from the representation (18). To verify the claim about moments for $\gamma>\alpha$, observe that (18) implies that

$$E\left[|V_\infty|^p\right] = \int_{\mathbb{R}}|x|^p\, dF_\infty(x) = E\left[\left(M^{(\alpha)}_\infty\right)^{p/\alpha}\right]\int_{\mathbb{R}}|u|^p\, dG_\alpha(u), \qquad(42)$$

where $G_\alpha$ is the distribution function of the centered $\alpha$-stable law with Fourier–Stieltjes transform $\hat g_\alpha$ defined in (12). The $p/\alpha$-th moment of $M^{(\alpha)}_\infty$ is finite at least for all $p<\gamma$ by Proposition 2. On the other hand, the $p$-th moment of $G_\alpha$ is finite if and only if $p<\alpha$.
The following lemma replaces Lemma 5 in the case $\alpha=2$.

Lemma 6 Let the assumptions of Theorem 3 hold. Then for every divergent sequence $(n')$ of integer numbers, there exists a divergent subsequence $(n'')\subset(n')$ and a set $\Omega_0$ of probability one such that

$$\lim_{n''\to+\infty} E\left[e^{i\xi W_{n''}}\,\middle|\,\mathcal{B}\right](\omega) = e^{-\frac{\xi^2\sigma^2}{2}M^{(2)}_\infty(\omega)} \quad (\xi\in\mathbb{R})$$

for every $\omega$ in $\Omega_0$.
Proof Let $(n'')$ and $\Omega_0$ have the properties stated in Lemma 4. The claim follows if, for every $\omega$ in $\Omega_0$,

$$\lim_{n''\to+\infty}\zeta_{n''}(x) = 0 \quad (x\ne0), \qquad(43)$$

$$\lim_{\epsilon\to0^+}\ \lim_{n''\to+\infty}\sigma^2_{n''}(\epsilon) = \sigma^2 M^{(2)}_\infty, \qquad(44)$$

$$\lim_{n''\to+\infty}\eta_{n''} = 0 \qquad(45)$$

are simultaneously satisfied.

By assumption, $X_0$ has finite second moment, and thus

$$\lim_{y\to+\infty} y^2\left(1-F_0(y)\right) = \lim_{y\to-\infty} y^2 F_0(y) = 0$$

by Chebyshev's inequality. Hence, given $\epsilon>0$, there exists a $\bar y=\bar y(\epsilon)$ such that $y^2(1-F_0(y))<\epsilon$ for every $y>\bar y$. Since

$$\zeta_{n''}(x) = \sum_{j=1}^{n''}\left(\beta_{j,n''}^{-1}x\right)^2\left(1-F_0(\beta_{j,n''}^{-1}x)\right)\beta_{j,n''}^2/x^2 \quad (x>0),$$

one gets $\zeta_{n''}(x)\le\epsilon\, M^{(2)}_{n''}/x^2$ whenever $x>\beta_{(n'')}\bar y$. In view of property (29), the first relation (43) follows for $x>0$. The argument for $x<0$ is analogous.

We turn to the proof of (44). A simple computation reveals

$$0 \le \sigma^2 M^{(2)}_{n''} - \sum_{j=1}^{n''}\int_{(-\epsilon,\epsilon]}x^2\,dQ_{j,n''}(x) = \sum_{j=1}^{n''}\beta^2_{j,n''}\int_{|\beta_{j,n''}x|>\epsilon}x^2\,dF_0(x) \le M^{(2)}_{n''}\int_{|\beta_{(n'')}x|>\epsilon}x^2\,dF_0(x),$$

which tends to zero as $n''\to\infty$ by property (29), for every $\epsilon>0$. In view of the definition of $\sigma^2_{n''}(\epsilon)$, which implies in particular $\sigma^2_{n''}(\epsilon)\ge0$, this gives (44).

Finally, in order to obtain (45), we use (43) and the dominated convergence theorem; the argument is the same as for (40) in the proof of Lemma 5.
Proof of Theorem 3 Use Lemma 6 and repeat the proof of Theorem 1. A trivial adaptation is needed in the calculation of moments if $\gamma>2$: consider (42) with $G_\alpha = G_2$, the distribution function of a Gaussian law, and note that it possesses finite moments of every order. Hence $E[|V_\infty|^p]$ is finite if and only if $E[(M^{(2)}_\infty)^{p/2}]$ is finite, which, by Proposition 2, is the case if and only if $S(p)<0$.
4.3 Proof of convergence for α = 1 (Theorems 2 and 4)

Theorem 4 is proven first. We shall apply the central limit theorem to the random variables $W^*_n = \sum_{j=1}^n(\beta_{j,n}X_j - q_{j,n})$ with $q_{j,n}$ defined in (20). In what follows, $Q_{j,n}(x) := F_0((x+q_{j,n})/\beta_{j,n})$. The next lemma is the analogue of Lemma 4 above.
Lemma 7 Suppose the assumptions of Theorem 4 are in force. Then, for every $\delta\in(0,1)$,

$$|q_{j,n}| = \left|\int_{\mathbb{R}}\sin(\beta_{j,n}s)\,dF_0(s)\right| \le C_\delta\,\beta_{j,n}^{1-\delta} \qquad(46)$$

with $C_\delta = \int_{\mathbb{R}}|x|^{1-\delta}\,dF_0(x) < +\infty$. Furthermore, for every divergent sequence $(n')$ of integer numbers, there exists a divergent subsequence $(n'')\subset(n')$ and a set $\Omega_0$ of probability one such that for every $\omega$ in $\Omega_0$ and for every $\epsilon>0$, the properties (29) and (30) are verified.
Proof First of all note that $C_\delta<+\infty$ for every $\delta\in(0,1)$ because of hypothesis (13). Using further that $|\sin(x)|\le|x|^{1-\delta}$ for $\delta\in(0,1)$, one immediately gets

$$|q_{j,n}| = \left|\int_{\mathbb{R}}\sin(\beta_{j,n}s)\,dF_0(s)\right| \le \beta_{j,n}^{1-\delta}\int_{\mathbb{R}}|s|^{1-\delta}\,dF_0(s).$$

Note that, as a consequence of (46), $(\epsilon+q_{j,n})\,\beta_{j,n}^{-1} \ge \beta_{(n)}^{-1}\left(\epsilon - C_\delta\,\beta_{(n)}^{1-\delta}\right)$. Clearly, the expression inside the bracket is positive for sufficiently small $\beta_{(n)}$. Defining $(n'')$ and $\Omega_0$ in accordance with Lemma 4, it thus follows that

$$\max_{1\le j\le n''}\left(1 - Q_{j,n''}(\epsilon) + Q_{j,n''}(-\epsilon)\right) \le 1 - F_0\!\left(\bar c\,\beta_{(n'')}^{-1}\right) + F_0\!\left(-\bar c\,\beta_{(n'')}^{-1}\right)$$

for a suitable constant $\bar c$ depending only on $\epsilon$, $\delta$ and $F_0$. An application of (29) yields (30).
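The bound (46) only uses $|\sin(x)|\le|x|^{1-\delta}$, so it can be illustrated with any initial law. The sketch below takes a hypothetical two-point law for $F_0$, with mass $p$ at $+a$ and $1-p$ at $-a$, for which $q = (2p-1)\sin(\beta a)$ and $C_\delta = a^{1-\delta}$:

```python
import math

# Illustrative two-point initial law: mass p at +a, mass 1-p at -a.
# Then q = (2p - 1) * sin(beta * a) and C_delta = a ** (1 - delta).
a, p, delta = 2.0, 0.7, 0.5
C_delta = a ** (1 - delta)
for beta in [0.01, 0.1, 0.5, 1.0, 3.0]:
    q = (2 * p - 1) * math.sin(beta * a)
    # Lemma 7's bound |q| <= C_delta * beta**(1-delta) rests on
    # |sin(x)| <= |x|**(1-delta), valid for every delta in (0,1).
    assert abs(q) <= C_delta * beta ** (1 - delta) + 1e-15
```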
Lemma 8 Suppose the assumptions of Theorem 4 are in force. Then for every divergent sequence $(n')$ of integer numbers there exists a divergent subsequence $(n'')\subset(n')$ and a measurable set $\Omega_0$ with $P(\Omega_0)=1$ such that

$$\lim_{n''\to+\infty} E\left[e^{i\xi W^*_{n''}}\,\middle|\,\mathcal{B}\right](\omega) = \exp\left\{-|\xi|\,k_1 M^{(1)}_\infty(\omega)\left(1 + 2i\eta\log|\xi|\,\mathrm{sign}\,\xi\right)\right\} \qquad(47)$$

$(\xi\in\mathbb{R})$ for every $\omega$ in $\Omega_0$.
Proof Define $(n'')$ and $\Omega_0$ according to Lemma 7, implying the convergences (29) and the UAN condition (30). In the following, let $\omega\in\Omega_0$ be fixed. In view of Proposition 11 in Section 17.3 of [10], the claim (47) follows if (32), (33) and (34) are satisfied with $\alpha=1$, and in addition

$$\lim_{n''\to+\infty}\sum_{j=1}^{n''}\int_{\mathbb{R}}\chi(t)\,dQ_{j,n''}(t) = M^{(1)}_\infty\left(c^+-c^-\right)\int_0^\infty\frac{\chi(t)-\sin(t)}{t^2}\,dt \qquad(48)$$

with $\chi(t) = -I\{t\le-1\} + t\,I\{-1<t<1\} + I\{t\ge1\}$.
Let us verify (32) for an arbitrary $x>0$. Given $\epsilon>0$, there exists some $\bar y=\bar y(\epsilon)$ such that $|y(1-F_0(y)) - c^+|\le\epsilon$ for all $y\ge\bar y$ because of hypothesis (13). Moreover, in view of Lemma 7,

$$\hat y_{j,n''} := \frac{x+q_{j,n''}}{\beta_{j,n''}} \ge \frac{x - C_{1/2}\,\beta_{(n'')}^{1/2}}{\beta_{(n'')}},$$

which clearly diverges to $+\infty$ as $n''\to\infty$ because of (29); in particular, $\hat y_{j,n''}\ge\bar y$ for $n''$ large enough. It follows that for those $n''$,

$$\frac{c^+-\epsilon}{x+q_{j,n''}}\,\beta_{j,n''} \le 1 - F_0(\hat y_{j,n''}) \le \frac{c^++\epsilon}{x+q_{j,n''}}\,\beta_{j,n''}. \qquad(49)$$

Recalling that $\zeta_{n''}(x) = \sum_{j=1}^{n''}\left[1 - F_0(\hat y_{j,n''})\right]$, from (49) and (46) one gets

$$\frac{c^+-\epsilon}{x + C_{1/2}\,\beta_{(n'')}^{1/2}}\,M^{(1)}_{n''} \le \zeta_{n''}(x) \le \frac{c^++\epsilon}{x - C_{1/2}\,\beta_{(n'')}^{1/2}}\,M^{(1)}_{n''}$$

if $n''$ is large enough. Finally, recall that $\beta_{(n'')}\to0$ as $n''\to\infty$, and that $M^{(1)}_{n''}\to M^{(1)}_\infty$ by (29). Since $\epsilon>0$ has been arbitrary, the claim (32) follows. The proof of (33) for arbitrary $x<0$ is completely analogous.
Concerning (34), it is obviously enough to prove that

$$\lim_{\epsilon\to0^+}\ \limsup_{n''\to+\infty}\ s^2_{n''}(\epsilon) = 0 \qquad(50)$$

where $s^2_{n''}(\epsilon) := \sum_{j=1}^{n''}\int_{(-\epsilon,\epsilon]}x^2\,dQ_{j,n''}(x)$. We split the domain of integration in the definition of $s^2_{n''}$ at $x=0$, and integrate by parts to get

$$s^2_{n''}(\epsilon) = -\epsilon^2\sum_{j=1}^{n''}Q_{j,n''}(-\epsilon) - \sum_{j=1}^{n''}\int_{(-\epsilon,0]}Q_{j,n''}(u)\,2u\,du - \epsilon^2\sum_{j=1}^{n''}\left(1-Q_{j,n''}(\epsilon)\right) + \sum_{j=1}^{n''}\int_{(0,\epsilon]}\left(1-Q_{j,n''}(u)\right)2u\,du =: A_{n''}(\epsilon) + B_{n''}(\epsilon) + C_{n''}(\epsilon) + D_{n''}(\epsilon).$$

Having already proven (32) and (33), we conclude

$$\lim_{\epsilon\to0^+}\ \lim_{n''\to+\infty}\left\{|A_{n''}(\epsilon)| + |C_{n''}(\epsilon)|\right\} = 0. \qquad(51)$$
Fix $\epsilon>0$; assume that $n''$ is sufficiently large to have $|q_{j,n''}|<\epsilon/2$ for $j=1,\dots,n''$. Then

$$|B_{n''}(\epsilon)| \le \sum_{j=1}^{n''}2\int_0^\epsilon w\,F_0\!\left(\frac{-w+q_{j,n''}}{\beta_{j,n''}}\right)dw \le \sum_{j=1}^{n''}\left\{\int_0^{2|q_{j,n''}|}2w\,dw + 2\int_{2|q_{j,n''}|}^\epsilon w\,F_0\!\left(\frac{-w+q_{j,n''}}{\beta_{j,n''}}\right)dw\right\} \le \sum_{j=1}^{n''}\left\{4|q_{j,n''}|^2 + 2\int_{2|q_{j,n''}|}^\epsilon w\,\frac{K\beta_{j,n''}}{w-q_{j,n''}}\,dw\right\} \le \sum_{j=1}^{n''}\left\{4C^2_{1/4}\,\beta_{j,n''}^{3/2} + \beta_{j,n''}\int_0^\epsilon 4K\,dw\right\} \le 4C^2_{1/4}\,\beta_{(n'')}^{1/2}\,M^{(1)}_{n''} + 4K\epsilon\, M^{(1)}_{n''},$$

with the constant $K$ defined in (38). In view of (29), it follows that

$$\lim_{\epsilon\to0^+}\ \limsup_{n''\to+\infty}|B_{n''}(\epsilon)| = 0 \qquad(52)$$

as desired. A completely analogous reasoning applies to $D_{n''}$. At this stage we can conclude (50), and thus also (34).
In order to verify (48), let us first show that

$$\lim_{n''\to\infty}\sum_{j=1}^{n''}\int_{\mathbb{R}}\sin(x)\,dQ_{j,n''}(x) = 0. \qquad(53)$$

We find

$$\left|\sum_{j=1}^{n''}\int_{\mathbb{R}}\sin(x)\,dQ_{j,n''}(x)\right| = \left|\sum_{j=1}^{n''}\int_{\mathbb{R}}\sin(t\beta_{j,n''}-q_{j,n''})\,dF_0(t)\right| \le \sum_{j=1}^{n''}\left|\left(\cos(q_{j,n''})-1\right)q_{j,n''} + \left(q_{j,n''}-\sin(q_{j,n''})\right) + \sin(q_{j,n''})\int_{\mathbb{R}}\left(1-\cos(t\beta_{j,n''})\right)dF_0(t)\right| \le \sum_{j=1}^{n''}\left(|I_1|+|I_2|+|I_3|\right).$$

The elementary inequalities $|\cos(x)-1|\le x^2/2$ and $|x-\sin(x)|\le|x|^3/6$ provide the estimate

$$\sum_{j=1}^{n''}\left(|I_1|+|I_2|\right) \le \sum_{j=1}^{n''}|q_{j,n''}|^3 \le C^3_{1/2}\,\beta_{(n'')}^{1/2}\,M^{(1)}_{n''}.$$

By (29), the last expression converges to zero as $n''\to\infty$. In order to estimate $I_3$, observe that, since $|1-\cos(x)|\le 2|x|^{3/4}$ for all $x\in\mathbb{R}$,

$$\int_{\mathbb{R}}\left(1-\cos(t\beta_{j,n''})\right)dF_0(t) \le 2\beta_{j,n''}^{3/4}\int_{\mathbb{R}}|t|^{3/4}\,dF_0(t) = 2C_{1/4}\,\beta_{j,n''}^{3/4}.$$

Consequently, applying Lemma 7 once again,

$$\sum_{j=1}^{n''}|I_3| \le \sum_{j=1}^{n''}|q_{j,n''}|\int_{\mathbb{R}}\left(1-\cos(t\beta_{j,n''})\right)dF_0(t) \le 2C^2_{1/4}\,\beta_{(n'')}^{1/2}\,M^{(1)}_{n''},$$

which converges to zero due to (29).
Having proven (53), the condition (48) becomes equivalent to

$$\sum_{j=1}^{n''}\int_{\mathbb{R}}\left(\chi(t)-\sin(t)\right)dQ_{j,n''}(t) \to M^{(1)}_\infty\left(c^+-c^-\right)\int_{\mathbb{R}_+}\frac{\chi(t)-\sin(t)}{t^2}\,dt. \qquad(54)$$
The proof of this fact follows essentially the lines of the proof of Theorem 12 of [10]. Let us first prove that, if $-\infty<x<0<y<+\infty$,

$$\lim_{n''\to+\infty}\int_{(x,y]}d\nu_{n''}(t) = M^{(1)}_\infty\left(c^+y - c^-x\right), \qquad(55)$$

where $\nu_{n''}[B] := \sum_{j=1}^{n''}\int_B t^2\,dQ_{j,n''}(t)$ for every Borel set $B\subset\mathbb{R}$. For fixed $\epsilon\in(0,y)$, one uses (32) to conclude

$$\lim_{n''\to\infty}\int_{(\epsilon,y]}d\nu_{n''}(t) = \lim_{n''\to\infty}\sum_{j=1}^{n''}\left(-t^2\left(1-Q_{j,n''}(t)\right)\Big|_{t=\epsilon}^{t=y} + 2\int_{(\epsilon,y]}t\left(1-Q_{j,n''}(t)\right)dt\right) = -y^2\,\frac{c^+M^{(1)}_\infty}{y} + \epsilon^2\,\frac{c^+M^{(1)}_\infty}{\epsilon} + 2\int_{(\epsilon,y]}t\,\frac{c^+M^{(1)}_\infty}{t}\,dt = (y-\epsilon)\,c^+M^{(1)}_\infty.$$

Notice that we have used the dominated convergence theorem to pass to the limit under the integral; this is justified in view of the upper bound provided by (36). In a similar way, one shows for fixed $\epsilon\in(0,|x|)$ that

$$\lim_{n''\to\infty}\int_{(x,-\epsilon]}d\nu_{n''}(t) = (|x|-\epsilon)\,c^-M^{(1)}_\infty.$$

Combining this with (50), one concludes

$$\limsup_{n''\to\infty}\int_{(x,y]}d\nu_{n''}(t) = \left(c^+y - c^-x\right)M^{(1)}_\infty.$$

The same equality is trivially true for $\liminf$ in place of $\limsup$, proving (55). Now
fix $0<R<+\infty$, and note that (55) yields that for every bounded and continuous function $f:[-R,R]\to\mathbb{R}$

$$\lim_{n''\to\infty}\int_{[-R,R]}f(t)\,d\nu_{n''}(t) = M^{(1)}_\infty c^-\int_{-R}^0 f(t)\,dt + M^{(1)}_\infty c^+\int_0^R f(t)\,dt \qquad(56)$$

holds true. In particular, using $f(t) = (\chi(t)-\sin t)/t^2$, one obtains

$$\lim_{n''\to\infty}\sum_{j=1}^{n''}\int_{[-R,R]}\left(\chi(t)-\sin(t)\right)dQ_{j,n''}(t) = M^{(1)}_\infty\left(c^+-c^-\right)\int_0^R\frac{\chi(t)-\sin(t)}{t^2}\,dt. \qquad(57)$$
Moreover, since $|\chi(t)-\sin t|\le2$,

$$\left|\sum_{j=1}^{n''}\int_{[-R,R]^c}\left(\chi(t)-\sin(t)\right)dQ_{j,n''}(t)\right| \le 2\left[\zeta_{n''}(-R)+\zeta_{n''}(R)\right].$$

Applying (32) and (33) one obtains

$$\limsup_{n''\to+\infty}\left|\sum_{j=1}^{n''}\int_{[-R,R]^c}\left(\chi(t)-\sin(t)\right)dQ_{j,n''}(t)\right| \le 2M^{(1)}_\infty\left(c^++c^-\right)\frac{1}{R},$$

which gives

$$\limsup_{R\to+\infty}\ \limsup_{n''\to+\infty}\left|\sum_{j=1}^{n''}\int_{[-R,R]^c}\left(\chi(t)-\sin(t)\right)dQ_{j,n''}(t)\right| = 0. \qquad(58)$$

Combining (57) with (58) one gets (54).
Proof of Theorem 4 Use Lemma 8 and repeat the proof of Theorem 1.
Proof of Theorem 2 The theorem is a corollary of Theorem 4. Since $m_0 = \int_{\mathbb{R}}x\,dF_0(x) < \infty$ by hypothesis, it follows that $c^+=c^-=0$, and so $V^*_t$ converges to 0 in probability. Now write $V_t = m_0 M^{(1)}_{\nu_t} + V^*_t - R_{\nu_t}$, with the remainder $R_n := \sum_{j=1}^n\left(q_{j,n} - \beta_{j,n}m_0\right)$. Thanks to Proposition 2, $m_0 M^{(1)}_{\nu_t}$ converges in distribution to $m_0 M^{(1)}_\infty$. It remains to prove that $R_{\nu_t}$ converges to 0 in probability. Since

$$\left|\frac{\sin(x)}{x} - 1\right| \le H(x) := \frac{1}{6}\,x^2\,I\{|x|<1\} + 2\,I\{|x|\ge1\},$$

it follows that

$$|R_n| \le \sum_{j=1}^n\beta_{j,n}\int_{\mathbb{R}}\left|\frac{\sin(\beta_{j,n}x)}{\beta_{j,n}x}-1\right||x|\,dF_0(x) \le \sum_{j=1}^n\beta_{j,n}\int_{\mathbb{R}}H(\beta_{j,n}x)\,|x|\,dF_0(x) \le M^{(1)}_n\int_{\mathbb{R}}H(\beta_{(n)}x)\,|x|\,dF_0(x).$$

Recall that $M^{(1)}_n$ converges a.s. to $M^{(1)}_\infty$ and $\beta_{(n)}$ converges in probability to 0 by (29). By dominated convergence it follows that also $\int_{\mathbb{R}}H(\beta_{(n)}x)\,|x|\,dF_0(x)$ converges in probability to 0.

The (non-)degeneracy of $V_\infty$ and the (in)finiteness of its moments is an immediate consequence of Proposition 2.
4.4 Estimates in Wasserstein metric (Theorem 5)
Proof of Theorem 5 We shall assume that $W_\gamma(X_0,V_\infty)<+\infty$, since otherwise the claim is trivial. Then, there exists an optimal pair $(X^*,Y^*)$ realizing the infimum in the definition of the Wasserstein distance,

$$\Delta := W_\gamma^{\max(\gamma,1)}(X_0,V_\infty) = W_\gamma^{\max(\gamma,1)}(X^*,Y^*) = E|X^*-Y^*|^\gamma, \qquad(59)$$

see e.g. Chapter 6 in [1]. Let $(X^*_j,Y^*_j)_{j\ge1}$ be a sequence of independent and identically distributed random variables with the same law as $(X^*,Y^*)$, which are further independent of $\mathcal{B}_n = (\beta_{1,1},\beta_{1,2},\dots,\beta_{n,n})$. Consequently, $\sum_{j=1}^n X^*_j\beta_{j,n}$ has the same law as $W_n$, and $\sum_{j=1}^n Y^*_j\beta_{j,n}$ has the same law as $V_\infty$. By definition of $W_\gamma$,

$$W_\gamma^{\max(\gamma,1)}(W_n,V_\infty) \le E\left[\left|\sum_{j=1}^n X^*_j\beta_{j,n} - \sum_{j=1}^n Y^*_j\beta_{j,n}\right|^\gamma\right] = E\left[E\left[\left|\sum_{j=1}^n\left(X^*_j-Y^*_j\right)\beta_{j,n}\right|^\gamma\,\middle|\,\mathcal{B}_n\right]\right].$$

Now, if $0<\alpha<\gamma\le1$, then Minkowski's inequality yields

$$W_\gamma(W_n,V_\infty) \le E\left[E\left[\sum_{j=1}^n\beta^\gamma_{j,n}|X^*_j-Y^*_j|^\gamma\,\middle|\,\mathcal{B}_n\right]\right] = E\left[\sum_{j=1}^n\beta^\gamma_{j,n}\right]\Delta,$$

where $\Delta$ is defined in (59). If instead $1\le\alpha<\gamma\le2$, we can apply the Bahr–Esseen inequality (28) since $E(X^*_j-Y^*_j) = E(X_1) - E(V_\infty) = 0$ and $E|X^*_j-Y^*_j|^\gamma = W^\gamma_\gamma(X_0,V_\infty)<+\infty$. Thus,

$$W^\gamma_\gamma(W_n,V_\infty) \le E\left[2\,E\left[\sum_{j=1}^n\beta^\gamma_{j,n}|X^*_j-Y^*_j|^\gamma\,\middle|\,\mathcal{B}_n\right]\right] = 2\,E\left[\sum_{j=1}^n\beta^\gamma_{j,n}\right]\Delta.$$

By Jensen's inequality,

$$W_\gamma^{\max(\gamma,1)}(V_t,V_\infty) \le \sum_{n\ge1}e^{-t}(1-e^{-t})^{n-1}\,W_\gamma^{\max(\gamma,1)}(W_n,V_\infty).$$

Combining the previous estimates with Lemma 2, we obtain

$$W_\gamma^{\max(\gamma,1)}(V_t,V_\infty) \le a\Delta\sum_{n\ge1}e^{-t}(1-e^{-t})^{n-1}\,E\left[\sum_{j=1}^n\beta^\gamma_{j,n}\right] = a\Delta\,e^{tS(\gamma)},$$

with $a=1$ if $0<\alpha<\gamma\le1$ and $a=2$ if $1\le\alpha<\gamma\le2$.
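In one dimension, the optimal coupling invoked in (59) is the quantile coupling: for $\gamma\ge1$, $W_\gamma^\gamma(X_1,X_2) = \int_0^1|F_1^{-1}(y)-F_2^{-1}(y)|^\gamma\,dy$, which for two empirical laws with the same number of atoms amounts to matching sorted samples. A small illustrative check (translating a sample by a constant $c$ gives $W_1 = |c|$ exactly):

```python
import numpy as np

# Empirical W_1 via the quantile coupling: sort both samples and average
# the absolute differences of matched order statistics.
rng = np.random.default_rng(0)
x = rng.standard_normal(1000)
c = 0.75
y = x + c                                       # translated sample
w1 = np.mean(np.abs(np.sort(x) - np.sort(y)))   # empirical W_1
assert abs(w1 - c) < 1e-12
```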
Lemma 9 Let two random variables $X_1$ and $X_2$ be given, and assume that their distribution functions $F_1$ and $F_2$ both satisfy the conditions (24) and (25) with the same constants $\alpha>0$, $0<\epsilon<1$, $K$ and $c^+,c^-\ge0$. Then $W_\gamma(X_1,X_2)<\infty$ for all $\gamma$ that satisfy $\alpha<\gamma<\frac{\alpha}{1-\epsilon}$.
Proof Define the auxiliary functions $H$, $H_+$ and $H_-$ on $\mathbb{R}\setminus\{0\}$ by $H(x) = I\{x>0\}\left(1-c^+x^{-\alpha}\right) + I\{x<0\}\,c^-|x|^{-\alpha}$ and $H_\pm(x) = H(x)\pm K|x|^{-(\alpha+\epsilon)}$, so that $H_-\le F_i\le H_+$ for $i=1,2$ by hypothesis. It is immediately seen that $H(x)$, $H_+(x)$ and $H_-(x)$ all tend to one (to zero, respectively) when $x$ goes to $+\infty$ (to $-\infty$, respectively). Moreover, evaluating the functions' derivatives, one verifies that $H$ and $H_-$ are strictly increasing on $\mathbb{R}_+$, and that $H_+$ is strictly increasing on some interval $(R_+,+\infty)$. Let $\check R>0$ be such that $H(\check R)>H_+(R_+)$; then, for every $x>\check R$, the equation

$$H_-(\hat x) = H(x) = H_+(\check x) \qquad(60)$$

possesses precisely one solution pair $(\check x,\hat x)$ satisfying $R_+<\check x<x<\hat x$. Likewise, $H$ and $H_+$ are strictly increasing on $\mathbb{R}_-$, and $H_-$ is strictly increasing on $(-\infty,-R_-)$. Choosing $\hat R>0$ such that $H(-\hat R)<H_-(-R_-)$, Eq. (60) has exactly one solution $(\check x,\hat x)$ with $\check x<x<\hat x<-R_-$ for every $x<-\hat R$.

From the definition of the Wasserstein distance one has, for every $\gamma>0$,

$$W_\gamma^{\max(\gamma,1)}(X_1,X_2) \le \int_0^1\left|F_1^{-1}(y)-F_2^{-1}(y)\right|^\gamma dy,$$

where $F_i^{-1}:(0,1)\to\mathbb{R}$ denotes the pseudo-inverse function of $F_i$. We split the domain of integration $(0,1)$ into the three intervals $(0,H(-\hat R))$, $[H(-\hat R),H(\check R)]$ and $(H(\check R),1)$, obtaining:

$$W_\gamma^{\max(\gamma,1)}(X_1,X_2) \le \int_{-\infty}^{-\hat R}\left|F_1^{-1}(H(x))-F_2^{-1}(H(x))\right|^\gamma H'(x)\,dx + \int_{H(-\hat R)}^{H(\check R)}\left|F_1^{-1}(y)-F_2^{-1}(y)\right|^\gamma dy + \int_{\check R}^{\infty}\left|F_1^{-1}(H(x))-F_2^{-1}(H(x))\right|^\gamma H'(x)\,dx.$$

The middle integral is obviously finite. To prove finiteness of the first and the last integral, we show that

$$\int_{\check R}^\infty\left|F_1^{-1}(H(x))-x\right|^\gamma H'(x)\,dx < +\infty;$$

the estimates for the remaining contributions are similar. Let some $x\ge\check R$ be given, and let $\hat x>\check x>\check R$ satisfy (60). From $H_-<F_1<H_+$, it follows that $F_1(\check x)<H(x)<F_1(\hat x)$, which implies further that

$$\check x - x < F_1^{-1}(H(x)) - x < \hat x - x. \qquad(61)$$

From the definition of $H$, it follows that $x = \hat x\left(1+\kappa\hat x^{-\epsilon}\right)^{-1/\alpha}$, with $\kappa = K/c^+$. Combining this with a Taylor expansion, and recalling that $\hat x>x>\check R>0$, one obtains

$$\hat x - x = x\left[\left(1+\kappa\hat x^{-\epsilon}\right)^{1/\alpha} - 1\right] < x\left[\left(1+\kappa x^{-\epsilon}\right)^{1/\alpha} - 1\right] < \hat C\,x^{1-\epsilon}, \qquad(62)$$

where $\hat C$ is defined in terms of $\alpha$, $\kappa$ and $\check R$. In an analogous manner, one concludes from $x = \check x\left(1-\kappa\check x^{-\epsilon}\right)^{-1/\alpha}$, in combination with $0<R_+<\check x<x$ and $0<\epsilon<1$, that

$$\check x - x = \check x\left[1 - \left(1-\kappa\check x^{-\epsilon}\right)^{-1/\alpha}\right] \ge \check x\left[1 - \left(1+\check C\,\check x^{-\epsilon}\right)\right] > -\check C\,x^{1-\epsilon}, \qquad(63)$$

where $\check C$ only depends on $\alpha$, $\kappa$ and $R_+$. Substitution of (62) and (63) into (61) yields

$$\int_{\check R}^\infty\left|F_1^{-1}(H(x))-x\right|^\gamma H'(x)\,dx < \max(\hat C,\check C)^\gamma\int_{\check R}^\infty x^{\gamma(1-\epsilon)-\alpha-1}\,dx,$$

which is finite provided that $0<\gamma<\alpha/(1-\epsilon)$.
Proof of Lemma 1 In view of Lemma 9, it suffices to show that the distribution function $F_\infty$ of $V_\infty$ satisfies (24) and (25) with the same constants $c^+$ and $c^-$ as the initial condition $F_0$ (possibly after diminishing $\epsilon$ and enlarging $K$). The proof is based on the representation of $F_\infty$ as a mixture of stable laws. More precisely, let $G_\alpha$ be the distribution function whose Fourier–Stieltjes transform is $\hat g_\alpha$ as in (12); then

$$F_\infty(x) = E\left[G_\alpha\!\left(\left(M^{(\alpha)}_\infty\right)^{-1/\alpha}x\right)\right],$$

see (18). Since $\alpha<\gamma<2\alpha$, there exists a finite constant $K'>0$ such that $\left|1 - c^+x^{-\alpha} - G_\alpha(x)\right| \le K'x^{-\gamma}$ for $x>0$, and similarly for $x<0$; see, e.g., Sections 2.4 and 2.5 of [26]. Using that $E[M^{(\alpha)}_\infty]=1$ and $C := E[(M^{(\alpha)}_\infty)^{\gamma/\alpha}]<\infty$ (since $S(\gamma)<0$), it follows further that

$$\left|1 - c^+x^{-\alpha} - F_\infty(x)\right| = \left|1 - c^+E\left[M^{(\alpha)}_\infty\right]x^{-\alpha} - E\left[G_\alpha\!\left(\left(M^{(\alpha)}_\infty\right)^{-1/\alpha}x\right)\right]\right| \le E\left[\left|1 - c^+\left(\left(M^{(\alpha)}_\infty\right)^{-1/\alpha}x\right)^{-\alpha} - G_\alpha\!\left(\left(M^{(\alpha)}_\infty\right)^{-1/\alpha}x\right)\right|\right] \le E\left[K'\left(M^{(\alpha)}_\infty\right)^{\gamma/\alpha}\right]x^{-\gamma} = CK'\,x^{-\gamma}.$$

This proves (24) for $F_\infty$, with $\epsilon = \gamma-\alpha$ and $K = CK'$. A similar argument proves (25).
4.5 Proofs of strong convergence (Theorem 6)
The proof of strong convergence rests on $n$-independent a priori bounds on the characteristic functions $\hat q_n$. These bounds are derived in Lemmas 10 and 11 below.
Lemma 10 Under the hypotheses of Theorem 6, there exist a constant $\theta>0$ and a radius $\rho>0$, both independent of $n\ge0$, such that $|\hat q_n(\xi)|\le(1+\theta|\xi|^\alpha)^{-1/r}$ for all $|\xi|\le\rho$.
Proof By the explicit representation (18) or (19), respectively, we conclude that $|\phi_\infty(\xi)| \le \Phi(\xi) := E\left[\exp\left(-|\xi|^\alpha k M^{(\alpha)}_\infty\right)\right]$, with the parameter $k$ from (14), or $k=\sigma^2/2$ if $\alpha=2$. Notice further that, by (17), $\Phi$ satisfies

$$\Phi(\xi) = E\left[\Phi(L\xi)\,\Phi(R\xi)\right]. \qquad(64)$$

Moreover, since $M^{(\alpha)}_\infty\ne0$, $E[M^{(\alpha)}_\infty]=1$ and $E[(M^{(\alpha)}_\infty)^{\gamma/\alpha}]<+\infty$, the function $\Phi$ is positive and strictly convex in $|\xi|^\alpha$, with $\Phi(\xi) = 1 - k|\xi|^\alpha + o(|\xi|^\alpha)$. It follows that for each $\kappa>0$ with $\kappa<k$, there exists exactly one point $\Xi_\kappa>0$ with $\Phi(\Xi_\kappa)+\kappa|\Xi_\kappa|^\alpha = 1$, and $\Xi_\kappa$ decreases monotonically from $+\infty$ to zero as $\kappa$ increases from zero to $k$.

Since $\hat q_0 = \phi_0$ satisfies condition (13) by hypothesis, it follows by Theorem 2.6.5 of [14] that

$$\hat q_0(\xi) = 1 - k|\xi|^\alpha\left(1 - i\eta\tan(\pi\alpha/2)\,\mathrm{sign}\,\xi\right) + o(|\xi|^\alpha),$$

with the same $k$ as before, and $\eta$ determined by (14). For $\alpha=2$, clearly $\hat q_0(\xi) = 1 - \sigma^2\xi^2/2 + o(\xi^2)$. By the aforementioned properties of $\Phi$, there exists a $\kappa\in(0,k)$ such that

$$|\hat q_0(\xi)| \le \Phi(\xi) + \kappa|\xi|^\alpha \qquad(65)$$

for all $\xi\in\mathbb{R}$. This is evident, since $|\hat q_0(\xi)| = |\phi_0(\xi)| = 1 - k|\xi|^\alpha + o(|\xi|^\alpha)$ for small $\xi$, while inequality (65) is trivially satisfied for $|\xi|\ge\Xi_\kappa$, since $|\phi_0|\le1$.

Starting from (65), we shall now prove inductively that

$$|\hat q_\ell(\xi)| \le \Phi(\xi) + \kappa|\xi|^\alpha. \qquad(66)$$

Fix $n\ge0$, and assume (66) holds for all $\ell\le n$. Choose $j\le n$. Using the invariance property (64) of $\Phi$, as well as the uniform bound of characteristic functions by one, it easily follows that

$$\left|E\left[\hat q_j(L\xi)\,\hat q_{n-j}(R\xi)\right]\right| - \Phi(\xi) \le E\left[|\hat q_j(L\xi)|\,|\hat q_{n-j}(R\xi)| - \Phi(L\xi)\Phi(R\xi)\right] \le E\left[\left(|\hat q_j(L\xi)| - \Phi(L\xi)\right)|\hat q_{n-j}(R\xi)|\right] + E\left[\Phi(L\xi)\left(|\hat q_{n-j}(R\xi)| - \Phi(R\xi)\right)\right] \le E\left[\kappa(L|\xi|)^\alpha\right] + E\left[\kappa(R|\xi|)^\alpha\right] = \kappa|\xi|^\alpha.$$

The final equality is a consequence of $E[L^\alpha+R^\alpha]=1$. By (9), it is immediate to conclude (66) with $\ell=n+1$.

The proof is finished by noting that, since $\kappa<k$, $(1+\theta|\xi|^\alpha)^{-1/r} \ge \Phi(\xi)+\kappa|\xi|^\alpha$ holds for $|\xi|\le\rho$, provided that $\rho>0$ and $\theta>0$ are sufficiently small.
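The final step of Lemma 10, namely that $(1+\theta|\xi|^\alpha)^{-1/r}$ dominates $\Phi(\xi)+\kappa|\xi|^\alpha$ near the origin once $\theta$ and $\rho$ are small, can be checked numerically. The parameters below are illustrative only; in particular, $\Phi(\xi)=e^{-k|\xi|^\alpha}$ corresponds to the degenerate choice $M_\infty^{(\alpha)}\equiv1$:

```python
import math

# Sample parameters (assumptions for illustration, not from the paper):
# kappa < k, and theta, rho chosen small.
k, kappa, alpha, r = 1.0, 0.5, 1.0, 0.5
theta, rho = 0.05, 0.1

for i in range(1001):
    xi = rho * i / 1000.0
    lhs = (1.0 + theta * xi ** alpha) ** (-1.0 / r)
    rhs = math.exp(-k * xi ** alpha) + kappa * xi ** alpha   # Phi + kappa|xi|^a
    assert lhs >= rhs - 1e-12       # domination on the whole disc |xi| <= rho
```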
Lemma 11 Under the hypotheses of Theorem 6, let $\rho>0$ be the radius introduced in Lemma 10 above. Then there exists a constant $\lambda>0$, independent of $\ell\ge0$, such that

$$|\hat q_\ell(\xi)| \le (1+\lambda|\xi|^r)^{-1/r} \quad\text{for all } |\xi|\ge\rho. \qquad(67)$$

Proof Since the density $f_0$ has finite Linnik–Fisher information by hypothesis (H2), it follows that

$$|\phi_0(\xi)| \le \left(\int_{\mathbb{R}}|\zeta|^2\,|\hat h(\zeta)|^2\,d\zeta\right)^{1/2}|\xi|^{-1}$$

for all $\xi\in\mathbb{R}$, where $h=\sqrt{f_0}$ and $\hat h$ is its Fourier transform; see Lemma 2.3 in [5]. For any sufficiently small $\lambda>0$, one concludes

$$|\phi_0(\xi)| \le (1+\lambda|\xi|^r)^{-1/r} \qquad(68)$$

for sufficiently large $|\xi|$.

Next, recall that the modulus of the Fourier–Stieltjes transform of a probability density is continuous and bounded away from one, locally uniformly in $\xi$ on $\mathbb{R}\setminus\{0\}$. Diminishing the $\lambda>0$ in (68) if necessary, this estimate actually holds for all $|\xi|\ge\rho$. Thus, the claim (67) is proven for $\ell=0$. To proceed by induction, fix $n\ge0$ and assume that (67) holds for all $\ell\le n$. In the following, we shall conclude (67) for $\ell=n+1$.
Recall that $r<\alpha$ in hypothesis (H1); see Remark 2. Hence, defining

$$\rho_\lambda = (\lambda/\theta)^{1/(\alpha-r)}, \qquad(69)$$

it follows that $(1+\theta|\xi|^\alpha)^{-1/r} \le (1+\lambda|\xi|^r)^{-1/r}$ if $|\xi|\ge\rho_\lambda$. Taking into account Lemma 10, the estimate (67) for $\ell\le n$ thus extends to all $|\xi|\ge\rho_\lambda$. We assume $\rho_\lambda<\rho$ from now on, which is equivalent to saying that $0<\lambda<\lambda_0 := \theta\rho^{\alpha-r}$.

With these notations at hand, introduce the following "good" set,

$$M^G_{\lambda,\delta} := \left\{\omega : L^r(\omega)+R^r(\omega) \ge 1+\delta^r \ \text{and}\ \min\left(L(\omega),R(\omega)\right)\rho \ge \rho_\lambda\right\},$$

depending on $\lambda$ and a parameter $\delta>0$. We are going to show that if $\delta>0$ and $\lambda>0$ are sufficiently small, then $M^G_{\lambda,\delta}$ has positive probability. First observe that the law of $(L,R)$ cannot be concentrated on the two-point set $\{(0,1),(1,0)\}$, because $S(\gamma)<0$ by the hypotheses of Theorem 6. Hence we can assume $P\{L^r+R^r>1\}>0$, possibly after diminishing $r>0$ (recall that if (H1) holds for some $r>0$, then it holds for any smaller $r>0$ as well). Moreover, notice that $L^r+R^r>1$ and $L=0$ or $R=0$ implies $L^\alpha+R^\alpha>1$. But since $E[L^\alpha+R^\alpha]=1$, it follows that $P\{L>0,\,R>0,\,L^r+R^r>1\}>0$. In conclusion, the countable union of sets

$$\bigcup_{k=1}^\infty M^G_{\lambda_0/k,\,1/k} = \left\{\omega : L^r(\omega)+R^r(\omega)>1,\ L(\omega)>0,\ R(\omega)>0\right\}$$

has positive probability, and so has one of the components $M^G_{\lambda_0/k,\,1/k}$.

Also, we introduce a "bad" set, that depends on $\lambda$ and $\xi$,

$$M^B_{\lambda,\xi} := \left\{\omega : \min\left(L(\omega),R(\omega)\right)|\xi| < \rho_\lambda\right\}.$$

Notice that $M^G_{\lambda,\delta}$ and $M^B_{\lambda,\xi}$ are disjoint provided $|\xi|\ge\rho$.
We are now ready to carry out the induction proof, for a given $\lambda$ small enough. Fix $j\le n$ and some $|\xi|\ge\rho$. We prove that

$$\left|E\left[\hat q_j(L\xi)\,\hat q_{n-j}(R\xi)\right]\right| \le E\left[|\hat q_j(L\xi)|\,|\hat q_{n-j}(R\xi)|\right] \le (1+\lambda|\xi|^r)^{-1/r}. \qquad(70)$$

We distinguish several cases. If $\omega$ does not belong to the bad set $M^B_{\lambda,\xi}$, then $L|\xi|\ge\rho_\lambda$ and $R|\xi|\ge\rho_\lambda$, so that by the induction hypothesis

$$Z_j(\xi) := |\hat q_j(L\xi)|\,|\hat q_{n-j}(R\xi)| \le \left[\left(1+\lambda L^r|\xi|^r\right)\left(1+\lambda R^r|\xi|^r\right)\right]^{-1/r} \le \left[1+\lambda\left(L^r+R^r\right)|\xi|^r\right]^{-1/r} \le (1+\lambda|\xi|^r)^{-1/r};$$

indeed, recall that $L^r+R^r\ge1$ because of (H1). In particular, if $\omega$ belongs to the good set $M^G_{\lambda,\delta}$, then the previous estimate improves as follows,

$$Z_j(\xi) \le \left[1+\lambda\left(1+\delta^r\right)|\xi|^r\right]^{-1/r} \le \left(\frac{1+\lambda\rho^r}{1+\lambda(1+\delta^r)\rho^r}\right)^{1/r}(1+\lambda|\xi|^r)^{-1/r},$$

where we have used that $|\xi|\ge\rho$. Notice further that there exists some $c>0$, depending on $\delta$, $\theta$, $\lambda_0$, $\rho$ and $r$, but not on $\lambda$, such that for all sufficiently small $\lambda>0$,

$$\left(\frac{1+\lambda\rho^r}{1+\lambda(1+\delta^r)\rho^r}\right)^{1/r} \le 1 - c\lambda.$$

Finally, suppose that $\omega$ is a point in the bad set $M^B_{\lambda,\xi}$, and assume without loss of generality that $L\ge R$. Then $L^r|\xi|^r \ge (1-R^r)|\xi|^r \ge |\xi|^r - \rho^r_\lambda$, and so, for sufficiently small $\lambda$ and for any $|\xi|\ge\rho$,

$$|\hat q_j(L\xi)|\,|\hat q_{n-j}(R\xi)| \le \left(1+\lambda L^r|\xi|^r\right)^{-1/r} \le \left(1+\lambda|\xi|^r-\lambda\rho^r_\lambda\right)^{-1/r} \le \left(1+\lambda\rho^r_\lambda\right)^{1/r}(1+\lambda|\xi|^r)^{-1/r}.$$

Again, there exists a $\lambda$-independent constant $C$ such that, for all sufficiently small $\lambda>0$, $(1+\lambda\rho^r_\lambda)^{1/r} \le 1+C\lambda\rho^r_\lambda$. Putting the estimates obtained in the three cases together, one obtains

$$E\left[|\hat q_j(L\xi)|\,|\hat q_{n-j}(R\xi)|\right] \le (1+\lambda|\xi|^r)^{-1/r}\left[\left(1 - P\left(M^G_{\lambda,\delta}\right) - P\left(M^B_{\lambda,\xi}\right)\right) + P\left(M^G_{\lambda,\delta}\right)(1-c\lambda) + P\left(M^B_{\lambda,\xi}\right)\left(1+C\lambda\rho^r_\lambda\right)\right] \le (1+\lambda|\xi|^r)^{-1/r}\left[1 + \lambda\left(C\rho^r_\lambda - c\,P\left(M^G_{\lambda,\delta}\right)\right)\right].$$

Notice that we have used the trivial estimate $P(M^B_{\lambda,\xi})\le1$ in the last step, which eliminates any dependence of the term in the square brackets on $\xi$. To conclude (70), it suffices to observe that as $\lambda$ decreases to zero, $\rho_\lambda$ tends to zero monotonically by (69), while the measure $P(M^G_{\lambda,\delta})$ is obviously non-decreasing and we have already proven that $P(M^G_{\lambda^*,\delta})>0$ for $\lambda^*$ and $\delta$ suitably chosen. Hence $C\rho^r_\lambda \le c\,P(M^G_{\lambda,\delta})$ when $\lambda>0$ is small enough. From (70), it is immediate to conclude (67), recalling the recursive definition of $\hat q_{n+1}$ in (9).

Thus, the induction is complete, and so is the proof of the lemma.
Proof of Theorem 6 The key step is to prove convergence of the characteristic functions $\phi(t)\to\phi_\infty$ in $L^2(\mathbb{R})$. To this end, observe that the uniform bound on $\hat q_n$ obtained in Lemma 11 above directly carries over to the Wild sum,

$$|\phi(t;\xi)| \le e^{-t}\sum_{n=0}^\infty(1-e^{-t})^n\,|\hat q_n(\xi)| \le (1+\lambda|\xi|^r)^{-1/r} \quad (|\xi|\ge\rho).$$

The weak convergence of $V_t$ to $V_\infty$ implies locally uniform convergence of $\phi(t)$ to $\phi_\infty$, and so also $|\phi_\infty(\xi)|\le(1+\lambda|\xi|^r)^{-1/r}$ for $|\xi|\ge\rho$. Let $\epsilon>0$ be given. Then there exists a $\Xi\ge\rho$ such that

$$\int_{|\xi|\ge\Xi}|\phi(t;\xi)-\phi_\infty(\xi)|^2\,d\xi \le 2\int_{|\xi|\ge\Xi}\left(|\phi(t;\xi)|^2 + |\phi_\infty(\xi)|^2\right)d\xi \le 4\int_\Xi^\infty(1+\lambda|\xi|^r)^{-2/r}\,d\xi \le \frac{\epsilon}{2}.$$

Again by locally uniform convergence of $\phi(t)$, there exists a time $T>0$ such that $|\phi(t;\xi)-\phi_\infty(\xi)|^2\le\epsilon/(4\Xi)$ for every $|\xi|\le\Xi$ and $t\ge T$. In combination, it follows that $\|\phi(t)-\phi_\infty\|^2_{L^2}\le\epsilon$ for all $t\ge T$. Since $\epsilon>0$ has been arbitrary, convergence of $\phi(t)$ to $\phi_\infty$ in $L^2(\mathbb{R})$ follows. By Plancherel's identity, this implies strong convergence of the densities $f(t)$ of $V_t$ to the density $f_\infty$ of $V_\infty$ in $L^2(\mathbb{R})$.

Convergence in $L^1(\mathbb{R})$ is obtained by interpolation between weak and $L^2(\mathbb{R})$ convergence: given $\epsilon>0$, choose $M>0$ such that $\int_{|x|\ge M}f_\infty(x)\,dx < \epsilon/4$. By weak convergence of $V_t$ to $V_\infty$ there exists a $T>0$ such that $\int_{|x|\ge M}f(t;x)\,dx < \epsilon/2$ for all $t\ge T$. Now Hölder's inequality implies

$$\int_{\mathbb{R}}|f(t;x)-f_\infty(x)|\,dx \le (2M)^{1/2}\left(\int_{|x|\le M}|f(t;x)-f_\infty(x)|^2\,dx\right)^{1/2} + \int_{|x|>M}\left(|f(t;x)|+|f_\infty(x)|\right)dx < (2M)^{1/2}\,\|f(t)-f_\infty\|_{L^2} + \frac{3\epsilon}{4}.$$

Increasing $T$ sufficiently, the last sum is less than $\epsilon$ for $t\ge T$.

Finally, convergence in $L^p(\mathbb{R})$ with $1<p<2$ follows by interpolation between convergence in $L^1(\mathbb{R})$ and in $L^2(\mathbb{R})$.
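The $L^1$ interpolation step above is nothing but the Cauchy–Schwarz inequality applied on the window $\{|x|\le M\}$. A grid check with an arbitrary illustrative integrable function $g$, playing the role of $f(t)-f_\infty$:

```python
import numpy as np

# Grid check of: int |g| dx <= (2M)^{1/2} * (int_{|x|<=M} |g|^2 dx)^{1/2}
#                              + int_{|x|>M} |g| dx.
x = np.linspace(-10, 10, 100001)
dx = x[1] - x[0]
g = np.exp(-x ** 2) - 0.8 * np.exp(-np.abs(x))   # arbitrary test function
M = 3.0
inner = np.abs(x) <= M
lhs = np.sum(np.abs(g)) * dx
rhs = np.sqrt(2 * M) * np.sqrt(np.sum(g[inner] ** 2) * dx) \
      + np.sum(np.abs(g[~inner])) * dx
assert lhs <= rhs + 1e-9
```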
Acknowledgments D.M. thanks the Department of Mathematics of the University of Pavia, where a part of this research has been carried out, for the kind hospitality. All three authors would like to thank E. Regazzini for his time, comments and suggestions. They also thank an anonymous referee for valuable remarks that have helped to improve the paper's structure significantly.
References
1. Ambrosio, L., Gigli, N., Savaré, G.: Gradient flows: in metric spaces and in the space of probability
measures. Lectures in Mathematics. Birkhäuser, Boston (2008)
2. Bassetti, F., Ladelli, L., Regazzini, E.: Probabilistic study of the speed of approach to equilibrium for
an inelastic Kac model. J. Stat. Phys. 133, 683–710 (2008)
3. Carlen, E.A., Carvalho, M.C., Gabetta, E.: Central limit theorem for Maxwellian molecules and truncation of the Wild expansion. Commun. Pure Appl. Math. 53, 370–397 (2000)
4. Carlen, E., Gabetta, E., Regazzini, E.: Probabilistic investigations on the explosion of solutions of the
Kac equation with infinite energy initial distribution. J. Appl. Probab. 45, 95–106 (2008)
5. Carlen, E., Gabetta, E., Toscani, G.: Propagation of smoothness and the rate of exponential convergence to equilibrium for a spatially homogeneous Maxwellian gas. Commun. Math. Phys. 199, 521–546 (1999)
6. Carrillo, J.A., Cordier, S., Toscani, G.: Over-populated tails for conservative-in-the-mean inelastic
Maxwell models. Discret. Contin. Dyn. Syst. 24(1), 59–81 (2009)
7. Dolera, E., Gabetta, E., Regazzini, E.: Reaching the best possible rate of convergence to equilibrium
for solutions of Kac’s equation via central limit theorem. Ann. Appl. Probab. 19, 186–209 (2009)
8. Dolera, E., Regazzini, E.: The role of the central limit theorem in discovering sharp rates of convergence to equilibrium for the solution of the Kac equation. Ann. Appl. Probab. (2010). doi:10.1214/09-AAP623
9. Durrett, R., Liggett, T.: Fixed points of the smoothing transformation. Z. Wahrsch. Verw. Gebiete 64, 275–301 (1983)
10. Fristedt, B., Gray, L.: A Modern Approach to Probability Theory. Birkhäuser, Boston (1997)
11. Gabetta, E., Regazzini, E.: Some new results for McKean’s graphs with applications to Kac’s equation. J. Stat. Phys. 125, 947–974 (2006)
12. Gabetta, E., Regazzini, E.: Central limit theorem for the solution of the Kac equation. Ann. Appl.
Probab. 18, 2320–2336 (2008)
13. Gabetta, E., Regazzini, E.: Central limit theorem for the solution of the Kac equation: Speed of approach
to equilibrium in weak metrics. Probab. Theory Relat. Fields 146, 451–480 (2010)
14. Ibragimov, I.A., Linnik, Y.V.: Independent and Stationary Sequences of Random Variables.
Wolters-Noordhoff Publishing, Groningen (1971)
15. Liu, Q.: Fixed points of a generalized smoothing transformation and applications to the branching
random walk. Adv. Appl. Probab. 30, 85–112 (1998)
16. Liu, Q.: On generalized multiplicative cascades. Stoch. Process. Appl. 86, 263–286 (2000)
17. Matthes, D., Toscani, G.: On steady distributions of kinetic models of conservative economies. J. Stat.
Phys. 130, 1087–1117 (2008)
18. McKean, H.P. Jr.: Speed of approach to equilibrium for Kac’s caricature of a Maxwellian gas. Arch.
Ration. Mech. Anal. 21, 343–367 (1966)
19. McKean, H.P. Jr.: An exponential formula for solving Boltzmann's equation for a Maxwellian gas.
J. Combin. Theory 2, 358–382 (1967)
20. Prudnikov, A.P., Brychkov, Yu.A., Marichev, O.I.: Integrals and series, vol. 1. Elementary functions. Gordon & Breach Science Publishers, New York (1986)
21. Pulvirenti, A., Toscani, G.: Asymptotic properties of the inelastic Kac model. J. Stat. Phys. 114,
1453–1480 (2004)
22. Rachev, S.T.: Probability Metrics and the Stability of Stochastic Models. Wiley, New York (1991)
23. Sznitman, A.S.: Équations de type de Boltzmann, spatialement homogènes. Z. Wahrsch. Verw.
Gebiete 66, 559–592 (1986)
24. von Bahr, B., Esseen, C.G.: Inequalities for the r th absolute moment of a sum of random variables,
1 ≤ r ≤ 2. Ann. Math. Stat. 36, 299–303 (1965)
25. Wild, E.: On Boltzmann’s equation in the kinetic theory of gases. Proc. Camb. Philos. Soc. 47,
602–609 (1951)
26. Zolotarev, V.M.: One-dimensional stable distributions. In: Translations of Mathematical Monographs,
vol. 65. AMS, Providence (1986)