CHARACTERISTIC FUNCTIONS
We discuss some standard results concerning characteristic functions, namely
identifiability of the underlying measure, inversion to obtain a density under assumptions, and the continuity theorem. The proofs given here are less well known
than those in Durrett's book, and can be found in [1].
Definition 0.1. Recall that the characteristic function of a probability measure µ on
(R, B) is given by
$$\varphi(t) = \int e^{itx}\,d\mu(x).$$
When a random variable X has distribution µ, of course φ(t) = E[exp{itX}].
Elementary properties are that φ(0) = 1 and φ is uniformly continuous.
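For instance, uniform continuity follows from a bound that does not depend on t; a short verification:

```latex
% Uniform continuity of the characteristic function: the bound below is
% independent of t, and the right-hand side vanishes as h -> 0 by
% dominated convergence (the integrand is bounded by 2).
\[
  |\varphi(t+h) - \varphi(t)|
    = \Big| \int e^{itx}\big(e^{ihx} - 1\big)\, d\mu(x) \Big|
    \le \int \big| e^{ihx} - 1 \big|\, d\mu(x)
    \xrightarrow[h \to 0]{} 0.
\]
```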
Theorem 0.2. If φ is the characteristic function corresponding to the distribution
function F, and φ ∈ L¹(R), that is ∫|φ(t)| dt < ∞, then F has a bounded,
continuous density f given by the formula
$$f(x) = \frac{1}{2\pi}\int e^{-itx}\varphi(t)\,dt.$$
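As a numerical illustration of the inversion formula (not part of the original note), one can approximate the integral by a Riemann sum over a truncated range; the function `invert_cf` and the N(0, 1) example are illustrative choices:

```python
import numpy as np

def invert_cf(phi, x, t_max=40.0, n=200_001):
    """Approximate f(x) = (1/2pi) * integral of exp(-itx) * phi(t) dt
    by a Riemann sum over the truncated range [-t_max, t_max]."""
    t = np.linspace(-t_max, t_max, n)
    dt = t[1] - t[0]
    # For a real density the imaginary parts cancel; keep the real part.
    return (np.exp(-1j * t * x) * phi(t)).sum().real * dt / (2 * np.pi)

# Characteristic function of N(0,1): phi(t) = exp(-t^2/2).
phi_gauss = lambda t: np.exp(-t**2 / 2)

# Recovered density at x = 0; should be close to 1/sqrt(2*pi) ~ 0.3989.
f0 = invert_cf(phi_gauss, 0.0)
```

Since |φ| is integrable here and decays rapidly, the truncation and discretization errors are both negligible at these settings.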
Theorem 0.3. If X1 and X2 are random variables with distribution functions F1
and F2, but with the same characteristic function φ(t), then F1 = F2. Hence, characteristic functions are in one-to-one correspondence with probability measures on (R, B).
Theorem 0.4. Let {µn } be a sequence of probability measures with characteristic
functions {φn (t)}. Suppose φn (t) → ψ(t) for each t ∈ R and ψ is continuous
at t = 0. Then, {µn } is tight. As a consequence, ψ is a characteristic function
corresponding to a probability measure µ and µn converges weakly to µ.
The idea of the proofs of these theorems is to convolve the measures in question
with a Normal distribution of small variance. These convolutions are then
absolutely continuous with densities, which in this case can be evaluated in
terms of the characteristic functions, so that the assumptions of the
theorems can be brought to bear.
Proof of Theorem 0.2. Consider a probability space where X and Z are independent random variables, X has distribution function F and Z is N (0, 1) distributed.
Let Y = X + σZ and note the associated characteristic function is φ(t)e^{−t²σ²/2}.
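The form of this characteristic function follows from independence and the Gaussian characteristic function; a one-line check:

```latex
% Independence factors the expectation; E[exp(i t sigma Z)] is the N(0,1)
% characteristic function evaluated at t*sigma.
\[
  E\big[e^{it(X+\sigma Z)}\big]
    = E\big[e^{itX}\big]\, E\big[e^{it\sigma Z}\big]
    = \varphi(t)\, e^{-t^2\sigma^2/2}.
\]
```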
Now write
$$\frac{1}{2\pi}\int e^{-itx}\varphi(t)\,e^{-t^2\sigma^2/2}\,dt
  = \frac{1}{2\pi}\iint e^{-itx}e^{ity}e^{-t^2\sigma^2/2}\,dF(y)\,dt,$$
and note the integrand is absolutely integrable, by assumption, on (R, B, dt) ×
(R, B, dF).
Note also the completion of the square:
$$it(y-x) - \frac{t^2\sigma^2}{2}
  = \frac{1}{2}\left(\frac{y-x}{\sigma} + it\sigma\right)^2 - \frac{(y-x)^2}{2\sigma^2}.$$
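Expanding the square confirms the completion-of-the-square identity:

```latex
% Expand ((y-x)/sigma + i t sigma)^2 and collect terms; the
% (y-x)^2/(2 sigma^2) contributions cancel, leaving the left-hand side.
\begin{align*}
  \frac{1}{2}\Big(\frac{y-x}{\sigma} + it\sigma\Big)^2 - \frac{(y-x)^2}{2\sigma^2}
    &= \frac{(y-x)^2}{2\sigma^2} + it(y-x) - \frac{t^2\sigma^2}{2}
       - \frac{(y-x)^2}{2\sigma^2} \\
    &= it(y-x) - \frac{t^2\sigma^2}{2}.
\end{align*}
```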
Then, we have by Fubini's theorem, and a change of variables, that
$$\frac{1}{2\pi}\int e^{-itx}\varphi(t)\,e^{-t^2\sigma^2/2}\,dt
  = \frac{1}{2\pi}\iint e^{-(y-x)^2/(2\sigma^2)}\,e^{-t^2\sigma^2/2}\,dt\,dF(y).$$
By evaluating the t-integral, the right side equals
$$f_\sigma(x) = \frac{1}{\sqrt{2\pi\sigma^2}}\int e^{-(y-x)^2/(2\sigma^2)}\,dF(y),$$
which is the density of X + σZ.
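To make this identity concrete, one can check it numerically for, say, X uniform on [−1, 1], so that φ(t) = sin t / t; the specific choices σ = 0.5 and the evaluation point x = 0 below are illustrative, not part of the note:

```python
import numpy as np
from math import erf, sqrt

sigma = 0.5
x = 0.0  # evaluation point

# Left side: (1/2pi) * integral of exp(-itx) * phi(t) * exp(-t^2 sigma^2 / 2) dt,
# with phi(t) = sin(t)/t, the characteristic function of Uniform[-1, 1].
t = np.linspace(-200.0, 200.0, 400_001)
dt = t[1] - t[0]
phi = np.sinc(t / np.pi)  # np.sinc(z) = sin(pi z)/(pi z), so this is sin(t)/t
lhs = (np.exp(-1j * t * x) * phi
       * np.exp(-t**2 * sigma**2 / 2)).sum().real * dt / (2 * np.pi)

# Right side: integral over [-1, 1] of the N(0, sigma^2) kernel times the
# uniform density 1/2, which has a closed form via the error function.
rhs = 0.25 * (erf((1 - x) / (sigma * sqrt(2)))
              - erf((-1 - x) / (sigma * sqrt(2))))
```

Both sides evaluate the density of X + σZ at x, so they should agree up to the quadrature error.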
Let σ ↓ 0. Then, by dominated convergence,
$$\frac{1}{2\pi}\int e^{-itx}\varphi(t)\,e^{-t^2\sigma^2/2}\,dt
  \;\to\; \frac{1}{2\pi}\int e^{-itx}\varphi(t)\,dt =: f(x).$$
Hence, for a < b, also by dominated convergence, noting that |fσ(x)| ≤ (2π)⁻¹ ∫|φ(t)| dt <
∞, we have
$$\int_a^b f_\sigma(x)\,dx \;\to\; \int_a^b \Big[\frac{1}{2\pi}\int e^{-itx}\varphi(t)\,dt\Big]\,dx.$$
On the other hand, ∫ₐᵇ fσ(x) dx = P(a < X + σZ ≤ b) → F(b) − F(a) whenever a, b are
continuity points of F, since X + σZ ⇒ X as σ ↓ 0. Since such a, b are arbitrary, f is
the density of X. By the formula, and φ ∈ L¹(R), f is clearly bounded and continuous.
Proof of Theorem 0.3. Consider a probability space where X1, X2, Z are independent random variables, Xi has distribution function Fi for i = 1, 2, and Z
is N (0, 1) distributed. Let Yi = Xi + σZ, where σ > 0, for i = 1, 2.
Then, the characteristic function of Yi is φ(t)e^{−t²σ²/2} for both i = 1, 2, by
assumption. By the proof of Theorem 0.2, since φ(t)e^{−t²σ²/2} ∈ L¹(R), the densities
of Y1 and Y2 coincide:
$$f(x) = \frac{1}{2\pi}\int e^{-itx}\varphi(t)\,e^{-t^2\sigma^2/2}\,dt.$$
Hence, the distributions of Y1 and Y2 are the same. To recover that the distributions of X1 and X2 are the same, we show that the distribution of X1 + σZ
converges weakly to that of X1 as σ ↓ 0. By the same argument, applied to X2
instead of X1, we then conclude the distributions of X1 and X2 are equal.
To this end, note that σZ → 0 in probability as σ ↓ 0 (which can be checked).
But then, by Slutsky's theorem, X1 + σZ ⇒ X1.
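The claim that σZ → 0 in probability is immediate from the Gaussian tail:

```latex
% For fixed eps > 0, standardize: sigma Z is N(0, sigma^2) distributed,
% and Phi denotes the standard Normal distribution function.
\[
  P(|\sigma Z| > \varepsilon)
    = 2\big(1 - \Phi(\varepsilon/\sigma)\big)
    \xrightarrow[\sigma \downarrow 0]{} 0.
\]
```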
Before giving the proof of Theorem 0.4, we prove the following result.
Lemma 0.5. Let Xn be random variables weakly converging to a random variable
X. Then, {Xn } is tight.
Proof. Let ε > 0, and let a be a continuity point of the distribution function of X. Then, by
assumption, we have
$$P(|X_n| > a) \to P(|X| > a).$$
One can arrange a so that the limit is less than ε, say. Hence, for n ≥ N, some N, we
have P(|Xn| > a) ≤ 2ε. By considering individually the variables X1, ..., X_{N−1},
we can find a1, ..., a_{N−1} such that P(|Xi| > ai) < 2ε for i = 1, ..., N − 1. Then,
for A = max{a1, ..., a_{N−1}, a}, we have supₙ P(|Xn| > A) ≤ 2ε. As ε was arbitrary,
{Xn} is tight.
Proof of Theorem 0.4. Let {Xn} be random variables with distributions {µn}.
Let Z be an independent random variable, N (0, 1) distributed. Consider Yn =
Xn + Z for n ≥ 1. Then, the characteristic function of Yn is φn(t)e^{−t²/2} ∈ L¹(R).
Hence, by Theorem 0.2, Yn has density
$$\frac{1}{2\pi}\int e^{-itx}\varphi_n(t)\,e^{-t^2/2}\,dt,$$
which converges, by dominated convergence (the integrands are dominated by e^{−t²/2}, as |φn| ≤ 1), to
$$f(x) := \frac{1}{2\pi}\int e^{-itx}\psi(t)\,e^{-t^2/2}\,dt.$$
To see that the limit f is also a probability density function, we need only show
it integrates to 1. Write, by monotone convergence and Fubini's theorem, that
$$\int f(x)\,dx = \lim_{\delta\downarrow 0}\int f(x)\,e^{-x^2\delta^2/2}\,dx
  = \lim_{\delta\downarrow 0}\int \psi(t)\,e^{-t^2/2}
    \Big[\frac{1}{2\pi}\int e^{-itx}e^{-x^2\delta^2/2}\,dx\Big]\,dt.$$
One can use Theorem 0.2 to rewrite the expression in square brackets as the
Normal(0, δ²) density evaluated at location t: (2πδ²)^{−1/2} exp{−t²/(2δ²)}.
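Explicitly, the bracketed integral is a Gaussian Fourier transform, evaluated by the same completion-of-the-square argument as in the proof of Theorem 0.2, with δ playing the role of σ:

```latex
% Fourier transform of a centered Gaussian: note exp(-x^2 delta^2 / 2) is the
% characteristic function of N(0, delta^2), so Theorem 0.2 inverts it.
\[
  \frac{1}{2\pi}\int e^{-itx}\,e^{-x^2\delta^2/2}\,dx
    = \frac{1}{\sqrt{2\pi\delta^2}}\; e^{-t^2/(2\delta^2)}.
\]
```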
Now, we use the continuity of ψ at t = 0; note also that ψ(0) = lim φn(0) = 1 and |ψ| ≤ 1.
Then, by the change of variables t = uδ and dominated convergence,
$$\int f(x)\,dx
  = \lim_{\delta\downarrow 0}\frac{1}{\sqrt{2\pi\delta^2}}
      \int \psi(t)\,e^{-t^2/2 - t^2/(2\delta^2)}\,dt
  = \lim_{\delta\downarrow 0}\frac{1}{\sqrt{2\pi}}
      \int \psi(u\delta)\,e^{-u^2\delta^2/2 - u^2/2}\,du
  = \frac{1}{\sqrt{2\pi}}\int e^{-u^2/2}\,du = 1.$$
Therefore, by Scheffé's lemma, we have in particular that the distributions of Yn
converge weakly to the distribution with density f. Hence, by Lemma 0.5, {Yn} is
tight. Now, given ε > 0, find M such that both
$$\sup_n P(|Y_n| > M) < \varepsilon
  \quad\text{and}\quad P(|Z| > M) < \varepsilon.$$
Then,
$$\sup_n P(|X_n| > 2M) = \sup_n P(|Y_n - Z| > 2M)
  \le \sup_n P(|Y_n| > M) + P(|Z| > M) \le 2\varepsilon.$$
Hence, {Xn} is also tight.
The rest of the theorem is now standard and follows by subsequence arguments
as done in class.
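As a sketch of the continuity theorem in action (an illustration, not from the note): for X_i i.i.d. uniform on {−1, +1}, the characteristic function of n^{−1/2}(X_1 + ⋯ + X_n) is cos(t/√n)^n, which converges pointwise to e^{−t²/2}, the N(0, 1) characteristic function, so Theorem 0.4 gives the central limit theorem for this sequence. The pointwise convergence can be checked directly:

```python
import math

def phi_n(t, n):
    # Characteristic function of (X_1 + ... + X_n)/sqrt(n) for i.i.d. X_i
    # uniform on {-1, +1}: E[exp(i t X_1 / sqrt(n))]^n = cos(t/sqrt(n))^n.
    return math.cos(t / math.sqrt(n)) ** n

# Pointwise convergence to the N(0,1) characteristic function exp(-t^2/2).
t = 1.0
vals = [phi_n(t, n) for n in (10, 1000, 100_000)]
limit = math.exp(-t**2 / 2)
```

The error at fixed t is of order t⁴/n, so the approach to the limit is visible already for moderate n.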
References
[1] Pathak, P. (1966). Proofs of some theorems on characteristic functions. Sankhyā: The Indian
Journal of Statistics, Series A, 28, 309–314. Available on JSTOR.