
2. PRELIMINARIES
Laplace Transforms, Moment Generating
Functions and Characteristic Functions
2.1 Definitions
2.2 Theorems on Laplace Transforms
2.3 Operations on Laplace Transforms
2.4 Limit Theorems
2.5 Dirac Delta Function
2.6 Appendix: Complex Numbers
2.7 Appendix: Notes on Partial Fractions
Laplace Transforms, Moment Generating
Functions and Characteristic Functions
2.1 Definitions
Let ϕ(t) be defined on the real line.
a. Moment Generating Function (MGF)

    MGF = ∫_{−∞}^{∞} e^{θt} ϕ(t) dt,

which exists if

    ∫_{−∞}^{∞} |e^{θt} ϕ(t)| dt < ∞.
b. Characteristic Function (CF)

    CF = ∫_{−∞}^{∞} e^{iyt} ϕ(t) dt,

which exists if

    ∫_{−∞}^{∞} |e^{iyt} ϕ(t)| dt < ∞.

Since e^{iyt} = cos yt + i sin yt, we have |e^{iyt}| = 1 (i² = −1), so the CF exists if

    ∫_{−∞}^{∞} |ϕ(t)| dt < ∞.
c. Laplace Transform
Let ϕ(t) be defined on [0, ∞) and assume

    ∫_0^∞ e^{−x₀t} ϕ(t) dt < ∞

for some value of x₀ ≥ 0. The characteristic function of e^{−x₀t} ϕ(t) is

    CF = ∫_0^∞ e^{−iyt} e^{−x₀t} ϕ(t) dt   (the sign of iyt could be + or −)

       = ∫_0^∞ e^{−(x₀+iy)t} ϕ(t) dt.

Setting s = x₀ + iy,

    ϕ*(s) = ∫_0^∞ e^{−st} ϕ(t) dt   for R(s) ≥ x₀.

ϕ*(s) is the Laplace Transform of ϕ(t).
Since ϕ(t) is defined on [0, ∞), for ε ≥ 0

    ϕ*(s) = lim_{ε→0} ∫_ε^∞ e^{−st} ϕ(t) dt = ∫_{0+}^∞ e^{−st} ϕ(t) dt.

Often a script L is used to denote a Laplace transform; i.e.

    L{ϕ(t)} = ϕ*(s).

Suppose f(t) is a pdf on [0, ∞); then

    f*(s) = ∫_0^∞ e^{−st} f(t) dt,   R(s) ≥ 0.

The Laplace transform is appropriate for non-negative random variables.
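The definition can be checked symbolically. The sketch below (assuming SymPy is available; the exponential pdf and the symbol names t, s, lam are chosen purely for illustration) evaluates the defining integral for an exponential pdf and confirms f*(0) = 1.

    import sympy as sp

    t, s, lam = sp.symbols('t s lambda', positive=True)
    f = lam * sp.exp(-lam * t)                          # exponential pdf, for illustration

    # Direct evaluation of the defining integral f*(s) = integral_0^oo e^{-st} f(t) dt
    f_star = sp.integrate(sp.exp(-s * t) * f, (t, 0, sp.oo))
    print(sp.simplify(f_star))                          # lambda/(lambda + s)

    # SymPy's built-in transform agrees
    print(sp.laplace_transform(f, t, s, noconds=True))  # lambda/(lambda + s)

    # f*(0) = 1 because f is a pdf
    print(f_star.subs(s, 0))                            # 1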
If the cdf is

    F(t) = ∫_0^t f(x) dx = ∫_0^t dF(x),

then

    f*(s) = ∫_0^∞ e^{−st} dF(t)

is the Laplace-Stieltjes Transform.

Note: E(e^{−sT}) = f*(s), f*(0) = 1, 0 ≤ f*(s) ≤ 1.
2.2 Theorems on Laplace Transforms (LT)
a. Uniqueness Theorem.
Distinct probability distributions have distinct Laplace Transforms
b. Continuity Theorem
For n = 1, 2, . . . , let {Fn(t)} be a sequence of cdf's such that Fn → F.
Define {fn*(s)} as the corresponding sequence of Laplace transforms, L{fn(t)} = fn*(s),
and define

    f*(s) = ∫_0^∞ e^{−st} dF(t).

Then fn*(s) → f*(s); conversely, if fn*(s) → f*(s), then Fn(t) → F(t).
c. Convolution Theorem
If T1, T2 are independent, non-negative r.v. with pdf's f1(t), f2(t), then
the pdf of T = T1 + T2 is

    f(t) = ∫_0^∞ f1(τ) f2(t − τ) dτ

and L{f(t)} = f1*(s) f2*(s).
In general, if {Ti}, i = 1, 2, . . . , n, are independent non-negative r.v.,
then the Laplace transform of the pdf of T = T1 + · · · + Tn is

    L{f(t)} = ∏_{i=1}^n fi*(s).
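A small SymPy sketch of the convolution theorem (SymPy assumed; the two exponential pdfs with rates 1 and 2 are illustrative): the transform of the convolution equals the product of the individual transforms.

    import sympy as sp

    t, tau, s = sp.symbols('t tau s', positive=True)

    lam1, lam2 = sp.Integer(1), sp.Integer(2)            # illustrative rates
    f1 = lam1 * sp.exp(-lam1 * t)                        # pdf of T1
    f2 = lam2 * sp.exp(-lam2 * t)                        # pdf of T2

    # pdf of T = T1 + T2 by convolution (f2(t - tau) vanishes for tau > t)
    f = sp.integrate(f1.subs(t, tau) * f2.subs(t, t - tau), (tau, 0, t))

    lhs = sp.laplace_transform(f, t, s, noconds=True)
    rhs = (sp.laplace_transform(f1, t, s, noconds=True)
           * sp.laplace_transform(f2, t, s, noconds=True))
    print(sp.simplify(lhs - rhs))                        # 0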
d. Moment Generating Property
Suppose all moments exist. Then

    f*(s) = ∫_0^∞ e^{−st} f(t) dt
          = ∫_0^∞ { Σ_{n=0}^∞ (−1)^n (st)^n / n! } f(t) dt
          = Σ_{n=0}^∞ (−1)^n (s^n / n!) ∫_0^∞ t^n f(t) dt
          = Σ_{n=0}^∞ (−1)^n (s^n / n!) mn,   where mn = E(T^n).
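One consequence of this expansion is that mn = (−1)^n d^n f*(s)/ds^n evaluated at s = 0. The sketch below (SymPy assumed; the exponential pdf with rate 3 is illustrative) compares moments obtained this way with direct integration.

    import sympy as sp

    t, s = sp.symbols('t s', positive=True)
    lam = sp.Integer(3)                                      # illustrative rate
    f = lam * sp.exp(-lam * t)

    f_star = sp.laplace_transform(f, t, s, noconds=True)     # lam/(lam + s)

    for n in range(1, 4):
        m_n = (-1)**n * sp.diff(f_star, s, n).subs(s, 0)     # (-1)^n * f*^(n)(0)
        direct = sp.integrate(t**n * f, (t, 0, sp.oo))       # E(T^n) by definition
        print(n, sp.simplify(m_n - direct))                  # 0 for each n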
e. Inversion Theorem
Knowledge of f*(s) enables f(t) to be recovered. The inversion formula is written

    f(t) = (1/2πi) ∫_{c−i∞}^{c+i∞} e^{st} f*(s) ds,

where the integration is in the complex plane and c is an appropriate
constant. (It is beyond the scope of this course to discuss the inversion
formula in detail.)

Notation: L{f(t)} = f*(s), f(t) = L^{−1}{f*(s)},
where L^{−1}{ } is referred to as the "Inverse Laplace Transform".
Example: Exponential Distribution

    f(t) = λe^{−λt} for t ≥ 0,   E(T) = 1/λ,   V(T) = 1/λ².

    f*(s) = ∫_0^∞ e^{−st} λe^{−λt} dt = λ ∫_0^∞ e^{−(s+λ)t} dt
          = λ [ −e^{−(λ+s)t} / (λ + s) ]_0^∞ = λ / (λ + s).

Expanding the transform,

    f*(s) = λ/(λ + s) = 1 / (1 + s/λ) = Σ_{n=0}^∞ (−1)^n (s/λ)^n
          = Σ_{n=0}^∞ (−1)^n (s^n/n!) (n!/λ^n)

    ⇒ mn = E(T^n) = n!/λ^n.

Thus

    L{λe^{−λt}} = λ/(λ + s),   L^{−1}{λ/(λ + s)} = λe^{−λt},

    L{e^{−λt}} = (λ + s)^{−1},   L^{−1}{(λ + s)^{−1}} = e^{−λt}.
Note:

    (1/2πi) ∫_{c−i∞}^{c+i∞} e^{st} f*(s) ds = f(t)

    (1/2πi) ∫_{c−i∞}^{c+i∞} e^{st} [λ/(λ + s)] ds = λe^{−λt}

    (1/2πi) ∫_{c−i∞}^{c+i∞} e^{st} (λ + s)^{−1} ds = e^{−λt},  i.e.  L^{−1}{(λ + s)^{−1}} = e^{−λt}.

Differentiating with respect to λ,

    L^{−1}{(λ + s)^{−2}} = t e^{−λt};

again,

    L^{−1}{2(λ + s)^{−3}} = t² e^{−λt};

. . . (n − 1) times ⇒

    L^{−1}{(n − 1)! (λ + s)^{−n}} = t^{n−1} e^{−λt},

and

    L{ t^{n−1} e^{−λt} / Γ(n) } = (λ + s)^{−n},   Γ(n) = (n − 1)!,

    L{ λ(λt)^{n−1} e^{−λt} / Γ(n) } = λ^n / (λ + s)^n.

Note: λ(λt)^{n−1} e^{−λt} / Γ(n) is the gamma distribution.
If {Ti}, i = 1, 2, . . . , n, are independent, non-negative, identically
distributed r.v. following an exponential distribution with parameter λ and
T = T1 + . . . + Tn, then

    f*(s) = ∏_{i=1}^n fi*(s) = ( λ/(λ + s) )^n.
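A SymPy check (SymPy assumed; n = 3 is an illustrative choice) that the gamma pdf above transforms to (λ/(λ + s))^n, which is exactly the transform of a sum of n iid exponentials.

    import sympy as sp

    t, s, lam = sp.symbols('t s lambda', positive=True)
    n = 3                                                   # illustrative number of summands

    gamma_pdf = lam * (lam * t)**(n - 1) * sp.exp(-lam * t) / sp.gamma(n)
    lhs = sp.laplace_transform(gamma_pdf, t, s, noconds=True)
    print(sp.simplify(lhs - (lam / (lam + s))**n))          # 0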
Example: Suppose T1, T2 are independent non-negative random variables
following exponential distributions with parameters λ1, λ2 (λ1 ≠ λ2).
What is the distribution of T = T1 + T2?

    f*(s) = f1*(s) f2*(s) = [λ1/(λ1 + s)] · [λ2/(λ2 + s)].

f*(s) can be written as a partial fraction; i.e.

    f*(s) = A1/(λ1 + s) + A2/(λ2 + s) = [A1(λ2 + s) + A2(λ1 + s)] / [(λ1 + s)(λ2 + s)],

where

    A1 = λ1λ2/(λ2 − λ1),   A2 = −A1.
Since L^{−1}{(λ + s)^{−1}} = e^{−λt},

    f(t) = L^{−1}{f*(s)} = A1 L^{−1}{(λ1 + s)^{−1}} + A2 L^{−1}{(λ2 + s)^{−1}}
         = A1 e^{−λ1 t} + A2 e^{−λ2 t}
         = [λ1λ2/(λ2 − λ1)] (e^{−λ1 t} − e^{−λ2 t}).
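The partial-fraction step can be reproduced with SymPy's apart (SymPy assumed; the symbol names are illustrative), and the forward transform of the resulting pdf can be checked against f1*(s) f2*(s).

    import sympy as sp

    t, s = sp.symbols('t s', positive=True)
    l1, l2 = sp.symbols('lambda1 lambda2', positive=True)

    f_star = (l1 / (l1 + s)) * (l2 / (l2 + s))

    # Partial-fraction decomposition in s (SymPy may arrange the signs differently)
    print(sp.apart(f_star, s))

    # Forward check: the claimed pdf transforms back to f1*(s) f2*(s)
    f = l1 * l2 / (l2 - l1) * (sp.exp(-l1 * t) - sp.exp(-l2 * t))
    diff = sp.laplace_transform(f, t, s, noconds=True) - f_star
    print(sp.simplify(diff))                                # 0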
Suppose T1, T2, T3 are non-negative independent r.v. so that T1 is
exponential with parameter λ1 and T2, T3 are exponential each with
parameter λ2. Find the pdf of T = T1 + T2 + T3.

    f*(s) = [λ1/(λ1 + s)] [λ2/(λ2 + s)]² = A1/(λ1 + s) + B1/(λ2 + s) + B2/(λ2 + s)²

    f(t) = A1 e^{−λ1 t} + B1 e^{−λ2 t} + B2 t e^{−λ2 t}
Homework: Show

    A1 = (λ1 − λ2)^{−2} λ1λ2²

    B1 = −(λ1 − λ2)^{−2} λ1λ2²

    B2 = (λ1 − λ2)^{−1} λ1λ2²
In general, if fi*(s) = λi/(λi + s), i = 1, 2, . . . , n, and
f*(s) = ∏_{i=1}^n fi*(s), then f(t) is called the Erlangian distribution.
(Erlang was a Danish telephone engineer who used this distribution to
model telephone calls.)
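A Monte Carlo sketch of the Erlangian distribution (NumPy assumed; n = 3, λ = 2.0, and the sample size are illustrative): the sum of n iid exponential(λ) variables should have mean n/λ and variance n/λ².

    import numpy as np

    rng = np.random.default_rng(0)
    n, lam, reps = 3, 2.0, 200_000                    # illustrative choices

    T = rng.exponential(scale=1.0 / lam, size=(reps, n)).sum(axis=1)

    print(T.mean(), n / lam)                          # both close to 1.5
    print(T.var(), n / lam**2)                        # both close to 0.75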
2.3 Operations on Laplace Transforms
    L{e^{−λt}} = (λ + s)^{−1}
    L{t^{n−1} e^{−λt}} = Γ(n)/(λ + s)^n

Setting λ = 0 in the above,

    L{1} = 1/s
    L{t^{n−1}} = Γ(n)/s^n
Homework: Prove the following relationships.

    1. L{ ∫_0^t ϕ(x) dx } = ϕ*(s)/s

    2. L{ϕ′(t)} = s ϕ*(s) − ϕ(0+)

    3. L{ϕ^{(r)}(t)} = s^r ϕ*(s) − s^{r−1} ϕ(0+) − s^{r−2} ϕ′(0+) − . . . − ϕ^{(r−1)}(0+)

Prove (1) and (2) using integration by parts.
Prove (3) by successive use of (2).
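A quick symbolic sanity check of relationships (1) and (2) for one concrete ϕ(t) (SymPy assumed; ϕ(t) = e^{−2t} cos t is an arbitrary illustrative choice); it does not replace the integration-by-parts proofs.

    import sympy as sp

    t, s, x = sp.symbols('t s x', positive=True)
    phi = sp.exp(-2 * t) * sp.cos(t)                    # an arbitrary test function

    phi_star = sp.laplace_transform(phi, t, s, noconds=True)

    # (1)  L{ integral_0^t phi(x) dx } = phi*(s)/s
    lhs1 = sp.laplace_transform(sp.integrate(phi.subs(t, x), (x, 0, t)), t, s, noconds=True)
    print(sp.simplify(lhs1 - phi_star / s))                       # 0

    # (2)  L{ phi'(t) } = s*phi*(s) - phi(0+)
    lhs2 = sp.laplace_transform(sp.diff(phi, t), t, s, noconds=True)
    print(sp.simplify(lhs2 - (s * phi_star - phi.subs(t, 0))))    # 0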
Suppose f(t) is a pdf for a non-negative random variable and

    F(t) = ∫_0^t f(x) dx,   Q(t) = ∫_t^∞ f(x) dx.

Since L{ ∫_0^t ϕ(x) dx } = ϕ*(s)/s,

    L{F(t)} = F*(s) = f*(s)/s.

Since Q(t) = 1 − F(t),

    L{Q(t)} = Q*(s) = 1/s − F*(s) = (1 − f*(s))/s.
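A short SymPy check of F*(s) = f*(s)/s and Q*(s) = (1 − f*(s))/s for the exponential case (SymPy assumed; the exponential cdf and survivor function are written out directly for illustration).

    import sympy as sp

    t, s, lam = sp.symbols('t s lambda', positive=True)
    f = lam * sp.exp(-lam * t)          # exponential pdf
    F = 1 - sp.exp(-lam * t)            # its cdf
    Q = sp.exp(-lam * t)                # survivor function 1 - F(t)

    f_star = sp.laplace_transform(f, t, s, noconds=True)
    print(sp.simplify(sp.laplace_transform(F, t, s, noconds=True) - f_star / s))        # 0
    print(sp.simplify(sp.laplace_transform(Q, t, s, noconds=True) - (1 - f_star) / s))  # 0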
2.4 Limit Theorems
    lim_{t→0+} ϕ(t) = lim_{s→∞} s ϕ*(s).

Proof: L{ϕ′(t)} = s ϕ*(s) − ϕ(0+), so

    lim_{s→∞} ∫_0^∞ e^{−st} ϕ′(t) dt = lim_{s→∞} [s ϕ*(s) − ϕ(0+)].

But lim_{s→∞} ∫_0^∞ e^{−st} ϕ′(t) dt = 0,

    ∴ lim_{s→∞} [s ϕ*(s) − ϕ(0+)] = 0,  i.e.  lim_{s→∞} s ϕ*(s) = ϕ(0+) = lim_{t→0+} ϕ(t).

    lim_{t→∞} ϕ(t) = lim_{s→0} s ϕ*(s).

Proof:

    lim_{s→0} L{ϕ′(t)} = lim_{s→0} ∫_0^∞ e^{−st} ϕ′(t) dt = ∫_0^∞ ϕ′(t) dt
    = lim_{t→∞} ∫_0^t ϕ′(x) dx = lim_{t→∞} [ϕ(t) − ϕ(0+)].

Since L{ϕ′(t)} = s ϕ*(s) − ϕ(0+),

    lim_{s→0} [s ϕ*(s) − ϕ(0+)] = lim_{t→∞} [ϕ(t) − ϕ(0+)]
    ⇒ lim_{s→0} s ϕ*(s) = lim_{t→∞} ϕ(t).
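Both limit theorems can be illustrated with ϕ(t) = e^{−λt} (SymPy assumed; the choice of ϕ is illustrative): ϕ(0+) = 1 and ϕ(∞) = 0 match lim s ϕ*(s) as s → ∞ and s → 0.

    import sympy as sp

    t, s, lam = sp.symbols('t s lambda', positive=True)
    phi = sp.exp(-lam * t)                                       # illustrative phi(t)
    phi_star = sp.laplace_transform(phi, t, s, noconds=True)     # 1/(lambda + s)

    # Initial value: lim_{t->0+} phi(t) = lim_{s->oo} s*phi*(s)
    print(sp.limit(phi, t, 0, '+'), sp.limit(s * phi_star, s, sp.oo))   # 1 1

    # Final value: lim_{t->oo} phi(t) = lim_{s->0} s*phi*(s)
    print(sp.limit(phi, t, sp.oo), sp.limit(s * phi_star, s, 0))        # 0 0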
2.5 Dirac Delta Function
Sometimes it is useful to use the Dirac delta function. It is a "strange"
function and has the property

    δ(t) = 0 for t ≠ 0,   δ(t) = ∞ for t = 0.

Define

    U(t) = 1 if t > 0,   U(t) = 0 if t ≤ 0.

Let h > 0 and define

    δ(t, h) = [U(t + h) − U(t)]/h = { 0 if t > 0;  1/h if −h < t ≤ 0;  0 if t ≤ −h }.
Define

    δ(t) = lim_{h→0} δ(t, h) = lim_{h→0} [U(t + h) − U(t)]/h = { 0 for t ≠ 0;  ∞ for t = 0 }.
Consider

    ∫_0^∞ ϕ(x) δ(t − x) dx = ∫_0^∞ ϕ(x) lim_{h→0} [U(t − x + h) − U(t − x)]/h dx

    = lim_{h→0} (1/h) { ∫_0^∞ ϕ(x) U(t − x + h) dx − ∫_0^∞ ϕ(x) U(t − x) dx }.

Since U(t − x + h) = 1 if t − x + h > 0 and 0 otherwise,

    ∫_0^∞ ϕ(x) δ(t − x) dx = lim_{h→0} (1/h) { ∫_0^{t+h} ϕ(x) dx − ∫_0^t ϕ(x) dx }

    = lim_{h→0} (1/h) ∫_t^{t+h} ϕ(x) dx = lim_{h→0} ϕ(t + θh) h / h = ϕ(t)

(by the mean value theorem) for some value of θ, 0 < θ < 1.

    ∴ ∫_0^∞ ϕ(x) δ(t − x) dx = ϕ(t).

Similarly ϕ(t) = ∫_0^∞ ϕ(t − x) δ(x) dx.
    ϕ(t) = ∫_0^∞ ϕ(x) δ(t − x) dx = ∫_0^∞ ϕ(t − x) δ(x) dx.

Special Cases
ϕ(x) = 1 for all x:

    1 = ∫_0^∞ δ(t − x) dx = ∫_0^∞ δ(x) dx for all t  ⇒  ∫_0^∞ δ(x) dx = 1.

Suppose ϕ(x) = e^{−sx}. Then

    ∫_0^∞ e^{−sx} δ(x − t) dx = e^{−st}.

If t = 0,

    ∫_0^∞ e^{−sx} δ(x) dx = L{δ(x)} = 1.
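A numerical sketch of the sifting property (NumPy assumed; ϕ(x) = e^{−2x}, t = 1.5, and h are illustrative). With the rectangle approximation δ(t − x, h), the integral reduces to the average of ϕ over [t, t + h], the same quantity the mean value theorem handles above.

    import numpy as np

    def delta_integral(phi, t, h=1e-3, n=10_001):
        # integral_0^oo phi(x) delta(t - x, h) dx = (1/h) * integral_t^{t+h} phi(x) dx,
        # i.e. the average of phi over [t, t + h] (the mean-value-theorem step)
        x = np.linspace(t, t + h, n)
        return phi(x).mean()

    phi = lambda x: np.exp(-2.0 * x)            # illustrative phi
    print(delta_integral(phi, 1.5), phi(1.5))   # ~0.04974 vs ~0.04979; they agree as h -> 0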
Example 1
Consider a non-negative random variable T such that pn = P{T = tn}. Define

    f(t) = Σ_{n=1}^∞ pn δ(t − tn).

Then

    ∫_0^∞ f(t) dt = Σ_{n=1}^∞ pn ∫_0^∞ δ(t − tn) dt = Σ_{n=1}^∞ pn = 1,

    F(tn) = P{T ≤ tn} = ∫_0^{tn} f(t) dt = Σ_{j=1}^∞ pj U(t_{n+1} − tj) = Σ_{j≤n} pj.

Note: If tj < t_{n+1} ⇒ tj ≤ tn.

    f*(s) = ∫_0^∞ e^{−st} f(t) dt = Σ_{n=1}^∞ pn ∫_0^∞ e^{−st} δ(t − tn) dt = Σ_{n=1}^∞ pn e^{−s tn}.
Example 2
Consider a random variable T such that p = P{T = 0}, but for T > 0

    P{t1 < T ≤ t2} = ∫_{t1}^{t2} q(x) dx

    ⇒ f(t) = pδ(t) + (1 − p)q(t),   f*(s) = p + (1 − p)q*(s).

Another way of formulating this problem is to define the random variable T by

    T = 0 with probability p,   T = t with pdf q(t) for t > 0.

This is an example of a random variable having both a discrete and a continuous part.
2.6 Appendix
ELEMENTS OF COMPLEX NUMBERS
[Figure: a point represented in rectangular coordinates (x, y) and in polar coordinates (r, θ).]

Rectangular coordinates / polar coordinates:

    r = modulus,  θ = amplitude,  x = r cos θ,  y = r sin θ.

Complex number representation:

    z = x + iy,   r = √(x² + y²),   θ = tan^{−1}(y/x),   |z| = absolute value = r

(i: to be determined),

    z = r(cos θ + i sin θ) = re^{iθ}.

Suppose zj = aj + ibj.
Addition/Subtraction:  z1 ± z2 ⇒ (a1 ± a2) + i(b1 ± b2).
Multiplication:

    zj = rj e^{iθj},   z1 z2 = r1 r2 e^{i(θ1+θ2)}  ⇒  r = r1 r2,  θ = θ1 + θ2,

    z1 z2 = r1 r2 (cos(θ1 + θ2) + i sin(θ1 + θ2)) = x + iy,
    x = r1 r2 cos(θ1 + θ2) = r1 r2 (cos θ1 cos θ2 − sin θ1 sin θ2) = a1 a2 − b1 b2,
    y = r1 r2 sin(θ1 + θ2) = r1 r2 (sin θ1 cos θ2 + cos θ1 sin θ2) = b1 a2 + a1 b2,

    ⇒ z1 z2 = (a1 + ib1)(a2 + ib2) = a1 a2 + i[a1 b2 + a2 b1] + i²[b1 b2].

If i² = −1, Re(z1 z2) = a1 a2 − b1 b2.

    i,  i² = −1,  i³ = −i,  i⁴ = 1  ⇒  i^{4n+1} = i,  i^{4n+2} = −1,  i^{4n+3} = −i,  i^{4n} = 1.

    z^n = [r(cos θ + i sin θ)]^n = r^n (cos nθ + i sin nθ).
Exponential function

    e^θ = Σ_{n=0}^∞ θ^n / n!

    e^{iθ} = 1 + iθ + i²θ²/2 + i³θ³/3! + . . .
           = (1 − θ²/2 + θ⁴/4! − θ⁶/6! + . . .) + i(θ − θ³/3! + θ⁵/5! − . . .)
           = cos θ + i sin θ,

    e^{−iθ} = cos θ − i sin θ,   cos(−θ) = cos θ,  sin(−θ) = − sin θ,

    cos θ = (e^{iθ} + e^{−iθ})/2,   sin θ = (e^{iθ} − e^{−iθ})/(2i),   cos 0 = 1,  sin 0 = 0.

Since e^{iθ} = cos θ + i sin θ, we have z = re^{iθ}.
If z = e^{x+iy} = e^x e^{iy}, then e^x is the modulus and y the amplitude.

    z1 z2 = e^{x1+iy1} · e^{x2+iy2} = e^{x1+x2+i(y1+y2)}.
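The polar representation and Euler's formula can be exercised with Python's standard-library cmath module (the particular number 3 + 4i is illustrative).

    import cmath

    z = 3 + 4j
    r, theta = abs(z), cmath.phase(z)                  # modulus and amplitude
    print(r, theta)                                    # 5.0, 0.9273...

    print(cmath.rect(r, theta))                        # back to (3+4j), up to rounding
    print(cmath.exp(1j * theta), cmath.cos(theta) + 1j * cmath.sin(theta))   # Euler's formula: equal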
MOMENT GENERATING AND
CHARACTERISTIC FUNCTIONS
MGF:  ψ(t) = E(e^{tY}) = ∫_{−∞}^∞ e^{ty} dF(y)

ψ(t) exists if ∫_{−∞}^∞ |e^{ty}| dF(y) < ∞.
Sometimes the mgf does not exist.

Importance of MGF

Uniqueness Theorem: If two random variables have the same mgf, they have
the same cdf except possibly at a countable number of points having
0 probability.
Continuity Theorem: Let {Xn} and X have mgf's {ψn(t)} and ψ(t) with
cdf's Fn(x) and F(x). Then a necessary and sufficient condition for
lim_{n→∞} Fn(x) = F(x) is that for every t, lim_{n→∞} ψn(t) = ψ(t), where
ψ(t) is continuous at t = 0.
Inversion Formula: Knowledge of ψ(t) enables the pdf or frequency
function to be calculated.
Convolution Theorem: If the Yi are independent with mgf ψi(t), then the
mgf of S = Σ_{i=1}^n Yi is

    ψS(t) = ∏_{i=1}^n ψi(t).

If ψi(t) = ψ(t) for all i ⇒ ψS(t) = ψ(t)^n.
MOMENT GENERATING FUNCTIONS
DO NOT ALWAYS EXIST!
For that reason one ordinarily uses a characteristic function of a
distribution rather than the mgf .
Def. The characteristic function of a random variable Y is

    ϕ(t) = E(e^{itY}) = ∫_{−∞}^∞ e^{ity} dF(y)   for −∞ < t < ∞.

    |ϕ(t)| = |E(e^{itY})| ≤ ∫_{−∞}^∞ |e^{ity}| dF(y) = ∫_{−∞}^∞ dF(y) = 1,   since |e^{ity}| = 1.

⇒ Characteristic functions always exist.
Relation between mgf and cf:

    ϕ(t) = ψ(it),   ϕ(t) = Σ_n ((it)^n / n!) mn,   ϕ^{(r)}(0) = i^r mr.

The characteristic function is a Fourier Transform; i.e. for any function q(y),

    F.T.(q(y)) = ∫_{−∞}^∞ e^{ity} q(y) dy;

if q(y) is a pdf ⇒ this is the c.f.
RELATION TO LAPLACE TRANSFORMS
Let Y be a non-negative r.v. with pdf f(y); i.e. P{Y ≥ 0} = 1. Then it is
common to use Laplace Transforms instead of characteristic functions.
Def.: s = a + ib (a > 0). Then the Laplace Transform of f(y) is

    f*(s) = ∫_0^∞ e^{−sy} f(y) dy.

Since e^{−sy} f(y) = e^{−iby} e^{−ay} f(y), the Laplace Transform is the
equivalent of taking the Fourier Transform of e^{−ay} f(y).

Ex. Exponential:  ϕ(t) = (1 − it/λ)^{−1},

    f*(s) = ∫_0^∞ e^{−sy} λe^{−λy} dy = λ/(λ + s) = (1 + s/λ)^{−1}.
MOMENT GENERATING FUNCTIONS AND
CHARACTERISTIC FUNCTIONS
                           mgf                              cf
    Bernoulli              pe^t + q                         pe^{it} + q
    Binomial               (pe^t + q)^n                     (pe^{it} + q)^n
    Poisson                e^{λ(e^t − 1)}                   e^{λ(e^{it} − 1)}
    Geometric              pe^t/(1 − qe^t)                  pe^{it}/(1 − qe^{it})
    Uniform over (a, b)    (e^{tb} − e^{ta})/(t(b − a))     (e^{itb} − e^{ita})/(it(b − a))
    Normal                 e^{tm + σ²t²/2}                  e^{itm − σ²t²/2}
    Exponential            λ/(λ − t) = (1 − t/λ)^{−1}       (1 − it/λ)^{−1}
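One row of the table can be reproduced symbolically; the sketch below (SymPy assumed) evaluates E(e^{tY}) for a Normal(m, σ²) density and should reduce to the tabulated mgf.

    import sympy as sp

    y, t, m = sp.symbols('y t m', real=True)
    sigma = sp.symbols('sigma', positive=True)

    normal_pdf = sp.exp(-(y - m)**2 / (2 * sigma**2)) / (sigma * sp.sqrt(2 * sp.pi))
    mgf = sp.integrate(sp.exp(t * y) * normal_pdf, (y, -sp.oo, sp.oo))
    print(sp.simplify(mgf))        # should reduce to exp(m*t + sigma**2*t**2/2)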
INVERSION THEOREM
Integer valued R.V.
Let Y = 0, ±1, ±2, . . . with prob. f(j) = P(Y = j) and cf

    ϕ(t) = E(e^{itY}) = Σ_{j=−∞}^∞ e^{itj} f(j).

Inversion Formula:

    f(k) = (1/2π) ∫_{−π}^π e^{−ikt} ϕ(t) dt.
To check the formula,

    f(k) = (1/2π) ∫_{−π}^π e^{−ikt} [ Σ_{j=−∞}^∞ e^{itj} f(j) ] dt
         = (1/2π) Σ_{j=−∞}^∞ f(j) ∫_{−π}^π e^{i(j−k)t} dt.

Now

    ∫_{−π}^π e^{i(j−k)t} dt = ∫_{−π}^π [cos (j − k)t + i sin (j − k)t] dt
    = [ sin (j − k)t / (j − k) − i cos (j − k)t / (j − k) ]_{−π}^π   for j ≠ k,

and sin nπ = 0 for any integer n, cos nπ = cos(−nπ), so

    ∫_{−π}^π e^{i(j−k)t} dt = { 0 for j ≠ k;  2π for j = k }.

Only the j = k term survives, so the sum reduces to (1/2π) f(k)(2π) = f(k).
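A numerical sketch of this inversion formula (NumPy and SciPy assumed; a Poisson cf with λ = 2 and k = 3 are illustrative choices): integrating e^{−ikt} ϕ(t) over (−π, π) and dividing by 2π recovers the probability f(k).

    import numpy as np
    from scipy.integrate import quad
    from math import exp, factorial

    lam, k = 2.0, 3                                               # illustrative rate and index

    phi = lambda t: np.exp(lam * (np.exp(1j * t) - 1.0))          # Poisson cf
    integrand = lambda t: (np.exp(-1j * k * t) * phi(t)).real     # imaginary part integrates to 0

    f_k = quad(integrand, -np.pi, np.pi)[0] / (2.0 * np.pi)
    print(f_k, exp(-lam) * lam**k / factorial(k))                 # both ~0.1804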
INVERSION FORMULA FOR CONTINUOUS
TYPE RANDOM VARIABLES
Let Y have pdf f(y) and cf ϕ(t) which is integrable; i.e.

    ∫_{−∞}^∞ |ϕ(t)| dt < ∞.

Inversion Formula:

    f(y) = (1/2π) ∫_{−∞}^∞ e^{−iyt} ϕ(t) dt.
Ex. Normal Distribution (standard normal)

    ϕ(t) = ∫_{−∞}^∞ e^{ity} (e^{−y²/2} / √(2π)) dy = e^{−t²/2}.

Replace t by −t:

    ∫_{−∞}^∞ e^{−ity} (e^{−y²/2} / √(2π)) dy = e^{−t²/2}.

Interchange the symbols t and y:

    ∫_{−∞}^∞ e^{−ity} (e^{−t²/2} / √(2π)) dt = e^{−y²/2}.

Divide by √(2π):

    (1/2π) ∫_{−∞}^∞ e^{−ity} e^{−t²/2} dt = e^{−y²/2} / √(2π),

which is the inversion formula for N(0, 1).
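The same inversion can be checked numerically (NumPy and SciPy assumed; the point y = 0.7 is illustrative): inverting the N(0, 1) characteristic function e^{−t²/2} recovers the standard normal density.

    import numpy as np
    from scipy.integrate import quad

    phi = lambda t: np.exp(-t**2 / 2.0)               # cf of N(0, 1)
    y = 0.7                                           # illustrative evaluation point

    f_y = quad(lambda t: (np.exp(-1j * y * t) * phi(t)).real, -np.inf, np.inf)[0] / (2 * np.pi)
    print(f_y, np.exp(-y**2 / 2) / np.sqrt(2 * np.pi))   # both ~0.3123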
UNIQUENESS THEOREM
Let X be an arbitrary r.v. and let Y be N (0, 1) where X and Y are
independent. Consider Z = X + cY (c is a constant).
    ϕZ(t) = ϕX(t) e^{−c²t²/2}.

Note that ϕZ(t) is integrable as |ϕX(t)| ≤ 1, so

    f(z) = (1/2π) ∫_{−∞}^∞ e^{−itz} ϕZ(t) dt.
    P(a < Z ≤ b) = FZ(b) − FZ(a) = ∫_a^b f(z) dz
    = (1/2π) ∫_a^b ∫_{−∞}^∞ e^{−it(x+cy)} ϕX(t) e^{−c²t²/2} dt dz
    = (1/2π) ∫_{−∞}^∞ [ ∫_a^b e^{−itx} dx ] ϕX(t) e^{−c²t²/2} e^{−itcy} dt.

Let c → 0 ⇒ z → x and dz → dx:

    FX(b) − FX(a) = (1/2π) ∫_{−∞}^∞ [ (e^{−ibt} − e^{−iat}) / (−it) ] ϕX(t) dt.

⇒ The distribution function is determined by its c.f.
⇒ If two r.v.'s have the same characteristic function, they have the same
distribution function.
Examples
Double Exponential:  f(x) = (1/2) e^{−|x|},  −∞ < x < ∞.

    ϕ(t) = (1/2) ∫_{−∞}^∞ e^{itx} e^{−|x|} dx = (1/2) ∫_{−∞}^∞ [cos tx + i sin tx] e^{−|x|} dx.

Since sin(−tx) = − sin tx and cos(−tx) = cos tx,

    ϕ(t) = (1/2) ∫_{−∞}^∞ (cos tx) e^{−|x|} dx = ∫_0^∞ (cos tx) e^{−x} dx
         = [ e^{−x}(t sin tx − cos tx) / (1 + t²) ]_0^∞ = 1 / (1 + t²)

(using sin 0 = 0, cos 0 = 1).
The Double Exponential is sometimes called Laplace's Distribution.
Ex. What is the c.f. of the Cauchy distribution, which has
pdf f(y) = 1/[π(1 + y²)],  −∞ < y < ∞?
Note that by the inversion formula for the double exponential

    (1/2) e^{−|y|} = (1/2π) ∫_{−∞}^∞ e^{−ity} (1/(1 + t²)) dt.

Hence for the Cauchy distribution

    ϕ(t) = ∫_{−∞}^∞ e^{ity} (1/[π(1 + y²)]) dy = e^{−|t|}.
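A SymPy sketch of both calculations (SymPy assumed; t is taken positive, which is enough since both cf's are even): the cosine integral gives the double-exponential cf 1/(1 + t²), and the corresponding Cauchy integral gives e^{−t}.

    import sympy as sp

    x, y, t = sp.symbols('x y t', positive=True)

    # cf of the double exponential: symmetry leaves only the cosine part
    print(sp.integrate(sp.cos(t * x) * sp.exp(-x), (x, 0, sp.oo)))                 # 1/(t**2 + 1)

    # cf of the Cauchy distribution (equals e^{-|t|}; here t > 0)
    print(sp.integrate(sp.cos(t * y) / (sp.pi * (1 + y**2)), (y, -sp.oo, sp.oo)))  # exp(-t)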
PROBLEMS
1. Let Y be any r.v.
(a) Show ϕy (t) = E[cos tY ] + iE[sin tY ]
(b) Show ϕ−y (t) = E[cos tY ] − iE[sin tY ]
(c) Show ϕ−y (t) = ϕy (−t)
2. Let Y have a symmetric distribution around 0, i.e. f (y) = f (−y)
(a) Show E(sin tY ) = 0 and ϕy (t) is real valued.
(b) Show ϕy (−t) = ϕy (t)
3. Let X and Y be ind. ident. dist. r.v.'s. Show ϕX−Y(t) = |ϕX(t)|².
4. Let Yj (j = 1, 2) be independent exponential r.v.'s with
E(Yj) = 1/λj. Consider S = Y1 + Y2.
(a) Find the c.f. of S.
(b) Using (a) and the inversion theorem, find the pdf of S. Hint: If
f(y) is exponential with parameter λ (E(Y) = 1/λ), then ϕ(t) = λ/(λ − it) and

    (1/2π) ∫_{−∞}^∞ e^{−ity} [λ/(λ − it)] dt = λe^{−λy}

by the inversion formula.
2.7 Notes on Partial Fractions
Suppose N*(s) and D*(s) are polynomials in s. Suppose D*(s) is a
polynomial of degree k and N*(s) has degree < k,

    D*(s) = ∏_{i=1}^k (s − si)   (roots are distinct),

and the roots of N*(s) do not coincide with those of D*(s). Then

    N*(s)/D*(s) = Σ_{i=1}^k Ai/(s − si),

where the constants Ai are to be determined.
Since L{e^{−λt}} = (λ + s)^{−1}, i.e. e^{−λt} = L^{−1}{(λ + s)^{−1}},

    ⇒ L^{−1}{N*(s)/D*(s)} = Σ_{i=1}^k Ai e^{si t}.
To find the Ai, write

    N*(s)/D*(s) = Σ_{i=1}^k Ai/(s − si) = [ Σ_{i=1}^k Ai ∏_{j≠i} (s − sj) ] / ∏_{j=1}^k (s − sj)

    ⇒ N*(s) = Σ_{i=1}^k Ai ∏_{j≠i} (s − sj).

Substitute s = sr:

    N*(sr) = Ar ∏_{j≠r} (sr − sj),

    Ar = N*(sr) / ∏_{j≠r} (sr − sj),   r = 1, 2, . . . , k.
Since D*(s) = ∏_{i=1}^k (s − si),

    D′*(s) = dD*(s)/ds = Σ_{i=1}^k ∏_{j≠i} (s − sj),

so

    D′*(sr) = ∏_{j≠r} (sr − sj)   and   Ar = N*(sr)/D′*(sr),   r = 1, 2, . . . , k.
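A small SymPy sketch of the formula Ar = N*(sr)/D′*(sr) (SymPy assumed; N*(s) = 6 and the distinct roots −1, −2, −4 are an illustrative choice).

    import sympy as sp

    s = sp.symbols('s')

    N = sp.Integer(6)                          # illustrative numerator, degree < k
    D = (s + 1) * (s + 2) * (s + 4)            # distinct roots s_i = -1, -2, -4
    Dp = sp.diff(D, s)

    roots = [-1, -2, -4]
    A = {r: sp.simplify(N.subs(s, r) / Dp.subs(s, r)) for r in roots}
    print(A)                                   # coefficients A_r = N*(s_r)/D'*(s_r)

    # They reproduce the partial-fraction decomposition of N*(s)/D*(s)
    recon = sum(A[r] / (s - r) for r in roots)
    print(sp.simplify(recon - N / D))          # 0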
Example:
Let Ti be independent r.v. having pdf qi(t) = λi e^{−λi t} (λ1 ≠ λ2) and
consider T = T1 + T2.

    qi*(s) = λi/(λi + s),  i = 1, 2,

    qT*(s) = q1*(s) q2*(s) = λ1λ2/[(λ1 + s)(λ2 + s)] = A1/(λ1 + s) + A2/(λ2 + s).

Here N*(s) = λ1λ2, D*(s) = (λ1 + s)(λ2 + s), si = −λi, and

    Ar = N*(sr)/D′*(sr),   D′*(s) = (λ1 + s) + (λ2 + s),

    A1 = λ1λ2/(λ2 − λ1),   A2 = λ1λ2/(λ1 − λ2).
    qT*(s) = [λ1λ2/(λ2 − λ1)] [ 1/(s + λ1) − 1/(s + λ2) ].

Taking inverse transforms gives

    q(t) = [λ1λ2/(λ2 − λ1)] (e^{−λ1 t} − e^{−λ2 t}).
Multiple Roots
If D*(s) has multiple roots, the method for finding the partial fraction
decomposition is more complicated. It is easier to use direct methods.
Example: N*(s) = 1, D*(s) = (a + s)²(b + s).
    N*(s)/D*(s) = 1/[(a + s)²(b + s)] = A1/(a + s) + A2/(a + s)² + B/(b + s)

    1/[(a + s)²(b + s)] = [A1(a + s)(b + s) + A2(b + s) + B(a + s)²] / [(a + s)²(b + s)]

    1 = A1(a + s)(b + s) + A2(b + s) + B(a + s)²    (1)

    1 = s²[A1 + B] + s[A1(a + b) + A2 + 2aB] + [abA1 + A2 b + Ba²]

    A1 + B = 0,   A1(a + b) + A2 + 2aB = 0,   A1(ab) + A2 b + Ba² = 1.
The above three equations result in solutions for (A1, A2, B). Another
way to solve for these constants is by substituting s = −a and s = −b in
(1); i.e.

substituting s = −b in (1) ⇒ B = (a − b)^{−2} ⇒ A1 = −B = −(a − b)^{−2};
substituting s = −a in (1) ⇒ A2 = (b − a)^{−1}.

    A1 = −(a − b)^{−2},   A2 = −(a − b)^{−1},   B = (a − b)^{−2},

    N*(s)/D*(s) = 1/[(s + a)²(s + b)] = A1/(s + a) + A2/(s + a)² + B/(s + b),

    L^{−1}{N*(s)/D*(s)} = −(a − b)^{−2} e^{−at} − (a − b)^{−1} t e^{−at} + (a − b)^{−2} e^{−bt}
                        = [−e^{−at}/(a − b)²] [1 + t(a − b)] + e^{−bt}/(a − b)².
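The multiple-root decomposition and the resulting inverse transform can be checked with SymPy (SymPy assumed; a = 2, b = 5 are illustrative values with a ≠ b).

    import sympy as sp

    s, t = sp.symbols('s t')
    a, b = sp.Integer(2), sp.Integer(5)        # illustrative values with a != b

    expr = 1 / ((a + s)**2 * (b + s))
    print(sp.apart(expr, s))                   # the three-term decomposition

    # matches A1/(a+s) + A2/(a+s)**2 + B/(b+s) with the coefficients above
    A1, A2, B = -(a - b)**-2, -(a - b)**-1, (a - b)**-2
    print(sp.simplify(A1/(a + s) + A2/(a + s)**2 + B/(b + s) - expr))   # 0

    # and the inverse transform has the claimed form
    f = A1*sp.exp(-a*t) + A2*t*sp.exp(-a*t) + B*sp.exp(-b*t)
    claimed = -sp.exp(-a*t)/(a - b)**2*(1 + t*(a - b)) + sp.exp(-b*t)/(a - b)**2
    print(sp.simplify(f - claimed))                                     # 0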
In general, if D*(s) = (s − s1)^{d1} (s − s2)^{d2} · · · (s − sk)^{dk}, the
decomposition is

    N*(s)/D*(s) = A1/(s − s1) + A2/(s − s1)² + . . . + A_{d1}/(s − s1)^{d1}
                + B1/(s − s2) + B2/(s − s2)² + . . . + B_{d2}/(s − s2)^{d2}
                + . . .
                + Z1/(s − sk) + Z2/(s − sk)² + . . . + Z_{dk}/(s − sk)^{dk}.