EE 315 Probabilistic Methods in Electrical Engineering
Chapter 3 – Operations on One Random Variable – Expectation
3.0 Introduction
• There are important operations we can perform on random variables.
• Most of these operations are based on the concept of expectation.
3.1 Expectation
• Expectation is the name given to the process of averaging (Ex: 3.1-1) when a random variable is involved.
• For a random variable X, the expectation is written E[X] or X̄.
• Also known as:
  – "Mathematical expectation of X"
  – "Expected value of X"
  – "Mean value of X"
  – "Statistical average of X"
• In Example 3.1-1, X is the discrete random variable "fractional dollar value of pocket coins". It has 100 discrete values x_i that occur with probabilities P(x_i).
• If we have a large number of people, every fractional dollar value x_i will have a non-zero number of people having that dollar value. The ratios would approach the corresponding probabilities.

E[X] = ∑_{i=1}^{100} x_i P(x_i)    (3.1-1)
• In terms of the pdf,

E[X] = X̄ = ∫_{−∞}^{∞} x f_X(x) dx    (3.1-2)

• If X is a discrete r.v. with N possible values x_i with probabilities P(x_i),

f_X(x) = ∑_{i=1}^{N} P(x_i) δ(x − x_i)    (3.1-3)

• Then the above integral becomes

E[X] = ∑_{i=1}^{N} x_i P(x_i)    (3.1-4)

• The result in Example 3.1-1 is a special case of this for N = 100. N can be infinite as well. See Example 3.1-2.
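The two forms of (3.1-2) and (3.1-4) can be sketched numerically. The coin values and probabilities below are illustrative stand-ins, not the actual data of Example 3.1-1, and the continuous case uses a simple uniform pdf with a Riemann-sum approximation:

```python
# Sketch: estimating E[X] two ways, per (3.1-4) and (3.1-2).

# Discrete case (3.1-4): E[X] = sum of x_i * P(x_i)
values = [0.00, 0.25, 0.50, 0.75]   # hypothetical fractional-dollar values
probs  = [0.40, 0.30, 0.20, 0.10]   # hypothetical probabilities (sum to 1)
mean_discrete = sum(x * p for x, p in zip(values, probs))

# Continuous case (3.1-2): E[X] = integral of x f_X(x) dx,
# here for a uniform pdf on [0, 1), approximated by a Riemann sum.
N = 100_000
dx = 1.0 / N
mean_uniform = sum((i * dx) * 1.0 * dx for i in range(N))  # f_X(x) = 1 on [0, 1)

print(round(mean_discrete, 4))   # 0.25
print(round(mean_uniform, 3))    # approx 0.5
```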
Expected Value of a Function of a Random Variable
• Let g(X) be a real function of the r.v. X.
• The expected value of this function can be found by

E[g(X)] = ∫_{−∞}^{∞} g(x) f_X(x) dx    (3.1-6)

• If X is a discrete r.v.,

E[g(X)] = ∑_{i=1}^{N} g(x_i) P(x_i)    (3.1-7)

• Example 3.1-3: Average power of random voltage sources
• Example 3.1-4: Measure of "information"
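A minimal sketch of (3.1-7), with g(x) = x² as in an average-power computation. The voltage levels and probabilities are illustrative, not taken from Example 3.1-3:

```python
# Sketch of (3.1-7): E[g(X)] for a discrete r.v.

values = [-2.0, 0.0, 2.0]       # hypothetical voltage levels
probs  = [0.25, 0.50, 0.25]     # hypothetical probabilities

def expect(g, xs, ps):
    """E[g(X)] = sum of g(x_i) * P(x_i), per (3.1-7)."""
    return sum(g(x) * p for x, p in zip(xs, ps))

mean  = expect(lambda x: x, values, probs)       # E[X]
power = expect(lambda x: x * x, values, probs)   # E[X^2], the average power
print(mean, power)   # 0.0 2.0
```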
Conditional Expected Value

E[X | B] = ∫_{−∞}^{∞} x f_X(x | B) dx    (3.1-8)

• For the event B = {X ≤ b}, −∞ < b < ∞,

f_X(x | X ≤ b) = f_X(x) / ∫_{−∞}^{b} f_X(x) dx    for x < b
f_X(x | X ≤ b) = 0                                 for x ≥ b

E[X | X ≤ b] = ∫_{−∞}^{b} x f_X(x) dx / ∫_{−∞}^{b} f_X(x) dx    (3.1-11)
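A numerical sketch of (3.1-11) for a uniform pdf on [0, 1), which is an assumed example (not from the text); with b = 0.5 the exact conditional mean is 0.25:

```python
# Sketch of (3.1-11): E[X | X <= b] via Riemann sums for a uniform pdf on [0, 1).

b, N = 0.5, 100_000
dx = 1.0 / N
xs = [i * dx for i in range(N) if i * dx < b]   # grid points below b

num = sum(x * 1.0 * dx for x in xs)   # integral of x f_X(x) over (-inf, b]
den = sum(1.0 * dx for x in xs)       # integral of f_X(x) over (-inf, b]
cond_mean = num / den
print(round(cond_mean, 3))   # approx 0.25
```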
3.2 Moments
• Moments about the origin: g(X) = Xⁿ, n = 0, 1, 2, …    (3.2-1)

m_n = E[Xⁿ] = ∫_{−∞}^{∞} xⁿ f_X(x) dx

=> m_0 = 1 (area of the pdf)
   m_1 = X̄, the expected value of X
• Central moments – moments about the mean value of X: g(X) = (X − X̄)ⁿ, n = 0, 1, 2, …

µ_n = E[(X − X̄)ⁿ] = ∫_{−∞}^{∞} (x − X̄)ⁿ f_X(x) dx    (3.2-3)

=> µ_0 = 1 (area of the pdf)
   µ_1 = 0, why?
Variance and Skew
• The second central moment µ_2 is known as the variance of X:

σ_X² = µ_2 = E[(X − X̄)²] = ∫_{−∞}^{∞} (x − X̄)² f_X(x) dx    (3.2-5)

• The positive square root of the variance is known as the standard deviation of X. It is a measure of the "spread" in the pdf about the "mean".
• Variance can be found from knowledge of the first and second moments:

σ_X² = E[(X − X̄)²] = E[X² − 2X̄X + X̄²]
     = E[X²] − 2X̄E[X] + X̄²
     = E[X²] − X̄² = m_2 − m_1²    (3.2-6)

• Example 3.2-1
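The shortcut (3.2-6) can be cross-checked against the defining central moment (3.2-5) on a small discrete distribution (the values and probabilities below are illustrative):

```python
# Sketch of (3.2-6): variance as m_2 - m_1^2 for a discrete r.v.

values = [1.0, 2.0, 3.0]
probs  = [0.2, 0.5, 0.3]

m1 = sum(x * p for x, p in zip(values, probs))        # first moment, E[X]
m2 = sum(x * x * p for x, p in zip(values, probs))    # second moment, E[X^2]
var_from_moments = m2 - m1 ** 2

# Cross-check against the defining central moment (3.2-5):
var_direct = sum((x - m1) ** 2 * p for x, p in zip(values, probs))
print(var_from_moments, var_direct)   # both 0.49
```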
Variance and Skew
• The third central moment µ_3 is a measure of the asymmetry of f_X(x) about x = E[X] = m_1. It is called the skew of the pdf.
• If the pdf is symmetric about x = E[X], then it has zero skew.
• Skewness (coefficient of skewness) = normalized third central moment:

µ_3 / σ_X³
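As a sketch, the coefficient of skewness µ_3/σ_X³ computed for an asymmetric toy distribution (illustrative values, not from the text); a long right tail gives positive skewness:

```python
# Sketch: coefficient of skewness mu_3 / sigma^3 for a discrete r.v.

values = [0.0, 1.0, 4.0]
probs  = [0.5, 0.4, 0.1]

mean = sum(x * p for x, p in zip(values, probs))
mu2  = sum((x - mean) ** 2 * p for x, p in zip(values, probs))  # variance
mu3  = sum((x - mean) ** 3 * p for x, p in zip(values, probs))  # third central moment
skewness = mu3 / mu2 ** 1.5
print(round(skewness, 3))   # positive: long tail to the right
```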
Inequalities
• A useful tool in some probability problems
• Chebychev's Inequality

P{|X − X̄| ≥ ε} ≤ σ_X² / ε²    for any ε > 0    (3.2-7)

P{|X − X̄| < ε} ≥ 1 − (σ_X² / ε²)    for any ε > 0    (3.2-10)

– From (3.2-10), for σ_X² → 0, P{|X − X̄| < ε} → 1 for any ε > 0.
– Also, for arbitrarily small ε, P{|X − X̄| → 0} → 1, or P{X = X̄} → 1.
• Markov's Inequality

P{X ≥ a} ≤ E[X] / a    for a > 0    (3.2-11)

Markov's inequality applies to a nonnegative r.v. X.
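Both inequalities can be verified exactly on a small discrete distribution (illustrative values), computing each side in closed form:

```python
# Sketch: checking Chebychev's (3.2-7) and Markov's (3.2-11) inequalities.

values = [0.0, 1.0, 2.0, 6.0]    # nonnegative, so Markov applies
probs  = [0.3, 0.4, 0.2, 0.1]

mean = sum(x * p for x, p in zip(values, probs))
var  = sum((x - mean) ** 2 * p for x, p in zip(values, probs))

eps = 2.0
p_tail = sum(p for x, p in zip(values, probs) if abs(x - mean) >= eps)
assert p_tail <= var / eps ** 2     # Chebychev: P{|X - mean| >= eps} <= var/eps^2

a = 4.0
p_ge_a = sum(p for x, p in zip(values, probs) if x >= a)
assert p_ge_a <= mean / a           # Markov: P{X >= a} <= E[X]/a

print(p_tail, var / eps ** 2, p_ge_a, mean / a)
```

Both bounds hold but are loose here, which is typical: they use only a moment or two of the distribution, not its full shape.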
3.3 Functions that give Moments
• Characteristic Function

Φ_X(ω) = E[e^{jωX}]    where j = √−1, −∞ < ω < ∞    (3.3-1)

Φ_X(ω) = ∫_{−∞}^{∞} f_X(x) e^{jωx} dx    (3.3-2)

• Φ_X(ω) is the Fourier transform of the pdf f_X(x) with the sign of ω reversed; the pdf can be found from the inverse Fourier transform (with the sign of x reversed):

f_X(x) = (1/2π) ∫_{−∞}^{∞} Φ_X(ω) e^{−jωx} dω    (3.3-3)
• Differentiating (3.3-2) n times with respect to ω and setting ω = 0 gives the nth moment of X:

m_n = (−j)ⁿ dⁿΦ_X(ω)/dωⁿ |_{ω=0}    (3.3-4)

• The maximum magnitude of a characteristic function is unity and occurs at ω = 0; i.e.,

|Φ_X(ω)| ≤ Φ_X(0) = 1    (3.3-5)
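A sketch of (3.3-4) with n = 1: the first moment recovered from the characteristic function of a discrete r.v. by a numerical (central-difference) derivative at ω = 0. The distribution is illustrative:

```python
# Sketch of (3.3-4): m_1 = -j * dPhi/dw at w = 0, differentiated numerically.
import cmath

values = [1.0, 2.0, 3.0]
probs  = [0.2, 0.5, 0.3]

def phi(w):
    """Characteristic function: Phi_X(w) = sum of P(x_i) e^{j w x_i}."""
    return sum(p * cmath.exp(1j * w * x) for x, p in zip(values, probs))

h = 1e-6
dphi = (phi(h) - phi(-h)) / (2 * h)   # numerical d(Phi)/dw at w = 0
m1 = ((-1j) * dphi).real              # (3.3-4) with n = 1

exact = sum(x * p for x, p in zip(values, probs))
print(round(m1, 6), exact)   # both approx 2.1
```

Note that phi(0) = 1, consistent with (3.3-5).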
• Moment Generating Function

M_X(ν) = E[e^{νX}]    where −∞ < ν < ∞    (3.3-6)

M_X(ν) = ∫_{−∞}^{∞} f_X(x) e^{νx} dx    (3.3-7)

m_n = dⁿM_X(ν)/dνⁿ |_{ν=0}    (3.3-8)

• Example 3.3-2
• Chernoff's Inequality and Bound – an example application of the moment generating function (Example 3.3-3). For ν ≥ 0,

P{X ≥ a} ≤ ∫_{−∞}^{∞} f_X(x) e^{ν(x−a)} dx = e^{−νa} M_X(ν)    (3)
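A sketch of the Chernoff bound on an illustrative discrete r.v.: the bound holds for every ν ≥ 0, and the tightest version comes from minimizing the right-hand side over ν (here by a simple grid search, an assumed method for the sketch):

```python
# Sketch: Chernoff bound P{X >= a} <= e^{-nu*a} M_X(nu), tightened over nu.
import math

values = [0.0, 1.0, 2.0, 3.0]
probs  = [0.4, 0.3, 0.2, 0.1]

def mgf(nu):
    """M_X(nu) = sum of P(x_i) e^{nu x_i}, per (3.3-7)."""
    return sum(p * math.exp(nu * x) for x, p in zip(values, probs))

a = 2.0
p_exact = sum(p for x, p in zip(values, probs) if x >= a)   # exact tail

# Grid search over nu >= 0 for the best (smallest) bound.
bounds = [math.exp(-nu * a) * mgf(nu) for nu in [0.1 * k for k in range(1, 50)]]
best = min(bounds)
assert p_exact <= best
print(p_exact, round(best, 4))
```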
3.4 Transformations of a R.V.
• X is a random variable (r.v.)
• Y is a new r.v. formed by means of a transformation

Y = T(X)    (3.4-1)

• The distribution function (cdf) F_X(x) or the density function (pdf) f_X(x) is known.
• Determine the cdf F_Y(y) or pdf f_Y(y) of the r.v. Y.
Monotonic Transformation of a Continuous Random Variable
• A transformation T is monotonically increasing if T(x1) < T(x2) for any x1 < x2. It is monotonically decreasing if T(x1) > T(x2) for any x1 < x2.
• Consider an increasing transformation. Assume T is continuous and differentiable at all values of x for which f_X(x) ≠ 0.
• For X = x0, let Y = y0. Then y0 = T(x0), or x0 = T⁻¹(y0), where T⁻¹ represents the inverse of the transformation T.
• Now

F_Y(y0) = P{Y ≤ y0} = P{X ≤ x0} = F_X(x0)    (3.4-3)

or

∫_{−∞}^{y0} f_Y(y) dy = ∫_{−∞}^{x0 = T⁻¹(y0)} f_X(x) dx    (3.4-4)

• Differentiate both sides w.r.t. y0 using Leibniz's rule to get

f_Y(y0) = f_X(T⁻¹(y0)) dT⁻¹(y0)/dy0    (3.4-5)

• In general,

f_Y(y) = f_X(T⁻¹(y)) dT⁻¹(y)/dy    (3.4-6)

• Or, more compactly,

f_Y(y0) = f_X(x0) dx0/dy0    (3.4-7)
• Consider now a decreasing transformation:

F_Y(y0) = P{Y ≤ y0} = P{X ≥ x0} = 1 − F_X(x0)    (3.4-8)

• Differentiation of both sides leads to

f_Y(y0) = −f_X(T⁻¹(y0)) dT⁻¹(y0)/dy0

• Or,

f_Y(y) = f_X(x) (−dx/dy)

• Since dx/dy is negative in this case, f_Y(y) is positive.
• Therefore, for either type of monotonic transformation,

f_Y(y) = f_X(x) |dx/dy|    (3.4-10)
Nonmonotonic Transformation of a Continuous Random Variable
• There may now be more than one interval of values of X that correspond to the event {Y ≤ y0}.

F_Y(y0) = P{Y ≤ y0} = P{x | Y ≤ y0} = ∫_{x | Y ≤ y0} f_X(x) dx    (3.4-11)

f_Y(y0) = d/dy0 ∫_{x | Y ≤ y0} f_X(x) dx    (3.4-12)

f_Y(y) = ∑_n f_X(x_n) / |dT(x)/dx|_{x=x_n}    (3.4-13)

where the sum is taken so as to include all the roots x_n, n = 1, 2, …, which are the solutions of the equation y = T(x). Example 3.4-2
Transformation of a Discrete R.V.
• X a discrete r.v., Y = T(X)

f_X(x) = ∑_n P(x_n) δ(x − x_n)    (3.4-15)

F_X(x) = ∑_n P(x_n) u(x − x_n)    (3.4-16)

• If T is monotonic, there is a one-to-one correspondence between X and Y such that y_n = T(x_n) and P(y_n) = P(x_n). Thus

f_Y(y) = ∑_n P(y_n) δ(y − y_n)    (3.4-17)

F_Y(y) = ∑_n P(y_n) u(y − y_n)    (3.4-18)

where y_n = T(x_n) and P(y_n) = P(x_n).
• If T is not monotonic, more than one value of x_n may correspond to a value y_n. P(y_n) will then be the sum of the probabilities of the various x_n for which y_n = T(x_n). Example 3.4-3
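The discrete rule above can be sketched directly: for a nonmonotonic T, probabilities of the x_n values mapping to the same y_n are summed. The pmf and the choice Y = X² are illustrative:

```python
# Sketch: pmf of Y = T(X) for a discrete r.v., summing over colliding x_n.
from collections import defaultdict

pX = {-1: 0.25, 0: 0.50, 1: 0.25}    # hypothetical pmf of X

def transform(pmf, T):
    """pmf of Y = T(X): P(y_n) = sum of P(x_n) over all x_n with T(x_n) = y_n."""
    pY = defaultdict(float)
    for x, p in pmf.items():
        pY[T(x)] += p
    return dict(pY)

pY = transform(pX, lambda x: x * x)   # x = -1 and x = +1 both map to y = 1
print(pY)
```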