
Functions of Random Vectors
Use “equivalent events of equal probability” as always.
Example: Let X1, . . . , Xd be mutually independent, with Xi exponential with parameter λi.
Z = min(X1, X2, . . . , Xd)
FZ(z) = Pr(Z ≤ z) = 1 − Pr(Z > z)
      = 1 − Pr(X1 > z) Pr(X2 > z) · · · Pr(Xd > z)
      = 1 − (1 − FX1(z))(1 − FX2(z)) · · · (1 − FXd(z))
      = 1 − e^{−z Σ_i λ_i}

So Z is exponential with parameter α = Σ_i λ_i.
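A quick simulation can sanity-check this result. Below is a minimal NumPy sketch (the rates λ = (0.5, 1.0, 2.0) and the sample size are illustrative assumptions, not from the notes) comparing the empirical CDF of Z against 1 − e^{−αz}:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative rates for d = 3 independent exponentials (assumed values).
lam = np.array([0.5, 1.0, 2.0])
n = 100_000

# Draw X_i ~ Exp(lambda_i) column by column; Z is the row-wise minimum.
X = rng.exponential(scale=1.0 / lam, size=(n, lam.size))
Z = X.min(axis=1)

# If Z ~ Exp(alpha) with alpha = sum_i lambda_i, then E[Z] = 1/alpha.
alpha = lam.sum()
print("empirical mean of Z:", Z.mean(), " predicted 1/alpha:", 1.0 / alpha)

# Empirical CDF vs. 1 - exp(-alpha * z) at a few points.
for z in (0.1, 0.3, 0.6):
    print(z, (Z <= z).mean(), 1.0 - np.exp(-alpha * z))
```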
Special case: invertible transformations
V = g1(X, Y), W = g2(X, Y) ⇒ X = h1(V, W), Y = h2(V, W)

fVW(v, w) = fXY(h1(v, w), h2(v, w)) / |J(x, y)|, with the Jacobian evaluated at (x, y) = (h1(v, w), h2(v, w)), where

J(x, y) = det | ∂v/∂x  ∂v/∂y |
              | ∂w/∂x  ∂w/∂y |
Example:
X = time to serve web page requests from server 1
Y = time to serve web page requests from server 2
T = X + Y
W = X / (X + Y)

Given fXY(x, y), find fTW(t, w).
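To see the change-of-variables formula in action here, the following SymPy sketch computes J(x, y) for this map and then, assuming for illustration that X and Y are i.i.d. Exp(λ) (the notes leave fXY unspecified), derives fTW:

```python
import sympy as sp

x, y, t, w, lam = sp.symbols('x y t w lambda', positive=True)

# Forward map: t = g1(x, y) = x + y,  w = g2(x, y) = x / (x + y)
g1, g2 = x + y, x / (x + y)

# J(x, y) = det of the partials of (t, w) with respect to (x, y)
J = sp.Matrix([[sp.diff(g1, x), sp.diff(g1, y)],
               [sp.diff(g2, x), sp.diff(g2, y)]]).det()
print(sp.simplify(J))  # -> -1/(x + y), so |J| = 1/(x + y) = 1/t

# Inverse map: x = h1(t, w) = t*w,  y = h2(t, w) = t*(1 - w)
# Illustrative f_XY (an assumption, not given in the notes):
# X, Y i.i.d. Exp(lambda), f_XY(x, y) = lambda^2 * exp(-lambda*(x + y)).
fXY = lam**2 * sp.exp(-lam * (x + y))

# f_TW(t, w) = f_XY(h1, h2) / |J|, and |J| = 1/t:
fTW = sp.simplify(t * fXY.subs({x: t * w, y: t * (1 - w)}))
print(fTW)  # -> lambda**2 * t * exp(-lambda*t), independent of w
```

Since |J| = 1/t, the general answer is fTW(t, w) = t · fXY(tw, t(1 − w)); the exponential choice above just makes the factorization visible: T is Gamma(2, λ) and W is Uniform(0, 1), independent of T.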
Application: Estimation of Random Variables
Problem: Given an observation Y = y, estimate X as x̂ = g(y), using knowledge of the joint distribution fXY(x, y) (or its first- and second-order moments).
Different possible criteria:
1. Pick the most likely x: x̂ = arg max_x fX|Y(x|y)
2. Minimize the expected mean-squared error (MSE)
min E[(X − g(Y))²] ⇒ X̂ = E[X|Y]
3. Minimize the expected MSE with the constraint that g(Y) is linear.
(For Gaussians, the minimum-MSE solution, the conditional expectation, is already a linear function of y, so the solutions to (2) and (3) coincide; see the sketch after this list.)
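As a concrete check of criteria (2) and (3), here is a minimal simulation sketch for a toy jointly Gaussian pair (the model X ~ N(0, 1), Y = X + noise is an assumption for illustration). Here E[X|Y] = aY is linear and achieves the smallest MSE:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000

# Toy jointly Gaussian model (assumed for illustration):
# X ~ N(0, 1), Y = X + N(0, sigma_n^2) noise.
sigma_n = 0.5
X = rng.normal(0.0, 1.0, n)
Y = X + rng.normal(0.0, sigma_n, n)

# For this model E[X|Y] = a*Y with a = var(X) / (var(X) + sigma_n^2).
a = 1.0 / (1.0 + sigma_n**2)

mse_mmse  = np.mean((X - a * Y) ** 2)   # conditional mean = best linear estimate
mse_raw   = np.mean((X - Y) ** 2)       # naive g(y) = y
mse_const = np.mean((X - X.mean()) ** 2)  # ignore Y entirely: g(y) = E[X]
print(mse_mmse, mse_raw, mse_const)     # mse_mmse is the smallest
```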
Extension to vector observations: given Y1, . . . , Yd, estimate X. All criteria could still apply, but let's do the easier linear case.
0
22 
d
d
1
1
X̂ = g(Y) =
ak Yk
M SE = E  X −
ak Yk 
k=1
k=1
To minimize the MSE, take the derivative with respect to each a_j and set it to zero:
E[(X − Σ_{k=1}^{d} a_k Y_k) Y_j] = 0   for j = 1, . . . , d.
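Writing out these orthogonality conditions gives the normal equations Σ_{k=1}^{d} a_k E[Y_k Y_j] = E[X Y_j], i.e. R a = r with R_jk = E[Y_j Y_k] and r_j = E[X Y_j]. A minimal NumPy sketch of solving them (the measurement model for X and the Y_k is an assumed toy example):

```python
import numpy as np

rng = np.random.default_rng(2)
n, d = 100_000, 3

# Toy model (assumed): X is a hidden scalar, each Y_k is a noisy
# linear measurement of it with a different gain.
X = rng.normal(0.0, 1.0, n)
Y = X[:, None] * np.array([1.0, 0.5, -0.8]) + rng.normal(0.0, 0.3, (n, d))

# Normal equations: R a = r, with R = E[Y Y^T] (d x d), r = E[X Y] (d,),
# estimated here by sample averages.
R = (Y.T @ Y) / n
r = (Y.T @ X) / n
a = np.linalg.solve(R, r)

Xhat = Y @ a
print("coefficients a:", a)
print("empirical MSE :", np.mean((X - Xhat) ** 2))
```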