
MATH/STAT 395
Homework 6 KEY
Winter 2013
This homework is due at the beginning of class on Friday, March 1.
1. A nut company markets cans of deluxe mixed nuts containing almonds,
cashews and peanuts. The net weight of each can is one pound, but the
weight contribution of each type of nut is random. Let X denote the
weight (in lbs.) of almonds, Y the weight (in lbs.) of cashews. Suppose
X and Y are absolutely continuous random variables with joint density
function
\[
f_{X,Y}(x, y) = 24xy, \qquad 0 < x < 1,\ 0 < y < 1,\ 0 < x + y < 1.
\]
(a) Is fX,Y (x, y) a valid probability density function?
In order to be a valid p.d.f., the function f(x, y) must satisfy two
conditions: (i) f(x, y) ≥ 0 for all x, y and (ii)
\(\int_{-\infty}^{\infty}\int_{-\infty}^{\infty} f(x, y)\,dy\,dx = 1\).
Clearly f (x, y) is non-negative as it is zero outside of the range
indicated above, and positive inside. In order to check the second
condition we need to integrate the p.d.f. over the region sketched below:
[Figure: the triangular support region, bounded by the x- and y-axes and the line y = 1 − x.]
that is, the region where y < 1 − x for any x between 0 and 1. This
works out to:
\begin{align*}
\int_{-\infty}^{\infty}\int_{-\infty}^{\infty} f(x, y)\,dy\,dx
  &= 24\int_{0}^{1}\int_{0}^{1-x} x\,y\,dy\,dx, \tag{1}\\
  &= 24\int_{0}^{1} x\,\frac{y^{2}}{2}\Big|_{0}^{1-x}\,dx, \tag{2}\\
  &= 12\int_{0}^{1} x(1-x)^{2}\,dx,\\
  &= 1.
\end{align*}
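As an optional sanity check (not part of the original key), the double integral above can be verified symbolically; the short Python sketch below assumes the sympy package is available.

\begin{verbatim}
# Optional check: the joint density 24xy integrates to 1 over the
# triangular support 0 < y < 1 - x, 0 < x < 1. Assumes sympy is installed.
import sympy as sp

x, y = sp.symbols("x y", positive=True)
total = sp.integrate(24 * x * y, (y, 0, 1 - x), (x, 0, 1))
print(total)  # expected output: 1
\end{verbatim}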
(b) Are X and Y independent? Why or why not?
X and Y cannot be independent because their "support", i.e. the
region where f(x, y) is positive, is not a Cartesian product.
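To make this concrete, one can check a single point: at (x, y) = (0.8, 0.8) the joint density is zero (since 0.8 + 0.8 > 1) while both marginals are positive there, so the factorization fails at that point. A minimal sympy sketch (assuming sympy is available; the point 0.8 is an arbitrary illustrative choice):

\begin{verbatim}
# Optional illustration: the factorization f(x, y) = f_X(x) * f_Y(y) fails
# at (0.8, 0.8), where the joint density is 0 but both marginals are positive.
# Assumes sympy is installed; the evaluation point is an arbitrary choice.
import sympy as sp

x, y = sp.symbols("x y", positive=True)
f_joint = 24 * x * y                          # valid only on the triangle x + y < 1
f_X = sp.integrate(f_joint, (y, 0, 1 - x))    # marginal of X: 12*x*(1 - x)**2

joint_at_point = 0                            # f(0.8, 0.8) = 0 because 0.8 + 0.8 > 1
marginal_product = f_X.subs(x, sp.Rational(4, 5)) ** 2  # f_X(0.8)*f_Y(0.8), by symmetry
print(joint_at_point, marginal_product)       # 0 versus 2304/15625 > 0
\end{verbatim}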
(c) Let W denote the weight of the peanuts. That is, W = 1−(X+Y ).
Find the C.D.F. of W and a density function for W .
For any 0 < w < 1 the C.D.F. of the new random variable W is
defined as:
\begin{align*}
F_W(w) &= P(W \le w),\\
&= P\bigl(1 - (X + Y) \le w\bigr),\\
&= P\bigl(X + Y \ge 1 - w\bigr),\\
&= 1 - P\bigl(X + Y \le 1 - w\bigr).
\end{align*}
This can be calculated as the integral of the joint p.d.f. over the
region X + Y ≤ 1 − w, which involves a double integration with x
between 0 and 1 − w and y between 0 and 1 − w − x. The expression is
identical to that on the right hand side of (1) with 1 − w − x in
place of 1 − x in the integral over y and 1 − w in place of 1 in
the integral over x. Equation (2) then becomes
\begin{align*}
P(X + Y \le 1 - w) &= 24\int_{0}^{1-w} x\,\frac{y^{2}}{2}\Big|_{0}^{1-w-x}\,dx,\\
&= 12\int_{0}^{1-w} \bigl(x(1-w)^{2} - 2x^{2}(1-w) + x^{3}\bigr)\,dx,\\
&= (1-w)^{4}.
\end{align*}
So in summary the C.D.F. of W is:
\[
F_W(w) =
\begin{cases}
0 & w \le 0,\\
1 - (1-w)^{4} & 0 < w < 1,\\
1 & w \ge 1.
\end{cases}
\]
The C.D.F. is differentiable everywhere except perhaps at w = 0
and w = 1. Differentiating gives the p.d.f.:
\[
f_W(w) =
\begin{cases}
4(1-w)^{3} & 0 < w < 1,\\
0 & \text{otherwise.}
\end{cases}
\]
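As an optional symbolic check of part (c) (not part of the original key, and assuming sympy is available), the probability P(X + Y ≤ 1 − w) and the density obtained by differentiating F_W can be recomputed directly:

\begin{verbatim}
# Optional check of part (c): recompute P(X + Y <= 1 - w) and the density of W.
# Assumes sympy is installed.
import sympy as sp

x, y, w = sp.symbols("x y w", positive=True)

prob = sp.integrate(24 * x * y, (y, 0, 1 - w - x), (x, 0, 1 - w))
print(sp.factor(prob))                  # (w - 1)**4, i.e. (1 - w)**4
print(sp.factor(sp.diff(1 - prob, w)))  # -4*(w - 1)**3, i.e. 4*(1 - w)**3
\end{verbatim}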
2. Suppose X and Y are absolutely continuous random variables with
joint density function
\[
f(x, y) = \frac{1}{2\pi\sqrt{1-\rho^{2}}}\,
e^{-\frac{1}{2(1-\rho^{2})}\left(x^{2} + y^{2} - 2\rho x y\right)},
\qquad -\infty < x, y < \infty,
\]
where −1 < ρ < 1 is some fixed constant.
(a) When are X and Y independent?
X and Y are independent if and only if
\[
f(x, y) = f_X(x)\, f_Y(y) \quad \text{almost everywhere.}
\]
So we first need to find the marginal distributions fX (x) and fY (y).
By definition:
\begin{align*}
f_X(x) &= \int_{-\infty}^{\infty} f(x, y)\,dy, \qquad -\infty < x < \infty,\\
&= \int_{-\infty}^{\infty} \frac{1}{2\pi\sqrt{1-\rho^{2}}}\,
   e^{-\frac{1}{2(1-\rho^{2})}\left(x^{2} + y^{2} - 2\rho x y\right)}\,dy,\\
&= \frac{1}{2\pi\sqrt{1-\rho^{2}}}\,
   e^{-\frac{1}{2(1-\rho^{2})}x^{2}}
   \int_{-\infty}^{\infty}
   e^{-\frac{1}{2(1-\rho^{2})}\left(y^{2} - 2\rho x y\right)}\,dy. \tag{3}
\end{align*}
Completing the square in the exponent term inside the integral
on the right hand side of (3) gives:
\begin{align*}
f_X(x) &= \frac{1}{2\pi\sqrt{1-\rho^{2}}}\,
  e^{-\frac{x^{2}}{2(1-\rho^{2})}}
  \int_{-\infty}^{\infty}
  e^{-\frac{1}{2(1-\rho^{2})}\left(y^{2} - 2\rho x y + \rho^{2}x^{2} - \rho^{2}x^{2}\right)}\,dy,\\
&= \frac{1}{2\pi\sqrt{1-\rho^{2}}}\,
  e^{-\frac{x^{2}}{2}}
  \int_{-\infty}^{\infty}
  e^{-\frac{1}{2(1-\rho^{2})}\left(y - \rho x\right)^{2}}\,dy.
\end{align*}
Making the change of variable u = (y − ρx)/√(1 − ρ²) yields
\[
f_X(x) = \frac{1}{\sqrt{2\pi}}\, e^{-\frac{x^{2}}{2}}, \qquad -\infty < x < \infty,
\]
since the substitution contributes a Jacobian factor √(1 − ρ²) and the
resulting integral over u is that of an unnormalized standard normal,
∫ e^{−u²/2} du = √(2π); combined with the leading constant
1/(2π√(1 − ρ²)) this leaves exactly 1/√(2π).
By symmetry, the same expression defines the marginal density of Y.
So in order for X and Y to be independent we need ρ = 0, as this is
the only value of ρ for which the joint density f(x, y) factors into
the product of the marginal densities.
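As an optional numerical check of the marginal computation (not part of the original key; it assumes numpy and scipy are available and uses a few arbitrary values of x and ρ), integrating the joint density over y should reproduce the standard normal density at x:

\begin{verbatim}
# Optional check: for several (rho, x) pairs, the integral of f(x, y) over y
# equals the N(0, 1) density at x. Assumes numpy and scipy are installed;
# the values of rho and x below are arbitrary illustrative choices.
import numpy as np
from scipy import integrate, stats

def joint(x, y, rho):
    return np.exp(-(x**2 + y**2 - 2*rho*x*y) / (2*(1 - rho**2))) \
           / (2*np.pi*np.sqrt(1 - rho**2))

for rho in (-0.7, 0.0, 0.5):
    for x in (-1.0, 0.3, 2.0):
        marginal, _ = integrate.quad(lambda y: joint(x, y, rho), -np.inf, np.inf)
        assert abs(marginal - stats.norm.pdf(x)) < 1e-7
print("marginal matches the standard normal p.d.f.")
\end{verbatim}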
(b) What is the marginal density of Y ?
This has already been calculated in part (a): by symmetry,
\[
f_Y(y) = \frac{1}{\sqrt{2\pi}}\, e^{-\frac{y^{2}}{2}}, \qquad -\infty < y < \infty.
\]
(c) What is the conditional density of X given Y ?
A conditional density of X given Y = y is
\begin{align*}
f(x \mid Y = y) &= f(x, y)/f_Y(y), \qquad -\infty < x < \infty,\\
&= \frac{1}{\sqrt{2\pi(1-\rho^{2})}}\,
   e^{-\frac{1}{2(1-\rho^{2})}\left(x^{2} - 2\rho x y + \rho^{2}y^{2}\right)},\\
&= \frac{1}{\sqrt{2\pi(1-\rho^{2})}}\,
   e^{-\frac{1}{2(1-\rho^{2})}\left(x - \rho y\right)^{2}}.
\end{align*}
So the conditional density of X given Y = y is also normal, with mean
ρy and variance 1 − ρ².
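A small numerical check (not part of the original key; it assumes numpy and scipy are available, and the values ρ = 0.6 and y = 1.2 are arbitrary choices) confirms that f(x, y)/f_Y(y) matches the N(ρy, 1 − ρ²) density:

\begin{verbatim}
# Optional check: f(x, y) / f_Y(y) agrees with the N(rho*y, 1 - rho^2) density.
# Assumes numpy and scipy are installed; rho and y are arbitrary choices.
import numpy as np
from scipy import stats

rho, y = 0.6, 1.2
xs = np.linspace(-4, 4, 9)

joint = np.exp(-(xs**2 + y**2 - 2*rho*xs*y) / (2*(1 - rho**2))) \
        / (2*np.pi*np.sqrt(1 - rho**2))
conditional = joint / stats.norm.pdf(y)
target = stats.norm.pdf(xs, loc=rho*y, scale=np.sqrt(1 - rho**2))

assert np.allclose(conditional, target)
print("conditional density matches N(rho*y, 1 - rho^2)")
\end{verbatim}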
3. Let X and Y be independent, each with the uniform distribution on
(0,1). Let U be the smaller of X and Y . Find a density for U .
We will find the density of U using the C.D.F. method. For any u
between 0 and 1 the C.D.F. of U is
\begin{align*}
F_U(u) &= P(U \le u),\\
&= P\bigl(\min(X, Y) \le u\bigr),\\
&= 1 - P\bigl(\min(X, Y) > u\bigr),\\
&= 1 - P(X > u,\ Y > u) \qquad \text{(definition of min)},\\
&= 1 - P(X > u)\,P(Y > u) \qquad \text{(by independence)},\\
&= 1 - (1 - u)^{2} \qquad \text{(C.D.F. of a uniform)}.
\end{align*}
In summary, the C.D.F. of U is:
\[
F_U(u) =
\begin{cases}
0 & u \le 0,\\
1 - (1-u)^{2} & 0 < u < 1,\\
1 & u \ge 1.
\end{cases}
\]
This is readily differentiable except perhaps at u = 0 and u = 1. A
density of U is:
\[
f_U(u) =
\begin{cases}
2(1-u) & 0 < u < 1,\\
0 & \text{otherwise.}
\end{cases}
\]
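As an optional Monte Carlo check (not part of the original key, assuming numpy is available), the empirical C.D.F. of the minimum of two independent Uniform(0, 1) draws should be close to 1 − (1 − u)²:

\begin{verbatim}
# Optional Monte Carlo check of problem 3. Assumes numpy is installed;
# the sample size and evaluation points are arbitrary choices.
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(size=200_000)
y = rng.uniform(size=200_000)
u = np.minimum(x, y)

for point in (0.1, 0.5, 0.9):
    empirical = np.mean(u <= point)
    exact = 1 - (1 - point)**2
    print(f"u = {point}: empirical {empirical:.4f} vs exact {exact:.4f}")
\end{verbatim}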
4. Let X and Y be independent, each with the same density
f(x) = λ²x e^{−λx}, x > 0.
(a) Use the convolution method to find the density for W = X + Y .
The convolution method only applies when we wish to find the density
of X + Y where X and Y are independent random variables. Since this
is the case here, we have, for any w > 0:
\begin{align*}
f_W(w) &= \int_{-\infty}^{\infty} f_X(x)\, f_Y(w - x)\,dx,\\
&= \int_{0}^{w} \lambda^{4}\, x\,(w - x)\, e^{-\lambda x}\, e^{-\lambda(w - x)}\,dx,\\
&= \lambda^{4} e^{-\lambda w} \int_{0}^{w} x\,(w - x)\,dx,\\
&= \frac{1}{6}\,\lambda^{4}\, w^{3}\, e^{-\lambda w}.
\end{align*}
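As an optional Monte Carlo check (not part of the original key; it assumes numpy and scipy are available, and λ = 1.5 is an arbitrary choice), note that f(x) = λ²x e^{−λx} is the Gamma density with shape 2 and rate λ, so W = X + Y should follow the Gamma distribution with shape 4 and rate λ, whose density is exactly the (1/6)λ⁴w³e^{−λw} found above:

\begin{verbatim}
# Optional Monte Carlo check of part (a): X and Y are Gamma(shape 2, rate lam),
# so W = X + Y should be Gamma(shape 4, rate lam), whose density is
# (1/6) * lam**4 * w**3 * exp(-lam*w). Assumes numpy and scipy are installed;
# lam, the sample size and the evaluation points are arbitrary choices.
import numpy as np
from scipy import stats

lam = 1.5
rng = np.random.default_rng(0)
w = rng.gamma(shape=2, scale=1/lam, size=200_000) \
    + rng.gamma(shape=2, scale=1/lam, size=200_000)

for point in (0.5, 2.0, 5.0):
    empirical = np.mean(w <= point)
    exact = stats.gamma.cdf(point, a=4, scale=1/lam)
    print(f"w = {point}: empirical {empirical:.4f} vs exact {exact:.4f}")
\end{verbatim}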
(b) Use the C.D.F. method to get the same result.
This part will be graded by the instructor for extra credit (2 pts).