
Rutgers Business School, Introduction to Probability, 26:960:575
Homework 2
Prepared by Daniel Pirutinsky
February 11th, 2016
Solutions to problems from Introduction to Probability and Statistics textbook (by Ross).
Problem 10 (page 132)
Question
The joint probability density function of X and Y is given by
$$f(x, y) = \frac{6}{7}\left(x^2 + \frac{xy}{2}\right), \qquad 0 < x < 1,\; 0 < y < 2$$
(a) Verify that this is indeed a proper joint density function.
(b) Compute the density function of X.
(c) Find P (X > Y ).
Solution
(a)
First we notice that f(x, y) is non-negative for all allowed combinations of x and y. Next we check that it integrates to one,
$$\begin{aligned}
\int_0^2 \int_0^1 f(x, y)\,dx\,dy &= \int_0^2 \int_0^1 \frac{6}{7}\left(x^2 + \frac{xy}{2}\right) dx\,dy \\
&= \int_0^2 \left[\frac{6x^3}{21} + \frac{6yx^2}{28}\right]_0^1 dy \\
&= \int_0^2 \left[\frac{2x^3}{7} + \frac{3yx^2}{14}\right]_0^1 dy \\
&= \int_0^2 \left(\frac{2}{7} + \frac{3y}{14}\right) dy \\
&= \left[\frac{2y}{7} + \frac{3y^2}{28}\right]_0^2 \\
&= 1
\end{aligned}$$
(b)
For 0 < x < 1 we have,
$$\begin{aligned}
f_X(x) &= \int_0^2 f(x, y)\,dy \\
&= \int_0^2 \frac{6}{7}\left(x^2 + \frac{xy}{2}\right) dy \\
&= \left[\frac{6x^2 y}{7} + \frac{3xy^2}{14}\right]_0^2 \\
&= \frac{12x^2}{7} + \frac{6x}{7}
\end{aligned}$$
(c)
For X to be larger than Y, y must be less than 1. So we have,
$$\begin{aligned}
P(X > Y) &= \iint_{x > y} f(x, y)\,dx\,dy \\
&= \int_0^1 \int_y^1 \frac{6}{7}\left(x^2 + \frac{xy}{2}\right) dx\,dy \\
&= \int_0^1 \left[\frac{2x^3}{7} + \frac{3x^2 y}{14}\right]_y^1 dy \\
&= \int_0^1 \left(\frac{2}{7} + \frac{3y}{14} - \frac{7y^3}{14}\right) dy \\
&= \left[\frac{2y}{7} + \frac{3y^2}{28} - \frac{7y^4}{56}\right]_0^1 \\
&= \frac{2}{7} + \frac{3}{28} - \frac{7}{56} \\
&= \frac{16}{56} + \frac{6}{56} - \frac{7}{56} \\
&= \frac{15}{56} \approx 0.268
\end{aligned}$$
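The three results above can be checked numerically. The sketch below (plain Python, midpoint Riemann sums; the grid size N is an arbitrary choice) approximates the total mass, the marginal at a test point, and P(X > Y):

```python
def f(x, y):
    # joint density from Problem 10
    return 6.0 / 7.0 * (x ** 2 + x * y / 2.0)

N = 400                           # grid resolution (arbitrary)
dx, dy = 1.0 / N, 2.0 / N
mids_x = [(i + 0.5) * dx for i in range(N)]
mids_y = [(j + 0.5) * dy for j in range(N)]

# (a) total mass over 0 < x < 1, 0 < y < 2 should be 1
total = sum(f(x, y) * dx * dy for x in mids_x for y in mids_y)

# (b) marginal of X at x = 0.5 should be 12x^2/7 + 6x/7 = 6/7
x0 = 0.5
fX = sum(f(x0, y) * dy for y in mids_y)
fX_formula = 12 * x0 ** 2 / 7 + 6 * x0 / 7

# (c) P(X > Y): sum only over grid cells whose midpoint has y < x
p = sum(f(x, y) * dx * dy for x in mids_x for y in mids_y if y < x)

print(total, fX, fX_formula, p)  # ≈ 1, 6/7 ≈ 0.857, same, 15/56 ≈ 0.268
```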
Problem 16 (page 133)
Question
Suppose that X and Y are independent continuous random variables. Show that
$$\text{(a)}\quad P(X + Y \le a) = \int_{-\infty}^{\infty} F_X(a - y) f_Y(y)\,dy$$
$$\text{(b)}\quad P(X \le Y) = \int_{-\infty}^{\infty} F_X(y) f_Y(y)\,dy$$
where f_Y is the density function of Y, and F_X is the distribution function of X.
Solution
X + Y ≤ a =⇒ X ≤ a − Y
(a)
$$\begin{aligned}
P(X + Y \le a) &= \iint_{x + y \le a} f_X(x) \cdot f_Y(y)\,dx\,dy \\
&= \int_{-\infty}^{\infty} \int_{-\infty}^{a - y} f_X(x) \cdot f_Y(y)\,dx\,dy \\
&= \int_{-\infty}^{\infty} f_Y(y) \int_{-\infty}^{a - y} f_X(x)\,dx\,dy \\
&= \int_{-\infty}^{\infty} f_Y(y) \Big[ F_X(x) \Big]_{-\infty}^{a - y} dy \\
&= \int_{-\infty}^{\infty} f_Y(y) \big( F_X(a - y) - F_X(-\infty) \big)\,dy \\
&= \int_{-\infty}^{\infty} F_X(a - y) f_Y(y)\,dy
\end{aligned}$$
(b)
Similar to above, so I leave out some steps.
$$\begin{aligned}
P(X \le Y) &= \iint_{x \le y} f_X(x) \cdot f_Y(y)\,dx\,dy \\
&= \int_{-\infty}^{\infty} \int_{-\infty}^{y} f_X(x) \cdot f_Y(y)\,dx\,dy \\
&= \int_{-\infty}^{\infty} F_X(y) f_Y(y)\,dy
\end{aligned}$$
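Since these are general identities, code can only illustrate a concrete case. The sketch below assumes independent Uniform(0, 1) variables (my choice, not from the problem) and compares each left side, estimated by Monte Carlo, to the right side, computed by a Riemann sum:

```python
import random

random.seed(0)
n = 200_000
xs = [random.random() for _ in range(n)]
ys = [random.random() for _ in range(n)]

def F_X(t):
    # cdf of Uniform(0, 1)
    return min(max(t, 0.0), 1.0)

a = 1.0                            # an arbitrary test value
N = 2000                           # Riemann grid; f_Y = 1 on (0, 1)

# (a) P(X + Y <= a) vs. the integral of F_X(a - y) f_Y(y) dy
lhs_a = sum(x + y <= a for x, y in zip(xs, ys)) / n
rhs_a = sum(F_X(a - (j + 0.5) / N) / N for j in range(N))

# (b) P(X <= Y) vs. the integral of F_X(y) f_Y(y) dy
lhs_b = sum(x <= y for x, y in zip(xs, ys)) / n
rhs_b = sum(F_X((j + 0.5) / N) / N for j in range(N))

print(lhs_a, rhs_a, lhs_b, rhs_b)  # all four ≈ 0.5 for this example
```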
Problem 25 (page 134)
Question
A total of 4 buses carrying 148 students from the same school arrive at a football stadium. The buses carry, respectively, 40,
33, 25, and 50 students. One of the students is randomly selected. Let X denote the number of students that were on the
bus carrying this randomly selected student. One of the 4 bus drivers is also randomly selected. Let Y denote the number
of students on her bus.
(a) Which of E[X] or E[Y ] do you think is larger? Why?
(b) Compute E[X] and E[Y].
Solution
(a)
Discuss in class.
(b)
$$\begin{aligned}
E[X] &= 40 \cdot \frac{40}{148} + 33 \cdot \frac{33}{148} + 25 \cdot \frac{25}{148} + 50 \cdot \frac{50}{148} \\
&= \frac{5814}{148} = \frac{2907}{74} \approx 39.3
\end{aligned}$$
$$E[Y] = 40 \cdot \frac{1}{4} + 33 \cdot \frac{1}{4} + 25 \cdot \frac{1}{4} + 50 \cdot \frac{1}{4} = \frac{148}{4} = 37$$
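Both expectations can be reproduced exactly with rational arithmetic; a minimal sketch:

```python
from fractions import Fraction

buses = [40, 33, 25, 50]
total = sum(buses)                          # 148 students

# X: size of the bus of a randomly chosen student
# (bus k is selected with probability buses[k]/148)
EX = sum(Fraction(b, total) * b for b in buses)

# Y: size of the bus of a randomly chosen driver (each bus w.p. 1/4)
EY = sum(Fraction(1, len(buses)) * b for b in buses)

print(EX, float(EX), EY)  # 2907/74, ≈ 39.28, 37
```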
Problem 29 (page 135)
Question
Let $X_1, X_2, \ldots, X_n$ be independent random variables having the common density function
$$f(x) = \begin{cases} 1, & 0 < x < 1 \\ 0, & \text{otherwise} \end{cases}$$
Find (a) $E[\max(X_1, X_2, \ldots, X_n)]$ and (b) $E[\min(X_1, X_2, \ldots, X_n)]$.
Solution
(a)
Let $Y = \max(X_1, X_2, \ldots, X_n)$. Then,
$$\begin{aligned}
F_Y(y) = P(Y \le y) &= P(X_1 \le y, X_2 \le y, \ldots, X_n \le y) \\
&= \prod_{i=1}^{n} P(X_i \le y) \\
&= \prod_{i=1}^{n} F_X(y) \\
&= F_X(y)^n = y^n
\end{aligned}$$
So,
$$\begin{aligned}
E[Y] &= \int_0^1 y \cdot f_Y(y)\,dy \\
&= \int_0^1 y \cdot n y^{n-1}\,dy \\
&= \int_0^1 n y^n\,dy \\
&= \left[\frac{n y^{n+1}}{n+1}\right]_0^1 \\
&= \frac{n}{n+1}
\end{aligned}$$
(b)
Let $Z = \min(X_1, X_2, \ldots, X_n)$. Then,
$$\begin{aligned}
F_Z(z) = P(Z \le z) &= 1 - P(Z > z) \\
&= 1 - P(X_1 > z, X_2 > z, \ldots, X_n > z) \\
&= 1 - \prod_{i=1}^{n} P(X_i > z) \\
&= 1 - \prod_{i=1}^{n} \big(1 - P(X_i \le z)\big) \\
&= 1 - \prod_{i=1}^{n} \big(1 - F_X(z)\big) \\
&= 1 - (1 - z)^n
\end{aligned}$$
So,
$$E[Z] = \int_0^1 z \cdot f_Z(z)\,dz = \int_0^1 z \cdot n(1 - z)^{n-1}\,dz$$
Note: We can integrate using integration by parts, $\int u\,dv = uv - \int v\,du$. In our case $u = z$ and $dv = n(1 - z)^{n-1}\,dz$, so $v = -(1 - z)^n$, which yields
$$\begin{aligned}
E[Z] &= \left[-z(1 - z)^n - \frac{(1 - z)^{n+1}}{n + 1}\right]_0^1 \\
&= 0 - \left(-\frac{1}{n + 1}\right) \\
&= \frac{1}{n + 1}
\end{aligned}$$
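Both formulas can be checked by simulation. The sketch below uses n = 5 (an arbitrary choice), for which $E[\max] = 5/6$ and $E[\min] = 1/6$:

```python
import random

random.seed(1)
n, trials = 5, 100_000            # n is an arbitrary choice
max_sum = min_sum = 0.0
for _ in range(trials):
    sample = [random.random() for _ in range(n)]  # n iid Uniform(0, 1)
    max_sum += max(sample)
    min_sum += min(sample)

est_max = max_sum / trials        # should be near n/(n+1) = 5/6
est_min = min_sum / trials        # should be near 1/(n+1) = 1/6
print(est_max, est_min)
```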
Problem 33 (page 135)
Question
Ten balls are randomly chosen from an urn containing 17 white and 23 black balls. Let X denote the number of white balls chosen. Compute E[X]
(a) by defining appropriate indicator variables $X_i$, $i = 1, \ldots, 10$ so that
$$X = \sum_{i=1}^{10} X_i$$
(b) by defining appropriate indicator variables $Y_i$, $i = 1, \ldots, 17$ so that
$$X = \sum_{i=1}^{17} Y_i$$
Solution
(a)
Let $X_i$ be the indicator variable denoting whether the ith choice was white (1) or black (0). Then $X = \sum_{i=1}^{10} X_i$, and $P(X_i = 1) = \frac{17}{40}$. So,
$$\begin{aligned}
E[X] = E\left[\sum_{i=1}^{10} X_i\right] &= \sum_{i=1}^{10} E[X_i] \\
&= 10 \cdot E[X_i] \\
&= 10 \cdot \frac{17}{40} \\
&= 4.25
\end{aligned}$$
Note: The ith ball is equally likely to be any of the 40 original balls, so its probability of being white is the same as for the first ball.
(b)
Let $Y_i$ be the indicator variable denoting whether the ith white ball was chosen (1) or not (0). Then $X = \sum_{i=1}^{17} Y_i$, and to find the probability that the ith white ball was chosen we can write,
$$\begin{aligned}
P(Y_i = 1) &= P(\text{white ball } i \text{ was chosen}) \\
&= P(\text{chosen first}) + P(\text{chosen second, not first}) + \cdots + P(\text{chosen tenth, not earlier}) \\
&= \frac{1}{40} + \frac{39}{40} \cdot \frac{1}{39} + \frac{39}{40} \cdot \frac{38}{39} \cdot \frac{1}{38} + \cdots + \frac{39}{40} \cdot \frac{38}{39} \cdots \frac{31}{32} \cdot \frac{1}{31} \\
&= \frac{10}{40}
\end{aligned}$$
where each of the ten terms telescopes to $\frac{1}{40}$.
So,
$$\begin{aligned}
E[X] = E\left[\sum_{i=1}^{17} Y_i\right] &= \sum_{i=1}^{17} E[Y_i] \\
&= 17 \cdot E[Y_i] \\
&= 17 \cdot \frac{10}{40} \\
&= 4.25
\end{aligned}$$
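Both indicator arguments predict E[X] = 4.25, which a direct simulation of the urn confirms; a minimal sketch:

```python
import random

random.seed(2)
urn = ["W"] * 17 + ["B"] * 23     # 17 white, 23 black
trials = 100_000
white_total = 0
for _ in range(trials):
    # draw 10 balls without replacement, count the white ones
    white_total += random.sample(urn, 10).count("W")

est = white_total / trials        # should be near 10 * 17/40 = 4.25
print(est)
```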
Problem 44 (page 137)
Question
Let $X_i$ denote the percentage of votes cast in a given election that are for candidate i, and suppose that $X_1$ and $X_2$ have the joint density function
$$f_{X_1,X_2}(x, y) = \begin{cases} 3(x + y), & \text{if } x \ge 0,\; y \ge 0, \text{ and } 0 \le x + y \le 1 \\ 0, & \text{otherwise} \end{cases}$$
(a) Find the marginal densities of $X_1$ and $X_2$.
(b) Find $E[X_i]$ and $Var(X_i)$ for $i = 1, 2$.
Solution
(a)
First we will examine $X_1$. The marginal density of $X_1$, $f_{X_1}(x)$, is the integral of the joint density $f_{X_1,X_2}(x, y)$ over all possible values of $X_2$. By the definition above, $f_{X_1,X_2}(x, y)$ is non-zero only if $x \ge 0$, $y \ge 0$, and $0 \le x + y \le 1$. This implies that for a fixed value of x, $0 \le y \le 1 - x$. Therefore we have,
$$\begin{aligned}
f_{X_1}(x) &= \int_0^{1-x} f_{X_1,X_2}(x, y)\,dy \\
&= \int_0^{1-x} 3(x + y)\,dy \\
&= \left[3xy + \frac{3}{2}y^2\right]_0^{1-x} \\
&= 3x(1 - x) + \frac{3}{2}(1 - x)^2 \\
&= \frac{3}{2}(1 - x^2) \quad \text{for } 0 \le x \le 1
\end{aligned}$$
Using similar arguments we can show that $f_{X_2}(y) = \frac{3}{2}(1 - y^2) = f_{X_1}(y)$.
2
(b)
Since the marginal distributions of $X_1$ and $X_2$ are identical, $E[X_1] = E[X_2]$ and $Var(X_1) = Var(X_2)$. Starting with $E[X_1]$,
$$\begin{aligned}
E[X_1] &= \int_0^1 x \cdot f_{X_1}(x)\,dx \\
&= \int_0^1 x \cdot \frac{3}{2}(1 - x^2)\,dx \\
&= \int_0^1 \left(\frac{3x}{2} - \frac{3x^3}{2}\right) dx \\
&= \left[\frac{3x^2}{4} - \frac{3x^4}{8}\right]_0^1 \\
&= \frac{3}{8}
\end{aligned}$$
And for $Var(X_1)$,
$$\begin{aligned}
Var(X_1) &= E[X_1^2] - E[X_1]^2 \\
&= E[X_1^2] - \frac{9}{64} \\
&= \int_0^1 x^2 \cdot f_{X_1}(x)\,dx - \frac{9}{64} \\
&= \int_0^1 \left(\frac{3x^2}{2} - \frac{3x^4}{2}\right) dx - \frac{9}{64} \\
&= \left[\frac{3x^3}{6} - \frac{3x^5}{10}\right]_0^1 - \frac{9}{64} \\
&= \frac{1}{5} - \frac{9}{64} \\
&= \frac{19}{320}
\end{aligned}$$
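The marginal density, mean, and variance can all be checked numerically; a sketch using midpoint Riemann sums (the grid size and test point are arbitrary choices):

```python
def f(x, y):
    # joint density from Problem 44
    return 3.0 * (x + y) if x >= 0 and y >= 0 and x + y <= 1.0 else 0.0

N = 500                           # grid resolution (arbitrary)
h = 1.0 / N
xs = [(i + 0.5) * h for i in range(N)]

# marginal of X1 at a test point, vs. (3/2)(1 - x^2)
x0 = 0.25
fX1 = sum(f(x0, y) * h for y in xs)

# moments of X1 from the numeric marginal, vs. 3/8 and 19/320
marg = [sum(f(x, y) * h for y in xs) for x in xs]
EX = sum(x * m * h for x, m in zip(xs, marg))
EX2 = sum(x * x * m * h for x, m in zip(xs, marg))
var = EX2 - EX ** 2

print(fX1, EX, var)  # ≈ 1.406, ≈ 0.375, ≈ 0.0594
```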
Problem 52 (page 139)
Question
If X1 and X2 have the same pdf, show that
Cov(X1 − X2 , X1 + X2 ) = 0
Note: we are not assuming that $X_1$ and $X_2$ are independent.
Solution
We know that $Cov(X, Y) = E[X \cdot Y] - E[X] \cdot E[Y]$. Therefore we can write,
$$\begin{aligned}
Cov(X_1 - X_2, X_1 + X_2) &= E[(X_1 - X_2)(X_1 + X_2)] - E[X_1 - X_2] \cdot E[X_1 + X_2] \\
&= E[X_1^2] + E[X_1 X_2] - E[X_2 X_1] - E[X_2^2] - \big(E[X_1]^2 + E[X_1]E[X_2] - E[X_2]E[X_1] - E[X_2]^2\big) \\
&= E[X_1^2] - E[X_2^2] - E[X_1]^2 + E[X_2]^2 \\
&= E[X_1^2] - E[X_1]^2 - \big(E[X_2^2] - E[X_2]^2\big) \\
&= Var(X_1) - Var(X_2) \\
&= Var(X_1) - Var(X_1) \\
&= 0
\end{aligned}$$
where the second-to-last line follows because $X_1$ and $X_2$ have the same pdf, and therefore the same variance.
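Since the claim is general, code can only illustrate a concrete case. The sketch below builds a dependent pair with identical N(0, 1) marginals (my construction, not from the problem) and estimates the covariance by simulation:

```python
import math
import random

random.seed(3)
n = 100_000
us, vs = [], []
for _ in range(n):
    z, w = random.gauss(0, 1), random.gauss(0, 1)
    x1 = z
    x2 = (z + w) / math.sqrt(2)   # also N(0, 1), but correlated with x1
    us.append(x1 - x2)
    vs.append(x1 + x2)

# sample covariance of (X1 - X2, X1 + X2)
mu_u = sum(us) / n
mu_v = sum(vs) / n
cov = sum((u - mu_u) * (v - mu_v) for u, v in zip(us, vs)) / n
print(cov)  # should be near 0
```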