Practice problems

1) What is the probability of at least three of a kind in poker?
Solution: Among 5 cards we can have three of a kind of only one kind. Thus we
have 13 choices for the kind. For four of a kind, the fifth card can be any of the
52 − 4 = 48 remaining cards. For exactly three of a kind we choose 3 out of the four
cards of that kind and are left with \binom{48}{2} choices for the other two cards. Thus

P = \frac{13\left[\binom{4}{3}\binom{48}{2} + 48\right]}{\binom{52}{5}} ≈ 0.023 .
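As a quick numerical sanity check, the count above can be evaluated directly (a minimal sketch; the hand counts are exactly those derived in the solution):

```python
# Probability of at least three of a kind (three of a kind, full house,
# or four of a kind) in a 5-card poker hand.
from math import comb

favorable = 13 * (comb(4, 3) * comb(48, 2) + 48)  # three of a kind + four of a kind
total = comb(52, 5)
p = favorable / total
print(favorable, total, p)  # 59280 2598960 ≈ 0.0228
```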
2) In a box we have two red coins and one black coin. For the black coin the
probability of heads is p, and for a red coin it is q. What is the probability of k heads
after n trials?
Solution: We assume here first that in every game a coin is chosen at random.
Then

P(H) = P(H|B)P(B) + P(H|R)P(R) = p(1/3) + q(2/3) = \frac{p + 2q}{3} =: P .

Then the answer is \binom{n}{k} P^k (1 − P)^{n−k}. The problem could also be solved
differently.
Assume that we choose a coin at the beginning and then stick to that choice. Then
we get

P(k successes) = P(k successes|B)P(B) + P(k successes|R)P(R)
              = \binom{n}{k} \left( \frac{p^k (1 − p)^{n−k}}{3} + \frac{2 q^k (1 − q)^{n−k}}{3} \right) .

These numbers are in general not equal.
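A small script (with arbitrary illustrative values of p, q, n, k) makes the difference between the two models concrete; for n = 1 the two answers coincide, for larger n they do not:

```python
# Compare the two models: mixing the coins per toss vs. choosing one
# coin at the start and keeping it.
from math import comb

def per_toss(n, k, p, q):
    P = (p + 2 * q) / 3            # success probability of one mixed toss
    return comb(n, k) * P**k * (1 - P)**(n - k)

def choose_once(n, k, p, q):
    # condition on which coin was picked at the beginning
    return comb(n, k) * (p**k * (1 - p)**(n - k) + 2 * q**k * (1 - q)**(n - k)) / 3

print(per_toss(5, 2, 0.9, 0.3), choose_once(5, 2, 0.9, 0.3))  # different numbers
print(per_toss(1, 1, 0.9, 0.3), choose_once(1, 1, 0.9, 0.3))  # equal for n = 1
```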
3) Prove that A and B are independent if and only if A^c and B^c are independent.
Solution: Assume that A, B are independent. Then

P(A^c B^c) = P(A^c) − P(A^c B) = (1 − P(A)) − [P(B) − P(AB)]
           = (1 − P(A)) − [P(B) − P(A)P(B)] = (1 − P(A))(1 − P(B)) = P(A^c)P(B^c) .

Now assume that A^c and B^c are independent. Let Ã = A^c and B̃ = B^c. By our
first proof we have

P(AB) = P(Ã^c B̃^c) = P(Ã^c)P(B̃^c) = P(A)P(B) .

This completes the proof.
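The proof rests on the identity P(A^c B^c) − P(A^c)P(B^c) = P(AB) − P(A)P(B), so the two independence conditions hold or fail together. A brute-force check of that identity on random four-atom probability spaces:

```python
# Random two-event probability spaces with atoms AB, AB^c, A^cB, A^cB^c:
# the deviation from independence is the same for (A, B) and (A^c, B^c).
import random

random.seed(0)
max_dev = 0.0
for _ in range(1000):
    raw = [random.random() for _ in range(4)]
    s = sum(raw)
    pAB, pABc, pAcB, pAcBc = (v / s for v in raw)  # normalized atom weights
    pA = pAB + pABc
    pB = pAB + pAcB
    lhs = pAcBc - (1 - pA) * (1 - pB)  # deviation of A^c, B^c from independence
    rhs = pAB - pA * pB                # deviation of A, B from independence
    max_dev = max(max_dev, abs(lhs - rhs))
print(max_dev)  # zero up to rounding
```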
4) Let Ω = {0, 1} × {0, 1}. Assume
P ({(0, 0)}) = 1/4 = P ({(1, 1)})
Find all possible values of P({(0, 1)}) and P({(1, 0)}) such that E = {(0, 0), (0, 1)}
and F = {(0, 1), (1, 1)} are independent.
Solution: Let us fix the four numbers

p01 = P({(0, 1)}),  p10 = P({(1, 0)}),  p00 = P({(0, 0)}) = 1/4 = P({(1, 1)}) = p11 .

We need

p01 + p10 = 1/2

for a probability space. Also

P(E) = 1/4 + p01  and  P(F) = 1/4 + p01 .

Independence means

P(E)P(F) = P(EF) = P({(0, 1)}) = p01 .

This gives

\frac{1}{16} + \frac{p01}{2} + p01^2 = p01 .

Let q = p01. Then we have to find the solutions to

q^2 − q/2 = −\frac{1}{16},

or equivalently

(q − 1/4)^2 = 0 .

Thus p01 = 1/4 = p10 is the only solution which gives independence.
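A quick exact check with rational arithmetic that q = 1/4 works:

```python
# Verify that p01 = p10 = 1/4 makes E and F independent.
from fractions import Fraction

q = Fraction(1, 4)             # p01
p10 = Fraction(1, 2) - q       # forced by p01 + p10 = 1/2
pE = Fraction(1, 4) + q        # P(E) = p00 + p01
pF = Fraction(1, 4) + q        # P(F) = p11 + p01
print(p10, pE * pF)            # 1/4 1/4, and P(E)P(F) = p01 as required
```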
5) Given is Ω = {0, 1}^n and P({(a1, ..., an)}) = p^{# of 1's} (1 − p)^{# of 0's}. Decide
whether the following events E and F are independent.
1. E describes the event that the first trial is successful, F that the second trial is
successful. Yes, they are independent: P(E) = p, P(F) = p and P(EF) = p^2.
2. E describes the event that the first trial is successful, F that one of the trials
2 to n is successful. Yes, they are independent. Indeed, P(E) = p and

P(F) = \sum_{k=1}^{n−1} \binom{n−1}{k} p^k (1 − p)^{n−1−k} .

On the other hand

P(EF) = \sum_{k=1}^{n−1} \binom{n−1}{k} p^{k+1} (1 − p)^{n−1−k} = pP(F) = P(E)P(F)

because we have one more success.
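This independence can also be confirmed by exhaustive enumeration of all outcomes in {0, 1}^n (the values of n and p below are arbitrary illustrations):

```python
# Enumerate {0,1}^n under the product measure and check that
# E = {first trial succeeds} and F = {some trial among 2..n succeeds}
# are independent.
from itertools import product

n, p = 5, 0.3
pE = pF = pEF = 0.0
for w in product([0, 1], repeat=n):
    weight = 1.0
    for b in w:
        weight *= p if b == 1 else 1 - p
    E = w[0] == 1
    F = any(w[1:])
    pE += weight * E
    pF += weight * F
    pEF += weight * (E and F)
print(pE, pF, pEF)  # pEF equals pE * pF
```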
3. E describes the event that the first trial is successful, F that one of the trials
1 to n is successful. No, they are not independent. P(E) = p,

P(F) = \sum_{k=1}^{n} \binom{n}{k} p^k (1 − p)^{n−k}

and

p = P(EF) ≠ pP(F) .

In order to be independent we would need P(F) = 1. Let us calculate

P(F^c) = (1 − p)^n .

For large n this number converges to 0, and that is why we say that E and Fn
are weakly independent (or only weakly dependent).
4. E says that half of the trials are successful, F that two thirds of the trials are
successful.
Here we have P(E) = \binom{n}{n/2} p^{n/2} (1 − p)^{n/2}, so we have to assume n is even. For
the other event we better assume that n is divisible by 3. So from now on we
assume that n = 6m is divisible by 6 and get

P(E) = \binom{6m}{3m} (p(1 − p))^{3m},  P(F) = \binom{6m}{4m} p^{4m} (1 − p)^{2m} .

More interesting is the question what is P(EF). We have

P(EF) = P(F|E)P(E) .

This means half of the trials were successful and we have to get 1/6 of the rest
right as well, i.e. m further successes among the remaining 3m trials:

P(F|E) = \binom{3m}{m} p^m (1 − p)^{2m} .

We get

P(EF) = P(F|E)P(E) = \binom{3m}{m} p^m (1 − p)^{2m} \binom{6m}{3m} (p(1 − p))^{3m}
      = \binom{3m}{m} \binom{6m}{3m} p^{4m} (1 − p)^{5m}

and

P(E)P(F) = \binom{6m}{3m} (p(1 − p))^{3m} \binom{6m}{4m} p^{4m} (1 − p)^{2m}
         = \binom{6m}{3m} \binom{6m}{4m} p^{7m} (1 − p)^{5m} .

Looks difficult again. Let us recall that E and F are independent if P(F|E) =
P(F); this means

\binom{3m}{m} p^m (1 − p)^{2m} = \binom{6m}{4m} p^{4m} (1 − p)^{2m} ,

or equivalently

\binom{3m}{m} \Big/ \binom{6m}{4m} = p^{3m} .

The left-hand side lies strictly between 0 and 1, and hence for every m we can
find p = p(m) for which these events are independent! In general this is
wrong!
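Solving \binom{3m}{m} / \binom{6m}{4m} = p^{3m} for p, one can check numerically that P(F|E) = P(F) for the resulting p = p(m) (small values of m as illustration):

```python
# Solve binom(3m,m)/binom(6m,4m) = p^(3m) for p and check that then
# P(F|E) = P(F) with the formulas above.
from math import comb

for m in (1, 2, 3):
    ratio = comb(3 * m, m) / comb(6 * m, 4 * m)
    assert 0 < ratio < 1                  # so a valid p exists
    p = ratio ** (1 / (3 * m))
    pF_given_E = comb(3 * m, m) * p**m * (1 - p) ** (2 * m)
    pF = comb(6 * m, 4 * m) * p ** (4 * m) * (1 - p) ** (2 * m)
    assert abs(pF_given_E - pF) < 1e-9
print(p)  # the special p for m = 3, roughly 0.55
```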
5. E is the event that the pattern 010 repeats and F that we have k successes.
Here we assume that n = 3m is divisible by 3 and find P(E) = p^m (1 − p)^{2m},
since the repeated pattern contains m 1's and 2m 0's. For F we have
P(F) = \binom{3m}{k} p^k (1 − p)^{3m−k}, and

P(E ∩ F) = p^m (1 − p)^{2m} if k = m, and P(E ∩ F) = 0 otherwise.

Again the best we can hope for is weak independence, namely if P(F) =
\binom{3m}{m} p^m (1 − p)^{2m} is very close to 1. This almost happens for very
particular choices of p and m.
6) n systems are run in parallel and the probability of success is p for each. What is
the conditional probability that the first system fails, given that at least one of the
systems is successful?
Solution: Let Ek be the event that system Sk is successful. Then we have to
calculate

P(E1^c | E1 ∨ ··· ∨ En) = \frac{P(E1^c (E1 ∨ ··· ∨ En))}{P(E1 ∨ ··· ∨ En)} = \frac{P(E1^c) P(E2 ∨ ··· ∨ En)}{P(E1 ∨ ··· ∨ En)} ,

since E1^c (E1 ∨ ··· ∨ En) = E1^c (E2 ∨ ··· ∨ En) and E1 is independent of the other
systems. Calculating the probability of at least one system achieving the job is easy:

P(E1 ∨ ··· ∨ En) = 1 − P(E1^c ··· En^c) = 1 − \prod_{i=1}^{n} (1 − p_i) .

Here we used independence. Thus, with p_i = p for every system, the answer is

\frac{(1 − p) \left[ 1 − (1 − p)^{n−1} \right]}{1 − (1 − p)^n} .

For large n this is very close to 1 − p, the unconditional probability that the first
system fails.
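The conditional probability P(E1^c | at least one success) = (1 − p)[1 − (1 − p)^{n−1}] / [1 − (1 − p)^n] can be verified by enumerating all outcomes in {0, 1}^n (illustrative n and p):

```python
# Brute-force check over all outcomes (1 = system succeeds), each system
# succeeding independently with probability p.
from itertools import product

n, p = 4, 0.3
num = den = 0.0
for w in product([0, 1], repeat=n):
    weight = 1.0
    for b in w:
        weight *= p if b == 1 else 1 - p
    if any(w):                  # at least one system successful
        den += weight
        if w[0] == 0:           # first system fails
            num += weight
formula = (1 - p) * (1 - (1 - p) ** (n - 1)) / (1 - (1 - p) ** n)
print(num / den, formula)       # the two values agree
```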
7) Suppose the probability that you fail the test is 0.2. After grading the test we
know that with probability 0.9 the first problem is solved correctly if the exam is
passed, and the probability of solving problem one among those who fail is 0.2. How
have your odds changed if you solve the first problem correctly?
Solution: Without introducing the new evidence, the odds of failing the exam are

o(F) = \frac{0.2}{0.8} = \frac{1}{4} .

Now we introduce the new evidence E, that problem 1 was solved correctly, and
find the conditional odds

o(F|E) = \frac{P(F)P(E|F)}{P(F^c)P(E|F^c)} .

We know that P(E|F^c) = P(E|passing exam) = 0.9 and P(E|F) = 0.2. Thus we
find the new odds

o(F|E) = o(F) \cdot \frac{0.2}{0.9} = \frac{1 \cdot 2}{4 \cdot 9} = \frac{1}{18}

of failing, which are much more in favor of passing.
8) In a poker hand, what is the conditional probability of 4 aces given that you
already have 3?
Solution: That is easy. Once we have three aces, we may simply count the 48
successful sets (the remaining ace and one of the other 48 cards) out of a total
number of \binom{49}{2} choices for the last two cards, i.e.

\frac{48}{\binom{49}{2}} = \frac{2}{49} ≈ 0.04 .

This can also be calculated differently. How?
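Both the direct count and the complement count (neither of the two remaining cards is the ace) can be checked exactly:

```python
# 48 favorable pairs out of binom(49, 2) is exactly 2/49; the complement
# route counts both remaining cards among the 48 non-aces.
from fractions import Fraction
from math import comb

p = Fraction(48, comb(49, 2))
p_complement = 1 - Fraction(comb(48, 2), comb(49, 2))
print(p, float(p))  # 2/49 ≈ 0.0408
```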
9) A medical test is 90% effective. 3% of the tested population have the disease
and there are 3% false positives. What is the conditional probability that a patient
has the disease given that the test is positive? What is the conditional probability
that a patient has the disease given that the test is negative? Would you buy this
medical test?
Solution: We have

P(D|+) = \frac{P(D+)}{P(+)} = \frac{P(+|D)P(D)}{P(+|D)P(D) + P(+|D^c)P(D^c)}
       = \frac{0.9 × 0.03}{0.9 × 0.03 + 0.03 × 0.97} = \frac{1}{1 + \frac{0.03 × 0.97}{0.9 × 0.03}} = \frac{1}{1 + 97/90} = \frac{90}{187} ≈ 0.48 .

Similarly,

P(D|−) = \frac{P(−|D)P(D)}{P(−|D)P(D) + P(−|D^c)P(D^c)} = \frac{0.1 × 0.03}{0.1 × 0.03 + 0.97 × 0.97} ≈ 0.003 .

I guess I would buy it.
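The same Bayes computation, including the second question P(D|−), in exact arithmetic (sensitivity 0.9, prevalence 0.03, false-positive rate 0.03, as stated in the problem):

```python
# Positive and negative predictive computations for the medical test.
from fractions import Fraction

pD = Fraction(3, 100)     # prevalence P(D)
sens = Fraction(9, 10)    # P(+|D)
fpr = Fraction(3, 100)    # P(+|D^c)

p_pos = sens * pD + fpr * (1 - pD)
pD_given_pos = sens * pD / p_pos                          # P(D|+)
p_neg = (1 - sens) * pD + (1 - fpr) * (1 - pD)
pD_given_neg = (1 - sens) * pD / p_neg                    # P(D|-)
print(pD_given_pos, float(pD_given_pos))  # 90/187 ≈ 0.481
print(float(pD_given_neg))                # ≈ 0.003
```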
10) 3.38 on page 100
Urn A has 5 white and 7 black balls, and urn B has 3 white and 12 black balls.
After flipping a fair coin we choose urn A for heads and urn B for tails. A white
ball is drawn, and we want to calculate
P(T|W) = \frac{P(TW)}{P(W)} = \frac{P(W|T)P(T)}{P(W|T)P(T) + P(W|H)P(H)}
       = \frac{3/15 × 1/2}{3/15 × 1/2 + 5/12 × 1/2} = \frac{1}{1 + (5/12)/(3/15)} = \frac{1}{1 + 25/12} = \frac{12}{37} .
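An exact check with the urn contents (A: 5 white, 7 black; B: 3 white, 12 black):

```python
# P(T|W) by Bayes' rule with a fair coin choosing between the urns.
from fractions import Fraction

pW_T = Fraction(3, 15)   # white from urn B (tails): 3 of 15 balls
pW_H = Fraction(5, 12)   # white from urn A (heads): 5 of 12 balls
half = Fraction(1, 2)
pT_W = pW_T * half / (pW_T * half + pW_H * half)
print(pT_W)  # 12/37
```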
11) 3.43 on page 101
Solution: We have three coins chosen with equal probability, and P(H|C1) = 1/2,
P(H|C2) = 3/4, P(H|C3) = 1. What is P(C3|H)?

P(C3|H) = \frac{P(C3 H)}{P(H)} = \frac{P(H|C3)P(C3)}{P(H|C3)P(C3) + P(H|C2)P(C2) + P(H|C1)P(C1)}
        = \frac{1/3}{1/3 + 3/4 × 1/3 + 1/2 × 1/3} = \frac{1}{1 + 3/4 + 1/2} = \frac{4}{9} .
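The same computation with rationals:

```python
# P(C3|H) for the three coins chosen uniformly at random.
from fractions import Fraction

pH = [Fraction(1, 2), Fraction(3, 4), Fraction(1)]  # P(H|Ci), i = 1, 2, 3
third = Fraction(1, 3)
pC3_H = pH[2] * third / sum(p * third for p in pH)
print(pC3_H)  # 4/9
```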
12) 3.45 on page 101
Solution: We have 10 coins with P(H|Ci) = i/10. Then

P(C5|H) = \frac{P(H|C5)P(C5)}{P(H)} .

The coins are randomly selected. Hence we have

P(H) = \sum_{i=1}^{10} P(H|Ci)P(Ci) = \frac{1}{10} \sum_{i=1}^{10} \frac{i}{10} = \frac{1}{100} \cdot \frac{10 × 11}{2} = \frac{11}{20} .

Here we used the familiar formula

\sum_{k=1}^{n} k = \frac{n(n + 1)}{2} .

The answer is

P(C5|H) = \frac{P(H|C5)P(C5)}{P(H)} = \frac{(5/10)(1/10)}{11/20} = \frac{1}{11} ≈ 0.09 .
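Checking both P(H) and the final answer with rationals:

```python
# P(C5|H) for 10 coins with P(H|Ci) = i/10, chosen uniformly.
from fractions import Fraction

pH = sum(Fraction(i, 10) * Fraction(1, 10) for i in range(1, 11))
pC5_H = Fraction(5, 10) * Fraction(1, 10) / pH
print(pH, pC5_H)  # 11/20 1/11
```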
13) 3.64 on page 103
Solution: A husband and a wife are independently asked to answer a question.
Both are correct with probability p.
Strategy 1: One of them is chosen at random, and then answers. Then the probability
of winning is (1/2)p + (1/2)p = p.
Strategy 2: In case of disagreement one of the answers is chosen at random. A priori
we have four outcomes (1 = correct): (11), (01), (10), (00). Then we have

P(1|agree) = \frac{p^2}{p^2 + (1 − p)^2} .

Given that they disagree, the probability of one of them being right is
p(1 − p)/(2p(1 − p)) = 1/2. Thus, choosing at random, we get

P(1|disagree) = \frac{1}{2} \cdot \frac{1}{2} + \frac{1}{2} \cdot \frac{1}{2} = \frac{1}{2} .

(No change here!) Thus

P(1) = P(1|agree)P(agree) + P(1|disagree)P(disagree)
     = p^2 + \frac{1}{2} (2p(1 − p)) = p(p + (1 − p)) = p .

So let the wife do the job already.
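A numerical check (for a few illustrative values of p) that both strategies win with probability exactly p:

```python
# Both strategies reduce to probability p of a correct answer.
for p in (0.3, 0.6, 0.9):
    strategy1 = 0.5 * p + 0.5 * p                # pick one spouse at random
    p_agree = p**2 + (1 - p) ** 2
    p_disagree = 2 * p * (1 - p)
    win_given_agree = p**2 / p_agree             # P(correct | agreement)
    strategy2 = win_given_agree * p_agree + 0.5 * p_disagree
    assert abs(strategy1 - p) < 1e-12
    assert abs(strategy2 - p) < 1e-12
```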
14) 3.86 on page 105
I will show you what I think you can do.
a) Fix the set B = {1, ..., i}. Then

P(A ⊂ B | B = {1, ..., i}) = \sum_{k=0}^{i} P(A ⊂ B, |A| = k) = 2^{−n} \sum_{k=0}^{i} \binom{i}{k}

because we have \binom{i}{k} many of those subsets.
b) Since A and B are chosen independently, we deduce from symmetry that

P(A ⊂ B) = 2^{−n} \sum_{i=0}^{n} \binom{n}{i} P(A ⊂ B | B = {1, ..., i}) = 4^{−n} \sum_{i=0}^{n} \sum_{k=0}^{i} \binom{i}{k} \binom{n}{i} .

Now the book claims this to be equal to (3/4)^n; they are probably right. Indeed,
\sum_{k=0}^{i} \binom{i}{k} = 2^i and \sum_{i=0}^{n} \binom{n}{i} 2^i = (1 + 2)^n = 3^n.
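The double sum can be checked against (3/4)^n for small n:

```python
# Verify 4^(-n) * sum_i sum_k binom(i,k) binom(n,i) = (3/4)^n, i.e. that
# the double sum equals 3^n.
from math import comb

for n in range(1, 8):
    total = sum(comb(i, k) * comb(n, i) for i in range(n + 1) for k in range(i + 1))
    assert total == 3**n   # hence 4^(-n) * total = (3/4)^n
print(total)  # 2187 = 3^7 for n = 7
```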
15) 3.90 on page 106
Solution: We have three judges who each find a person guilty with probability
0.7. Given that the person is innocent, this drops to 0.2.
If innocence and guilt were indeed determined beforehand, then the judges' actions
would be independent.
The problem here, however, interprets guilty as judged guilty, i.e. at least 2 of the
3 judges have said so.
We let Ei be the event that judge i votes guilty.
a) We have

P(E3|E1 E2) = \frac{P(E1 E2 E3)}{P(E1 E2)} .

Now let G be the outcome guilty; then

P(E1 E2) = P(E1 E2|G)P(G) + P(E1 E2|G^c)P(G^c) = (0.7)^2 P(G) + (0.2)^2 P(G^c) .

What on earth is P(G)? We know that

P(G) = P(E1 E2 E3) + 3P(E1 E2 E3^c) ,

where

P(E1 E2 E3) = P(E1 E2 E3|G)P(G) + P(E1 E2 E3|G^c)P(G^c) = (0.7)^3 P(G) + (0.2)^3 P(G^c) .

Now P(E1 E2 E3^c G^c) = 0, since two guilty votes already mean the verdict is
guilty, and hence

P(E1 E2 E3^c) = P(E1 E2) − P(E1 E2 E3)
             = [(0.7)^2 − (0.7)^3] P(G) + [(0.2)^2 − (0.2)^3] P(G^c) .

Therefore we have

P(G) = [(0.7)^3 + 3(0.7)^2 (0.3)] P(G) + [(0.2)^3 + 3(0.2)^2 (0.8)] P(G^c)
     = 0.784 P(G) + 0.104 (1 − P(G)) .

This means

[1 − 0.784 + 0.104] P(G) = 0.104

or

P(G) = \frac{0.104}{0.32} = 0.325 .

Once we have this number the rest can be computed. The judges are no longer
‘independent’.
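The equation for P(G) is linear, so it can be solved and checked directly, and the number for part a) follows (a sketch under the fixed-point model above, with P(G) = 0.325):

```python
# Solve the fixed point P(G) = 0.784 P(G) + 0.104 (1 - P(G)) and use it
# to compute P(E3|E1E2).
pg = 0.104 / 0.32                    # P(G) = 0.325
p_e1e2 = 0.49 * pg + 0.04 * (1 - pg)       # (0.7)^2 P(G) + (0.2)^2 P(G^c)
p_e1e2e3 = 0.343 * pg + 0.008 * (1 - pg)   # (0.7)^3 P(G) + (0.2)^3 P(G^c)
print(pg, p_e1e2e3 / p_e1e2)         # P(G) and P(E3|E1E2) ≈ 0.63
```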