
Probabilistic Turing Machines
• Probabilistic TMs have an “extra” tape: the random tape.
  – “standard” TMs: M(x) depends only on the content of the input tape
  – probabilistic TMs: M(x,r) also depends on r, the content of the random tape, so we talk about probabilities such as Pr_r[M(x,r) = 1]
Does It Really Capture The Notion of Randomized Algorithms?
It doesn't matter whether you toss all your coins in advance or throughout the computation…
BPP
(Bounded-error Probabilistic Polynomial time)
Definition: BPP is the class of all languages L which have a probabilistic polynomial-time TM M s.t. (writing L(x)=1 ⟺ x∈L)
∀x  Pr_r[M(x,r) = L(x)] ≥ 2/3
Such TMs are called ‘Atlantic City’.
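A minimal brute-force sketch of this acceptance criterion (the machine M, the language L and the tape lengths n, m are placeholders to be supplied; the enumeration is exponential and meant only to make the quantifiers concrete):

```python
from itertools import product

def satisfies_bpp_condition(M, L, n, m):
    """Check exhaustively that for every n-bit input x, M answers L(x)
    on at least a 2/3 fraction of the m-bit random tapes r."""
    for x in product((0, 1), repeat=n):
        correct = sum(M(x, r) == L(x) for r in product((0, 1), repeat=m))
        if 3 * correct < 2 * 2 ** m:      # i.e. Pr_r[M(x,r) = L(x)] < 2/3
            return False
    return True
```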
BPP Illustrated
For any input x, at least a 2/3 fraction of the random strings must lead M to the right answer:
[Figure: the set of all random strings, with the subset of random strings for which M is right covering at least 2/3 of it.]
Note: TMs which are right for most x's (e.g. for PRIMES: always saying ‘NO’) are NOT acceptable! The bound must hold for every input x.
Amplification
Claim: If L∈BPP, then there exists a probabilistic polynomial-time TM M' and a polynomial p(n) s.t.
∀x∈{0,1}^n  Pr_{r∈{0,1}^{p(n)}}[M'(x,r) ≠ L(x)] < 1/(3p(n))
We can get better amplification, but this will suffice here...
Proof Idea
• Repeat:
  – pick r uniformly at random
  – simulate M(x,r)
• Output the majority answer (see the sketch after the example runs below)

Example runs:
  r         M(x,r)
  0111001   Yes
  1011100   Yes
  0001001   No
  1100000   Yes
  0010011   No
  0110001   Yes
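A minimal Python sketch of this repeat-and-take-majority idea, assuming placeholder arguments: a machine M(x, r) that takes an m-bit random tape, and a repetition count k (by a standard Chernoff-bound argument, O(log p(n)) repetitions already push the majority's error below 1/(3p(n))).

```python
import random

def amplified(M, x, m, k):
    """Run M on x with k independent m-bit random tapes; return the majority answer."""
    yes_votes = 0
    for _ in range(k):
        r = tuple(random.randint(0, 1) for _ in range(m))  # pick r uniformly at random
        if M(x, r):                                         # simulate M(x, r)
            yes_votes += 1
    return 2 * yes_votes > k                                # output the majority answer
```

Choosing k odd avoids ties; the analysis only needs that a strict majority of the runs is correct with the claimed probability.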
Relations to P and NP
• P ⊆ BPP: a deterministic poly-time TM is a probabilistic one that simply ignores the random input.
• Whether BPP ⊆ NP (or how the two classes relate) is not known.
Does BPP ⊆ NP?
We may have considered saying: “Use the random string as a witness.”
Why is that wrong? Because non-members may be recognized as members: for x∉L the machine may still accept on up to a 1/3 fraction of the random strings, so such accepting r's exist and would serve as false “witnesses.”
“Some Comfort”
Theorem (Sipser, Lautemann): BPP ⊆ Σ₂
Underlying observation: L∈BPP ⟹ there exists a poly-time probabilistic TM M s.t. for any n and x∈{0,1}^n, letting m=p(n) (with p the polynomial from the amplification claim):
x∈L ⟺ ∃s₁,…,s_m∈{0,1}^m ∀r∈{0,1}^m ∨_{1≤i≤m} M(x,r⊕sᵢ)=1
Make sure you understand why the theorem follows.
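To make the quantifier structure of the observation concrete, here is a small brute-force check in Python on a hypothetical toy example (ours, not the lecture's): the “language” is even parity on 4-bit strings, and M(x,r) is wrong on exactly one random tape per input, so its error 1/16 is below 1/(3m) = 1/12 for m = 4. The script verifies x∈L ⟺ ∃s₁,…,s_m ∀r ∨ᵢ M(x,r⊕sᵢ)=1 by exhaustive search.

```python
from itertools import combinations_with_replacement

m = 4
STRINGS = range(2 ** m)                       # {0,1}^m, encoded as integers

def in_L(x):                                  # toy language: even parity
    return bin(x).count("1") % 2 == 0

def M(x, r):                                  # toy machine: wrong only when r == x,
    correct = in_L(x)                         # so its error is 1/16 < 1/(3m) = 1/12
    return (not correct) if r == x else correct

def characterization(x):
    """exists s_1,...,s_m s.t. for all r, some M(x, r xor s_i) accepts
    (the order of the shifts is irrelevant, so multisets of shifts suffice)."""
    return any(all(any(M(x, r ^ s) for s in shifts) for r in STRINGS)
               for shifts in combinations_with_replacement(STRINGS, m))

for x in STRINGS:
    assert characterization(x) == in_L(x)     # the characterization matches membership
print("verified: x in L  <=>  exists s_1..s_m, forall r, OR_i M(x, r xor s_i) = 1")
```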
Normally, for a single random-choice test, the error is polynomially bounded:
– correct prediction: probability 1 − 1/poly
– false prediction: probability 1/poly
We prove that there exists a list of test elements sᵢ s.t. (compared with a single random test):
– the test succeeds for x∈L with probability 1 (rather than 1 − 1/poly)
– the test errs on x∉L with probability ≪ 1 (rather than 1/poly)
For a ‘Good’ element (x∈L), any choice of r is good…
While for a ‘Bad’ element (x∉L), many random choices of r will not be covered.
Our Starting Point
• L∈BPP
• By amplification, there is a poly-time machine M which
  – uses m random coins
  – errs w.p. < 1/(3m)
[Figure: M receives x (n bits) on its input tape and r (m bits) on its random tape, and answers “x∈L?”; the answer is false for less than a 1/(3m) fraction of the r's.]
Proving the Underlying Observation
We will follow the Probabilistic Method:
Pr_r[r has property P] > 0 ⟹ ∃r with property P
First Direction
• Let x∈L.
• We want s₁,…,s_m∈{0,1}^m s.t.
  ∀r∈{0,1}^m ∨_{1≤i≤m} M(x,r⊕sᵢ)=1
• So we'll bound the probability, over random sᵢ's, that this does not hold.
Bounding the Probability that Random sᵢ's Do Not Satisfy This

Pr_{s₁,…,s_m ∈_R {0,1}^m}[ ∃r∈{0,1}^m  ∧_{1≤i≤m} M(x,r⊕sᵢ)=0 ]
  ≤ Σ_{r∈{0,1}^m} Pr_{s₁,…,s_m ∈_R {0,1}^m}[ ∧_{1≤i≤m} M(x,r⊕sᵢ)=0 ]    (union bound)
  = Σ_{r∈{0,1}^m} Π_{1≤i≤m} Pr_{sᵢ ∈_R {0,1}^m}[ M(x,r⊕sᵢ)=0 ]          (the sᵢ's are independent)
  = 2^m · Π_{1≤i≤m} Pr_{s ∈_R {0,1}^m}[ M(x,s)=0 ]                      (for any fixed r: s is random ⟹ r⊕s is random)
  ≤ 2^m · (1/(3m))^m < 1                                                (x∈L, so M errs on less than a 1/(3m) fraction)
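Spelling out the last arithmetic step (a completion of the bound above):

```latex
2^{m}\left(\frac{1}{3m}\right)^{\!m}
  = \left(\frac{2}{3m}\right)^{\!m}
  \le \left(\frac{2}{3}\right)^{\!m}
  < 1 \qquad (m \ge 1)
```

So the probability that random s₁,…,s_m fail to cover every r is strictly below 1, and by the probabilistic method a good tuple s₁,…,s_m exists. This proves the first direction.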
Second Direction
• Let x∉L.
• Let s₁,…,s_m∈{0,1}^m be arbitrary.
• We want r∈{0,1}^m s.t.
  ∧_{1≤i≤m} M(x,r⊕sᵢ)=0
• So we'll bound the probability, over a random r, that this does not hold.
Bounding the Probability that a Random r Does Not Satisfy This

Pr_{r ∈_R {0,1}^m}[ ∨_{1≤i≤m} M(x,r⊕sᵢ)=1 ]
  ≤ Σ_{1≤i≤m} Pr_{r∈{0,1}^m}[ M(x,r⊕sᵢ)=1 ]    (union bound)
  ≤ m · 1/(3m) = 1/3 < 1                       (x∉L, so M accepts on less than a 1/(3m) fraction)
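And the corresponding closing step for this direction:

```latex
\Pr_{r \in_R \{0,1\}^{m}}\!\left[\,\bigvee_{i=1}^{m} M(x, r \oplus s_i) = 1\right]
  \;\le\; m \cdot \frac{1}{3m} \;=\; \frac{1}{3} \;<\; 1
```

So for every choice of s₁,…,s_m there is some r with M(x,r⊕sᵢ)=0 for all i; the ∀r condition indeed fails when x∉L, which proves the second direction.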
Q.E.D!
It follows that:
L∈BPP ⟹ there is a poly-time probabilistic TM M s.t. for any x there is m (polynomial in |x|) s.t.
x∈L ⟺ ∃s₁,…,s_m ∀r ∨_{1≤i≤m} M(x,r⊕sᵢ)=1
Thus L∈Σ₂, and hence BPP ⊆ Σ₂.
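To see explicitly why this characterization places L in Σ₂ (unfolding the definition of Σ₂; the names u, v, φ are ours):

```latex
x \in L \;\Longleftrightarrow\;
  \exists\, u \;\, \forall\, v \;\; \varphi(x, u, v) = 1,
\qquad
  u = (s_1,\dots,s_m) \in (\{0,1\}^{m})^{m},\;\;
  v = r \in \{0,1\}^{m},\;\;
  \varphi(x,u,v) = \bigvee_{i=1}^{m} M(x, r \oplus s_i)
```

Both quantified strings have length polynomial in |x| (since m = p(|x|)), and φ is computable in polynomial time because M is, which is exactly the form required of a Σ₂ language.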
Summary
• We defined the polynomial-time hierarchy
  – Saw NP ⊆ PH ⊆ PSPACE
  – NP = coNP ⟹ PH = NP (“the hierarchy collapses”)
• We presented probabilistic TMs
  – We defined the complexity class BPP
  – We saw how to amplify randomized computations
  – We proved P ⊆ BPP ⊆ Σ₂
• We also presented a new paradigm for proving existence, utilizing the algebraic tools of probability theory: the probabilistic method
  – Pr_r[r has property P] > 0 ⟹ ∃r with property P