Circuit Complexity meets the
Theory of Randomness
Eric Allender
Rutgers University
SUNY Buffalo, November 11, 2010
Today’s Goal:
To raise awareness of the tight connection
between two fields:
– Circuit Complexity
– Kolmogorov Complexity (the theory of
randomness)
And to show that this is useful.
More generally, to spread the news about
some exciting developments in the field of
derandomization.
Eric Allender: Circuit Complexity meets the Theory of Randomness
<2>
Derandomization
Where do these random bits come
from?
Regular input
Random bits b_1, b_2, …
Probabilistic algorithm
Derandomization
Random bits are:
Expensive
Low-quality
Hard to find
Possibly non-existent!
Regular input
Random bits b_1, b_2, …
Probabilistic algorithm
Derandomization
seed
Generator g
Regular input
Pseudorandom bits b_1, b_2, …
Probabilistic algorithm
A Jewel of Derandomization
[Impagliazzo, Wigderson, 1997]: If there is a
problem computable in time 2^n that requires
circuits of size 2^εn, then there is a “good”
generator that takes seeds of length O(log n).
Thus, for example, if your favorite NP-complete problem requires big circuits (as we
expect), then any probabilistic algorithm can
be simulated deterministically with modest
slow-down.
(Run the generator on all of the poly-many
seeds, and take the majority vote.)
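The parenthetical simulation can be sketched in a few lines. Both `generator` and `prob_algorithm` below are toy stand-ins invented for illustration, not the Impagliazzo–Wigderson construction:

```python
from itertools import product

def derandomize(prob_algorithm, generator, seed_len, x):
    """Deterministic simulation: run the probabilistic algorithm on the
    pseudorandom bits obtained from every seed, and take the majority
    vote.  With seed length O(log n) there are only poly(n) seeds."""
    votes = sum(prob_algorithm(x, generator(seed))
                for seed in product([0, 1], repeat=seed_len))
    return 2 * votes > 2 ** seed_len      # strict majority says "accept"

# Toy stand-ins: a "generator" that repeats its seed, and an "algorithm"
# that happens to ignore its random bits and checks parity of the input.
g = lambda seed: list(seed) * 4
alg = lambda x, bits: sum(x) % 2 == 0
print(derandomize(alg, g, 3, [1, 0, 1, 0]))   # → True
```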
A Jewel of Derandomization
[Impagliazzo, Wigderson, 1997]: If there is a
problem computable in time 2^n that requires
circuits of size 2^εn, then there is a “good”
generator that takes seeds of length O(log n).
Stated another way (and introducing some
notation), a very plausible hypothesis implies
P = BPP.
A Jewel of Derandomization
[Impagliazzo, Wigderson, 1997]: If there is a
problem computable in time 2^n that requires
circuits of size 2^εn, then there is a “good”
generator that takes seeds of length O(log n).
The proof is deep, and incorporates clever
algorithms, deep combinatorics, and
innovations in coding theory.
…and it provides some great tools that shed
insight into the nature of randomness.
Randomness
Which of these sequences did I obtain by
flipping a coin?
0101010101010101010101010101010
0110100111111100111000100101100
1101010001011100111111011001010
1/π =0.3183098861837906715377675267450
Each of these sequences is equally likely, in the
sense of probability theory – which does not provide
us with a way to talk about “randomness”.
Kolmogorov Complexity
C(x) = min{|d| : U(d) = x}
– U is a “universal” Turing machine
Important property
– Invariance: The choice of the universal
Turing machine U is unimportant (up to an
additive constant).
x is random if C(x) ≥ |x|.
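The shape of the definition can be made concrete with a toy, total stand-in for U (invented here purely for illustration). For a genuine universal machine the same search is not an algorithm, which is exactly why C is uncomputable:

```python
from itertools import product

def toy_C(x, U, max_len):
    """Search descriptions d in order of increasing length and return
    the length of the shortest d with U(d) = x.  This terminates only
    because the toy U below is total; a real universal U need not halt."""
    for length in range(max_len + 1):
        for bits in product('01', repeat=length):
            if U(''.join(bits)) == x:
                return length
    return None

def U(d):
    """Toy 'universal machine': '0'+w prints w; '1'+w prints w twice."""
    if d.startswith('0'):
        return d[1:]
    if d.startswith('1'):
        return d[1:] * 2
    return None

print(toy_C('0101', U, 8))   # '1'+'01' prints '0101', so → 3
print(toy_C('0110', U, 8))   # incompressible under this U, so → 5
```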
C^A(x) = min{|d| : U^A(d) = x}
Let’s make a connection between Kolmogorov
complexity and circuit complexity.
Circuit Complexity
Let D be a circuit of AND and OR gates (with
negations at the inputs). Size(D) = # of wires
in D.
Size(f) = min{Size(D) : D computes f}
We may allow oracle gates for a set A, along
with AND and OR gates.
Size^A(f) = min{Size(D) : D^A computes f}
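Unlike C(x), measures of this kind are computable by exhaustive search. A minimal sketch for the closely related formula-size measure (subformulas not shared, which keeps the recurrence simple), over the slide's basis of binary AND/OR gates with negations at the inputs, counting gates rather than wires:

```python
from itertools import product

def formula_size(target, n):
    """Minimum number of binary AND/OR gates in a formula over the
    literals x_i and NOT x_i computing the given truth table (a tuple
    of 2^n bits): relax costs over all pairs of known truth tables
    until a fixpoint is reached."""
    pts = list(product([0, 1], repeat=n))
    best = {}                                   # truth table -> min gates
    for i in range(n):                          # literals cost 0 gates
        lit = tuple(p[i] for p in pts)
        best[lit] = 0
        best[tuple(1 - b for b in lit)] = 0
    changed = True
    while changed:
        changed = False
        for (t1, c1), (t2, c2) in list(product(best.items(), repeat=2)):
            for combo in (tuple(a & b for a, b in zip(t1, t2)),
                          tuple(a | b for a, b in zip(t1, t2))):
                if c1 + c2 + 1 < best.get(combo, float('inf')):
                    best[combo] = c1 + c2 + 1
                    changed = True
    return best.get(target)

# XOR needs (x AND NOT y) OR (NOT x AND y): three gates.
print(formula_size((0, 1, 1, 0), 2))   # → 3
```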
Oracle Gates
An “oracle gate” for B in a circuit: [figure: a gate labeled B]
K-complexity ≈ Circuit Complexity
There are some obvious similarities in the
definitions.
C(x) = min{|d| : U(d) = x}
Size(f) = min{Size(D) : D computes f}
K-complexity ≈ Circuit Complexity
There are some obvious similarities in the
definitions. What are some differences?
A minor difference: Size gives a measure of
the complexity of functions, C gives a measure
of the complexity of strings.
– Given any string x, let f_x be the function
whose truth table is x, padded out with 0’s to
length 2^⌈log |x|⌉, and define Size(x) to be
Size(f_x).
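The padding construction in this bullet, as a quick sketch:

```python
import math

def f_x(x):
    """Pad the bit-string x with 0's out to the next power of two, and
    view the result as the truth table of a Boolean function on
    k = ceil(log2 |x|) inputs; Size(x) is then defined as Size(f_x)."""
    k = max(1, math.ceil(math.log2(len(x))))
    table = x + '0' * (2 ** k - len(x))
    def f(bits):                      # f_x(i) = i-th bit of the table
        index = int(''.join(map(str, bits)), 2)
        return int(table[index])
    return k, f

k, f = f_x('10110')                   # |x| = 5, so k = 3, table length 8
print(k, [f((a, b, c)) for a in (0, 1) for b in (0, 1) for c in (0, 1)])
# → 3 [1, 0, 1, 1, 0, 0, 0, 0]
```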
K-complexity ≈ Circuit Complexity
There are some obvious similarities in the
definitions. What are some differences?
A minor difference: Size gives a measure of
the complexity of functions, C gives a measure
of the complexity of strings.
A more fundamental difference:
– C(x) is not computable; Size(x) is.
Lightning Review: Computability
Here’s all we’ll need to know about
computability:
– The halting problem, everyone’s favorite
uncomputable problem:
– H = {(i,x) : M_i halts on input x}
– Every r.e. problem is poly-time many-one
reducible to H.
– One such r.e. problem: {x : C(x) < |x|}.
– There is no time-bounded many-one reduction
in the other direction, but there is a Turing
reduction with a (large) time bound t (and
hence C is not computable).
K-complexity ≈ Circuit Complexity
There are some obvious similarities in the
definitions. What are some differences?
A minor difference: Size gives a measure of
the complexity of functions, C gives a measure
of the complexity of strings.
A more fundamental difference:
– C(x) is not computable; Size(x) is.
In fact, there is a fascinating history, regarding
the complexity of the Size function.
MCSP
MCSP = {(x,i) : Size(x) < i}.
MCSP is in NP, but is not known to be NP-complete.
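Membership in NP is easy to see concretely: a witness for (x,i) in MCSP is a small circuit, and verification just evaluates it on all inputs — polynomially many, since the truth table x is part of the input. A minimal sketch (gate count standing in for wire count; the gate encoding here is invented for illustration):

```python
from itertools import product

def eval_circuit(gates, k, assignment):
    """Evaluate a straight-line circuit: wires 0..k-1 are the inputs;
    each gate ('AND', a, b), ('OR', a, b) or ('NOT', a, None) appends
    a new wire.  The last wire is the output."""
    wires = list(assignment)
    for op, a, b in gates:
        if op == 'AND':
            wires.append(wires[a] & wires[b])
        elif op == 'OR':
            wires.append(wires[a] | wires[b])
        else:
            wires.append(1 - wires[a])
    return wires[-1]

def verify_mcsp(x, i, gates):
    """NP-style verification for (x, i): the witness circuit must have
    fewer than i gates and agree with the truth table x everywhere."""
    k = (len(x) - 1).bit_length()
    assert len(x) == 2 ** k
    if len(gates) >= i:
        return False
    return all(eval_circuit(gates, k, a) == int(x[n])
               for n, a in enumerate(product([0, 1], repeat=k)))

# f_x = AND of two inputs: truth table '0001', one gate suffices.
print(verify_mcsp('0001', 2, [('AND', 0, 1)]))   # → True
```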
Some history: NP-completeness was
discovered independently in the 70s by
Cook (in North America) and Levin (in Russia).
Levin delayed publishing his results, because
he wanted to show that MCSP was NP-complete,
thinking that this was the more important problem.
MCSP
MCSP = {(x,i) : Size(x) < i}.
Why was Levin so interested in MCSP?
In the USSR in the 70’s, there was great
interest in problems requiring “perebor”, or
“brute-force search”. For various reasons,
MCSP was a focal point of this interest.
It was recognized at the time that this was
“similar in flavor” to questions about resource-bounded Kolmogorov complexity, but there
were no theorems along this line.
MCSP
MCSP = {(x,i) : Size(x) < i}.
3 decades later, what do we know about
MCSP?
If it’s complete under the “usual” poly-time
many-one reductions, then BPP = P.
MCSP is not believed to be in P. We showed:
– Factoring is in BPP^MCSP.
– Every cryptographically-secure one-way
function can be inverted in P^MCSP/poly.
So how can K-complexity and
Circuit complexity be the same?
C(x) ≈ Size^H(x), where H is the halting
problem.
For one direction, let U(d) = x. We need a
small circuit (with oracle gates for H) for f_x,
where f_x(i) is the i-th bit of x. This is easy,
since {(d,i,b) : U(d) outputs a string whose i-th
bit is b} is computably enumerable.
For the other direction, let Size^H(f_x) = m. No
oracle gate has more than m wires coming
into it. Given a description of D (size not much
bigger than m) and the m-bit number giving
the size of {y in H : |y| ≤ m}, U can simulate D^H
and produce f_x.
So how can K-complexity and
Circuit complexity be the same?
C(x) ≈ Size^H(x), where H is the halting
problem.
So there is a connection between C(x) and
Size(x) …
…but is it useful?
First, let’s look at decidable versions of
Kolmogorov complexity.
Time-Bounded Kolmogorov
Complexity
The usual definition:
C^t(x) = min{|d| : U(d) = x in time t(|x|)}.
Problems with this definition:
– No invariance! If U and U′ are different
universal Turing machines, C^t_U and C^t_U′ have
no clear relationship.
– (One can bound C^t_U by C^t′_U′ for t′ slightly
larger than t – but nothing can be done for
t′ = t.)
No nice connection to circuit complexity!
Time-Bounded Kolmogorov
Complexity
Levin’s definition:
Kt(x) = min{|d|+log t : U(d) = x in time t}.
Invariance holds! If U and U′ are different
universal Turing machines, Kt_U(x) and Kt_U′(x)
are within log |x| of each other.
And, there’s a connection to Circuit
Complexity:
– Let A be complete for E = DTime(2^O(n)). Then
Kt(x) ≈ Size^A(x).
Time-Bounded Kolmogorov
Complexity
Levin’s definition:
Kt(x|φ) = min{|d|+log t : U(d,φ)=x in time t}.
Why log t?
– This gives an optimal search order for NP
search problems.
– This algorithm finds satisfying assignments
in poly time, if any algorithm does:
– On input φ, search through assignments y in
order of increasing Kt(y|φ).
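The dovetailing behind this search order can be sketched as follows: in phase k, every description d with |d| ≤ k gets a time budget of 2^(k−|d|) steps, so candidates are effectively produced in order of increasing |d| + log t, i.e. increasing Kt. Here `run` and `check` are hypothetical stand-ins for simulating U(d) with a step budget and for testing a candidate solution:

```python
from itertools import product

def levin_search(run, check, max_k=16):
    """Toy Levin-style universal search.  Phase k runs every
    description d of length <= k for 2^(k - len(d)) steps and tests
    whatever it outputs, so candidates of small Kt are tried first.
    run(d, steps) returns the output of d, or None if it has not
    halted within the budget."""
    for k in range(max_k + 1):
        for length in range(k + 1):
            budget = 2 ** (k - length)
            for bits in product('01', repeat=length):
                y = run(''.join(bits), budget)
                if y is not None and check(y):
                    return y
    return None

# Toy "universal machine": description d prints itself after |d| steps.
run = lambda d, steps: d if steps >= max(1, len(d)) else None
# Find a "satisfying assignment": any string with exactly three 1's.
print(levin_search(run, lambda y: y.count('1') == 3))   # → 111
```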
Time-Bounded Kolmogorov
Complexity
Levin’s definition:
Kt(x) = min{|d|+log t : U(d) = x in time t}.
Why log t?
– This gives an optimal search order for NP
search problems.
– Adding t instead of log t would give every
string complexity ≥ |x|.
…So let’s look for a sensible way to allow the
run-time to be much smaller.
Revised Kolmogorov Complexity
C(x) = min{|d| : for all i ≤ |x| + 1, U(d,i,b) = 1 iff
b is the i-th bit of x} (where bit number |x|+1 of x is *).
– This is identical to the original definition.
Kt(x) = min{|d|+log t : for all i ≤ |x| + 1, U(d,i,b)
= 1 iff b is the i-th bit of x, in time t}.
– The new and old definitions are within O(log
|x|) of each other.
Define KT(x) = min{|d|+t : for all i ≤ |x| + 1,
U(d,i,b) = 1 iff b is the i-th bit of x, in time t}.
Kolmogorov Complexity is Circuit
Complexity
C(x) ≈ Size^H(x).
Kt(x) ≈ Size^E(x).
KT(x) ≈ Size(x).
Other measures of complexity can be
captured in this way, too:
– Branching Program Size ≈ KB(x) =
min{|d|+2^s : for all i ≤ |x| + 1, U(d,i,b) = 1 iff b
is the i-th bit of x, in space s}.
Kolmogorov Complexity is Circuit
Complexity
C(x) ≈ Size^H(x).
Kt(x) ≈ Size^E(x).
KT(x) ≈ Size(x).
Other measures of complexity can be
captured in this way, too:
– Formula Size ≈ KF(x) =
min{|d|+2^t : for all i ≤ |x| + 1, U(d,i,b) = 1 iff b
is the i-th bit of x, in time t}, for an alternating
Turing machine U.
…but is this interesting?
The result that Factoring is in BPP^MCSP was
first proved by observing that, in P^MCSP, one
can accept a large set of strings having large
KT complexity: R_KT = {x : KT(x) ≥ |x|}.
(Basic Idea): There is a pseudorandom
generator based on factoring, such that
factoring is in BPP^T for any test T that
distinguishes truly random strings from
pseudorandom strings. R_KT is just such a test
T.
Given a hard f, g is good
seed
Generator g
Regular input
Pseudorandom bits b_1, b_2, …
Probabilistic algorithm
This idea has many variants.
Consider R_KT, R_Kt, and R_C.
R_KT is in coNP, and not known to be coNP-hard.
R_C is not hard for NP under poly-time many-one
reductions, unless P = NP.
– How about more powerful reductions?
– Is there anything interesting that we could compute
quickly if C were computable, that we can’t already
compute quickly?
– Proof uses PRGs, Interactive Proofs, and the fact
that an element of R_C of length n can be found in
poly time, relative to R_C [BFNV].
– But R_C is undecidable! Perhaps H is in P relative to
R_C?
This idea has many variants.
Consider R_KT, R_Kt, and R_C.
R_KT is in coNP, and not known to be coNP-hard.
R_C is not hard for NP under poly-time many-one
reductions, unless P = NP.
– How about more powerful reductions? We show:
– PSPACE is in P relative to R_C.
– NEXP is in NP relative to R_C.
– Proof uses PRGs, Interactive Proofs, and the fact
that an element of R_C of length n can be found in
poly time, relative to R_C [BFNV].
– But R_C is undecidable! Perhaps H is in P relative to
R_C?
Relationship between H and R_C
Perhaps H is in P relative to R_C?
This is still open. It is known that there is a
computable time bound t such that H is in
DTime(t) relative to R_C [Kummer].
…but the bound t depends on the choice of U
in the definition of C(x).
We also showed: H is in P/poly relative to R_C.
Thus, in some sense, there is an “efficient”
reduction of H to R_C.
This idea has many variants.
Consider R_KT, R_Kt, and R_C.
What about R_Kt?
R_Kt is not hard for NP under poly-time
many-one reductions, unless E = NE.
– How about more powerful reductions?
– We show: EXP = NP(R_Kt).
– …and R_Kt is complete for EXP under P/poly
reductions.
– Open if R_Kt is in P!
A Very Different Complete Set
R_Kt is not hard for NP under poly-time
many-one reductions, unless E = NE.
And it’s provably not complete for EXP under
poly-time many-one reductions.
Yet it is complete under P/poly reductions and
under NP-Turing reductions.
First example of a “natural” complete set that’s
complete only using more powerful reductions.
A Very Different Complete Set
R_Kt is not hard for NP under poly-time
many-one reductions, unless E = NE.
And it’s provably not complete for EXP under
poly-time many-one reductions.
(Sketch): Assume that R_Kt is complete under
poly-time many-one reductions, and let A be a
language over 1* in EXP that’s not in P. (Such
sets A exist.) Let f reduce A to R_Kt. Thus f(1^n)
is in R_Kt iff 1^n is in A. But Kt(f(1^n)) is O(log n),
and thus the only way f(1^n) can be in R_Kt is if
f(1^n) is short (O(log n) bits). Thus A is in P,
contrary to assumption.
Playing the Game in PSPACE
We can also define a space-bounded measure
KS, such that Kt(x) < KS(x) < KT(x).
R_KS is complete for PSPACE under BPP
reductions (and, in fact, even under “ZPP”
reductions, which implies completeness under
NP reductions and P/poly reductions).
[This makes use of an even stronger form of
the Impagliazzo-Wigderson generator, which
relies on the “hard” function being “downward
self-reducible and random self-reducible”.]
Playing the Game Beyond EXP
For essentially any “large” DTIME, NTIME, or
DSPACE complexity class C, one can show
that the problem of computing (or
approximating) the Size^A function (where A is
the standard complete set for C) is complete
for C under P/poly reductions.
However, we obtain completeness under
uniform notions of reducibility (such as NP- or
ZPP- reducibility) only inside NEXP.
Is this inherent? (E.g., is the inclusion “NEXP
is contained in NP(R_C)” optimal?)
Some Speculation
Is this inherent? (E.g., is the inclusion “NEXP
is contained in NP(R_C)” optimal?)
If so, then this could lead to characterizations
of complexity classes in terms of efficient
reductions to the (undecidable) set R_C.
…which would be sufficiently strange that it
would certainly have to be useful!
Some Speculation
Is this inherent? (E.g., is the inclusion “NEXP
is contained in NP(R_C)” optimal?)
If so, then this could lead to characterizations
of complexity classes in terms of efficient
reductions to the (undecidable) set R_C.
New development: evidence that complexity
classes can be characterized in terms of
reductions to R_C (or R_K).
We know: NEXP is in NP relative to R_K (no
matter which univ. TM we use to define K(x)).
No class bigger than EXPSPACE has this
property.
Things to Remember
There has been impressive progress in
derandomization, so that now most people in
the field would conjecture that BPP = P.
The tools of derandomization can be viewed
through the lens of Kolmogorov complexity,
because of the close connection that exists
between circuit complexity and Kolmogorov
complexity.
Things to Remember
This has led to new insights in complexity
theory, such as:
– Clarification of the complexity of Levin’s Kt
function.
– Presentation of fundamentally different
classes of complete sets for PSPACE and
EXP.
– Clarification of the complexity of MCSP.
It has also led to new insights in Kolmogorov
complexity (such as the efficient reduction of H
to R_C).
A Sampling of Open Questions
Is MCSP complete for NP under P/poly
reductions? (The analogous sets in PSPACE,
EXP, NEXP, etc. are complete under P/poly
reductions.)
Is MCSP “complete” in some sense for
cryptography? (I.e., can one show that secure
crypto is possible ↔ MCSP is not in P/poly?
The → direction is known. How about the ←
direction? Can one build a crypto scheme
based on MCSP?)
A Sampling of Open Questions
Can one prove (unconditionally) that R_Kt is not
in P? (I know of no reason why this should be
hard to prove.)
– Note in this regard, that EXP = ZPP iff there
is a “dense” set in P that contains no strings
of Kt complexity < √n.
Can one show that R_Kt is not complete for EXP
under poly-time Turing reductions?
A Sampling of Open Questions
Can one improve the inclusions:
– PSPACE is in P relative to R_C.
– NEXP is in NP relative to R_C.
Can one show that H is not poly-time Turing
reducible to R_C?