The Bright Side of Hardness
Relating Computational Complexity
and Cryptography
Oded Goldreich
Weizmann Institute of Science
Background: The P versus NP Question
Solving problems is harder than checking solutions;
proving theorems is harder than verifying proofs.
The bad news: useful problems are infeasible to solve.
The good news (for CRYPTOGRAPHY): one can build systems that are easy to use but hard to abuse.
One-Way Functions
The good/bad news depends on typical (average-case) hardness.
This leads to the definition of one-way functions (OWF):
functions whose images are easy to create, but for which it is hard to find a preimage.
DEF: f : {0,1}* → {0,1}* is a one-way function if
1. f is easy to evaluate;
2. f is hard to invert in an average-case sense; that is, for every efficient algorithm A,
   Pr_{x ∈ {0,1}^n} [ A(f(x)) ∈ f^{-1}(f(x)) ] ≤ negl(n).
E.g., MULT: computing 6783 × 8471 = 57458793 is easy,
but recovering ???? × ???? = 45812601 (i.e., finding the factors) is hard.
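To make the evaluate/invert asymmetry concrete, here is a minimal Python sketch contrasting multiplication with brute-force factoring. It is only an illustration of the direction of hardness (at these toy sizes nothing is actually infeasible), and the trial-division inverter is an assumption of the sketch, not part of the talk.

```python
# Toy illustration of the easy/hard asymmetry behind MULT (not a real OWF at these sizes).

def evaluate(p: int, q: int) -> int:
    """Easy direction: multiply two factors."""
    return p * q

def invert(n: int):
    """Hard direction, done naively: trial division, whose cost grows
    exponentially in the bit-length of n when n is a product of two large primes."""
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d, n // d
        d += 1
    return 1, n  # n is prime (or 1)

print(evaluate(6783, 8471))   # 57458793 -- instantaneous
print(invert(57458793))       # finds *a* factorization; for a product of two large
                              # random primes the loop would run for ~sqrt(n) steps
```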
Applications to Cryptography
Using any OWF (e.g., assuming intractability of factoring):
• Private-Key Encryption and Message Authentication;
• Signature Schemes
• Commitment Schemes
and more…
Using any Trapdoor Permutation (e.g., assuming intractability of factoring):
• Public-Key Encryption;
• General Secure Multi-Party Computation
and more…
Amplifying Hardness
Is some “part of the preimage” of a OWF extremely hard to predict from the image?
[Figure: computing x → f(x) is easy, while recovering x from f(x) is HARD; the question concerns a single bit b(x).]
Recall: by definition it is hard to retrieve the entire preimage.
DEF: b : {0,1}* → {0,1} is a hardcore of f if
1. b is easy to evaluate;
2. b(x) is hard to predict from f(x) in an average-case sense; i.e., for every efficient algorithm A,
   Pr_{x ∈ {0,1}^n} [ A(f(x)) = b(x) ] ≤ 1/2 + negl(n).
Note: suppose that f is 1-1; then, if f has a hardcore, f is hard to invert.
The Existence of a Hardcore
THM: For every OWF f, the predicate b(x,r) = Σ_{i∈[n]} x_i·r_i mod 2
is a hardcore of f’(x,r) = (f(x),r).
COR (to the proof): If y → H(y) ∈ {0,1}^m is hard to effect,
then (y,S) → Σ_{i∈S} H(y)_i mod 2 is hard to predict (better than 50:50).
Warm-up: show that b is moderately hard to predict, in the sense that for every efficient algorithm A,
   Pr_{x,r ∈ {0,1}^n} [ A(f(x),r) = b(x,r) ] < 0.76.
Otherwise, obtain each (i.e., the i-th) bit of x as follows. On input f(x), repeat:
   select a random r ∈ {0,1}^n and guess x_i as A(f(x), r⊕e^(i)) ⊕ A(f(x), r).
Since b(x, r⊕e^(i)) = b(x,r) ⊕ x_i, and each of the two invocations of A is correct w.p. 0.76,
both are correct (and the guess equals x_i) w.p. at least 0.52; a majority vote over the repetitions
thus recovers x_i, contradicting the one-wayness of f.
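The warm-up reduction above can be written as a short routine. Below is a minimal Python sketch in which `predictor` stands for the assumed algorithm A(f(x),r) that guesses b(x,r) correctly with probability about 0.76; for the demo it is simulated by a toy predictor that secretly knows x (an assumption made only so the sketch runs).

```python
import secrets

def inner_product_bit(x, r):
    """b(x,r) = sum over i of x_i * r_i, mod 2."""
    return sum(xi & ri for xi, ri in zip(x, r)) & 1

def recover_bit(predictor, fx, i, n, trials=501):
    """Warm-up reduction: majority vote of A(f(x), r xor e^(i)) xor A(f(x), r) over random r."""
    votes = 0
    for _ in range(trials):
        r = [secrets.randbelow(2) for _ in range(n)]
        r_flipped = list(r)
        r_flipped[i] ^= 1                      # r xor e^(i)
        votes += predictor(fx, r_flipped) ^ predictor(fx, r)
    return int(2 * votes > trials)

# Toy demo: the "predictor" knows x and errs w.p. 0.24 (a stand-in for the assumed A).
n = 32
x = [secrets.randbelow(2) for _ in range(n)]
fx = None   # the image f(x) is not used by the toy predictor

def noisy_predictor(fx, r):
    answer = inner_product_bit(x, r)
    return answer ^ 1 if secrets.randbelow(100) < 24 else answer

print([recover_bit(noisy_predictor, fx, i, n) for i in range(n)] == x)   # True w.h.p.
```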
1st Application: Pseudorandom Generators
Deterministic programs (algorithms) that stretch short random seeds to long(er) pseudorandom sequences.
[Figure: a (random) seed enters G; the question (a mental experiment) is whether G’s output can be told apart from a totally random sequence.]
THM: PRG if and only if OWF
Pseudorandom Generators (form.)
Deterministic algorithms that stretch short random seeds to long(er) pseudorandom sequences.
DEF: G is a pseudorandom generator (PRG) if
1. It is efficiently computable: s → G(s) is easy.
2. It stretches: |G(s)| > |s| (or better, |G(s)| >> |s|).
3. Its output is computationally indistinguishable from random; that is, for every efficient algorithm D,
   | Pr_{s ∈ {0,1}^n} [ D(G(s)) = 1 ] − Pr_{y ∈ {0,1}^ℓ} [ D(y) = 1 ] | ≤ negl(n),
   where ℓ = |G(1^n)|.
THM: PRG if and only if OWF
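Condition 3 can be phrased as a concrete experiment. The following Python sketch estimates a given distinguisher’s advantage against a candidate generator by sampling; `G` and `D` are placeholders for an arbitrary candidate and test, not objects defined in the talk.

```python
import secrets

def advantage(G, D, seed_bytes, out_bytes, trials=10_000):
    """Estimate | Pr[D(G(s)) = 1] - Pr[D(y) = 1] | by sampling seeds s and random strings y."""
    hits_on_prg  = sum(D(G(secrets.token_bytes(seed_bytes))) for _ in range(trials))
    hits_on_rand = sum(D(secrets.token_bytes(out_bytes)) for _ in range(trials))
    return abs(hits_on_prg - hits_on_rand) / trials

# A deliberately bad "generator" (it just repeats its seed) and a test that notices the repetition.
G = lambda s: s + s
D = lambda y: int(y[:len(y) // 2] == y[len(y) // 2:])
print(advantage(G, D, seed_bytes=16, out_bytes=32))   # close to 1, so this G is not pseudorandom
```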
PRG iff OWF (“Hardness vs Randomness”)
THM: PRG if and only if OWF
PRG implies OWF: Let G be a PRG (with doubling stretch). Define f(x) = G(x).
Note: an algorithm that inverts f yields a distinguisher of G’s output from random,
since random 2|x|-bit strings are unlikely to have a preimage under f.
OWF implies PRG (special case): Let f be a 1-1 OWF with hardcore b.
Then G(s) = f(s)·b(s) is a PRG (with minimal stretch, of a single bit).
Recall (THM): PRGs with minimal stretch (of a single bit) imply
PRGs with maximal stretch (i.e., any stretch given by an efficient map 1^|s| → 1^|G(s)|).
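Behind the recalled stretch-amplification theorem is a simple iterative construction: apply the single-bit-stretch generator, keep the first n bits as the next seed, and output the extra bit at each step. A minimal Python sketch follows, using a SHA-256-based stand-in for the single-bit-stretch generator (an assumption of the sketch; SHA-256 is not a proven PRG).

```python
import hashlib

N = 16  # seed length in bytes

def G1(s: bytes):
    """Stand-in for a PRG with minimal stretch: maps an N-byte seed to (next N-byte seed, 1 extra bit)."""
    h = hashlib.sha256(s).digest()
    return h[:N], h[N] & 1

def G_stretched(s: bytes, ell: int):
    """PRG with stretch ell bits: iterate G1, keep the new seed, and output the extra bit each time."""
    bits = []
    for _ in range(ell):
        s, b = G1(s)
        bits.append(b)
    return bits

print(G_stretched(b"0123456789abcdef", 24))
```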
Pseudorandom Functions (PRF)
DEF: {f_s : {0,1}^|s| → {0,1}^|s|} is a pseudorandom function (PRF) if
1. Easy to evaluate: (s,x) → f_s(x) is easy.
2. It passes the “Turing Test (of randomness)”: no efficient examiner, asking queries q_1,…,q_t and
   receiving the corresponding answers a_1,…,a_t, can tell whether it is interacting with f_s
   (for a random s) or with a totally random function.
THM: PRG imply PRF.
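One standard construction behind this theorem (the GGM tree construction) applies a length-doubling PRG along the path determined by the input bits. A minimal Python sketch, again with a SHA-256-based stand-in for the length-doubling generator (an assumption made only so the code runs):

```python
import hashlib

def G(k: bytes):
    """Stand-in for a length-doubling PRG, split into the two halves G_0(k) and G_1(k)."""
    n = len(k)
    return (hashlib.sha256(b"\x00" + k).digest()[:n],
            hashlib.sha256(b"\x01" + k).digest()[:n])

def prf(s: bytes, x: str) -> bytes:
    """GGM-style evaluation of f_s(x): walk down a binary tree according to the bits of x."""
    k = s
    for bit in x:          # x is a bit-string such as "0110"
        k = G(k)[int(bit)]
    return k

key = b"0123456789abcdef"
print(prf(key, "0110").hex())
print(prf(key, "0111").hex())   # flipping one input bit yields an unrelated-looking output
```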
Historical notes (re Turing)
Cryptography: private-key encryption
(based on PRF)
[Figure: the sender E and the receiver D hold the same key; E maps msg to a ciphertext, which D maps back to msg.]
E_k(msg): (1) select r ∈_R {0,1}^n; (2) output ciphertext = (r, f_k(r) ⊕ msg).
D_k(r,y) = y ⊕ f_k(r) = msg.
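A minimal Python sketch of this scheme, using HMAC-SHA256 as a stand-in for the PRF f_k (an assumption of the sketch, not the talk’s construction) and restricting messages to one PRF block:

```python
import hmac, hashlib, secrets

def f(k: bytes, r: bytes) -> bytes:
    """Stand-in PRF: HMAC-SHA256 keyed with k (32-byte outputs)."""
    return hmac.new(k, r, hashlib.sha256).digest()

def encrypt(k: bytes, msg: bytes):
    """E_k(msg) = (r, f_k(r) XOR msg), for a fresh random r and |msg| <= 32 bytes."""
    assert len(msg) <= 32
    r = secrets.token_bytes(16)
    pad = f(k, r)[:len(msg)]
    return r, bytes(p ^ m for p, m in zip(pad, msg))

def decrypt(k: bytes, r: bytes, y: bytes) -> bytes:
    """D_k(r, y) = y XOR f_k(r)."""
    pad = f(k, r)[:len(y)]
    return bytes(p ^ c for p, c in zip(pad, y))

key = secrets.token_bytes(32)
r, c = encrypt(key, b"hello, world")
print(decrypt(key, r, c))   # b'hello, world'
```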
Cryptography: message authentication
(based on PRF)
[Figure: the signer S and the verifier V hold the same key; S maps msg to a tag, and V checks the pair (msg, tag), answering yes or no.]
S_k(msg) = f_k(msg) = tag.
V_k(msg, tag) = 1 iff tag = f_k(msg).
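A corresponding sketch of the PRF-based MAC, reusing the HMAC-SHA256 stand-in for f_k from the encryption sketch:

```python
import hmac, hashlib, secrets

def sign(k: bytes, msg: bytes) -> bytes:
    """S_k(msg) = f_k(msg); HMAC-SHA256 stands in for the PRF."""
    return hmac.new(k, msg, hashlib.sha256).digest()

def verify(k: bytes, msg: bytes, tag: bytes) -> bool:
    """V_k(msg, tag) = 1 iff tag = f_k(msg)."""
    return hmac.compare_digest(tag, sign(k, msg))

key = secrets.token_bytes(32)
tag = sign(key, b"pay 10")
print(verify(key, b"pay 10", tag), verify(key, b"pay 1000", tag))   # True False
```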
More Cryptography: Sign and Commit
Signature scheme = message authentication, except that it allows
universal verification (by parties not holding the signing key).
THM: OWF imply Signature schemes.
Commitment scheme = commit phase + reveal phase s.t.
•Hiding: commit phase hides the value being committed.
•Binding: commit phase determines a single value
that can be later revealed.
THM: PRG imply Commitment schemes.
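One concrete construction behind the PRG-based commitment theorem is Naor’s scheme: the receiver sends a random string r of length 3n; to commit to bit 0 the sender outputs G(s) for a random n-bit seed s, and to commit to bit 1 it outputs G(s) ⊕ r; revealing the bit and s lets the receiver check. A minimal Python sketch, with a SHA-256-based stand-in for the 3n-bit-stretch PRG (an assumption of the sketch):

```python
import hashlib, secrets

N = 16  # seed length in bytes; commitments are 3*N bytes long

def G(s: bytes) -> bytes:
    """Stand-in PRG stretching N bytes to 3*N bytes."""
    return b"".join(hashlib.sha256(bytes([i]) + s).digest()[:N] for i in range(3))

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def commit(bit: int, r: bytes):
    """Commit phase: the receiver has already sent the random string r; the sender commits to bit."""
    s = secrets.token_bytes(N)
    c = G(s) if bit == 0 else xor(G(s), r)
    return c, s            # c goes to the receiver; s is kept for the reveal phase

def check(c: bytes, r: bytes, bit: int, s: bytes) -> bool:
    """Reveal phase: the sender discloses (bit, s); the receiver re-computes and compares."""
    return c == (G(s) if bit == 0 else xor(G(s), r))

r = secrets.token_bytes(3 * N)
c, s = commit(1, r)
print(check(c, r, 1, s), check(c, r, 0, s))   # True False
```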
A generic cryptographic task:
forcing parties to follow prescribed instructions
[Figure: a Party holding private input x, with public information y, is instructed (for a predetermined f) to send z = f(x,y); the other parties ask whether there exists an x such that z = f(x,y).]
The Party can prove the correctness of z by revealing x, but this would violate the privacy of x.
Instead, the Party proves in “zero-knowledge” that such an x exists (without revealing anything else).
THM: Commitment schemes imply ZK proofs as needed.
Zero-Knowledge proof systems
E.g., for graph 3-colorability (which is NP-complete).
Prove that a graph G=(V,E) is 3-colorable without revealing
anything else (beyond what follows easily from this fact).
The protocol repeats the following steps sufficiently many (i.e., |E|²) times; one round is sketched below:
1. Prover commits to a random relabeling of a (fixed) 3-coloring
   (i.e., commits to the color of each vertex separately).
2. Verifier requests to reveal the colors of the endpoints of a random edge.
3. Prover reveals the corresponding colors.
4. Verifier checks that the colors are legal and different.
THM: OWF imply ZK proofs for 3-colorability.
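To fix ideas, here is a minimal Python sketch of one round of the above protocol. The commitments are implemented with a hash-and-randomness stand-in (an assumption of the sketch; the talk only requires some commitment scheme), and the graph and its 3-coloring are toy data.

```python
import hashlib, secrets, random

# Toy instance: a 3-colorable graph and a fixed proper 3-coloring (the prover's secret).
edges = [(0, 1), (1, 2), (2, 0), (2, 3)]
coloring = {0: 0, 1: 1, 2: 2, 3: 0}
vertices = sorted(coloring)

def commit(value: int):
    """Hash-based stand-in commitment to a small value; returns (commitment, opening)."""
    opening = secrets.token_bytes(16)
    return hashlib.sha256(opening + bytes([value])).digest(), opening

def check_open(com: bytes, value: int, opening: bytes) -> bool:
    return com == hashlib.sha256(opening + bytes([value])).digest()

def one_round() -> bool:
    # 1. Prover commits to a random relabeling of its 3-coloring, vertex by vertex.
    perm = random.sample(range(3), 3)
    relabeled = {v: perm[coloring[v]] for v in vertices}
    coms = {v: commit(relabeled[v]) for v in vertices}
    # 2. Verifier asks for the endpoints of a random edge.
    u, w = random.choice(edges)
    # 3. Prover reveals the two corresponding colors (with their openings).
    # 4. Verifier checks the openings and that the colors are legal and different.
    ok_u = check_open(coms[u][0], relabeled[u], coms[u][1])
    ok_w = check_open(coms[w][0], relabeled[w], coms[w][1])
    return ok_u and ok_w and relabeled[u] != relabeled[w] and {relabeled[u], relabeled[w]} <= {0, 1, 2}

print(all(one_round() for _ in range(len(edges) ** 2)))   # accepts: the coloring is proper
```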
Universal Results: general secure
multi-party computations
Any desired multi-party functionality
can be implemented securely.
(This represents a variety of THMs in various models,
some relying on computational assumptions, e.g., OWF.)
[Figure: parties P1,…,Pm hold local inputs x = (x1,x2,…,xm); for predetermined f_i’s, each P_i should obtain the desired local output f_i(x). In the ideal model, the parties hand their inputs to a trusted party, which returns f_1(x),…,f_m(x).]
The effect of a trusted party can be securely emulated by distrustful parties.
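The “trusted party” being emulated is just the following trivial program; the point of the theorem is that the same input/output behavior (and nothing more) can be achieved by the distrustful parties themselves. A minimal sketch with a hypothetical example functionality:

```python
# Ideal model: a trusted party collects all local inputs and hands each party its own output.
def trusted_party(inputs, functionalities):
    x = tuple(inputs)                        # x = (x1, x2, ..., xm)
    return [f_i(x) for f_i in functionalities]

# Hypothetical example: three parties learn the sum of their inputs, and nothing else.
outputs = trusted_party([3, 5, 9], [sum, sum, sum])
print(outputs)   # [17, 17, 17]
```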
The End
Note: the slides of this talk are available at
http://www.wisdom.weizmann.ac.il/~oded/T/ecm08.ppt
Material on the Foundations of Cryptography
(e.g., surveys and a two-volume book)
is available at http://www.wisdom.weizmann.ac.il/~oded/foc.html
Historical notes relating PRFs to Turing
1. The term “Turing Test of Randomness” is analogous to the
famous “Turing Test of Intelligence”, which refers to
distinguishing a machine from a human via interaction.
2. Even more related is the following quote from Turing’s work (1950):
“I have set up on a Manchester computer a
small programme using only 1000 units of
storage, whereby the machine supplied with
one sixteen figure number replies with
another within two seconds. I would defy
anyone to learn from these replies
sufficient about the programme to be able
to predict any replies to untried values.”