Public-Key Encryption from
Different Assumptions
Benny Applebaum
Boaz Barak
Avi Wigderson
Plan
• Background
• Our results
- assumptions & constructions
• Proof idea
• Conclusions and Open Problems
Private Key Cryptography (2000BC-1970's)
• parties share a secret key k

Public Key Cryptography (1976-…)
Private Key Crypto
• Share key and then talk securely
• Unstructured
• Many candidates:
  - DES [Feistel+76]
  - RC4 [Rivest87]
  - Blowfish [Schneier93]
  - AES [RijmenDaemen98]
  - Serpent [AndersonBihamKnudsen98]
  - MARS [Coppersmith+98]

Public Key Crypto
• Talk securely with no shared key
• Beautiful math
• Few candidates:
  - Discrete Logarithm [DiffieHellman76, Miller85, Koblitz87]
  - Integer Factorization [RivestShamirAdleman77, Rabin79]
  - Error Correcting Codes [McEliece78, Alekhnovich03, Regev05]
  - Lattices [AjtaiDwork96, Regev04]

"Beautiful structure" may lead to unforeseen attacks!
The ugly side of beauty
Factorization of n-bit integers:
• 300BC: Trial Division, ~exp(n/2)
• 1974: Pollard's Alg, ~exp(n/4)
• 1975: Continued Fraction, ~exp(n^(1/2))
• 1977: RSA invented
• 1985: Quadratic Sieve, ~exp(n^(1/2))
• 1990: Number Field Sieve, ~exp(n^(1/3))
• 1994: Shor's Alg, ~poly*(n)
Are there “ugly” public key cryptosystems?
Cryptanalysis of DES
• 1976: DES invented; trivial 2^56 attack
• 1990: Differential Attack [Biham Shamir], 2^47 time+examples
• 1993: Linear Attack [Matsui], 2^43 time+examples
Complexity Perspective
Our goals as complexity theorists are to prove that:
• NP ≠ P (clique hard)
• NP is hard on average (clique hard on avg)
• ∃ one-way functions (planted clique hard)
• ∃ public key cryptography (factoring hard)
What should be done?
Ultimate Goal: public-key cryptosystem from one-way function
• Goal: PKC based on more combinatorial problems
  - increase our confidence in PKC
  - natural step on the road to the Ultimate Goal
  - understand avg-hardness/algorithmic aspects of natural problems
• This work: several constructions based on combinatorial problems
• Disclaimer: previous schemes are much better in many (most?) aspects
  - efficiency
  - factoring: old and well-studied
  - lattice problems: based on worst-case hardness (e.g., n^1.5-GapSVP)
Plan
• Background
• Our results
- assumptions & constructions
• Proof idea
• Conclusions and Open Problems
Assumption DUE
Decisional-Unbalanced-Expansion: Hard to distinguish G from H, where
• G = random (m,n,d) bipartite graph (m left vertices of degree d, n right vertices)
• H = random (m,n,d) graph + planted shrinking set: a set S of q left vertices whose neighborhood T has size < q/3
• Can't approximate vertex expansion in random unbalanced bipartite graphs
• Well studied problem, though not exactly in this setting (densest-subgraph)
We prove:
• Thm. Can't distinguish via cycle-counting / spectral techniques
• Thm. Implied by variants of planted-clique in random graphs
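The two DUE distributions can be sampled in a few lines. Below is a minimal Python sketch (the helper names `random_bipartite` and `plant_shrinking_set` are illustrative, not from the paper): G is a random (m,n,d) graph, and H additionally rewires a random set S of q left vertices so that all their edges land inside a right-hand set T of size < q/3.

```python
import random

def random_bipartite(m, n, d, rng):
    """Random (m, n, d) graph: each of the m left vertices picks d random right neighbors."""
    return [rng.sample(range(n), d) for _ in range(m)]

def plant_shrinking_set(graph, n, q, d, rng):
    """Plant a shrinking set: rewire q left vertices into a right set T with |T| < q/3.
    Edges into T are chosen with replacement, so multi-edges may occur in this sketch."""
    T = rng.sample(range(n), max(1, q // 3 - 1))  # small right-hand set, |T| < q/3
    S = rng.sample(range(len(graph)), q)          # the q left vertices to rewire
    for v in S:
        graph[v] = [rng.choice(T) for _ in range(d)]
    return graph, S, T
```

The assumption is that, given only the edge lists, the output of `plant_shrinking_set` is computationally indistinguishable from a fresh `random_bipartite` sample.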
Assumption DSF
Decisional-Sparse-Function: Let G be a random (m,n,d) graph, x a random n-bit input, and P a (non-linear) predicate; set y_i = P(x restricted to the d neighbors of output i), e.g., y_i = P(x_2,x_3,x_6). Then y is pseudorandom (indistinguishable from a random m-bit string).
• Hard to solve random sparse (non-linear) equations
• Conjectured to be a one-way function when m=n [Goldreich00]
• Thm: Hard for myopic algorithms, linear tests, low-depth circuits (AC0)
  (as long as P is "good", e.g., 3-majority of XORs)
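Such a local function is trivial to evaluate. The sketch below assumes one reading of the "3-majority of XORs" predicate, namely MAJ(x_a⊕x_b, x_c⊕x_d, x_e⊕x_f) on d = 6 inputs; the exact predicate and the helper names are illustrative.

```python
import random

def maj3_of_xors(bits):
    """Predicate P on d = 6 bits: majority of three XOR pairs
    (one reading of '3-majority of XORs'; an assumption, not the paper's exact P)."""
    v = [bits[0] ^ bits[1], bits[2] ^ bits[3], bits[4] ^ bits[5]]
    return 1 if sum(v) >= 2 else 0

def dsf_output(graph, x, P=maj3_of_xors):
    """y_i = P(x restricted to the d neighbors of output i)."""
    return [P([x[j] for j in row]) for row in graph]

# sample a random (m, n, d) graph and a random input, and evaluate
rng = random.Random(1)
n, m, d = 40, 60, 6
graph = [rng.sample(range(n), d) for _ in range(m)]
x = [rng.randrange(2) for _ in range(n)]
y = dsf_output(graph, x)
```

The assumption DSF says that (graph, y) is indistinguishable from (graph, random string), even though each y_i depends on only d input bits.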
Assumption SLIN
Search-LIN: Let G be a random (m,n,d) graph, x a random n-bit input, and e an ε-noisy error vector; each output is the sum of the d neighboring input bits plus noise, e.g., y_i = x_2+x_3+x_6+err. Given G and y, can't recover x.
• Hard to solve sparse "noisy" random linear equations
• Well studied hard problem; sparseness doesn't seem to help
• Thm: SLIN is hard for:
  - low-degree polynomials (via [Viola08])
  - low-depth circuits (via [MST03+Braverman])
  - n-order Lasserre SDPs [Schoen08]
Goal: find x.
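An SLIN instance (sparse noisy linear equations over GF(2)) can be sampled directly; a minimal sketch, with an illustrative helper name:

```python
import random

def sample_slin(n, m, d, eps, rng):
    """Sample a random (m, n, d) system y = Mx + e over GF(2) with noise rate eps."""
    rows = [rng.sample(range(n), d) for _ in range(m)]        # d-sparse rows of M
    x = [rng.randrange(2) for _ in range(n)]                  # secret random input
    e = [1 if rng.random() < eps else 0 for _ in range(m)]    # eps-noisy error vector
    y = [(sum(x[j] for j in row) + e[i]) % 2 for i, row in enumerate(rows)]
    return rows, x, e, y
```

The assumption is that recovering x from (rows, y) is infeasible; the solver never sees e.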
Main Results
PKC from:
Thm 1: DUE(m, q = log n, d) + DSF(m, d)
  - e.g., m = n^1.1 and d = O(1)
  - pro: "combinatorial/private-key" nature
  - con: only n^(log n) security
(legend: DUE - the graph looks random; DSF - the output looks random; dLIN - can't find x from y_i = x_2+x_3+x_6+err)
Thm 2: SLIN(m = n^1.4, ε = n^-0.2, d = 3)
Thm 3: SLIN(m = n·log n, ε, d) + DUE(m = 10000n, q = 1/ε, d)
3LIN vs. Related Schemes

                    Our scheme                 [Alekhnovich03]   [Regev05]
#equations          O(n^1.4)                   O(n)              O(n)
noise rate          1/n^0.2                    1/√n              1/√n
degree (locality)   3                          n/2               n/2
field               binary                     binary            large
evidence            resists SDPs,                                implied by n^1.5-SVP
                    related to refuting 3SAT                     [Regev05, Peikert09]

Our intuition:
• 1/√n noise was a real barrier for PKC constructions
• 3LIN is more combinatorial (CSP)
• low-locality-noisy-parity is "universal" for low locality
Plan
• Background
• Our results
- assumptions & constructions
• Proof idea
• Conclusions and Open Problems
S3LIN(m = n^1.4, ε = n^-0.2) ⇒ PKE
Setup: M is a random m×n 3-sparse matrix, x a random n-bit input, e an error vector of rate ε, and y = Mx + e (each equation has the form y_i = x_2+x_3+x_6+err).
Goal: find x.
Our Encryption Scheme
Params: m = 10000·n^1.4, ε = n^-0.2, |S| = 0.1·n^0.2
Public-key: matrix M
Private-key: a set S containing m s.t. Σ_{i∈S} M_i = 0 (i.e., M_m = Σ_{i∈S\{m}} M_i)
Encrypt(b): choose random x, e, compute y = Mx + e, and output z = (y_1, y_2, …, y_m + b)
Decryption: Given ciphertext z, output Σ_{i∈S} z_i
• w/p (1-ε)^|S| > 0.9 there is no noise in e_S ⇒ Σ_{i∈S} y_i = 0 ⇒ Σ_{i∈S} z_i = b
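The scheme can be sketched in Python. This toy version plants the linear dependency by letting the last row be the XOR of the rows in S, so (unlike the actual scheme, where the planted sub-matrix stays 3-sparse) the last row need not be sparse; with noise rate 0 it illustrates the decryption identity Σ_{i∈S} z_i = b exactly. All names are illustrative.

```python
import random

def keygen(n, m, d, s, rng):
    """Toy key generation: plant a set S (containing the last row) whose rows XOR to zero.
    NOTE: in the real scheme the planted rows come from H_{q,n} and every row stays
    d-sparse; here the last row is simply a dense XOR of the others (a simplification)."""
    M = [[0] * n for _ in range(m)]
    for i in range(m - 1):
        for j in rng.sample(range(n), d):
            M[i][j] = 1
    S = rng.sample(range(m - 1), s - 1)
    for i in S:                        # last row = XOR of the planted rows
        for j in range(n):
            M[m - 1][j] ^= M[i][j]
    return M, sorted(S + [m - 1])      # private key S contains the last row

def encrypt(M, b, eps, rng):
    n, m = len(M[0]), len(M)
    x = [rng.randrange(2) for _ in range(n)]
    e = [1 if rng.random() < eps else 0 for _ in range(m)]
    y = [(sum(Mi[j] & x[j] for j in range(n)) + e[i]) % 2 for i, Mi in enumerate(M)]
    y[m - 1] ^= b                      # add the plaintext bit to the last equation
    return y

def decrypt(z, S):
    return sum(z[i] for i in S) % 2    # = b whenever no planted row was noisy
```

Since Σ_{i∈S} M_i = 0, the x-dependent parts cancel and decryption returns b plus the noise bits of the planted rows, which vanish w/p (1-ε)^|S|.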
Thm. (security): If M is at most 0.99-far from uniform, then
S3LIN(m, ε) hard ⇒ can't distinguish E(0) from E(1)
Proof outline: Search ⇒ Approximate Search ⇒ Prediction ⇒ Prediction over planted distribution ⇒ security
Search ⇒ Approximate Search
S3LIN(m,ε): Given M, y, find x whp
AS3LIN(m,ε): Given M, y, find w ≈_0.9 x whp
Lemma: a solver A for AS3LIN(m,ε) allows to solve S3LIN(m + 10n·lg n, ε)
(setting: M is a random m×n 3-sparse matrix, x a random n-bit vector, e an error vector of rate ε, y = Mx + e)
search ⇒ app-search ⇒ prediction ⇒ prediction over planted ⇒ PKC
Proof:
• Use A and the first m equations to obtain w.
• Use w and the remaining equations to recover x, as follows.
• Recovering x_1:
  - for each equation x_1 + x_i + x_k = y, compute a vote x_1 = w_i + w_k + y
  - take majority
Analysis:
• Assume w_S = x_S for a set S of size 0.9n
• A vote is good w/p >> 1/2, as Pr[i∈S]·Pr[k∈S]·Pr[row is not noisy] > 1/2
• If x_1 appears in 2·log n distinct equations, then the majority is correct w/p 1 - 1/n^2
• Take a union bound over all variables
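The voting step above can be sketched as follows; a toy sketch where, going slightly beyond the slide, every equation casts a vote for each of its three variables (the helper name is hypothetical):

```python
import random

def recover_by_votes(rows, y, w, n):
    """Majority-vote decoding: for each equation x_a + x_i + x_k = y,
    vote x_a = w_i + w_k + y (and symmetrically for x_i, x_k);
    output the majority vote for each variable."""
    votes = [[0, 0] for _ in range(n)]
    for (a, i, k), yi in zip(rows, y):
        for target, (u, v) in ((a, (i, k)), (i, (a, k)), (k, (a, i))):
            votes[target][(w[u] + w[v] + yi) % 2] += 1
    return [0 if v0 >= v1 else 1 for v0, v1 in votes]
```

Each vote is correct when both helper coordinates agree with x and the row is noiseless (probability ≈ 0.9·0.9·(1-ε) > 1/2), so with enough equations per variable the majority recovers x whp.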
Approximate Search ⇒ Prediction
AS3LIN(m,ε): Given M, y, find w ≈_0.9 x w/p 0.8
P3LIN(m,ε): Given M, y, (i,j,k), find x_i + x_j + x_k w/p 0.9
Lemma: a solver A for P3LIN(m,ε) allows to solve AS3LIN(m + 1000n, ε)
search ⇒ app-search ⇒ prediction ⇒ prediction over planted ⇒ PKC
Proof:
• Split the instance into the first m equations (M, y) and 1000n extra ε-noisy equations (T, z).
• Do 100n times: invoke the predictor A(T, z, (t,j,k)) on a random triple; each answer, combined with the corresponding equations of z, yields a vote (≈ 2ε-noisy) for a single bit x_i.
Analysis:
• By Markov, whp (T, z) are good, i.e., Pr_{t,j,k}[A(T, z, (t,j,k)) = x_t + x_j + x_k] > 0.8
• Conditioned on this, each prediction is good w/p >> 1/2
• whp we will see 0.99 of the variables many times, and each prediction is independent
Prediction over Related Distribution
P3LIN_D(m,ε): Given (M, r=(i,j,k)) drawn from D and y, find x_i + x_j + x_k w/p 0.9
D = a distribution over (M, r) which is at most 0.99-far from uniform
Lemma: a solver A for P3LIN_D(m,ε) allows to solve P3LIN_U(O(m), ε)
• Problem: A might be a bad predictor over the uniform distribution
• Sol: Test whether (M, r) is good for A with respect to a random x and random noise
  - if so ⇒ good prediction w/p 0.01
  - otherwise, output "I don't know"
search ⇒ app-search ⇒ prediction ⇒ prediction over planted ⇒ PKC
Sketch: Partition (M, y) into many pieces (M^i, y^i), then invoke A(M^i, y^i, r) and take the majority
• Problem: all invocations use the same r and x
• Sol: re-randomization!
Distribution with Short Linear Dependency
H_{q,n} = uniform distribution over matrices with q rows, each with 3 ones, and n columns, each with either 0 ones or 2 ones
P^q_{m,n} = (m,n,3)-uniform conditioned on the existence of a sub-matrix H ∈ H_{q,n} that touches the last row
Lemma: Let m = n^1.4 and q = n^0.2. Then (m,n,3)-uniform and P^q_{m,n} are at most 0.999-statistically far.
Proof: follows from [FKO06].
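A matrix in H_{q,n} is exactly a short linear dependency: since every column contains an even number of ones (0 or 2), the q rows XOR to the zero vector. A quick check (the helper name and the small example matrix are illustrative):

```python
def rows_xor_to_zero(rows, n):
    """If every column is touched an even number of times,
    the rows XOR to the zero vector over GF(2): a short linear dependency."""
    col_count = [0] * n
    for row in rows:
        for j in row:
            col_count[j] ^= 1          # track parity of ones in each column
    return all(c == 0 for c in col_count)

# A small member of H_{q,n} with q = 4 rows of 3 ones over n = 6 columns;
# every column has exactly 2 ones, so the rows sum to zero mod 2.
H = [(0, 1, 2), (0, 3, 4), (1, 3, 5), (2, 4, 5)]
```

This is precisely the dependency Σ_{i∈S} M_i = 0 that the key-generation plants among the rows of the public matrix.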
Plan
• Background
• Our results
- assumptions & constructions
• Proof idea
• Conclusions and Open Problems
Other Results
• Assumptions ⇒ Oblivious-Transfer ⇒ general secure computation
• New construction of PRG with large stretch + low locality
• Assumptions ⇒ learning k-juntas requires time n^Ω(k)
Conclusions
• New cryptosystems with arguably "less structured" assumptions
Future Directions:
• Improve the assumptions
  - use random 3SAT?
• Better theoretical understanding of public-key encryption
  - can public-key cryptography be broken in "NP ∩ co-NP"?
Thank You!