
Search to Decision Reductions for
Knapsacks and LWE
Daniele Micciancio, Petros Mol
UCSD Theory Seminar
October 3, 2011
Number Theoretic Cryptography
• Number theory: standard source of hard problems
  - Factoring: given N = pq, find p (or q) (p, q: large primes)
  - Discrete Log: given g and g^x (mod p), find x
• Very influential over the years
• Extensively studied (algorithms & constructions)
• Two major concerns:
  i.  How do we know that a "random" N is hard to factor?
  ii. Algorithmic progress
      - subexponential (classical) algorithms
      - polynomial-time quantum algorithms
Cryptography from Learning Problems
[Figure: samples (a_i, b_i = <a_i, s> mod q), i = 1, 2, …, with a
secret s ∈ Z_q^n; integers n, q public]

Finding s is easy: just need ~ n samples (solve the linear system)
New Problem: Learning With Errors (LWE)
[Figure: samples (a_i, b_i = <a_i, s> + e_i mod q), where each e_i is a
small error (noise) from a known distribution; secret s ∈ Z_q^n;
integers n, q public]

Finding s is possibly much harder.

Compactly: given a uniformly random matrix A ∈ Z_q^{m×n} and

   b = A·s + e   (mod q),

where s ∈ Z_q^n is the secret and e is a small error vector, recover s.
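To make the compact form concrete, here is a minimal Python sketch of
generating LWE samples; the rounded Gaussian used for e is only an
illustrative stand-in for the known error distribution:

```python
import numpy as np

rng = np.random.default_rng(0)

def lwe_samples(n, m, q, s, sigma=1.0):
    """Return (A, b) with A uniform in Z_q^{m x n} and b = A s + e (mod q).

    The error e is a rounded Gaussian (illustrative stand-in for the
    known error distribution, not the talk's specific choice).
    """
    A = rng.integers(0, q, size=(m, n))
    e = np.rint(rng.normal(0, sigma, size=m)).astype(np.int64) % q
    b = (A @ s + e) % q
    return A, b

n, m, q = 8, 24, 97
s = rng.integers(0, q, size=n)   # the secret
A, b = lwe_samples(n, m, q, s)
```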
The many interpretations of LWE
• Algebraic: Solving noisy random linear
equations
• Geometric: Bounded Distance Decoding in
Lattices
• Learning theory: Learning linear functions over Z_q under random
  "classification" noise
• Coding theory: Decoding random linear q-ary codes
LWE Background
• Introduced by Regev [R05]
• q = 2, Bernoulli noise -> Learning Parity with Noise (LPN)
• Extremely successful in Cryptography
• IND-CPA Public Key Encryption [Regev05]
• Injective Trapdoor Functions / IND-CCA Encryption [PW08]
• Strongly Unforgeable Signatures [GPV08, CHKP10]
• (Hierarchical) Identity-Based Encryption [GPV08, CHKP10, ABB10]
• Circular-Secure Encryption [ACPS09]
• Leakage-Resilient Cryptography [AGV09, DGK+10, GKPV10]
• (Fully) Homomorphic Encryption [GHV10, BV11]
Why LWE ?
• Rich and intuitive structure
  - linear operations over fields (or rings)
  - certain homomorphic properties
  - ability to embed a trapdoor
• Wealth of cryptographic applications
• Worst-case/average-case reduction
  - solving LWE* (an average-case problem) is at least as hard as solving
    certain lattice problems in the worst case
• High resistance to algorithmic attacks
  - no quantum attacks known
  - no subexponential algorithms known

*for a specific error distribution
LWE: Search & Decision
Public parameters: n (size of the secret), m (number of samples),
q (modulus), χ (error distribution)

Find (search)
  Given: (A, b = As + e mod q)
  Goal: find s (or e)

Distinguish (decision)
  Given: (A, b)
  Goal: decide whether b = As + e (mod q) or b is uniform over Z_q^m
S-to-D: Worst-Case vs Average Case
• Clique (search): Given graph G and integer k, find a
clique of size k in G, or say none exists.
• Clique (decision): Given graph G and integer k, output
YES if G has a clique of size k and NO otherwise.
Theorem: Search <= Decision
Proof: Assume decider D.
for v in V:
    if D(G \ v, k) = YES then G ← G \ v
return G
Crucial: D is a worst-case (perfect) decider
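A minimal Python sketch of this self-reduction, with a brute-force
k-clique test standing in for the perfect decider D (a graph is a dict
mapping each vertex to its neighbor set):

```python
from itertools import combinations

def D(G, k):
    """Brute-force perfect decider: does G contain a k-clique?"""
    return any(all(v in G[u] for u, v in combinations(c, 2))
               for c in combinations(list(G), k))

def find_clique(G, k):
    """Search <= Decision: peel off every vertex whose removal still
    leaves a k-clique; the surviving vertices form a k-clique."""
    if not D(G, k):
        return None
    G = {u: set(vs) for u, vs in G.items()}
    for v in list(G):
        H = {u: vs - {v} for u, vs in G.items() if u != v}
        if D(H, k):
            G = H
    return set(G)

# Example: a triangle plus a pendant vertex
G = {1: {2, 3}, 2: {1, 3}, 3: {1, 2, 4}, 4: {3}}
print(find_clique(G, 3))  # {1, 2, 3}
```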
S-to-D: Worst-Case vs Average Case
This talk: D is an average-case (imperfect) decider

   Pr[ D(A, As + e) = YES ] − Pr[ D(A, t) = YES ] = δ > 1/poly(n)

with t uniform over Z_q^m, and the probability taken over sampling the
instance and D's randomness
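To pin down what this advantage means operationally, a small hypothetical
helper (all names here are illustrative) estimates δ empirically for a
candidate decider:

```python
def advantage(D, real_sample, random_sample, trials=10_000):
    """Estimate delta = Pr[D(A, As+e) = YES] - Pr[D(A, t) = YES].

    D:             candidate decider returning True ("YES") or False
    real_sample:   callable returning a fresh LWE instance (A, b)
    random_sample: callable returning a fresh uniform instance (A, t)
    """
    p_real = sum(D(*real_sample()) for _ in range(trials)) / trials
    p_rand = sum(D(*random_sample()) for _ in range(trials)) / trials
    return p_real - p_rand
```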
Search-to-Decision reductions (S-to-D)
Why do we care?

Decision problems:
 - all LWE-based constructions rely on decisional LWE
 - security definitions have a strong indistinguishability flavor

Search problems:
 - their hardness is better understood
=> S-to-D reductions let us conclude: "Primitive Π is ABC-Secure assuming
   search problem P is hard"
Our results
• A toolset for studying Search-to-Decision reductions for LWE
  with polynomially bounded noise
  - subsumes and extends previously known reductions
  - reductions are, in addition, sample-preserving
• Powerful and usable criteria for establishing Search-to-Decision
  equivalence for general classes of knapsack functions
• Known techniques from Fourier analysis used in a new context;
  ideas potentially useful elsewhere
Bounded knapsack functions over groups
Parameters:
 - integer m
 - finite abelian group G
 - set S = {0, …, s−1} of integers, with s = poly(m)

(Random) knapsack family:
 - Sampling: pick weights g = (g_1, …, g_m) uniformly from G^m
 - Evaluation: f_g(x) = x_1·g_1 + … + x_m·g_m ∈ G, for inputs x ∈ S^m

Example: (random) modular subset sum, G = Z_N, S = {0, 1},
f_g(x) = Σ_i x_i·g_i (mod N)
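A minimal Python sketch of this family for the concrete choice G = Z_N
(matching the modular subset-sum example):

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_weights(m, N):
    """Sampling: pick g = (g_1, ..., g_m) uniformly from G^m, G = Z_N."""
    return rng.integers(0, N, size=m)

def f(g, x, N):
    """Evaluation: f_g(x) = x_1*g_1 + ... + x_m*g_m in Z_N."""
    return int(np.dot(g, x)) % N

# (random) modular subset sum: S = {0, 1}
m, N = 20, 1_000_003
g = sample_weights(m, N)
x = rng.integers(0, 2, size=m)   # input drawn from S^m
y = f(g, x, N)
```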
Knapsack functions: Computational problems
Invert (search)
  Input: g (public) and f_g(x)
  Goal: find x

Distinguish (decision)
  Input: samples from either (g, f_g(x)), x ← X, or (g, u), u uniform over G
  Goal: label the samples

Notation: K(G, X): family of knapsacks over G with input distribution X
over S^m

Glossary: if the decision problem is hard, the function is pseudorandom
(PsR); if the search problem is hard, the function is one-way
Search-to-Decision: Known results
Decision as hard as search when…

[Impagliazzo, Naor 89]: (random) modular subset sum
 - G = Z_N (a cyclic group), x uniform over {0,1}^m

[Fischer, Stern 96]: syndrome decoding
 - G = Z_2^n (a vector group), x uniform over all m-bit vectors with
   Hamming weight w
Our contribution: S-to-D for general knapsack
K(G, X): knapsack family with range G and input distribution X over S^m,
s = poly(m)

Main Theorem:
   K(G, X) one-way  +  K(G, U(S^m)) pseudorandom  =>  K(G, X) pseudorandom
   (U(S^m): the uniform distribution over S^m)
Our contribution: S-to-D for general knapsack
Main Theorem:
   K(G, X) one-way  +  K(G, U(S^m)) pseudorandom ✔  =>  K(G, X) pseudorandom

✔ Much less restrictive than it seems:
in most interesting cases this condition holds in a strong
information-theoretic sense
S-to-D for general knapsack: Examples
One-Way => PsR: subsumes [IN89, FS96] and more
 - any group G and any distribution over {0,1}^m
 - any group G with prime exponent and any distribution over S^m
 - and many more…
using known information-theoretic tools (LHL, entropy bounds, etc.)
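For instance, for binary inputs the pseudorandomness of the uniform-input
knapsack is a direct consequence of the leftover hash lemma: the family
f_g is universal, so a standard bound (a well-known fact, stated here for
illustration, not a line from the talk) gives

\[
\Delta\big( (g, f_g(x)),\; (g, u) \big) \;\le\; \tfrac{1}{2}\sqrt{|G| / 2^{m}},
\qquad g \leftarrow G^m,\; x \leftarrow \{0,1\}^m,\; u \leftarrow G,
\]

which is negligible as soon as m is slightly larger than log_2 |G|.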
Proof overview
Inverter
  Input: g, f_g(x)
  Goal: find x

Distinguisher
  Input: samples from either (g, f_g(x)) or (g, u), u uniform over G
  Goal: distinguish the two

Reminder: f_g(x) = Σ_i x_i·g_i ∈ G, with x ∈ S^m
Proof overview
Inverter  <=  Predictor  <=  Distinguisher

Inverter:       Input: g, f_g(x)       Goal: find x
Predictor:      Input: g, f_g(x), r    Goal: find x·r (mod t)
Distinguisher:  Input: samples         Goal: distinguish knapsack
                                             from uniform

• The approach is not (entirely) new [GL89, IN89]
• Making it work in a general setting requires more advanced machinery
  and new technical ideas
Proof Sketch: Step 1
Inverter                     Predictor
  Input: f, f(x)       <=      Input: f, f(x), r
  Goal: find x                 Goal: find x·r (mod t)

• This step holds for any function f with domain S^m
• It needs general conditions for inverting given noisy predictions of
  x·r (mod t), for possibly composite t:
  - Goldreich–Levin: t = 2
  - Goldreich, Rubinfeld, Sudan: t prime
Digression: Fourier Transform
Basis functions: χ_α(x) = ω^(x·α), where ω = e^(2πi/t) and x, α ∈ Z_t^m
Fourier representation: h(x) = Σ_α ĥ(α)·χ_α(x)
Fourier transform: ĥ(α) = E_x[ h(x)·χ_α(x)* ]
Digression: Learning heavy coefficients [AGS03]
Energy of a coefficient α: |ĥ(α)|²

[Figure: LHFC makes queries x_i and receives h(x_i); threshold τ]

LHFC is given query access to h. It outputs a list L such that:
 - if |ĥ(α)|² >= τ, then L will most likely include α
 - LHFC runs in time poly(m, 1/τ)
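The guarantee above is easy to render in code. The following brute-force
stand-in (exponential in m, for illustration only; the actual [AGS03]
learner achieves the stated poly(m, 1/τ) bound) computes every coefficient
and keeps the τ-heavy ones:

```python
import itertools
import numpy as np

def heavy_coefficients(h, m, t, tau):
    """Brute-force stand-in for LHFC: return all alpha in Z_t^m whose
    energy |h_hat(alpha)|^2 is at least tau."""
    omega = np.exp(2j * np.pi / t)
    points = list(itertools.product(range(t), repeat=m))
    heavy = []
    for alpha in points:
        # h_hat(alpha) = E_x[ h(x) * conj(chi_alpha(x)) ]
        coeff = np.mean([h(x) * omega ** (-sum(a * b for a, b in zip(x, alpha)))
                         for x in points])
        if abs(coeff) ** 2 >= tau:
            heavy.append(alpha)
    return heavy

# Example: h = chi_{alpha*} puts all its energy on alpha* = (1, 2)
t, m = 3, 2
alpha_star = (1, 2)
h = lambda x: np.exp(2j * np.pi / t) ** sum(a * b for a, b in zip(x, alpha_star))
print(heavy_coefficients(h, m, t, tau=0.5))  # [(1, 2)]
```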
Inverting <= Predicting (predictor)
Inverter
  Input: f, f(x)
  Goal: find x

Predictor
  Input: f, f(x), r
  Goal: guess x·r (mod t)

Quality of the predictor: error distribution e = guess − x·r (mod t),
with bias ε = E[ω^e]
 - perfect predictor: e is always 0 (bias 1)
 - imperfect (but useful): e biased towards 0 (bias noticeably > 0)
 - imperfect (and useless): e close to uniform (bias ≈ 0)
Inverting <= Predicting (main idea)
Theorem (Predictor => Inverter, for t >= s):
  Given a predictor with running time T_P and bias ε, there is an inverter
  with running time poly(m, 1/ε)·T_P and success probability c·ε.

Proof idea:
[Figure: on input (f, f(x)), the reduction wraps the predictor as a query
oracle h and feeds it to LHFC; x appears in LHFC's output list]
Key point: when the predictor's bias is ε, the unknown x has energy ε²
in the oracle h built from the predictor.
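Why bias becomes energy (a one-line calculation under the Fourier
conventions of the digression, added here to fill in the step): write the
predictor's answer on challenge r as P(r) = x·r + e(r) (mod t) and define
the oracle h(r) = ω^{P(r)}; then

\[
\hat{h}(x) \;=\; \mathbb{E}_r\big[\, \omega^{P(r)}\,\omega^{-x\cdot r} \,\big]
           \;=\; \mathbb{E}_r\big[\, \omega^{e(r)} \,\big] \;=\; \varepsilon,
\]

so x is a coefficient of energy ε², and running LHFC on h with threshold
τ ≈ ε² recovers it.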
Proof Sketch: Step 2
Predictor                         Distinguisher
  Input: g, f_g(x), r      <=       Input: samples
  Goal: find x·r (mod t)            Goal: distinguish

• This reduction is specific to knapsack functions
• The proof follows the outline of [IN89], but the technical details are
  very different
Predicting <= Distinguishing
Predictor
  Input: g, f_g(x), r
  Goal: guess x·r (mod t)

Distinguisher
  Input: samples
  Goal: distinguish knapsack from uniform

Proof idea:
• The predictor makes an initial guess for x·r (mod t)
• It uses the guess to create an input to the distinguisher
• If the distinguisher outputs "knapsack", the predictor outputs its guess
• Otherwise, it revises the guess

Key property:
• correct guess  =>  the crafted input is distributed like a knapsack
  instance
• incorrect guess  =>  the crafted input is "closer" to uniform

• [IN89] used the same idea for subset sum
• For arbitrary groups the proof is significantly more involved
Our results
• A toolset for studying Search-to-Decision reductions for LWE
  with polynomially bounded noise
  - subsumes and extends previously known reductions
  - reductions are, in addition, sample-preserving
• Powerful and usable criteria for establishing Search-to-Decision
  equivalence for general classes of knapsack functions
• Known techniques from Fourier analysis used in a new context;
  ideas potentially useful elsewhere
What about LWE?
From the LWE instance (A, b = As + e), with A ∈ Z_q^{m×n}, compute a
parity-check matrix G ∈ Z_q^{(m−n)×m} with G·A = 0 (mod q), and map

   (A, b = As + e)   =>   (G, G·b = G·e)

where g_1, g_2, …, g_m (the columns of G) are the knapsack weights.

• G is the parity-check matrix of the code generated by A
• The error e from LWE becomes the unknown input of the knapsack
• If A is "random", G is also "random"
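A minimal sketch of this transformation, assuming q prime (so Z_q is a
field and the parity-check matrix can be found by Gaussian elimination);
the helper below is illustrative, not the paper's exact construction:

```python
import numpy as np

def left_parity_check(A, q):
    """Return G with rows spanning {g : g A = 0 mod q}, for prime q,
    i.e. a parity-check matrix of the code generated by A's columns."""
    M = (A.T % q).astype(np.int64)          # n x m; solve M g = 0
    rows, cols = M.shape
    pivots, r = [], 0
    for c in range(cols):
        hit = next((i for i in range(r, rows) if M[i, c]), None)
        if hit is None:
            continue
        M[[r, hit]] = M[[hit, r]]
        M[r] = (M[r] * pow(int(M[r, c]), q - 2, q)) % q  # scale pivot to 1
        for i in range(rows):
            if i != r and M[i, c]:
                M[i] = (M[i] - M[i, c] * M[r]) % q
        pivots.append(c)
        r += 1
    free = [c for c in range(cols) if c not in pivots]
    G = np.zeros((len(free), cols), dtype=np.int64)
    for k, fc in enumerate(free):
        G[k, fc] = 1
        for i, pc in enumerate(pivots):
            G[k, pc] = (-M[i, fc]) % q
    return G

rng = np.random.default_rng(0)
n, m, q = 4, 10, 97
A = rng.integers(0, q, size=(m, n))
s = rng.integers(0, q, size=n)
e = rng.integers(-2, 3, size=m)              # small error
b = (A @ s + e) % q
G = left_parity_check(A, q)
assert ((G @ A) % q == 0).all()
assert ((G @ b) % q == (G @ e) % q).all()    # G b = G e: a knapsack in e
```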
What about LWE?
[Figure: the same transformation run in reverse, from a knapsack instance
(G, G·e) back to an LWE instance (A, As + e)]

The transformation works in the other direction as well.
Putting all the pieces together:

Search LWE:  (A, As + e)
   <=  Search knapsack:    (G, G·e)
   <=  Decision knapsack:  (G', G'·e)      [S-to-D for knapsack]
   <=  Decision LWE:       (A', A's' + e)
LWE Implications
LWE reductions follow from knapsack reductions over the vector groups
G = Z_q^{m−n}.

• All known Search-to-Decision results for LWE/LPN with bounded error
  [BFKL93, R05, ACPS09, KSS10] follow as a direct corollary
• Search-to-Decision for new instantiations of LWE
LWE: Sample Preserving S-to-D
All our reductions are, in addition, sample-preserving:

If we can distinguish m LWE samples from uniform with 1/poly(n) advantage,
then we can find s with 1/poly(n) probability given the same m LWE samples.

Caveat: the inverting probability goes down (this seems unavoidable).

Previous reductions:
   search with poly(m) samples (A, b)  <=  decision with m samples (A', b')
Why care about #samples?
• LWE-based schemes often expose a certain number of samples, say m
• With a sample-preserving S-to-D reduction, we can base their security
  on the hardness of search LWE with m samples
• Concrete algorithmic attacks against LWE [MR09, AG11] are sensitive
  to the number of exposed samples
  - for some parameters, LWE is completely broken by [AG11] if the
    number of given samples is above a certain threshold
Open problems
Sample-preserving reductions for:
1. LWE with unbounded noise
   - used in various settings [Pei09, GKPV10, BV11b, BPR11]
   - reductions are known [Pei09], but they are not sample-preserving
2. Ring-LWE
   - samples (a, a·s + e) where a, s, e are drawn from R = Z_q[x]/<f(x)>
   - non-sample-preserving reductions are known [LPR10]