Solving CVP in $2^n$ Time via Discrete Gaussian Sampling

Divesh Aggarwal, École Polytechnique Fédérale de Lausanne (EPFL)
Daniel Dadush, Centrum Wiskunde & Informatica (CWI)
Noah Stephens-Davidowitz, New York University (NYU)

Mathematics of Cryptography, Simons Institute, 2015


Lattices

A lattice $\mathcal{L} \subseteq \mathbb{R}^n$ is the set of all integral combinations of some basis $B = (b_1, \dots, b_n)$; $\mathcal{L}(B)$ denotes the lattice generated by $B$. Define $\|B\| = \max_i \|b_i\|$.

[Figure: a two-dimensional lattice with basis vectors $b_1, b_2$.]


Closest Vector Problem (CVP)

Given: lattice basis $B \in \mathbb{R}^{n \times n}$, target $t \in \mathbb{R}^n$.
Goal: compute $y \in \mathcal{L}(B)$ minimizing $\|t - y\|_2$.


Applications of SVP & CVP

Optimization: integer and linear programming
Number theory: factoring polynomials, number field sieve
Communication theory: decoding Gaussian channels
Database search: approximate nearest neighbor search
Cryptanalysis: RSA with small exponent, knapsack cryptosystems
Cryptography: lattice-based crypto (hardness of LWE / SIS)


Hardness of CVP

$\alpha$-SVP $\leq \alpha$-CVP (CVP is the "hardest" lattice problem).
As the approximation factor $\alpha$ grows from $1$:
- $\alpha \leq n^{c/\log\log n}$: NP-hard
- $\alpha = \sqrt{n/\log n}$: NP $\cap$ coAM
- $\alpha = \sqrt{n}$: NP $\cap$ coNP
- $\alpha = 2^{n \log\log n / \log n}$: P


Main Result

Method            | Apx          | Time          | Space         | Authors
Basis reduction   | $2^{O(n)}$   | poly$(n)$     | poly$(n)$     | LLL 83, Sch. 85, Bab. 86
Basis reduction   | $1$          | $n^{n/2}$     | poly$(n)$     | Kan. 87, ..., HS 08, MV 10
Randomized sieve  | $1+\epsilon$ | $2^{O(n)}$    | $2^{O(n)}$    | AKS 01, AKS 02, BN 07, ...
Voronoi cell      | $1$          | $2^{2n}$      | $2^n$         | SFS 09, MV 13
Discrete Gaussian | $1$          | $2^{n+o(n)}$  | $2^{n+o(n)}$  | ADS 15


Outline

1. Approximate CVP via (shifted) DGS sampling. Relation between the parameter and the approximation factor.
2. A shifted DGS sampler. Number of samples we can generate at desired parameters.
3. Sample clustering & recursion for exact CVP. Learning the coordinates of a closest vector.


Shifted Discrete Gaussian

$\rho_s(A) \equiv \sum_{y \in A} e^{-\pi \|y\|^2 / s^2}$.
$D_{\mathcal{L}-t,s}$ — the discrete Gaussian distribution over $\mathcal{L} - t$ with parameter $s$:
  $\Pr_{X \sim D_{\mathcal{L}-t,s}}[X = y] = \frac{\rho_s(y)}{\rho_s(\mathcal{L}-t)}$ for $y \in \mathcal{L} - t$.

[Figure: samples of the shifted discrete Gaussian at $s = 10$ and at $s = 4$.] The discrete Gaussian is more concentrated as the parameter decreases.
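The definitions above can be made concrete in tiny dimension. Below is a minimal Python sketch (my own illustrative helpers, not the talk's sampler) that computes the probability mass function of $D_{\mathcal{L}-t,s}$ by brute-force enumeration over a truncated coefficient window, and exhibits the concentration effect as $s$ shrinks.

```python
import itertools
import math

def rho(s, points):
    # Gaussian mass: rho_s(A) = sum_{y in A} exp(-pi * ||y||^2 / s^2)
    return sum(math.exp(-math.pi * sum(c * c for c in y) / s ** 2) for y in points)

def dgs_pmf(basis, t, s, radius=6):
    # Brute-force pmf of D_{L-t,s}: enumerate y = B*z - t over small integer z.
    # Only viable in tiny dimensions; the truncation is fine when s << radius.
    n = len(basis)
    pts = []
    for z in itertools.product(range(-radius, radius + 1), repeat=n):
        y = tuple(sum(zi * basis[i][j] for i, zi in enumerate(z)) - t[j]
                  for j in range(n))
        pts.append(y)
    total = rho(s, pts)
    return {y: math.exp(-math.pi * sum(c * c for c in y) / s ** 2) / total
            for y in pts}

# Z^2 shifted by t = (0.5, 0.5): shrinking s concentrates the mass on the
# four shortest vectors of L - t, namely (+-0.5, +-0.5).
wide = dgs_pmf([[1, 0], [0, 1]], (0.5, 0.5), s=4.0)
narrow = dgs_pmf([[1, 0], [0, 1]], (0.5, 0.5), s=1.0)
```

At $s = 1$ each of the four shortest shifted vectors already carries roughly a quarter of the mass, while at $s = 4$ the mass is spread over many lattice points.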
Discrete Gaussian and CVP

Closest vectors to $t$ in $\mathcal{L}$ correspond to shortest vectors in $\mathcal{L} - t$.
Question: can we hit closest vectors by sampling from $D_{\mathcal{L}-t,s}$ for small enough $s$?

Problem: there can be arbitrarily many approximate closest vectors! Let $d(\mathcal{L},t) = \min_{y \in \mathcal{L}} \|y - t\|$ denote the distance of $\mathcal{L}$ to $t$.

Problem: $D_{\mathcal{L}-t,s}$ has little chance of hitting a closest vector unless $s$ is tiny.

[Figure: target $t$ at distance $d(\mathcal{L},t)$ from $\mathcal{L}$, with many lattice points inside the ball of radius $(1+\epsilon)\,d(\mathcal{L},t)$ around $t$.]


Approximate CVP via DGS

Lemma: for $X \sim D_{\mathcal{L}-t,s}$ with $s \leq 2\,d(\mathcal{L},t)$,
  $\Pr\big[\|X\|^2 \geq d(\mathcal{L},t)^2 + q s^2\big] \leq e^{-q^2}$.
Note that $\sqrt{d^2 + q s^2} \leq d\big(1 + \frac{q s^2}{2 d^2}\big)$, writing $d = d(\mathcal{L},t)$.

So to get a $(1+\epsilon)$-approximate closest vector to $t$, it suffices to sample once from $D_{\mathcal{L}-t,s}$ for $s \leq \sqrt{\epsilon}\,d$.


Hermite-Korkine-Zolotarev Basis

For a basis $B = (b_1, \dots, b_n)$, define the projections $\pi_i \equiv$ orthogonal projection onto $\mathrm{span}(b_1, \dots, b_{i-1})^\perp$. Define the GSO $\tilde{B} = (\tilde{b}_1, \dots, \tilde{b}_n)$ of $B$ by $\tilde{b}_i = \pi_i(b_i)$ for $i \in [n]$.

$B$ is a Hermite-Korkine-Zolotarev (HKZ) basis for $\mathcal{L}$ if $\|\tilde{b}_i\| = \lambda_1(\pi_i(\mathcal{L}))$ for $i \in [n]$. Computable with $n$ calls to an SVP oracle.


Shifted DGS Sampler

Theorem: let $\mathcal{L}$ be an $n$-dimensional lattice, $t \in \mathbb{R}^n$, and $s \geq 2^{-n}\sqrt{\log n}\,\|B\|$. There is an algorithm which generates at least
  $\frac{\rho_s(\mathcal{L}-t)}{\max_{c \in \mathcal{L}/2\mathcal{L}} \rho_s(c-t)} \geq 1$
samples with joint distribution $\epsilon$-close to i.i.d. $D_{\mathcal{L}-t,s}$, in time $2^{n+o(n)}$, for any $\epsilon = 2^{-n^{O(1)}}$.

In particular, we can hit all "high weight" cosets in $\mathcal{L}/2\mathcal{L}$.

If $d(\mathcal{L},t) \geq \|B\|/\mathrm{poly}(n)$, this already gives a $(1 + 2^{-\Omega(n)})$-approximation!
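The "sample once at $s \leq \sqrt{\epsilon}\,d$" reduction above can be illustrated with a toy stand-in for the sampler (a brute-force sketch in dimension 2, nothing like the talk's $2^{n+o(n)}$ algorithm; the function name is my own): draw from the discrete Gaussian centered at $t$ and keep the best lattice vector seen.

```python
import itertools
import math
import random

def approx_cvp_by_dgs(basis, t, s, samples=200, radius=5, seed=0):
    # Enumerate candidate lattice vectors y = B*z over a small window,
    # weight each by rho_s(y - t), sample from the induced discrete
    # Gaussian, and keep the sampled vector closest to t.
    rng = random.Random(seed)
    n = len(basis)
    ys, weights = [], []
    for z in itertools.product(range(-radius, radius + 1), repeat=n):
        y = tuple(sum(zi * basis[i][j] for i, zi in enumerate(z))
                  for j in range(n))
        d2 = sum((y[j] - t[j]) ** 2 for j in range(n))
        ys.append(y)
        weights.append(math.exp(-math.pi * d2 / s ** 2))
    drawn = rng.choices(ys, weights=weights, k=samples)
    return min(drawn, key=lambda y: sum((y[j] - t[j]) ** 2 for j in range(n)))

# On Z^2 the closest vector to (0.2, 0.7) is (0, 1); at s = 0.8 a single
# draw hits it with large constant probability, so 200 draws all but surely do.
print(approx_cvp_by_dgs([[1, 0], [0, 1]], (0.2, 0.7), s=0.8))
```

The point of the lemma is that even one draw at a small enough parameter is a good approximate answer; taking the best of many draws only boosts the success probability.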
More importantly, the sampler will suffice to solve exact CVP.


Averaging Discrete Gaussians

Let $t_1, t_2 \in \mathbb{R}^n$, $t^+ = (t_1+t_2)/2$, $t^- = (t_1-t_2)/2$. Then for $X_1 \sim D_{\mathcal{L}+t_1,s}$, $X_2 \sim D_{\mathcal{L}+t_2,s}$, and $y \in \mathcal{L}+t^+$,
  $\Pr\Big[\frac{X_1+X_2}{2} = y\Big] = \frac{\rho_{s/\sqrt{2}}(y)\,\rho_{s/\sqrt{2}}(\mathcal{L}+t^-)}{\rho_s(\mathcal{L}+t_1)\,\rho_s(\mathcal{L}+t_2)}$.
Hence $(X_1+X_2)/2$, conditioned on landing in $\mathcal{L}+t^+$, is distributed as $D_{\mathcal{L}+t^+,\,s/\sqrt{2}}$.

Summing over $y \in \mathcal{L}+t^+$,
  $\Pr\Big[\frac{X_1+X_2}{2} \in \mathcal{L}+t^+\Big] = \frac{\rho_{s/\sqrt{2}}(\mathcal{L}+t^+)\,\rho_{s/\sqrt{2}}(\mathcal{L}+t^-)}{\rho_s(\mathcal{L}+t_1)\,\rho_s(\mathcal{L}+t_2)} = \frac{\sum_{c \in \mathcal{L}/2\mathcal{L}} \rho_s(c+t_1)\,\rho_s(c+t_2)}{\rho_s(\mathcal{L}+t_1)\,\rho_s(\mathcal{L}+t_2)}$.
With the above identity one can show amazing inequalities.


Shifted DGS Combiner

Input: $X_1, \dots, X_M$, i.i.d. samples from $D_{\mathcal{L}-t,s}$.
Output: $Y_1, \dots, Y_{\delta M}$, i.i.d. samples from $D_{\mathcal{L}-t,\,s/\sqrt{2}}$.

Initialization: apply an SVP solver to compute an HKZ basis $B$ for $\mathcal{L}$. Use the [GPV08, BLPRS13] sampler on $B$ to produce $D_{\mathcal{L}-t,s}$ samples at $s = \sqrt{n}\,\|B\|$ in polynomial time.

Meta procedure: repeat $\delta M$ times:
1. Sample $c \in \mathcal{L}/2\mathcal{L}$ with probability $\frac{\rho_s(c-t)^2}{\sum_{z \in \mathcal{L}/2\mathcal{L}} \rho_s(z-t)^2}$.
2. Pick unused $X_i, X_j \in c - t$, and return $(X_i + X_j)/2$.

Question: how big can $\delta$ be? At the very least we should not exhaust the supply in expectation: for each coset $c \in \mathcal{L}/2\mathcal{L}$, the procedure consumes about
  $\delta M \cdot \frac{2\,\rho_s(c-t)^2}{\sum_{z \in \mathcal{L}/2\mathcal{L}} \rho_s(z-t)^2}$
samples from $c - t$, while only about $M \cdot \frac{\rho_s(c-t)}{\rho_s(\mathcal{L}-t)}$ are available.
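The pointwise identity driving the combiner is elementary: $\rho_s(x_1)\,\rho_s(x_2) = \rho_{s/\sqrt{2}}\big(\frac{x_1+x_2}{2}\big)\,\rho_{s/\sqrt{2}}\big(\frac{x_1-x_2}{2}\big)$, by the parallelogram law. A quick numerical check (function names are my own):

```python
import math

def rho_pt(s, x):
    # Gaussian weight of a single point: exp(-pi * ||x||^2 / s^2)
    return math.exp(-math.pi * sum(c * c for c in x) / s ** 2)

def averaging_identity_holds(x1, x2, s):
    # rho_s(x1) * rho_s(x2)
    #   == rho_{s/sqrt2}((x1+x2)/2) * rho_{s/sqrt2}((x1-x2)/2),
    # because ||x1||^2 + ||x2||^2 = 2||(x1+x2)/2||^2 + 2||(x1-x2)/2||^2.
    u = [(a + b) / 2 for a, b in zip(x1, x2)]
    v = [(a - b) / 2 for a, b in zip(x1, x2)]
    lhs = rho_pt(s, x1) * rho_pt(s, x2)
    rhs = rho_pt(s / math.sqrt(2), u) * rho_pt(s / math.sqrt(2), v)
    return math.isclose(lhs, rhs, rel_tol=1e-9)

assert averaging_identity_holds([1.5, -2.0], [0.5, 3.0], s=2.0)
```

Summing this identity over all pairs in a fixed coset of $2\mathcal{L}$ is exactly what turns a pool of parameter-$s$ samples into parameter-$s/\sqrt{2}$ samples.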
The worst case is a coset $c^* \in \mathcal{L}/2\mathcal{L}$ maximizing $\rho_s(c^*-t)$. This gives
  $\delta \leq \frac{\sum_{z \in \mathcal{L}/2\mathcal{L}} \rho_s(z-t)^2}{2\,\rho_s(c^*-t)\,\rho_s(\mathcal{L}-t)} = \frac{\rho_{s/\sqrt{2}}(\mathcal{L})\,\rho_{s/\sqrt{2}}(\mathcal{L}-t)}{2\,\rho_s(c^*-t)\,\rho_s(\mathcal{L}-t)}$.


Shifted DGS Combiner

Let $s_i = s/2^{i/2}$, and let $c_i^* \in \mathcal{L}/2\mathcal{L}$ maximize $\rho_{s_i}(c_i^*-t)$.

Theorem: the loss after $k$ steps is $\prod_{i \in [k]} \frac{2\,\rho_{s_{i-1}}(c_{i-1}^*-t)\,\rho_{s_{i-1}}(\mathcal{L}-t)}{\rho_{s_i}(\mathcal{L})\,\rho_{s_i}(\mathcal{L}-t)}$. One needs at least about
  $2^{k+o(n)} \cdot \frac{\rho_{s_k}(\mathcal{L}-t)}{\rho_{s_k}(c_k^*-t)}$
initial samples to go $k$ steps.


Key Inequality

Let $s_i = s/2^{i/2}$, and let $c_i^* \in \mathcal{L}/2\mathcal{L}$ maximize $\rho_{s_i}(c_i^*-t)$.

Lemma: $\rho_{s_i}(c_i^*-t)^2 \leq \rho_{s_{i+1}}(c_{i+1}^*-t)\,\rho_{s_{i+1}}(\mathcal{L})$.

Proof: take $t_1 \in c_i^* - t$ and $t_2 = 0$, so that $c_i^* - t = 2\mathcal{L} + t_1$. Then
  $\rho_{s_i}(c_i^*-t)^2 = \rho_{s_i}(2\mathcal{L}+t_1)^2 = \rho_{s_{i+2}}(\mathcal{L}+t_1/2)^2 = \sum_{c \in \mathcal{L}/2\mathcal{L}} \rho_{s_{i+1}}(c+t_1)\,\rho_{s_{i+1}}(c) \leq \rho_{s_{i+1}}(c_{i+1}^*-t)\,\rho_{s_{i+1}}(\mathcal{L})$.


Hope for Exact CVP

From approximate CVP solutions we can try to learn subspaces that must contain the closest vector.

[Figure: target $t$ between two lattice subspaces.]


Clustering approx. closest vectors

Lemma: assume $x, y \in \mathcal{L}$, $x \equiv y \ (\mathrm{mod}\ 2\mathcal{L})$, are at distance at most $\sqrt{d(\mathcal{L},t)^2 + r^2}$ from $t$. Then $\|(x-y)/2\|^2 \leq r^2$.

Proof: since $(x+y)/2 \in \mathcal{L}$, we have $\|(x+y)/2 - t\| \geq d(\mathcal{L},t)$. Hence
  $\|(x-y)/2\|^2 = \frac{\|x-t\|^2}{2} + \frac{\|y-t\|^2}{2} - \Big\|\frac{x+y}{2} - t\Big\|^2 \leq \frac{d(\mathcal{L},t)^2 + r^2}{2} + \frac{d(\mathcal{L},t)^2 + r^2}{2} - d(\mathcal{L},t)^2 = r^2$.


How many closest vectors?

Corollary: for an $n$-dimensional lattice $\mathcal{L}$ and target $t \in \mathbb{R}^n$, there are at most $2^n$ closest vectors to $t$.

The bound is trivially tight: for $\mathcal{L} = \mathbb{Z}^2$ and $t = (1/2, 1/2)$, all four of $(0,0)$, $(1,0)$, $(0,1)$, $(1,1)$ are closest.
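The clustering lemma can be verified exhaustively in a small example. The sketch below (function name illustrative) checks, over a window of $\mathbb{Z}^2$, that any two lattice points in the same coset mod $2\mathbb{Z}^2$ that both lie within $\sqrt{d(\mathcal{L},t)^2 + r^2}$ of $t$ satisfy $\|(x-y)/2\|^2 \leq r^2$:

```python
import itertools

def clustering_lemma_holds(t, r, radius=4):
    # Exhaustive check of the clustering lemma on a window of Z^2:
    # if x = y (mod 2Z^2) and both are within sqrt(d^2 + r^2) of t,
    # then ||(x - y)/2||^2 <= r^2.
    pts = list(itertools.product(range(-radius, radius + 1), repeat=2))
    d2 = min((x - t[0]) ** 2 + (y - t[1]) ** 2 for x, y in pts)  # d(L, t)^2
    near = [p for p in pts
            if (p[0] - t[0]) ** 2 + (p[1] - t[1]) ** 2 <= d2 + r * r]
    for p, q in itertools.combinations(near, 2):
        same_coset = (p[0] - q[0]) % 2 == 0 and (p[1] - q[1]) % 2 == 0
        half_diff2 = ((p[0] - q[0]) / 2) ** 2 + ((p[1] - q[1]) / 2) ** 2
        if same_coset and half_diff2 > r * r + 1e-9:
            return False
    return True

assert clustering_lemma_holds((0.3, 0.4), r=1.5)
```

With $r = 0$ the check specializes to the corollary: distinct closest vectors must land in distinct cosets of $\mathcal{L}/2\mathcal{L}$.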
Proof: by the lemma (with $r = 0$), any two closest vectors in the same coset of $\mathcal{L}/2\mathcal{L}$ are equal (their distance is $0$). Furthermore, the number of distinct cosets is $2^n$.


Dimension Reduction via Clustering

Let $B = (b_1, \dots, b_n)$ be an HKZ basis of $\mathcal{L}$.

Lemma: assume $x, y \in \mathcal{L}$, $x \equiv y \ (\mathrm{mod}\ 2\mathcal{L})$, are at distance at most $\sqrt{d(\mathcal{L},t)^2 + r^2}$ from $t$. If $r < \|\tilde{b}_{n-p+1}\|$, then $x$ and $y$ have the same last $p$ coordinates w.r.t. $B$.

Proof: it suffices to show $\pi_{n-p+1}((x-y)/2) = 0$. If not, then $\pi_{n-p+1}((x-y)/2) \in \pi_{n-p+1}(\mathcal{L})$ is nonzero, yet
  $\|\pi_{n-p+1}((x-y)/2)\| \leq \|(x-y)/2\| \leq r < \|\tilde{b}_{n-p+1}\| = \lambda_1(\pi_{n-p+1}(\mathcal{L}))$,
a contradiction.


Exact CVP

Main idea: given an HKZ basis $b_1, \dots, b_n$ of $\mathcal{L}$, we will show that for $p$ chosen carefully, the last $p$ coordinates of any close enough vector to $t$ are determined by their parity. That is, for $x = \sum_{i \in [n]} a_i b_i \in \mathcal{L}$ close enough to $t$, the tuple $(a_{n-p+1}, \dots, a_n)$ is essentially determined by $(a_{n-p+1} \bmod 2, \dots, a_n \bmod 2)$.

We can group approximate closest vectors by their coefficients with respect to $b_{n-p+1}, \dots, b_n$. This indexes at most $\approx 2^p$ shifts of the $(n-p)$-dimensional sublattice $\mathcal{L}(b_1, \dots, b_{n-p})$, which we recurse on.

[Figure: shifts of a sublattice indexed by $a_2 = 0$ and $a_2 = 1$, with target $t$.]


High Level Algorithm

Input: $n$-dimensional lattice $\mathcal{L}$ and target $t$.
Output: the closest lattice vectors in $\mathcal{L}$ to $t$.
1. Compute an HKZ basis $B$ of $\mathcal{L}$, and the number $p$ of "high order coordinates".
2. Sample many approximate closest vectors via DGS.
3. Group them according to their last $p$ coordinates with respect to $B = (b_1, \dots, b_n)$ and recurse on the associated shifts of $\mathcal{L}(b_1, \dots, b_{n-p})$.


Complexity Sketch

Initialization (one shot, $2^{n+o(n)}$ time): compute a short basis $B$ of $\mathcal{L}$, and the number $p$ of "high order coordinates" (can be computed for each recursion level).
Per-level work ($2^{n+o(n)}$ time): sample many approximate closest vectors via DGS.
Recursion ($\approx 2^p$ subproblems of dimension $n-p$): group the samples according to their last $p$ coordinates with respect to $B$ and recurse.
Total runtime: $2^{n+o(n)}$.


Key Challenges

Runtime:
1. Getting many DGS samples at low parameters.
2.
Showing that the last $p$ coefficients are determined by their parity.
3. Dealing with $\approx 2^p$ subproblems in the recursion analysis.

Correctness: show that we hit the last $p$ coefficients of an exact closest vector with high probability (we will in fact show that we hit the exact parity).


Conclusions

1. Fastest algorithm for CVP: $2^{n+o(n)}$ time.
2. Explicitly / implicitly uses ideas from all known algorithm types: basis reduction, sieving, Voronoi cells.
3. The discrete Gaussian is a very powerful tool! Many of its properties are still poorly understood...


Open Problems

1. Is $2^n$ optimal under SETH? (It matches the maximum number of closest vectors.)
2. Is there a deterministic / Las Vegas algorithm? ($2^n$ time once the Voronoi cell is computed [BD 15].)
3. Find a simpler & cleaner algorithm...

[Figure: the Voronoi cell of a lattice, with relevant vectors $v_1, \dots, v_6$ around $0$.]

THANK YOU!