Solving SVP and CVP in 2^n time with Discrete Gaussian Sampling

Solving CVP in 2^n time via Discrete Gaussian Sampling
Divesh Aggarwal
École Polytechnique Fédérale de Lausanne (EPFL)
Daniel Dadush
Centrum Wiskunde & Informatica (CWI)
Noah Stephens-Davidowitz
New York University (NYU)
Mathematics of Cryptography
Simons Institute 2015
Lattices
A lattice β„’ βŠ† ℝ^n is the set of all integral combinations of some basis 𝐡 = (𝑏_1, …, 𝑏_n).
β„’(𝐡) denotes the lattice generated by 𝐡.
Define ‖𝐡‖ := max_i ‖𝑏_iβ€–.

[Figure: a 2-dimensional lattice β„’ with basis vectors 𝑏_1, 𝑏_2.]
Closest Vector Problem (CVP)
Given: Lattice basis 𝐡 ∈ β„š^{nΓ—n}, target 𝑑 ∈ β„š^n.
Goal: Compute 𝑦 ∈ β„’(𝐡) minimizing ‖𝑑 βˆ’ 𝑦‖_2.

[Figure: target 𝑑 and its closest lattice vector 𝑦 in β„’.]
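For concreteness, here is a toy brute-force CVP solver in Python (an illustrative sketch only, not an algorithm from this talk; the coefficient box radius R is an assumption that must be large enough for the instance):

    import itertools
    import numpy as np

    def cvp_bruteforce(B, t, R=3):
        # Exhaustive CVP: try all integer coefficient vectors in [-R, R]^n
        # (assumed large enough) and keep the nearest lattice point.
        # Runs in (2R+1)^n time -- for tiny examples only.
        n = B.shape[1]
        best, best_dist = None, float("inf")
        for a in itertools.product(range(-R, R + 1), repeat=n):
            y = B @ np.array(a)
            d = np.linalg.norm(t - y)
            if d < best_dist:
                best, best_dist = y, d
        return best, best_dist

    # Closest vector to t = (2.7, 1.1) in the lattice with basis (2,0), (1,2):
    B = np.array([[2.0, 1.0], [0.0, 2.0]])
    print(cvp_bruteforce(B, np.array([2.7, 1.1])))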
Applications of SVP & CVP
Optimization: Integer and Linear Programming
Number Theory: Factoring Polynomials, Number Field Sieve
Communication Theory: Decoding Gaussian Channels
Database Search: Approximate Nearest Neighbor Search
Cryptanalysis: RSA with Small Exponent, Knapsack Cryptosystems
Cryptography: Lattice-Based Crypto (hardness of LWE / SIS)
Hardness of CVP
𝛼-SVP ≀ 𝛼-CVP (CVP is the β€œhardest” lattice problem).

Complexity as a function of the approximation factor 𝛼:

    1 ≀ 𝛼 ≀ n^{c/log log n}:       NP-hard
    𝛼 β‰ˆ √(n/log n):               NP ∩ coAM
    𝛼 β‰ˆ √n:                       NP ∩ coNP
    𝛼 β‰ˆ 2^{c n log log n/log n}:   P
Main Result
Method             | Apx         | Time              | Space         | Authors
-------------------+-------------+-------------------+---------------+---------------------------------
Basis Reduction    | k^{O(n/k)}  | 2^{2k} poly(n)    | 2^k poly(n)   | LLL 83, Sch. 85, Bab. 86, MV 10
                   | 1           | n^{n/2}           | poly(n)       | LLL 83, Kan. 87, …, HS 08
Randomized Sieve   | 1 + πœ–       | 2^{O(n)} (1/πœ–)^n  | 2^{O(n)}      | AKS 01, AKS 02, BN 07, …
Voronoi Cell       | 1           | 2^{2n}            | 2^n           | SFS 09, MV 13
Discrete Gaussian  | 1           | 2^n               | 2^n           | ADS 15
Outline
1. Approximate CVP via (shifted) DGS sampling.
Relation between parameter and approx. factor.
2. A shifted DGS sampler.
Number of samples we can generate at desired
parameters.
3. Sample clustering & recursion for exact CVP.
Learning the coordinates of a closest vector.
Shifted Discrete Gaussian
πœŒπ‘  𝐴 ≔
π‘¦βˆˆπ΄ 𝑒
βˆ’πœ‹ 𝑦 𝑠 2
.
π·β„’βˆ’π‘‘,𝑠 ≔ discrete Gaussian distribution over
β„’ βˆ’ 𝑑 with parameter 𝑠,
Pr
π‘‹βˆΌπ·β„’βˆ’π‘‘,𝑠
𝑋=𝑦 =
πœŒπ‘  (𝑦)
for 𝑦
πœŒπ‘  (β„’βˆ’π‘‘)
∈ β„’ βˆ’ 𝑑.
Shifted Discrete Gaussian
[Figure: samples from the shifted discrete Gaussian at s = 10 vs. s = 4.]
The discrete Gaussian is more concentrated as the parameter decreases.
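To make the definition concrete, here is a toy exact sampler for D_{β„’βˆ’π‘‘,s} in Python (a sketch under the assumption that we may enumerate coset points with coefficients in a small box; real algorithms cannot afford this enumeration, and the mass outside the box is simply dropped):

    import itertools
    import numpy as np

    def sample_shifted_dgs(B, t, s, R=12, rng=None):
        # Toy sampler for D_{L-t, s}: enumerate the points of L - t with
        # coefficients in [-R, R]^n and pick one with probability
        # proportional to rho_s(y) = exp(-pi * ||y||^2 / s^2).
        # Exponential time; for illustration only.
        rng = rng or np.random.default_rng()
        n = B.shape[1]
        pts = np.array([B @ np.array(a) - t
                        for a in itertools.product(range(-R, R + 1), repeat=n)])
        w = np.exp(-np.pi * np.sum(pts**2, axis=1) / s**2)
        return pts[rng.choice(len(pts), p=w / w.sum())]

    # Smaller s concentrates the samples, as the figure above illustrates.
    t = np.array([0.4, 0.7])
    for s in (10.0, 4.0):
        print(s, [np.round(sample_shifted_dgs(np.eye(2), t, s), 1) for _ in range(3)])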
Discrete Gaussian and CVP
Closest vectors to 𝑑 in β„’ correspond to
shortest vectors in β„’ βˆ’ 𝑑.
Question: Can we hit closest vectors by sampling from D_{β„’βˆ’π‘‘,s} for small enough s?
Discrete Gaussian and CVP
Problem: Can have arbitrarily many approximate closest vectors!
Let d(β„’, 𝑑) := min_{yβˆˆβ„’} ‖𝑦 βˆ’ 𝑑‖ denote the distance of β„’ to 𝑑.

[Figure: balls of radius d(β„’,𝑑) and (1+πœ–)d(β„’,𝑑) around 𝑑; the annulus can contain many lattice points.]
Discrete Gaussian and CVP
Problem: Have little chance that D_{β„’βˆ’π‘‘,s} hits a closest vector unless s is tiny.
Approximate CVP via DGS
Lemma: For 𝑋 ∼ D_{β„’βˆ’π‘‘,s}, if d(β„’,𝑑) ≀ 2^n s, then

    Pr[‖𝑋‖² β‰₯ d(β„’,𝑑)Β² + sΒ²n] ≀ e^{βˆ’n}.

Note: √(dΒ² + sΒ²n) ≀ d(1 + s√n/d), where d = d(β„’,𝑑).

[Figure: right triangle with legs d and s√n and hypotenuse √(dΒ² + sΒ²n).]
Approximate CVP via DGS
To get a (1 + πœ–)-approximate closest vector to 𝑑, it suffices to sample once from D_{β„’βˆ’π‘‘,s} for s ≀ πœ– d / √n.

[Figure: ball of radius √(dΒ² + sΒ²n) ≀ (1 + πœ–) d around 𝑑.]
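Spelled out, the parameter choice works by a one-line calculation from the lemma's radius:

    s ≀ πœ– d / √n  ⟹  √(dΒ² + sΒ²n) ≀ √(dΒ² + πœ–Β²dΒ²) = d√(1 + πœ–Β²) ≀ d(1 + πœ–).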
Hermite-Korkine-Zolotarev Basis
For a basis 𝐡 = (𝑏_1, …, 𝑏_n), define the projections

    Ο€_i := orthogonal projection onto span(𝑏_1, …, 𝑏_{iβˆ’1})^βŠ₯.

Define the GSO 𝐡̃ = (𝑏̃_1, …, 𝑏̃_n) of 𝐡 by 𝑏̃_i = Ο€_i(𝑏_i) for i ∈ [n].
𝐡 is a Hermite-Korkine-Zolotarev (HKZ) basis for β„’ if ‖𝑏̃_iβ€– = Ξ»_1(Ο€_i(β„’)) for i ∈ [n].
Computable with n calls to an SVP oracle.
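As a small illustration, the GSO underlying these projections is a few lines of numpy (this computes only the vectors 𝑏̃_i; an actual HKZ basis additionally requires the n SVP-oracle calls mentioned above):

    import numpy as np

    def gram_schmidt(B):
        # Gram-Schmidt orthogonalization, no normalization.
        # Columns of the result are bt_i = pi_i(b_i): the component of b_i
        # orthogonal to span(b_1, ..., b_{i-1}).
        Bt = B.astype(float).copy()
        for i in range(B.shape[1]):
            for j in range(i):
                bj = Bt[:, j]
                Bt[:, i] -= (Bt[:, i] @ bj) / (bj @ bj) * bj
        return Bt

    B = np.array([[2.0, 1.0], [0.0, 2.0]])
    print(gram_schmidt(B))   # columns: bt_1 = b_1, bt_2 = pi_2(b_2)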
Shifted DGS Sampler
Theorem: Let β„’ be an n-dimensional lattice, 𝑑 ∈ ℝ^n, and s β‰₯ 2^{βˆ’o(n/log n)} ‖𝐡‖. There is an algorithm which generates at least

    ρ_s(β„’βˆ’π‘‘) / max_{cβˆˆβ„’/2β„’} ρ_s(cβˆ’π‘‘)  β‰₯  1

samples, with joint distribution πœ–-close to i.i.d. D_{β„’βˆ’π‘‘,s}, in time 2^{n+o(n)}, for any πœ– = 2^{βˆ’n^{O(1)}}.

In particular, the sampler can hit all β€œhigh weight” cosets in β„’/2β„’.

If d(β„’,𝑑) β‰₯ ‖𝐡‖/poly(n), this already gives a (1 + 2^{βˆ’o(n/log n)})-approximation!

More importantly, it will suffice to solve exact CVP.
Averaging Discrete Gaussians
Let 𝑑_1, 𝑑_2 ∈ ℝ^n, 𝑑^+ = (𝑑_1 + 𝑑_2)/2, 𝑑^βˆ’ = (𝑑_1 βˆ’ 𝑑_2)/2.
Then for 𝑋_1 ∼ D_{β„’+𝑑_1,s}, 𝑋_2 ∼ D_{β„’+𝑑_2,s}, and 𝑦 ∈ β„’ + 𝑑^+,

    Pr[(𝑋_1 + 𝑋_2)/2 = 𝑦] = ρ_{s/√2}(𝑦) ρ_{s/√2}(β„’+𝑑^βˆ’) / (ρ_s(β„’+𝑑_1) ρ_s(β„’+𝑑_2)).

Hence (𝑋_1 + 𝑋_2)/2 conditioned on landing in β„’ + 𝑑^+ is distributed as D_{β„’+𝑑^+, s/√2}.

Summing over 𝑦,

    Pr[(𝑋_1 + 𝑋_2)/2 ∈ β„’ + 𝑑^+]
        = ρ_{s/√2}(β„’+𝑑^+) ρ_{s/√2}(β„’+𝑑^βˆ’) / (ρ_s(β„’+𝑑_1) ρ_s(β„’+𝑑_2))
        = Ξ£_{cβˆˆβ„’/2β„’} ρ_s(c+𝑑_1) ρ_s(c+𝑑_2) / (ρ_s(β„’+𝑑_1) ρ_s(β„’+𝑑_2)).

With the above identities one can show amazing inequalities.
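A quick numerical sanity check of the coset identity, on the 1-dimensional lattice β„€ with truncated sums (purely illustrative; the window size and parameters are arbitrary choices):

    import numpy as np

    def rho(s, pts):
        # rho_s over a finite set of points (truncated sum).
        return np.exp(-np.pi * pts**2 / s**2).sum()

    s, t1, t2 = 1.3, 0.3, 0.55
    tp, tm = (t1 + t2) / 2, (t1 - t2) / 2
    Z = np.arange(-60, 61)        # truncated copy of the lattice Z

    # LHS: Pr[(X1+X2)/2 in Z + tp], i.e. X1, X2 land in the same coset of Z/2Z.
    denom = rho(s, Z + t1) * rho(s, Z + t2)
    lhs = sum(rho(s, 2 * Z + c + t1) * rho(s, 2 * Z + c + t2) for c in (0, 1)) / denom

    # RHS of the identity, at parameter s / sqrt(2).
    s2 = s / np.sqrt(2)
    rhs = rho(s2, Z + tp) * rho(s2, Z + tm) / denom

    print(lhs, rhs)               # agree up to truncation error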
Shifted DGS Combiner
Input: 𝑋_1, …, 𝑋_M i.i.d. samples from D_{β„’βˆ’π‘‘,s}.
Output: 𝑋′_1, …, 𝑋′_{LM} i.i.d. samples from D_{β„’βˆ’π‘‘,s/√2}.

Initialization:
Apply an SVP solver to compute an HKZ basis 𝐡 for β„’.
Use the [GPV08, BLPRS13] sampler on 𝐡 to produce D_{β„’βˆ’π‘‘,s} samples at s = Γ•(‖𝐡‖) in polytime.
Meta Procedure: repeat LM times:
1. Sample c ∈ β„’/2β„’ with probability ρ_s(c βˆ’ 𝑑)Β² / Ξ£_{zβˆˆβ„’/2β„’} ρ_s(z βˆ’ 𝑑)Β².
2. Pick unused 𝑋_i, 𝑋_j ∈ c βˆ’ 𝑑 and return (𝑋_i + 𝑋_j)/2.
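A schematic Python version of one combiner step (a sketch, not the paper's algorithm: it reads a point's coset in β„’/2β„’ off its coefficient vector mod 2, replaces the exact coset probabilities with empirical squared counts, and stopping at the first exhausted coset is a further simplification):

    import numpy as np
    from collections import defaultdict

    def combine_step(B, t, samples, rng=None):
        # One schematic combiner step: bucket samples of L - t by their
        # coset in L/2L, then repeatedly average a random same-coset pair.
        rng = rng or np.random.default_rng()
        Binv = np.linalg.inv(B)
        buckets = defaultdict(list)
        for x in samples:
            a = np.rint(Binv @ (x + t)).astype(int)   # x + t is a lattice point
            buckets[tuple(a % 2)].append(x)           # its coset in L/2L

        cosets = list(buckets)
        # Empirical stand-in for step 1: the count of samples in coset c
        # estimates rho_s(c - t), so squared counts estimate rho_s(c - t)^2.
        counts = np.array([len(buckets[c]) for c in cosets], dtype=float)
        probs = counts**2 / np.sum(counts**2)

        out = []
        while True:
            c = cosets[rng.choice(len(cosets), p=probs)]
            if len(buckets[c]) < 2:
                break                                 # supply exhausted
            x, y = buckets[c].pop(), buckets[c].pop()
            out.append((x + y) / 2)                   # stays in L - t
        return out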
Question: How big can L be? We should at least not exhaust the supply in expectation: for all c ∈ β„’/2β„’,

    LM Β· 2ρ_s(cβˆ’π‘‘)Β² / Ξ£_{zβˆˆβ„’/2β„’} ρ_s(zβˆ’π‘‘)Β²  ≀  (ρ_s(cβˆ’π‘‘) / ρ_s(β„’βˆ’π‘‘)) Β· M.

The worst case is c* ∈ β„’/2β„’ maximizing ρ_s(c* βˆ’ 𝑑), which gives

    L ≀ Ξ£_{zβˆˆβ„’/2β„’} ρ_s(zβˆ’π‘‘)Β² / (2 ρ_s(c*βˆ’π‘‘) ρ_s(β„’βˆ’π‘‘))
      = ρ_{s/√2}(β„’) ρ_{s/√2}(β„’βˆ’π‘‘) / (2 ρ_s(c*βˆ’π‘‘) ρ_s(β„’βˆ’π‘‘)).
Shifted DGS Combiner
Let s_k = s / 2^{k/2}, and let c*_k ∈ β„’/2β„’ maximize ρ_{s_k}(c*_k βˆ’ 𝑑).

Theorem: The loss after k steps satisfies

    ∏_{i∈[k]} ρ_{s_i}(β„’) ρ_{s_i}(β„’βˆ’π‘‘) / (2 ρ_{s_{iβˆ’1}}(c*_{iβˆ’1} βˆ’ 𝑑) ρ_{s_{iβˆ’1}}(β„’βˆ’π‘‘))  β‰₯  2^{βˆ’(n+k)} Β· ρ_{s_k}(β„’βˆ’π‘‘) / ρ_{s_k}(c*_k βˆ’ 𝑑).

Need at least 2^{n+k} initial samples to go k steps.
Key Inequality
Let s_k = s / 2^{k/2}, and let c*_k ∈ β„’/2β„’ maximize ρ_{s_k}(c*_k βˆ’ 𝑑).

Lemma: ρ_{s_k}(c*_k βˆ’ 𝑑)Β² ≀ ρ_{s_{k+1}}(c*_{k+1} βˆ’ 𝑑) ρ_{s_{k+1}}(β„’).

Proof: Take 𝑑_1 ∈ c*_k βˆ’ 𝑑 and 𝑑_2 = 0. Then

    ρ_{s_k}(c*_k βˆ’ 𝑑)Β² = ρ_{s_k}(2β„’ + 𝑑_1)Β² = ρ_{s_{k+2}}(β„’ + 𝑑_1/2)Β²
                      = Ξ£_{cβˆˆβ„’/2β„’} ρ_{s_{k+1}}(c + 𝑑_1) ρ_{s_{k+1}}(c)
                      ≀ ρ_{s_{k+1}}(c*_{k+1} βˆ’ 𝑑) ρ_{s_{k+1}}(β„’).
Hope for Exact CVP
From approximate CVP solutions, we can try to learn subspaces that must contain the closest vector.

[Figure: the lattice β„’ split into 2 lattice subspaces; the closest vector to 𝑑 lies in one of them.]
Clustering approx. closest vectors
Lemma: Assume that π‘₯, 𝑦 ∈ β„’, π‘₯ ≑ 𝑦 (mod 2β„’), are at distance at most √(d(β„’,𝑑)Β² + rΒ²) from 𝑑. Then β€–(π‘₯ βˆ’ 𝑦)/2β€–Β² ≀ rΒ².

[Figure: π‘₯, 𝑦, their midpoint (π‘₯+𝑦)/2, the target 𝑑, and the radii d(β„’,𝑑) and r.]

Proof: Since (π‘₯ + 𝑦)/2 ∈ β„’ (as π‘₯ ≑ 𝑦 mod 2β„’), we have β€–(π‘₯ + 𝑦)/2 βˆ’ 𝑑‖ β‰₯ d(β„’, 𝑑). By the parallelogram identity,

    β€–(π‘₯ βˆ’ 𝑦)/2β€–Β² = β€–π‘₯ βˆ’ 𝑑‖²/2 + ‖𝑦 βˆ’ 𝑑‖²/2 βˆ’ β€–(π‘₯ + 𝑦)/2 βˆ’ 𝑑‖²
                 ≀ (d(β„’,𝑑)Β² + rΒ²)/2 + (d(β„’,𝑑)Β² + rΒ²)/2 βˆ’ d(β„’,𝑑)Β²
                 = rΒ².
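The middle equality is just the parallelogram identity; a tiny numerical check (illustrative):

    import numpy as np

    x, y, t = np.random.default_rng(0).normal(size=(3, 4))   # any points in R^4
    lhs = np.sum(((x - y) / 2) ** 2)
    rhs = (np.sum((x - t) ** 2) / 2 + np.sum((y - t) ** 2) / 2
           - np.sum(((x + y) / 2 - t) ** 2))
    print(np.isclose(lhs, rhs))   # True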
How many closest vectors?
Corollary: For an n-dimensional lattice β„’ and target 𝑑 ∈ ℝ^n, there are at most 2^n closest vectors to 𝑑.

The bound is trivially tight: in β„€Β², the target (1/2, 1/2) has the four closest vectors (0,0), (1,0), (0,1), (1,1).

Proof: By the lemma (applied with r = 0), any two closest vectors in the same coset of β„’/2β„’ are equal (their distance is 0). Furthermore, the number of distinct cosets is 2^n.
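A tiny enumeration check of the tight example (illustrative):

    import itertools
    import numpy as np

    t = np.array([0.5, 0.5])
    pts = [np.array(a) for a in itertools.product(range(-2, 4), repeat=2)]
    dmin = min(np.linalg.norm(p - t) for p in pts)
    print([tuple(p) for p in pts
           if np.isclose(np.linalg.norm(p - t), dmin)])
    # -> the 4 = 2^2 vectors (0, 0), (0, 1), (1, 0), (1, 1)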
Dimension Reduction via Clustering
Let 𝐡 = (𝑏_1, …, 𝑏_n) be an HKZ basis of β„’.

Lemma: Assume that π‘₯, 𝑦 ∈ β„’, π‘₯ ≑ 𝑦 (mod 2β„’), are at distance at most √(d(β„’,𝑑)Β² + rΒ²) from 𝑑. Then if r < ‖𝑏̃_{nβˆ’k+1}β€–, π‘₯ and 𝑦 have the same last k coordinates w.r.t. 𝐡.

Proof: It suffices to show Ο€_{nβˆ’k+1}((π‘₯ βˆ’ 𝑦)/2) = 0. If not, then Ο€_{nβˆ’k+1}((π‘₯ βˆ’ 𝑦)/2) ∈ Ο€_{nβˆ’k+1}(β„’) is non-zero, yet

    β€–Ο€_{nβˆ’k+1}((π‘₯ βˆ’ 𝑦)/2)β€– ≀ β€–(π‘₯ βˆ’ 𝑦)/2β€– ≀ r < ‖𝑏̃_{nβˆ’k+1}β€– = Ξ»_1(Ο€_{nβˆ’k+1}(β„’)),

a contradiction.
Exact CVP
Main Idea: Given an HKZ basis 𝑏_1, …, 𝑏_n of β„’, we will show that for k chosen carefully, the last k coordinates of any close enough vector to 𝑑 are determined by their parity.

For π‘₯ = Ξ£_{i∈[n]} a_i 𝑏_i ∈ β„’ close enough to 𝑑, we will show that (a_{nβˆ’k+1}, …, a_n) is essentially determined by (a_{nβˆ’k+1} mod 2, …, a_n mod 2).
Exact CVP
Can group approx. closest vectors by their coefficients with respect to 𝑏_{nβˆ’k+1}, …, 𝑏_n. This indexes at most β‰ˆ 2^k shifts of the (n βˆ’ k)-dimensional sublattice β„’(𝑏_1, …, 𝑏_{nβˆ’k}), which we recurse on.

[Figure: lattice β„’ with basis 𝑏_1, 𝑏_2; the approximate closest vectors to 𝑑 split between the 2 lattice subspaces a_2 = 0 and a_2 = 1.]
High Level Algorithm
Input: 𝑛-dimensional lattice β„’ and target 𝑑.
Output: Closest lattice vectors in β„’ to 𝑑.
1. Compute an HKZ basis 𝐡 of β„’ and the number k of β€œhigh order coordinates”.
2. Sample many approximate closest vectors via DGS.
3. Group them according to their last k coordinates with respect to 𝐡 = (𝑏_1, …, 𝑏_n) and recurse on the associated shifts of β„’(𝑏_1, …, 𝑏_{nβˆ’k}); see the sketch below.
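A schematic skeleton of this recursion in Python (a sketch: sample_dgs, hkz_basis, and choose_k are assumed oracles standing in for the machinery above, and are not specified at this level of detail):

    import numpy as np

    def cvp_via_dgs(B, t, sample_dgs, hkz_basis, choose_k):
        # B: m x n basis (columns b_1..b_n); t: target in R^m.
        # sample_dgs(B, t): a nonempty batch of approximate closest vectors
        #   of L(B) to t, e.g. obtained via discrete Gaussian sampling.
        # hkz_basis(B): an HKZ basis of L(B).
        # choose_k(B): how many high-order coordinates to fix (1 <= k <= n).
        m, n = B.shape
        if n == 0:
            return np.zeros(m)
        B = hkz_basis(B)
        k = choose_k(B)
        Binv = np.linalg.pinv(B)          # left inverse (independent columns)

        # Group the samples by their last k coordinates w.r.t. B.
        tails = {tuple(np.rint(Binv @ x).astype(int)[n - k:])
                 for x in sample_dgs(B, t)}

        best, best_dist = None, float("inf")
        for tail in tails:
            # Fix the last k coordinates; recurse on the shifted sublattice
            # L(b_1, ..., b_{n-k}) with target t - shift.
            shift = B[:, n - k:] @ np.array(tail)
            y = shift + cvp_via_dgs(B[:, :n - k], t - shift,
                                    sample_dgs, hkz_basis, choose_k)
            d = np.linalg.norm(t - y)
            if d < best_dist:
                best, best_dist = y, d
        return best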
Complexity Sketch
Initialization: (one-shot, 2^{n+o(n)} time)
Compute a short basis 𝐡 of β„’ and the number k of β€œhigh order coordinates” (can be computed at each recursion level).

Per-level work: (2^{n+o(n)} time)
Sample many approximate closest vectors via DGS.

Recursion: (β‰ˆ 2^k subproblems of dimension n βˆ’ k)
Group them according to their last k coordinates with respect to 𝐡 and recurse.

Total runtime: 2^{n+o(n)}
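In recurrence form (per-level DGS cost 2^{n+o(n)}, β‰ˆ 2^k subproblems of dimension n βˆ’ k):

    T(n) ≀ 2^{n+o(n)} + 2^k Β· T(n βˆ’ k).

Unrolling, each level costs 2^{(coordinates already fixed)} Β· 2^{(remaining dimension)+o(n)} = 2^{n+o(n)}, and there are at most n levels, so T(n) = 2^{n+o(n)}.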
Key Challenges
Runtime:
1. Getting many DGS samples at low parameters.
2. Show the last k coefficients are β‰ˆ determined by their parity.
3. Deal with β‰ˆ 2^k subproblems in the recursion analysis.
Correctness:
Show that we hit last π‘˜ coeffs of an exact closest
vector with high probability.
(will show that we hit exact parity)
Conclusions
1. Fastest algorithm for CVP: 2^{n+o(n)} time.
2. Explicitly / implicitly uses ideas from all known algorithm types: basis reduction, sieving, Voronoi cell.
3. The discrete Gaussian is a very powerful tool!
Many of its properties are still poorly understood...
Open Problems
1. Is 2^n optimal under SETH? (It matches the number of closest vectors.)
2. Is there a deterministic / Las Vegas algorithm?
   (2^n time once the Voronoi cell is computed [BD 15])

[Figure: the Voronoi cell of a lattice β„’, with Voronoi-relevant vectors 𝑣_1, …, 𝑣_6 around 0.]

3. Find a simpler & cleaner algorithm…
THANK YOU!