
Search to Decision Reductions for Knapsacks and LWE

Presentation Transcript


  1. Search to Decision Reductions for Knapsacks and LWE Daniele Micciancio, Petros Mol UCSD Theory Seminar October 3, 2011

  2. Number Theoretic Cryptography • Number theory: standard source of hard problems • Factoring: given N = pq, find p (or q) (p, q: large primes) • Discrete Log: given g, g^x (mod p), find x • Very influential over the years • Extensively studied (algorithms & constructions) • Two major concerns • How do we know that a “random” N is hard to factor? • Algorithmic progress • subexponential (classical) algorithms • polynomial quantum algorithms

  3. Cryptography from Learning Problems • Public: integers n, q • Secret: s ∈ Z_q^n • Samples: pairs (a_i, <a_i, s> mod q) with a_i random • Finding s is easy: just need ~ n samples (solve the linear system, e.g. by Gaussian elimination)
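A minimal sketch (not from the slides; names and parameters are illustrative) of why the noiseless problem is easy: given about n samples (a_i, <a_i, s> mod q), Gaussian elimination over Z_q recovers s. It assumes q is prime, so nonzero entries are invertible, and that the sample matrix has full column rank.

```python
import random

def solve_mod_q(A, b, q):
    """Gaussian elimination over Z_q (q prime): solve A x = b (mod q)."""
    n = len(A[0])
    # Work on an augmented copy [A | b].
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    row = 0
    for col in range(n):
        # Find a pivot with a nonzero (hence invertible) entry in this column.
        piv = next((r for r in range(row, len(M)) if M[r][col] % q != 0), None)
        if piv is None:
            continue
        M[row], M[piv] = M[piv], M[row]
        inv = pow(M[row][col], -1, q)          # modular inverse (Python 3.8+)
        M[row] = [v * inv % q for v in M[row]]
        for r in range(len(M)):
            if r != row and M[r][col] % q != 0:
                f = M[r][col]
                M[r] = [(vr - f * vp) % q for vr, vp in zip(M[r], M[row])]
        row += 1
    return [M[i][n] for i in range(n)]          # assumes full column rank

q, n, m = 97, 4, 8
s = [random.randrange(q) for _ in range(n)]
A = [[random.randrange(q) for _ in range(n)] for _ in range(m)]
b = [sum(ai * si for ai, si in zip(row, s)) % q for row in A]   # no noise
assert solve_mod_q(A, b, q) == s
```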

  4. New Problem: Learning With Errors (LWE) • Public: integers n, q • Secret: s ∈ Z_q^n • Samples: pairs (a_i, <a_i, s> + e_i mod q), where e_i is a small error drawn from a known distribution • Finding s possibly much harder because of the noise • Compactly: b = A·s + e (mod q), where A ∈ Z_q^{m×n} is random and e is a small error vector
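A small illustrative sketch of generating such an instance b = A·s + e (mod q). The error below is uniform in a small interval purely for simplicity; the actual LWE error distribution (e.g. a discrete Gaussian) is different.

```python
import random

def lwe_sample(n, m, q, error_bound):
    """Return (A, b, s, e) with b = A s + e (mod q) and |e_i| <= error_bound."""
    s = [random.randrange(q) for _ in range(n)]
    A = [[random.randrange(q) for _ in range(n)] for _ in range(m)]
    e = [random.randint(-error_bound, error_bound) for _ in range(m)]
    b = [(sum(a * si for a, si in zip(row, s)) + ei) % q
         for row, ei in zip(A, e)]
    return A, b, s, e

A, b, s, e = lwe_sample(n=8, m=16, q=4093, error_bound=4)
# Each b_i equals <a_i, s> + e_i (mod q); the noise e_i is what blocks plain
# linear algebra and (conjecturally) makes recovering s hard.
```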

  5. The many interpretations of LWE • Algebraic: Solving noisy random linear equations • Geometric: Bounded Distance Decoding in Lattices • Learning Theory: Learning linear functions over Z_q under random “classification” noise • Coding theory: Decoding random linear q-ary codes

  6. LWE Background • Introduced by Regev [R05] • q = 2, Bernoulli noise -> Learning Parity with Noise (LPN) • Extremely successful in Cryptography • IND-CPA Public Key Encryption [Regev05] • Injective Trapdoor Functions / IND-CCA encryption [PW08] • Strongly Unforgeable Signatures [GPV08, CHKP10] • (Hierarchical) Identity Based Encryption [GPV08, CHKP10, ABB10] • Circular-Secure Encryption [ACPS09] • Leakage-Resilient Cryptography [AGV09, DGK+10, GKPV10] • (Fully) Homomorphic Encryption [GHV10, BV11]

  7. Why LWE? • Rich and intuitive structure • Linear operations over fields (or rings) • Certain homomorphic properties • Ability to embed a trapdoor • Wealth of cryptographic applications • Worst-case/average-case reduction • Solving LWE* (average-case problem) at least as hard as solving certain hard lattice problems in the worst case • High resistance to algorithmic attacks • no quantum attacks known • no subexponential algorithms known • *for a specific error distribution

  8. LWE: Search & Decision • Public parameters: n (size of the secret), m (# of samples), q (modulus), χ (error distribution) • Search: Given (A, b = A·s + e mod q); Goal: find s (or e) • Decision: Given (A, b); Goal: decide whether b = A·s + e or b is uniform
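For concreteness, a hedged sketch of the decision experiment: a candidate distinguisher is fed either an LWE pair (A, As + e) or a uniform pair (A, t), and its advantage is estimated by Monte Carlo. The coin-flipping distinguisher at the end is just a placeholder; all names and parameters are illustrative.

```python
import random

def advantage(distinguisher, n, m, q, error_bound, trials=2000):
    """Estimate Pr[D(A, As+e)=1] - Pr[D(A, uniform)=1] empirically."""
    hits_lwe = hits_unif = 0
    for _ in range(trials):
        A = [[random.randrange(q) for _ in range(n)] for _ in range(m)]
        s = [random.randrange(q) for _ in range(n)]
        e = [random.randint(-error_bound, error_bound) for _ in range(m)]
        b_lwe = [(sum(a * x for a, x in zip(row, s)) + ei) % q
                 for row, ei in zip(A, e)]
        b_unif = [random.randrange(q) for _ in range(m)]
        hits_lwe += distinguisher(A, b_lwe)
        hits_unif += distinguisher(A, b_unif)
    return (hits_lwe - hits_unif) / trials

# A placeholder distinguisher with (essentially) zero advantage.
print(advantage(lambda A, b: random.randrange(2), n=8, m=16, q=97, error_bound=2))
```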

  9. S-to-D: Worst-Case vs Average-Case • Clique (search): Given graph G and integer k, find a clique of size k in G, or say none exists. • Clique (decision): Given graph G and integer k, output YES if G has a clique of size k and NO otherwise. • Theorem: Search <= Decision • Proof: Assume decider D. • for v in V: if D(G \ v, k) = YES then G ← G \ v • return G (the remaining vertices form a k-clique, provided D(G, k) = YES initially) • Crucial: D is a worst-case (perfect) decider
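A runnable sketch of this peeling reduction (the brute-force decider and the example graph are invented here, for illustration only): vertices whose removal still leaves a k-clique are discarded one by one, and what remains is exactly a k-clique.

```python
from itertools import combinations

def has_k_clique(G, k):
    """Worst-case (perfect) decider; exponential brute force, for illustration."""
    return any(all(u in G[v] for u, v in combinations(S, 2))
               for S in combinations(G, k))

def find_k_clique(G, k):
    """Search <= Decision: peel off vertices whose removal keeps a k-clique."""
    if not has_k_clique(G, k):
        return None
    G = {v: set(nbrs) for v, nbrs in G.items()}   # work on a copy
    for v in list(G):
        without_v = {u: nbrs - {v} for u, nbrs in G.items() if u != v}
        if has_k_clique(without_v, k):
            G = without_v
    return set(G)                                  # remaining vertices = a k-clique

# Triangle {0,1,2} plus a pendant vertex 3.
G = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2}}
print(find_k_clique(G, 3))   # -> {0, 1, 2}
```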

  10. S-to-D: Worst-Case vs Average-Case • This talk: D is an average-case (imperfect) decider • Pr[D(A, As + e) = YES] - Pr[D(A, t) = YES] = δ > 1/poly(n), with t uniform • probability taken over sampling the instance and D’s randomness

  11. Search-to-Decision reductions (S-to-D) • Why do we care? • Decision problems: all LWE-based constructions rely on decisional LWE; strong indistinguishability flavor of security definitions • Search problems: their hardness is better understood

  12. Search-to-Decision reductions (S-to-D) • Why do we care? • Decision problems: all LWE-based constructions rely on decisional LWE; strong indistinguishability flavor of security definitions • Search problems: their hardness is better understood • S-to-D reductions let us claim: “Primitive Π is ABC-secure assuming search problem P is hard”

  13. Our results • Toolset for studying Search-to-Decision reductions for LWE with polynomially bounded noise. • Subsume and extend previously known ones • Reductions are in addition sample-preserving • Powerful and usable criteria to establish Search-to-Decision equivalence for general classes of knapsack functions • Use known techniques from Fourier analysis in a new context. Ideas potentially useful elsewhere

  14. Our results • Toolset for studying Search-to-Decision reductions for LWE with polynomially bounded noise. • Subsume and extend previously known ones • Reductions are in addition sample-preserving • Powerful and usable criteria to establish Search-to-Decision equivalence for general classes of knapsack functions • Use known techniques from Fourier analysis in a new context. Ideas potentially useful elsewhere

  15. Bounded knapsack functions over groups • Parameters: integer m; finite abelian group G; set S = {0, …, s - 1} of integers, s = poly(m) • (Random) Knapsack family: Sampling: pick g = (g_1, …, g_m) uniformly from G^m; Evaluation: f_g(x) = Σ_i x_i·g_i ∈ G, where x ∈ S^m • Example, (random) modular subset sum: G = Z_N, S = {0, 1}
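A toy sketch (all parameters and names illustrative) of such a family over the cyclic group G = Z_N: the public description is g ∈ Z_N^m, the input x comes from S^m, and the output is Σ_i x_i·g_i mod N. With s = 2 this is exactly random modular subset sum.

```python
import random

def sample_knapsack(m, N):
    """Sample the public description g = (g_1, ..., g_m), uniform over Z_N^m."""
    return [random.randrange(N) for _ in range(m)]

def evaluate(g, x, N):
    """f_g(x) = sum_i x_i * g_i in the group G = Z_N."""
    return sum(xi * gi for xi, gi in zip(x, g)) % N

m, N, s = 16, 2**31 - 1, 2                    # s = 2 -> modular subset sum
g = sample_knapsack(m, N)
x = [random.randrange(s) for _ in range(m)]   # input distribution: uniform over S^m
y = evaluate(g, x, N)
# Search problem: given (g, y), find x.  Decision problem: tell (g, y) apart
# from (g, u) with u uniform in Z_N.
```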

  16. Knapsack functions: Computational problems • χ: distribution over S^m; g: public • Invert (search): Input: (g, f_g(x)) with x ← χ; Goal: find x • Distinguish (decision): Input: samples from either (g, f_g(x)) or (g, u) with u uniform over G; Goal: label the samples • Notation: family of knapsacks over G with input distribution χ • Glossary: If the decision problem is hard, the function is pseudorandom (PsR); if the search problem is hard, the function is one-way

  17. Search-to-Decision: Known results • Decision as hard as search when… • [Impagliazzo, Naor 89]: (random) modular subset sum; cyclic group Z_N; input uniform over {0,1}^m • [Fischer, Stern 96]: syndrome decoding; vector group Z_2^n; input uniform over all m-bit vectors with Hamming weight w

  18. Our contribution: S-to-D for general knapsack • Setting: knapsack family with range G and input distribution χ over S^m, s = poly(m) • One-Way + PsR (of a related knapsack family) ⇒ PsR

  19. Our contribution: S-to-D for general knapsack • Setting: knapsack family with range G and input distribution χ over S^m, s = poly(m) • Main Theorem: One-Way + PsR (of a related knapsack family) ⇒ PsR

  20. Our contribution: S-to-D for general knapsack • Main Theorem: One-Way + PsR (of a related knapsack family) ⇒ PsR ✔ • The extra PsR condition is much less restrictive than it seems • In most interesting cases it holds in a strong information-theoretic sense

  21. S-to-D for general knapsack: Examples • One-Way ⇔ PsR holds for: • Any group G and any distribution over {0,1}^m (subsumes [IN89, FS96] and more) • Any group G with prime exponent and any distribution • And many more… using known information-theoretic tools (LHL, entropy bounds, etc.)

  22. Proof overview • Reminder: • Inverter: Input: (g, g·x); Goal: find x • Distinguisher: Input: (g, b); Goal: distinguish b = g·x from uniform

  23. Proof overview • Inverter <= Predictor <= Distinguisher • Inverter: Input: (g, g·x); Goal: find x • Predictor: Input: (g, g·x, r); Goal: find x·r (mod t) • Distinguisher: Input: (g, b); Goal: distinguish • Approach not (entirely) new [GL89, IN89] • Making it work in a general setting requires more advanced machinery and new technical ideas

  24. Proof Sketch: Step 1 • Inverter <= Predictor • Inverter: Input: (g, g·x); Goal: find x • Predictor: Input: (g, g·x, r); Goal: find x·r (mod t)

  25. Proof Sketch: Step 1 • Inverter <= Predictor • Inverter: Input: (f, f(x)); Goal: find x • Predictor: Input: (f, f(x), r); Goal: find x·r (mod t) • This step holds for any function with domain S^m • General conditions for inverting given noisy predictions of x·r (mod t), for possibly composite t • Goldreich–Levin: t = 2 • Goldreich, Rubinfeld, Sudan: t prime

  26. Digression: Fourier Transform • Functions h: Z_t^m → C • Basis functions: χ_α(x) = ω^<α, x>, where ω = e^(2πi/t) • Fourier representation: h(x) = Σ_α ĥ(α)·χ_α(x) • Fourier transform: ĥ(α) = E_x[h(x)·χ_α(x)*]
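A brute-force sketch of these definitions on a tiny domain (everything here is illustrative; realistic parameters would make this enumeration infeasible). It computes ĥ(α) = E_x[h(x)·conj(χ_α(x))] directly and checks that only the planted coefficients show up.

```python
import cmath
from itertools import product

def chi(alpha, x, t):
    """Character chi_alpha(x) = omega^{<alpha, x>} with omega = e^(2*pi*i/t)."""
    return cmath.exp(2j * cmath.pi * sum(a * b for a, b in zip(alpha, x)) / t)

def fourier_coefficient(h, alpha, t, m):
    """hat{h}(alpha) = E_x[ h(x) * conj(chi_alpha(x)) ], averaged over Z_t^m."""
    pts = list(product(range(t), repeat=m))
    return sum(h(x) * chi(alpha, x, t).conjugate() for x in pts) / len(pts)

# Sanity check of the Fourier representation on a tiny example (t = 3, m = 2).
t, m = 3, 2
h = lambda x: chi((1, 2), x, t) + 0.5 * chi((0, 1), x, t)
for alpha in product(range(t), repeat=m):
    c = fourier_coefficient(h, alpha, t, m)
    if abs(c) > 1e-9:
        print(alpha, round(c.real, 3))   # -> (0, 1): 0.5 and (1, 2): 1.0
```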

  27. Digression: Learning heavy Fourier coefficients [AGS03] • Energy of a coefficient α: |ĥ(α)|² • LHFC is given query access to h (query x_i, receive h(x_i)). It outputs a list L s.t.: • if the energy of α is at least τ, then L will most likely include α • LHFC runs in poly(m, 1/τ)

  28. Inverting <= Predicting (predictor) • Inverter: Input: (f, f(x)); Goal: find x • Predictor: Input: (f, f(x), r); Goal: guess x·r (mod t) • Quality of Predictor: measured by the error distribution e = guess - x·r (mod t) and its bias • (Slide figure: three example error distributions, labeled perfect, imperfect (but useful), and imperfect (and useless))

  29. Inverting <= Predicting (main idea) • Theorem: a predictor (t >= s) with running time T_P and bias ε gives an inverter with running time poly(m, 1/ε)·T_P and success probability c·ε • Proof idea: the inverter, on input (f, f(x)), uses the predictor to simulate query access to a function h and runs LHFC on it

  30. Inverting <= Predicting (main idea) • Theorem: a predictor (t >= s) with running time T_P and bias ε gives an inverter with running time poly(m, 1/ε)·T_P and success probability c·ε • Proof idea: the inverter uses the predictor to define a function h to which LHFC gets query access • When the predictor’s bias is ε, the unknown x has energy (roughly) ε² as a Fourier coefficient of h, so LHFC can find it
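A toy, brute-force illustration of this step (tiny parameters, not the actual polynomial-time reduction; the 0.6-correct predictor is invented for the demo). Given a predictor P(r) for <x, r> mod t, define h(r) = ω^P(r); the secret x then shows up as a heavy Fourier coefficient of h, and on this small domain we can simply take the heaviest one. The real reduction replaces the exhaustive maximization with the LHFC learner.

```python
import cmath
import random
from itertools import product

t, m = 3, 4                         # tiny parameters so brute force is feasible
omega = cmath.exp(2j * cmath.pi / t)
x_secret = tuple(random.randrange(t) for _ in range(m))

def predictor(r):
    """Noisy predictor for <x, r> mod t: correct with prob. 0.6, random otherwise."""
    if random.random() < 0.6:
        return sum(a * b for a, b in zip(x_secret, r)) % t
    return random.randrange(t)

def chi(alpha, r):
    return omega ** (sum(a * b for a, b in zip(alpha, r)) % t)

# h(r) = omega^{P(r)}.  Its Fourier coefficient at alpha = x has expected
# magnitude equal to the predictor's bias, so x is a "heavy" coefficient.
domain = list(product(range(t), repeat=m))
h = {r: omega ** predictor(r) for r in domain}

def coeff(alpha):
    return sum(h[r] * chi(alpha, r).conjugate() for r in domain) / len(domain)

recovered = max(domain, key=lambda alpha: abs(coeff(alpha)))
print(x_secret, recovered)          # usually identical for this noise level
```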

  31. Proof Sketch: Step 2 • Predictor <= Distinguisher • Predictor: Input: (g, g·x, r); Goal: find x·r (mod t) • Distinguisher: Input: (g, b); Goal: distinguish • Reduction specific to knapsack functions • Proof follows the outline of [IN89], but the technical details are very different

  32. Predicting <= Distinguishing • Predictor: Input: (g, g·x, r); Goal: guess x·r (mod t) • Distinguisher: Input: (g, b); Goal: distinguish • Proof Idea: • Predictor makes an initial guess for x·r (mod t) • Uses the guess to create an input to the distinguisher • If the distinguisher outputs “knapsack”, the predictor outputs its guess • Otherwise, it revises the guess • Key Property: • Correct guess ⇒ the crafted input is distributed like a knapsack sample • Incorrect guess ⇒ the crafted input is “closer” to uniform • [IN89] used the same idea for subset sum • For arbitrary groups the proof is significantly more involved

  33. Our results • Toolset for studying Search-to-Decision reductions for LWE with polynomially bounded noise. • Subsume and extend previously known ones • Reductions are in addition sample-preserving • Powerful and usable criteria to establish Search-to-Decision equivalence for general classes of knapsack functions • Use known techniques from Fourier analysis in a new context. Ideas potentially useful elsewhere

  34. What about LWE? • Transform an LWE instance (A, b = A·s + e) into a knapsack instance (G, G·b) = (G, G·e): • G is the parity check matrix for the code generated by A (so G·A = 0 mod q) • The error e from LWE becomes the unknown input of the knapsack • If A is “random”, G is also “random”
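A hedged sketch of this transformation with toy parameters (q prime, small dimensions; the helper names are illustrative): compute a parity-check matrix G with G·A = 0 (mod q), so that G·b = G·(A·s + e) = G·e, turning the LWE sample into a knapsack sample whose unknown input is e.

```python
import random

q, n, m = 97, 3, 6                 # q prime; tiny sizes for illustration

def rref_mod_q(M, q):
    """Reduced row echelon form over Z_q (q prime); returns (R, pivot_columns)."""
    R = [row[:] for row in M]
    pivots, r = [], 0
    for c in range(len(R[0])):
        piv = next((i for i in range(r, len(R)) if R[i][c] % q), None)
        if piv is None:
            continue
        R[r], R[piv] = R[piv], R[r]
        inv = pow(R[r][c], -1, q)
        R[r] = [v * inv % q for v in R[r]]
        for i in range(len(R)):
            if i != r and R[i][c] % q:
                f = R[i][c]
                R[i] = [(vi - f * vr) % q for vi, vr in zip(R[i], R[r])]
        pivots.append(c)
        r += 1
    return R, pivots

def left_kernel(A, q):
    """Rows of the returned matrix G satisfy G * A = 0 (mod q)."""
    At = [[A[i][j] for i in range(len(A))] for j in range(len(A[0]))]  # n x m
    R, pivots = rref_mod_q(At, q)
    free = [c for c in range(len(A)) if c not in pivots]
    G = []
    for fcol in free:                          # one kernel vector per free column
        v = [0] * len(A)
        v[fcol] = 1
        for r, pcol in enumerate(pivots):
            v[pcol] = (-R[r][fcol]) % q
        G.append(v)
    return G

s = [random.randrange(q) for _ in range(n)]
A = [[random.randrange(q) for _ in range(n)] for _ in range(m)]
e = [random.randint(-2, 2) for _ in range(m)]
b = [(sum(a * x for a, x in zip(row, s)) + ei) % q for row, ei in zip(A, e)]

G = left_kernel(A, q)                          # (m - n) x m when A has full rank
Gb = [sum(g * v for g, v in zip(row, b)) % q for row in G]
Ge = [sum(g * v for g, v in zip(row, e)) % q for row in G]
assert Gb == Ge                                # G*b = G*(As + e) = G*e (mod q)
```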

  35. What about LWE? • The transformation works in the other direction as well • Putting all the pieces together: Search LWE (A, As + e) <= Search knapsack (G, Ge) <= Decision knapsack (G’, G’e) <= Decision LWE (A’, A’s’ + e), where the middle step is the S-to-D reduction for knapsacks

  36. LWE Implications • LWE reductions follow from knapsack reductions over the vector groups Z_q^k • All known Search-to-Decision results for LWE/LPN with bounded error [BFKL93, R05, ACPS09, KSS10] follow as a direct corollary • Search-to-Decision for new instantiations of LWE

  37. LWE: Sample Preserving S-to-D • All reductions are in addition sample-preserving: if we can distinguish m LWE samples from uniform with 1/poly(n) advantage, we can find s with 1/poly(n) probability given m LWE samples • Caveat: the inverting probability goes down (seems unavoidable) • Previous reductions: a decision distinguisher for m samples yields a search algorithm that needs poly(m) samples

  38. Why care about #samples? • LWE-based schemes often expose a certain number of samples, say m • With sample-preserving S-to-D we can base their security on the hardness of search LWE with m samples • Concrete algorithmic attacks against LWE [MR09, AG11] are sensitive to the number of exposed samples • For some parameters, LWE is completely broken by [AG11] if the number of given samples is above a certain threshold

  39. Open problems • Sample-preserving reductions for: • 1. LWE with unbounded noise • Used in various settings [Pei09, GKPV10, BV11b, BPR11] • Reductions are known [Pei09] but are not sample-preserving • 2. Ring-LWE • Samples (a, a·s + e) where a, s, e are drawn from R = Zq[x]/<f(x)> • Non-sample-preserving reductions are known [LPR10]
