
Rafael Pass Cornell University








  1. Limits of Provable Security From Standard Assumptions Rafael Pass, Cornell University

  2. Modern Cryptography • Precisely define the security goal (e.g., secure encryption) • Precisely stipulate a computational intractability assumption (e.g., hardness of factoring) • Security reduction: prove that any attacker A that breaks the security of scheme π can be used to violate the intractability assumption.

  3. A Celebrated Example: Commitments from OWFs [Naor,HILL] • Task: Commitment scheme • Binding + hiding • Non-interactive • Intractability assumption: existence of a OWF f • f is easy to compute but hard to invert • Security reduction [Naor,HILL]: there exist Com_f and a PPT R s.t. for every algorithm A that breaks hiding of Com_f, R^A inverts f • Reduction R only uses the attacker A as a black box; i.e., R is a Turing reduction.
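The commitment scheme from OWFs mentioned above can be illustrated with a toy sketch of Naor's PRG-based bit commitment. Everything here is a stand-in for illustration: SHA-256 plays the role of a PRG stretching an N-byte seed to 3N bytes, and the parameter sizes are far too small for real use.

```python
import hashlib
import secrets

N = 16  # seed length in bytes; 3N-byte PRG output, as in Naor's scheme

def prg(seed: bytes) -> bytes:
    """Toy PRG stretching an N-byte seed to 3N bytes (SHA-256 stand-in)."""
    out = b""
    counter = 0
    while len(out) < 3 * N:
        out += hashlib.sha256(seed + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[: 3 * N]

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

# Receiver's first message: a random 3N-byte string r.
r = secrets.token_bytes(3 * N)

def commit(bit: int, r: bytes):
    """Commit to a bit: send PRG(s) if bit == 0, else PRG(s) XOR r."""
    s = secrets.token_bytes(N)
    c = prg(s) if bit == 0 else xor(prg(s), r)
    return c, s  # c is the commitment, s is the opening

def verify(bit: int, r: bytes, c: bytes, s: bytes) -> bool:
    """Check that (bit, s) is a valid opening of commitment c."""
    expected = prg(s) if bit == 0 else xor(prg(s), r)
    return c == expected

c, s = commit(1, r)
assert verify(1, r, c, s)
```

Hiding follows from the PRG's pseudorandomness; binding holds statistically because a commitment openable to both bits would force PRG(s0) XOR PRG(s1) = r, which almost no r satisfies.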

  4. Turing Reductions (Figure: the challenger C sends f(r); the reduction R^A answers with a preimage r.) Security reduction: R^A breaks C whenever A breaks hiding. Reduction R may rewind and restart A.

  5. Provable Security • In the last three decades, many amazing tasks have been securely realized under well-studied intractability assumptions • Key exchange, public-key encryption, secure computation, zero-knowledge, PIR, secure voting, identity-based encryption, fully homomorphic encryption, leakage-resilient encryption… • But: several tasks/schemes have resisted security reductions under well-studied intractability assumptions.

  6. Schnorr’s Identification Scheme [Sch’89] • One of the most famous and widely employed identification schemes (e.g., the Blackberry router protocol) • Secure under a passive “eavesdropper” attack based on the discrete logarithm assumption • What about active attacks? • [BP’02] proved it secure under a new type of “one-more” inversion assumption • Can we base security on more standard assumptions?
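One run of Schnorr's identification protocol can be sketched as follows. The group parameters are a toy example (the order-11 subgroup of Z_23^*, generator 2) chosen only so the arithmetic is visible; any real instantiation uses cryptographically large groups.

```python
import secrets

# Toy parameters: subgroup of prime order q = 11 in Z_23^*, generator g = 2.
p, q, g = 23, 11, 2

x = secrets.randbelow(q - 1) + 1       # prover's secret key
y = pow(g, x, p)                        # public key y = g^x mod p

# One protocol run.
r = secrets.randbelow(q)                # prover's randomness
a = pow(g, r, p)                        # commitment a = g^r
c = secrets.randbelow(q)                # verifier's random challenge
z = (r + c * x) % q                     # prover's response z = r + c*x mod q

# Verifier accepts iff g^z == a * y^c (mod p).
assert pow(g, z, p) == (a * pow(y, c, p)) % p
```

The check passes because g^z = g^(r + c·x) = g^r · (g^x)^c = a · y^c mod p; a passive eavesdropper learns only (a, c, z), which can be simulated without x.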

  7. Commitment Schemes under Selective Opening [DNRS’99] • A commits to n values v1, …, vn • B adaptively asks A to open up, say, half of them • Security: the unopened commitments remain hidden • The problem originated in the distributed computing literature over 25 years ago • Can we base selective opening security of non-interactive commitments on any standard assumption?

  8. One-More Inversion Assumptions [BNPS’02] • You get n target points y1, …, yn in a group G with generator g • Can you find the discrete logarithms of all n of them if you may make n-1 queries to a discrete-logarithm oracle (for G and g)? • The one-more DLOG assumption states that no PPT algorithm can succeed with non-negligible probability • [BNPS] and follow-up work: very useful for proving security of practical schemes • Can the one-more DLOG assumption be based on more standard assumptions? • What if we weaken the assumption and only give the attacker n^eps queries?

  9. Unique Non-interactive Blind Signatures [Chaum’82] • A signature scheme where a user U may ask the signer S to sign a message m, while keeping m hidden from S • Furthermore, there exists only a single valid signature per message • Chaum provided a first implementation in 1982; very useful in, e.g., e-cash • [BNPS] give a proof of security in the Random Oracle Model based on a one-more RSA assumption • Can we base security of Chaum’s scheme, or any other unique blind signature scheme, on any standard assumption?
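Chaum's RSA-based blinding idea can be sketched with textbook RSA (the classic toy parameters n = 3233, e = 17, d = 2753, far too small for real use; a real scheme also hashes m first). The signer sees only the blinded value, yet the unblinded result is the unique RSA signature m^d.

```python
import math
import secrets

# Textbook RSA parameters (toy-sized): n = 61 * 53, e*d ≡ 1 (mod 3120).
n, e, d = 3233, 17, 2753

def blind(m):
    """User: blind m as m * r^e mod n for a random invertible r."""
    while True:
        r = secrets.randbelow(n - 2) + 2
        if math.gcd(r, n) == 1:
            break
    return (m * pow(r, e, n)) % n, r

def sign(blinded):
    """Signer: a plain RSA signature on the blinded value (m is hidden)."""
    return pow(blinded, d, n)

def unblind(blind_sig, r):
    """User: strip the blinding factor; (m * r^e)^d = m^d * r mod n."""
    return (blind_sig * pow(r, -1, n)) % n

def verify(m, sig):
    return pow(sig, e, n) == m % n

m = 42
blinded, r = blind(m)
sig = unblind(sign(blinded), r)
assert verify(m, sig)
assert sig == pow(m, d, n)  # uniqueness: the result is exactly m^d
```

The final assertion is the "unique" property the slide emphasizes: whatever blinding factor the user picks, the unblinded signature is always the same value m^d.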

  10. Sequential Witness Hiding of O(1)-round public-coin protocols • Take any of the classic O(1)-round public-coin ZK protocols (e.g., GMW, Blum) • Repeat them in parallel to get negligible soundness error. • Do they suddenly leak the witness to the statement proved? [Feige’90] • Sequential WH: No verifier can recover the witness after sequentially participating in polynomially many proofs. • Can sequential WH of those protocols be based on any standard assumption?

  11. Main Result • For a general class of intractability assumptions, there do NOT exist Turing security reductions for demonstrating security of any of those schemes/tasks/assumptions • Any such security reduction R must itself constitute an attack on the assumption

  12. Intractability Assumptions • Following [Naor’03], we model an intractability assumption as an interaction between a challenger C and an attacker A • The goal of A is to make C accept • C may be computationally unbounded (different from [Naor’03], [GW’11]) • The only restriction is that the number of communication rounds is an a-priori bounded polynomial • Intractability assumption (C,t): “no PPT attacker can make C output 1 w.p. significantly above t” • E.g., 2-round: f is a OWF, Factoring, G is a PRG, DDH, … O(1)-round: Enc is semantically secure (FHE), (P,V) is WH; O(1)-round with unbounded C: (P,V) is sound
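The 2-round case of this challenger model can be sketched concretely: C sends f(r) for a random r and accepts iff the attacker returns a preimage. Here f is a truncated SHA-256, standing in for a one-way function, and the threshold t is essentially 0.

```python
import hashlib
import secrets

def f(x: bytes) -> bytes:
    """Stand-in one-way function: SHA-256 truncated to 8 bytes."""
    return hashlib.sha256(x).digest()[:8]

def challenger(attacker, trials=50):
    """Play the 2-round game (C, t) repeatedly; return the success rate."""
    wins = 0
    for _ in range(trials):
        r = secrets.token_bytes(16)
        guess = attacker(f(r))      # the single message from C to A
        if f(guess) == f(r):        # C outputs 1 iff guess is a preimage
            wins += 1
    return wins / trials

# The assumption "f is one-way" says no PPT attacker pushes this rate
# significantly above t; random guessing fails essentially always.
rate = challenger(lambda y: secrets.token_bytes(16))
```

Multi-round assumptions (witness hiding, soundness with an unbounded C) fit the same shape, with `challenger` and `attacker` exchanging more messages per trial.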

  13. Main Theorem Let (C,t) be a k(·)-round intractability assumption where k is a polynomial. If there exists a PPT reduction R for basing the security of any of the previously mentioned schemes on the hardness of (C,t), then there exists a PPT attacker B that breaks (C,t). Note: the restriction that C is bounded-round is necessary; otherwise we would include the assumptions that the schemes themselves are secure!

  14. Related Work • Several earlier lower bounds: • One-more inversion assumptions [BMV’08] • Selective opening [BHY’09] • Witness hiding [P’06,HRS’09,PTV’10] • Blind signatures [FS’10] • But they only consider restricted types of reductions (a la [FF’93,BT’04]), or (restricted types of) black-box constructions (a la [IR’88]) • The only exceptions, [P’06,PTV’10], provide conditional lower bounds on constructions of certain types of WH proofs based on OWFs • Our result applies to ANY Turing security reduction, and also to non-black-box constructions.

  15. Proof Outline • Sequential Witness Hiding is “complete” • A positive answer to any of the questions implies the existence of a “special” O(1)-round sequential WH proof/argument for a language with unique witnesses. • Sequential WH of “special” O(1)-round proofs/arguments for languages with unique witnesses cannot be based on poly-round intractability assumptions using a Turing reduction.

  16. Special-sound proofs [CDS,Bl] (Figure: two parallel executions for the statement “X is true”, sharing the first message a, with random challenges b0, b1 ←R {0,1}^n and responses c0, c1.) From two accepting transcripts (a, b0, c0) and (a, b1, c1) with b0 ≠ b1, one can extract a witness w. • Relaxations (generalized special-soundness): • multiple rounds • computationally sound protocols (a.k.a. arguments) • need p(n) transcripts (instead of just 2) to extract w
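Special-soundness can be made concrete with Schnorr's proof of knowledge of a discrete log, using the same toy group as before: given two accepting transcripts that share the first message but have distinct challenges, the witness falls out by linear algebra mod q.

```python
import secrets

p, q, g = 23, 11, 2        # toy order-11 subgroup of Z_23^*
w = 7                       # the (unique) witness for the statement x = g^w
x = pow(g, w, p)

# Two accepting transcripts sharing the first message a = g^r,
# with distinct challenges c0 != c1.
r = secrets.randbelow(q)
a = pow(g, r, p)
c0, c1 = 3, 9
z0 = (r + c0 * w) % q
z1 = (r + c1 * w) % q

def extract(c0, z0, c1, z1):
    """Special-soundness extractor: w = (z0 - z1) / (c0 - c1) mod q."""
    return ((z0 - z1) * pow(c0 - c1, -1, q)) % q

assert extract(c0, z0, c1, z1) == w
```

This is exactly the tool the Main Lemma's reduction-attacker relies on: obtain two suitable transcripts (by rewinding) and the unique witness is recovered in PPT.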

  17. Main Lemma • Let (C,t) be a k(·)-round intractability assumption where k is a polynomial. Let (P,V) be an O(1)-round generalized special-sound proof for a language L with unique witnesses. • If there exists a PPT reduction R for basing sequential WH of (P,V) on the hardness of (C,t), then there exists a PPT attacker B that breaks (C,t).

  18. Proof Idea (Figure: the challenger C sends f(r); the reduction R^A answers with r.) Assume R^A breaks C whenever A completely recovers the witness of any statement x of which it hears sufficiently many sequential proofs. Goal: emulate in PPT a successful A’ for R, and thus break C in PPT (the idea goes back to the [BV’99] “meta-reduction”, and even earlier to [Bra’79]).

  19. Proof Idea (Figure: R proves a statement x to its oracle, which returns the witness w, while R interacts with C.) Assume R^A breaks C whenever A breaks sequential WH of some special-sound proof for a language with unique witnesses. • Assume the reduction R is “nice” [BMV’08,HRS’09,FS’10]: • it only asks a single query to its oracle (or asks queries sequentially) • Then, simply “rewind” R, feeding it a new “challenge”, and extract the witness. The unique-witness requirement is crucial to ensure we emulate a good oracle A’.
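The rewind-and-extract step for a "nice" reduction can be sketched as follows. The `Reduction` class is a hypothetical stand-in for R playing the prover in a single Schnorr proof; the meta-reduction's emulated oracle reruns R's response function on a second challenge (the "rewind") and applies the special-soundness extractor.

```python
import secrets

p, q, g = 23, 11, 2   # toy order-11 subgroup of Z_23^*

class Reduction:
    """Hypothetical 'nice' reduction R: it proves knowledge of the unique
    witness w for x = g^w via Schnorr, querying its oracle once."""
    def __init__(self):
        self.w = secrets.randbelow(q)
        self.x = pow(g, self.w, p)
        self.r = secrets.randbelow(q)

    def first_message(self):
        return self.x, pow(g, self.r, p)   # statement x and commitment g^r

    def respond(self, c):
        return (self.r + c * self.w) % q   # deterministic given c: rewindable

def emulated_oracle(R):
    """Meta-reduction step: rewind R to two distinct challenges and use
    special-soundness to recover the unique witness in PPT."""
    x, a = R.first_message()
    c0, c1 = 1, 2
    z0 = R.respond(c0)
    z1 = R.respond(c1)                     # second call = 'rewinding' R
    return ((z0 - z1) * pow(c0 - c1, -1, q)) % q

R = Reduction()
assert emulated_oracle(R) == R.w
```

Because the witness is unique, the extracted value is exactly what a real all-powerful A would have returned, so R cannot tell the emulation apart; the later slides handle reductions that are not "nice" (nesting, rewinding the oracle, intertwining with C).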

  20. General Reductions: Problem I (Figure: nested oracle sessions for x1, x2, x3 returning w1, w2, w3; rewinding here redoes the work of the nested sessions.) Problem: R might nest its oracle calls; “naïve extraction” then requires exponential time (cf. concurrent ZK [DNS’99]). Solution: If we require R to provide many sequential proofs, then we can recursively find one proof whose nesting depth is “small”. Uses techniques reminiscent of concurrent ZK a la [RK’99], [CPS’10].

  21. General Reductions: Problem II Problem: R might not only nest its oracle calls, but may also rewind its oracle; special-soundness might no longer hold under such rewindings. Solution: Pick the oracle’s messages using a hash function. Uses techniques reminiscent of the black-box ZK lower bound of [GK’90], [P’06]. The O(1)-round restriction on (P,V) is crucial here.

  22. General Reductions: Problem III (Figure: an oracle session for x, returning w, interleaved with R’s interaction with C.) Problem: Oracle calls may be intertwined with the interaction with C. Solution: If we require R to provide many sequential proofs, then at least one proof is guaranteed not to intertwine.

  23. In Sum • Several “classic” cryptographic tasks/schemes---which are believed to be secure---cannot be proven secure (using a Turing reduction) based on “standard” intractability assumptions. • We establish a connection between lower bounds for security reductions and concurrent security.

  24. • The GOOD: provably secure under standard assumptions • The ANNOYING: not broken, not provably secure* …but very efficient  • The BAD: broken

  25. Ways Around It? • Super-polynomial security reductions: • Basing security on “super-poly” intractability assumptions • Possible to overcome some, but not all, lower bounds • Full characterization in the paper • Non-black-box security reductions: • Allow R to look at the code of A • Our lower bounds do NOT apply • Possible to overcome the Main Lemma [B’01,PR’06] • PPT Turing security reductions provide stronger security guarantees: any attacker---even if I don’t know the description of his brain---with reproducible behavior can be efficiently used to break the challenge • New types of assumptions? Instead of intractability, tractability [W’10]? “Knowledge” assumptions? Hard to “falsify” [Naor’03]

  26. Thank You
