
Impossibility Results for Concurrent Two-Party Computation



Presentation Transcript


  1. Impossibility Results for Concurrent Two-Party Computation Yehuda Lindell IBM T.J.Watson

  2. A Basic Problem of Cryptography: Secure Computation • A set of parties with private inputs. • Parties wish to jointly compute a function of their inputs so that (amongst other things): • Privacy: each party receives its output and nothing else. • Correctness: the output is correctly computed. • These properties must be ensured even if some of the parties maliciously attack the protocol.

  3. Security Requirements • Consider a secure auction (with secret bids): • An adversary may wish to learn the bids of all parties – to prevent this, require PRIVACY • An adversary may wish to win with a lower bid than the highest – to prevent this, require CORRECTNESS • But, the adversary may also wish to ensure that it always gives the highest bid – to prevent this, require INDEPENDENCE OF INPUTS

  4. Defining Security • Option 1: • Analyze security concerns for each specific problem • Auctions: as in previous slide • Elections: privacy and correctness only (?) • Problems: • How do we know that all concerns are covered? • Definitions are application dependent (no general results, need to redefine each time).

  5. Defining Security – Option 2 • The real/ideal model paradigm: • Ideal model: parties send inputs to a trusted party, who computes the function and sends the outputs. • Real model: parties run a real protocol with no trusted help. • Informally: a protocol is secure if any attack on a real protocol can be carried out in the ideal model. • Since no attacks can be carried out in the ideal model, security is implied.
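The ideal model described above can be sketched in a few lines of Python. The trusted party and the millionaires' function below are toy illustrations of the paradigm, not part of any real protocol; all names are made up for this sketch.

```python
# Toy sketch of the ideal model: parties hand their private inputs to an
# incorruptible trusted party, which computes f and returns the outputs.
# Names here are illustrative only.

def trusted_party(f, input_a, input_b):
    """Computes f on the parties' inputs and returns each party's output."""
    return f(input_a, input_b)

def millionaires(x, y):
    """Yao's millionaires' problem: both parties learn only whether x > y."""
    result = x > y
    return result, result  # both parties receive the same output

out_a, out_b = trusted_party(millionaires, 7, 10)  # the bids themselves stay hidden
```

In the ideal model the only information either party sees is its own output, which is why "any attack that can be carried out in the ideal model" is a harmless attack.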

  6. The Security Definition [Figure: REAL model – adversary A engages in the protocol interaction; IDEAL model – adversary S interacts with the trusted party; the "?" asks whether the two are indistinguishable]

  7. “Formal” Security Definition • A protocol π securely computes a function f if: • For every real-model adversary A, there exists an ideal-model adversary S, such that • the result of a real execution of π with A is indistinguishable from the result of an ideal execution with S (where the trusted party computes f).
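In the notation commonly used for this paradigm (with x, y the parties' inputs, z the adversary's auxiliary input, and n the security parameter), the requirement reads: for every PPT adversary A there exists a PPT simulator S such that the two output ensembles are computationally indistinguishable:

```latex
\left\{ \mathrm{IDEAL}_{f,\mathcal{S}(z)}(x,y,n) \right\}_{x,y,z,n}
\;\stackrel{c}{\equiv}\;
\left\{ \mathrm{REAL}_{\pi,\mathcal{A}(z)}(x,y,n) \right\}_{x,y,z,n}
```

Here each ensemble contains the joint output of the adversary and the honest party, which is what the two "requirements" on later slides unpack.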

  8. Why This Definition? • General – it captures ALL applications. • The specifics of an application are defined by its functionality, security is defined as above. • The security guarantees achieved are easily understood (because the ideal model is easily understood). • We can be confident that we did not “miss” any security requirements.

  9. Remark • All the results presented here are according to this definitional paradigm.

  10. Proving Security of Protocols • “REQUIREMENTS”: • The output of the ideal-model adversary must have the same distribution as the output of the real-model adversary. • The output of the honest party in the ideal model (with the ideal adversary) must have the same distribution as the output of the honest party in the real model (with the real adversary).

  11. Proving Security of Protocols • The ideal-model adversary’s output must be like that of the real-model adversary: • Internally invoke the real adversary, simulate a real execution, output whatever the real adversary does. • The honest party’s output must be the “same” in the real and ideal executions: • In the above simulation, “extract” the input used by the real adversary and send it to the trusted party.

  12. Proving Security of Protocols • Given a real-model adversary, construct an ideal-model adversary that does the following: • Internally invoke the real-model adversary • Simulate a real execution for the real adversary • Extract the input used by the real adversary, and send it to the trusted party • Obtain the output from the trusted party and cause the simulated real execution to terminate with this output.
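The four steps above can be made concrete with a runnable toy, under a deliberately trivial assumption: the "protocol" is a single round in which the corrupted party's first message *is* its input, so extraction is immediate. In real protocols, extraction is the hard part; this sketch only shows the structure, and every name in it is hypothetical.

```python
# Toy illustration of the generic ideal-model simulator's four steps.
# The "protocol" is one round: the corrupted party sends its input in the
# clear and expects the output back. All names are illustrative.

def real_adversary(x_malicious):
    """The real-model adversary, invoked internally by the simulator (Step 1)."""
    def receive_output(out):            # the adversary's view of the final round
        return ("adversary-view", x_malicious, out)
    first_message = x_malicious         # sent in the clear in this toy protocol
    return first_message, receive_output

def ideal_simulator(adversary, trusted_f, honest_input):
    msg, continuation = adversary                  # Step 2: simulate the execution
    extracted = msg                                # Step 3: extract the input used
    output = trusted_f(extracted, honest_input)    # Step 4a: query the trusted party
    return continuation(output)                    # Step 4b: end the simulation with that output

view = ideal_simulator(real_adversary(5), lambda x, y: max(x, y), honest_input=9)
```

The point of the construction is that the adversary's view inside the simulation is distributed like its view in a real execution, while the honest party's output comes from the trusted party.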

  13. The Stand-Alone Model [Figure: Alice and Bob] One set of parties executing a single protocol in isolation.

  14. Feasibility for Stand-Alone • Any multi-party functionality can be securely computed: • honest majority: information theoretic [BGW88,CCD88,RB89] • no honest majority: assuming trapdoor permutations [Y86,GMW87] • That is: any distributed task can be carried out securely!

  15. Stand-Alone Computation? • This setting does not realistically model the security concerns of modern networks. • A more realistic model:

  16. The Concurrent Model [Figure: Alice and Bob among many parties] Many parties running many protocol executions.

  17. Composition vs Stand-Alone • Security in the stand-alone setting does not imply security under composition. • Therefore, the feasibility results of the late 80’s do not apply. • Conclusion: the question of feasibility for secure computation needs to be re-examined for the setting of protocol composition.

  18. Protocol Composition - Overview • A taxonomy of composition • 4 parameters: • The context • The participating parties • The scheduling • The number of executions

  19. The Context • What else is happening in the network (or with which protocols should the secure protocol compose): • Self Composition: many executions of a single protocol (the protocol runs with itself – e.g., ZK) • General Composition: the secure protocol runs together with arbitrary other protocols (e.g., UC) • Crucial difference regarding network control

  20. The Participating Parties • Who is running the executions: • A single set of parties: same set of parties (and often same roles – e.g., ZK). • Arbitrary sets of parties: possibly different and intersecting sets.

  21. The Scheduling • The order of executions: • Sequential • Parallel • Concurrent • Hybrid type: concurrent with timing

  22. Number of Executions • Standard notion: • Unbounded Concurrency: the number of secure protocol executions can be any polynomial • More restricted notion: • Bounded Concurrency: a priori bound on the number of executions (and protocol can rely on this bound).

  23. Classifying Some Known Work • Concurrent zero-knowledge [DNS98]: • Model of concurrent self composition with a single set of parties • Feasibility with arbitrary scheduling [RK99] • Much work on the round complexity of black-box and non-black-box zero-knowledge • Universal composability [Ca01]: • UC is a stringent security definition that guarantees security under concurrent general composition with arbitrary sets of parties.

  24. Universal Composability-Feasibility • Positive Results - Any multi-party functionality can be securely computed: • honest majority: no setup assumptions [Ca01] • no honest majority: in the common reference string model (and assuming trapdoor permutations) [CLOS02] • Negative Results: • Impossibility for specific zero-knowledge and commitment functionalities without setup assumptions [CF01,Ca01]

  25. Remark • Security definitions vs composition operations: • UC security implies security under concurrent general composition • UC security = security definition • Concurrent general composition = composition operation • Sometimes can be the same (by defining security directly by the desired composition operation). • For UC (and other cases), it is not.

  26. Fundamental Questions • What functionalities can and cannot be UC computed without setup assumptions, in the no honest majority case? • Are the impossibility results for commitment and zero-knowledge (and possibly others) due to quirks of the UC definition, or are they inherent to concurrent general composition? • What about other definitions and other settings of concurrency? That is, can some type of concurrent two-party computation (e.g., self composition) be achieved without setup assumptions?

  27. Feasibility No honest majority and no setup:

  28. Our Results

  29. Feasibility of UC • Question 1: • What functionalities can and cannot be UC computed without setup assumptions, in the no honest majority case?

  30. Feasibility of UC • Setting:no honest majority and no trusted setup phase. • We focus on the important two-party case. • Recall: UC zero-knowledge and commitment already ruled out (but for specific definition of these functionalities).

  31. Impossibility Results • Example: consider deterministic two-party functions where both parties receive the same output: • Such a function can be UC computed IF AND ONLY IF it depends on only one party’s input and is efficiently invertible. • Therefore, Yao’s millionaires’ problem cannot be UC computed. • We also have broad results for general functions (where parties may receive different outputs) and for probabilistic functionalities.
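The first condition in this characterization, dependence on only one party's input, can be checked by brute force over a tiny toy domain; the sketch below does exactly that (it ignores the second condition, efficient invertibility, and all names are illustrative):

```python
# Brute-force check, over a small domain, of whether a deterministic
# same-output function depends on only one party's input. Efficient
# invertibility (the other condition) is not checked here.

DOMAIN = range(4)

def depends_only_on_one_input(f):
    only_y = all(f(x, y) == f(0, y) for x in DOMAIN for y in DOMAIN)
    only_x = all(f(x, y) == f(x, 0) for x in DOMAIN for y in DOMAIN)
    return only_x or only_y

def millionaires(x, y):
    return x > y            # depends on both inputs

def copy_of_x(x, y):
    return x                # depends on x only, and trivially invertible
```

Under the stated characterization, `copy_of_x` passes both conditions, while `millionaires` already fails the first one.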

  32. Definition: Secure Computation • Recall the ideal/real model simulation paradigm: • Ideal model: parties send inputs to a trusted party, who computes the function and sends the outputs. • Real model: parties run the protocol with no trusted help. • Informally: a protocol is secure if any attack on a real protocol can be carried out in the ideal model.

  33. UC Definition • Introduces an additional adversarial entity called the environment Z. • Z provides inputs to the parties, reads their outputs and interacts with the adversary throughout the execution. • Z represents the external environment, and acts as an interactive distinguisher.

  34. UC Real Model [Figure: the environment writes the parties’ inputs and reads their outputs, interacts arbitrarily with the adversary, and the parties carry out the protocol interaction]

  35. UC Ideal Model [Figure: as above, except that the parties send their inputs to the trusted party]

  36. UC Security [Figure: the environment must be unable to distinguish the REAL protocol interaction from the IDEAL execution with the trusted party]

  37. UC Definition – Remarks • The real-model and ideal-model adversaries interact with the environment in the same way. • This interaction is on-line: • The adversary cannot rewind the environment • The adversary does not have access to the environment’s code (i.e., access is black-box) • This property is essential to the proof of the UC composition theorem, and to our impossibility results.

  38. Key Observation • Our impossibility results are derived from the following observation: • In the plain model and with no honest majority, the UC ideal-model simulator has no advantage over a real adversary. • In other words, whatever the simulator can do (e.g., extraction), a real adversary can also do.

  39. Proving the Observation • IDEA: What happens if the environment just plays the role of an honest party in the protocol? • Environment runs code of honest party P1. • All messages are forwarded by the adversary between the environment and honest party P2. • Otherwise, adversary does nothing.
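The forwarding behavior just described can be sketched with plain message queues: the environment internally runs honest P1's code, and the "dummy" adversary merely relays messages between the environment and the real honest party P2. The lists and names below are illustrative modelling choices, not any real framework.

```python
# Sketch of the dummy adversary from the observation above: it forwards
# every message untouched between the environment (running P1's code) and
# the real honest party P2, and otherwise does nothing. Channels are
# modelled as plain lists.

def forwarding_adversary(env_to_adv, adv_to_p2, p2_to_adv, adv_to_env):
    """Relays each queued message without reading or modifying it."""
    while env_to_adv:
        adv_to_p2.append(env_to_adv.pop(0))
    while p2_to_adv:
        adv_to_env.append(p2_to_adv.pop(0))

# One round of relaying: P1 (inside the environment) speaks, P2 replies.
env_to_adv, adv_to_p2 = ["msg-from-P1"], []
p2_to_adv, adv_to_env = ["msg-from-P2"], []
forwarding_adversary(env_to_adv, adv_to_p2, p2_to_adv, adv_to_env)
```

Because the adversary adds nothing of its own, any simulator for this adversary must effectively be simulating (and extracting from) a genuine honest-party execution, which is the crux of the impossibility argument.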

  40. Recall: The Real Model [Figure: environment, adversary, and parties engaged in the protocol interaction]

  41. Recall: Ideal-Model Simulation [Figure: the simulator extracts the corrupted party’s input]

  42. Back to the Real Model [Figure: protocol interaction]

  43. The Real Execution [Figure: the environment runs honest P1’s code; the adversary relays the protocol interaction to honest P2]

  44. The Ideal Simulation [Figure: the simulator extracts the corrupted party’s input during the simulated protocol interaction, then sends it to the trusted party]

  45. The Ideal Simulation [Figure: continued]

  46. The Ideal Simulation [Figure] NOTE: Input extraction happens before any interaction with the trusted party.

  47. Equivalently [Figure] • Conclusion: the ideal-model simulator simulates (including extraction) while in a real protocol execution; exactly like a real adversary.

  48. An Attack [Figure: protocol interaction]

  49. An Attack [Figure: the adversary extracts the honest party’s input] • Conclusion: a real adversary can use the ideal-model simulator in order to extract the honest party’s input. • E.g., this rules out computing any function that does not reveal the honest party’s input.

  50. UC Feasibility – Conclusions • Ruled out large classes of two-party functionalities. • Note 1: we do not have complete characterizations for feasibility • Note 2: there do exist interesting 2-party functions that can be UC computed: • Key exchange, secure message transmission… • However, these are of a “non-standard” nature.
