
Partial Fairness in Secure Two-Party Computation






Presentation Transcript


  1. Partial Fairness in Secure Two-Party Computation Dov Gordon & Jonathan Katz University of Maryland

  2. What is Fairness? • Before the days of secure computation… • (way back in 1980) • It meant a “fair exchange”: • of two signatures • of two secret keys • of two bits • certified mail • Over time, developed to include general computation: • F(x,y): X × Y → Z(1) × Z(2)

  3. Exchanging Signatures [Even-Yacobi80] (diagram: the parties exchange candidate signatures round by round, each checking “Does that verify? NO.” — until one round the check finally passes: “Yes!! Sucker!” — and the other party is left with nothing.) • Impossible: if we require both players to receive the signature “at the same time”. • Impossible: later, in 1986, Cleve would show that even exchanging two bits fairly is impossible!

  4. “Gradual Release” • Reveal it “bit by bit”! (each revealed bit halves the brute-force time.) • Prove each bit is correct and not junk. • Assume that the resulting “partial problem” is still (relatively) hard. • Notion of fairness: almost equal time to recover the output on an early abort. • [Blum83, Even81, Goldreich83, EGL83, Yao86, GHY87, D95, BN00, P03, GMPY06]
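As an illustration only (not any specific protocol from the list above), a minimal sketch of the gradual-release idea: each revealed bit halves the brute-force work left for a party whose counterpart aborts early.

```python
def gradual_release(secret_bits):
    """Reveal a secret one bit per round.  After round i, an aborting
    counterpart leaves the other party to brute-force only the
    unrevealed suffix, so the search space halves each round."""
    n = len(secret_bits)
    transcript = []
    for i in range(n):
        revealed = secret_bits[: i + 1]
        remaining = 2 ** (n - i - 1)  # candidates left to brute-force
        transcript.append((revealed, remaining))
    return transcript

trace = gradual_release([1, 0, 1, 1])
# first round: 8 candidates remain; last round: 1 (fully revealed)
```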

  5. “Gradual Convergence” • Reduce the noise, increase the confidence; (probability of correctness increases over time) • E.g., resulti = output ⊕ ci, where ci → 0 with increasing i. • Removes assumptions about resources. • Notion of fairness: almost equal confidence at the time of an early abort. • [LMR83, VV83, BG89, GL90]
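A toy sketch of the convergence idea, with an illustrative (made-up) noise schedule Pr[ci = 1] = ½·(1 − i/r); none of the cited protocols use exactly this schedule.

```python
import random

def converging_results(output_bit, rounds, rng):
    """Round i sends output XOR c_i, where the flip probability
    Pr[c_i = 1] = 0.5 * (1 - i/rounds) shrinks toward 0, so later
    results equal the true output with ever-higher probability."""
    results = []
    for i in range(rounds):
        p_flip = 0.5 * (1 - i / rounds)
        c_i = 1 if rng.random() < p_flip else 0
        results.append(output_bit ^ c_i)
    return results

res = converging_results(0, 1000, random.Random(0))
# early rounds are near-uniform noise; late rounds are almost always 0
```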

  6. Drawbacks (release, convergence) • Key decisions are external to the protocol: • Should a player brute-force the output? Should a player trust the output? • If the adversary knows how the decision is made, it can violate fairness. • Fairness can be violated by an adversary who is willing to: • run slightly longer than the honest parties are willing to run, or • accept slightly less confidence in the output. • No a priori bound on honest parties’ running time. • Assumes known computational resources for each party. • If the adversary has prior knowledge, they will receive “useful output” first.

  7. Our Results • We demonstrate a new framework for partial fairness. • We place the problem in the real/ideal paradigm. • We demonstrate feasibility for a large class of functions. • We show that our feasibility result is tight.

  8. Defining Security (2 parties) (diagram: in the real world, the parties run the protocol on inputs x and y, producing a view and outputs F1(x, y) and F2(x, y); in the ideal world, each party hands its input to a trusted party and receives back its output Fi(x, y).)

  9. Defining Security (2 parties) • “Security with Complete Fairness”: the real-world distribution over (view, output) and the ideal-world distribution are indistinguishable. (diagram continues.)

  10. The Standard Relaxation (diagram: the real world is as before; in the ideal world, the adversary first receives its output F1(x, y) and then sends “continue” or “abort” — on abort, the honest party receives ⊥ instead of F2(x, y).)

  11. The Standard Relaxation • “Security with abort”: the real world and this abort-augmented ideal world are indistinguishable. • Note: no fairness at all!

  12. Our Relaxation • Stick with the real/ideal paradigm: • “Full security”: real world and ideal world are indistinguishable. Offers complete fairness, but it can only be achieved for a limited set of functions. • “Security with abort”: can be achieved for any poly-time function, but it offers no fairness! • “ε-Security”: real world and relaxed-ideal world are ε-indistinguishable*. • *I.e., for all PPT A, |Pr[A(real)=1] – Pr[A(ideal)=1]| < ε(n) + negl (Similar to: [GL01], [Katz07])

  13. Protocol 1 • To compute F(x,y): X × Y → Z(1) × Z(2) • ShareGen gives Alice the shares a1(1), …, ar(1) and b1(1), …, br(1), and Bob the shares a1(2), …, ar(2) and b1(2), …, br(2), with ai(1) ⊕ ai(2) = ai and bi(1) ⊕ bi(2) = bi. • ai: output of Alice if Bob aborts in round i+1. • bi: output of Bob if Alice aborts in round i+1.
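ShareGen’s sharing step can be sketched as plain XOR secret sharing (an illustrative helper; the actual ShareGen also authenticates the shares, which this sketch omits):

```python
import secrets

def xor_share(value_bits):
    """Split a bit-vector into two XOR shares: share1 is uniformly
    random, share2 = value XOR share1.  Each share alone is uniform
    (reveals nothing), while share1 XOR share2 reconstructs the value."""
    share1 = [secrets.randbits(1) for _ in value_bits]
    share2 = [v ^ s for v, s in zip(value_bits, share1)]
    return share1, share2

a_i = [1, 0, 1, 1]
s1, s2 = xor_share(a_i)
reconstructed = [x ^ y for x, y in zip(s1, s2)]
```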

  14. Protocol 1 (similar to: [GHKL08], [MNS09]) (diagram: in each round i, the parties exchange their shares of ai and bi, so that Alice can reconstruct ai and Bob can reconstruct bi before proceeding to round i+1.)

  15. Protocol 1 (diagram: the same round structure, showing the messages s1, s2, s3, … actually sent in each round, from which the parties recover the values ai and bi.)

  16. Protocol 1 • How do we choose the ai and bi? • Choose a round i* uniformly at random. • For i ≥ i*: ai = F1(x,y) and bi = F2(x,y) (the true outputs). • For i < i*: ai = F1(x,Y) where Y is uniform, and bi = F2(X,y) where X is uniform. (diagram: before round i* the backup values are dummy outputs; from round i* on they equal F1(x,y) and F2(x,y).)
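The choice of backup values above can be sketched as follows (illustrative code; `f1` stands for Alice’s output function F1, and the toy equality function is just an example):

```python
import random

def alice_backup_values(f1, x, y_domain, y, r, rng):
    """Pick i* uniformly from {1, ..., r}.  Rounds before i* carry a
    dummy output f1(x, Y) for a fresh uniform Y; rounds from i* on
    carry the true output f1(x, y)."""
    i_star = rng.randint(1, r)
    a = [f1(x, rng.choice(y_domain)) if i < i_star else f1(x, y)
         for i in range(1, r + 1)]
    return i_star, a

f1 = lambda x, y: int(x == y)  # toy function: equality test
i_star, a = alice_backup_values(f1, 3, list(range(10)), 3, 20,
                                random.Random(1))
# every a[i] from round i* onward equals the true output f1(3, 3) = 1
```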

  17. Protocol 1: analysis • What are the odds that Alice aborts exactly in round i*? • If she knows nothing about F1(x, y), it is at most 1/r. • But this is not a reasonable assumption! • The probability that F1(x, Y) = z or F1(x, Y) = z’ may be small! • Identifying F1(x, y) in round i* may then be simple. (diagram: an adversary who knows “the output is z or z’” watches a1, a2, a3, …, and can spot the round where z or z’ first appears.)

  18. A Key Lemma • Consider the following game (parameterized by α ∈ (0,1] and r ≥ 1): • Fix distributions D1 and D2 s.t. for every z: Pr[D1 = z] ≥ α · Pr[D2 = z] • Challenger chooses i* uniformly from {1, …, r} • For i < i*, choose ai according to D1 • For i ≥ i*, choose ai according to D2 • For i = 1 to r, give ai to the adversary in iteration i • The adversary wins if it stops the game in iteration i* • Lemma: Pr[Win] ≤ 1/(α · r)
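For one concrete instance of this game, the lemma’s bound (which scales as 1/(α·r)) can be checked exactly. Take D2 to be a point mass on z and let each dummy round show z with probability 0.25, so the premise holds with α = 0.25; against the natural “stop at the first z” strategy, the exact win probability stays below 1/(α·r) (illustrative parameter choices):

```python
def stop_at_first_z_win_prob(p_dummy_hits_z, r):
    """Exact win probability when D2 always outputs z, each D1 sample
    equals z with probability p_dummy_hits_z, and the adversary stops
    the first time it sees z: conditioned on i* = k, it wins iff none
    of the k-1 dummy rounds showed z."""
    q = 1 - p_dummy_hits_z
    return sum(q ** (k - 1) for k in range(1, r + 1)) / r

alpha, r = 0.25, 10
p_win = stop_at_first_z_win_prob(alpha, r)
# p_win ~ 0.377, below the bound 1/(alpha * r) = 0.4
```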

  19. Protocol 1: analysis • α = 1/|Y| • D1 = F1(x, Y) for uniform Y • D2 = F1(x, y) (a point mass on the true output) • So Pr[D1 = F1(x, y)] ≥ Pr[Y = y] = 1/|Y| • Probability that P1 aborts in iteration i* is at most |Y|/r • Setting r = |Y| · ε⁻¹ gives ε-security • Need |Y| to have polynomial size • Need ε to be 1/poly
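The parameter setting is just arithmetic; a quick sanity check (illustrative numbers):

```python
import math

def rounds_for_eps_security(domain_size, eps):
    """With alpha = 1/|Y|, the abort probability at i* is at most
    |Y|/r, so r = ceil(|Y|/eps) rounds suffice -- polynomially many
    when |Y| is poly-size and eps = 1/poly."""
    return math.ceil(domain_size / eps)

r = rounds_for_eps_security(100, 0.01)  # |Y| = 100, eps = 1/100
# 10,000 rounds: the abort probability |Y|/r is then at most eps
```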

  20. Protocol 1: summary • Theorem: Fix function F and ε = 1/poly: If F has poly-size domain (for at least one player) then there is an ε-secure protocol computing F (under standard assumptions). • The protocol is private • Also secure-with-abort (after a small tweak)

  21. Handling large domains • With the previous approach, α = 1/|Y| becomes negligibly small: • this causes r to become exponentially large. • Solution: if the range of Alice’s function is poly-size: • With probability 1 – ε, choose ai as before: ai = F1(x, Y). • With probability ε, choose ai ← Z(1) (uniformly). • Now Pr[ai = z] ≥ ε/|Z(1)| for every z, so α = ε/|Z(1)| and r is polynomial again! • but… (even the adversary who knows “the output is z or z’” can no longer spot round i*.)
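The modified dummy distribution can be sketched like so (illustrative code): mixing in an ε-fraction of uniform range elements guarantees that every range element z appears with probability at least ε/|Z(1)|.

```python
import random

def smoothed_dummy(f1, x, y_domain, out_range, eps, rng):
    """With probability 1 - eps, sample a_i = f1(x, Y) for uniform Y;
    with probability eps, sample a_i uniformly from the output range.
    The second branch alone gives every z probability >= eps/|range|."""
    if rng.random() < eps:
        return rng.choice(out_range)
    return f1(x, rng.choice(y_domain))

rng = random.Random(2)
f1 = lambda x, y: 0  # toy function whose output is always 0
samples = [smoothed_dummy(f1, 0, [0], [0, 1, 2], 0.5, rng)
           for _ in range(60000)]
# even though f1 never outputs 1 or 2, both show up an eps/|range|
# fraction of the time, hiding which value is the real output
```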

  22. Protocol 2: summary • Theorem: Fix function F and ε = 1/poly: If F has poly-size range (for at least one player) then there is an ε-secure protocol computing F (under standard assumptions). • The protocol is private • The protocol is not secure-with-abort anymore

  23. Our Results are Tight (wrt I/O size) Theorem: There exists a function with super-polynomial size domain and range that cannot be efficiently computed with ε-security. Theorem: There exists a function with super-polynomial size domain and poly-size range that cannot be computed with ε-security and with security-with-abort simultaneously.

  24. Summary • We suggest a clean notion of partial fairness. • Based on the real/ideal paradigm. • Parties have well defined outputs at all times. • We show feasibility for functions with poly-size domain/range, and infeasibility for certain functions outside that class. • Open: can we find a definition of partial fairness that has the above properties, and can be achieved for all functions?

  25. Thank You!

  26. Gradual Convergence: equality • F(x,y) = 1 if x = y; 0 if x ≠ y. • Suppose b = f(x,y) = 0 whp. • For small i, ci has a lot of entropy, so Bob’s output b ⊕ ci is (almost) random. (diagram: Bob sees b ⊕ c1 = 0, b ⊕ c2 = 1, b ⊕ c3 = 1 — “Hope I’m lucky!” — but can’t trust that output and replies ⊥.) • Alice can bias Bob to output 1. • Accordingly, [BG89] instructs Bob to always respond by aborting. • But what if Alice runs until the last round?

  27. Gradual Convergence: drawbacks • If parties always trust their output, adversary can induce a bias. • Decision of whether an honest party should trust the output is external to the protocol: • If made explicit, the adversary can abort just at that point. • If the adversary is happy with less confidence, he can receive “useful” output alone. • If the adversary has higher confidence a priori, he will receive “useful” output first.
