Multiparty Differential Privacy



  1. Multiparty Differential Privacy. Salil Vadhan, Center for Research on Computation & Society, School of Engineering & Applied Sciences, Harvard University. Differential Privacy Meets Multi-Party Computation (DPMPC) Workshop, Boston University, June 4-5, 2018

  2. Outline • Differential Privacy in the Centralized Model • Multiparty Differential Privacy • Benefits of Differential Privacy & MPC

  3. Data Privacy: The Problem. Given a dataset with sensitive information, such as: census data, health records, social network activity, telecommunications data. How can we enable “desirable uses” of the data (academic research, informing policy, identifying subjects for drug trials, searching for terrorists, market analysis, …) while protecting the “privacy” of the data subjects?

  4. Approach 1: Encrypt the Data Problems?

  5. Approach 2: Anonymize the Data. Problems? “Re-identification” is often easy [Sweeney `97]

  6. Approach 3: Mediate Access. [Diagram: data analysts send queries q1, q2, q3 to a trusted “curator” C holding the data and receive answers a1, a2, a3.] Problems? Even simple “aggregate” statistics can reveal individual info. [Dinur-Nissim `03, Homer et al. `08, Mukatran et al. `11, Dwork et al. `15]

  7. Differential privacy [Dinur-Nissim ’03+Dwork, Dwork-Nissim ’04, Blum-Dwork-McSherry-Nissim ’05, Dwork-McSherry-Nissim-Smith ’06]. [Diagram: data analysts send queries q1, q2, q3 to the curator C and receive answers a1, a2, a3.] Requirement: the effect of each individual should be “hidden”

  8. Differential privacy [Dinur-Nissim ’03+Dwork, Dwork-Nissim ’04, Blum-Dwork-McSherry-Nissim ’05, Dwork-McSherry-Nissim-Smith ’06]. [Diagram: now an adversary sends queries q1, q2, q3 to the curator C and receives answers a1, a2, a3.]

  9. Differential privacy [Dinur-Nissim ’03+Dwork, Dwork-Nissim ’04, Blum-Dwork-McSherry-Nissim ’05, Dwork-McSherry-Nissim-Smith ’06]. [Diagram: adversary interacting with the curator C.] Requirement: an adversary shouldn’t be able to tell if any one person’s data were changed arbitrarily


  12. Simple approach: random noise. [Diagram: the curator C is asked “What fraction of people are type B and HIV positive?” and the mechanism M replies Answer + Noise(O(1/n)).] Error → 0 as n → ∞ • Very little noise needed to hide each person as n → ∞ • Note: this is just for one query

  13. Differential privacy [Dinur-Nissim ’03+Dwork, Dwork-Nissim ’04, Blum-Dwork-McSherry-Nissim ’05, Dwork-McSherry-Nissim-Smith ’06]. [Diagram: adversary interacting with a randomized curator C.] Requirement: for all D, D’ differing on one row, and all q1,…,qt: Distribution of C(D,q1,…,qt) ≈ Distribution of C(D’,q1,…,qt)


  15. Differential privacy [Dinur-Nissim ’03+Dwork, Dwork-Nissim ’04, Blum-Dwork-McSherry-Nissim ’05, Dwork-McSherry-Nissim-Smith ’06]. [Diagram: adversary interacting with a randomized curator C.] Requirement: for all D, D’ differing on one row, all q1,…,qt, and all sets T: Pr[C(D,q1,…,qt) ∈ T] ≤ (1+ε) · Pr[C(D’,q1,…,qt) ∈ T]

  16. Differential privacy [Dinur-Nissim ’03+Dwork, Dwork-Nissim ’04, Blum-Dwork-McSherry-Nissim ’05, Dwork-McSherry-Nissim-Smith ’06]. [Diagram: adversary interacting with a randomized curator C.] Requirement: for all D, D’ differing on one row, all q1,…,qt, and all sets T: Pr[C(D,q1,…,qt) ∈ T] ≤ e^ε · Pr[C(D’,q1,…,qt) ∈ T] + δ, where δ is negligible

  17. Simple approach: random noise. [Diagram: the curator C is asked “What fraction of people are type B and HIV positive?” and replies Answer + Laplace(1/εn); the density of the Laplace distribution is shown.] • Very little noise needed to hide each person as n → ∞ • Note: this is just for one query
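A minimal runnable sketch of this single-query mechanism in Python; the toy dataset, column layout, and predicate are illustrative assumptions, and only the Laplace-noise step comes from the slide:

```python
import numpy as np

def dp_fraction(data: np.ndarray, predicate, epsilon: float) -> float:
    """Answer 'what fraction of rows satisfy predicate?' with eps-DP.

    A fraction query has sensitivity 1/n (changing one row moves the
    answer by at most 1/n), so Laplace noise of scale 1/(epsilon*n)
    suffices, as on the slide.
    """
    n = len(data)
    true_answer = np.mean([predicate(row) for row in data])
    return true_answer + np.random.laplace(loc=0.0, scale=1.0 / (epsilon * n))

# Hypothetical usage: rows are (blood_type, hiv_status) pairs.
rows = np.array([("B", 1), ("A", 0), ("B", 0), ("O", 1)], dtype=object)
est = dp_fraction(rows, lambda r: r[0] == "B" and r[1] == 1, epsilon=0.5)
```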

  18. Answering multiple queries. [Diagram: the curator C answers each of t queries, e.g. “What fraction of people are type B and HIV positive?”, with Answer + Laplace(t/εn).] Error → 0 if t ≪ εn. Composition Thm [Dwork-Rothblum-V. `10]: t independent ε-DP algorithms are ≈ (√t·ε)-DP
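A small calculator for this accounting, assuming Python: basic composition sums the ε’s, while the advanced-composition formula below is the standard [Dwork-Rothblum-V. `10] statement (exact constants on the slide may differ):

```python
import math

def basic_composition(epsilon: float, k: int) -> float:
    # k independent eps-DP algorithms are (k*eps)-DP.
    return k * epsilon

def advanced_composition(epsilon: float, k: int, delta_prime: float) -> float:
    # [DRV '10]: k independent eps-DP algorithms are (eps', delta')-DP with
    # eps' = sqrt(2k ln(1/delta')) * eps + k * eps * (e^eps - 1),
    # i.e. roughly sqrt(k) * eps for small eps.
    return math.sqrt(2 * k * math.log(1 / delta_prime)) * epsilon \
        + k * epsilon * (math.exp(epsilon) - 1)

print(basic_composition(0.1, 100))                       # 10.0
print(advanced_composition(0.1, 100, delta_prime=1e-6))  # ~ 6.3
```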

  19. Some Differentially Private Algorithms • histograms [DMNS06] • contingency tables [BCDKMT07, GHRU11, TUV12, DNT14], • machine learning [BDMN05,KLNRS08], • regression & statistical estimation [CMS11,S11,KST11,ST12,JT13] • clustering [BDMN05,NRS07] • social network analysis [HLMJ09,GRU11,KRSY11,KNRS13,BBDS13] • approximation algorithms [GLMRT10] • singular value decomposition [HR12, HR13, KT13, DTTZ14] • streaming algorithms [DNRY10,DNPR10,MMNW11] • mechanism design [MT07,NST10,X11,NOS12,CCKMV12,HK12,KPRU12] • … See Simons Institute Workshop on Big Data & Differential Privacy 12/13
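As a concrete instance of the first item (histograms [DMNS06]), a minimal Python sketch; the Lap(2/ε) scale assumes neighboring datasets differ by replacing one row, and the age data is made up for illustration:

```python
import numpy as np

def dp_histogram(values, bins, epsilon):
    """eps-DP histogram: add independent Laplace noise to each bin count.

    Replacing one row moves at most two bin counts by 1 each (L1
    sensitivity 2), so Laplace noise of scale 2/eps per bin suffices.
    """
    counts, edges = np.histogram(values, bins=bins)
    noisy = counts + np.random.laplace(scale=2.0 / epsilon, size=len(counts))
    return noisy, edges

ages = np.random.randint(0, 100, size=5_000)
noisy_counts, edges = dp_histogram(ages, bins=10, epsilon=0.5)
```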

  20. Differential Privacy: Interpretations. Distribution of C(D,q1,…,qt) ≈ Distribution of C(D’,q1,…,qt) • Whatever an adversary learns about me, it could have learned from everyone else’s data. • Mechanism cannot leak “individual-specific” information. • Above interpretations hold regardless of adversary’s auxiliary information. • Composes gracefully (k repetitions ⟹ kε differentially private). But: • No protection for information that is not localized to a few rows. • No guarantee that subjects won’t be “harmed” by results of analysis.

  21. Amazing possibility: synthetic data [Blum-Ligett-Roth ’08, Hardt-Rothblum `10]. Utility: preserves the fraction of people with every set of attributes! [Diagram: the curator C releases a dataset of “fake” people.] Problem: uses computation time exponential in the number of attributes.

  22. Amazing Possibility II: Statistical Inference & Machine Learning. Theorem [KLNRS08,S11]: Differential privacy is achievable for a vast array of machine learning and statistical estimation problems with little loss in convergence rate as n → ∞. • Optimizations & practical implementations for logistic regression, ERM, LASSO, SVMs in [RBHT09,CMS11,ST13,JT14]. [Diagram: the curator C outputs a hypothesis or model about the world, e.g. a rule for predicting disease from attributes.]

  23. Major Deployments of DP Centralized model: • Census “OnTheMap” commuter data (2006) • Census plans: all releases from 2020 Census Multiparty differential privacy: • Google “RAPPOR” for Chrome Statistics (2014) • Apple in iOS10 and Safari (2016)

  24. The Privacy Tools Project http://privacytools.seas.harvard.edu/ Computer Science, Law, Social Science, Statistics Any opinions, findings, and conclusions or recommendations expressed here are those of the author(s) and do not necessarily reflect the views of the funders of the work.

  25. Target: Data Repositories

  26. PSI: a Private data-Sharing Interface [Gaboardi-Honaker-King-Murtagh-Nissim-Ullman-Vadhan `16]

  27. Integration w/Statistical Tools for Social Science

  28. 2-Party Differential Privacy [Dwork-Nissim `04,…] • Each party has a sensitive dataset; they want to do a joint computation f(DA,DB). [Diagram: the parties exchange messages m1, m2, m3, …, mk-1, mk, with out(m1,…,mk) ≈ f(DA,DB).]

  29. 2-Party Differential Privacy. [Diagram: messages m1, m2, m3, …, mk-1, mk; the adversary outputs a bit 0/1.] • (ε,δ) differential privacy for B: ∀ adversary A*, ∀ databases DB, D’B that differ on one row, Pr[outA*(A*,B(DB))=1] ≤ e^ε · Pr[outA*(A*,B(D’B))=1] + δ • and similarly for A

  30. Multiparty Differential Privacy. Require: ∀ adversary P-i*, ∀ databases Di, D’i that differ on one row, Pr[outP-i*(P-i*,Pi(Di))=1] ≤ e^ε · Pr[outP-i*(P-i*,Pi(D’i))=1] + δ. [Diagram: parties P1(D1),…,P5(D5); the coalition P-5* of everyone except P5 outputs a bit 0/1.]
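For readability, the same requirement in display form (my LaTeX rendering of the definition above):

```latex
% For every i, every adversary P_{-i}^*, and all D_i, D_i'
% differing on one row:
\[
\Pr\bigl[\mathrm{out}_{P_{-i}^{*}}\bigl(P_{-i}^{*},\, P_i(D_i)\bigr) = 1\bigr]
\;\le\; e^{\varepsilon}\,
\Pr\bigl[\mathrm{out}_{P_{-i}^{*}}\bigl(P_{-i}^{*},\, P_i(D_i')\bigr) = 1\bigr]
\;+\; \delta .
\]
```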

  31. The Local Model [Dwork-Kenthapadi-McSherry-Mironov-Naor `06]. n parties, each holding a single row (their own data). Require: ∀ adversary P-i*, ∀ databases Di, D’i that differ on one row, Pr[outP-i*(P-i*,Pi(Di))=1] ≤ e^ε · Pr[outP-i*(P-i*,Pi(D’i))=1] + δ. [Diagram as on the previous slide.] Implemented by Apple & Google.

  32. The Local Model. [Diagram: parties P1(D1), P2(D2), …, Pn(Dn) communicate via a mediator M.] • n parties, each holds a single row (their own data) • Often with a mediator: untrusted for privacy, trusted for correctness • Like a broadcast channel & shared public randomness, but with different efficiency goals (e.g. computation time for the mediator)

  33. Multiparty Differential Privacy: variants • Computationally bounded vs. unbounded adversaries (computational vs. information-theoretic security) • Passive/honest-but-curious vs. active/malicious adversaries • Private communication channels vs. broadcast • Threshold adversaries • Correctness/Fairness/Guaranteed Output Delivery under malicious behavior

  34. Multiparty DP vs. Secure MPC

  35. Constructing Multiparty DP Protocols. Given a function f(D1,…,Dn) we wish to compute • Example: each Di ∈ {0,1} and f(D1,…,Dn) = Σi Di • Step 1: Design a centralized mechanism C • Example: C(D1,…,Dn) = Σi Di + Lap(1/ε) (sketched below) • Step 2: Use secure multiparty computation [Yao86,GMW86] to implement C with a distributed protocol (P1,…,Pn). • Adversary’s view can be simulated (up to computational indistinguishability) given only access to the “ideal functionality” C ⟹ computational differential privacy • Can be done more efficiently in specific cases [DKMMN06,BNO08]. • Q: Did we really need computational differential privacy?
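A Python sketch of Step 1 for the SUM example; Step 2 is indicated only in comments, since an actual MPC implementation [Yao86, GMW86] is beyond a few lines:

```python
import numpy as np

def trusted_curator(bits: list, epsilon: float) -> float:
    """Step 1: centralized mechanism C(D1,...,Dn) = sum(Di) + Lap(1/eps).

    The sum has sensitivity 1 (one bit changes the sum by at most 1),
    so Laplace noise of scale 1/epsilon gives epsilon-DP.
    """
    return sum(bits) + np.random.laplace(scale=1.0 / epsilon)

# Step 2 (not shown): replace the trusted curator with a secure multiparty
# computation of the same functionality. Since the adversary's view can be
# simulated given only C's output, the resulting protocol is
# computationally differentially private.

parties = [1, 0, 1, 1, 0]   # each Di in {0,1}, one bit per party
print(trusted_curator(parties, epsilon=0.5))
```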

  36. Local DP Protocol for SUM: “Randomized Response” [W65]. P1,…,Pn have bits D1,…,Dn ∈ {0,1} and want to estimate Σi Di • Each Pi broadcasts Ni = Di w.p. (1+ε)/2 and Ni = 1−Di w.p. (1−ε)/2 • Everyone computes Z = (1/ε)·Σi (Ni − (1−ε)/2) • Differential Privacy: ((1+ε)/2)/((1−ε)/2) = 1+O(ε) • Accuracy: E[Ni] = (1−ε)/2 + ε·Di, so whp the error is O(n1/2/ε) • Nontrivial, but worse than the O(1/ε) achievable with computational d.p. (A simulation is sketched below.)
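A runnable Python simulation of the protocol, treating the slide’s ε as the bias parameter (so the resulting privacy level is O(ε), matching the slide’s (1+ε)/(1−ε) = 1+O(ε) calculation):

```python
import numpy as np

def randomized_response_sum(bits: np.ndarray, eps: float) -> float:
    """Local-model estimate of sum(bits) via randomized response [W65]."""
    n = len(bits)
    # Each party reports its true bit w.p. (1+eps)/2, the flipped bit otherwise.
    keep = np.random.rand(n) < (1 + eps) / 2
    noisy = np.where(keep, bits, 1 - bits)
    # Debias: E[Ni] = (1-eps)/2 + eps*Di, so Z = (1/eps) * sum(Ni - (1-eps)/2).
    return float(np.sum(noisy - (1 - eps) / 2) / eps)

n, eps = 10_000, 0.2
bits = np.random.randint(0, 2, size=n)
print(bits.sum(), randomized_response_sum(bits, eps))  # error ~ O(sqrt(n)/eps)
```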

  37. Lower Bound for Computing SUM. Thm: Every local DP protocol for SUM incurs error Ω(n1/2) whp • Assuming ε = O(1), δ = o(1/n) • Improves the Ω(n1/2/(# rounds)) lower bound of [BNO08]. Proof: • Assume δ = 0 for simplicity. • Let (D1,…,Dn) be uniform, independent bits, T = transcript(P1(D1),…,Pn(Dn)) • Claim: conditioned on T = t, D1,…,Dn are still independent bits, each with bias O(ε)

  38. Lower Bound for Computing SUM. Thm: Every local DP protocol for SUM incurs error Ω(n1/2) whp. Proof: • (D1,…,Dn) = uniformly random, T = trans(P1(D1),…,Pn(Dn)) • Claim: conditioned on T = t, D1,…,Dn are still independent bits, each with bias O(ε) • Independence holds for any communication protocol • Bias by Bayes’ rule & differential privacy: Pr[Di=1 | T=t] / Pr[Di=0 | T=t] ≤ e^ε = 1 + O(ε) (worked out below)
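The bias step written out as a LaTeX sketch; the prior is Pr[Di=1] = Pr[Di=0] = 1/2 since the bits are uniform, and the inequality is the DP guarantee applied to party i’s messages:

```latex
\[
\frac{\Pr[D_i = 1 \mid T = t]}{\Pr[D_i = 0 \mid T = t]}
= \frac{\Pr[T = t \mid D_i = 1]}{\Pr[T = t \mid D_i = 0]}
  \cdot \frac{\Pr[D_i = 1]}{\Pr[D_i = 0]}
\;\le\; e^{\varepsilon} \cdot 1 \;=\; 1 + O(\varepsilon),
\]
% so conditioned on T = t, each D_i has bias O(eps); the D_i stay
% independent because in the local model each party's messages depend
% only on its own bit (and public randomness).
```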

  39. Lower Bound for Computing SUM. Thm: Every local DP protocol for SUM incurs error Ω(n1/2) whp. Proof: • (D1,…,Dn) = uniformly random, T = trans(P1(D1),…,Pn(Dn)) • Claim: conditioned on T = t, D1,…,Dn are still independent bits, each with bias O(ε) • Claim: the sum of n independent bits, each with constant bias, falls outside any interval of size o(n1/2) whp ⟹ Whp Σi Di ∉ [output(T) − o(n1/2), output(T) + o(n1/2)]

  40. Separation for Two-Party Protocols. A’s input: x = (x1,…,xn) ∈ {0,1}n, B’s input: y = (y1,…,yn) ∈ {0,1}n. Goal [MPRV09]: estimate ⟨x,y⟩ (set intersection) • Can be computed by a computational differentially private protocol with error O(1/ε) • Can be computed by an information-theoretically differentially private protocol with error O(n1/2/ε) (sketched below) • Thm [MMPTRV10]: Every 2-party DP protocol for ⟨x,y⟩ incurs error Ω(n1/2/log n) whp.
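One way the O(n1/2/ε) information-theoretic protocol can go, sketched in Python under the assumption that randomized response on A’s bits is the intended construction (the e^ε-based flip probability is the standard choice, not taken from the slides):

```python
import numpy as np

def rr_inner_product(x: np.ndarray, y: np.ndarray, eps: float) -> float:
    """Two-party DP estimate of <x,y>: A randomizes x bit-by-bit, B debiases."""
    p = 1.0 / (1.0 + np.exp(eps))      # flip prob.; likelihood ratio is e^eps
    flips = np.random.rand(len(x)) < p
    x_hat = np.where(flips, 1 - x, x)  # A's message: eps-DP in x
    z = (x_hat - p) / (1 - 2 * p)      # debias so that E[z_i] = x_i
    # B's estimate; std dev is O(sqrt(n)/eps). (If B must announce the result,
    # B would also add noise scaled to max|z_i| to protect y.)
    return float(np.dot(z, y))

n, eps = 10_000, 0.5
x = np.random.randint(0, 2, n)
y = np.random.randint(0, 2, n)
print(int(np.dot(x, y)), rr_inner_product(x, y, eps))
```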

  41. Lower Bound for Inner Product. A’s input: x = (x1,…,xn) ∈ {0,1}n, B’s input: y = (y1,…,yn) ∈ {0,1}n. Thm: Every 2-party DP protocol for ⟨x,y⟩ incurs error Ω(n1/2/log n) whp. Proof: • X, Y = uniformly random, T = trans(A(X),B(Y)) • Claim: conditioned on T = t, X and Y are independent unpredictable (Santha-Vazirani) sources: Pr[Xi=1 | X1,…,Xi−1, T=t] / Pr[Xi=0 | X1,…,Xi−1, T=t] ≤ e^ε = 1 + O(ε)

  42. Lower Bound for Inner Product. Thm: Every 2-party DP protocol for ⟨x,y⟩ incurs error Ω(n1/2/log n). Proof: • X, Y = uniformly random, T = trans(A(X),B(Y)) • Claim: conditioned on T = t, X and Y are independent unpredictable (Santha-Vazirani) sources. • Claim: If X, Y are independent unpredictable sources on {0,1}n, then ⟨X,Y⟩ mod m is almost uniform in Zm for some m = Ω(n1/2/log n) • A randomness extractor! • Generalizes the [Vaz86] result for m = 2. • Proof via Fourier analysis over Zm

  43. Lower Bound for Inner Product. Thm: Every 2-party DP protocol for ⟨x,y⟩ incurs error Ω(n1/2/log n) whp. Proof: • X, Y = uniformly random, T = trans(A(X),B(Y)) • Claim: conditioned on T = t, X and Y are independent unpredictable (Santha-Vazirani) sources. • Claim: If X, Y are independent unpredictable sources on {0,1}n, then ⟨X,Y⟩ mod m is almost uniform in Zm for some m = Ω(n1/2/log n) ⟹ Whp ⟨X,Y⟩ ∉ [output(T) − o(m), output(T) + o(m)]

  44. A Starker Separation. Thm [KLNRS `08]: When the data consists of iid samples from an unknown distribution, local DP is equivalent to the statistical query (SQ) model, i.e. estimating expectations E[f(X)] for bounded f. In particular, PARITY functions are not PAC-learnable in the local model [K93]. Thm [KLNRS `08]: PARITY functions can be PAC-learned with centralized DP.

  45. The Need for DP+MPC • Information-theoretic multiparty/local DP has severe limitations compared with centralized DP • Incurs significantly larger error for simple functions (e.g. Θ(n1/2/ε) vs. O(1/ε) for SUM) • Impossible to achieve for some complex tasks • Consequences: only companies with huge datasets have deployed local DP, motivating hybrid models where some users get local DP and opt-in users get centralized DP [AHKLZ17] • With MPC & computational security, there is no gap!

  46. Some Research Directions • Efficient/practical constructions of MPC implementations of DP algorithms. • Minimal MPC security or complexity assumptions needed for multiparty DP [HOZ13,KMS14,GKMPS16]. • Develop theory of information-theoretic multiparty DP similar to centralized case.
