Measuring Anonymity: A Tale of Two Distributions


Presentation Transcript


  1. Measuring Anonymity: A Tale of Two Distributions
     Nikita Borisov, UIUC
     PET 2006 Rump Session

  2. Anonymity measures
  • High-level problem: characterize a probability distribution
  • D1: 1/128, 1/128, …, 1/128 (uniform over 128 users)
  • D2: 1/2, 1/8192, …, 1/8192 (one user at 1/2, plus 4096 users at 1/8192)
  • Which is better?
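For concreteness, a small Python sketch of the two distributions; the entry counts (128 and 4097) follow from the anonymity-set sizes quoted on slide 3:

```python
# Concrete encoding of the two distributions from slide 2.
D1 = [1 / 128] * 128                  # everyone equally likely
D2 = [1 / 2] + [1 / 8192] * 4096      # one prime suspect, 4096 unlikely others

assert abs(sum(D1) - 1) < 1e-9        # both are valid probability distributions
assert abs(sum(D2) - 1) < 1e-9
```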

  3. Answers
  • Reiter-Rubin: D1 is better
    • D1 is beyond suspicion; D2 has only probable innocence
  • Anonymity sets: D2 is better
    • Anonymity set size is 4097 instead of 128
  • Entropy metric (Shannon): D1 = D2
    • H(D1) = H(D2) = 7 bits
  • Entropy (min): D1 is better
    • H_min(D1) = 7, H_min(D2) = 1
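A minimal Python sketch that reproduces the numbers on this slide, with D1 and D2 as defined on slide 2:

```python
import math

D1 = [1 / 128] * 128
D2 = [1 / 2] + [1 / 8192] * 4096

def shannon_entropy(p):
    """Shannon entropy H(D) = -sum p_i * log2(p_i), in bits."""
    return -sum(pi * math.log2(pi) for pi in p if pi > 0)

def min_entropy(p):
    """Min entropy H_min(D) = -log2(max_i p_i), in bits."""
    return -math.log2(max(p))

print(shannon_entropy(D1), shannon_entropy(D2))  # both ~7.0: Shannon can't tell them apart
print(min_entropy(D1), min_entropy(D2))          # 7.0 vs 1.0: min entropy prefers D1
print(len(D1), len(D2))                          # 128 vs 4097: anonymity sets prefer D2
```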

  4. Single message case
  • D1 is better
  • Imagine hiring a hit man to attack the k most likely people
    • With D1, your ROI is low for any k
    • With D2, k = 1 is pretty good
  • Is min entropy what we want?
    • H_min(D2) = H_min([1/2, 1/2]), so min entropy can't tell D2 from a fair coin
    • Perhaps guessing entropy is better
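A sketch of both ideas on this slide: the attacker's payoff from targeting the k most likely users, and guessing entropy (Massey's standard definition, which the slide names but does not spell out):

```python
import math

D1 = [1 / 128] * 128
D2 = [1 / 2] + [1 / 8192] * 4096

def top_k_mass(p, k):
    """Attacker's hit probability when targeting the k most likely users."""
    return sum(sorted(p, reverse=True)[:k])

print(top_k_mass(D1, 1), top_k_mass(D2, 1))  # ~0.008 vs 0.5: k = 1 pays off against D2

def guessing_entropy(p):
    """Massey's guessing entropy: expected number of guesses when trying
    users in decreasing order of probability (guesses counted from 1)."""
    ranked = sorted(p, reverse=True)
    return sum(i * pi for i, pi in enumerate(ranked, start=1))

print(guessing_entropy(D2))              # 1025.25
print(guessing_entropy([1 / 2, 1 / 2]))  # 1.5: unlike H_min, it separates D2 from a fair coin
```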

  5. Multiple message case
  • Suppose the attacker observes 2 independent messages
  • Model the anonymity system as a noisy channel
    • Mutual information tells us about channel capacity
    • I(X; Y) = H(X) - H(X | Y)
  • Two observations:
    • I(X; Y1, Y2) <= I(X; Y1) + I(X; Y2)
    • (equality when Y1, Y2 are independent)
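A toy sketch of the noisy-channel view. The 2-sender prior and channel matrix are made-up numbers for illustration, not from the talk:

```python
import math

def H(p):
    """Shannon entropy in bits."""
    return -sum(pi * math.log2(pi) for pi in p if pi > 0)

# Hypothetical 2-sender system: X is the true sender, Y the attacker's observation.
prior = [0.5, 0.5]              # P(X)
channel = [[0.9, 0.1],          # P(Y | X = 0)
           [0.2, 0.8]]          # P(Y | X = 1)

# P(Y), then H(X | Y) by averaging the entropy of the Bayes posterior over Y.
p_y = [sum(prior[x] * channel[x][y] for x in range(2)) for y in range(2)]
h_x_given_y = sum(
    p_y[y] * H([prior[x] * channel[x][y] / p_y[y] for x in range(2)])
    for y in range(2)
)

print(H(prior) - h_x_given_y)   # I(X; Y): bits the attacker learns per observed message
```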

  6. Anonymity degree
  • It takes at least H(X) / I(X; Y) messages to learn X precisely
  • This is in fact 1/(1 - d), where d is the Diaz et al. degree of anonymity
    • (d = H(X | Y) / H(X))
  • Normalizing makes sense!
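A quick worked check of the identity behind this slide, H(X) / I(X; Y) = 1/(1 - d), using made-up entropy values:

```python
# Made-up numbers for illustration: H(X) = 7 bits, H(X | Y) = 6 bits per observation.
h_x, h_x_given_y = 7.0, 6.0
i_xy = h_x - h_x_given_y            # I(X; Y) = 1 bit learned per message
d = h_x_given_y / h_x               # Diaz et al. degree of anonymity, d = 6/7
assert abs(h_x / i_xy - 1 / (1 - d)) < 1e-9
print(h_x / i_xy)                   # 7.0 messages needed to pin down X
```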
