Differential Privacy on Linked Data: Theory and Implementation

Yotam Aron


Table of Contents

  • Introduction

  • Differential Privacy for Linked Data

  • SPIM implementation

  • Evaluation


Contributions

  • Theory on how to apply differential privacy to linked data.

  • Experimental implementation of differential privacy on linked data.

  • Overall privacy module for SPARQL queries.


Introduction


Overview: Why Privacy Risk?

  • Statistical data can leak privacy.

  • Mosaic Theory: data sources that are harmless alone can become harmful when combined.

  • Examples:

    • Netflix Prize Data set

    • GIC Medical Data set

    • AOL Data logs

  • Linked data adds ontologies and metadata, making it even more vulnerable.


Current Solutions

  • Accountability:

    • Privacy Ontologies

    • Privacy Policies and Laws

  • Problems:

    • Requires agreement among parties.

    • Does not actually prevent breaches; it is just a deterrent.

    • Heterogeneous


Current Solutions (Cont’d)

  • Anonymization

    • Delete “private” data

    • k-anonymity (strong privacy guarantee)

  • Problems

    • Deletion provides no strong guarantees

    • Must be carried out for every data set

    • What data should be anonymized?

    • High computational cost (optimal k-anonymity is NP-hard)


Differential Privacy

  • Definition for relational databases (from PINQ paper):

    A randomized function K gives Ɛ-differential privacy if for all data sets D1 and D2 differing on at most one record, and all S ⊆ Range(K): Pr[K(D1) ∈ S] ≤ exp(Ɛ) · Pr[K(D2) ∈ S]


Differential Privacy

  • What does this mean?

    • Adversaries get roughly the same results from D1 and D2, meaning a single individual's data will not greatly affect their knowledge acquired from each data set.


How Achieved?

  • Add noise to result.

  • Simplest: Add Laplace noise


Laplace Noise Parameters

  • Mean = 0 (so no bias is added)

  • Scale b = Δ(Q)/Ɛ (variance 2b²), where the sensitivity Δ(Q) is defined, over any single record j, as Δ(Q) = max_j |Q(D) − Q(D − j)|

  • Theorem: For query Q with result R, the output R + Laplace(0, Δ(Q)/Ɛ) is Ɛ-differentially private (a code sketch follows).
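A minimal sketch of this mechanism in Python (numpy is assumed; the function name is illustrative, not part of SPIM):

    import numpy as np

    def laplace_noisy_result(true_result, sensitivity, epsilon):
        """Return a differentially private version of a numeric query result.

        The noise scale is sensitivity / epsilon, matching the theorem above.
        """
        scale = sensitivity / epsilon
        return true_result + np.random.laplace(loc=0.0, scale=scale)

    # Example: a COUNT query returned 42; one person contributes at most 2 rows.
    print(laplace_noisy_result(42, sensitivity=2, epsilon=0.1))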


Other Benefit of Laplace Noise

  • A set of queries Q1, …, Qn, each with sensitivity Δi, will have an overall sensitivity of Σi Δi.

  • Implementation-wise, we can allocate an overall “budget” Ɛ for a client, and for each query the client specifies how much Ɛi of that budget to use (see the sketch below).
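A sketch of that budget bookkeeping (the class and method names are assumptions, not SPIM's actual API):

    class EpsilonBudget:
        """Track a client's remaining privacy budget across queries."""

        def __init__(self, total_epsilon):
            self.remaining = total_epsilon

        def spend(self, epsilon_i):
            # Sequential composition: spent epsilons add up, so refuse any
            # query that would push the total past the allocated budget.
            if epsilon_i <= 0 or epsilon_i > self.remaining:
                raise ValueError("insufficient privacy budget")
            self.remaining -= epsilon_i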


Benefits of Differential Privacy

  • Strong Privacy Guarantee

  • Mechanism-based, so we don't have to modify the data itself.

  • Independent of data set’s structure.

  • Works well with statistical analysis algorithms.


Problems with Differential Privacy

  • Potentially poor performance

    • Complexity (especially for non-linear functions)

    • Noise

  • Only works with statistical data (though this has fixes)

  • How to calculate sensitivity of arbitrary query?


Differential Privacy for Linked Data


Differential Privacy and Linked Data

  • We want the same privacy guarantees for linked data, but there are no “records.”

  • What should be “unit of difference”?

    • One triple

    • All URIs related to person’s URI

    • All links going out from person’s URI



“Records” for Linked Data

  • Reduce links in graph to attributes

  • Idea:

    • Identify each individual's contribution to the total answer.

    • Find the contribution that affects the answer most.


“Records” for Linked Data

  • Reducing links in the graph to attributes makes it a record.

[Figure: two nodes P1 and P2 joined by a “Knows” link]


“Records” for Linked Data

  • Repeated attributes and null values are allowed.

[Figure: a graph of nodes P1–P4 with several “Knows” links and a “Loves” link, illustrating repeated attributes]


“Records” for Linked Data

  • Repeated attributes and null values are allowed (not good RDBMS form, but it makes the definitions easier; a sketch of the reduction follows).
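A sketch of the reduction (the data model and names are illustrative, not SPIM's actual structures):

    # Each person URI maps to a "record": predicate -> list of object values.
    # Repeated predicates and missing (null) predicates are both allowed.
    records = {
        "ex:P1": {"knows": ["ex:P2", "ex:P3"], "loves": []},
        "ex:P4": {"knows": ["ex:P1"], "loves": ["ex:P2"]},
    }

    # The differential privacy "unit of difference" is then one whole record,
    # i.e. all the links going out from one person's URI.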


Query Sensitivity in Practice

  • Need to find triples that “belong” to a person.

  • Idea:

    • Identify each individual's contribution to the total answer.

    • Find the contribution that affects the answer most.

  • This is done using SPARQL's sorting and limiting functions.


Example

  • COUNT of places visited

[Figure: people P1 and P2 with “Visited” links to states S1, S2, S3, and a “State of Residence” link to MA]

Answer: Sensitivity of 2 (one person contributes two “Visited” triples, so removing that person changes the count by 2)


Using SPARQL

  • Query:

    SELECT (COUNT(?s) AS ?num_places_visited) WHERE {

    ?p :visited ?s }


Using SPARQL

  • Sensitivity Calculation Query (Ideally):

    SELECT ?p (COUNT(?s) AS ?num_places_visited) WHERE {

    ?p :visited ?s .

    ?p foaf:name ?n }

    GROUP BY ?p ORDER BY DESC(?num_places_visited) LIMIT 1


In reality…

  • LIMIT, ORDER BY, and GROUP BY don't work together in 4store…

  • For now: don't use LIMIT; get the top answers manually.

    • I.e., simulate these keywords in Python (a sketch follows).

  • Ideally this would stay on the SPARQL side, so that less data is transmitted (e.g. on large data sets).
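A sketch of the client-side workaround (SPARQLWrapper is one common way to query an endpoint from Python; the endpoint URL and the visited-predicate URI are placeholders):

    from collections import Counter

    from SPARQLWrapper import SPARQLWrapper, JSON

    def max_contribution(endpoint_url):
        """Do GROUP BY / ORDER BY / LIMIT in Python instead of in 4store."""
        sparql = SPARQLWrapper(endpoint_url)
        sparql.setQuery(
            "SELECT ?p ?s WHERE { ?p <http://example.org/visited> ?s }")
        sparql.setReturnFormat(JSON)
        bindings = sparql.query().convert()["results"]["bindings"]

        # GROUP BY ?p with COUNT, done client-side.
        counts = Counter(row["p"]["value"] for row in bindings)

        # ORDER BY DESC + LIMIT 1, done client-side: the largest
        # per-person count is the query's sensitivity.
        return counts.most_common(1)[0] if counts else None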


(Side rant) 4store limitations

  • Many operations are not supported in unison.

  • E.g., FILTER and ORDER BY cannot always be used together.

  • This severely limits the types of queries I could use for testing.

  • It may be desirable to work with a more up-to-date triplestore (e.g. ARQ).

    • Didn't, because I wanted to keep the code in Python.

    • Also, all the code had already been written for 4store.


Problems with this Approach

  • Need to identify “people” in graph.

    • Assume, for example, that a URI with a foaf:name is a person, and use its triples in the privacy calculations.

    • This imposes some constraints on the linked data format.

    • For future work, there may be a way to automatically identify private data, e.g. by using ontologies.

  • Complexity is tied to speed of performing query over large data set.


…and on the Plus Side

  • Model for sensitivity calculation can be expanded to arbitrary statistical functions.

    • e.g. dot products, distance functions, etc.

  • Relatively simple to implement using SPARQL 1.1


Differential Privacy Protocol

[Diagram: Client ↔ Differential Privacy Module ↔ SPARQL Endpoint]

Scenario: The client wishes to make a standard SPARQL 1.1 statistical query. The client has an Ɛ “budget” of overall accuracy for all of its queries.


Differential Privacy Protocol

[Diagram: the Client sends (Query, Ɛ > 0) toward the SPARQL Endpoint]

Step 1: The query and epsilon value are sent to the endpoint and intercepted by the enforcement module.


Differential Privacy Protocol

[Diagram: the Differential Privacy Module sends a sensitivity query to the SPARQL Endpoint]

Step 2: The sensitivity of the query is calculated using a re-written, related query.


Differential Privacy Protocol

[Diagram: the Differential Privacy Module forwards the query to the SPARQL Endpoint]

Step 3: The actual query is sent.


Differential Privacy Protocol

[Diagram: the result plus noise is returned to the Client]

Step 4: The result, with Laplace noise added, is sent back.
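Putting the four steps together, a minimal sketch of the enforcement module's flow (the function names, the budget object, and the query-rewriting step are assumptions; run_query stands in for any SPARQL client call):

    import numpy as np

    def handle_private_query(query, epsilon, budget,
                             run_query, rewrite_for_sensitivity):
        """Steps 1-4: intercept, measure sensitivity, query, add noise."""
        budget.spend(epsilon)                  # Step 1: check and charge the budget.
        sensitivity = run_query(rewrite_for_sensitivity(query))  # Step 2.
        true_result = run_query(query)         # Step 3: the actual query.
        noise = np.random.laplace(0.0, sensitivity / epsilon)
        return true_result + noise             # Step 4: noisy result to the client.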


Design of Privacy System


SPARQL Privacy Insurance Module

  • i.e. SPIM

  • Use authentication, AIR, and differential privacy in one system.

    • Authentication to manage Ɛ-budgets.

    • AIR to control flow of information and non-statistical data.

    • Differential privacy for statistics.

  • Goal: Provide a module that can integrate into SPARQL 1.1 endpoints and provide privacy.


Design

[Diagram: the SPIM Main Process sits between an HTTP Server (with OpenID Authentication), the AIR Reasoner (backed by Privacy Policies), the Differential Privacy Module (backed by User Data), and the Triplestore]


HTTP Server and Authentication

  • HTTP Server: a Django server that handles HTTP requests.

  • OpenID Authentication: a Django module.


SPIM Main Process

  • Controls the flow of information.

  • First checks the user's budget, then applies AIR, then performs the final differentially private query.


AIR Reasoner

  • Performs access control by translating SPARQL queries to N3 and checking them against policies.

  • Can potentially perform more complicated operations (e.g. checking user credentials).


Differential Privacy

  • Works as discussed in the previous slides.

  • Contains the users and their Ɛ-values.


Evaluation


Evaluation

  • Three things to evaluate:

    • Correctness of operation

    • Correctness of differential privacy

    • Runtime

  • Used an anonymized clinical database as the test data, with fake names, social security numbers, and addresses added.


Correctness of Operation

  • Can the system do what we want?

    • Authentication provides access control

    • AIR restricts information and types of queries

    • Differential privacy gives strong privacy guarantees.

  • Can we do better?


Use Case Used in Thesis

  • Clinical database data protection

  • HIPAA: Federal protection of private information fields, such as name and social security number, for patients.

  • 3 users

    • Alice: works at the CDC, needs unhindered access

    • Bob: a researcher who needs access to private fields (e.g. addresses)

    • Charlie: an amateur researcher to whom HIPAA should apply

  • Assumptions:

    • Django is secure enough to handle “clever attacks”

    • Users do not collude, so individual epsilon values can be allocated


Use Case Solution Overview

  • What should happen:

    • Dynamically apply different AIR policies at runtime.

    • Give different epsilon-budgets.

  • How allocated:

    • Alice: No AIR Policy, no noise.

    • Bob: Give access to addresses but hide all other private information fields.

      • Epsilon budget: E1

    • Charlie: Hide all private information fields in accordance with HIPAA

      • Epsilon budget: E2



Example: A Clinical Database

  • The client accesses the triplestore via the HTTP server.

  • OpenID Authentication verifies that the user has access to the data and finds the user's epsilon value.


Example: A Clinical Database

  • The AIR reasoner checks incoming queries for HIPAA violations.

  • The privacy policies contain the HIPAA rules.


Example: A Clinical Database

  • Differential privacy is applied to statistical queries.

  • The statistical result + noise is returned to the client.


Correctness of Differential Privacy

  • Need to test how much noise is added.

    • Too much noise = poor results.

    • Too little noise = no guarantee.

  • Test: Run queries and look at sensitivity calculated vs. actual sensitivity.


How to test sensitivity?

  • Ideally:

    • Test noise calculation is correct

    • Test that noise makes data still useful (e.g. by applying machine learning algorithms).

  • For this project, only the former was tested

    • Machine learning APIs not as prevalent for linked data.

    • What results to compare to?


Test suite

  • 10 queries for each operation (COUNT, SUM, AVG, MIN, MAX)

  • 10 different WHERE clauses

  • Test:

    • Compute the sensitivity from the original query

    • Remove each personal URI using the “MINUS” keyword and see which removal changes the answer most (sketched below)
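A sketch of that exact-sensitivity check (run_query, the query template, and the list of names are placeholders; the MINUS template mirrors the query two slides down):

    def exact_sensitivity(full_query, minus_template, names, run_query):
        """Brute-force sensitivity: remove each person in turn and measure
        how much the aggregate answer changes."""
        full_answer = run_query(full_query)
        return max(abs(full_answer - run_query(minus_template % name))
                   for name in names)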


Example for Sens Test

  • Query:

    PREFIX rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#>

    PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>

    PREFIX foaf: <http://xmlns.com/foaf/0.1#>

    PREFIX mimic: <http://air.csail.mit.edu/spim_ontologies/mimicOntology#>

    SELECT (SUM(?o) as ?aggr) WHERE{

    ?s foaf:name ?n.

    ?s mimic:event ?e.

    ?e mimic:m1 "Insulin".

    ?e mimic:v1 ?o.

    FILTER(isNumeric(?o))

    }


Example for Sens Test

  • Sensitivity query:

    PREFIX rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#>

    PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>

    PREFIX foaf: <http://xmlns.com/foaf/0.1#>

    PREFIX mimic: <http://air.csail.mit.edu/spim_ontologies/mimicOntology#>

    SELECT (SUM(?o) as ?aggr) WHERE{

    ?s foaf:name ?n.

    ?s mimic:event ?e.

    ?e mimic:m1 "Insulin".

    ?e mimic:v1 ?o.

    FILTER(isNumeric(?o))

    MINUS {?s foaf:name "%s"}

    } % (name)


Results Query 6 - Error


Runtime

  • Queries were also tested for runtime.

    • Bigger WHERE clauses

    • More keywords

    • Extra overhead of doing the calculations.


Results Query 6 - Runtime


Interpretation

  • Sensitivity calculation time is on par with query time

    • Might not be good for big data

    • Find ways to reduce sensitivity calculation time?

  • AVG does not do so well…

    • Approximation yields too much noise vs. trying all possibilities

    • Runs ~4x slower than simple querying

    • Solution 1: Look at all data manually (large data transfer)

    • Solution 2: Can we use NOISY_SUM / NOISY_COUNT instead?
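Solution 2 is the standard trick of computing a private average from two simpler noisy aggregates; a sketch (the per-record bound is an assumption about the data, and splitting Ɛ in half is one simple choice):

    import numpy as np

    def noisy_avg(total, count, max_record_value, epsilon):
        """Private AVG computed as NOISY_SUM / NOISY_COUNT."""
        # One person changes SUM by at most max_record_value and COUNT
        # by at most 1; each half of the budget covers one aggregate.
        noisy_sum = total + np.random.laplace(0.0, max_record_value / (epsilon / 2))
        noisy_count = count + np.random.laplace(0.0, 1.0 / (epsilon / 2))
        return noisy_sum / max(noisy_count, 1.0)  # guard against tiny counts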


Conclusion


Contributions

  • Theory on how to apply differential privacy to linked data.

  • Experimental implementation of differential privacy.

    • Verification that it is applied correctly.

  • Overall privacy module for SPARQL queries.

    • Limited, but a good start

  • Other:

    • Updated the SPARQL-to-N3 translation to SPARQL 1.1

    • Expanded upon an IARPA project to create policies against statistical queries


Shortcomings and Future Work

  • Triplestores need some structure for this to work

    • Personal information must be explicitly defined in triples.

    • Is there a way to automatically detect what triples would constitute private information?

  • Complexity

    • Lots of noise for sparse data.

    • Can divide data into disjoint sets to reduce noise, as PINQ does

    • Use localized sensitivity measures?

  • Third party software problems

    • Would this work better using a different Triplestore implementation?


Other work

  • Other implementations:

    • PINQ

    • Airavat

    • PDDP

  • Some of the Theoretical Work Out There

    • Differential privacy paper

    • Exponential Mechanism

    • Noise Calculation

    • Differential Privacy and Machine Learning


Appendix: Results Q1, Q2


Appendix: Results Q3, Q4


Appendix: Results Q5, Q6


Appendix: Results Q7, Q8


Appendix: Results Q9, Q10

