Personalized privacy protection in social networks



Personalized Privacy Protection in Social Networks

Speaker: XuBo



Privacy Protection



Privacy Protection in Social Networks

  • More and more people join multiple social networks on the Web

  • As a service provider, it is essential to protect users’ privacy while at the same time providing “useful” data



Previous Works

  • Clustering-based approaches

  • Graph editing methods

  • Drawback

    • Different users may have different privacy preferences

    • Enforcing the same protection level for everyone may be unfair to some users and may render the published data useless



Personalized Privacy Protection in Social Networks

  • Define different levels of protection for users and incorporate them into the same published social network

  • A user can clearly estimate how much an attacker can learn about him

  • The knowledge an attacker uses to uncover a user’s private information is called the background knowledge



Background Knowledge



A simple example

  • Protection objectives

    • the probability that an attacker can determine that person P is node u in the published graph should be less than 50%

    • the probability that an attacker can determine that persons P1 and P2 are connected should be less than 50%
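If the anonymization hides each person among a group of indistinguishable nodes, the first objective reduces to a simple bound. A minimal sketch, assuming the attacker can only guess uniformly within a group:

```python
# Sketch of the first objective: if person P could equally be any of
# k indistinguishable nodes, the attacker's best guess succeeds with
# probability 1/k, so a group size of k = 2 already caps it at 50%.

def identification_probability(k):
    """Chance of correctly pinning person P to one node in a group of k."""
    return 1.0 / k

print(identification_probability(2))  # 0.5
```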



Personalized Protection



Method

  • For Level 1 protection, we use node label generalization

  • For Level 2 protection, we combine the noise node/edge adding methods based on the protection at Level 1

  • For Level 3 protection, we further use the edge label generalization to achieve the protection objectives



Abstract

  • PROBLEM DEFINITION

  • LEVEL 1 PROTECTION

  • LEVEL 2 PROTECTION

  • LEVEL 3 PROTECTION

  • EXPERIMENTS



PROBLEM DEFINITION

  • Given a graph G(V, E) and a constant k



NP-hard problem

  • Here the cost is the sum of label generalization levels. For example, in Figure 1(d), if we make the labels in the group {a, d} the same, they become [(Africa, America), 2*], so the node label generalization cost of this group is 4.
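The cost computation can be sketched as follows; the per-label hierarchy levels below are illustrative assumptions, not the paper's actual taxonomy:

```python
# Toy sketch of node-label generalization cost. Each attribute value
# sits at some level of an assumed generalization hierarchy, e.g.
#   age: 25 -> 2* -> *   (level 0 -> 1 -> 2)
# The cost of a group is the sum, over all nodes and attributes, of
# how many levels each original label was raised.

def generalization_cost(group_labels, generalized, level_of):
    """group_labels: list of per-node label tuples;
    generalized: the common generalized label tuple for the group;
    level_of: maps each label value to its hierarchy level."""
    cost = 0
    for labels in group_labels:
        for original, common in zip(labels, generalized):
            cost += level_of[common] - level_of[original]
    return cost

# Group {a, d}: both attributes of both nodes rise one level, so the
# total cost is 2 nodes * 2 attributes * 1 level = 4, matching the
# Figure 1(d) example.
level_of = {"Africa": 0, "America": 0, "25": 0, "21": 0,
            ("Africa", "America"): 1, "2*": 1}
cost = generalization_cost(
    [("Africa", "25"), ("America", "21")],
    (("Africa", "America"), "2*"),
    level_of)
print(cost)  # 4
```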



LEVEL 1 PROTECTION

  • Level 1 background knowledge is the node label list

  • Generalization groups nodes and makes the nodes within each group share one generalized node label list



Achieve Level 1 protection

  • If each group’s size is at least k, the first objective is achieved.

  • To satisfy objectives (2) and (3), the following conditions must be satisfied:

The first condition requires that no two nodes in the same group connect to the same node

The second condition requires that there are no edges within a group
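These two conditions can be checked per candidate group. A sketch, assuming the graph is given as an adjacency map from each node to its neighbor set:

```python
import itertools

def satisfies_sgc(group, adj):
    """Check the two safety constraints for one group.
    adj: dict mapping each node to the set of its neighbors."""
    for u, v in itertools.combinations(group, 2):
        if v in adj[u]:       # no edge inside the group
            return False
        if adj[u] & adj[v]:   # no shared neighbor outside the group
            return False
    return True

# Nodes 1 and 2 neither touch each other nor share a neighbor, so
# {1, 2} is safe; {1, 3} is not, because (1, 3) is an edge.
adj = {1: {3}, 2: {4}, 3: {1}, 4: {2}}
print(satisfies_sgc({1, 2}, adj))  # True
print(satisfies_sgc({1, 3}, adj))  # False
```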



The Safety Grouping Condition (SGC)



Algorithm 1: Generate safe groups



LEVEL 2 PROTECTION

  • Adding nodes/edges into GL1

  • Label Assignment



Construct a k-degree anonymous graph

  • The first step is to set a target degree for each node

  • The second step is to randomly create edges, under SGC, between the nodes that need to increase their degrees

  • We add noise nodes and connect them to nodes in GL1, under SGC, so that every node in GL1 reaches its target degree

  • We then hide the noise nodes by making their degrees equal a preselected target degree

The target degree is the degree shared by the largest number of size-k groups in GL1
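The first step and the choice of the noise nodes' target degree can be sketched as below; the function names and group representation are assumptions, and the SGC checks that the paper's Algorithm 2 interleaves with these steps are omitted:

```python
from collections import Counter

def target_degrees(groups, degree):
    """Step 1 sketch: raise every node's degree target to the maximum
    degree in its group, so group members become indistinguishable
    by degree."""
    targets = {}
    for group in groups:
        t = max(degree[v] for v in group)
        for v in group:
            targets[v] = t
    return targets

def noise_target_degree(groups, degree, k):
    """Pick the degree shared by the largest number of size-k groups,
    so noise nodes fixed to this degree blend into existing groups."""
    counts = Counter()
    for group in groups:
        degs = {degree[v] for v in group}
        if len(group) == k and len(degs) == 1:
            counts[next(iter(degs))] += 1
    return counts.most_common(1)[0][0]

groups = [[1, 2], [3, 4], [5, 6]]
degree = {1: 2, 2: 2, 3: 2, 4: 2, 5: 3, 6: 1}
print(target_degrees(groups, degree))       # node 6 is raised to 3
print(noise_target_degree(groups, degree, 2))  # 2
```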



Algorithm 2: Degree Anonymizing Algorithm with X



A running example of Algorithm 2



Label Assignment

  • Nocon(A1,A2, l)

  • A node/edge label assignment that requires fewer Nocon(A1, A2, l) changes is preferred

  • heuristic method

    • first assign labels to the noise edges

    • decide the labels of noise nodes



Assign labels to the noise edges



Decide the labels of noise nodes



LEVEL 3 PROTECTION

  • degree label sequence

  • If a group contains at least one node that needs Level 3 protection, make the degree label sequences of all nodes in the group the same by generalizing edge labels
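A sketch of checking whether a group already shares one degree label sequence; representing the sequence as the multiset of labels on a node's incident edges is an assumption about the paper's notation:

```python
from collections import Counter

def degree_label_sequence(node, incident_labels):
    """Multiset of edge labels incident to a node (assumed
    representation of the paper's degree label sequence)."""
    return Counter(incident_labels[node])

def group_uniform(group, incident_labels):
    """True when all nodes in the group share the same degree label
    sequence, i.e. no further edge-label generalization is needed."""
    seqs = [degree_label_sequence(v, incident_labels) for v in group]
    return all(s == seqs[0] for s in seqs)

incident = {1: ["match", "match"],
            2: ["match", "match"],
            3: ["match", "unmatch"]}
print(group_uniform([1, 2], incident))  # True
print(group_uniform([1, 3], incident))  # False
```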



Generate degree label anonymous graph



EXPERIMENTS

  • Utilities

    • One hop query (1 hop)

      • (l1, le, l2)

    • Two hop query (2 hop)

      • (l1, le, l2, le2, l3)

    • Relative error = |original query answer − query answer on the anonymized graph| / original query answer

    • Degree Distribution

    • EMD (Earth Mover Distance)

      • a good measure of the difference between two distributions
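For one-dimensional degree distributions over a common support, the EMD reduces to the accumulated difference of the running sums. A minimal sketch:

```python
def emd_1d(p, q):
    """Earth Mover's Distance between two 1-D distributions given as
    aligned probability vectors over the same support: the sum of
    absolute differences of the running cumulative sums."""
    total, cum = 0.0, 0.0
    for pi, qi in zip(p, q):
        cum += pi - qi
        total += abs(cum)
    return total

# Shifting half the mass by one bin and half by one bin costs 1.0.
print(emd_1d([0.5, 0.5, 0.0], [0.0, 0.5, 0.5]))  # 1.0
```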



Datasets

  • Speed Dating Data

    • 552 nodes and 4194 edges

    • Each node has 19 labels

    • Edge labels are “match”/“unmatch”

  • ArXivData

    • 19835 nodes and 40221 edges

    • Each node denotes an author

    • Edges are labeled “seldom” (weight 1), “normal” (weight 2–5), and “frequent” (weight greater than 5)

  • ArXivData with uniform labels
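The ArXiv edge-weight bucketing described above can be written directly:

```python
def weight_to_label(w):
    """Map a co-authorship edge weight to its categorical edge label,
    following the bucketing used for the ArXiv dataset."""
    if w == 1:
        return "seldom"
    if 2 <= w <= 5:
        return "normal"
    return "frequent"

print(weight_to_label(1))  # seldom
print(weight_to_label(4))  # normal
print(weight_to_label(9))  # frequent
```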



Results

  • The personalized approach achieves higher utility

  • The average relative error increases as k increases



Results (2)



Results (3)

