Design and Implementation of a Reliable Reputation System for File Sharing in P2P Networks

2006/7/6 黃盈傑


Outline

  • Introduction

  • Related Work

  • System Overview

  • Experimental Results

  • Demo

  • Conclusions & Future Works


Introduction

Problem

Because of the anonymity of P2P networks, some file providers may abuse the system by providing tampered files.


Introduction

Reputation System

It is hard for a user to gather enough information by himself to derive trust values for other users directly.


Introduction

Attacks on the P2P Network

  • Distribution of tampered information

  • Man in the middle attack

[Figure: man-in-the-middle attack: malicious node M intercepts and modifies traffic between nodes U and V]


Introduction

Attacks on the Reputation System (1/2)

  • Re-entry to get rid of the bad history

  • Self replication


Introduction

Attacks on the Reputation System (2/2)

  • Pseudospoofing

  • Shilling attacks


Introduction

Motivation

  • Defend against these attacks.

  • Design a mechanism to determine whether judgments are genuine or not.


Outline

  • Introduction

  • Related Work

  • System Overview

  • Experimental Results

  • Demo

  • Conclusions & Future Works


Related Work

Recommendation-based P2P trust model

  • “A recommendation-based peer-to-peer trust model” [DWJZ 2004]

  • Pure P2P

  • Any node x has a corresponding file node Dx that stores all information for it.


Related Work

Calculation Formula

Rij: node i’s recommendation degree for node j

Sij: successful transactions from node j to node i.

Fij: unsuccessful transactions from node j to node i.

Trust Calculation formula


[Figure: node u downloads from node v and reports Suv or Fuv; node v echoes the evaluation to its file node Dv]

Related Work

Restrain Slander & Magnify

  • For every transaction, when node u submits an evaluation, node v must echo it within a period of time.

  • If node u submits evaluations too frequently, they may not be accepted.


Related Work

Restrain Slander

  • When node u submits Fuv (negative evaluation)

    • If node v echoes in time:

      Fuv is accepted.

    • If node v does not echo in time:

      Fuv is accepted with probability 1 − Tv.

      The higher node v’s trust value, the harder it is to slander him.


Related Work

Restrain Magnify

  • When node u submits Suv (positive evaluation)

    • If node v echoes in time:

      Suv is accepted with probability Tv.

    • If node v does not echo in time:

      Suv is not accepted.

      The lower node v’s trust value, the harder it is to magnify his reputation.
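These echo rules can be sketched in code. This is a minimal sketch; the slides give no implementation, and the assumption that Tv lies in [0, 1] is mine:

```python
import random

def accept_evaluation(is_negative: bool, echoed_in_time: bool, trust_v: float) -> bool:
    """Apply the echo rules of [DWJZ 2004] to an evaluation of node v.

    trust_v is node v's trust value Tv, assumed here to lie in [0, 1].
    """
    if is_negative:
        # Restrain slander: Fuv is accepted outright only if v echoes in time;
        # otherwise it is accepted with probability 1 - Tv.
        if echoed_in_time:
            return True
        return random.random() < 1 - trust_v
    # Restrain magnification: Suv needs v's echo, and is then accepted
    # with probability Tv; without an echo it is rejected.
    if echoed_in_time:
        return random.random() < trust_v
    return False
```

A highly trusted node is thus hard to slander (1 − Tv is small) and easy to praise (Tv is large).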


Outline

  • Introduction

  • Related Work

  • System Overview

  • Experimental Results

  • Demo

  • Conclusions & Future Works


System Overview

Formulas Design (1/4)

Global Trust Calculation Formula:

Cx: client x

GTx: Cx’s global trust

−1 ≤ GTx ≤ 1, w = 2

gx = count of “Good” judgments of Cx

bx = count of “Bad” judgments of Cx

vbx = count of “Very bad” judgments of Cx

Jx = gx + bx + w·vbx
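The formula image on this slide did not survive extraction. A formula consistent with the stated range −1 ≤ GTx ≤ 1 and with Jx = gx + bx + w·vbx would be GTx = (gx − bx − w·vbx) / Jx; the sketch below assumes exactly that, so treat it as an illustration rather than the thesis's definition:

```python
def global_trust(g: int, b: int, vb: int, w: int = 2) -> float:
    """Global trust GTx from Cx's judgment counts.

    Assumes GTx = (g - b - w*vb) / Jx with Jx = g + b + w*vb, an
    illustrative reconstruction consistent with -1 <= GTx <= 1.
    """
    j = g + b + w * vb  # Jx: weighted total number of judgments
    if j == 0:
        return 0.0  # no judgments yet: neutral trust (an assumption)
    return (g - b - w * vb) / j
```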


System Overview

Formulas Design (2/4)

Self-Trust Calculation Formula:

STxy: self-trust for Cx toward Cy

−1 ≤ STxy ≤ 1, w = 2

gxy = count of “Good” judgments that Cx reports on Cy

bxy = count of “Bad” judgments that Cx reports on Cy

vbxy = count of “Very bad” judgments that Cx reports on Cy

Jxy = gxy + bxy + w·vbxy


System Overview

Formulas Design (3/4)

Reputation Calculation Formula:

REPxy represents Cy’s reputation as seen by Cx.

−1 ≤ REPxy ≤ 1; the weighting parameter lies between 0 and 1.


System Overview

Formulas Design (4/4)

Modified Reputation Calculation Formula:

−1 ≤ REPxy ≤ 1; the weighting parameter lies between 0 and 1; w = 2

z: all the other clients in the P2P network
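The reputation formulas themselves were images on the slides. As a purely hypothetical illustration of how REPxy might blend Cx's own experience with global trust under the stated constraints (−1 ≤ REPxy ≤ 1, weight in [0, 1]):

```python
def reputation(self_trust: float, global_trust: float, weight: float = 0.5) -> float:
    """Hypothetical REPxy: a linear blend of STxy (Cx's own experience of Cy)
    and GTy (Cy's global trust). The slides only constrain the ranges; this
    particular blend is an assumption, not the thesis's formula.
    """
    assert -1.0 <= self_trust <= 1.0 and -1.0 <= global_trust <= 1.0
    assert 0.0 <= weight <= 1.0
    return weight * self_trust + (1.0 - weight) * global_trust
```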


System Overview

System Framework of On-line Server


System Overview

System Framework of Off-line Server


System Overview

Database: reputation_data table


System Overview

Database: client_registration table


System Overview

Report Record File (1/2)


System Overview

Report Record File (2/2)

ex:

RP#true#2006/07/02 14:23:37 – 1151821417218#uuid-59616261646162614A7874615032503386C56C2FCE0C42CA9031D5989E73FD4F03#md5:6604305f08ca5b498b5596cbaf901acb#72_cs 3-3.txt#A:Good file
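Given the example above, a report record appears to be a single '#'-separated line. A small parser under that assumption (the field names are mine, and '#' is assumed never to occur inside a field):

```python
def parse_report_record(line: str) -> dict:
    """Parse one report record of the assumed form
    TYPE#FLAG#TIMESTAMP#PEER_ID#md5:HASH#FILENAME#JUDGMENT."""
    parts = line.split("#")
    if len(parts) != 7:
        raise ValueError(f"expected 7 '#'-separated fields, got {len(parts)}")
    rec_type, flag, timestamp, peer_id, md5, filename, judgment = parts
    return {
        "type": rec_type,                 # e.g. "RP"
        "flag": flag == "true",
        "timestamp": timestamp,
        "peer_id": peer_id,               # JXTA-style uuid
        "md5": md5.removeprefix("md5:"),  # file content hash
        "filename": filename,
        "judgment": judgment,             # e.g. "A:Good file"
    }
```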


System Overview

Defend the Attacks (1/2)

  • Distribution of tampered information

    Use reputation as the guide for selecting the file provider.

  • Re-entry to get rid of the bad history

    ID-Design: mac_address + JXTA-ID
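The ID design above (mac_address + JXTA-ID) binds an identity to the machine's hardware address, so re-entering with a fresh JXTA-ID still yields a traceable identity. A sketch of one way to derive such an ID; hashing the concatenation is my assumption, and the thesis may combine the two values differently:

```python
import hashlib

def client_id(mac_address: str, jxta_id: str) -> str:
    """Derive a client ID from mac_address + JXTA-ID.

    The concatenate-then-hash scheme is illustrative; the point is that the
    same machine keeps producing IDs linkable to the same MAC address.
    """
    return hashlib.md5((mac_address + "|" + jxta_id).encode("utf-8")).hexdigest()
```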


System Overview

Defend the Attacks (2/2)

  • Self replication

    Our reputation system is not a voting mechanism.

    Clients cannot download files offered by themselves.

  • Pseudospoofing

    ID-Design: mac_address + JXTA-ID

  • Shilling attacks

    Use the Monitor to detect probable malicious attacks.


System Overview

Concept of the Monitor

  • The system accepts all reports from the clients.

  • An alert threshold (parameter) is set for each action.

  • The Monitor acts according to those parameters.
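The check phase described above can be sketched as threshold counting. Report layout, action names, and the alert representation are illustrative assumptions, not the thesis's data model:

```python
from collections import Counter

def set_alert_states(reports, thresholds):
    """Count each (client, action) pair in the accepted reports and set an
    alert-state whenever a count reaches that action's threshold."""
    counts = Counter((r["client"], r["action"]) for r in reports)
    alerts = {}
    for (client, action), n in counts.items():
        if n >= thresholds.get(action, float("inf")):
            alerts.setdefault(client, set()).add(action)
    return alerts
```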


System Overview

Monitor Flowchart

[Flowchart] Monitor Start → Check Phase: check every client’s report record file and set its alert-state according to the corresponding parameters → any client’s alert-state set?

  • Yes → Determine Phase (F-9, F-12~F-16); a client is restrained from reporting while F-8~F-12 is set.

  • No → Monitor Done.


System Overview

Determine Phase

  • Cx: the client whose alert-state is set

  • Cg: a client who gives Cx a good judgment

  • Cb: a client who gives Cx a bad judgment

  • GTx: global trust of Cx, −1 ≤ GTx ≤ 1

  • NGTx: normalized global trust of Cx, 0 ≤ NGTx ≤ 1


System Overview

Determine Phase: F-9

Cg gives too many good judgments to a specific file of Cx

  • Determine: Cg magnifies Cx

  • Punish Cg

  • Remove those good judgments from Cx


System Overview

Determine Phase: F-12

Cb gives too many bad judgments to a specific file of Cx

  • Determine: Cb slanders Cx

  • Punish Cb

  • Remove those bad judgments from Cx


System Overview

Determine Phase: F-14

Cx gets too many good judgments from Cg

[Flowchart] P(NGTx) accept? → Yes: do nothing. → No: are there other judgments? → Yes: do nothing; No: remove those good judgments from Cx and reset F-14 of Cx.


System Overview

Determine Phase: F-16

Cx gets too many bad judgments from Cb

[Flowchart] P(1 − NGTx) accept? → Yes: do nothing. → No: P(NGTb) accept? → Yes: do nothing; No: are there other judgments? → Yes: do nothing; No: remove those bad judgments from Cx and reset F-16 of Cx.


System Overview

Determine Phase: F-13

Cx gets too many good judgments

[Flowchart] Calculate avg_good_judgment, avg_bad_judgment & final_judge → final_judge > 0? → Yes: do nothing; No: remove those good judgments from Cx, punish Cg, and reset F-13 of Cx.

avg_good_judgment = mean of NGTg over all Cg

avg_bad_judgment = mean of NGTb over all Cb

final_judge = avg_good_judgment − avg_bad_judgment
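The F-13/F-15 decision value reduces to two averages. This is a direct transcription of the definitions above; treating an empty group as average 0 is my assumption:

```python
from statistics import mean

def compute_final_judge(ngt_good, ngt_bad):
    """final_judge = avg_good_judgment - avg_bad_judgment, where the averages
    are the mean normalized global trusts of the Cg and Cb groups."""
    avg_good = mean(ngt_good) if ngt_good else 0.0  # empty group -> 0 (assumption)
    avg_bad = mean(ngt_bad) if ngt_bad else 0.0
    return avg_good - avg_bad
```

A positive final_judge means the good-judging clients are, on average, more trustworthy than the bad-judging ones.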


System Overview

Determine Phase: F-15

Cx gets too many bad judgments

[Flowchart] Calculate avg_good_judgment, avg_bad_judgment & final_judge → final_judge > 0? → Yes: remove those bad judgments from Cx and punish Cb; No: punish Cx. → Reset F-15 of Cx.

avg_good_judgment = mean of NGTg over all Cg

avg_bad_judgment = mean of NGTb over all Cb

final_judge = avg_good_judgment − avg_bad_judgment


Outline

  • Introduction

  • Related Work

  • System Overview

  • Experimental Results

  • Demo

  • Conclusions & Future Works


Experimental Results

Experiment 1 (1/3)

  • 1,000 clients, 10,000 files.

  • Each client downloads 100 files.

  • This is an ideal network: any client can find all files of all clients, and he selects the file owner with the highest reputation as the file provider.

  • A successful download means the client gets the right file he wants.
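The provider-selection rule of this ideal network is simply an argmax over reputation. A one-line sketch; the mapping from owners to REP values is an assumed input:

```python
def choose_provider(owners, reputation_of):
    """Among all owners of the wanted file, pick the one with the highest
    reputation as seen by the downloader; unknown owners default to 0."""
    return max(owners, key=lambda o: reputation_of.get(o, 0.0))
```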


Experimental Results

Experiment 1 (2/3)

Types of bad clients:

  • Type-1: Bad provider.

    A bad client that provides only wrong files.

  • Type-2: Slanderer.

    Gives a “bad” judgment to a client who provided him the right file.

  • Type-3: Magnifier.

    Gives a “good” judgment even to a client who provided him the wrong file.


Experimental Results

Experiment 1 (3/3)

  • Client Setting:

    •  = 1,  = 1.

  • Server Monitor Setting:

    • Acts after every 5,000 downloads.

    • Determine F-13 (Cx gets too many good judgments)

      & F-15 (Cx gets too many bad judgments)

    • Both thresholds are set to 5.


Experimental Results

Exp1 Type-1: Bad provider


Experimental Results

Exp1 Type-2: Slanderer


Experimental Results

Exp1 Type-3: Magnifier


Experimental Results

Experiment 2

Compare different variations in the number of on-line clients.


Experimental Results

Exp2 Type-1: Bad provider


Experimental Results

Exp2 Type-2: Slanderer


Experimental Results

Exp2 Type-3: Magnifier


Experimental Results

Experiment 3

Compare “clients select the file provider with the highest reputation” with “clients select a file owner whose reputation is higher than a threshold”.


Experimental Results

Exp3 Type-1: Bad provider


Experimental Results

Exp3 Type-2: Slanderer


Experimental Results

Exp3 Type-3: Magnifier


Experimental Results

System Evaluation (1/2)

n: number of clients, m: number of report records

  • Procedure of the Monitor

    O(n·m²) time complexity

  • Storing the report records

    O(n·m) space complexity


Experimental Results

System Evaluation (2/2)

  • The Monitor checks one client’s record with 2,000 entries:

    • AMD 1.53 GHz, 512 MB RAM

      average: 559 ms (15 runs: 406~859 ms).

  • Each entry takes about 163 bytes plus the filename.

  • 2,000 entries take about 344 KB.


Outline

  • Introduction

  • Related Work

  • System Overview

  • Experimental Results

  • Demo

  • Conclusions & Future Works




Conclusions & Future Works

Conclusions

  • Designed the Monitor to detect probable malicious behavior, find the malicious clients, and punish them.

  • To improve scalability, the server can be extended to multiple servers.


Conclusions & Future Works

Future Works

  • Solve the key-application problem for encryption and decryption on multiple servers.

  • Improve the Monitor’s automatic determination abilities for the other alert-states.

  • Improve the reputation system to give clients incentives to share their files.


THE END

