
Detecting Deception in Reputation Management




  1. Detecting Deception in Reputation Management Appeared in AAMAS’03 by Bin Yu and Munindar P. Singh, Department of Computer Science, North Carolina State University Speaker: Yu-Hsin Shih

  2. Introduction • Motivation • Background • Deception • Experiment Results

  3. Motivation • Detect deception through the propagation and aggregation of testimonies. • Apply belief functions within a weighted majority technique.

  4. Background • Dempster-Shafer Theory • Local Trust Ratings • Combining Belief Functions • Trust Networks

  5. Dempster-Shafer Theory • Also known as the theory of belief functions, Bel(X); it allows us to base degrees of belief for a question on probabilities for a related question. • Belief committed to a hypothesis does not determine the belief committed to its negation: Bel(X) + Bel(¬X) may be less than 1. • Bel(X) for a set X is defined as the sum of the masses of all non-empty subsets of X. Ex. With frame Θ = {A, B}: Bel({A,B}) = m({A}) + m({B}) + m({A,B}) = 1

  6. Dempster-Shafer Theory (cont.) • Example: coin flipping (head or not?) • 1. No confidence that the coin is fair: • Bel(Head) = 0 × 0.5 = 0 • Bel(¬Head) = 0 × 0.5 = 0 • m({Head, ¬Head}) = 1, i.e., all mass stays uncommitted • 2. 90% confidence that the coin is fair: • Bel(Head) = 0.9 × 0.5 = 0.45 • Bel(¬Head) = 0.9 × 0.5 = 0.45 • m({Head, ¬Head}) = 0.1, the uncommitted remainder (Bel of the whole frame is still 1)
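
As a concrete illustration of slides 5 and 6, here is a minimal Python sketch (the helper name and set encoding are mine, not from the paper) that computes Bel(X) as the sum of the masses of all non-empty subsets of X, checked against the 90%-confidence coin example:

```python
# Minimal Dempster-Shafer belief computation (hypothetical helper).
# Focal sets are frozensets mapped to their masses m(A).
def bel(masses, X):
    """Bel(X) = sum of the masses of all non-empty subsets of X."""
    return sum(m for A, m in masses.items() if A and A <= X)

# Slide 6's coin example with 90% confidence that the coin is fair:
# 0.9 of the mass splits evenly between Head and not-Head, and the
# remaining 0.1 stays uncommitted on the whole frame {H, T}.
m = {
    frozenset({"H"}): 0.45,
    frozenset({"T"}): 0.45,
    frozenset({"H", "T"}): 0.10,
}
print(bel(m, frozenset({"H"})))       # 0.45
print(bel(m, frozenset({"T"})))       # 0.45
print(bel(m, frozenset({"H", "T"})))  # 1.0 (Bel of the whole frame)
```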

  7. Local Trust Ratings • How do we define an agent’s belief function? • Each agent Ai has a lower and an upper threshold for trust ratings, ωi and Ωi, where 0 ≤ ωi ≤ Ωi ≤ 1. • Given a series of responses from agent Aj and the two thresholds ωi and Ωi of agent Ai, the bpa toward Aj is m({TAj}) = Σ f(xk) over all xk ≥ Ωi; m({¬TAj}) = Σ f(xk) over all xk ≤ ωi; m({TAj, ¬TAj}) = Σ f(xk) over all ωi < xk < Ωi, where f(xk) denotes the probability of observing quality of service xk.
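
A sketch of how slide 7's bpa could be computed, assuming f(xk) is the empirical frequency of each observed quality-of-service value; the function name and the dictionary encoding of the focal sets are mine:

```python
# Local-rating bpa from slide 7 (names are illustrative). Mass above
# Omega_i supports trust T, mass below omega_i supports distrust ~T,
# and the middle band stays uncommitted on {T, ~T}.
from collections import Counter

def local_bpa(responses, omega_i, Omega_i):
    n = len(responses)
    freq = Counter(responses)      # f(x_k): empirical probability of x_k
    m_T = sum(c for x, c in freq.items() if x >= Omega_i) / n
    m_notT = sum(c for x, c in freq.items() if x <= omega_i) / n
    m_theta = 1.0 - m_T - m_notT   # mass left on {T, ~T}
    return {"T": m_T, "~T": m_notT, "T,~T": m_theta}

print(local_bpa([0.9, 0.8, 0.3, 0.5, 0.95], omega_i=0.4, Omega_i=0.7))
# ≈ {'T': 0.6, '~T': 0.2, 'T,~T': 0.2}
```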

  8. Combining Belief Functions • A and B range over subsets of Θ; Bel1 and Bel2 are their belief functions, with bpas m1 and m2. • Then the combined function m1 ⊕ m2 : 2^Θ → [0,1] is defined by (m1 ⊕ m2)(C) = Σ{A∩B=C} m1(A)·m2(B) / (1 − Σ{A∩B=∅} m1(A)·m2(B)), where the sums range over the focal sets of the two independent sources.
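
Slide 8's combination rule is Dempster's rule; a minimal sketch with frozensets as focal elements (the normalization by 1 − K discards the conflicting mass K on empty intersections):

```python
# Dempster's rule of combination over frozenset focal elements.
def dempster_combine(m1, m2):
    combined, conflict = {}, 0.0
    for A, mA in m1.items():
        for B, mB in m2.items():
            C = A & B
            if C:
                combined[C] = combined.get(C, 0.0) + mA * mB
            else:
                conflict += mA * mB  # K: mass assigned to the empty set
    if conflict >= 1.0:
        raise ValueError("total conflict: beliefs cannot be combined")
    return {C: m / (1.0 - conflict) for C, m in combined.items()}

# Two bpas over the frame {T, ~T}:
T, N = frozenset({"T"}), frozenset({"~T"})
theta = T | N
m1 = {T: 0.6, N: 0.2, theta: 0.2}
m2 = {T: 0.5, N: 0.3, theta: 0.2}
print(dempster_combine(m1, m2))  # most of the mass lands on T
```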

  9. Trust Networks • Distinction between local belief and total belief. • Local belief comes from direct interactions with Ag. • Total belief combines the local belief with testimonies received from witnesses.

  10. Trust Networks (cont.) • [Figure: referral graph] Ar wants to evaluate Ag’s trustworthiness: Ar requests Ag’s value; each agent it reaches either returns Ag’s value (a hit) or returns up to F referrals for Ar to pursue (some branches fail). In the example the branching factor F is 3 and the successful referral chain to Agoal has depth 3.

  11. Trust Networks (cont.) • Shorter referral chains are more likely to be accurate. • To limit the effort expended in pursuing referrals, a threshold depthLimit bounds the length of any referral chain. • Given a set of witnesses W = {W1, W2, …, WL}, Ar updates its total belief in Ag by combining its local belief with the witnesses’ testimonies, using the rule from slide 8, as sketched below.
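
Continuing the previous sketch (it reuses dempster_combine, m1, m2, T, N, theta defined there), one plausible reading of slide 11's update is a fold of Dempster's rule over the requester's local bpa and the L witnesses' bpas:

```python
# Sketch of the total-belief update (my reading of slide 11): fold the
# requester's local bpa together with each witness's bpa.
from functools import reduce

def total_belief(local_m, witness_ms):
    return reduce(dempster_combine, witness_ms, local_m)

witnesses = [m2, {T: 0.7, N: 0.1, theta: 0.2}]
print(total_belief(m1, witnesses))
```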

  12. Deception • Deception Models • Weighted Majority Algorithm • Deception Detection

  13. Deception Models • [Figure: rating-transformation curves for the four models] Normal • Complementary • Exaggerated negative • Exaggerated positive

  14. Deception Models • Normal: x′k = xk • Complementary: x′k = 1 − xk • Exaggerated positive: x′k = xk + α(1 − xk) • Exaggerated negative: x′k = xk − αxk/(1 − α) • α (0 < α < 1) is the exaggeration coefficient. (See the sketch below.)
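
A sketch of the four rating transformations, transcribing the formulas as given on slide 14 (function and model names are mine; note that for large α the exaggerated-negative formula can leave [0,1] and may need clamping):

```python
# The four deception models from slide 14 (names are illustrative).
# x is the true rating in [0,1]; alpha in (0,1) is the exaggeration
# coefficient.
def deceive(x, model, alpha=0.5):
    if model == "normal":
        return x
    if model == "complementary":
        return 1.0 - x
    if model == "exaggerated_positive":
        return x + alpha * (1.0 - x)          # inflate toward 1
    if model == "exaggerated_negative":
        return x - alpha * x / (1.0 - alpha)  # deflate toward 0, as written on the slide
    raise ValueError(f"unknown model: {model}")

for name in ("normal", "complementary",
             "exaggerated_positive", "exaggerated_negative"):
    print(name, deceive(0.8, name, alpha=0.25))
```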

  15. Weighted Majority Algorithm • Two basic ideas: • Assign weights to advisors and make predictions based on the weighted sum of the ratings they provide. • Adjust the weights after a failed prediction. • Challenge: • Ratings from witnesses are belief functions, not scalars.

  16. Weighted Majority Algorithm (cont.) • WMC (continuous WMA) allows predictions to be chosen from the interval [0,1]. • The master algorithm is applied to a pool of n algorithms. • Each algorithm in the pool makes its own prediction in trial j. • λj : the prediction of the master algorithm in trial j. • ρj : the result of trial j, where the pool predictions, λj, and ρj all lie in [0,1].
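
For reference, a scalar sketch of WMC using the standard Littlestone-Warmuth update w ← w(1 − (1 − β)|prediction − outcome|); this is assumed here for illustration, since the paper's actual variant aggregates belief functions rather than scalars:

```python
# Continuous weighted majority (WMC) sketch with scalar predictions.
# beta in [0,1) controls how sharply a wrong predictor is penalized.
def wmc_predict(weights, predictions):
    # lambda_j: weighted mean of the pool's predictions in trial j
    return sum(w * p for w, p in zip(weights, predictions)) / sum(weights)

def wmc_update(weights, predictions, outcome, beta=0.5):
    # rho_j is the observed result; each weight shrinks in proportion
    # to its predictor's absolute error |prediction - rho_j|
    return [w * (1.0 - (1.0 - beta) * abs(p - outcome))
            for w, p in zip(weights, predictions)]

weights = [1.0, 1.0, 1.0]
preds = [0.9, 0.2, 0.8]   # three witnesses' ratings for one trial
print(wmc_predict(weights, preds))              # master prediction
print(wmc_update(weights, preds, outcome=0.85)) # the 0.2 witness loses weight
```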

  17. Experiment Results • Testbed Environment • Number of Witnesses • Accuracy of Predictions • Weights of Witnesses

  18. Testbed Environment • 100 agents • 4 out-edges per agent • 10 agents give complementary ratings, 10 give exaggerated positive ratings, and 10 give exaggerated negative ratings • Querying phase: 500 rounds of querying • Trust phase: 10 agents act as evaluating agents; each evaluates every agent except itself

  19. Number of Witnesses • [Figure] Average number of witnesses found for different depths of the trust network, with branching factor F = 1, 2, 3, 4 (after 5000 cycles); x-axis: depth of trust networks.

  20. Number of Witnesses • [Figure] Average number of witnesses found from 0 to 15000 cycles; x-axis: rounds (cycles).

  21. Number of Witnesses • [Figure] Percentage of witnesses found from 0 to 15000 cycles; x-axis: rounds (cycles).

  22. Accuracy of Predictions • [Figure] The rating error for different numbers of witnesses (at 0, 2500, and 5000 cycles); x-axis: number of witnesses.

  23. Accuracy of Predictions • [Figure] Average rating error during weight learning.

  24. Weights of Witnesses • [Figure] Average weights of witnesses for different deception models.

  25. Weights of Witnesses • [Figure] Average weights of witnesses for different deception models; x-axis: exaggeration coefficient.
