Trust



1. Trust. Course: CS 6381 -- Grid and Peer-to-Peer Computing

2. 2 Source Part 1: A Survey Study on Trust Management in P2P Systems Part 2: Trust-X: A Peer-to-Peer Framework for Trust Establishment

3. 3 Outline What is Trust? What is Trust Management? How to measure Trust? Example Reputation-based Trust Management Systems: DMRep, EigenRep, P2PRep Frameworks for Trust Establishment: Trust-X

4. 4 What is Trust? Kini & Choobineh: trust is "a belief that is influenced by the individual's opinion about certain critical system features". Gambetta: "…trust (or, symmetrically, distrust) is a particular level of the subjective probability with which an agent will perform a particular action, both before [the trustor] can monitor such action (or independently of his capacity ever to be able to monitor it)". The Trust-EC project (http://dsa-isis.jrc.it/TrustEC/): trust is "the property of a business relationship, such that reliance can be placed on the business partners and the business transactions developed with them". Grandison and Sloman: trust is "the firm belief in the competence of an entity to act dependably, securely and reliably within a specified context".

5. 5 What is Trust? Some Basic Properties of Trust Relations Trust is relative to some business transaction. A may trust B to drive her car but not to baby-sit. Trust is a measurable belief. A may trust B more than A trusts C for the same business. Trust is directed. A may trust B to be a profitable customer but B may distrust A to be a retailer worth buying from. Trust exists and evolves in time. The fact that A trusted B in the past does not in itself guarantee that A will trust B in the future. B’s performance and other relevant information may lead A to re-evaluate her trust in B.

6. 6 Reputation, Trust and Reciprocity. Reputation: the perception that an agent creates through past actions about its intentions and norms. Trust: a subjective expectation a peer has about another's future behavior based on the history of their encounters; equivalently, a peer's belief in another peer's capabilities, honesty and reliability based on its own direct experiences, whereas reputation rests on recommendations received from other peers. Reciprocity: mutual exchange of deeds. These are highly related concepts, and the following relationships are expected: an increase in agent ai's reputation in its embedded social network A should also increase the trust other agents place in ai; an increase in agent aj's trust of ai should also increase the likelihood that aj will reciprocate positively to ai's actions; and an increase in ai's reciprocating actions toward other agents in A should also increase ai's reputation in A.

7. 7 Outline What is Trust? What is Trust Management? How to measure Trust? Example Reputation-based Trust Management Systems: DMRep, EigenRep, P2PRep Frameworks for Trust Establishment: Trust-X

8. 8 What is Trust Management? "a unified approach to specifying and interpreting security policies, credentials, relationships [which] allows direct authorization of security-critical actions" – Blaze, Feigenbaum & Lacy. Trust Management is the capture, evaluation and enforcement of trusting intentions. Related areas: distributed agents, artificial intelligence, social sciences.

9. 9 What is Trust Management?

10. 10 What is Trust Management?

11. 11 What is Trust Management? Reputation can be measured directly or indirectly.

12. 12 What is Trust Management? Reputation can be measured directly or indirectly.

13. 13 Outline What is Trust? What is Trust Management? How to measure Trust? Example Reputation-based Trust Management Systems: DMRep, EigenRep, P2PRep Frameworks for Trust Establishment: Trust-X

14. 14 How to measure Trust? An Example Computational Model: A Computational Model of Trust and Reputation (Mui et al., 2001). Assume a social network where no new peers join and no peers leave (i.e., the social network is static). Action space = {cooperate, failing}.

15. 15 How to measure Trust? An Example Computational Model. Assume a social network where no new peers join and no peers leave (i.e., the social network is static). Action space = {cooperate, failing}.

16. 16 How to measure Trust? An Example Computational Model. Reputation: the perception that a peer creates through past actions about its intentions and norms. Let θji(c) represent pi's reputation in a social network of concern to pj for a context c. This value measures the likelihood that pi reciprocates pj's actions.

17. 17 How to measure Trust? An Example Computational Model. θab: b's reputation in the eyes of a. Xab(i): the i-th transaction between a and b. After n transactions we obtain the history Dab = {Xab(1), Xab(2), …, Xab(n)}. In general: Reputation: θji(c) ∈ [0,1]. Let C be the set of all contexts of interest, and let θji(c) represent ai's reputation in an embedded social network of concern to aj for the context c ∈ C. History: Dji(c) = {E*}, the history of encounters that aj has had with ai within the context c. Trust: T(c) = E[θ(c) | D(c)]. The higher the trust level for agent ai, the higher the expectation that ai will reciprocate agent aj's actions.

18. 18 How to measure Trust? An Example Computational Model. θab: b's reputation in the eyes of a. Let p be the number of cooperations by peer b toward a in the n previous encounters. b's reputation θab for peer a should be a function of both p and n. A simple estimator is the proportion of cooperative actions over all n encounters (or transactions): p/n. From statistics, a proportion random variable can be modeled as a Beta distribution.

19. 19 How to measure Trust? An Example Computational Model. NOTE: the Beta distribution Beta(α, β) has density p(θ) ∝ θ^(α−1) (1 − θ)^(β−1) for θ ∈ [0, 1], with mean α/(α + β).

20. 20 How to measure Trust? An Example Computational Model. Prior: p(θ) = Beta(α, β), the estimator for θ. By prior assumption (no initial information), α = β = 1, i.e. a uniform prior. A simple estimator for θab, b's reputation in the eyes of a, is the proportion of cooperations over the n finite encounters.

21. 21 How to measure Trust? An Example Computational Model. Trust is defined as the subjective expectation a peer has about another's future behavior based on the history of encounters: T(c) = E[θ(c) | D(c)]. The higher the trust level for peer ai, the higher the expectation that ai will reciprocate peer aj's actions.

22. 22 How to measure Trust? An Example Computational Model. Assuming that each encounter's cooperation probability is independent of the other encounters between a and b, the likelihood of p cooperations and (n − p) failings over the n encounters is L(θ) = θ^p (1 − θ)^(n−p). Combining the uniform Beta(1, 1) prior with this likelihood, the posterior estimate for θ becomes (subscripts omitted): p(θ | D) = Beta(p + 1, n − p + 1).

23. 23 How to measure Trust? An Example Computational Model. Trust towards b from a is the conditional expectation of θ given D: Tab = p(Xab(n+1) = cooperate | D) = E[θ | D]. With the Beta(p + 1, n − p + 1) posterior this gives Tab = (p + 1)/(n + 2).
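
To make the computation concrete, here is a minimal Python sketch of the estimator above; the class and function names are illustrative, not from the paper.

```python
from dataclasses import dataclass

@dataclass
class History:
    cooperations: int  # p: cooperative encounters of b toward a
    encounters: int    # n: all encounters between a and b

def reputation(h: History) -> float:
    """Simple proportion estimator for theta_ab (requires n > 0)."""
    return h.cooperations / h.encounters

def trust(h: History) -> float:
    """Posterior mean of Beta(p + 1, n - p + 1): T_ab = (p + 1) / (n + 2).
    With no history (n = 0) this is 0.5, the mean of the uniform prior."""
    return (h.cooperations + 1) / (h.encounters + 2)

print(trust(History(cooperations=8, encounters=10)))  # 0.75
```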

24. 24 Outline What is Trust? What is Trust Management? How to measure Trust? Example Reputation-based Trust Management Systems: DMRep, EigenRep, P2PRep Frameworks for Trust Establishment: Trust-X

25. 25 Reputation-based Trust Management Systems Introduction. Examples of completely centralized mechanisms for storing and exploring reputation data: Amazon.com, where visitors usually look for customer reviews before deciding to buy new books, and eBay, where participants in auctions can rate each other after each transaction.

26. 26 Reputation-based Trust Management Systems P2P Properties No central coordination No central database No peer has a global view of the system Global behavior emerges from local interactions Peers are autonomous Peers and connections are unreliable

27. 27 Reputation-based Trust Management Systems Design Considerations. The system should be self-policing: the shared ethics of the user population are defined and enforced by the peers themselves, not by some central authority. The system should maintain anonymity: a peer's reputation should be associated with an opaque identifier rather than with an externally associated identity. The system should not assign any profit to newcomers. The system should have minimal overhead in terms of computation, infrastructure, storage, and message complexity. The system should be robust to malicious collectives of peers who know one another and attempt to collectively subvert the system.

28. 28 Reputation-based Trust Management Systems Design Considerations: DMRep. Managing Trust in a P2P Information System (Aberer, Despotovic, 2001). P2P facts: no central coordination or DB (e.g., not eBay); no peer has a global view; peers are autonomous and unreliable. Trust is important in digital communities, but information is dispersed and sources are not unconditionally trustworthy. Solution: reputation as decentralized storage of a replicated and redundant transaction history; calculate a binary trust metric based on the history of complaints.

29. 29 Reputation-based Trust Management Systems DMRep Notation. Let P denote the set of all peers. The behavioral data B are observations t(q, p) that a peer q makes when it interacts with a peer p. The behavioral data of p: B(p) = { t(p, q) or t(q, p) | q ∈ P }, with B(p) ⊆ B.

30. 30 Reputation-based Trust Management Systems DMRep. In the decentralized environment, a peer q that has to determine the trustworthiness of a peer p has no access to the global knowledge B and B(p). There are two ways to obtain data: directly, by interactions: Bq(p) = { t(q, p) | t(q, p) ∈ B }; indirectly, through a limited number of referrals from witnesses r ∈ Wq ⊆ P: Wq(p) = { t(r, p) | r ∈ Wq, t(r, p) ∈ B }.

31. 31 Reputation-based Trust Management Systems DMRep. Assumption: the probability of cheating or malicious behavior within the society is comparatively low. In case of malicious behavior by q, a peer p can file a complaint c(p, q). Complaints are the only behavioral data B used in this model.

32. 32 Reputation-based Trust Management Systems DMRep. Let us look at a simple situation: p and q interact; later, r wants to determine the trustworthiness of p and q. Assume p is cheating and q is honest. After their interaction, q will file a complaint about p, and p will file a complaint about q in order to hide its misbehavior. From this pair alone, r cannot detect that p is cheating. But if p continues to cheat with more peers, r can conclude that p is very probably the cheater by observing the other complaints about p.

33. 33 Reputation-based Trust Management Systems DMRep. Based on the previous simple scenario, the reputation T(p) of a peer p can be computed as the product T(p) = |{c(p, q) | q ∈ P}| × |{c(q, p) | q ∈ P}|. A high value of T(p) indicates that p is not trustworthy. |{c(p, q) | q ∈ P}|: the number of complaints filed by p; |{c(q, p) | q ∈ P}|: the number of complaints about p.
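
A minimal sketch of this metric in Python, assuming complaints are available as (filer, accused) pairs; the function name is illustrative.

```python
from collections import Counter

def dmrep_trust(complaints: list[tuple[str, str]], p: str) -> int:
    """T(p) = |{c(p, q)}| * |{c(q, p)}|: complaints filed by p times
    complaints about p. A high value marks p as not trustworthy."""
    filed = Counter(filer for filer, _ in complaints)
    about = Counter(accused for _, accused in complaints)
    return filed[p] * about[p]

# A cheater that files cover-up complaints accumulates both factors:
log = [("q", "p"), ("r", "p"), ("p", "q"), ("p", "r")]
print(dmrep_trust(log, "p"))  # 2 filed * 2 received = 4
print(dmrep_trust(log, "q"))  # 1 filed * 1 received = 1
```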

34. 34 Reputation-based Trust Management Systems DMRep. The storage structure proposed in this approach uses P-Grid (others could be used, such as CAN or Chord). P-Grid is a peer-to-peer lookup system based on a virtual distributed search tree: each peer stores the data items for which the associated path is a prefix of the data key. For the trust management application, these are the complaints, indexed by peer number. Properties: there exists an efficient decentralized bootstrap algorithm which creates the access structure without central control; the search algorithm consists of randomly forwarding requests from one peer to another; all algorithms scale gracefully, with time and space complexity both O(log n).

35. 35 Each peer holds part of the overall tree. Every participating peer's position is determined by its path, that is, the binary bit string representing the subset of the tree's overall information that the peer is responsible for. For example, the path of Peer 4 in Figure 1 is 10, so it stores all data items whose keys begin with 10. For fault-tolerance, multiple peers can be responsible for the same path, for example, Peer 1 and Peer 6. P-Grid's query routing approach is as follows: for each bit in its path, a peer stores a reference to at least one other peer that is responsible for the other side of the binary tree at that level. Thus, if a peer receives a binary query string it cannot satisfy, it forwards the query to a peer that is "closer" to the result. In Figure 1, Peer 1 forwards queries starting with 1 to Peer 3, which is in Peer 1's routing table and whose path starts with 1. Peer 3 can either satisfy the query or forward it to another peer, depending on the next bits of the query. If Peer 1 gets a query starting with 0, and the next bit of the query is also 0, it is responsible for the query. If the next bit is 1, however, Peer 1 checks its routing table and forwards the query to Peer 2, whose path starts with 01.
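
The prefix-routing walk described above can be sketched in a few lines of Python; the four-peer topology below is illustrative, not an exact reproduction of Figure 1.

```python
class Peer:
    """Minimal P-Grid peer: responsible for keys prefixed by its binary path."""
    def __init__(self, name: str, path: str):
        self.name, self.path = name, path
        self.refs: dict[int, "Peer"] = {}  # level -> peer covering the other subtree

def route(peer: Peer, key: str) -> Peer:
    for level, bit in enumerate(peer.path):
        if key[level] != bit:
            # Mismatch at this level: forward to a peer "closer" to the result.
            return route(peer.refs[level], key)
    return peer  # key starts with our whole path: we are responsible

a, b, c, d = Peer("A", "00"), Peer("B", "01"), Peer("C", "10"), Peer("D", "11")
a.refs, b.refs = {0: c, 1: b}, {0: d, 1: a}
c.refs, d.refs = {0: a, 1: d}, {0: b, 1: c}
print(route(a, "01").name)  # B: A matches bit 0, forwards at level 1
print(route(a, "11").name)  # D: forwarded to C's side at level 0, then to D
```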

36. 36 Reputation-based Trust Management Systems DMRep. The same data can be stored at multiple peers, giving replicas that improve reliability. As the example shows, collisions of interest may occur, where peers are responsible for storing complaints about themselves. We do not exclude this: for large peer populations these cases will be very rare, and multiple replicas will be available to double-check.

37. 37 Reputation-based Trust Management Systems DMRep. Problem: the peers providing the data could themselves be malicious. Assume that peers are malicious only with a certain probability p ≤ pmax < 1. With r replicas, the probability that all replicas are corrupted satisfies on average pmax^r < ε, where ε is an acceptable fault tolerance. Solution: if we receive the same data about a specific peer from a sufficient number of replicas, we need no further checks; if the data is insufficient or contradictory, we continue to check. In addition, the depth of the exploration of peers' trustworthiness is limited to bound the search space, so we might end up in situations where no clear decision can be made; these cases should be rare.
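
As a quick illustration of the pmax^r < ε bound, the helper below computes the smallest replica count r meeting a given tolerance; the numbers in the example are hypothetical.

```python
import math

def replicas_needed(p_max: float, eps: float) -> int:
    """Smallest r with p_max**r < eps, i.e. the chance that ALL r replicas
    of a data item sit on malicious peers stays below the tolerance eps."""
    return math.floor(math.log(eps) / math.log(p_max)) + 1

print(replicas_needed(0.3, 1e-4))  # 8, since 0.3**8 ~ 6.6e-5 < 1e-4
```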

38. 38 Reputation-based Trust Management Systems DMRep. How does it work? P-Grid has two operations for storing and retrieving information: insert(p, k, v), where p is an arbitrary peer in the network, k is the key value to be searched for, and v is a data value associated with the key; and query(r, k) : v, where r is an arbitrary peer in the network and v the data values returned for the query key k.

39. 39 Reputation-based Trust Management Systems DMRep. How does it work? Every peer p can file a complaint about q at any time. It stores the complaint by sending messages insert(a1, key(p), c(p, q)) and insert(a2, key(q), c(p, q)) to arbitrary peers a1 and a2.

40. 40 Reputation-based Trust Management Systems DMRep – Query Results. Assume that a peer p queries for information about q (p evaluates the trustworthiness of q): p submits messages query(a, key(q)) to arbitrary peers a. This process is performed s times.

41. 41 Reputation-based Trust Management Systems DMRep – Query Results. The result of these queries is a set W with: w, the number of witnesses found; cri(q), the number of complaints q received according to witness ai; cfi(q), the number of complaints q filed according to witness ai; and fi, the frequency with which ai was found (reflecting the non-uniformity of the P-Grid structure).

42. 42 Reputation-based Trust Management Systems DMRep – Variability. Different frequencies fi indicate that not all witnesses are found with the same probability, due to the non-uniformity of the P-Grid structure; in practice the variation can be rather large. This non-uniformity affects not only query messages but also storage messages: witnesses found less frequently will probably also have received fewer storage messages when complaints were filed, so the complaint counts they report will tend to be too low. Problem: we need to compensate the information contribution from every witness. Solution: normalize the values using the frequencies observed during querying. A high contribution (high fi) corresponds to a high probability of being found; a low contribution (low fi) to a low probability.

43. 43 Reputation-based Trust Management Systems DMRep – Variability. Therefore we normalize the values by using the frequencies observed during querying. The following function compensates for the variable probability of an agent being found.
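
The slide does not reproduce the paper's exact compensation function, so the sketch below only illustrates the idea under a simple proportionality assumption: scale each witness's reported counts by how under-observed that witness was relative to the average.

```python
def normalize_reports(reports: list[tuple[int, int, int]]) -> list[tuple[float, float]]:
    """reports: (cr_i, cf_i, f_i) per witness, where f_i is how often
    witness a_i was found during the s queries. Rarely-found witnesses
    likely also missed storage messages, so their raw counts run low;
    scaling by avg_f / f_i compensates proportionally (an illustrative
    stand-in for the paper's derived function)."""
    avg_f = sum(f for _, _, f in reports) / len(reports)
    return [(cr * avg_f / f, cf * avg_f / f) for cr, cf, f in reports]

print(normalize_reports([(4, 2, 4), (1, 1, 1)]))  # boosts the rare witness
```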

44. 44 Reputation-based Trust Management Systems DMRep – Trust. The model decides whether a peer p considers a peer q trustworthy (a binary decision) by tracking the history and computing T. To this end, p keeps statistics on the average number of complaints received and complaints filed, aggregating all observations it makes over its lifetime, and applies the following heuristic:

45. 45 Reputation-based Trust Management Systems DMRep – Trust. This criterion is a heuristic, based on the argument that if an observed complaint value exceeds the general average of the trust measure by too much, the agent must be dishonest. The problem is determining the factor by which it may at most exceed the average value. The authors determined this factor through a probabilistic analysis, described only informally in the paper for space reasons.
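
A hedged sketch of the resulting binary decision; the exceedance factor is a placeholder, since the paper derives it through the probabilistic analysis mentioned above.

```python
def trustworthy(cr_q: float, cf_q: float,
                avg_cr: float, avg_cf: float, factor: float = 4.0) -> bool:
    """Distrust q when its observed complaint product exceeds p's lifetime
    average by more than `factor` (placeholder value, not from the paper)."""
    return cr_q * cf_q <= factor * (avg_cr * avg_cf)

print(trustworthy(cr_q=12, cf_q=10, avg_cr=3, avg_cf=2))  # False: 120 > 24
```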

46. 46 Reputation-based Trust Management Systems DMRep – Discussion. Strength: the method can be implemented in a fully decentralized peer-to-peer environment and scales well for large numbers of participants. Limitations: it assumes an environment with low cheating rates; it relies on a specific data management structure; and it is not robust to malicious collectives of peers.

47. 47 Outline What is Trust? What is Trust Management? How to measure Trust? Example Reputation-based Trust Management Systems: DMRep, EigenRep, P2PRep Frameworks for Trust Establishment: Trust-X

48. 48 Reputation-based Trust Management Systems Design Considerations: EigenRep. The EigenTrust Algorithm for Reputation Management in P2P Networks (Kamvar, Schlosser, Garcia-Molina, 2003). Goal: identify sources of inauthentic files and bias peers against downloading from them. Method: give each peer a trust value based on its previous behavior.

49. 49 Reputation-based Trust Management Systems EigenRep: Terminology. Local trust value cij: the opinion that peer i has of peer j, based on past experience. Global trust value ti: the trust that the entire system places in peer i. These values are also assembled into trust vectors.

50. 50 Reputation-based Trust Management Systems EigenRep: Normalizing Local Trust Values All cij non-negative ci1 + ci2 + . . . + cin = 1

51. 51 Reputation-based Trust Management Systems EigenRep: Local Trust Vector Local trust vector ci: contains all local trust values cij that peer i has of other peers j.

52. 52 Reputation-based Trust Management Systems EigenRep: Local Trust Values. Model assumptions: each time peer i downloads an authentic file from peer j, cij increases; each time peer i downloads an inauthentic file from peer j, cij decreases.

53. 53 Reputation-based Trust Management Systems EigenRep: Local Reputation Values. Local reputation values = own experience. sat(i, j): the number of satisfactory transactions (downloads) peer i has had with peer j. unsat(i, j): the number of unsatisfactory transactions (downloads) peer i has had with peer j. Local reputation value: sij = sat(i, j) − unsat(i, j).

54. 54 Reputation-based Trust Management Systems EigenRep: Normalizing Local Reputation Values. In order to aggregate local trust values, it is necessary to normalize them in some manner. Otherwise, malicious peers could assign arbitrarily high local trust values to other malicious peers, and arbitrarily low local trust values to good peers, easily subverting the system.

55. 55 Reputation-based Trust Management Systems EigenRep: Normalizing Local Reputation Values. The normalized local trust value is cij = max(sij, 0) / Σj max(sij, 0).
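
A direct Python transcription of this normalization; the fallback for peers with no positive opinions (the paper resorts to the pre-trust distribution there) is only noted in a comment.

```python
def normalize_row(s_row: list[float]) -> list[float]:
    """c_ij = max(s_ij, 0) / sum_j max(s_ij, 0). Clipping at zero keeps
    malicious peers from assigning negative trust; a zero row is returned
    unchanged here (EigenTrust substitutes the pre-trust vector instead)."""
    clipped = [max(s, 0.0) for s in s_row]
    total = sum(clipped)
    return clipped if total == 0 else [s / total for s in clipped]

print(normalize_row([3, -2, 1]))  # [0.75, 0.0, 0.25]
```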

56. 56 Reputation-based Trust Management Systems EigenRep: Local Reputation Values. Problem: each peer has limited experience of its own. Solution: get information from other peers who may have more experience with other peers.

57. 57 Reputation-based Trust Management Systems EigenRep: Combining information by asking others. Ask for the opinions of the peers whom you trust. Note that each peer has its own trust vector.

58. 58 Reputation-based Trust Management Systems EigenRep: Aggregating Local Reputation Values. Peer i asks its friends about their opinions on peer k, weighting each friend's opinion by the trust i places in that friend: tik = Σj cij cjk.

59. 59 Reputation-based Trust Management Systems EigenRep: Aggregating Local Reputation Values. Peer i asks its friends about their opinions on all peers; in matrix notation, ti = C^T ci, where C = [cij].

60. 60 Reputation-based Trust Management Systems EigenRep: Aggregating Local Reputation Values. Peer i then asks about its friends' friends (ti = (C^T)^2 ci), and continues in this manner. If n is large, the trust vector ti converges to the same vector for every peer i, namely the left principal eigenvector of C, provided C is irreducible and aperiodic. (Background: whether a Markov chain has a stationary distribution, and whether it is unique, is determined by properties of the chain. Irreducible means that every state is accessible from every other state; aperiodic means that there is no state to which the process returns only with a fixed period greater than one.) The limit t is therefore a global trust vector in this model: its elements tj quantify how much trust the system as a whole places in peer j.

61. 61 Reputation-based Trust Management Systems EigenRep: Global Reputation Vector: t = (C^T)^n ci for large n.

62. 62 Reputation-based Trust Management Systems EigenRep: EigenTrust Algorithm (non-dist)

63. 63 Reputation-based Trust Management Systems EigenRep: EigenTrust Algorithm (non-dist)

64. 64 Reputation-based Trust Management Systems EigenRep: EigenTrust Algorithm (non-dist). Pre-trusted peers: P is a set of peers known to be trusted; p is the pre-trust vector over P, with pi = 1/|P| if i ∈ P and pi = 0 otherwise. Assign some trust to the pre-trusted peers and use this distribution for new or inactive peers.

65. 65 Reputation-based Trust Management Systems EigenRep: EigenTrust Algorithm (non-dist). To avoid malicious collectives, each iteration mixes in the pre-trust vector: t(k+1) = (1 − a) C^T t(k) + a p, where a is some constant less than 1. This strategy breaks collectives by having each peer place at least some trust in the peers in P that are not part of a collective. Strong assumption: pre-trusted peers are essential. They guarantee convergence and break up malicious collectives, so the choice of pre-trusted peers is important; in particular, no pre-trusted peer should be a member of a malicious collective, since that would compromise the quality of the algorithm. To avoid this, the system may choose very few pre-trusted peers (for example, the designers of the network).
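
A compact sketch of the damped iteration in Python; the matrix, pre-trust vector and constant a below are illustrative.

```python
def eigentrust(C: list[list[float]], p: list[float],
               a: float = 0.2, eps: float = 1e-9) -> list[float]:
    """Iterate t <- (1 - a) * C^T t + a * p until convergence.
    C[i][j] = c_ij (rows normalized to 1), p = pre-trust distribution."""
    n, t = len(C), p[:]
    while True:
        t_next = [(1 - a) * sum(C[i][j] * t[i] for i in range(n)) + a * p[j]
                  for j in range(n)]
        if max(abs(x - y) for x, y in zip(t, t_next)) < eps:
            return t_next
        t = t_next

# Peer 1 trusts only itself (a one-peer "collective"); peer 2 is pre-trusted.
C = [[0.0, 0.5, 0.5],
     [0.0, 1.0, 0.0],
     [0.5, 0.0, 0.5]]
print(eigentrust(C, p=[0.0, 0.0, 1.0]))  # damping caps peer 1's self-dealing
```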

66. 66 Reputation-based Trust Management Systems EigenRep: EigenTrust Algorithm (non-dist). The modified basic EigenTrust algorithm (non-distributed).

67. 67 Reputation-based Trust Management Systems EigenRep: EigenTrust Algorithm (distributed) All peers in the network cooperate to compute and store the global trust vector. Each peer stores and computes its own global trust value. Minimize the computation, storage, and message overhead.

68. 68 Distributed Algorithm (cont…) Ai: the set of peers which have downloaded files from peer i. Bi: the set of peers from which peer i has downloaded files.

69. 69 Reputation-based Trust Management Systems EigenRep: EigenTrust Algorithm (distributed) Complexity

70. 70 Reputation-based Trust Management Systems EigenRep: EigenTrust Algorithm (dist. secure). Issue: the trust value of one peer should be computed by more than one other peer, since malicious peers may report false trust values of their own, and malicious peers may compute false trust values for others.

71. 71 Reputation-based Trust Management Systems EigenRep: EigenTrust Algorithm (dist. secure). Solution strategy: the current trust value of a peer must not be computed by and reside at the peer itself, where it could easily be manipulated; instead, the trust value of one peer is computed by more than one other peer. Multiple hash functions over a DHT such as CAN or Chord are used to assign mother peers. The number of mother peers is the same for every peer.
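
A sketch of how a hash family h0…hM−1 can assign M mother peers per peer; salting one secure hash with the index is an assumption standing in for M distinct functions, and the 32-bit coordinate space is illustrative.

```python
import hashlib

def mother_coords(peer_id: str, M: int) -> list[int]:
    """Coordinates of peer_id's M score managers in a [0, 2**32) hash space.
    h_m(x) is emulated as SHA-256 of (m, x); every peer computes the same
    coordinates, so any peer can locate any other peer's mother peers."""
    return [int.from_bytes(
                hashlib.sha256(f"{m}:{peer_id}".encode()).digest()[:4], "big")
            for m in range(M)]

print(mother_coords("peer-42", M=3))  # deterministic: 3 coordinates
```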

72. 72 Reputation-based Trust Management Systems EigenRep: EigenTrust Algorithm (dist. secure)

73. 73 Reputation-based Trust Management Systems EigenRep: EigenTrust Algorithm (dist. secure)

74. 74 Reputation-based Trust Management Systems EigenRep: EigenTrust Algorithm (dist. secure). Here we describe the secure algorithm to compute a global trust vector, using these definitions: each peer has a number M of score managers, whose DHT coordinates are determined by applying a set of one-way secure hash functions h0, h1, …, hM−1 to the peer's unique identifier; posi are the coordinates of peer i in the hash space. Since each peer also acts as a score manager, it is assigned a set of daughters Di, containing the indexes of the peers whose trust value computation it covers. As a score manager, peer i maintains the opinion vector cd of each daughter peer d ∈ Di at some point in the algorithm. Peer i also learns Ad, the set of peers which downloaded files from its daughter d; it will receive trust assessments referring to d from these peers. Finally, peer i gets to know Bd, the set of peers from which its daughter d downloaded files: upon kicking off a global trust value computation, daughter d submits its trust assessments of other peers to its score manager, providing it with Bd.

75. 75 Reputation-based Trust Management Systems EigenRep: Limitations of EigenRep. It cannot distinguish between newcomers and malicious peers. Malicious peers can still cheat cooperatively; a peer should not report on its predecessors by itself. Flexibility: how should reputation values be calculated as peers join, leave, and go on line or off line, and when should global reputation values be updated to reflect the new local reputation vectors of all peers? Anonymity: a mother peer knows its daughters.

76. 76 Outline What is Trust? What is Trust Management? How to measure Trust? Example Reputation-based Trust Management Systems: DMRep, EigenRep, P2PRep Frameworks for Trust Establishment: Trust-X

77. 77 Reputation-based Trust Management Systems P2PRep: Introduction. Choosing reputable servents in a P2P network (Cornelli et al., 2002). The focus is not on the computation of reputations but on the security of the exchanged messages (queries, votes) and on how to prevent different security attacks.

78. 78 Reputation-based Trust Management Systems P2PRep: Introduction. Gnutella is used as the reference system: a fully decentralized P2P infrastructure in which peers have low accountability and trust. Security threats to Gnutella include the distribution of tampered information and man-in-the-middle attacks.

79. 79 Reputation-based Trust Management Systems P2PRep: Sketch of P2PRep. Goals: ensure the authenticity of offerers and voters, and the confidentiality of votes. Public-key encryption provides integrity and confidentiality of messages. Each peer_id is required to be a digest of a public key for which the peer knows the private key. Votes are values expressing opinions on other peers. Servent reputation represents the "trustworthiness" of a servent in providing files; servent credibility represents the "trustworthiness" of a servent in providing votes.

80. 80 Reputation-based Trust Management Systems P2PRep: Sketch of P2PRep. P selects a peer among those who respond to P's query; P polls its peers for opinions about the selected peer; peers respond to the polling with votes; P uses the votes to make its decision.

81. 81 Reputation-based Trust Management Systems P2PRep: Approaches. Two approaches: basic polling, where voters do not provide their peer_id in votes, and enhanced polling, where voters declare their peer_id in votes.

82. 82 Reputation-based Trust Management Systems P2PRep: Basic Polling. Phase 1: Resource searching. p sends a Query message to search for resources, and servents matching the request respond with a QueryHit.

83. 83 Reputation-based Trust Management Systems P2PRep: Basic Polling Phase 2: Vote polling. p polls its peers about the reputation of a top list T of servents, and peers wishing to respond send back a PollReply

84. 84 Reputation-based Trust Management Systems P2PRep: Basic Polling Phase 3: Voter evaluation. p selects a set of voters, contacts them directly, and expects back a confirmation message

85. 85 Reputation-based Trust Management Systems P2PRep: Basic Polling. Phase 4: Resource download. p selects a servent s from which to download the resource and starts a challenge-response phase before downloading.

86. 86 Reputation-based Trust Management Systems P2PRep: Enhanced Polling. Phase 1: Resource searching. p sends a Query message to search for resources, and servents matching the request respond with a QueryHit.

87. 87 Reputation-based Trust Management Systems P2PRep: Enhanced Polling Phase 2: Vote polling. p polls its peers about the reputation of a top list of servents, and peers wishing to respond send back a PollReply

88. 88 Reputation-based Trust Management Systems P2PRep: Enhanced Polling. Phase 3: Voter evaluation. p selects a set of voters and contacts them directly, preventing servents from declaring fake IPs.

89. 89 Reputation-based Trust Management Systems P2PRep: Enhanced Polling. Phase 4: Resource download. p selects a servent s from which to download the resource and starts a challenge-response phase before downloading.

90. 90 Reputation-based Trust Management Systems P2PRep: Comparison: Basic vs Enhanced. Basic polling: all votes are considered equal. Enhanced polling: peer_ids allow p to weight the votes based on each voter v's trustworthiness (credibility).

91. 91 Reputation-based Trust Management Systems P2PRep: Security Improvements (1). Distribution of tampered information: B responds to A with a fake resource. P2PRep solution: A discovers the harmful content from B; A updates B's reputation, preventing further interaction with B; and A becomes a witness against B in pollings by others.

92. 92 Reputation-based Trust Management Systems P2PRep: Security Improvements (2). Man-in-the-middle attack: data from C to A can be modified by B, who is on the path. A broadcasts a Query and C responds; B intercepts the QueryHit from C and rewrites it with B's IP and port; A receives B's reply and chooses B for downloading; B downloads the original content from C, modifies it, and passes it to A.

93. 93 Reputation-based Trust Management Systems P2PRep: Security Improvements (2). Man-in-the-middle attack: P2PRep addresses this problem by including a challenge-response phase before downloading. To impersonate C, B would need C's private key, or would have to forge a public key whose digest is C's identifier. Public-key encryption strongly enhances the integrity of the exchanged messages. Both versions of the protocol address this problem.
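
A sketch of such a challenge-response exchange, assuming the pyca/cryptography package and Ed25519 signatures as a stand-in for whatever public-key scheme the servents actually use; the binding checked is the one P2PRep relies on, peer_id = digest(public key).

```python
import os, hashlib
from cryptography.hazmat.primitives.asymmetric import ed25519
from cryptography.hazmat.primitives.serialization import Encoding, PublicFormat

# Servent C: its peer_id is the digest of its public key.
sk = ed25519.Ed25519PrivateKey.generate()
pub = sk.public_key().public_bytes(Encoding.Raw, PublicFormat.Raw)
peer_id = hashlib.sha256(pub).hexdigest()

# Peer A challenges C with a fresh nonce; C answers with a signature.
nonce = os.urandom(32)
signature = sk.sign(nonce)

# A verifies the key matches the claimed identifier and the signature checks
# (verify() raises InvalidSignature if B forged the response).
assert hashlib.sha256(pub).hexdigest() == peer_id
ed25519.Ed25519PublicKey.from_public_bytes(pub).verify(signature, nonce)
print("responder controls the key behind peer_id", peer_id[:12])
```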

94. 94 Outline What is Trust? What is Trust Management? How to measure Trust? Example Reputation-based Trust Management Systems: DMRep, EigenRep, P2PRep Frameworks for Trust Establishment: Trust-X

95. 95 Frameworks for Trust Establishment Trust-X: Introduction. Trust establishment via trust negotiation: an exchange of digital credentials. Because credentials contain sensitive information, the exchange has to be protected: disclosure policies specify which credentials must be received before a requested credential can be revealed. Claim: current approaches to trust negotiation do not provide a comprehensive solution that takes into account all phases of the negotiation process.

96. 96

97. 97 Frameworks for Trust Establishment Trust-X. An XML-based system designed for a peer-to-peer environment: both parties are equally responsible for negotiation management, and either party can act as a requester or a controller of a resource. X-TNL: an XML-based language for specifying certificates and policies.

98. 98 Frameworks for Trust Establishment Trust-X. Certificates are of two types: credentials, which state personal characteristics of the owner and are certified by a CA, and declarations, which collect personal information about the owner that does not need to be certified. Trust tickets (X-TNL) are used to speed up negotiations for a resource when access was granted in a previous negotiation. Trust-X also supports policy preconditions, and negotiation is conducted in phases.

99. 99 Frameworks for Trust Establishment Trust-X: Credentials and Declarations

100. 100 The basic Trust-X system. The system is composed of a Policy Base, storing disclosure policies; the X-Profile associated with the party, containing the certificates the party has available; a Tree Manager, storing the state of the negotiation; and a Compliance Checker, which tests policy satisfaction and determines request replies. The main task of these modules is to support the negotiation through the exchange of policies and, eventually, of credentials and sensitive resources.

101. 101 Frameworks for Trust Establishment Trust-X: Message exchange in a Trust-X negotiation. Once a trust sequence has been determined, the credential exchange phase is actually executed. Each time a credential is received, the local Compliance Checker module checks local policy satisfaction and verifies at runtime the validity and ownership of the remote credentials.

102. 102 Frameworks for Trust Establishment Trust-X: Disclosure Policies. "They state the conditions under which a resource can be released during a negotiation." Prerequisites, associated with a policy, are a set of alternative disclosure policies that must be satisfied before the disclosure of the policy they refer to.

103. 103 Frameworks for Trust Establishment Trust-X: Logic Formalism. P(): credential type; C: set of conditions.

104. 104 Example. Consider a rental car service. The service is free for the employees of the Corrier company. Moreover, the company already knows Corrier employees and has a digital copy of their driving licenses, so it only asks employees for the company badge and a valid copy of the ID card, to double-check the ownership of the badge. By contrast, the rental service is available on payment for unknown requesters, who have to submit first a digital copy of their driving license and then a valid credit card. These requirements can be formalized as follows:

105. 105 Example (2). Policy pol2 requires the driving license of the requester and is a precondition for proceeding with the rental process: intuitively, there is no reason to ask for a credit card if the requester cannot drive a car. Thus, pol3 can be disclosed once policy pol2, specified in its precondition set, is satisfied. The resource is deliverable (pol4) when either policy pol3 or pol1 is satisfied.
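
A toy Python encoding of these four policies, to show how preconditions gate disclosure; the policy names and credential labels come from the example, while the representation itself is hypothetical.

```python
# (required credentials, alternative preconditions) per policy
POLICIES: dict[str, tuple[set[str], set[str]]] = {
    "pol1": ({"corrier_badge", "id_card"}, set()),   # free path for employees
    "pol2": ({"driving_license"}, set()),            # ability to drive
    "pol3": ({"credit_card"}, {"pol2"}),             # payment, gated by pol2
    "pol4": (set(), {"pol1", "pol3"}),               # deliver the resource
}

def satisfiable(policy: str, disclosed: set[str]) -> bool:
    """A policy fires when its credentials are disclosed and, if it lists
    preconditions, at least one alternative precondition is satisfied."""
    creds, pre = POLICIES[policy]
    if not creds <= disclosed:
        return False
    return not pre or any(satisfiable(q, disclosed) for q in pre)

print(satisfiable("pol4", {"driving_license", "credit_card"}))  # True via pol3
print(satisfiable("pol4", {"credit_card"}))  # False: pol2 (license) unmet
```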

106. 106 Trust-X negotiation

107. 107 Frameworks for Trust Establishment Trust-X: Negotiation Tree. Used in the policy evaluation phase; it maintains the progress of a negotiation and is used to identify at least one trust sequence that can lead to a successful negotiation (a view). Edges in the tree are policies; nodes are terms, where a term is an expression of one of the following forms: P(C), where P is a Trust-X certificate and C is a possibly empty list of policy conditions, or X(C), where X is a variable and C is a nonempty list of policy conditions.

108. 108 Frameworks for Trust Establishment Trust-X: Negotiation Tree (2)

109. 109 Summary. Thanks! Questions? Note: TrustBuilder does not have any facility to speed up negotiation whenever possible, and it lacks the notion of sequence caching.
