
Computational Trust and Reputation Models
Dr. Jordi Sabater-Mir, Dr. Laurent Vercouter
IIIA


Presentation Transcript


    1. Computational Trust and Reputation Models

    3. Presentation index: Motivation, Approaches to control the interaction, Some definitions, The computational perspective

    4. Motivation

    5. Let’s start by introducing what we are talking about. Niklas Luhmann, a well-known German sociologist, said... I am not saying anything unknown if I say that trust and trustworthiness are necessary in our everyday life. They are part of the “glue” that holds our society together. Similarly, reputation is a social artifact that has been present in our societies for a long time. From the ancient Greeks to modern days, from the Vietnamese to the Bedouins, the concept of reputation plays a very important role in human social organizations.

    13. Advantages of trust and reputation mechanisms: Each agent is a norm enforcer and is also under surveillance by the others. No central authority is needed. Their nature allows them to reach where laws and central authorities cannot. Punishment is usually based on ostracism.

    14. Problems of trust and reputation mechanisms: The bootstrap problem. Exclusion must be a punishment for the outsider. Not all kinds of environments are suitable for these mechanisms.

    15. Approaches to control the interaction

    16. Different approaches to control the interaction. Security approach: we can deploy a security infrastructure that constrains the behaviour of agents. Such infrastructures (e.g. PKI) guarantee some properties, essentially during communication (authentication, confidentiality, non-repudiation, integrity, ...). The problem is that this approach does not cover all the risks. Mainly, it does not refer to the content of messages, and it cannot guarantee that an agent tells the truth or that it will behave as it committed to do.

    17. Security approach

    18. Different approaches to control the interaction. Institutional approach: here the agents are deployed over a specific infrastructure called an institution. This institution has some power over the agents: it can observe their behaviour and also has a sanctioning power, so it can punish agents that do not behave well. This approach tackles the problem of controlling the agents' behaviour, but it makes some strong assumptions. First, it requires that the institution can observe everything that is going on in the agent society (not feasible in decentralized systems). Second, it is an intrusive approach that requires that the institution can sanction agents.

    19. Institutional approach

    20. Different approaches to control the interaction. Social approach: in this approach, each agent can participate in the control of the other agents. This is called social control by Castelfranchi & Falcone. There is no need for a complete global view of the system, nor for an external intrusive power over agents. Each agent observes its neighbourhood, evaluates its neighbours by way of a reputation model, and exchanges this information with others. Agents that do not behave well will see their reputation in the other agents' models decrease, and will be progressively excluded from interactions and then from the society. This approach is suited to large-scale decentralized systems but, as it is a kind of learning process, there is usually some time during which malicious agents can operate before being discovered.

    21. Example: P2P systems. Some global tasks can only be achieved by the collective activity of several agents (for instance, query routing). Part of these global tasks are achieved by agents that have been deployed and/or developed by different users. There is no guarantee that these agents behave well: if they are buggy or malicious, their behaviour can perturb the system or even prevent the accomplishment of global tasks. Trust and reputation are means to evaluate the compliance of an agent's behaviour with some expected behaviour. As P2P systems are fully decentralized, it is not possible to have a complete global view of each agent and to monitor its behaviour. It is the neighbourhood of an agent that can do it, using reputation and trust, and then share and propagate the results of its observations.

    22. Example: P2P systems

    23. Example: P2P systems

    24. Different approaches to control the interaction

    25. Definitions

    28. The anthropologist Fredrik Barth tells the story of his dealing with a rug merchant in a bazaar in the Middle East. Barth found a rug that he liked but had no way to pay for it at the time. The dealer told him to take the rug and send the money later. All of us have been in similar situations where we are trusted by a complete stranger who will quite likely never see us again.

    33. Computational perspective

    34. Dimensions of trust [McKnight & Chervany, 02]

    35. The Functional Ontology of Reputation [Casare & Sichman, 05]. The Functional Ontology of Reputation (FORe) aims at defining standard concepts related to reputation. FORe includes: reputation processes; reputation types and natures; agent roles; common knowledge (information sources, entities, time). It facilitates the interoperability of heterogeneous reputation models.

    36. Processes needed for trust computation: Initialisation (first default value); Evaluation (judgement of a behaviour); Punishment/Sanction (calculation of reputation values); Reasoning (inference of trust intentions); Decision (the decision to trust); Propagation (communication of reputation/trust information).

    37. Agent roles

    38. Reputation types [Casare & Sichman, 05]: Primary reputation (direct reputation, observed reputation); Secondary reputation (collective reputation, propagated reputation, stereotyped reputation).

    39. What is a good trust model? A good trust model should be [Fullam et al, 05]: accurate (provides good predictions); adaptive (evolves according to the behaviour of others); quickly converging (quickly computes accurate values); multi-dimensional (considers different agent characteristics); efficient (computes in reasonable time and at reasonable cost).

    40. Why use a trust model in a MAS? Trust models allow: identifying and isolating untrustworthy agents.

    41. Why use a trust model in a MAS? Trust models allow: identifying and isolating untrustworthy agents; evaluating an interaction’s utility.

    42. Why use a trust model in a MAS? Trust models allow: identifying and isolating untrustworthy agents; evaluating an interaction’s utility; deciding whether and with whom to interact.

    43. Presentation index: Motivation, Approaches to control the interaction, Some definitions, The computational perspective

    44. Computational trust and reputation models: eBay, TrustNet, LIAR, ReGret, Repage

    49. Computational trust and reputation models: eBay, TrustNet, LIAR, ReGret, Repage

    50. TrustNet [Schillo & Funk, 99]: A model designed to evaluate agents’ honesty. Completely decentralized. Applied in a game-theoretic context: the Iterated Prisoner’s Dilemma (IPD). Each agent announces its strategy and chooses an opponent according to its announced strategy. If an agent does not follow the strategy it announced, its opponent decreases its reputation. The trust value of agent A towards agent B is T(A,B) = number of honest rounds / number of total rounds.
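
    The trust value above is just a running ratio, which is easy to make concrete. The following is a minimal Java sketch, not the authors' code: all class and method names are ours, and the 0.5 default for unseen opponents is an assumption.

    import java.util.HashMap;
    import java.util.Map;

    public class HonestyTracker {
        // counts[0] = honest rounds, counts[1] = total rounds, per opponent
        private final Map<String, int[]> counts = new HashMap<>();

        // Record one IPD round; honest = true if the opponent followed
        // the strategy it announced.
        public void recordRound(String opponent, boolean honest) {
            int[] c = counts.computeIfAbsent(opponent, k -> new int[2]);
            if (honest) c[0]++;
            c[1]++;
        }

        // T(A,B) = honest rounds / total rounds; 0.5 is an assumed
        // neutral default when there is no history yet.
        public double trust(String opponent) {
            int[] c = counts.get(opponent);
            return (c == null || c[1] == 0) ? 0.5 : (double) c[0] / c[1];
        }
    }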

    51. TrustNet [Schillo & Funk, 99]: Agents can communicate their trust values to speed up the convergence of trust models. An agent can build a trust net out of the trust values transmitted by witnesses. The final trust value of an agent towards another aggregates direct experiences and testimonies with a probabilistic function over the lying behaviour of witnesses.

    52. Computational trust and reputation models: eBay, TrustNet, LIAR, ReGret, Repage

    53. The LIAR model [Muller & Vercouter, 07]: A model designed for the control of communications in a P2P network. Completely decentralized. Applied to a peer-to-peer protocol for query routing. The global functioning of a P2P network relies on an expected behaviour of several nodes (or agents). Agents’ behaviour must be regulated by a social control [Castelfranchi, 00].

    54. LIAR: Social control of agent communications

    55. The LIAR agent architecture

    56. Detection of violations

    57. Reputation types in LIAR: Rp_{target, beneficiary}(facet, dimension, time) → [-1,+1] ∪ {unknown}
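
    Read as a signature, this says a reputation is either a value in [-1,+1] or unknown. One natural, purely illustrative Java encoding of such a partial function is a lookup that returns an empty Optional for the unknown case (the string-typed facet and dimension are our simplification):

    import java.util.Optional;

    public interface ReputationStore {
        // Returns a value in [-1,+1], or Optional.empty() for "unknown".
        Optional<Double> reputation(String target, String beneficiary,
                                    String facet, String dimension, long time);
    }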

    58. Reputation computation. Direct-interaction-based reputation: separate the social policies according to their state, associate a penalty to each set; reputation = weighted average of the penalties. Recommendation-based reputation: based on trusted recommendations; reputation = weighted average of the received values, weighted by the reputation of the punisher.
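
    Both computations above reduce to weighted averages. A hedged sketch follows; the penalty values, weights, and names are placeholders, not LIAR's actual parameters.

    public final class LiarReputationSketch {
        // Direct-interaction-based reputation: a penalty per set of social
        // policies (grouped by state), combined by a weighted average.
        public static double directInteractionBased(double[] penalties, double[] weights) {
            double num = 0, den = 0;
            for (int i = 0; i < penalties.length; i++) {
                num += weights[i] * penalties[i];
                den += weights[i];
            }
            return den == 0 ? 0 : num / den;
        }

        // Recommendation-based reputation: received values weighted by the
        // reputation of the agent that provided them (the slide's "punisher").
        public static double recommendationBased(double[] received, double[] providerReputation) {
            double num = 0, den = 0;
            for (int i = 0; i < received.length; i++) {
                num += providerReputation[i] * received[i];
                den += providerReputation[i];
            }
            return den == 0 ? 0 : num / den;
        }
    }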

    59. LIAR decision process

    60. Computational trust and reputation models: eBay, TrustNet, LIAR, ReGret, Repage

    64. Outcome: the initial contract to take a particular course of action and to establish the terms and conditions of a transaction, AND the actual result of the contract.

    66. Impression: the subjective evaluation of an outcome from a specific point of view. What does this evaluation consist of? The notion of utility.

    69. Reputation that an agent builds about another agent based on the beliefs gathered from society members (witnesses).

    73. Witness reputation

    74. Witness reputation

    82. The trust in the agents that are in the “neighbourhood” of the target agent, and their relation with it, are the elements used to calculate what we call the neighbourhood reputation.

    84. The idea behind the system reputation is to use common knowledge about social groups and the role the agent plays in the society as a mechanism to assign reputation values to other agents. The knowledge necessary to calculate a system reputation is usually inherited from the group or groups to which the agent belongs.

    85. If the agent has a reliable direct trust value, it will use that as a measure of trust. If that value is not reliable enough, it will use reputation.
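
    As a sketch, that selection rule is a one-liner. The 0.7 threshold below is an assumption for illustration; ReGret's actual reliability measure is more elaborate than a fixed cut-off.

    public final class TrustSelection {
        private static final double RELIABILITY_THRESHOLD = 0.7; // assumed value

        // Prefer direct experience when it is reliable enough;
        // otherwise fall back on reputation.
        public static double trustValue(double directTrust, double directReliability,
                                        double reputation) {
            return directReliability >= RELIABILITY_THRESHOLD ? directTrust : reputation;
        }
    }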

    86. Computational trust and reputation models: eBay, TrustNet, LIAR, ReGret, Repage

    89. The Repage system

    99. The analyzer. The interplay between image and reputation might be a cause of uncertainty and inconsistency. Inconsistencies do not necessarily lead to a state of cognitive dissonance, nor do they always urge the system to find a solution. For example, an inconsistency between one’s own image of a given target and its reputation creates no problem for the system. However, a contradiction between one’s own evaluations is sometimes possible: my direct experience may be confirmed in further interaction, but at the same time it may be challenged by the image that I believe others, whom I trust a lot, have formed about the same target. What will I do in such a condition? Will I go ahead and sign a contract, maybe a low-cost one, just to acquire a new piece of direct evidence, or will I check the reliability of my informants? The picture is rather complex, and the number of possibilities is bound to increase at every step, making the application of rule-based reasoning computationally heavy.

    101. Current work with the RepAge architecture: Agents that are able to justify the values of images and reputations (the LRep language). A formalization that allows an agent to reason about the elements that make up an image and/or a reputation. Dynamic ontology mapping.

    103. Dmitry Karamazov tells the story of a lieutenant who, as commander of a unit far from Moscow, managed substantial sums of money on behalf of the army. Immediately after each periodic audit of his books, he would take the available funds to the merchant Trifonov. After some time, Trifonov would return the money with interest. Because it was highly irregular...

    104. When the day comes that the lieutenant is abruptly to be replaced in his command, he asks Trifonov to return the last sum loaned to him. Trifonov replies...

    105. This is only an example... there are other situations similar to this.

    112. Comparison among models. There are a lot of models, and they are quite different from one another.

    113. Presentation index: Motivation, Approaches to control the interaction, Some definitions, The computational perspective

    114. The Agent Reputation and Trust Testbed

    115. Motivation. Trust in MAS is a young field of research experiencing breadth-wise growth: many trust-modeling technologies, many metrics for empirical validation. It still lacks a unified research direction: no unified objective for trust technologies, no unified performance metrics and benchmarks.

    116. An experimental and competition testbed... presents a common challenge to the research community, facilitates the solving of prominent research problems, provides a versatile, universal site for experimentation, employs well-defined metrics, identifies successful technologies, matures the field of trust research, and uses an exciting domain to attract the attention of other researchers and the public.

    117. The ART Testbed: a tool for experimentation (researchers can perform easily-repeatable experiments in a common environment against accepted benchmarks) and competition (trust technologies compete against each other; the most promising technologies are identified).

    118. Testbed game rules. Agents are art appraisers with varying expertise in different eras. For a fixed price, clients ask appraisers to provide “appraisals” of paintings from various eras. If an appraiser is not very knowledgeable about a painting, it can purchase “opinions” from other appraisers who might be experts in the respective era. Appraisers whose appraisals are more accurate receive larger shares of the client base in the future. Appraisers compete to achieve the highest earnings by the end of the game.

    119. Step 1: Client and expertise assignments. Appraisers receive clients who pay a fixed price to request appraisals. Client paintings are randomly distributed across eras. As the game progresses, more accurate appraisers receive more clients (and thus more profit).

    120. Step 2: Reputation transactions. Appraisers know their own level of expertise for each era. Appraisers are not informed (by the simulation) of the expertise levels of other appraisers. Appraisers may purchase reputations, for a fixed fee, from other appraisers. Reputations are values between zero and one; they might not correspond to an appraiser’s internal trust model, but serve as a standardized format for inter-agent communication.

    121. Step 2: Reputation Transactions

    122. Step 3: Opinion transactions. For a single painting, an appraiser may request opinions (each at a fixed price) from as many other appraisers as desired. The simulation “generates” opinions about paintings for opinion-providing appraisers. The accuracy of an opinion is proportional to the opinion provider’s expertise for the era and the cost it is willing to pay to generate the opinion. Appraisers are not required to truthfully reveal opinions to requesting appraisers.

    123. Step 3: Opinion Transactions

    124. Step 4: Appraisal calculation. Upon paying providers and before receiving opinions, the requesting appraiser submits to the simulation a weight (self-assessed reputation) for each other appraiser. The simulation collects the opinions sent to the appraiser (appraisers may not alter weights or received opinions) and calculates the “final appraisal” as the weighted average of the received opinions. The true value of the painting and the calculated final appraisal are revealed to the appraiser, who may use this information to revise its trust models of other appraisers.
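
    The final-appraisal step is therefore a plain weighted average, and the revealed true value yields an error signal to feed back into the trust model. A sketch under those assumptions (illustrative, not the actual ART simulation code):

    public final class AppraisalStep {
        // Final appraisal = weighted average of received opinions, using the
        // weights submitted before the opinions were revealed.
        public static double finalAppraisal(double[] opinions, double[] weights) {
            double num = 0, den = 0;
            for (int i = 0; i < opinions.length; i++) {
                num += weights[i] * opinions[i];
                den += weights[i];
            }
            return den == 0 ? 0 : num / den;
        }

        // Relative error against the revealed true value, which the appraiser
        // can feed back into its trust model of each opinion provider.
        public static double relativeError(double appraisal, double trueValue) {
            return Math.abs(appraisal - trueValue) / trueValue;
        }
    }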

    125. Analysis metrics. Agent-based metrics: money in bank; average appraisal accuracy; consistency of appraisal accuracy; number of each type of message passed. System-based metrics: system aggregate bank totals; distribution of money among appraisers; number of messages passed, by type; number of transactions conducted; evenness of transaction distribution across appraisers.

    126. Conclusions. The ART Testbed provides a tool for both experimentation and competition. It promotes solutions to prominent trust research problems and features desirable characteristics that facilitate experimentation.

    127. An example of using ART: Building an agent (creating a new agent class, strategic methods); Running a game (designing a game, running the game); Viewing the game (running a game monitor interface).

    128. Building an agent for ART An agent is described by 2 files: a Java class (MyAgent.java) must be in the testbed.participant package must extend the testbed.agent.Agent class an XML file (MyAgent.xml) only specifying the agent Java class in the following way: <agentConfig> <classFile> c:\ARTAgent\testbed\participants\MyAgent.class </classFile> </agentConfig>
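
    A minimal matching skeleton for MyAgent.java might look like this; a sketch only, with the package name taken from this slide and the strategy hooks it would override listed on the next two slides.

    package testbed.participant;

    import testbed.agent.Agent;

    public class MyAgent extends Agent {
        // Override initializeAgent() and the prepare...() callbacks
        // (see the next two slides) to implement a strategy.
    }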

    129. Strategic methods of the Agent class (1). For the beginning of the game: initializeAgent(), to prepare the agent for a game. For reputation transactions: prepareReputationRequests(), to ask reputation information (gossip) from other agents; prepareReputationAcceptsAndDeclines(), to accept or refuse requests; prepareReputationReplies(), to reply to confirmed requests.

    130. Strategic methods of the Agent class (2). For opinion transactions: prepareOpinionRequests(), to ask opinions from other agents; prepareOpinionCertainties(), to announce one’s own expertise to a requester; prepareOpinionRequestConfirmations(), to confirm/cancel requests to providers; prepareOpinionCreationOrders(), to produce evaluations of paintings; prepareOpinionProviderWeights(), to weight the opinions of other agents; prepareOpinionReplies(), to reply to confirmed requests.

    131. The strategy of this example agent. We will implement an agent with a very simple reputation model: It associates a reputation value with each other agent (initialized at 1.0). It only sends opinion requests to agents with reputation > 0.5. No reputation requests are sent. If an appraisal by another agent differs from the real value by less than 50%, its reputation is increased by 0.03; otherwise it is decreased by 0.03. If our agent receives an opinion request from an agent with a reputation below 0.5, it provides a bad (cheaper) appraisal; otherwise its appraisal is honest.
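
    A hedged sketch of the bookkeeping just described; class, field, and method names are ours, and a real agent would drive this from the prepare...() callbacks of the Agent class.

    import java.util.HashMap;
    import java.util.Map;

    public class SimpleReputationModel {
        private final Map<String, Double> reputation = new HashMap<>();

        // Every agent starts with reputation 1.0.
        public double get(String agent) {
            return reputation.getOrDefault(agent, 1.0);
        }

        // Opinion requests only go to agents with reputation > 0.5.
        public boolean shouldRequestOpinionFrom(String agent) {
            return get(agent) > 0.5;
        }

        // After the true value is revealed: within 50% of it => +0.03,
        // otherwise -0.03.
        public void update(String agent, double appraisal, double trueValue) {
            double error = Math.abs(appraisal - trueValue) / trueValue;
            reputation.put(agent, get(agent) + (error < 0.5 ? 0.03 : -0.03));
        }

        // Requesters with reputation below 0.5 get a bad (cheaper) appraisal.
        public boolean answerHonestly(String requester) {
            return get(requester) >= 0.5;
        }
    }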

    132. Initialization

    133. Opinion requests

    134. Opinion Creation Order

    135. Updating reputations

    136. Running a game with MyAgent. Parameters of the game: 3 agents (MyAgent, HonestAgent, CheaterAgent); 50 time steps; 4 painting eras; average client share: 5 per agent.

    137. How did my agent behave?

    138. References
    [Casare & Sichman, 05] S. J. Casare and J. S. Sichman. Towards a functional ontology of reputation. Proceedings of AAMAS’05, 2005.
    [Castelfranchi, 00] C. Castelfranchi. Engineering Social Order. Proceedings of ESAW’00, 2000.
    [Fullam et al, 05] K. Fullam, T. Klos, G. Muller, J. Sabater-Mir, A. Schlosser, Z. Topol, S. Barber, J. Rosenschein, L. Vercouter and M. Voss. A Specification of the Agent Reputation and Trust (ART) Testbed: Experimentation and Competition for Trust in Agent Societies. Proceedings of AAMAS’05, 2005.
    [McKnight & Chervany, 02] D. H. McKnight and N. L. Chervany. What trust means in e-commerce customer relationships: an interdisciplinary conceptual typology. International Journal of Electronic Commerce, 2002.
    [Muller & Vercouter, 05] G. Muller and L. Vercouter. Decentralized Monitoring of Agent Communication with a Reputation Model. Trusting Agents for Trusting Electronic Societies, LNCS 3577, 2005.
    [Sabater, 04] J. Sabater. Evaluating the ReGreT system. Applied Artificial Intelligence, 18(9-10):797-813, 2004.
    [Sabater & Sierra, 05] J. Sabater and C. Sierra. Review on computational trust and reputation models. Artificial Intelligence Review, 24(1):33-60, 2005.
    [Sabater-Mir & Paolucci, 06] J. Sabater-Mir and M. Paolucci. Repage: REPutation and imAGE among limited autonomous partners. JASSS - Journal of Artificial Societies and Social Simulation, 9(2), 2006.
    [Schillo & Funk, 99] M. Schillo and P. Funk. Learning from and about other agents in terms of social metaphors. Agents Learning About, From and With Other Agents, 1999.
