Trust Model for High Quality of Recommendations

Presentation Transcript


  1. Trust Model for High Quality of Recommendations G. Lenzini, N. Sahli, and H. Eertink (Telematica Instituut, NL) SECRYPT, special session, Porto, July 2008

  2. Opening

  3. Ratings and Recommender/Review Systems Recommender systems aim to predict the rating that a user would give to an unknown item (as if the user had actually tasted, used, or tried it)

  4. Recommender Systems Recommender systems fall into three main categories:
  • Content-based: the prediction is estimated from the ratings that the user has given to “similar” items; items are similar on content-based factors (tags, keywords, ontologies)
  • Collaborative (filtering) based: the prediction is estimated from the ratings that “similar” users have given to the item; users are similar in “taste likelihood”, calculated from commonly rated items
  • Hybrid: a combination of both
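A minimal sketch (not from the talk) of the collaborative-filtering idea above, in Python: user similarity is computed from commonly rated items (Pearson correlation is an assumed choice) and the prediction is a similarity-weighted average of the neighbours' ratings. All names are illustrative.

```python
from math import sqrt

def pearson_similarity(ratings_a, ratings_b):
    """Similarity of two users, computed from their commonly rated items."""
    common = set(ratings_a) & set(ratings_b)
    if len(common) < 2:
        return 0.0
    mean_a = sum(ratings_a[i] for i in common) / len(common)
    mean_b = sum(ratings_b[i] for i in common) / len(common)
    num = sum((ratings_a[i] - mean_a) * (ratings_b[i] - mean_b) for i in common)
    den = sqrt(sum((ratings_a[i] - mean_a) ** 2 for i in common)) * \
          sqrt(sum((ratings_b[i] - mean_b) ** 2 for i in common))
    return num / den if den else 0.0

def predict_rating(user, item, all_ratings):
    """Similarity-weighted average of the other users' ratings for `item`."""
    num = den = 0.0
    for other, ratings in all_ratings.items():
        if other == user or item not in ratings:
            continue
        w = pearson_similarity(all_ratings[user], ratings)
        if w > 0:
            num += w * ratings[item]
            den += w
    return num / den if den else None
```

Here `all_ratings` is a dict mapping each user id to that user's item-to-rating dict; a `None` result means no sufficiently similar neighbour rated the item, which is exactly the sparsity problem discussed on the next slides.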

  5. Trust and Collaborative Filtering To overcome the limitations of current recommender systems (i.e., sparsity and accuracy), recent proposals suggest replacing user similarity with trust.
  • P. Massa, P. Avesani, Trust-aware Recommender Systems, RECSYS 2007
  • N. Lathia, S. Hailes, L. Capra, Trust-based Collaborative Filtering, IFIPTM 2008
  • Dell’Amico, L. Capra, SOFIA: Social Filtering for Robust Recommendations, IFIPTM 2008
  • D. Quercia, today
  The experimental results are positive. Rummble.com uses trust-based recommendation commercially.
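Continuing the hypothetical sketch above (this is not the cited papers' exact formula): the trust-based proposals essentially replace the similarity weight with a trust weight taken from an explicit web of trust.

```python
def predict_rating_trust(user, item, all_ratings, trust):
    """Like predict_rating, but weights each neighbour by
    trust[user][neighbour] (a value in [0, 1]) instead of rating similarity."""
    num = den = 0.0
    for other, ratings in all_ratings.items():
        if other == user or item not in ratings:
            continue
        w = trust.get(user, {}).get(other, 0.0)
        if w > 0:
            num += w * ratings[item]
            den += w
    return num / den if den else None
```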

  6. Epinions.com

  7. Epinions.com

  8. Our motivation

  9. Virtual Communities We were working on virtual communities in e-commerce applications (i.e., recommender and review systems). Virtual communities' size may increase quite fast. Trust becomes fuzzy quite fast too.

  10. Flixter.com

  11. Virtual Communities Networks of Trust: questions
  • How to provide specific solutions to maintain trust relationships in those communities? (e.g., autonomously)
  • How to increase members' trust towards the community and the information they find there? (e.g., by increasing personalization)
  • What features can be advantageous in the design of a trustworthy virtual community (e.g., agent-based, mobility)?
  • How to improve current recommender systems that are based on virtual communities (e.g., by improving the quality of recommendations)?

  12. Quality vs Usefulness How to distinguish a not-useful recommendation (coming from a trusted recommender) from a recommendation of doubtful honesty? Recommenders' experiences might have matured in different contexts. Recommenders may have tastes that are completely different from ours. Is that sufficient/correct to label them as untrustworthy?

  13. In practice: Peer Review of Justification

  14. Our Proposal

  15. Solution for High Quality of Recommendation We designed a framework for a hybrid recommender/review system where trust and other mechanisms are used to achieve high-quality recommendations
  • Key concepts
  • Trust Model
  • Architecture (skipped in the talk, see the paper)

  16. Key Concepts

  17. Virtual Agora, TRat, TRec [architecture diagram: the Virtual Agora contains items and recommenders; each user's embedded delegate maintains TRec, a network of (un)trusted recommenders, and TRat, a register of (un)trusted items]

  18. Trust Model

  19. Trust Model
  • Aim: build/use/update TRat(A) and TRec(A)
  • Notation:
  • In TRat(A), agent-item relations
  • In TRec(A), agent-agent (recommender) relations
  • Trust relations can be temporary or eventual
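A possible in-memory representation of the two registers kept by agent A's embedded delegate (purely illustrative, not the paper's data model): the field names are assumptions, and the temporary/eventual status is modelled as a flag.

```python
from dataclasses import dataclass, field

@dataclass
class TrustEntry:
    value: float            # trust value, e.g. in [0, 1]
    context: str            # context in which the value was established
    eventual: bool = False  # False = temporary, True = eventual (consolidated)

@dataclass
class DelegateState:
    """State kept by agent A's embedded delegate."""
    # TRat(A): item id -> trust placed in the item (agent-item relations)
    trat: dict[str, TrustEntry] = field(default_factory=dict)
    # TRec(A): agent id -> trust placed in the recommender (agent-agent relations)
    trec: dict[str, TrustEntry] = field(default_factory=dict)
```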

  20. Virtual Agora, TRat, TRec [the architecture diagram again: the Virtual Agora contains items and recommenders; the embedded delegate maintains TRec, the network of (un)trusted recommenders, and TRat, the register of (un)trusted items]

  21. Detail of TRat(A), items
  • The rating that a user gives to an item is computed, at a certain time and in a certain context, by combining the following strategies:
  • content-based (past experience with “similar” items, in the same or a “similar” context)
  • collaborative filtering (ratings from “similar” users, for the same or similar items, in the same or a “similar” context)
  • trust-based (ratings from trusted users, for the same or similar items, in the same or a “similar” context)
  • Recommended ratings are selected/weighted according to their quality
  • The outputs are merged, and the recommenders and their recommendations are stored (from temporary to eventual)
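A hedged sketch of the merging step described above: each strategy is assumed to return a (prediction, confidence) pair, and the delegate merges them with a confidence-weighted average. The weighting scheme is an assumption for illustration, not the paper's algorithm.

```python
def merge_predictions(candidates):
    """candidates: list of (predicted_rating, confidence) pairs, one per
    strategy (content-based, collaborative, trust-based).
    Returns a confidence-weighted average, or None if no strategy applies."""
    usable = [(r, c) for r, c in candidates if r is not None and c > 0]
    if not usable:
        return None
    total = sum(c for _, c in usable)
    return sum(r * c for r, c in usable) / total

# Example: content-based predicts 4.0 (conf 0.5), collaborative 3.0 (conf 0.2),
# trust-based 4.5 (conf 0.8) -> merged rating ~ 4.13
rating = merge_predictions([(4.0, 0.5), (3.0, 0.2), (4.5, 0.8)])
```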

  22. On High Quality of Recommendation quality = trust in the source combined with the analysis of the justification

  23. TRat(A), items: Recommendation
  • A accepts D's recommendation only if D's trustworthiness, combined with an evaluation of the justification that D has given for the recommendation, is above a certain threshold
  • D's justification is a set of arguments supporting the rating given for each aspect (e.g., food, ambience, service)
  • D's arguments are evaluated against A's way of reasoning by running an argumentation protocol
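A small illustrative instantiation of the acceptance rule (slide 22's "quality = trust in the source combined with the analysis of the justification"). The talk does not fix the combination operator or the threshold, so the weighted average and the parameter values below are assumptions.

```python
def accept_recommendation(trust_in_D, argumentation_trust,
                          alpha=0.5, threshold=0.6):
    """Accept D's recommendation if the quality score, a weighted combination
    of A's trust in D and of the outcome of the argumentation over D's
    justification, is above the threshold."""
    quality = alpha * trust_in_D + (1 - alpha) * argumentation_trust
    return quality >= threshold, quality

# A moderately trusted recommender with very convincing arguments:
accepted, q = accept_recommendation(trust_in_D=0.4, argumentation_trust=0.9)
# q = 0.65 -> accepted
```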

  24. Argumentation Protocol
  • An argumentation protocol is a composition of dialogue games (primitives: assert, attack, defend, challenge, justify, accept, refuse, or declare undefined)
  • Logic-based, efficient implementations of argumentation protocols are available in the literature (J. Bentahar and J.J. Meyer, 2007)
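A toy encoding of the dialogue-game primitives listed above; the data types and the move tags in the example transcript are illustrative only, and real logic-based argumentation frameworks such as the one cited are far richer.

```python
from enum import Enum, auto
from dataclasses import dataclass

class Move(Enum):
    ASSERT = auto()
    ATTACK = auto()
    DEFEND = auto()
    CHALLENGE = auto()
    JUSTIFY = auto()
    ACCEPT = auto()
    REFUSE = auto()
    UNDEFINED = auto()   # "declare undefined"

@dataclass
class DialogueStep:
    speaker: str   # e.g. "Paul" or "Olga"
    move: Move
    content: str   # the argument or ground being put forward

# The informal Paul/Olga exchange on the next slide could be logged roughly as:
transcript = [
    DialogueStep("Paul", Move.ASSERT, "I love that place"),
    DialogueStep("Olga", Move.CHALLENGE, "why?"),
    DialogueStep("Paul", Move.JUSTIFY, "traditional food, cooked the traditional way"),
    DialogueStep("Olga", Move.ATTACK, "traditional cooking may not be clean"),
    DialogueStep("Paul", Move.DEFEND, "the price you pay for discovering new tastes"),
    DialogueStep("Olga", Move.REFUSE, "not a price I am willing to pay"),
]
```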

  25. Example (informal version)
  Paul: I love that place (claim)
  Olga: why? (asking for grounds)
  Paul: They serve traditional food, cooked in the traditional way (grounds for the claim)
  Olga: I may not like the place (stating a counter-argument)
  Paul: why? (asking for grounds)
  Olga: since traditional cooking may be not clean (ground for the counter-argument)
  Paul: yes, sometimes, it is the price you pay for discovering new tastes (undercutting counter-argument)
  Olga: it is not for that that I am willing to pay a price (alternative counter-argument; refuses the argument)
  Paul: Ok, I agree

  26. Running an Argumentation Protocol
  • A and D run a protocol to argue about the arguments that D has given for each aspect of its recommendation
  • Output of the protocol: a value of A's argumentation trust in D's arguments

  27. Argumentation Trust
  • Nau = number of arguments accepted or declared undefined
  • Nr = number of arguments refused
  • N = Nr + Nau
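The slide gives the counts, but the formula itself was in the slide graphics and is not in the transcript; a natural reading, assumed here, is the fraction of D's arguments that survived the protocol.

```python
def argumentation_trust(n_accepted_or_undefined, n_refused):
    """Assumed instantiation: fraction of D's arguments that A accepted or
    left undefined, out of the N = Nau + Nr evaluated arguments."""
    n = n_accepted_or_undefined + n_refused
    return n_accepted_or_undefined / n if n else 0.0

# 3 arguments accepted/undefined, 1 refused -> argumentation trust 0.75
```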

  28. Consequences
  • D's arguments can be strong enough to have D's recommendation accepted (by A) even though D's trust as a recommender is not so strong
  • (after a real experience) if D's recommendation was indeed a good one, A's trust in D increases
  • D's arguments can be so weak that D's recommendation is refused (by A) even though D's trust as a recommender is high
  • (after a real experience) if D's recommendation was not a good one, D's trust is not affected, because that recommendation was not accepted anyhow
  • Trust is dynamic
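A minimal sketch of these dynamics under assumptions not stated in the talk (an exponential moving average with a learning rate `beta`): trust in D changes only after A accepted a recommendation and then experienced the item, matching the "not affected if not accepted" point above.

```python
def update_recommender_trust(current_trust, accepted, outcome_good, beta=0.2):
    """Update A's trust in D after a real experience with the recommended item.
    If the recommendation was never accepted, trust is left untouched."""
    if not accepted:
        return current_trust
    target = 1.0 if outcome_good else 0.0
    return (1 - beta) * current_trust + beta * target
```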

  29. Virtual Agora, TRat, TRec [the architecture diagram once more: the embedded delegate with TRec, the network of (un)trusted recommenders, and TRat, the register of (un)trusted items, inside the Virtual Agora of items and recommenders]

  30. TRec(A), recommenders
  • A builds/maintains its trust in D by combining the following strategies:
  • evaluation of D's reputation (as a recommender) according to A's past experience
  • direct evaluation of D by content-based strategies (referral trust bootstrap)
  • comparison between D's past recommendations and A's direct experience with the items recommended by D
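A hedged sketch of how the three sources above could be merged into a single trust value for D; the weights and the three input scores (reputation, bootstrap, agreement) are hypothetical names, not quantities defined in the talk.

```python
def trust_in_recommender(reputation, bootstrap, agreement,
                         w_rep=0.4, w_boot=0.2, w_agree=0.4):
    """All inputs assumed in [0, 1]:
    reputation - D's reputation as a recommender, from A's past experience
    bootstrap  - content-based (referral) bootstrap evaluation of D
    agreement  - how well D's past recommendations matched A's own direct
                 experience with the recommended items."""
    return w_rep * reputation + w_boot * bootstrap + w_agree * agreement
```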

  31. Conclusion and Future Directions

  32. Features of our solution • Context-awareness • Unobtrusiveness • Usefulness • Quality • Privacy and Subjectiveness • Mobility • Low Traffic

  33. Ongoing work: Duine Toolkit We have already implemented a prototype, using JADEX (Jadex 2008) as a development environment, which handles BDI concepts. In order to commercialise our solution and make it useful for the market, we are currently integrating our approach with a set of well-known techniques. The Duine Toolkit (M. van Setten et al., 2004), developed in our institute, is a framework for hybrid recommenders which makes a number of prediction techniques available and allows them to be combined dynamically.

  34. Ongoing and future work • Have the solution implemented in a review site • Evaluation by “return of business”-based metrics • Mobility and automatic context capture with IYOUIT

  35. Not(Questions) ⇒ Thanks (gabriele.lenzini@telin.nl)
