
Improving Recommendation Lists Through Topic Diversification



  1. Improving Recommendation Lists Through Topic Diversification Cai-Nicolas Ziegler, Sean M. McNee, Joseph A. Konstan, Georg Lausen WWW '05 Presenter: 謝順宏

  2. Outline • Introduction • On collaborative filtering • Evaluation metrics • Topic diversification • Empirical analysis • Related work • Conclusion

  3. Introduction • Recommendation lists should reflect the user’s complete spectrum of interests; this improves user satisfaction. • In practice, many recommendations seem to be “similar” with respect to content. • Traditionally, recommender system projects have focused on optimizing accuracy using metrics such as precision/recall or mean absolute error.

  4. Introduction • Topic diversification • Intra-list similarity metric. • Accuracy versus satisfaction. • “accuracy does not tell the whole story”

  5. On Collaborative Filtering (CF) • Collaborative filtering (CF) still represents the most commonly adopted technique in crafting academic and commercial recommender systems. • Its basic idea is to make recommendations based upon the ratings that users have assigned to products.

  6. User-based Collaborative Filtering • a set of users • a set of products • a partial rating function for each user, defined only on the products that user has actually rated

  7. User-based Collaborative Filtering Two major steps: • Neighborhood formation. • Pearson correlation • Cosine distance • Rating prediction
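To make slide 7 concrete, below is a minimal Python sketch of user-based CF with Pearson correlation for neighborhood formation and mean-centered rating prediction. The dict-of-dicts rating layout (ratings[user][item] -> score) and all function names are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def pearson_sim(ra, rb):
    """Pearson correlation over the items both users have rated.
    ra, rb: hypothetical dicts mapping item id -> rating."""
    common = sorted(set(ra) & set(rb))
    if len(common) < 2:
        return 0.0
    a = np.array([ra[i] for i in common], dtype=float)
    b = np.array([rb[i] for i in common], dtype=float)
    a, b = a - a.mean(), b - b.mean()
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b) / denom if denom else 0.0

def predict(user, item, ratings, k=20):
    """Mean-centered prediction from the k most similar co-raters."""
    base = np.mean(list(ratings[user].values()))
    neigh = sorted(
        ((pearson_sim(ratings[user], ratings[o]), o)
         for o in ratings if o != user and item in ratings[o]),
        reverse=True)[:k]
    num = sum(s * (ratings[o][item] - np.mean(list(ratings[o].values())))
              for s, o in neigh)
    den = sum(abs(s) for s, _ in neigh)
    return base + num / den if den else base
```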

  8. Item-based Collaborative Filtering • Unlike user-based CF, similarity values c are computed for items rather than for users.
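For comparison, a sketch of the item-based variant under the same assumed data layout: similarity is computed between item rating vectors, restricted to users who rated both items. Plain cosine is used here for simplicity; the paper's exact similarity computation may differ.

```python
def item_sim(i, j, ratings):
    """Cosine similarity between the rating vectors of items i and j,
    restricted to users who rated both. Assumed layout as above:
    ratings[user][item] -> score."""
    pairs = [(ratings[u][i], ratings[u][j])
             for u in ratings if i in ratings[u] and j in ratings[u]]
    if not pairs:
        return 0.0
    num = sum(a * b for a, b in pairs)
    den = (sum(a * a for a, _ in pairs) ** 0.5
           * sum(b * b for _, b in pairs) ** 0.5)
    return num / den if den else 0.0
```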

  9. Evaluation metrics • Accuracy Metrics • Predictive Accuracy Metrics • Decision Support Metrics • Beyond Accuracy • Coverage • Novelty and Serendipity • Intra-List Similarity

  10. Accuracy Metrics • Predictive Accuracy Metrics • Mean absolute error (MAE) • Mean squared error (MSE) • Decision Support Metrics • Recall • Precision
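These metrics are straightforward to state in code. A sketch, assuming predictions and ground truth come as aligned lists and that the top-N list is compared against a withheld set of relevant items:

```python
def mae(predictions, truths):
    """Mean absolute error over aligned (prediction, truth) pairs."""
    return sum(abs(p - t) for p, t in zip(predictions, truths)) / len(truths)

def mse(predictions, truths):
    """Mean squared error; penalizes large deviations more than MAE."""
    return sum((p - t) ** 2 for p, t in zip(predictions, truths)) / len(truths)

def precision_recall(recommended, relevant):
    """Decision-support view: `recommended` is the top-N list,
    `relevant` the withheld items the user actually liked."""
    hits = len(set(recommended) & set(relevant))
    return hits / len(recommended), hits / len(relevant)
```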

  11. Beyond Accuracy • Coverage • Coverage measures the percentage of elements of the problem domain for which predictions can be made. • Novelty and Serendipity • Novelty and serendipity metrics measure the “non-obviousness” of the recommendations made, avoiding the “cherry-picking” of trivially safe suggestions.

  12. Intra-List Similarity (ILS) • Measures the pairwise similarity between the products of a recommendation list; higher ILS indicates lower diversity.
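In the paper, ILS sums the pairwise similarities of all items in the list, each unordered pair counted once, so higher scores denote lower diversity. A direct sketch, with `sim` standing in for whatever item-to-item similarity function is plugged in:

```python
def intra_list_similarity(items, sim):
    """Sum of pairwise similarities across the list, each unordered
    pair counted once; higher ILS means a less diverse list."""
    total = 0.0
    for idx, a in enumerate(items):
        for b in items[idx + 1:]:
            total += sim(a, b)
    return total
```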

  13. Topic Diversification “Law of Diminishing Marginal Returns” • Suppose you are offered your favorite drink. Let p1 denote the price you are willing to pay for it. Assuming you are offered a second glass of that particular drink, the amount p2 you are inclined to spend will be lower, i.e., p1 > p2. The same holds for p3, p4, and so forth.

  14. Topic Diversification • Taxonomy-based similarity metric • To compute the similarity between product sets based upon their classification.
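A rough sketch of the idea behind such a taxonomy-based metric: each product's classifications score their own taxonomy node plus all ancestor nodes, and the resulting topic profiles are compared by cosine. The parent map, the uniform ancestor weighting, and the function names are simplifying assumptions, not the authors' exact metric:

```python
import math
from collections import Counter

def topic_profile(categories, parent):
    """Build a topic-score vector: each classification scores its node
    and every ancestor up to the root (parent[node] -> node or None),
    then the vector is normalized to unit length."""
    profile = Counter()
    for node in categories:
        while node is not None:
            profile[node] += 1.0
            node = parent.get(node)
    norm = math.sqrt(sum(v * v for v in profile.values()))
    return {t: v / norm for t, v in profile.items()} if norm else {}

def taxonomy_sim(cats_a, cats_b, parent):
    """Cosine similarity of two taxonomy topic profiles."""
    pa, pb = topic_profile(cats_a, parent), topic_profile(cats_b, parent)
    return sum(v * pb.get(t, 0.0) for t, v in pa.items())
```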

  15. Topic Diversification • Topic diversification algorithm • Re-ranks the recommendation list by applying topic diversification (sketched after the next slide).

  16. Topic Diversification • ΘF defines the impact that the dissimilarity rank exerts on the eventual overall output. • Large ΘF favors diversification over the list’s original relevance order. • The input lists must be considerably larger than the final top-N list.
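Putting slides 15 and 16 together, the diversified list can be built greedily: the most relevant item is kept on top, and for each later slot every remaining candidate is ranked once by original relevance and once by dissimilarity to the items already selected, merging the two rank positions with weight ΘF on dissimilarity. A sketch under the assumption that candidates arrive pre-sorted by relevance and are considerably more numerous than N:

```python
def diversify(candidates, sim, N, theta_f):
    """Greedy topic-diversification re-ranking.
    candidates: items sorted by original relevance, best first,
    ideally several times longer than N (see slide 16)."""
    result = [candidates[0]]            # most relevant item stays on top
    remaining = list(candidates[1:])    # preserves the relevance order
    while len(result) < N and remaining:
        # dissimilarity rank: position 0 = most dissimilar
        # to everything picked so far
        by_dissim = sorted(remaining,
                           key=lambda b: sum(sim(b, s) for s in result))
        best = min(
            remaining,
            key=lambda b: (remaining.index(b) * (1 - theta_f)
                           + by_dissim.index(b) * theta_f))
        result.append(best)
        remaining.remove(best)
    return result
```

With theta_f = 0 this reproduces the original order; larger values trade relevance for diversity, which matches the precision/recall behavior analyzed below.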

  17. Recommendation dependency • Only the relevance weight ordering must hold for recommendation list items; no other dependencies between recommended products and their content descriptions are assumed. • An item b’s current dissimilarity rank with respect to the preceding recommendations plays an important role and may influence the new ranking.

  18. Empirical analysis • Dataset • BookCrossing (http://www.bookcrossing.com) • 278,858 members • 1,157,112 ratings • 271,379 distinct ISBNs

  19. Data Cleaning & Condensation • Discarded all books missing taxonomic descriptions. • Only community members with at least 5 ratings each were kept. • 10,339 users • 6,708 books • 316,349 ratings

  20. Evaluation Framework Setup • Did not compute MAE metric values • Adopted K-folding (K = 4) • We were interested in seeing how accuracy, captured by precision and recall, behaves when increasing ΘF.
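A minimal sketch of the K = 4 folding: each user's rated items are split into four disjoint folds, and each round withholds one fold as the test set while the remainder trains the recommender. The shuffling and data layout are assumptions:

```python
import random

def k_folds(items, k=4, seed=0):
    """Split a user's rated items into k disjoint folds."""
    items = list(items)
    random.Random(seed).shuffle(items)
    return [items[i::k] for i in range(k)]

# usage sketch: evaluate each fold as the hidden test set
# for test_fold in k_folds(user_items):
#     train = [b for b in user_items if b not in test_fold]
#     ... train recommender, compare its top-N list against test_fold ...
```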

  21. Empirical analysis • ΘF = 0 represents the unmodified, non-diversified baseline list.

  22. Empirical analysis

  23. Empirical analysis

  24. Conclusion • We found that diversification appears detrimental to both user-based and item-based CF along the precision and recall metrics. • Item-based CF seems more susceptible to topic diversification than user-based CF, backed by results from the precision, recall, and ILS metric analyses.

  25. Empirical analysis

  26. Conclusion • Diversification factor impact • Human perception • Interaction with accuracy

  27. Multiple Linear Regression
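Slide 27 only names the technique. Generically, multiple linear regression fits a response as a weighted sum of several covariates via least squares; in this study's setting the response would be user satisfaction and the covariates would include accuracy and list diversity, though the exact model on the slide is not reproduced here. A generic numpy sketch:

```python
import numpy as np

def fit_linear_regression(X, y):
    """Ordinary least squares: solve for beta in y ≈ X_aug @ beta,
    where X_aug prepends an intercept column of ones to X."""
    X_aug = np.column_stack([np.ones(len(X)), X])
    beta, *_ = np.linalg.lstsq(X_aug, y, rcond=None)
    return beta  # [intercept, one coefficient per covariate]
```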

  28. Related work • Northern Light (http://www.northernlight.com) • Google (http://www.google.com)

  29. Conclusion • An algorithmic framework to increase the diversity of a top-N list of recommended products. • A new intra-list similarity metric.
