
An Improved Data Aggregation Strategy for Group Recommendations


Presentation Transcript


  1. An Improved Data Aggregation Strategy for Group Recommendations Toon De Pessemier, Simon Dooms, Luc Martens, iMinds – Ghent University. Presented by Jeremy Chan

  2. Outline • Introduction – Individual vs. Group • How to come up with individual recommendations? • Common algorithms • How to come up with group recommendations? • Aggregating recommendations • Aggregating preferences

  3. Outline • How to get better group recommendations? • What approach should be used? • Combining strategies • Experiment and evaluation • Literature review • Experiment procedures • Evaluation metrics • Result analysis and conclusion

  4. Individual vs. Group Recommendation

  5. Individual recommendations – Algorithms • Collaborative Filtering (CF) • Pearson (correlation) metric • User-based (UBCF) • Item-based (IBCF) • Content-Based (CB) • Collect user ratings and user behavior • Build a profile model for every single user • Match it with the metadata of items • Hybrid • IBCF + CB
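The user-based CF with Pearson correlation mentioned on this slide can be sketched as follows. This is a minimal illustration, not the paper's implementation; the users, items, and ratings are hypothetical toy data.

```python
from math import sqrt

# Hypothetical toy ratings: user -> {item: rating}.
ratings = {
    "alice": {"m1": 5, "m2": 3, "m3": 4},
    "bob":   {"m1": 3, "m2": 1, "m3": 2, "m4": 2},
    "carol": {"m1": 4, "m2": 3, "m3": 5, "m4": 5},
}

def pearson(u, v):
    """Pearson correlation over the items both users rated."""
    common = sorted(set(ratings[u]) & set(ratings[v]))
    if len(common) < 2:
        return 0.0
    ru = [ratings[u][i] for i in common]
    rv = [ratings[v][i] for i in common]
    mu, mv = sum(ru) / len(ru), sum(rv) / len(rv)
    num = sum((a - mu) * (b - mv) for a, b in zip(ru, rv))
    den = sqrt(sum((a - mu) ** 2 for a in ru)) * sqrt(sum((b - mv) ** 2 for b in rv))
    return num / den if den else 0.0

def predict(user, item):
    """Mean-centred, similarity-weighted prediction (user-based CF)."""
    mean_u = sum(ratings[user].values()) / len(ratings[user])
    num = den = 0.0
    for other in ratings:
        if other == user or item not in ratings[other]:
            continue
        sim = pearson(user, other)
        mean_o = sum(ratings[other].values()) / len(ratings[other])
        num += sim * (ratings[other][item] - mean_o)
        den += abs(sim)
    return mean_u + num / den if den else mean_u
```

Mean-centring compensates for users who rate systematically high or low, which is why the neighbor's deviation from their own mean (rather than the raw rating) is what gets weighted.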

  6. Individual recommendations – Algorithms (cont’d) • Singular value decomposition (SVD) • Matrix factorization • Reduce dimensionality for easy comparison
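The matrix-factorization idea behind the SVD-style recommenders on this slide can be sketched with plain stochastic gradient descent: learn low-dimensional user and item factors that reproduce the known ratings. A minimal sketch with hypothetical toy data and hyperparameters, not the algorithm used in the paper:

```python
import random

random.seed(0)
# Hypothetical (user, item) -> rating entries of a sparse rating matrix.
R = {(0, 0): 5, (0, 1): 3, (1, 0): 4, (1, 1): 1, (2, 1): 2}
n_users, n_items, k = 3, 2, 2  # k = reduced dimensionality

# Randomly initialised user factors P and item factors Q.
P = [[random.uniform(0.1, 0.5) for _ in range(k)] for _ in range(n_users)]
Q = [[random.uniform(0.1, 0.5) for _ in range(k)] for _ in range(n_items)]

def predict(u, i):
    """Predicted rating = dot product of user and item factor vectors."""
    return sum(P[u][f] * Q[i][f] for f in range(k))

lr, reg = 0.01, 0.02  # learning rate and regularisation strength
for _ in range(2000):  # stochastic gradient steps over the known ratings
    for (u, i), r in R.items():
        err = r - predict(u, i)
        for f in range(k):
            pu, qi = P[u][f], Q[i][f]
            P[u][f] += lr * (err * qi - reg * pu)
            Q[i][f] += lr * (err * pu - reg * qi)
```

In the reduced k-dimensional space, users and items can be compared directly, which is the "easy comparison" the slide refers to.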

  7. Group recommendations – Aggregating recommendations • Aggregate individual recommendations of each member in the group • Least Misery • Plurality Voting • Averaging Prediction Scores
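The three strategies named on this slide can be sketched in a few lines. The per-member prediction scores below are hypothetical; this is an illustration of the strategies, not the paper's code.

```python
# Hypothetical per-member prediction scores for the same candidate items.
member_scores = {
    "u1": {"A": 5.0, "B": 4.0, "C": 2.0},
    "u2": {"A": 3.0, "B": 4.5, "C": 5.0},
    "u3": {"A": 4.0, "B": 1.0, "C": 4.5},
}
items = ["A", "B", "C"]

# Least Misery: the group score is the least happy member's score.
least_misery = {i: min(s[i] for s in member_scores.values()) for i in items}

# Averaging Prediction Scores: the group score is the mean member score.
average = {i: sum(s[i] for s in member_scores.values()) / len(member_scores)
           for i in items}

# Plurality Voting: each member votes for their top item; most votes wins.
votes = [max(s, key=s.get) for s in member_scores.values()]
winner = max(set(votes), key=votes.count)
```

Note how the strategies disagree: least misery punishes item B heavily because one member dislikes it, while plain averaging still ranks it mid-list.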

  8. Group recommendations – Aggregating preferences • Aggregate preferences of each member in the group into a group preference model • Average group members’ ratings as the single rating of the group • Treat the model as a single pseudo user to produce recommendations using individual recommendation algorithms
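The pseudo-user construction on this slide can be sketched as follows; the member ratings are hypothetical, and the resulting profile would then be fed to any individual recommendation algorithm as if it were one user.

```python
# Hypothetical individual rating profiles of the group members.
member_ratings = {
    "u1": {"A": 5, "B": 2},
    "u2": {"A": 3, "C": 4},
    "u3": {"B": 4, "C": 5},
}

def pseudo_user(group):
    """Average the members' ratings per item into one group profile."""
    collected = {}
    for profile in group.values():
        for item, rating in profile.items():
            collected.setdefault(item, []).append(rating)
    return {item: sum(rs) / len(rs) for item, rs in collected.items()}

group_profile = pseudo_user(member_ratings)
# → {"A": 4.0, "B": 3.0, "C": 4.5}
```

Each item is averaged only over the members who actually rated it, so the pseudo user has an opinion on every item any member has seen.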

  9. Better group recommendations – What approach should be used? • Depends on • Algorithm • Density (no. of consumptions) * Accuracy is based on Mean Absolute Error (MAE). Only CF is considered, and only a small data set (approximately 3,300 ratings) is considered.

  10. Better group recommendations – What approach should be used? (Cont’d) …The grouping strategy that provides the most accurate recommendations depends on the used algorithm. The CB and UBCF algorithm generate the most accurate group recommendations if the group members’ preferences are aggregated whereas the results of SVD and IBCF are most optimal if the members’ recommendations are aggregated. A possible explanation for these differences in accuracy lies in the way in which the algorithm processes the data. The CB and UBCF algorithm create some kind of user profile to find respectively matching items or similar users. In contrast, the matrix decomposition of SVD and the item-item similarities of IBCF provide less insight into the preferences of the users. So, aggregating the preferences of the group members provides optimal results if the algorithm internally composes some kind of user profile holding his preferences, whereas aggregating the recommendations of the group members is a better option if the users’ preferences are less transparent in the data structure of the algorithm. … Adapted from T. De Pessemier, S. Dooms, and L. Martens. Design and evaluation of a group recommender system. In Proceedings of the Sixth ACM Conference on Recommender Systems, RecSys '12, pages 225–228, New York, NY, USA, 2012. ACM.

  11. Better group recommendations – Combining strategies • Merge the knowledge of the 2 aggregation strategies into a final group recommendation list • If one of the aggregation strategies comes up with a less suitable or undesirable group recommendation, the other aggregation strategy can correct the mistake

  12. Better group recommendations – Combining strategies (Cont’d)

  13. Better group recommendations – Combining strategies (Cont’d) • There are 2 lists of top-N items (most interesting items). Which items are to be adopted? • The ones that co-exist in both lists • Intersection of the 2 lists

  14. Better group recommendations – Combining strategies (Cont’d) • The remaining items co-exist in both lists. How to order them for prediction? • The same items are ordered differently in the two lists • Average the prediction scores of each item across the two lists, then order them for output • A weighted average can be used as an alternative, to account for the importance of the different aggregation strategies
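The intersect-then-rerank combination described on this slide can be sketched as follows. The two input lists mirror the worked example on the next slide; the equal default weights are an assumption for illustration.

```python
# Two hypothetical top-N lists: item -> prediction score.
list1 = {"A": 5, "B": 4, "C": 3, "D": 2, "E": 1}
list2 = {"C": 5, "F": 4, "R": 3, "A": 2, "B": 1}

def combine(l1, l2, w1=0.5, w2=0.5):
    """Intersect two top-N lists and rank by (weighted) average score."""
    common = set(l1) & set(l2)
    scores = {i: w1 * l1[i] + w2 * l2[i] for i in common}
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

combine(list1, list2)  # → [('C', 4.0), ('A', 3.5), ('B', 2.5)]
```

Shifting `w1`/`w2` away from 0.5 realises the weighted-average variant, letting one aggregation strategy dominate when it is known to be more reliable for the algorithm in use.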

  15. Better group recommendations – Combining strategies (Example) • List 1: A 5, B 4, C 3, D 2, E 1 • List 2: C 5, F 4, R 3, A 2, B 1 • Intersection, taken from list 1: A 5, B 4, C 3 • Intersection, taken from list 2: A 2, B 1, C 5 • Averaged scores: A 3.5, B 2.5, C 4.0 • Final ordering: C 4.0, A 3.5, B 2.5 Note: In the experiment, the top-5% of both recommendation lists are selected (= the top-84 recommended items for the MovieLens data set)

  16. Experiment and evaluation – Literature review • How to do group recommendations in the past? • Simulate groups of different sizes and different degrees of similarity • Simulate group ratings by users’ opinion importance parameter (psychology) • Simulate groups using synthetic data

  17. Experiment and evaluation – Literature review (Cont’d) • How to evaluate group recommendations? • Online evaluations / group interviews • Impossible in large-scale configurations • 5 recommendation algorithms × 2 data aggregation strategies × 12 different group sizes = 120 setups • Offline evaluations • Synthetic groups are sampled from the users of a traditional single-user data set • Reliable?

  18. Experiment and evaluation – Literature review (Cont’d) …Quijano-Sanchez et al. used synthetically generated data to simulate groups of people in order to test the accuracy of group recommendations for movies. In addition to this offline evaluation, they conducted an experiment with real users to validate the results obtained with the synthetic groups. One of the main conclusions of their study was that it is possible to realize trustworthy experiments with synthetic data, as the online user test confirmed the results of the experiment with synthetic data. This conclusion justifies the use of an offline evaluation with synthetic groups to evaluate the group recommendations in our experiment. …

  19. Experiment and evaluation – Experiment procedures • Data preparation and preprocessing • MovieLens (100k) data set • Ordered ratings chronologically • Assigned the oldest 60% to the training set and the most recent 40% to the test set, to reflect a realistic scenario
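The chronological 60/40 split described on this slide can be sketched in a few lines. The rating tuples below are hypothetical stand-ins for the MovieLens `(user, item, rating, timestamp)` records.

```python
# Hypothetical (user, item, rating, timestamp) tuples.
ratings = [
    ("u1", "A", 4, 100), ("u2", "B", 3, 300), ("u1", "C", 5, 200),
    ("u3", "A", 2, 500), ("u2", "C", 4, 400),
]

ordered = sorted(ratings, key=lambda r: r[3])  # oldest first
cut = int(len(ordered) * 0.6)                  # 60% boundary
train, test = ordered[:cut], ordered[cut:]
```

Splitting on time rather than at random avoids training on ratings that were given after the ones being predicted, which is the "realistic scenario" the slide mentions.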

  20. Experiment and evaluation – Experiment procedures (Cont’d) • Experiment setup • Synthetic groups are composed by selecting random users from the data set • All users are assigned to one group of a predefined size • Group recommendations are generated for each of the groups based on the ratings of the members in the training set • Each member receives the same recommendation list as the rest of the group, since the recommendations are supposed to be consumed by all group members

  21. Experiment and evaluation – Evaluation metrics • Recommendations are evaluated individually as in the classical single-user case • No real group ratings are available! • Compare the recommendations with the test set • Normalized Discounted Cumulative Gain (nDCG)
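The nDCG metric named on this slide can be sketched as follows: the DCG of the recommended ranking is divided by the DCG of the ideal (best possible) ranking, so a perfect ordering scores 1.0. The relevance values in the usage line are hypothetical test-set ratings.

```python
from math import log2

def dcg(relevances):
    """Discounted cumulative gain of a ranked list of relevance values."""
    return sum(rel / log2(pos + 2) for pos, rel in enumerate(relevances))

def ndcg(ranked_relevances):
    """nDCG: DCG of the given ranking divided by the DCG of the ideal one."""
    ideal = dcg(sorted(ranked_relevances, reverse=True))
    return dcg(ranked_relevances) / ideal if ideal else 0.0

ndcg([3, 2, 3, 0, 1])  # < 1.0: item with relevance 2 is ranked too high
```

The logarithmic discount makes mistakes near the top of the list cost more than mistakes near the bottom, which suits recommendation lists where users mostly look at the first few items.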

  22. Result analysis • Note: For each algorithm, only the most accurate individual aggregation strategy is shown.

  23. Result analysis (Cont’d) • Note: The null hypothesis is that the mean accuracy of the recommendations generated by the best individual aggregation strategy is equal to the mean accuracy of the recommendations generated by the combined aggregation strategy

  24. Result analysis (Cont’d) • Increased mean nDCG values for the combined aggregation strategies show the improvement in prediction accuracy • Low p-values of the statistical t-test (< 0.05) show that the improvement of the combined aggregation strategies is statistically significant

  25. Conclusion • The combined aggregation strategy may take advantage of both strategies • Aggregating recommendations • Aggregating preferences • Doubts • Reliability of offline evaluation (synthetic groups) • The play-it-safe method only keeps items found by both aggregation strategies • There may be valid recommendations that exist in only one of the strategies
