
An Introduction to Recommendation Systems



Presentation Transcript


  1. An Introduction to Recommendation Systems Hasan Davulcu, CIPS, ASU

  2. Recommendation Systems A utility function U: C × S → R. C: the set of users, each with a profile (age, gender, income). S: the set of items, each with attributes (title, director, actor, year, genre). Recommendations: for each user c, choose the items s that maximize U(c, s).
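
To make the utility function concrete, here is a minimal sketch in Python (the users, items, and ratings are invented for illustration and are not from the slides): U is a partially observed user × item table, and recommending means picking the unrated item with the highest predicted utility.

```python
# Minimal sketch of the utility-function view: U: C x S -> R.
# Users (C), items (S), and ratings are illustrative; pairs absent
# from the table are unknown utilities a recommender must estimate.

ratings = {
    ("alice", "Matrix"): 5.0,
    ("alice", "Titanic"): 1.0,
    ("bob",   "Matrix"): 4.0,
    ("bob",   "Memento"): 5.0,
}
items = ["Matrix", "Titanic", "Memento"]

def recommend(user, predicted):
    """Recommend the unrated item with the highest predicted utility."""
    candidates = [s for s in items if (user, s) not in ratings]
    return max(candidates, key=lambda s: predicted[(user, s)])

# Suppose some model predicted U(alice, Memento) = 4.5:
predicted = {("alice", "Memento"): 4.5}
print(recommend("alice", predicted))  # -> Memento
```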

  3. Recommender Systems

  4. Content Based
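
The original slide is an image, so here is a rough illustration of the content-based idea (all items, features, and weights below are invented): each item is described by a feature vector, a user profile is built from the items the user liked, and candidates are ranked by cosine similarity to that profile.

```python
import math

# Content-based sketch: items carry genre features, the user profile
# is the average of liked items' feature vectors, and candidates are
# ranked by cosine similarity to that profile.
item_features = {
    "Matrix":    {"sci-fi": 1.0, "action": 1.0},
    "Memento":   {"thriller": 1.0, "drama": 0.5},
    "Inception": {"sci-fi": 1.0, "thriller": 0.8},
}

def cosine(a, b):
    keys = set(a) | set(b)
    dot = sum(a.get(k, 0) * b.get(k, 0) for k in keys)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def profile(liked):
    """Average the feature vectors of the items the user liked."""
    prof = {}
    for item in liked:
        for k, v in item_features[item].items():
            prof[k] = prof.get(k, 0) + v / len(liked)
    return prof

liked = ["Matrix"]
user_profile = profile(liked)
candidates = [s for s in item_features if s not in liked]
ranked = sorted(candidates,
                key=lambda s: cosine(user_profile, item_features[s]),
                reverse=True)
print(ranked)  # Inception ranks first: it shares the sci-fi feature
```

Note that the top-ranked candidate necessarily shares features with what the user already liked, which is exactly the “too similar” limitation the next slide points out.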

  5. Limitations • Recommendations are too similar to items the user already knows • New user problem (no profile to match against)

  6. User-Based Collaborative Methods • U(c, s) is estimated from the utilities U(cj, s) given by those users cj who are “similar” to user c.
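
A minimal sketch of that user-based idea (the ratings are invented, and cosine similarity over co-rated items stands in for whatever “similar” measure a real system uses):

```python
import math

# User-based collaborative filtering sketch: estimate U(c, s) as a
# similarity-weighted average of U(cj, s) over users cj who rated s.
ratings = {
    "alice": {"Matrix": 5, "Titanic": 1},
    "bob":   {"Matrix": 4, "Titanic": 1, "Memento": 5},
    "carol": {"Matrix": 1, "Titanic": 5, "Memento": 2},
}

def sim(u, v):
    """Cosine similarity over the items both users rated."""
    common = set(ratings[u]) & set(ratings[v])
    if not common:
        return 0.0
    dot = sum(ratings[u][s] * ratings[v][s] for s in common)
    nu = math.sqrt(sum(ratings[u][s] ** 2 for s in common))
    nv = math.sqrt(sum(ratings[v][s] ** 2 for s in common))
    return dot / (nu * nv)

def predict(c, s):
    """Similarity-weighted average of other users' ratings for s."""
    peers = [cj for cj in ratings if cj != c and s in ratings[cj]]
    num = sum(sim(c, cj) * ratings[cj][s] for cj in peers)
    den = sum(abs(sim(c, cj)) for cj in peers)
    return num / den if den else None

# alice rates like bob, not carol, so the estimate lands near bob's 5:
print(predict("alice", "Memento"))  # ~4.2
```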

  7. Limitations • New item problem (an item with no ratings yet cannot be recommended) • Sparsity (most user–item pairs are unrated)

  8. Amazon’s Item-to-Item
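
Amazon’s published approach precomputes item-to-item similarities offline from users’ purchase and rating vectors; the toy sketch below (invented ratings, plain cosine similarity) only illustrates that core idea and is not the production algorithm.

```python
import math
from collections import defaultdict

# Item-to-item sketch: represent each item by the vector of users who
# rated it, precompute item-item cosine similarities, then recommend
# the nearest neighbors of items the user already has.
ratings = {
    "alice": {"Matrix": 5, "Inception": 4},
    "bob":   {"Matrix": 4, "Inception": 5, "Memento": 4},
    "carol": {"Titanic": 5, "Memento": 2},
}

# Invert the ratings to item -> {user: rating} vectors.
item_vecs = defaultdict(dict)
for user, row in ratings.items():
    for item, r in row.items():
        item_vecs[item][user] = r

def cosine(a, b):
    common = set(a) & set(b)
    dot = sum(a[u] * b[u] for u in common)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def similar_items(item, k=2):
    """'Customers who bought X also bought ...' for one item."""
    others = [s for s in item_vecs if s != item]
    return sorted(others,
                  key=lambda s: cosine(item_vecs[item], item_vecs[s]),
                  reverse=True)[:k]

print(similar_items("Matrix"))  # Inception first: co-rated by alice and bob
```

Because similarities are computed between items rather than users, the expensive step can run offline and scales with the catalog rather than the user base, which is the usual argument for this design.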

  9. Comparing Human Recommenders to Online Systems Rashmi Sinha & Kirsten Swearingen, SIMS, UC Berkeley

  10. Which one should I read? Recommender systems are a technological proxy for a social process: recommendations from friends vs. recommendations from online systems.

  11. I know what you’ll read next summer (Amazon, Barnes & Noble) • what movies you should watch… (Reel, RatingZone, Amazon) • what music you should listen to… (CDNow, Mubu, Gigabeat) • what websites you should visit (Alexa) • what jokes you will like (Jester) • and who you should date (Yenta)

  12. Method Philosophy: testing & analysis as part of the iterative design process (Design → Evaluate → Analyze → Design). Use both quantitative & qualitative methods to generate design recommendations. Slide adapted from James Landay.

  13. Taking a Closer Look at the Recommendation Process The user incurs costs in using a system: • Input: time, effort, privacy issues • Receives recommendation: cost in reviewing recommendations • Judges if he/she will sample a recommendation • Benefit only if the recommended item appeals

  14. Amazon’s Recommendation Process • Input: One artist/author name

  15. Search using Recommendations • Output: List of Recommendations • Explore / Refine Recommendations

  16. Book Recommendation Site: Sleeper • Input: ratings of the same 10 books from all users • Uses a continuous rating bar (system designed by Ken Goldberg)

  17. Sleeper: Output • Output: List of items with brief information about each item • Degree of confidence in prediction

  18. What convinces a user to sample the recommendation? • Judging recommendations: what is a good recommendation from the user’s perspective? • Trust in a recommender system: what factors lead to trust in a system? • System transparency: do users need to know why an item was recommended?

  19. Social Recommendations Study of RS has focused mostly on collaborative filtering algorithms: input from the user → collaborative filtering algorithm → output (recommendations).

  20. Beyond “Algorithms Only”: An HCI Perspective on Recommender Systems • Comparing the social recommendation process to online recommender systems • Understanding the factors that go into an effective recommendation (by studying users’ interactions with 6 online RS)

  21. The Human vs. Recommenders Death Match

  22. Book Systems: Amazon Books, RatingZone, Sleeper

  23. Movie Systems: Amazon Movies, MovieCritic, Reel

  24. Method • 19 participants, ages 18 to 34 • For each of 3 online systems: registered at the site, rated items, reviewed and evaluated the recommendation set, completed a questionnaire • Also reviewed and evaluated sets of recommendations from 3 friends each

  25. Results

  26. Defining Types of Recommendations • Good recommendations (precision): the % of recommended items the user felt interested in • Useful recommendations: the subset of good recommendations that the user felt interested in and had not yet read/viewed
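
A tiny worked example of the two measures (the judgments are invented; “% useful” is computed over all recommendations, which is one plausible reading of the chart on the next slide):

```python
# "Good" and "useful" recommendations, per the slide's definitions.
# Each recommendation carries the user's judgment: interested? seen before?
recs = [
    {"item": "A", "interested": True,  "already_seen": False},
    {"item": "B", "interested": True,  "already_seen": True},
    {"item": "C", "interested": False, "already_seen": False},
    {"item": "D", "interested": True,  "already_seen": False},
]

good = [r for r in recs if r["interested"]]           # precision
useful = [r for r in good if not r["already_seen"]]   # good AND new

print(f"% good:   {100 * len(good) / len(recs):.0f}%")    # 75%
print(f"% useful: {100 * len(useful) / len(recs):.0f}%")  # 50%
```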

  27. Comparing Human Recommenders to RS: “Good” and “Useful” Recommendations [Chart: % good and % useful recommendations per recommender, with RS average and standard error. Books: Amazon (15), Sleeper (10), RatingZone (8), Friends (9). Movies: Amazon (15), Reel (5–10), MovieCritic (20), Friends (9). (x) = number of recommendations.]

  28. However, users like online RS. This result was supported by post-test interviews.

  29. Why systems over friends? • “Suggested a number of things I hadn’t heard of, interesting matches.” • “It was like going to Cody’s—looking at that table up front for new and interesting books.” • “Systems can pull from a large database—no one person knows about all the movies I might like.”

  30. Items users had “heard of” before [Chart: movies vs. books] Friends recommended mostly “old”, previously experienced items.

  31. What systems did users prefer? [Chart: yes/no by system, books vs. movies] • Sleeper and Amazon Books averaged the highest ratings • Split opinions on Reel and MovieCritic

  32. Why did some systems provide useful recommendations but leave users unsatisfied? (RatingZone, MovieCritic & Reel)

  33. Possible Reasons • Previously enjoyed items are important: we term these trust-generating items • Adequate item description & ease of use are important • Notably not important: time to receive recommendations & number of items to rate. All correlations are significant at .05.

  34. A Question of Trust… Extending the earlier categories (good: user likes; useful: not yet read/viewed), trust-generating items are ones previously read/viewed. Post-test interviews showed that users “trust” a system if they have already sampled some recommendations • Positive experiences lead to trust • Negative experiences with recommended items lead to mistrust of the system

  35. A Question of Trust… [Chart: movies vs. books] The difference between Amazon and Sleeper highlights the fact that there are different kinds of good recommender systems.

  36. Adequate Item Description: The RatingZone Story 0% of Version 1 users and 60% of Version 2 users found the item description adequate. An adequate item description, with links to other sources about the item, was a crucial factor in users being convinced by a recommendation.

  37. System Transparency • Why was this item recommended? • Do users understand why an item was recommended? Users mentioned this factor in post-test interviews.

  38. Discussion & Design Recommendations

  39. Design Recommendations: Justification • Justify your recommendations • Adequate item information: provide enough detail about an item for the user to make a choice • System transparency: generate (at least some) recommendations that are clearly linked to the rated items • Explanation: provide an explanation of why the item was recommended • Community ratings: provide links to ratings/reviews by other users; if possible, present a numerical summary of the ratings

  40. Design Recommendations: Accuracy vs. Less Input • Don’t sacrifice accuracy for the sake of generating quick recommendations; users don’t mind rating more items to receive quality recommendations. • A possible way to achieve this: multilevel recommendations, where users can initially use the system by providing one rating and are offered subsequent opportunities to refine the recommendations. • One needs a happy medium between too little input (leading to low accuracy) and too much input (leading to user impatience).

  41. Design Recommendations: New Unexpected Items • Users like recommender systems because they provide information about new, unexpected items. • The list of recommended items should include new items that the user might not find out about in any other way. • The list could also include some unexpected items (e.g., from other topics/genres) that the user might not have thought of themselves.

  42. Design Recommendations: Trust-Generating Items • Users (especially first-time users) need to develop trust in the system. • Trust in the system is enhanced by the presence of items that the user has already enjoyed. • Including some very popular items (which have probably been experienced previously) in the initial recommendation set might be one way to achieve this.

  43. Design Recommendations: Mix of Items • Systems need to provide a mix of different kinds of items to cater to different users: • Trust-generating items: a few very popular ones, in which the system has high confidence • Unexpected items: some unexpected items, whose purpose is to let users broaden their horizons • Transparent items: at least some items for which the user can see the clear link between the items he/she rated and the recommendation • New items: some items that are new. Question: should these be presented as a sorted list, an unsorted list, or different categories of recommendations?

  44. Design Recommendations: Continuous Scales for Input • Allow users to provide ratings on a continuous scale. • One of the reasons users liked Sleeper was that it allowed them to rate on a continuous scale; users did not like binary scales.

  45. Limitations of Study • Simulated a first-time visit; did not allow the systems to learn user preferences over time • The source of recommendations was known to subjects, which might have biased them toward friends • Fairly homogeneous group of subjects; no novice users

  46. Future Plans: Second-Generation Music Recommender Systems • Have evolved beyond previous systems • Use a variety of sophisticated algorithms to map users’ preferences over the music domain • Require a lot more input from the user • Users can sample recommendations during the study!

  47. MusicBudha (Mubu.com): Exploring Genres

  48. Mubu.com: Exploring Jazz Styles

  49. Mubu.com: Rating Samples

  50. Mubu.com: Recommendations as Audio Samples
