
Memory-Based Recommender Systems: A Comparative Study



Presentation Transcript


  1. CSCI 572 PROJECT RECOMPARATOR Memory-Based Recommender Systems: A Comparative Study Aaron John Mani, Srinivas Ramani

  2. Problem definition • This project is a comparative study of two movie recommendation systems based on collaborative filtering: User-User rating vs. Item-Item rating. • Slope-One algorithm – prediction engine. • Pearson's correlation – used to calculate the similarity of users/items. • Results are also compared against Netflix/IMDB recommendations. • The aim of the experiment is to study the accuracy of the two algorithms when applied to the same dataset under similar conditions.
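
The two building blocks named on this slide can be sketched as follows. This is a minimal illustration, not the project's actual code: Python is an assumption (the "S/W, Language used" slide contents are not in this transcript), ratings are assumed to live in nested dicts of the form ratings[user][item] = score, and the function names pearson_similarity and slope_one_predict are illustrative. The Slope-One variant shown is the standard weighted form.

```python
from math import sqrt

def pearson_similarity(ratings, a, b):
    """Pearson correlation between users a and b over their co-rated items."""
    common = set(ratings[a]) & set(ratings[b])
    n = len(common)
    if n == 0:
        return 0.0
    mean_a = sum(ratings[a][i] for i in common) / n
    mean_b = sum(ratings[b][i] for i in common) / n
    num = sum((ratings[a][i] - mean_a) * (ratings[b][i] - mean_b) for i in common)
    den = sqrt(sum((ratings[a][i] - mean_a) ** 2 for i in common)
               * sum((ratings[b][i] - mean_b) ** 2 for i in common))
    return num / den if den else 0.0

def slope_one_predict(ratings, user, target):
    """Weighted Slope One prediction of `user`'s rating for `target`."""
    num, den = 0.0, 0
    for j, r_uj in ratings[user].items():
        if j == target:
            continue
        # average deviation between `target` and item j over users who rated both
        diffs = [ratings[v][target] - ratings[v][j]
                 for v in ratings
                 if target in ratings[v] and j in ratings[v]]
        if diffs:
            count = len(diffs)
            num += (sum(diffs) / count + r_uj) * count
            den += count
    return num / den if den else None

if __name__ == "__main__":
    ratings = {"alice": {"Up": 5, "Heat": 3},
               "bob":   {"Up": 4, "Heat": 2, "Jaws": 4},
               "carol": {"Up": 5, "Jaws": 5}}
    print(pearson_similarity(ratings, "alice", "bob"))   # 1.0
    print(slope_one_predict(ratings, "alice", "Jaws"))   # 5.0
```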

  3. Software and languages used

  4. Plan of Action

  5. Sample screenshot [Recommendation Page]

  6. Sample graphs showing the data you will collect and how it will be presented. • Mean Absolute Error (MAE) – sample error difference across approximately 100 users. This is a standard metric used to measure how much a particular algorithm's predictions deviate from the original ratings (which are blanked out for the test).
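
A minimal sketch of the MAE computation, assuming the (actual, predicted) pairs have already been collected by blanking out one known rating per test user. The helper names held_out, held_out_item, and test_users in the usage comment are hypothetical, and slope_one_predict refers to the earlier sketch.

```python
def mean_absolute_error(pairs):
    """pairs: iterable of (actual_rating, predicted_rating) tuples."""
    pairs = list(pairs)
    # average absolute deviation between predictions and the hidden ratings
    return sum(abs(actual - pred) for actual, pred in pairs) / len(pairs)

# For each of the ~100 test users, blank out one known rating, predict it,
# and collect the pair; e.g. with hypothetical helpers:
# mae = mean_absolute_error(
#     (held_out[u], slope_one_predict(ratings, u, held_out_item[u]))
#     for u in test_users)
```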

  7. Sample graphs showing the data you will collect and how it will be presented. • New User Problem – conduct a survey among 10 human testers to gauge how relevant the top-n predictions are to the selected movie, rating their accuracy on a scale of 1-10. Each tester is added as a new user row in the User-Item Matrix with a single rating. The mean of this test data provides a human perspective on the precision of machine-generated suggestions for new users introduced into the system.
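
A hedged sketch of the new-user setup described on this slide: a fresh single-rating row is added to the matrix and the top-n items are ranked by predicted score. It reuses slope_one_predict from the earlier sketch; the function name, the "new_user" key, and the default n are illustrative.

```python
def top_n_for_new_user(ratings, seed_item, seed_rating, n=10):
    """Insert a single-rating user row and return the top-n predicted items."""
    # shallow copy so the original matrix is left untouched
    ratings = {**ratings, "new_user": {seed_item: seed_rating}}
    candidates = {i for row in ratings.values() for i in row} - {seed_item}
    scored = [(item, slope_one_predict(ratings, "new_user", item))
              for item in candidates]
    scored = [(item, s) for item, s in scored if s is not None]
    return sorted(scored, key=lambda t: t[1], reverse=True)[:n]
```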

  8. Sample graphs showing the data you will collect and how it will be presented. • Average Precision Analysis – create the same test conditions as before. Each human tester logs the relevancy of each algorithm's top-n predictions to the selected movie. The average across each category of algorithm should provide some insight into the number of relevant predictions generated relative to the total number of predictions generated.
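
A minimal sketch of that averaging, assuming each tester logs one boolean relevance flag per returned prediction. Note this computes mean precision (relevant out of total returned, as the slide describes) rather than the information-retrieval "average precision" metric; the data layout and names are illustrative.

```python
def precision_at_n(relevant_flags):
    """Fraction of the top-n predictions a tester judged relevant."""
    return sum(relevant_flags) / len(relevant_flags) if relevant_flags else 0.0

def mean_precision_per_algorithm(logs):
    """logs maps an algorithm name to one list of relevance flags per tester."""
    return {alg: sum(precision_at_n(flags) for flags in testers) / len(testers)
            for alg, testers in logs.items()}

# e.g. logs = {"user-user": [[True, True, False], [True, False, False]],
#              "item-item": [[True, True, True],  [True, True, False]]}
# -> {"user-user": 0.5, "item-item": 0.833...}
```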
