Personalizing the Web: Building effective recommender systems


  1. Personalizing the Web: Building effective recommender systems Bamshad Mobasher Center for Web Intelligence School of Computer Science, Telecommunication, and Information Systems DePaul University, Chicago, Illinois, USA

  2. Outline • Web Personalization & Recommender systems • Basic Approaches & Algorithms • Special focus on collaborative filtering • Extending Traditional Approaches • Hybrid models • Personalization Based on Data Mining • Vulnerability of Collaborative Filtering to Attacks

  3. Web Personalization • The Problem • Dynamically serve customized content (pages, products, recommendations, etc.) to users based on their profiles, preferences, or expected interests • Common Approaches • Collaborative Filtering • Give recommendations to a user based on preferences of “similar” users • Preferences on items may be explicit or implicit • Content-Based Filtering • Give recommendations to a user based on items with “similar” content in the user’s profile • Rule-Based (Knowledge-Based) Filtering • Provide recommendations to users based on predefined (or learned) rules • age(x, 25-35) and income(x, 70-100K) and children(x, >=3) → recommend(x, Minivan)
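As an illustration (not from the slides), the rule-based example above could be written as a simple predicate; the attribute names and thresholds are hypothetical placeholders:

```python
# Hypothetical sketch of a rule-based (knowledge-based) filter.
# Encodes: age(x, 25-35) AND income(x, 70-100K) AND children(x, >=3) -> recommend(x, Minivan)

def rule_recommend_minivan(user: dict) -> bool:
    """Return True if the predefined rule fires for this user profile."""
    return (25 <= user.get("age", 0) <= 35
            and 70_000 <= user.get("income", 0) <= 100_000
            and user.get("children", 0) >= 3)

print(rule_recommend_minivan({"age": 30, "income": 85_000, "children": 3}))  # True
```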

  4. Content-Based Recommender Systems

  5. Content-Based Recommenders: Personalized Search Agents • How can the search engine determine the “user’s context” for the query “Madonna and Child”? • Need to “learn” the user profile: • User is an art historian? • User is a pop music fan?

  6. Collaborative Recommender Systems

  7. Collaborative Recommender Systems

  8. Collaborative Recommender Systems

  9. Collaborative Recommender Systems http://movielens.umn.edu

  10. Hybrid Recommender Systems

  11. Other Ways of Combining Recommenders into Hybrids

  12. Other Forms of Collaborative Filtering • Social Tagging (Folksonomy) • people add free-text tags to their content • where people happen to use the same terms, their content is linked • frequently used terms float to the top, creating a kind of positive feedback loop for popular tags • Examples: • Del.icio.us • Flickr • QLoud & iTunes

  13. Social / Collaborative Tags

  14. Social / Collaborative Tags

  15. Social / Collaborative Tags

  16. The Recommendation Task • Basic formulation as a prediction problem: given a profile Pu for a user u and a target item it, predict the preference score of user u on item it • Typically, the profile Pu contains preference scores by u on some other items {i1, …, ik} different from it • preference scores on i1, …, ik may have been obtained explicitly (e.g., movie ratings) or implicitly (e.g., time spent on a product page or a news article)
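As a minimal sketch of this formulation (names and example scores are illustrative, not from the slides), a profile can be represented as a mapping from items to preference scores, and any recommender is a function estimating the score on the target item; here a naive baseline just returns the user's mean score:

```python
from typing import Dict, Hashable

Profile = Dict[Hashable, float]  # item -> preference score (explicit or implicit)

def predict_baseline(profile: Profile, target_item: Hashable) -> float:
    """Naive baseline: predict the user's average preference score for any unseen item."""
    return sum(profile.values()) / len(profile)

P_u = {"Star Wars": 7.0, "Jurassic Park": 6.0, "Terminator 2": 3.0}
print(predict_baseline(P_u, "Independence Day"))  # 5.333...
```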

  17. Content-Based Recommenders • Predictions for unseen (target) items are computed based on their similarity (in terms of content) to items in the user profile • E.g., the slide shows a user profile Pu containing several example items, together with new items the system would recommend highly and others it would recommend only “mildly,” based on content similarity

  18. Content-Based Recommenders :: more examples • Music recommendations • Playlist generation • Example: Pandora

  19. Basic Collaborative Filtering Process (diagram) • Neighborhood Formation Phase: the current user record <user, item1, item2, …> is compared against historical user records (user ratings on items) to find the nearest neighbors • Recommendation Phase: the recommendation engine applies a combination function to the neighbors’ ratings to produce recommendations

  20. Collaborative Recommender Systems • Collaborative filtering recommenders • Predictions for unseen (target) items are computed based on the ratings of other users with similar interest scores on items in user u’s profile • i.e. users with similar tastes (aka “nearest neighbors”) • requires computing correlations between user u and other users according to interest scores or ratings • k-nearest-neighbor (knn) strategy • Can we predict Karen’s rating on the unseen item Independence Day?

  21. Collaborative Filtering: Measuring Similarities • Pearson Correlation • weight by degree of correlation between user U and user J: sim(U, J) = Σi (rU,i − r̄U)(rJ,i − r̄J) / sqrt( Σi (rU,i − r̄U)² · Σi (rJ,i − r̄J)² ), where the sums run over the items rated by both users and r̄J is the average rating of user J on all items • 1 means very similar, 0 means no correlation, -1 means dissimilar • Works well in the case of user ratings (where there is at least a range of 1-5) • Not always possible (in some situations we may only have implicit binary values, e.g., whether a user did or did not select a document) • Alternatively, a variety of distance or similarity measures can be used
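A minimal sketch of the Pearson weight described above, assuming ratings are stored as item → rating dictionaries (per the slide, each user's mean is taken over all of their rated items):

```python
from math import sqrt

def pearson(ratings_u: dict, ratings_j: dict) -> float:
    """Pearson correlation between two users, summed over their co-rated items."""
    common = set(ratings_u) & set(ratings_j)
    if not common:
        return 0.0
    mean_u = sum(ratings_u.values()) / len(ratings_u)  # average over all of U's items
    mean_j = sum(ratings_j.values()) / len(ratings_j)  # average over all of J's items
    num = sum((ratings_u[i] - mean_u) * (ratings_j[i] - mean_j) for i in common)
    den = sqrt(sum((ratings_u[i] - mean_u) ** 2 for i in common)) * \
          sqrt(sum((ratings_j[i] - mean_j) ** 2 for i in common))
    return num / den if den else 0.0

print(pearson({"A": 5, "B": 3, "C": 4}, {"A": 4, "B": 1, "C": 5}))
```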

  22. Collaborative Recommender Systems • Collaborative filtering recommenders • Predictions for unseen (target) items are computed based on the ratings of other users with similar interest scores on items in user u’s profile • i.e. users with similar tastes (aka “nearest neighbors”) • requires computing correlations between user u and other users according to interest scores or ratings • The slide shows each neighbor’s correlation to Karen and the resulting prediction for Karen on Indep. Day based on the K nearest neighbors

  23. Collaborative Filtering: Making Predictions • When generating predictions from the nearest neighbors, neighbors can be weighted based on their distance to the target user • To generate a prediction for a target user a on an item i: pa,i = r̄a + Σu sim(a, u) · (ru,i − r̄u) / Σu |sim(a, u)| • r̄a = mean rating for user a • u1, …, uk are the k-nearest-neighbors to a • ru,i = rating of user u on item i • sim(a, u) = Pearson correlation between a and u • This is a weighted average of deviations from the neighbors’ mean ratings (and closer neighbors count more)
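A sketch of that weighted-deviation prediction (the data structures and example values are assumptions; the sim values would come from the Pearson correlation above):

```python
def knn_predict(r_a_mean: float, neighbors: list, item) -> float:
    """Weighted average of deviations from the k nearest neighbors' mean ratings.

    neighbors: list of (sim, neighbor_mean, neighbor_ratings) tuples, where sim is
    the correlation between the target user a and that neighbor.
    """
    num = den = 0.0
    for sim, n_mean, n_ratings in neighbors:
        if item in n_ratings:                      # neighbor must have rated the item
            num += sim * (n_ratings[item] - n_mean)
            den += abs(sim)
    return r_a_mean + (num / den if den else 0.0)

# Hypothetical example: two neighbors of the target user with correlations 0.9 and 0.6
neighbors = [(0.9, 4.0, {"Indep. Day": 5}), (0.6, 3.0, {"Indep. Day": 2})]
print(knn_predict(3.5, neighbors, "Indep. Day"))  # 3.5 + (0.9*1 - 0.6*1)/1.5 = 3.7
```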

  24. Example Collaborative System • The slide shows a ratings table: using k-nearest neighbor with k = 1, the best-matching neighbor’s rating becomes the prediction

  25. Collaborative Recommenders :: problems of scale

  26. Item-based Collaborative Filtering • Find similarities among the items based on ratings across users • Often measured based on a variation of the Cosine measure • Prediction of item i for user a is based on the past ratings of user a on items similar to i • Suppose: sim(Star Wars, Indep. Day) > sim(Jur. Park, Indep. Day) > sim(Termin., Indep. Day) • Predicted rating for Karen on Indep. Day will be 7, because she rated Star Wars 7 • That is, if we only use the most similar item • Otherwise, we can use the k-most similar items and again use a weighted average
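A minimal sketch of the item-based prediction step (the similarity values and the extra ratings below are hypothetical; only the ordering of similarities and the Star Wars rating of 7 come from the slide):

```python
def item_based_predict(user_ratings: dict, sims_to_target: dict, k: int = 3) -> float:
    """Weighted average of the user's own ratings on the k items most similar
    to the target item (similarities precomputed, e.g., via a cosine variant)."""
    neighbors = sorted(
        ((sims_to_target.get(item, 0.0), rating) for item, rating in user_ratings.items()),
        reverse=True)[:k]
    num = sum(s * r for s, r in neighbors)
    den = sum(abs(s) for s, _ in neighbors)
    return num / den if den else 0.0

karen = {"Star Wars": 7, "Jurassic Park": 6, "Terminator 2": 3}        # ratings (partly assumed)
sims = {"Star Wars": 0.9, "Jurassic Park": 0.7, "Terminator 2": 0.4}   # sim(., Indep. Day), assumed
print(item_based_predict(karen, sims, k=1))  # 7.0 -- the most similar item's rating
```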

  27. Item-based collaborative filtering

  28. Item-Based Collaborative Filtering • The slide shows the item-based version of the earlier example: the prediction is taken from the best-matching item

  29. Collaborative Filtering: Evaluation • split users into train/test sets • for each user a in the test set: • split a’s votes into observed (I) and to-predict (P) • measure average absolute deviation between predicted and actual votes in P • MAE = mean absolute error • average over all test users
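A sketch of this evaluation loop (the split data structure is an assumption: each test user maps to a pair of observed ratings I and held-out ratings P, and `recommender` is any prediction function like those sketched above):

```python
def mean_absolute_error(predicted: dict, actual: dict) -> float:
    """MAE over one user's to-predict set P."""
    return sum(abs(predicted[i] - actual[i]) for i in actual) / len(actual)

def evaluate(test_users: dict, recommender) -> float:
    """Average per-user MAE over all test users."""
    per_user = []
    for user, (observed, to_predict) in test_users.items():
        predictions = {item: recommender(observed, item) for item in to_predict}
        per_user.append(mean_absolute_error(predictions, to_predict))
    return sum(per_user) / len(per_user)

# Example with a naive mean-rating baseline:
baseline = lambda observed, item: sum(observed.values()) / len(observed)
test = {"karen": ({"Star Wars": 7, "Terminator 2": 3}, {"Indep. Day": 6})}
print(evaluate(test, baseline))  # |5 - 6| = 1.0
```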

  30. Semantically Enhanced Collaborative Filtering • Basic Idea: • Extend item-based collaborative filtering to incorporate both similarity based on ratings (or usage) as well as semantic similarity based on domain knowledge • Semantic knowledge about items • Can be extracted automatically from the Web based on domain-specific reference ontologies • Used in conjunction with user-item mappings to create a combined similarity measure for item comparisons • Singular value decomposition used to reduce noise in the semantic data • Semantic combination threshold • Used to determine the proportion of semantic and rating (or usage) similarities in the combined measure

  31. Semantically Enhanced Hybrid Recommendation • An extension of the item-based algorithm • Use a combined similarity measure to compute item similarities: CombinedSim(ip, iq) = α · RateSim(ip, iq) + (1 − α) · SemSim(ip, iq) • where • SemSim is the similarity of items ip and iq based on semantic features (e.g., keywords, attributes, etc.); and • RateSim is the similarity of items ip and iq based on user ratings (as in the standard item-based CF) • α is the semantic combination parameter: • α = 1 → only user ratings; no semantic similarity • α = 0 → only semantic features; no collaborative similarity
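A sketch of the combined measure, assuming the simple linear combination implied by the α endpoints on the slide (the exact formula appears there only as an image, so this form is an assumption):

```python
def combined_sim(rate_sim: float, sem_sim: float, alpha: float) -> float:
    """CombinedSim = alpha * RateSim + (1 - alpha) * SemSim.

    alpha = 1 -> only rating-based similarity; alpha = 0 -> only semantic similarity.
    """
    return alpha * rate_sim + (1 - alpha) * sem_sim

print(combined_sim(rate_sim=0.8, sem_sim=0.5, alpha=0.6))  # 0.68
```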

  32. Semantically Enhanced CF • Movie data set • Movie ratings from the movielens data set • Semantic info extracted from IMDB based on a movie ontology (shown as a diagram on the slide)

  33. Semantically Enhanced CF • Used 10-fold cross-validation on randomly selected test and training data sets • Each user in the training set has at least 20 ratings (scale 1-5)

  34. Semantically Enhanced CF • Dealing with new items and sparse data sets • For new items, select all movies with only one rating as the test data • Degrees of sparsity simulated using different ratios for training data

  35. Web Mining Approach to Personalization • Basic Idea • generate aggregate user models (usage profiles) by discovering user access patterns through Web usage mining (offline process) • Clustering user transactions • Clustering items • Association rule mining • Sequential pattern discovery • match a user’s active session against the discovered models to provide dynamic content (online process) • Advantages • requires no explicit user ratings or interaction with users • helps preserve user privacy, by making effective use of anonymous data • enhances the effectiveness and scalability of collaborative filtering

  36. Web Usage Mining • Web Usage Mining • discovery of meaningful patterns from data generated by user access to resources on one or more Web/application servers • Typical Sources of Data: • automatically generated Web/application server access logs • e-commerce and product-oriented user events (e.g., shopping cart changes, product clickthroughs, etc.) • user profiles and/or user ratings • meta-data, page content, site structure • User Transactions • sets or sequences of pageviews possibly with associated weights • a pageview is a set of page files and associated objects that contribute to a single display in a Web Browser

  37. Personalization Based on Web Usage Mining: Offline Process (diagram) • Data Preparation Phase: Web & application server logs plus site content & structure go through data preprocessing (data cleaning, pageview identification, sessionization, data integration, data transformation) to build the user transaction database • Pattern Discovery Phase: usage mining (transaction clustering, pageview clustering, correlation analysis, association rule mining, sequential pattern mining), guided by domain knowledge, produces patterns, which pattern analysis (pattern filtering, aggregation, characterization) turns into aggregate usage profiles

  38. Personalization Based on Web Usage Mining: Online Process (diagram) • The active session from the client browser, captured at the Web server, is combined with the stored user profile, domain knowledge, and the aggregate usage profiles <user, item1, item2, …> into an integrated user profile • The recommendation engine matches this integrated profile against the usage profiles to generate recommendations

  39. Conceptual Representation of User Transactions or Sessions (diagram) • Sessions/users are represented as vectors (rows) over pageviews/objects (columns) • Raw weights are usually based on time spent on a page, but in practice they need to be normalized and transformed
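A minimal sketch of that normalization step (the cap on time-on-page is an illustrative choice, not from the slides):

```python
def normalize_session(raw_times: dict, cap_seconds: float = 600.0) -> dict:
    """Turn raw time-on-page values into normalized pageview weights.

    Cap outliers (e.g., a page left open in a tab), then scale so the
    session vector sums to 1."""
    capped = {page: min(t, cap_seconds) for page, t in raw_times.items()}
    total = sum(capped.values())
    return {page: t / total for page, t in capped.items()} if total else capped

print(normalize_session({"A.html": 30, "B.html": 90, "F.html": 3000}))
# {'A.html': 0.0416..., 'B.html': 0.125, 'F.html': 0.8333...}
```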

  40. Web Usage Mining: clustering example • Transaction Clusters: • Clustering similar user transactions and using the centroid of each cluster as a usage profile (representative for a user segment) • Sample cluster centroid from the CTI Web site (cluster size = 330), shown as a table on the slide

  41. Using Clusters for Personalization • Given an active session A → B, the best matching profile is Profile 1. This may result in a recommendation for page F.html, since it appears with high weight in that profile. • Result of clustering the original session/user data:
PROFILE 0 (Cluster Size = 3): 1.00 C.html, 1.00 D.html
PROFILE 1 (Cluster Size = 4): 1.00 B.html, 1.00 F.html, 0.75 A.html, 0.25 C.html
PROFILE 2 (Cluster Size = 3): 1.00 A.html, 1.00 D.html, 1.00 E.html, 0.33 C.html
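A sketch of this matching step using the profiles above (cosine matching and the top-1 cutoff are assumptions about the matching function; the profiles and the resulting recommendation of F.html are from the slide):

```python
from math import sqrt

def cosine(a: dict, b: dict) -> float:
    num = sum(a[p] * b[p] for p in set(a) & set(b))
    den = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return num / den if den else 0.0

def recommend_from_profiles(active_session: dict, profiles: list, top_n: int = 1) -> list:
    """Match the active session against the aggregate usage profiles and recommend
    the best-matching profile's highest-weight pages not yet visited."""
    best = max(profiles, key=lambda profile: cosine(active_session, profile))
    candidates = [(w, p) for p, w in best.items() if p not in active_session]
    return [p for _, p in sorted(candidates, reverse=True)[:top_n]]

profile0 = {"C.html": 1.0, "D.html": 1.0}
profile1 = {"B.html": 1.0, "F.html": 1.0, "A.html": 0.75, "C.html": 0.25}
profile2 = {"A.html": 1.0, "D.html": 1.0, "E.html": 1.0, "C.html": 0.33}
session = {"A.html": 1.0, "B.html": 1.0}
print(recommend_from_profiles(session, [profile0, profile1, profile2]))  # ['F.html']
```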

  42. Clustering and Collaborative Filtering :: clustering based on ratings: movielens

  43. Clustering and Collaborative Filtering :: tag clustering example

  44. Profile Injection Attacks • Consist of a number of "attack profiles" • added to the system by providing ratings for various items • engineered to bias the system's recommendations • Two basic types: • “Push attack” (“Shilling”): designed to promote an item • “Nuke attack”: designed to demote an item • Prior work has shown that CF recommender systems are highly vulnerable to such attacks • Attack Models • strategies for assigning ratings to items based on knowledge of the system, products, or users • examples of attack models: “random”, “average”, “bandwagon”, “segment”, “love-hate”

  45. A Successful Push Attack • The slide revisits the earlier ratings table with attack profiles added: under the “user-based” algorithm using k-nearest neighbor with k = 1, an attack profile becomes the best match and determines the prediction

  46. Amazon blushes over sex link gaffe By Stefanie Olsen http://news.com.com/Amazon+blushes+over+sex+link+gaffe/2100-1023_3-976435.html Story last modified Mon Dec 09 13:46:31 PST 2002 In an incident that highlights the pitfalls of online recommendation systems, Amazon.com on Friday removed a link to a sex manual that appeared next to a listing for a spiritual guide by well-known Christian televangelist Pat Robertson. The two titles were temporarily linked as a result of technology that tracks and displays lists of merchandise perused and purchased by Amazon visitors. Such promotions appear below the main description for products under the title, "Customers who shopped for this item also shopped for these items.” Amazon's automated results for Robertson's "Six Steps to Spiritual Revival” included a second title by Robertson as well as a book about anal sex for men…. Amazon conducted an investigation and determined … “hundreds of customers going to the same items while they were shopping on the site”….

  47. Profile Injection Attacks

  48. A Generic Attack Profile • An attack profile is divided into the sets IS (selected items), IF (filler items), I∅ (unrated items), and the target item it • Attack models differ based on the ratings assigned to the filler and selected items: ratings for the k selected items, ratings for the l filler items, no ratings for the items in I∅, and a rating for the target item

  49. Average and Random Attack Models • Random Attack: filler items are assigned random ratings drawn from the overall distribution of ratings on all items across the whole DB • Average Attack: the rating for each filler item is drawn from a distribution defined by the average rating for that item in the DB • The percentage of filler items determines the amount of knowledge (and effort) required by the attacker • In both models, the profile consists of random ratings for the l filler items, unrated items (I∅), and a rating for the target item
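A sketch of how such an attack profile might be generated (a simplified illustration; the function and parameter names are hypothetical, and a real average attack would use per-item means rather than a single global distribution):

```python
import random

def push_attack_profile(all_items: list, target_item, filler_ratio: float,
                        rating_mean: float, rating_std: float,
                        r_min: int = 1, r_max: int = 5) -> dict:
    """Generate one random-attack push profile: filler items get ratings drawn
    from the (assumed known) overall rating distribution, the target gets r_max."""
    n_fillers = int(len(all_items) * filler_ratio)
    fillers = random.sample([i for i in all_items if i != target_item], n_fillers)
    profile = {i: min(r_max, max(r_min, round(random.gauss(rating_mean, rating_std))))
               for i in fillers}
    profile[target_item] = r_max  # push attack: promote the target item
    return profile

items = [f"item{i}" for i in range(100)]
print(push_attack_profile(items, "item7", filler_ratio=0.05, rating_mean=3.6, rating_std=1.1))
```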

  50. Bandwagon Attack Model • What if the system's rating distribution is unknown? • Identify products that are frequently rated (e.g., “blockbuster” movies) • Associate the pushed product with them • Ratings for the filler items are centered on the overall system average rating (similar to the Random attack) • frequently rated items can be guessed or obtained externally • The profile consists of random ratings for the l filler items, ratings for the k frequently rated selected items, unrated items, and a rating for the target item
