
User Modeling and Recommendations – Part 2




  1. User Modeling and Recommendations – Part 2. Many slides adapted from Lora Aroyo, http://de.slideshare.net/laroyo

  2. User Modeling Basic Concepts
• User Profile: a data structure that represents a characterization of a user at a particular moment in time; it captures what, from a given (system) perspective, there is to know about a user. The data in the profile can be explicitly given by the user or derived by the system.
• User Model: contains the definitions & rules for the interpretation of observations about the user and for the translation of that interpretation into the characteristics in a user profile
  • the user model is the recipe for obtaining and interpreting user profiles
• User Modeling: the process of representing the user
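As a concrete illustration of the distinction between profile (data structure) and model (rules for interpreting observations), here is a minimal Python sketch. The class name, fields, and the click-frequency rule are illustrative assumptions, not something the slides prescribe.

```python
from dataclasses import dataclass, field
from datetime import datetime
from collections import Counter

@dataclass
class UserProfile:
    """Snapshot of what the system knows about a user at one moment in time."""
    user_id: str
    created_at: datetime
    interests: dict = field(default_factory=dict)  # e.g. {"jazz": 0.8, "metal": 0.1}

def build_profile(user_id: str, observed_clicks: list[str]) -> UserProfile:
    """A toy 'user model': the rule for turning raw observations into a profile.
    Here the rule is simply: interest weight = relative click frequency."""
    counts = Counter(observed_clicks)
    total = sum(counts.values()) or 1
    interests = {topic: n / total for topic, n in counts.items()}
    return UserProfile(user_id=user_id, created_at=datetime.now(), interests=interests)

# Usage: the profile is the data structure; build_profile encodes the user model.
profile = build_profile("u42", ["jazz", "jazz", "metal", "jazz"])
print(profile.interests)  # {'jazz': 0.75, 'metal': 0.25}
```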

  3. User Adaptation
• Knowing the user – this knowledge – can be applied to adapt a system or interface to the user, to improve the system functionality and the user experience

  4. User-Adaptive Systems

  5. Last.fm: Adapts to your music taste

  6. Issues in User-Adaptive Systems
• Overfitting, “bubble effects”, loss-of-serendipity problem:
  • systems may adapt too strongly to the user's interests/behavior
  • e.g., an adaptive radio station may always play the same or very similar songs
  • we search for the right balance between novelty and relevance for the user (diversity!); one possible re-ranking approach is sketched below
• “Lost in Hyperspace” problem:
  • when adapting the navigation – i.e., the links on which users can click to find/access information
  • e.g., re-ordering/hiding of menu items may lead to confusion
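A minimal sketch of the diversity-aware re-ranking mentioned above, in the style of maximal marginal relevance (MMR). The scoring function, the lam trade-off parameter, and the toy genre-based similarity are illustrative assumptions; the slides do not prescribe a particular method.

```python
def rerank_with_diversity(candidates, relevance, similarity, lam=0.7, k=10):
    """Greedy MMR-style re-ranking: at each step pick the item that maximizes
    lam * relevance - (1 - lam) * (max similarity to items already selected)."""
    selected = []
    pool = list(candidates)
    while pool and len(selected) < k:
        def mmr_score(item):
            redundancy = max((similarity(item, s) for s in selected), default=0.0)
            return lam * relevance[item] - (1 - lam) * redundancy
        best = max(pool, key=mmr_score)
        selected.append(best)
        pool.remove(best)
    return selected

# Toy usage: songs with relevance scores and a genre-overlap similarity.
relevance = {"song_a": 0.9, "song_b": 0.85, "song_c": 0.4}
genres = {"song_a": {"metal"}, "song_b": {"metal"}, "song_c": {"jazz"}}
sim = lambda x, y: len(genres[x] & genres[y]) / len(genres[x] | genres[y])
print(rerank_with_diversity(relevance.keys(), relevance, sim, lam=0.5, k=3))
# With lam=0.5 the jazz song is promoted above the second metal song.
```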

  7. Success perspectives

  8. Evaluation Strategies
• User studies: clean-room study: ask/observe (selected) people whether you did a good job
• Log analysis: analyze (click) data and infer whether you did a good job, e.g., cross-validation by “leave-one-out” (see the sketch below)
• Evaluation of user modeling:
  • measure the quality of profiles directly, e.g., measure the overlap with existing (true) profiles, or let people judge the quality of the generated user profiles
  • measure the quality of an application that exploits the user profile, e.g., apply user modeling strategies in a recommender system (not trivial to evaluate recommenders -> next lecture topic, work by Dellschaft)
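A minimal sketch of the “leave-one-out” log analysis: for each user, hide one interacted item, feed the rest to the recommender, and check whether the hidden item comes back in the top k. The recommend callable and the toy logs are placeholder assumptions.

```python
def leave_one_out_hit_rate(user_items, recommend, k=10):
    """user_items: dict user -> list of interacted items.
    recommend(user, observed_items, k): any recommender returning a ranked list.
    Returns the fraction of held-out items recovered in the top-k (hit rate)."""
    hits, trials = 0, 0
    for user, items in user_items.items():
        if len(items) < 2:
            continue  # need at least one item to observe and one to hold out
        for held_out in items:
            observed = [i for i in items if i != held_out]
            ranked = recommend(user, observed, k)
            hits += int(held_out in ranked[:k])
            trials += 1
    return hits / trials if trials else 0.0

# Usage with a trivial placeholder recommender that always suggests popular items.
logs = {"u1": ["a", "b", "c"], "u2": ["b", "c"], "u3": ["a", "c"]}
popular = ["c", "b", "a"]
hit_rate = leave_one_out_hit_rate(logs, lambda u, obs, k: [i for i in popular if i not in obs][:k])
print(hit_rate)
```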

  9. Measuring success?

  10. Possible metrics
• The usual IR metrics:
  • Precision: fraction of retrieved items that are relevant
  • Recall: fraction of relevant items that have been retrieved
  • F-Measure: (harmonic) mean of precision and recall
• Metrics for evaluating recommendations (rankings):
  • Mean Reciprocal Rank (MRR) of the first relevant item
  • Success@k: probability that a relevant item occurs within the top k
  • Precision@k, Recall@k & F-Measure@k
  • If a true ranking is given: rank correlations
• Metrics for evaluating prediction of user preferences:
  • MAE = Mean Absolute Error
  • True/False Positives/Negatives
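For concreteness, a minimal sketch of a few of these metrics in plain Python (function names and toy data are illustrative):

```python
def precision_at_k(ranked, relevant, k):
    """Fraction of the top-k recommended items that are relevant."""
    return sum(1 for item in ranked[:k] if item in relevant) / k

def recall_at_k(ranked, relevant, k):
    """Fraction of all relevant items that appear in the top-k."""
    return sum(1 for item in ranked[:k] if item in relevant) / len(relevant)

def reciprocal_rank(ranked, relevant):
    """1 / rank of the first relevant item (0 if none is found)."""
    for rank, item in enumerate(ranked, start=1):
        if item in relevant:
            return 1.0 / rank
    return 0.0

def mean_absolute_error(predicted, actual):
    """MAE between predicted and actual ratings (same-length lists)."""
    return sum(abs(p - a) for p, a in zip(predicted, actual)) / len(actual)

# Toy usage
ranked = ["a", "b", "c", "d"]
relevant = {"b", "d"}
print(precision_at_k(ranked, relevant, 2))      # 0.5
print(recall_at_k(ranked, relevant, 2))         # 0.5
print(reciprocal_rank(ranked, relevant))        # 0.5 (first relevant item at rank 2)
print(mean_absolute_error([4.5, 3.0], [5, 2]))  # 0.75
```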

  11. Example Evaluation on Flickr
• [Rae et al.] shows a typical example of how to investigate and evaluate a proposal for improving (tag) recommendations (using social networks)
• Task: test how well the different strategies (here: different tag contexts) can be used for tag prediction/recommendation
  • Given two tags already used for a photo, predict five more tags
• Steps: ...
[Rae et al., Improving Tag Recommendations Using Social Networks, RIAO'10]

  12. Example Evaluation using Flickr
• [Rae et al.] shows a typical example of how to investigate and evaluate a proposal for improving (tag) recommendations (using social networks)
• Task: test how well the different strategies (here: different tag contexts) can be used for tag prediction/recommendation
  • PC: personal context
  • SCC: social contact context
  • SGC: social group context
  • CC: collective/global context
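To make the task concrete: given two tags already assigned to a photo, one simple illustrative strategy (not necessarily the one used by Rae et al.) is to rank candidate tags by how often they co-occur with the seed tags inside the chosen context (PC, SCC, SGC or CC):

```python
from collections import Counter

def predict_tags(seed_tags, context_photos, n=5):
    """context_photos: list of tag sets drawn from one context (PC, SCC, SGC or CC).
    Rank candidate tags by co-occurrence with the seed tags and return the top n."""
    scores = Counter()
    seeds = set(seed_tags)
    for tags in context_photos:
        if seeds & tags:  # photo shares at least one seed tag
            for tag in tags - seeds:
                scores[tag] += len(seeds & tags)  # weight by seed overlap
    return [tag for tag, _ in scores.most_common(n)]

# Toy usage: the "personal context" = the user's own previously tagged photos.
personal_context = [{"beach", "sunset", "holiday"}, {"beach", "sea", "sand"},
                    {"city", "night"}]
print(predict_tags(["beach", "sunset"], personal_context))  # e.g. ['holiday', 'sea', 'sand']
```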

  13. Example Evaluation
• Task: test how well the different strategies (here: different tag contexts) can be used for tag prediction/recommendation
• Steps:
  1. Gather a dataset of tag data, part of which can be used as input, and aim to test the recommendation on the remaining tag data
  2. Use the input data and calculate the predictions for the different strategies
  3. Measure the performance using standard (IR) metrics: Precision of the top 5 recommended tags (P@5), Mean Reciprocal Rank (MRR), Mean Average Precision (MAP)
  4. Test the results for statistical significance using Student's t-test, relative to the baseline (e.g., an existing approach or a competitive approach)
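Step 4 is typically a paired test over per-photo (or per-user) scores, since both systems are evaluated on the same items. A minimal sketch, assuming scipy is available and that per-photo P@5 scores have already been computed for the strategy and the baseline:

```python
from scipy import stats

# Per-photo P@5 scores for the personalized strategy and the baseline (toy numbers).
strategy_p5 = [0.6, 0.4, 0.8, 0.2, 0.6, 0.4, 0.8, 0.6]
baseline_p5 = [0.4, 0.4, 0.6, 0.2, 0.4, 0.2, 0.6, 0.4]

# Paired (dependent) t-test: the same photos are evaluated under both systems.
t_stat, p_value = stats.ttest_rel(strategy_p5, baseline_p5)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
# If p < 0.05, the improvement over the baseline is statistically significant.
```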

  14. Example Evaluation - 2
• [Guy et al.] shows another example of a similar evaluation approach
• Here, the different strategies differ in the way people and tags are used: in these tag-based systems there are complex relationships between users, tags and items, and the strategies aim to find the aspects of these relationships that are relevant for modeling and recommendation
• Here, their baseline is the strategy of the 'most popular' tags: this frequently used baseline compares the globally most popular tags with the tags predicted by a particular personalization strategy, thus investigating whether the personalization is worth the effort and is able to outperform this easily available baseline
[Guy et al., Social Media Recommendation based on People and Tags, SIGIR'10]
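The 'most popular' baseline is trivial to implement, which is exactly why it is the standard sanity check for any personalization strategy. A minimal sketch (data and names are illustrative):

```python
from collections import Counter

def most_popular_baseline(all_tag_assignments, n=5):
    """Recommend the globally most frequent tags, identically for every user."""
    counts = Counter(tag for tags in all_tag_assignments for tag in tags)
    return [tag for tag, _ in counts.most_common(n)]

# Toy usage: each inner list is the tag set of one bookmarked item.
assignments = [["web", "social"], ["web", "search"], ["web", "social", "ir"]]
print(most_popular_baseline(assignments, n=3))  # ['web', 'social', 'search']
```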

  15. Collaborative Filtering: Problem

  16. Collaborative Filtering
• Typical assumption:
  • it is too difficult to represent content and your content preferences
• Or, do you know the difference between
  • White metal
  • Black metal
  • Thrash metal, speed metal
  • Death metal
  • Power metal
  • Doom and gothic metal
  • ... ?
=> Don't even try

  17. Representing content in collaborative filtering
• An object is represented by who likes it how much
• Pulp Fiction^T = (null, 5, 1, null, null, 2, 5, ...) – one entry for each of the 1 billion users: the first person has not rated it, the second person likes it, the third person dislikes it
• Cold Start Problem: new movie
• Oblivion^T = (null, null, null, ...) – no one has rated it yet because it will only be released in 2013

  18. Representing users in collaborative filtering
• A user is represented by what he likes
• John Smith^T = (null, 5, 1, null, null, 2, 5, ...) – one entry for each of the 1 million (?) movies: he has not rated Pulp Fiction, likes Skyfall, dislikes Antichrist
• Cold Start Problem: new user
• Steffen Staab^T = (null, null, null, ...) – "I have not rated any movie yet"
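Both slides describe the same (very sparse) user-item rating matrix, read once per column (item) and once per row (user). A minimal sketch with a dict-of-dicts representation; the names and ratings are toy data based on the examples above:

```python
# Sparse user-item ratings: missing entries play the role of "null" above.
ratings = {
    "john_smith":    {"skyfall": 5, "antichrist": 1},
    "second_user":   {"pulp_fiction": 5, "skyfall": 4},
    "third_user":    {"pulp_fiction": 1},
    "steffen_staab": {},  # cold-start user: no ratings yet
}

def item_vector(item, ratings):
    """Column view: who rated this item, and how (slide 17)."""
    return {user: r[item] for user, r in ratings.items() if item in r}

def user_vector(user, ratings):
    """Row view: what this user rated, and how (slide 18)."""
    return ratings.get(user, {})

print(item_vector("pulp_fiction", ratings))   # {'second_user': 5, 'third_user': 1}
print(user_vector("steffen_staab", ratings))  # {} -> cold-start on the user side
print(item_vector("oblivion", ratings))       # {} -> cold-start on the item side
```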

  19. Collaborative Filtering
• Memory-based: User-Item matrix: ratings/preferences of users => compute similarity between users & recommend items of similar users
• Model-based: Item-Item matrix: similarity (e.g., based on user ratings) between items => recommend items that are similar to the ones the user likes
• Model-based: Clustering: cluster users according to their preferences => recommend items of users that belong to the same cluster
• Model-based: Bayesian networks: P(u likes item B | u likes item A) = how likely is it that a user who likes item A will like item B; learn the probabilities from user ratings/preferences
• Others: rule-based, other data mining techniques
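A minimal sketch of the first, memory-based variant: cosine similarity between users' sparse rating vectors, then unseen items scored by similarity-weighted ratings of the other users. This is an illustrative implementation, not the only way to do it:

```python
from math import sqrt

def cosine(u, v):
    """Cosine similarity between two sparse rating dicts."""
    common = set(u) & set(v)
    dot = sum(u[i] * v[i] for i in common)
    norm = sqrt(sum(x * x for x in u.values())) * sqrt(sum(x * x for x in v.values()))
    return dot / norm if norm else 0.0

def recommend_user_based(target, ratings, k=5):
    """Score unseen items by similarity-weighted ratings of the other users."""
    scores, weights = {}, {}
    for other, other_ratings in ratings.items():
        if other == target:
            continue
        sim = cosine(ratings[target], other_ratings)
        if sim <= 0:
            continue
        for item, rating in other_ratings.items():
            if item not in ratings[target]:  # only recommend unseen items
                scores[item] = scores.get(item, 0.0) + sim * rating
                weights[item] = weights.get(item, 0.0) + sim
    ranked = sorted(scores, key=lambda i: scores[i] / weights[i], reverse=True)
    return ranked[:k]

# Toy usage: alice resembles bob, so bob's other items are recommended first.
ratings = {
    "alice": {"pulp_fiction": 5, "skyfall": 4},
    "bob":   {"pulp_fiction": 5, "oblivion": 4, "antichrist": 2},
    "carol": {"antichrist": 5},
}
print(recommend_user_based("alice", ratings))  # ['oblivion', 'antichrist']
```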

  20. Social networks & interest similarity
• Limitations of collaborative filtering:
  • 'cold start' and
  • 'sparsity'
  • the lack of control (over people who share some, but not all, of my interests) is also a problem, i.e., one cannot add 'trusted' people, nor exclude 'strange' ones
• 'Social recommenders': the presence of social connections defines the similarity in interests (e.g., social tagging on CiteULike):
  • Rationale: homophily = birds of a feather flock together
  • does a social connection indicate user interest similarity?
  • how much does users' interest similarity depend on the strength of their connection?
  • is it feasible to use a social network as a basis for personalized recommendation?
[Lin & Brusilovsky, Social Networks and Interest Similarity: The Case of CiteULike, HT'10]
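The first question (does a social connection indicate interest similarity?) can be checked empirically by comparing item-set overlap for connected vs. unconnected user pairs. A minimal sketch using Jaccard similarity; the toy data and the simple averaging are illustrative and not the exact procedure of Lin & Brusilovsky:

```python
from itertools import combinations

def jaccard(a, b):
    """Jaccard similarity of two item sets."""
    return len(a & b) / len(a | b) if (a | b) else 0.0

def mean_similarity(user_items, edges):
    """Average interest similarity of connected vs. unconnected user pairs."""
    connected, unconnected = [], []
    edge_set = {frozenset(e) for e in edges}
    for u, v in combinations(user_items, 2):
        sim = jaccard(user_items[u], user_items[v])
        (connected if frozenset((u, v)) in edge_set else unconnected).append(sim)
    avg = lambda xs: sum(xs) / len(xs) if xs else 0.0
    return avg(connected), avg(unconnected)

# Toy data: bookmarked items per user and who is socially connected to whom.
items = {"u1": {"a", "b", "c"}, "u2": {"b", "c"}, "u3": {"x", "y"}, "u4": {"a", "x"}}
edges = [("u1", "u2"), ("u3", "u4")]
print(mean_similarity(items, edges))  # connected pairs score higher on average
```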

  21. Conclusions
• pairs unilaterally connected have more common information items, metadata, and tags than non-connected pairs
• the similarity was largest for direct connections and decreased with increasing distance between users in the social networks
• users involved in a reciprocal relationship exhibited significantly larger similarity than users in a unidirectional relationship
• traditional item-level similarity may be a less reliable way to find similar users in social bookmarking systems
• item collections of peers connected by self-defined social connections could be a useful source for cross-recommendation
