
Seesaw Personalized Web Search

Explore the benefits of personalization algorithms in web search, including query expansion and result re-ranking. Learn about efficient scoring, evaluation frameworks, and user-controlled personalization.


Presentation Transcript


  1. Seesaw: Personalized Web Search Jaime Teevan, MIT, with Susan T. Dumais and Eric Horvitz, MSR

  2. Personalization Algorithms • Query expansion • Standard IR [Diagram: the user’s query goes from the client to the server, which returns documents]

  3. Personalization Algorithms • Query expansion v. result re-ranking • Standard IR [Diagram: query expansion modifies the query sent to the server; result re-ranking reorders the documents returned to the client]

  4. Result Re-Ranking • Ensures privacy • Good evaluation framework • Can look at rich user profile v. Query expansion • Light weight user models • Collected on server side • Sent as query expansion

  5. Seesaw Search Engine [Figure: Seesaw’s user index of terms and counts: dog 1, cat 10, india 2, mit 4, search 93, amherst 12, vegas 1]

  6. Seesaw Search Engine [Figure: a query comes in and is matched against the user index (dog 1, cat 10, india 2, mit 4, search 93, amherst 12, vegas 1)]

  7. Seesaw Search Engine [Figure: each result for the query is represented by its terms, e.g. “forest hiking walking gorp”, “dog cat monkey banana food”, “baby infant child boy girl”, “csail mit artificial research robot”, “web search retrieval ir hunt”, and compared against the user index]

  8. Seesaw Search Engine [Figure: each result is scored against the user index (example scores 6.0, 1.6, 0.2, 2.7, 0.2, 1.3) and the re-ranked results form the search results page; a sketch of this step follows below]
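
A minimal sketch of the re-ranking loop these slides build up, assuming results arrive as (title, snippet) pairs and using the raw user-index counts as stand-in term weights; the real weights come from the relevance-feedback formula on the following slides.

    # Example user index from the slides: term -> count in the user's seen documents.
    user_index = {"dog": 1, "cat": 10, "india": 2, "mit": 4,
                  "search": 93, "amherst": 12, "vegas": 1}

    def personalized_score(snippet, term_weights):
        # Sum the weight of every snippet term the user model knows about.
        return sum(term_weights.get(term, 0.0) for term in snippet.lower().split())

    def rerank(results, term_weights):
        # results: list of (title, snippet) pairs as returned by the web engine.
        return sorted(results,
                      key=lambda r: personalized_score(r[1], term_weights),
                      reverse=True)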

  9. Calculating a Document’s Score • Based on standard tf.idf [Figure: the example result “web search retrieval ir hunt” scores 1.3]

  10. Calculating a Document’s Score • Based on standard tf.idf • wi = log [ (ri + 0.5)(N - ni - R + ri + 0.5) / ((ni - ri + 0.5)(R - ri + 0.5)) ] • User as relevance feedback • Stuff I’ve Seen index • More is better [Figure: example per-term weights 0.1, 0.5, 0.05, 0.35, 0.3 summing to the document score 1.3; see the sketch below]
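
A direct transcription of that weight, with the Stuff I’ve Seen index playing the role of the relevant set; the function and variable names are mine, and the Score = Σ tfi * wi combination is the one spelled out later on slide 35.

    import math

    def term_weight(ri, ni, R, N):
        # Relevance-feedback weight from the slide:
        #   wi = log[ (ri+0.5)(N-ni-R+ri+0.5) / ((ni-ri+0.5)(R-ri+0.5)) ]
        # N  = documents in the corpus       ni = corpus documents containing term i
        # R  = documents the user has seen   ri = seen documents containing term i
        return math.log(((ri + 0.5) * (N - ni - R + ri + 0.5)) /
                        ((ni - ri + 0.5) * (R - ri + 0.5)))

    def document_score(term_freqs, term_stats):
        # Score = sum of tf_i * w_i over the document's terms.
        # term_freqs: {term: tf in this document}; term_stats: {term: (ri, ni, R, N)}.
        return sum(tf * term_weight(*term_stats[t])
                   for t, tf in term_freqs.items() if t in term_stats)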

  11. Finding the Score Efficiently • Corpus representation (N, ni) • Web statistics • Result set • Document representation • Download document • Use result set snippet • Efficiency hacks generally OK!
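
One way to read the “result set” corpus representation: approximate (N, ni) from the returned results rather than from web-scale statistics, and score snippets instead of downloaded documents. A sketch of that approximation (my own helper, not the authors’ code):

    from collections import Counter

    def corpus_stats_from_results(snippets):
        # N  = number of results in the set
        # ni = number of result snippets containing term i
        N = len(snippets)
        ni = Counter()
        for snippet in snippets:
            ni.update(set(snippet.lower().split()))
        return N, ni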

  12. Evaluating Personalized Search • 15 evaluators • Evaluate 50 results for a query • Highly relevant • Relevant • Irrelevant • Measure algorithm quality with DCG: DCG(i) = Gain(i) if i = 1, otherwise DCG(i) = DCG(i-1) + Gain(i)/log(i) (see the sketch below)
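
The DCG recurrence from the slide, written out. The slide does not give the gain values for the three relevance levels or the log base, so the mapping and the natural log below are assumptions.

    import math

    def dcg(gains):
        # DCG(1) = Gain(1); DCG(i) = DCG(i-1) + Gain(i) / log(i) for i > 1.
        total = 0.0
        for i, gain in enumerate(gains, start=1):
            total += gain if i == 1 else gain / math.log(i)
        return total

    # Assumed gains: 2 = highly relevant, 1 = relevant, 0 = irrelevant.
    print(dcg([2, 1, 0, 1]))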

  13. Evaluating Personalized Search • Query selection • Chose from 10 pre-selected queries (cancer, Microsoft, traffic, Las Vegas, rice, McDonalds, bison frise, Red Sox, airlines, …) or a previously issued query • Total: 137 queries, 53 pre-selected (2-9 per query)

  14. Seesaw Improves Text Retrieval [Chart: retrieval quality of Random, Relevance Feedback, and Seesaw rankings]

  15. Text Features Not Enough

  16. Take Advantage of Web Ranking

  17. Further Exploration • Explore larger parameter space • Learn parameters • Based on individual • Based on query • Based on results • Give user control?

  18. Making Seesaw Practical • Learn most about personalization by deploying a system • Best algorithm reasonably efficient • Merging server and client • Query expansion • Get more relevant results in the set to be re-ranked • Design snippets for personalization

  19. User Interface Issues • Make personalization transparent • Give user control over personalization • Slider between Web and personalized results • Allows for background computation • Creates problem with re-finding • Results change as user model changes • Thesis research – Re:Search Engine

  20. Thank you! teevan@csail.mit.edu

  21. END

  22. Personalizing Web Search • Motivation • Algorithms • Results • Future Work

  23. Personalizing Web Search • Motivation • Algorithms • Results • Future Work

  24. Study of Personal Relevancy • 15 participants • Microsoft employees • Managers, support staff, programmers, … • Evaluate 50 results for a query • Highly relevant • Relevant • Irrelevant • ~10 queries per person

  25. Study of Personal Relevancy • Query selection • Chose from 10 pre-selected queries (cancer, Microsoft, traffic, Las Vegas, rice, McDonalds, bison frise, Red Sox, airlines, …) or a previously issued query • Total: 137 queries, 53 pre-selected (2-9 per query)

  26. Relevant Results Have Low Rank [Chart: distribution of Highly Relevant, Relevant, and Irrelevant results by rank]

  27. Relevant Results Have Low Rank [Chart: the same distribution shown separately for Rater 1 and Rater 2]

  28. Same Results Rated Differently • Average inter-rater reliability: 56% • Different from previous research • Belkin: 94% IRR in TREC • Eastman: 85% IRR on the Web • Asked for personal relevance judgments • Some queries more correlated than others

  29. Same Query, Different Intent • Different meanings • “Information about the astronomical/astrological sign of cancer” • “information about cancer treatments” • Different intents • “is there any new tests for cancer?” • “information about cancer treatments”

  30. Same Intent, Different Evaluation • Query: Microsoft • “information about microsoft, the company” • “Things related to the Microsoft corporation” • “Information on Microsoft Corp” • 31/50 results rated as not irrelevant • More than one rater agreed on only 6 of those 31 • All three raters agreed only for www.microsoft.com • Inter-rater reliability: 56%

  31. Search Engines are for the Masses [Figure: two different users, Joe and Mary, receive the same ranking]

  32. Much Room for Improvement • Group ranking • Best improves on Web by 38% • More people → Less improvement

  33. Much Room for Improvement • Group ranking • Best improves on Web by 38% • More people → Less improvement • Personal ranking • Best improves on Web by 55% • Remains constant

  34. Personalizing Web Search • Motivation • Algorithms • Results • Future Work [Animation: introducing the Seesaw Search Engine]

  35. BM25 with Relevance Feedback • Score = Σ tfi * wi • N = documents in the corpus, ni = corpus documents containing term i, R = relevant documents, ri = relevant documents containing term i • wi = log (…) [formula completed on the next slide]

  36. BM25 with Relevance Feedback • Score = Σ tfi * wi • wi = log [ (ri + 0.5)(N - ni - R + ri + 0.5) / ((ni - ri + 0.5)(R - ri + 0.5)) ]

  37. User Model as Relevance Feedback • Score = Σ tfi * wi • Treat the user’s index as the relevant set and fold it into the corpus statistics: N’ = N + R, ni’ = ni + ri • wi = log [ (ri + 0.5)(N’ - ni’ - R + ri + 0.5) / ((ni’ - ri + 0.5)(R - ri + 0.5)) ] (see the sketch below)
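
The same weight after the substitution on this slide, i.e. the user’s seen documents are added to the corpus counts before the weight is computed; a sketch with my own names:

    import math

    def user_model_weight(ri, ni, R, N):
        # Fold the user's index into the corpus statistics: N' = N + R, ni' = ni + ri,
        # then apply the BM25 relevance-feedback weight from slide 36.
        N2 = N + R
        ni2 = ni + ri
        return math.log(((ri + 0.5) * (N2 - ni2 - R + ri + 0.5)) /
                        ((ni2 - ri + 0.5) * (R - ri + 0.5)))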

  38. User Model as Relevance Feedback • Score = Σ tfi * wi [Diagram: the World supplies the corpus statistics N and ni; the User’s index supplies R and ri]

  39. User Model as Relevance Feedback • Score = Σ tfi * wi [Diagram: the corpus statistics N and ni can instead be taken from only the part of the World related to the query]

  40. User Model as Relevance Feedback • Score = Σ tfi * wi [Diagram: also taking R and ri from only the part of the User’s index related to the query gives Query Focused Matching]

  41. User Model as Relevance Feedback • Score = Σ tfi * wi [Diagram: Query Focused Matching draws its statistics from the query-related parts of the user index and the world; World Focused Matching draws them from the full user index and the Web; see the sketch below]
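
A rough sketch of how I read the query-focused vs. world-focused distinction: the choice only determines which document sets supply (N, ni) and (R, ri). The related_to helper and the exact subset definitions are assumptions, not the authors’ implementation.

    def matching_stats(matching, world, user, query):
        # world / user are assumed to be collections that can report document counts
        # and expose a query-related subset via related_to(query).
        if matching == "query focused":
            corpus = world.related_to(query)  # N, ni from the query-related part of the world
            seen = user.related_to(query)     # R, ri from the query-related part of the user index
        else:  # "world focused"
            corpus = world                    # N, ni from the full corpus / Web
            seen = user                       # R, ri from the full user index
        return corpus, seen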

  42. Parameters • Matching • User representation • World representation • Query expansion

  43. Parameters • Matching • User representation • World representation • Query expansion Query focused World focused

  44. Parameters • Matching • User representation • World representation • Query expansion Query focused World focused

  45. User Representation • Stuff I’ve Seen (SIS) index • MSR research project [Dumais, et al.] • Index of everything a user has seen • Recently indexed documents • Web documents in SIS index • Query history • None

  46. Parameters • Matching • User representation • World representation • Query expansion Query focused World focused All SIS Recent SIS Web SIS Query history None

  47. Parameters • Matching • User representation • World representation • Query expansion Query Focused World Focused All SIS Recent SIS Web SIS Query History None

  48. World Representation • Document Representation • Full text • Title and snippet • Corpus Representation • Web • Result set – title and snippet • Result set – full text

  49. Parameters • Matching • User representation • World representation • Query expansion Query focused World focused All SIS Recent SIS Web SIS Query history None Full text Title and snippet Web Result set – full text Result set – title and snippet
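
The parameter values listed on slides 45-49 define a small grid; enumerating it is one way to drive the offline evaluation over every configuration (a sketch, not the authors’ harness; the query-expansion settings are not spelled out in the transcript, so they are omitted).

    from itertools import product

    MATCHING     = ["query focused", "world focused"]
    USER_REP     = ["all SIS", "recent SIS", "web SIS", "query history", "none"]
    CORPUS_REP   = ["web", "result set - full text", "result set - title and snippet"]
    DOCUMENT_REP = ["full text", "title and snippet"]

    configurations = list(product(MATCHING, USER_REP, CORPUS_REP, DOCUMENT_REP))
    print(len(configurations), "configurations")  # 2 * 5 * 3 * 2 = 60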
