
Improving Web Search Ranking by Incorporating User Behavior Information


Presentation Transcript


  1. Improving Web Search Ranking by Incorporating User Behavior Information Eugene Agichtein, Eric Brill, Susan Dumais Microsoft Research

  2. Web Search Ranking • Rank pages relevant for a query • Content match • e.g., page terms, anchor text, term weights • Prior document quality • e.g., web topology, spam features • Hundreds of parameters • Tune ranking functions on explicit document relevance ratings

  3. Query: SIGIR 2006 • Users can help indicate the most relevant results

  4. Web Search Ranking: Revisited • Incorporate user behavior information • Millions of users submit queries daily • Rich user interaction features (earlier talk) • Complementary to content and web topology • Some challenges: • User behavior “in the wild” is not reliable • How to integrate interactions into ranking • What is the impact over all queries

  5. Outline • Modelling user behavior for ranking • Incorporating user behavior into ranking • Empirical evaluation • Conclusions

  6. Related Work • Personalization • Rerank results based on user’s clickthrough and browsing history • Collaborative filtering • Amazon, DirectHit: rank by clickthrough • General ranking • Joachims et al. [KDD 2002], Radlinski et al. [KDD 2005]: tuning ranking functions with clickthrough

  7. Rich User Behavior Feature Space • Observed and distributional features • Aggregate observed values over all user interactions for each query and result pair • Distributional features: deviations from the “expected” behavior for the query • Represent user interactions as vectors in user behavior space • Presentation: what a user sees before a click • Clickthrough: frequency and timing of clicks • Browsing: what users do after a click
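
To make the feature construction concrete, here is a minimal Python sketch of the two feature families described on this slide. The function names and the (query, result, feature_name, value) tuple format are assumptions for illustration, not the paper's actual pipeline.

```python
from collections import defaultdict

def aggregate_observed(interactions):
    """Aggregate observed feature values over all user interactions
    for each (query, result) pair; `interactions` is assumed to be an
    iterable of (query, result, feature_name, value) tuples."""
    sums = defaultdict(float)
    counts = defaultdict(int)
    for query, result, name, value in interactions:
        sums[(query, result, name)] += value
        counts[(query, result, name)] += 1
    return {key: sums[key] / counts[key] for key in sums}

def distributional(observed, expected):
    """Distributional feature: deviation of the observed value from
    the 'expected' behavior for the query (e.g., the average of that
    feature over all results returned for the query)."""
    return observed - expected
```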

  8. Some User Interaction Features

  9. Training a User Behavior Model • Map user behavior features to relevance judgements • RankNet: Burges et al., [ICML 2005] • Scalable Neural Net implementation • Input: user behavior + relevance labels • Output: weights for behavior feature values • Used as testbed for all experiments

  10-13. Training RankNet [Burges et al. 2005] • For query results 1 and 2, present a pair of feature vectors and labels, with label(1) > label(2) • The network scores each input: Feature Vector 1 → NN output 1, Feature Vector 2 → NN output 2 • The error is a function of both outputs (desire output 1 > output 2)

  14. Predicting with RankNet • Present an individual feature vector and get a score: Feature Vector → NN output
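
The pairwise training and scoring steps on slides 10-14 can be sketched as follows. This is a minimal sketch using a linear scorer as a stand-in for the neural net, with the cross-entropy pairwise loss commonly associated with RankNet; the exact loss and update in Burges et al. [ICML 2005] may differ in details.

```python
import numpy as np

def score(w, x):
    """Prediction: present an individual feature vector, get a score."""
    return float(np.dot(w, x))

def train_pair(w, x1, x2, lr=0.01, sigma=1.0):
    """One gradient step on a pair where result 1 is labeled more
    relevant than result 2, so we desire score(w, x1) > score(w, x2).
    Loss: pairwise cross-entropy -log P(1 ranked above 2), with P
    modeled as a logistic of the score difference."""
    p = 1.0 / (1.0 + np.exp(-sigma * (score(w, x1) - score(w, x2))))
    grad = -sigma * (1.0 - p) * (x1 - x2)  # gradient of the loss w.r.t. w
    return w - lr * grad
```

At query time only `score` is needed: each result's feature vector is scored independently and the results are sorted by score.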

  15. Outline • Modelling user behavior • Incorporating user behavior into ranking • Empirical evaluation • Conclusions

  16. User Behavior Models for Ranking • Use interactions from previous instances of query • General-purpose (not personalized) • Only available for queries with past user interactions • Models: • Rerank, clickthrough only: reorder results by number of clicks • Rerank, predicted preferences (all user behavior features): reorder results by predicted preferences • Integrate directly into ranker: incorporate user interactions as features for the ranker

  17. Rerank, Clickthrough Only • Promote all clicked results to the top of the result list • Re-order by click frequency • Retain relative ranking of un-clicked results
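
A minimal sketch of this reranking rule, assuming `results` is the original ranked list and `clicks` maps each result to its aggregated click count (both hypothetical structures):

```python
def rerank_clickthrough(results, clicks):
    """Promote all clicked results above unclicked ones, ordering the
    clicked results by click frequency."""
    clicked = sorted((r for r in results if clicks.get(r, 0) > 0),
                     key=lambda r: clicks[r], reverse=True)
    unclicked = [r for r in results if clicks.get(r, 0) == 0]
    # sorted() is stable, so ties among clicked results and all
    # unclicked results retain their original relative ranking
    return clicked + unclicked
```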

  18. Rerank, Preference Predictions • Re-order results by a function of the preference prediction score • Experimented with different variants • Using inverse of ranks • Intuition: scores are not comparable ⇒ merge ranks (see the sketch below)
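
One way to realize the inverse-rank merge, as a hedged sketch: since raw scores are not comparable across the original ranker and the preference model, combine the two orderings through their inverse ranks. The interpolation weight `w` and this exact combination are assumptions; the slide notes several variants were tried.

```python
def merge_ranks(orig_rank, pred_rank, w=0.5):
    """Combine the original ranking with the preference-prediction
    ranking through inverse ranks; `orig_rank` and `pred_rank` map each
    result to its 1-based rank in the respective ordering."""
    merged = {r: w / orig_rank[r] + (1 - w) / pred_rank.get(r, float("inf"))
              for r in orig_rank}
    # higher merged score = better; 1/inf contributes 0 for results
    # the preference model did not score
    return sorted(merged, key=merged.get, reverse=True)
```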

  19. Integrate User Behavior Features Directly into Ranker • For a given query • Merge original feature set with user behavior features when available • User behavior features computed from previous interactions with same query • Train RankNet on enhanced feature set
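
A sketch of the feature-set merge, assuming fixed-length content and behavior feature blocks. Zero-filling the behavior block for queries without past interactions is an illustrative choice, not the paper's stated handling of missing values.

```python
def build_feature_vector(content_features, behavior_features, n_behavior):
    """Concatenate the original content/quality features with the user
    behavior features computed from previous interactions with the same
    query; zero-fill the behavior block when no interactions exist."""
    if behavior_features is None:
        behavior_features = [0.0] * n_behavior
    return list(content_features) + list(behavior_features)
```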

  20. Outline • Modelling user behavior • Incorporating user behavior into ranking • Empirical evaluation • Conclusions

  21. Evaluation Metrics • Precision at K: fraction of relevant results in the top K • NDCG at K: normalized discounted cumulative gain • Top-ranked results matter most • MAP: mean average precision • Average precision for each query: mean of the precision-at-K values computed at each rank where a relevant document is retrieved; MAP is the mean over queries
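
For reference, minimal Python implementations of the three metrics. Here `relevant` is a set of relevant documents, `gains` maps documents to graded relevance gains, and `ranked` is the returned result list (all hypothetical structures).

```python
import math

def precision_at_k(relevant, ranked, k):
    """Fraction of the top-K results that are relevant."""
    return sum(1 for d in ranked[:k] if d in relevant) / k

def ndcg_at_k(gains, ranked, k):
    """Normalized discounted cumulative gain: the log discount makes
    top-ranked results count most."""
    dcg = sum(gains.get(d, 0) / math.log2(i + 2)
              for i, d in enumerate(ranked[:k]))
    ideal = sorted(gains.values(), reverse=True)[:k]
    idcg = sum(g / math.log2(i + 2) for i, g in enumerate(ideal))
    return dcg / idcg if idcg > 0 else 0.0

def average_precision(relevant, ranked):
    """Mean of the precision values computed at each rank where a
    relevant document is retrieved; MAP averages this over queries."""
    hits, total = 0, 0.0
    for i, d in enumerate(ranked):
        if d in relevant:
            hits += 1
            total += hits / (i + 1)
    return total / len(relevant) if relevant else 0.0
```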

  22. Datasets • 8 weeks of user behavior data from anonymized opt-in client instrumentation • Millions of unique queries and interaction traces • Random sample of 3,000 queries • Gathered independently of user behavior • 1,500 train, 500 validation, 1,000 test • Explicit relevance assessments for top 10 results for each query in sample

  23. Methods Compared • Content only: BM25F • Full Search Engine: RN • Hundreds of parameters for content match and document quality • Tuned with RankNet • Incorporating User Behavior • Clickthrough: Rerank-CT • Full user behavior model predictions: Rerank-All • Integrate all user behavior features directly: +All

  24. Content, User Behavior: Precision at K, queries with interactions BM25F < Rerank-CT < Rerank-All < +All

  25. Content, User Behavior: NDCG BM25F < Rerank-CT < Rerank-All < +All

  26. Full Search Engine, User Behavior: NDCG, MAP

  27. Impact: All Queries, Precision at K • Fewer than 50% of test queries have prior user interactions • +0.06 to 0.12 precision gain over all test queries

  28. Impact: All Queries, NDCG • +0.03 to 0.05 NDCG gain over all test queries

  29. Which Queries Benefit Most • Most gains are for queries with poor original ranking

  30. Conclusions • Incorporating user behavior into web search ranking dramatically improves relevance • Providing rich user interaction features to the ranker is the most effective strategy • Large improvements shown for up to 50% of test queries

  31. Thank you • Text Mining, Search, and Navigation group: http://research.microsoft.com/tmsn/ • Adaptive Systems and Interaction group: http://research.microsoft.com/adapt/ Microsoft Research

  32. Content, User Behavior: All Queries, Precision at K BM25F < Rerank-CT < Rerank-All < +All

  33. Content, User Behavior: All Queries, NDCG BM25F << Rerank-CT << Rerank-All < +All

  34. Results Summary • Incorporating user behavior into web search ranking dramatically improves relevance • Incorporating user behavior features directly into the ranker is the most effective strategy • Impact on relevance is substantial • Poorly performing queries benefit most

  35. Promising Extensions • Backoff (improve query coverage) • Model user intent/information need • Personalization of various degrees • Query segmentation
