
Personalized Ranking Model Adaptation for Web Search

Hongning Wang1, Xiaodong He2, Ming-Wei Chang2, Yang Song2, Ryen W. White2 and Wei Chu3. 1University of Illinois at Urbana-Champaign, Urbana IL, 61801 USA; 2Microsoft Research, Redmond WA, 98007 USA; 3Microsoft Bing, Bellevue WA, 98004 USA



Presentation Transcript


  1. Personalized Ranking Model Adaptation for Web Search Hongning Wang1, Xiaodong He2, Ming-Wei Chang2, Yang Song2, Ryen W. White2 and Wei Chu3 2Microsoft Research, Redmond WA, 98007 USA 3Microsoft Bing, Bellevue WA, 98004 USA {yangsong,minchang,xiaohe,ryenw,wechu}@microsoft.com 1Department of Computer Science, University of Illinois at Urbana-Champaign, Urbana IL, 61801 USA wang296@illinois.edu

  2. Searcher’s information needs are diverse • Exploring user’s search preferences SIGIR 2013 @ Dublin Ireland

  3. Personalization for web search • Exploring user’s search preferences

  4. Existing methods for personalization • Extracting user-centric features [Teevan et al. SIGIR’05] • Location, gender, click history • Requires a large volume of user history • Memory-based personalization [White and Drucker WWW’07, Shen et al. SIGIR’05] • Learns direct associations between queries and URLs • Limited coverage, poor generalization

  5. Personalization for web search • Major considerations • Accuracy • Maximize the search utility for each single user • Efficiency • Executable at the scale of all search-engine users • Adapt to the user’s result preferences quickly

  6. Personalized Ranking Model Adaptation • Adapting the global ranking model for each individual user

  7. Personalized Ranking Model Adaptation • Adjusting the generic ranking model’s parameters with respect to each individual user’s ranking preferences

  8. Linear Regression Based Model Adaptation • Adapting the global ranking model for each individual user • Loss function from any linear learning-to-rank algorithm, e.g., RankNet, LambdaRank, RankSVM • Complexity of adaptation
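The idea above can be viewed as a linear transformation of the global model’s weight vector into a user-specific one. A minimal sketch, assuming for illustration a per-group scale-and-shift transformation (the function and variable names here are hypothetical, not from the slides):

```python
import numpy as np

def adapt_weights(w_global, groups, scale, shift):
    """Group-wise linear transformation of a global ranker's weights.

    w_global : (V,) global model weights
    groups   : (V,) int array mapping each feature to a group id
    scale, shift : (K,) per-group adaptation parameters (learned per user)
    """
    return scale[groups] * w_global + shift[groups]

# toy example: 4 ranking features in 2 groups
w_s = np.array([0.5, 1.0, -0.3, 0.2])
g = np.array([0, 0, 1, 1])
a = np.array([1.2, 0.8])   # per-group scaling
b = np.array([0.1, 0.0])   # per-group shift
w_u = adapt_weights(w_s, g, a, b)
```

Because only the K per-group parameters are estimated per user (rather than all V weights), the adaptation needs far less per-user data than retraining the ranker.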

  9. Instantiation example • Adapting RankSVM [Joachims KDD’02] • Margin rescaling to reduce mis-ordered pairs • Non-linear kernels
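RankSVM optimizes an L2-regularized pairwise hinge loss over preference pairs derived from clicks. A minimal sketch of that loss under a linear scorer (names are illustrative):

```python
import numpy as np

def ranksvm_loss(w, pairs, X, lam=1.0):
    """L2-regularized pairwise hinge loss: for each pair (i, j) where
    document i is preferred over document j, penalize a violated margin
    max(0, 1 - (s_i - s_j)) under the linear scorer s = Xw."""
    s = X @ w
    hinge = sum(max(0.0, 1.0 - (s[i] - s[j])) for i, j in pairs)
    return 0.5 * lam * (w @ w) + hinge

# two documents; the first is preferred and already scored 1 higher,
# so only the regularizer contributes
X = np.array([[1.0, 0.0], [0.0, 1.0]])
w = np.array([2.0, 0.0])
loss = ranksvm_loss(w, [(0, 1)], X)
```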

  10. Ranking feature grouping I • Grouping features by name - Name • Exploiting the informative naming scheme, e.g., BM25_Body, BM25_Title • Clustering by manually crafted patterns
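Name-based grouping can be sketched with a simple pattern rule; here we assume, for illustration, that a feature’s prefix before the first underscore or dot identifies its group (the patterns and feature names are hypothetical examples):

```python
import re
from collections import defaultdict

def group_by_name(feature_names):
    """Group ranking features whose names share a common prefix,
    e.g. 'BM25_Body' and 'BM25_Title' both fall into group 'BM25'."""
    groups = defaultdict(list)
    for idx, name in enumerate(feature_names):
        prefix = re.split(r'[_.]', name)[0]   # crude manually crafted pattern
        groups[prefix].append(idx)
    return dict(groups)

names = ["BM25_Body", "BM25_Title", "PageRank", "TF_Body"]
g = group_by_name(names)
```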

  11. Ranking feature grouping II • Co-clustering of documents and features - SVD [Dhillon KDD’01] • SVD on the document-feature matrix • k-Means clustering to group features
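The SVD + k-means pipeline on the two bullets above can be sketched as follows; this is an illustrative implementation (with a deterministic farthest-point initialization for k-means), not the paper’s exact procedure:

```python
import numpy as np

def svd_feature_groups(M, k, n_iter=20):
    """Group the columns (features) of a document-feature matrix M into k
    clusters: embed each feature via the top-k right singular vectors of M,
    then run k-means on the embeddings."""
    _, _, Vt = np.linalg.svd(M, full_matrices=False)
    F = Vt[:k].T                                  # (V, k) feature embeddings
    centers = [F[0]]                              # farthest-point init
    for _ in range(1, k):
        d = np.min([((F - c) ** 2).sum(1) for c in centers], axis=0)
        centers.append(F[d.argmax()])
    centers = np.array(centers)
    for _ in range(n_iter):                       # Lloyd iterations
        labels = ((F[:, None, :] - centers[None]) ** 2).sum(-1).argmin(1)
        for c in range(k):
            if (labels == c).any():
                centers[c] = F[labels == c].mean(0)
    return labels

# two blocks of redundant features should fall into two groups
M = np.array([[1., 1., 0., 0.],
              [1., 1., 0., 0.],
              [0., 0., 1., 1.],
              [0., 0., 1., 1.]])
labels = svd_feature_groups(M, 2)
```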

  12. Ranking feature grouping III • Clustering features by importance - Cross • Estimate a linear ranking model on different splits of the data • k-Means clustering by feature weights across splits
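A sketch of building the per-split weight profiles; ridge regression stands in here for the linear learning-to-rank model (an assumption for illustration), and the resulting matrix’s columns would then be clustered with k-means as on the slide:

```python
import numpy as np

def feature_weight_profiles(splits, ridge=0.1):
    """Fit a ridge regression (stand-in for a linear ranker) on each data
    split and stack the weight vectors into an (n_splits, V) matrix; each
    column is one feature's importance profile across splits."""
    profiles = []
    for X, y in splits:
        V = X.shape[1]
        w = np.linalg.solve(X.T @ X + ridge * np.eye(V), X.T @ y)
        profiles.append(w)
    return np.vstack(profiles)

# feature 0 predicts relevance in both splits, feature 1 never does,
# so their importance profiles are clearly separable
X = np.array([[1., 0.], [2., 0.], [3., 0.]])
y = np.array([1., 2., 3.])
W = feature_weight_profiles([(X, y), (X, y)])
```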

  13. Discussions • A general framework for ranking model adaptation • Model-based adaptation vs. {instance, feature}-based adaptation • Same optimization complexity as the original ranking model • Adaptation sharing across features to reduce the required amount of adaptation data

  14. Experimental Results • Dataset • Bing.com query log: May 27, 2012 – May 31, 2012 • Manual relevance annotation • 5-grade relevance scores • 1830 ranking features • BM25, PageRank, tf*idf, etc.

  15. Comparison of adaptation performance • Baselines • Tar-RankSVM • No adaptation, user’s own data only • RA-RankSVM [Geng et al. TKDE’12] • Model-based: global model as regularization • TransRank [Chen et al. ICDMW’08] • Instance-based: reweight annotated queries for adaptation • IW-RankSVM [Gao et al. SIGIR’10] • Instance-based: reweight user’s click data for adaptation • CLRank [Chen et al. Information Retrieval’10] • Feature-based: construct a new feature representation for adaptation • Some baselines are applicable to per-user adaptation; others only to aggregated adaptation

  16. Adaptation accuracy I • Per-user basis adaptation

  17. Adaptation accuracy II • Aggregated adaptation

  18. Improvement analysis I • Query-level improvement • Against the global model

  19. Improvement analysis II • User-level improvement • Against the global model

  20. Adaptation efficiency I • Batch mode

  21. Adaptation efficiency II • Online mode

  22. Conclusions • Efficient ranking model adaptation framework for personalized search • Linear transformation for model-based adaptation • Transformation sharing in a group-wise manner • Future work • Joint estimation of feature grouping and model transformation • Incorporate user-specific features and profiles • Extend to non-linear models

  23. References
  • White, Ryen W., and Steven M. Drucker. "Investigating behavioral variability in web search." Proceedings of the 16th International Conference on World Wide Web. ACM, 2007.
  • Shen, Xuehua, Bin Tan, and ChengXiang Zhai. "Context-sensitive information retrieval using implicit feedback." Proceedings of the 28th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval. ACM, 2005.
  • Teevan, Jaime, Susan T. Dumais, and Eric Horvitz. "Personalizing search via automated analysis of interests and activities." Proceedings of the 28th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval. ACM, 2005.
  • Burges, Chris, et al. "Learning to rank using gradient descent." Proceedings of the 22nd International Conference on Machine Learning. ACM, 2005.
  • Burges, Chris, Robert Ragno, and Quoc Viet Le. "Learning to rank with nonsmooth cost functions." Advances in Neural Information Processing Systems 19 (2007): 193-200.
  • Joachims, Thorsten. "Optimizing search engines using clickthrough data." Proceedings of the Eighth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. ACM, 2002.
  • Dhillon, Inderjit S. "Co-clustering documents and words using bipartite spectral graph partitioning." Proceedings of the Seventh ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. ACM, 2001.
  • Geng, Bo, et al. "Ranking model adaptation for domain-specific search." IEEE Transactions on Knowledge and Data Engineering 24.4 (2012): 745-758.
  • Chen, Depin, et al. "TransRank: A novel algorithm for transfer of rank learning." IEEE International Conference on Data Mining Workshops (ICDMW'08). IEEE, 2008.
  • Gao, Wei, et al. "Learning to rank only using training data from related domain." Proceedings of the 33rd International ACM SIGIR Conference on Research and Development in Information Retrieval. ACM, 2010.
  • Chen, Depin, et al. "Knowledge transfer for cross domain learning to rank." Information Retrieval 13.3 (2010): 236-253.

  24. Thank you! Q&A

  25. Notations • Query collection from each user • For each query, every retrieved document is represented by a V-dimensional vector of ranking features together with its corresponding relevance label • Ranking model • Focusing on linear ranking models
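The linear ranking model in the notations can be sketched directly: a document’s score is the inner product of its feature vector with the weight vector, and results are ordered by score (names here are illustrative):

```python
import numpy as np

def score(w, X):
    """Linear ranking model: a document's score is the inner product of its
    V-dimensional ranking-feature vector with the model weights w."""
    return X @ w

def rank(w, X):
    """Indices of the retrieved documents sorted by descending score."""
    return np.argsort(-score(w, X))

X = np.array([[0.2, 1.0],   # two documents, V = 2 features
              [0.9, 0.0]])
w = np.array([1.0, 0.0])    # only the first feature matters
order = rank(w, X)
```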

  26. Instantiation I • Adapting RankNet [Burges et al. ICML’05] & LambdaRank [Burges et al. NIPS’07] • Objective function • Regularization

  27. Instantiation I • Adapting RankNet & LambdaRank • Derived gradients • Group-wise updating
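The RankNet objective and its gradient under a linear scorer can be sketched as follows; in the adapted model, the chain rule would push the same gradient onto the group-wise transformation parameters rather than the raw weights (a sketch, not the paper’s exact derivation):

```python
import numpy as np

def ranknet_loss(w, pairs, X):
    """RankNet pairwise cross-entropy under a linear scorer s = Xw:
    for each pair (i, j) with i preferred, log(1 + exp(-(s_i - s_j)))."""
    s = X @ w
    return sum(np.logaddexp(0.0, -(s[i] - s[j])) for i, j in pairs)

def ranknet_grad(w, pairs, X):
    """Gradient of the loss w.r.t. w; group-wise updating would aggregate
    these components per feature group via the chain rule."""
    s = X @ w
    g = np.zeros_like(w)
    for i, j in pairs:
        # d/dw log(1 + exp(-(s_i - s_j))) = -(x_i - x_j) * sigmoid(s_j - s_i)
        g += -(X[i] - X[j]) / (1.0 + np.exp(s[i] - s[j]))
    return g

X = np.array([[1.0, 0.0], [0.0, 0.0]])
w = np.zeros(2)                       # tied scores: loss is log 2
loss = ranknet_loss(w, [(0, 1)], X)
grad = ranknet_grad(w, [(0, 1)], X)
```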

  28. Analysis of feature grouping • Effectiveness of different grouping methods • Baselines: random grouping and no grouping
