
Multi-Task Learning for Boosting with Application to Web Search Ranking, by Olivier Chapelle et al.



Presentation Transcript


  1. Multi-Task Learning for Boosting with Application to Web Search Ranking, Olivier Chapelle et al. Presenter: Wei Cheng

  2. Outline • Motivation • Backgrounds • Algorithm • From SVMs to boosting using L1 regularization • ε-boosting for optimization • Overall algorithm • Evaluation • Overall review and discussion of new research points

  3. Motivation A domain-specific search engine does better! E.g., the Chinese slang term 'gelivable' (meaning 'very useful'). Should different countries get different search engines, or the same one?

  4. Motivation • Should we train a ranking model separately for each domain? • Corpora in some domains might be too small to train a good model • Solution: multi-task learning

  5. Backgrounds Sinno Jialin Pan and Qiang Yang, TKDE 2010

  6. Backgrounds Sinno Jialin Pan and Qiang Yang, TKDE 2010

  7. Backgrounds • Why does transfer learning work? Sinno Jialin Pan et al., WWW 2010

  8. Backgrounds • Why does transfer learning work? (continued)

  9. Backgrounds • Why does transfer learning work? (continued)

  10. Backgrounds [Diagram] Traditional learning: each task gets its own input and its own learner (Learner A for dog/human, Learner B for girl/boy).

  11. Backgrounds [Diagram] Multi-task learning: the inputs of all tasks feed one joint learner, which is trained on both targets at once (dog/human and girl/boy).

  12. Algorithm • The paper designs a multi-task algorithm based on gradient boosted decision trees • Inspired by the SVM-based multi-task solution and the boosting trick • Uses ε-boosting for optimization

  13. Algorithm • From SVMs to boosting using L1 regularization • Earlier SVM-based multi-task learning (a reconstruction of the formulation follows):
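(Reconstruction, not from the slide itself: the "previous SVM-based multi-task learning" most likely refers to a regularized formulation in the style of Evgeniou and Pontil, in which each task's weight vector splits into a shared part and a task-specific offset.)

```latex
% Multi-task SVM sketch: task t uses w_t = w_0 + v_t,
% a shared vector w_0 plus a task-specific offset v_t.
\min_{w_0,\, v_1, \dots, v_T}\;
  \sum_{t=1}^{T} \sum_{i=1}^{n_t}
    \ell\!\left( (w_0 + v_t)^{\top} x_{ti},\; y_{ti} \right)
  \;+\; \lambda_0 \,\lVert w_0 \rVert^2
  \;+\; \lambda \sum_{t=1}^{T} \lVert v_t \rVert^2
```

Here ℓ is the hinge loss, and the ratio of λ to λ₀ controls how far individual tasks may drift from the shared model.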

  14. Algorithm • SVM (kernel trick) → boosting (boosting trick): pick a set H of non-linear functions (e.g., decision trees, regression trees, ...) and apply every function in H to each data point, mapping x to Φ(x), where |H| = J (see the sketch below).
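To make the boosting trick concrete, here is a minimal sketch (mine, not from the slides): each data point is mapped to the vector of outputs of a fixed pool of J shallow regression trees, playing the role of the kernel trick's feature map Φ. The function name, subsampling scheme, and scikit-learn are assumptions.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

def boosting_trick_features(X_train, y_train, X, J=100, seed=0):
    """Map each row of X to (h_1(x), ..., h_J(x)), where the h_j are
    J shallow regression trees fit on random subsamples of the data.
    This vector plays the role of the kernel feature map Phi(x)."""
    rng = np.random.default_rng(seed)
    n = X_train.shape[0]
    trees = []
    for _ in range(J):
        idx = rng.choice(n, size=max(1, n // 2), replace=False)  # random half
        h = DecisionTreeRegressor(max_depth=3)
        h.fit(X_train[idx], y_train[idx])
        trees.append(h)
    # Phi(X): one column per weak learner h_j
    return np.column_stack([h.predict(X) for h in trees])
```

A linear model with an L1 penalty over these J columns then corresponds to a sparse combination of trees, which is what the next slide's ε-boosting computes implicitly.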

  15. ε-boosting for optimization • Replace explicit L1 regularization with ε-boosting (see the sketch below)
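The link between ε-boosting and L1 regularization: taking many tiny forward-stagewise steps of size ε approximately traces the L1-regularization path, so no explicit penalty term is needed. A minimal sketch assuming squared loss, reusing the feature matrix Phi from the previous snippet (function name and defaults are mine):

```python
import numpy as np

def epsilon_boosting(Phi, y, eps=0.01, n_iter=5000):
    """Forward-stagewise (epsilon-)boosting with squared loss.
    Many tiny steps of size eps approximate the solution path of
    L1-regularized regression over the columns of Phi."""
    n, J = Phi.shape
    beta = np.zeros(J)                   # coefficients of the weak learners
    residual = y.astype(float).copy()
    for _ in range(n_iter):
        # Weak learner whose column best matches the current residual
        corr = Phi.T @ residual
        j = np.argmax(np.abs(corr))
        step = eps * np.sign(corr[j])
        beta[j] += step
        residual -= step * Phi[:, j]     # keep residual = y - Phi @ beta
    return beta
```

Stopping after fewer iterations corresponds to a larger effective L1 penalty; running longer relaxes it.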

  16. Algorithm • Overall multi-task boosting algorithm (a rough sketch follows)
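A rough sketch of how such a joint boosting loop could look: each task's model is the sum of a shared function and a task-specific one (F_t = F_0 + f_t), and every iteration takes one ε-sized step with whichever candidate tree (shared or per-task) helps the pooled objective most. This is my hedged reading of the slide, not the paper's exact algorithm; scikit-learn trees and squared loss are assumptions.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

def multitask_eps_boosting(Xs, ys, eps=0.1, n_iter=200, depth=3):
    """Sketch of multi-task boosting with F_t = F_0 + f_t.
    Xs, ys: lists of per-task feature matrices and targets."""
    T = len(Xs)
    preds = [np.zeros(len(y), dtype=float) for y in ys]
    shared_trees, task_trees = [], [[] for _ in range(T)]
    for _ in range(n_iter):
        residuals = [y - p for y, p in zip(ys, preds)]
        cands = []
        # Candidates 1..T: a tree fit on one task's residuals (affects that task)
        for t in range(T):
            h = DecisionTreeRegressor(max_depth=depth).fit(Xs[t], residuals[t])
            gain = np.sum(h.predict(Xs[t]) * residuals[t])
            cands.append(("task", t, h, gain))
        # Candidate 0: a shared tree fit on pooled residuals (affects all tasks)
        h0 = DecisionTreeRegressor(max_depth=depth).fit(
            np.vstack(Xs), np.concatenate(residuals))
        gain0 = sum(np.sum(h0.predict(X) * r) for X, r in zip(Xs, residuals))
        cands.append(("shared", None, h0, gain0))
        # Greedy step: the candidate most correlated with the residuals
        kind, t, h, _ = max(cands, key=lambda c: c[3])
        if kind == "shared":
            shared_trees.append(h)
            for s in range(T):
                preds[s] += eps * h.predict(Xs[s])
        else:
            task_trees[t].append(h)
            preds[t] += eps * h.predict(Xs[t])
    return shared_trees, task_trees
```

The shared trees absorb structure common to all tasks (countries), while the per-task lists capture domain-specific corrections; ε-boosting keeps both parts sparse.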

  17. Evaluation • Datasets

  18. Evaluation

  19. Evaluation

  20. Evaluation

  21. Evaluation

  22. Evaluation

  23. Overall review and discussion of new research points • Contributions: • Proposes a novel multi-task learning method based on gradient boosted decision trees, useful for web-search ranking applications (e.g., personalized search) • Provides a thorough evaluation on web-scale datasets • New research points: • Negative transfer • Effective grouping: flexible domain adaptation

  24. Q&A Thanks!
