
Web Query Disambiguation from Short Sessions


Presentation Transcript


  1. Web Query Disambiguation from Short Sessions
  Lilyana Mihalkova* and Raymond Mooney, University of Texas at Austin
  *Now at University of Maryland, College Park

  2. Web Query Disambiguation
  [Slide shows the ambiguous query "scrubs" followed by a question mark]

  3. Existing Approaches
  • Well-studied problem [e.g., Sugiyama et al. 04; Sun et al. 05; Dou et al. 07]
  • Build a user profile from a long history of that user's interactions with the search engine

  4. Concerns
  • Privacy concerns
    • AOL Data Release
    • "Googling Considered Harmful" [Conti 06]
  • Pragmatic concerns
    • Storing and protecting data
    • Identifying users across searches

  5. Proposed Setting
  • Base personalization only on short glimpses of user search activity captured in brief short-term sessions
  • Do not assume across-session user identifiers of any sort

  6. How Short is Short-Term?
  [Histogram: x-axis = number of queries before the ambiguous query; y-axis = number of sessions with that many queries]

  7. Is This Enough Info?
  [Diagram: two example sessions ending in the ambiguous query "scrubs": one containing "98.7 fm" → www.star987.com and "kroq" → www.kroq.com, the other containing "huntsville hospital" → www.huntsvillehospital.com and "ebay.com" → www.ebay.com; the candidate results are scrubs-tv.com and scrubs.com]

  8. More Closely Related Work
  • [Almeida & Almeida 04]: a similar assumption of short sessions, but better suited to a specialized search engine (e.g., one for computer science literature)

  9. Main Challenge
  • How to harness this small amount of potentially noisy information available for each user?
  • Exploit the relations among sessions, URLs, and queries

  10. The choices users make over the set of possible results are also interdependent
  • Relationships are established by shared queries or clicks that are themselves related
  • Query strings and URLs are related by sharing identical keywords, or by being identical
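These relations can be made concrete with a small sketch. Below is a hypothetical Python example (the Session structure and helper names are ours, not from the talk) testing whether two sessions are related through a shared click or a shared keyword:

```python
from dataclasses import dataclass, field

@dataclass
class Session:
    """A short session: query strings and clicked URLs (hypothetical structure)."""
    queries: set = field(default_factory=set)
    clicks: set = field(default_factory=set)

def keywords(strings):
    """Break query strings and URLs into lowercase keywords.
    A real implementation would also drop boilerplate tokens like 'www'/'com'."""
    words = set()
    for s in strings:
        words |= set(s.lower().replace(".", " ").replace("/", " ").split())
    return words

def related(a: Session, b: Session) -> bool:
    """Sessions are related by a shared click, or by a keyword shared between
    any of their queries/clicks (click-to-click, click-to-search,
    search-to-click, or search-to-search)."""
    if a.clicks & b.clicks:  # shared click
        return True
    return bool(keywords(a.queries | a.clicks) & keywords(b.queries | b.clicks))

# Example: both sessions mention the keyword "kroq".
s1 = Session(queries={"kroq"}, clicks={"www.kroq.com"})
s2 = Session(queries={"kroq playlist"})
print(related(s1, s2))  # True
```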

  11. Exploiting Relational Information
  • To overcome sparsity in individual sessions, we need techniques that can effectively combine the effects of noisy relations of varying types among entities
  • Use statistical relational learning (SRL) [Getoor & Taskar 07]
  • We used one particular SRL model: Markov logic networks (MLNs) [Richardson & Domingos 06]

  12. Rest of This Talk
  • Background on MLNs
  • Details of our model
    • Ways of relating users
    • Clauses defined in the model
  • Experimental results
  • Future work

  13. Markov Logic Networks (MLNs) [Richardson & Domingos 06]
  • A set of weighted first-order clauses
  • The larger the weight of a clause, the greater the penalty for not satisfying a grounding of that clause
  • Clauses can be viewed as relational features

  14. MLNs Continued
  • Q: set of unknown query atoms
  • E: set of evidence atoms
  • According to an MLN,

      P(Q = q | E = e) = (1 / Z_e) exp( Σ_i w_i n_i(q, e) )

    where the sum ranges over all formulas i, n_i(q, e) is the number of satisfied groundings of formula i, and Z_e is the normalizing constant.
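A minimal sketch of this log-linear computation in Python, assuming the satisfied-grounding counts n_i have already been computed for each candidate truth assignment (the weights, counts, and function name here are illustrative, not from the talk):

```python
import math

def mln_conditional(weights, counts_per_assignment):
    """P(Q = q | E = e) for each candidate assignment q.

    weights: w_i for each formula i.
    counts_per_assignment: {q: [n_i(q, e) for each formula i]}.
    """
    scores = {q: math.exp(sum(w * n for w, n in zip(weights, counts)))
              for q, counts in counts_per_assignment.items()}
    z = sum(scores.values())  # normalizing constant Z_e
    return {q: s / z for q, s in scores.items()}

# Illustrative weights and grounding counts for two candidate clicks:
probs = mln_conditional(
    weights=[0.8, 0.2, -0.5],
    counts_per_assignment={"scrubs-tv.com": [3, 1, 0], "scrubs.com": [1, 2, 1]},
)
print(probs)  # scrubs-tv.com gets the higher probability here
```

Re-ranking (slide 16) then amounts to sorting the candidate results by these probabilities.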

  15. MLN Learning and Inference
  • A wide range of algorithms is available in the Alchemy software package [Kok et al. 05]
  • Weight learning: we used contrastive divergence [Lowd & Domingos 07]
    • Adapted it for streaming data
  • Inference: we used MC-SAT [Poon & Domingos 06]
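The contrastive-divergence update nudges each weight toward the observed data; a hedged sketch of the core rule (the step size and the streaming adaptation are simplified here; Alchemy's actual implementation differs in detail):

```python
def cd_step(weights, observed_counts, sampled_counts, lr=0.01):
    """One contrastive-divergence update: raise w_i when formula i is
    satisfied more often in the data than in samples drawn from the
    current model (e.g., via a few steps of MC-SAT)."""
    return [w + lr * (obs - samp)
            for w, obs, samp in zip(weights, observed_counts, sampled_counts)]
```

For streaming data, such updates can be applied per session as it arrives, rather than in passes over a fixed training set.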

  16. Re-Ranking of Search Results
  [Diagram: an MLN re-ranks the search results, changing their scores and order]
  • Hand-code a set of formulas to capture regularities in the domain
    • Challenge: define an effective set of relations
  • Learn weights over these formulas from the data
    • Challenge: noisy data

  17. Specific Relationships
  [Diagram: sessions connected through shared queries and clicks, involving queries such as "huntsville hospital", "huntsville school", "ebay", and "scrubs", and clicks such as huntsvillehospital.org, hospitallink.com, ebay.com, scrubs.com, and scrubs-tv.com]

  18. Collaborative Clauses
  • The user will click on a result chosen by sessions related by:
    • Shared click
    • Shared keyword (click-to-click, click-to-search, search-to-click, or search-to-search)
  [Diagram: pairs of sessions related by both having clicked www.clickedResult.com, or by a keyword shared between a search and a click, before the ambiguous query]

  19. Popularity Clause
  • The user will choose a result chosen by any previous session, regardless of whether it is related
  • Effect: the most popular result becomes the most likely pick

  20. Local Clauses
  • The user will choose a result that shares a keyword with a previous search or click in the current session
  • Not effective because of data sparsity
  [Diagram: candidate results for the ambiguous query compared against an earlier query and click in the same session]

  21. Balance Clause
  • If the user chooses one of the results, she will not choose another
  • Sets up a competition among possible results
  • Allows the same set of weights to work well for different-size problems
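Putting slides 18-21 together, the clause set might look roughly as follows in Alchemy-style syntax; the predicate names (ChoseResult, SharedClick, and so on) are hypothetical, chosen here for illustration, and each clause carries a learned weight:

```python
# Hypothetical Alchemy-style clauses; each gets a learned weight.
clauses = [
    # Collaborative: follow sessions related by a shared click or keyword.
    "SharedClick(s1, s2) ^ ChoseResult(s2, r) => ChoseResult(s1, r)",
    "SharedKeyword(s1, s2) ^ ChoseResult(s2, r) => ChoseResult(s1, r)",
    # Popularity: follow any previous session, related or not.
    "ChoseResult(s2, r) => ChoseResult(s1, r)",
    # Local: prefer results sharing a keyword with the current session.
    "SharesKeyword(s1, r) => ChoseResult(s1, r)",
    # Balance: choosing one result argues against choosing another.
    "ChoseResult(s1, r1) ^ (r1 != r2) => !ChoseResult(s1, r2)",
]
```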

  22. Empirical Evaluation: Data
  • Provided by MSR; collected in May 2006 on MSN Search
  • First 25 days for training, last 6 days for testing
  • Does not specify which queries are ambiguous
    • Used the DMOZ hierarchy of pages to establish ambiguity
  • Lists only actually clicked results, not all results that were seen
    • Assumed the user saw all results that were clicked at least once for that exact query string
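One plausible reading of the DMOZ criterion, sketched below; the exact rule is in the paper, so treat this as an assumption (a query counts as ambiguous if its clicked results fall under more than one top-level DMOZ category):

```python
def is_ambiguous(clicked_urls, dmoz_top_category):
    """dmoz_top_category: hypothetical map from URL to its top-level DMOZ
    category (e.g., 'Arts' for a TV-show page vs. 'Shopping' for a store)."""
    categories = {dmoz_top_category[u] for u in clicked_urls
                  if u in dmoz_top_category}
    return len(categories) > 1
```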

  23. Empirical Evaluation: Models Tested
  • Random re-ranker
  • Popularity
  • Standard collaborative filtering baselines based on the preferences of a set of the most closely similar sessions
    • Collaborative-Pearson
    • Collaborative-Cosine
  • MLNs
    • Collaborative + Popularity + Balance
    • Collaborative + Balance
    • Collaborative + Popularity + Local + Balance (in paper)
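The cosine baseline scores session similarity over shared clicks; a minimal sketch over binary click vectors (the exact feature representation is not specified on the slide, so this is an assumption):

```python
import math

def cosine_similarity(clicks_a: set, clicks_b: set) -> float:
    """Cosine similarity between two sessions represented as binary
    vectors over clicked URLs: |A ∩ B| / (sqrt(|A|) * sqrt(|B|))."""
    denom = math.sqrt(len(clicks_a)) * math.sqrt(len(clicks_b))
    return len(clicks_a & clicks_b) / denom if denom else 0.0
```

Collaborative-Pearson is the same idea with Pearson correlation in place of cosine; either way, the most similar sessions vote on the ambiguous query's result.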

  24. Empirical Evaluation: Measure
  • Area under the ROC curve (mean average true-negative rate)
  • In paper: area under the precision-recall curve, a.k.a. mean average precision

  25. AUC-ROC: Intuitive Interpretation
  • Assuming that the user scans results from the top until a relevant one is found, AUC-ROC captures what proportion of the irrelevant results are not considered by the user
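Under this interpretation, AUC-ROC for a single ranked list is the fraction of (relevant, irrelevant) pairs ordered correctly; a small sketch (the function and variable names are ours):

```python
def auc_roc(ranked, relevant):
    """Fraction of (relevant, irrelevant) pairs ordered correctly, i.e.
    the proportion of irrelevant results the user never has to scan past."""
    pos = [i for i, r in enumerate(ranked) if r in relevant]
    neg = [i for i, r in enumerate(ranked) if r not in relevant]
    if not pos or not neg:
        return 1.0
    correct = sum(1 for p in pos for n in neg if p < n)
    return correct / (len(pos) * len(neg))

# Example: the single relevant result is ranked second of four.
print(auc_roc(["a", "b", "c", "d"], relevant={"b"}))  # 2/3
```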

  26. Results: Collaborative + Popularity + Balance
  [Results chart omitted from transcript]

  27. Difficulty Levels: Average Case
  [Chart comparing the MLN with Popularity across difficulty levels; labels include KL, Proportion Clicked, and Possible Results for a given query]

  28. Difficulty Levels: Worst Case
  [Chart omitted from transcript]

  29. Future Directions
  • Incorporate more info into the models
    • How to retrieve relevant information quickly
  • Learn more nuanced models
    • Cluster shared clicks/keywords and learn separate weights for the clusters
  • More advanced measures of user relatedness
    • Incorporate similarity measures
    • Use time spent on a page as an indicator of interest

  30. Thank you • Questions?

  31. First-Order Logic
  • Relations and attributes are represented as predicates, e.g. Actor(A) and WorkedFor(A, B), where A and B are variables
  • A predicate applied to arguments is a literal
  • A literal whose variables have all been replaced by entity names is a ground literal, or "grounding" for short, e.g. WorkedFor(brando, coppola)

  32. Clauses
  • Dependencies and regularities among the predicates are represented as clauses, each with premises and a conclusion:
    Actor(A) ^ Movie(T, A) => Director(A)
  • To obtain a grounding of a clause, replace all variables with entity names:
    Actor(pacino) ^ Movie(godfather, pacino) => Director(pacino)
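Grounding is plain substitution of entity names for variables; a tiny illustrative sketch:

```python
import re

def ground(clause: str, substitution: dict) -> str:
    """Replace each whole-word variable in a clause with an entity name."""
    for var, const in substitution.items():
        clause = re.sub(rf"\b{var}\b", const, clause)
    return clause

print(ground("Actor(A) ^ Movie(T, A) => Director(A)",
             {"A": "pacino", "T": "godfather"}))
# Actor(pacino) ^ Movie(godfather, pacino) => Director(pacino)
```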
