
Retrieval and Feedback Models for Blog Feed Search (SIGIR 2008)



  1. Retrieval and Feedback Models for Blog Feed Search (SIGIR 2008)
  Advisor: Dr. Koh Jia-Ling
  Speaker: Chou-Bin Fan
  Date: 2009.10.05

  2. Outline
  • Introduction
  • Approach
  – Retrieval Models: Large Document Model, Small Document Model
  – Query Expansion Models: Pseudo-Relevance Feedback, Wikipedia Link-based Query Expansion, Wikipedia PRF Query Expansion
  • Conclusion

  3. Introduction
  • What is a feed? A blog is HTML made up of posts; its feed is XML made up of the corresponding entries:

  <feed>
    <entry>
      <author>Peter …</author>
      <title>Good, Evil…</title>
      <content>I’ve said…</content>
    </entry>
    <entry>
      <author>Peter …</author>
      <title>Agreeing…</title>
      <content>Some peo…</content>
    </entry>
    …
  </feed>
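To make the post/entry correspondence concrete, here is a minimal Python sketch that splits such a feed into its entries. The element names mirror the toy XML above and are a simplification of real Atom feeds (an assumption for illustration, not the paper's parsing code):

```python
import xml.etree.ElementTree as ET

# Toy feed mirroring the slide's example; real feeds carry namespaces,
# dates, links, and other metadata omitted here.
FEED_XML = """<feed>
  <entry>
    <author>Peter</author>
    <title>Good, Evil</title>
    <content>I have said before that ...</content>
  </entry>
  <entry>
    <author>Peter</author>
    <title>Agreeing</title>
    <content>Some people think that ...</content>
  </entry>
</feed>"""

root = ET.fromstring(FEED_XML)
# Each <entry> is one blog post: the "small documents" of the later slides;
# the whole <feed> is the "large document".
entries = [{child.tag: child.text for child in entry}
           for entry in root.findall("entry")]
print(len(entries), [e["title"] for e in entries])  # 2 ['Good, Evil', 'Agreeing']
```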

  4. Introduction
  • What is the task "feed search"? Ranking feeds (collections of entries) in response to a user's query Q.
  • A relevant feed should have a principal and recurring interest in Q.

  5. Introduction
  • Challenges in feed search:
  1. How does relevance at the entry level correspond to relevance at the feed level?
  2. Can we favor entries close to the central topic of the feed?
  3. Feeds are noisy (spam blogs, spam & off-topic comments).
  • Retrieval models: models for ranking collections that account for the topical diversity within the collection (address Challenges 1 and 2).
  • Feedback models: overcome noise in the blog collection and address multifaceted information needs (address Challenge 3).

  6. Retrieval Models
  • Challenge: ranking topically diverse collections.
  • Representation: feed F, entry E.
  • Model the topical relationship between entries.

  7. Retrieval Models – Large Document (Feed) Model
  [Figure: each feed is treated as a single XML document containing all of its entries; the query Q is run against this feed-document collection to produce ranked feeds.]

  8. Retrieval Models – Large Document (Feed) Model
  • Rank feeds with the query likelihood model: P(F|Q) ∝ P(F) · ∏q∈Q P(q|θF), where θF is the (smoothed) language model of the feed's concatenated entries.
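As a concrete illustration, a minimal sketch of Dirichlet-smoothed query likelihood over a feed's concatenated entries; μ = 2500, the +0.5 count floor, and the toy texts are illustrative choices, not the paper's tuned settings:

```python
import math
from collections import Counter

def log_query_likelihood(query, doc, coll_tf, coll_len, mu=2500.0):
    """log P(Q|D) under a Dirichlet-smoothed unigram language model."""
    tf, dlen = Counter(doc), len(doc)
    score = 0.0
    for q in query:
        p_coll = (coll_tf.get(q, 0) + 0.5) / coll_len  # background P(q|C), floored
        score += math.log((tf[q] + mu * p_coll) / (dlen + mu))
    return score

# Large Document model: one "document" = all of a feed's entries concatenated.
feed_doc = ["solar", "panels", "cost", "solar", "subsidies", "policy"]
coll_tf = Counter(feed_doc)  # stand-in for whole-collection term statistics
print(log_query_likelihood(["solar", "policy"], feed_doc,
                           coll_tf, sum(coll_tf.values())))
```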

  9. Retrieval Models – Large Document (Feed) Model
  • Advantages: a straightforward application of existing retrieval techniques.
  • Potential pitfalls: large entries dominate a feed's language model; the model ignores relationships among entries.

  10. Retrieval Models – Small Document (Entry) Model
  [Figure: the feeds are split into an entry-document collection; entries are ranked by P(Q|E), and a rank aggregation function combines the ranked entries into ranked feeds.]

  11. Retrieval Models – Small Document (Entry) Model
  • ReDDE, a federated resource ranking algorithm, scores a document collection Cj by the estimated number of relevant documents in that collection. |Cj| is an estimate of the total number of documents in collection Cj.
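A rough sketch of the ReDDE idea under its usual formulation: run the query over a centralized index of documents sampled from each collection, treat the top-ranked sample documents as relevant, and let each one stand in for |Cj| / |samplej| documents of its source collection (the cutoff n_top and the toy data are assumptions):

```python
def redde_scores(central_ranking, coll_size, sample_size, n_top=100):
    """Estimate the number of relevant documents in each collection.

    central_ranking: (doc_id, source_collection) pairs, ordered by a
    retrieval run over a centralized index of sampled documents.
    """
    scores = {c: 0.0 for c in coll_size}
    for doc_id, coll in central_ranking[:n_top]:  # top docs assumed relevant
        scores[coll] += coll_size[coll] / sample_size[coll]
    return scores

ranking = [("d1", "feedA"), ("d2", "feedB"), ("d3", "feedA"), ("d4", "feedB")]
print(redde_scores(ranking, coll_size={"feedA": 50, "feedB": 200},
                   sample_size={"feedA": 10, "feedB": 20}, n_top=3))
# {'feedA': 10.0, 'feedB': 10.0}
```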

  12. Retrieval Models – Small Document (Entry) Model
  • Query likelihood at the entry level: P(Q|E) = ∏q∈Q P(q|θE), the probability of the query under the entry's (smoothed) language model.

  13. Retrieval Models – Small Document (Entry) Model
  • Entry centrality P(E|F) serves two purposes:
  1. To balance scoring across feeds with differing numbers of entries. Without this balancing, the summation in the small document model (Equation 2) would favor longer feeds.
  2. To favor some entries over others based on their similarity to the feed's language model. In general, any measure of similarity could be used here.
  • Geometric mean: φGM(E) = (∏w∈E P(w|θF))^(1/|E|), the geometric mean of the feed model's probabilities of the entry's terms.
  • Uniform: φUNIF(E) = 1.

  14. Retrieval Models – Small Document (Entry) Model
  • Advantages: controls for differing entry lengths; models the topical relationship among entries.
  • Disadvantage: the centrality computation is slower.
  • Feed prior: PLOG(F) ∝ log(NF) grows logarithmically with the number of entries NF; PUNIF(F) ∝ 1.0 does not influence the document ranking at all.
  • The uniform variants not only improve speed but also performance (see the sketch below).
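Putting slides 10-14 together, a sketch of the small document score P(F) · Σ_E P(Q|E) P(E|F) with the geometric-mean centrality and the log feed prior. Here entry_ql stands for any entry-level log query likelihood (e.g. the Dirichlet-smoothed one sketched earlier), and the 1e-9 probability floor is an illustrative guard, not from the paper:

```python
import math

def gm_centrality(entry, feed_lm):
    """phi_GM(E): geometric mean of the feed LM's probabilities of E's terms."""
    logs = [math.log(feed_lm.get(w, 1e-9)) for w in entry]
    return math.exp(sum(logs) / max(len(logs), 1))

def small_doc_score(query, feed, feed_lm, entry_ql, log_prior=True):
    """score(F, Q) = P(F) * sum_E P(Q|E) P(E|F)."""
    phis = [gm_centrality(e, feed_lm) for e in feed]
    z = sum(phis) or 1.0
    p_q_f = sum(math.exp(entry_ql(query, e)) * (phi / z)  # P(Q|E) * P(E|F)
                for e, phi in zip(feed, phis))
    prior = math.log(len(feed) + 1) if log_prior else 1.0  # P_LOG vs P_UNIF
    return prior * p_q_f
```

Passing log_prior=False and replacing gm_centrality with a constant yields the PUNIF and φUNIF variants compared on the next slide.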

  15. Retrieval Models – Results
  • 45 queries from the TREC 2007 Blog Distillation Task.
  • BLOG06 test collection, XML feeds only.
  • 5-fold cross-validation for all retrieval model smoothing parameters.
  • Statistical significance at the 0.05 level is indicated by † for improvement over φGM, + for improvement over PLOG, and ∗ for improvement over the best LD model.

  16. Feedback Models
  • Challenge: a noisy collection with general & ongoing information needs.
  • Use a cleaner external collection (Wikipedia) for query expansion, with an expansion technique designed to identify multiple query facets.
  • Pseudo-Relevance Feedback (PRF) [Lavrenko & Croft, 2001]
  • Wikipedia PRF Query Expansion [Diaz & Metzler, 2006]
  • Wikipedia Link-based Query Expansion

  17. Feedback Models – Pseudo-Relevance Feedback (PRF)
  [Figure: the query Q is run against the BLOG06 collection; related terms are extracted from the top K documents; the expanded query (Q + terms) is run against BLOG06 again.]

  18. Feedback Models – Pseudo-Relevance Feedback (PRF)
  • PRF assumes the top N retrieved documents, DN, are relevant to the base query and extracts highly discriminative terms or phrases from those documents as query expansion terms (see the sketch below).
  • PRF.FEED: PRF where DN are the top N feeds.
  • PRF.ENTRY: PRF where DN are the top N feed entries.
  • PRF.WIKI: PRF where DN are the top N Wikipedia articles when the base query is run against Wikipedia.
  • PRF.WIKI.P: PRF where DN are the top N Wikipedia passages (sequences of at most 220 words, the average entry length in the BLOG06 corpus) when the query is run against Wikipedia.
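A simplified sketch of the term-extraction step, in the spirit of relevance models [Lavrenko & Croft, 2001]: each candidate term is weighted by its relative frequency in the top N documents, scaled by each document's retrieval score. This estimator and the stopword handling are common simplifications, not necessarily the paper's exact procedure:

```python
from collections import Counter, defaultdict

def prf_terms(top_docs, doc_scores, stopwords=frozenset(), k=10):
    """Weight each term by sum_D P(w|D) * score(D) over the top N documents."""
    weights = defaultdict(float)
    for doc, score in zip(top_docs, doc_scores):
        tf, dlen = Counter(doc), max(len(doc), 1)
        for term, count in tf.items():
            if term not in stopwords:
                weights[term] += score * count / dlen  # score-weighted P(w|D)
    return sorted(weights, key=weights.get, reverse=True)[:k]

docs = [["photography", "camera", "film"],
        ["photography", "nude", "free", "teen"]]
print(prf_terms(docs, doc_scores=[0.9, 0.7], stopwords={"free"}, k=3))
```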

  19. Feedback Models – Pseudo-Relevance Feedback (PRF)
  • RF.TTR: relevance feedback (RF) where expansion terms originate from the top 10 relevant feeds (TTR = "top 10 relevant").
  • RF.RTT: RF where expansion terms originate from the relevant feeds within the top 10 (RTT = "relevant in top 10").
  • NO_EXP (baseline): no query expansion.

  20. Feedback Models – Pseudo-Relevance Feedback (PRF)
  • Example: given the query "Photography":
  Ideal: digital photography, depth of field, photographic film, photojournalism, cinematography
  PRF: photography, nude, erotic, art, girl, free, teen, fashion, women

  21. Feedback Models – Wikipedia PRF Query Expansion
  [Figure: the query Q is run against Wikipedia; related terms are extracted from the top K Wikipedia documents; the expanded query (Q + terms) is run against the BLOG06 collection.]

  22. Feedback Models – Wikipedia PRF Query Expansion
  • Example: given the query "Photography":
  Ideal: digital photography, depth of field, photographic film, photojournalism, cinematography
  PRF: photography, nude, erotic, art, girl, free, teen, fashion, women
  Wikipedia PRF: photography, director, special, film, art, camera, music, cinematographer, photographic

  23. Feedback Models – Wikipedia Link-based Query Expansion
  [Figure: the query Q is run against Wikipedia; related terms are extracted from Wikipedia's link structure; the expanded query (Q + terms) is run against the BLOG06 collection.]

  24. Feedback Models – Wikipedia Link-based Query Expansion
  • Example: given the query "Photography":
  Ideal: digital photography, depth of field, photographic film, photojournalism, cinematography
  PRF: photography, nude, erotic, art, girl, free, teen, fashion, women
  Wikipedia Link-Based: photography, photographer, digital photography, photographic, depth of field, feature photography, film, photographic film, photojournalism

  25. Feedback Models – Wikipedia Link-based Query Expansion
  • Running the query against Wikipedia yields two sets: the relevant set SR and the working set SW, containing the articles ranked within the top R and top W retrieved results, respectively.
  • Each anchor phrase ai occurring in an article in SW and linking to an article in SR is then scored; anchors whose links point into highly ranked SR articles score highest (see the sketch below).
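A sketch of one plausible reading of this scoring; the rank-based weight R − rank(t) is an assumption made for illustration, not quoted from the slide. Each anchor phrase is credited once per SW article that uses it to link into SR, with links into higher-ranked relevant articles counting more:

```python
from collections import defaultdict

def score_anchors(links, s_w, sr_rank, r):
    """links: (source_article, anchor_phrase, target_article) triples.
    s_w: working-set articles; sr_rank: 1-based rank of each SR article."""
    scores = defaultdict(float)
    for source, anchor, target in links:
        if source in s_w and target in sr_rank:
            # ASSUMED weighting: links into higher-ranked SR articles count more.
            scores[anchor] += r - sr_rank[target]
    return sorted(scores, key=scores.get, reverse=True)

links = [("List_of_cameras", "digital photography", "Digital_photography"),
         ("Movie_making", "photographic film", "Photographic_film"),
         ("Movie_making", "digital photography", "Digital_photography")]
print(score_anchors(links, s_w={"List_of_cameras", "Movie_making"},
                    sr_rank={"Digital_photography": 1, "Photographic_film": 4},
                    r=10))  # ['digital photography', 'photographic film']
```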

  26. Feedback Models – Results
  • On the BD07 tests, significance at the 0.05 level over the NO_EXP and PRF.WIKI models is indicated by ∗ and †, respectively.

  27. Feedback Models – Results (continued)

  28. Conclusion
  • Feed search challenges: feeds are topically diverse, noisy collections, ranked against ongoing & general information needs.
  • Novel retrieval models: rank collections and are sensitive to the topical relationship among entries.
  • Novel feedback models: discover multiple query facets and are robust to collection noise.
