
Dictionary search

This presentation explores the use of Bloom filters to efficiently keep track of the URLs visited by a web crawler. The focus is on fast membership checks over long URLs, where small one-sided errors are tolerable. The problem of false positives is addressed, and an explicit formula for the optimal parameters is given. The presentation then moves on to document ranking and the shortcomings of distance-based similarity measures.


Presentation Transcript


  1. Dictionary search Making one-sided errors Paper on Bloom Filters

  2. Crawling How to keep track of the URLs visited by a crawler? • URLs are long • The check should be very fast • Small errors are acceptable (≈ a page not crawled) Bloom Filter over crawled URLs

  3. Searching with errors...

  4. Problem: false positives

  5. False-positive probability: after inserting n keys into m bits with k hash functions, a given bit is still 0 with probability ≈ e^(-kn/m), so the probability of a false positive is ≈ (1 - e^(-kn/m))^k

  6. Not perfectly true but...

  7. Optimal number of hash functions: k_opt = (m/n)·ln 2 ≈ 5.545 for m/n = 8. We do have an explicit formula for the optimal k.
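
The slides contain no code; as a minimal illustration of the idea in slides 2-7, here is a sketch of a Bloom filter for crawled URLs in Python. The double-hashing scheme, the class name and the m/n = 8 default are assumptions made for this example, not part of the paper.

```python
import math
import hashlib

class BloomFilter:
    """Bit array of m bits with k hash functions; one-sided errors only."""
    def __init__(self, n_expected, bits_per_key=8):
        self.m = n_expected * bits_per_key                   # m/n = 8 as in the slide
        self.k = max(1, round(bits_per_key * math.log(2)))   # optimal k = (m/n) ln 2 ≈ 5.545 -> 6
        self.bits = bytearray((self.m + 7) // 8)

    def _positions(self, key):
        # Double hashing: derive k bit positions from two base hashes.
        digest = hashlib.sha256(key.encode()).digest()
        h1 = int.from_bytes(digest[:8], "big")
        h2 = int.from_bytes(digest[8:16], "big") | 1
        return [(h1 + i * h2) % self.m for i in range(self.k)]

    def add(self, key):
        for p in self._positions(key):
            self.bits[p // 8] |= 1 << (p % 8)

    def __contains__(self, key):
        return all(self.bits[p // 8] & (1 << (p % 8)) for p in self._positions(key))

# Crawler-style usage: never misses a visited URL, may rarely report a fresh URL as seen.
seen = BloomFilter(n_expected=1_000_000)
seen.add("http://example.com/page")
print("http://example.com/page" in seen)   # True
print("http://example.com/other" in seen)  # almost surely False
```

With m/n = 8 the optimal k rounds to 6 hash functions, and the false-positive rate is roughly (1 - e^(-kn/m))^k ≈ 2%.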

  8. Document ranking Paolo Ferragina Dipartimento di Informatica Università di Pisa

  9. The big fight: find the best ranking...

  10. Ranking: Google vs Google.cn

  11. Document ranking Text-based Ranking (1st generation) Reading 6.2 and 6.3

  12. Similarity between binary vectors • Documents are binary vectors X, Y in {0,1}^D • Score: overlap measure |X ∩ Y| What’s wrong?

  13. Normalization • Dice coefficient (wrt avg #terms): 2|X ∩ Y| / (|X| + |Y|) -> NOT triangular (the triangle inequality fails) • Jaccard coefficient (wrt possible terms): |X ∩ Y| / |X ∪ Y| -> OK, triangular (1 - Jaccard is a metric)
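
A short sketch of the two normalizations on binary term sets; the toy documents are made up for illustration.

```python
def dice(x, y):
    """Dice coefficient: 2|X∩Y| / (|X| + |Y|) -- not a metric (triangle inequality fails)."""
    return 2 * len(x & y) / (len(x) + len(y))

def jaccard(x, y):
    """Jaccard coefficient: |X∩Y| / |X∪Y| -- 1 - Jaccard is a proper metric."""
    return len(x & y) / len(x | y)

doc1 = {"the", "cat", "sat", "on", "mat"}
doc2 = {"the", "cat", "ate", "rat"}
print(dice(doc1, doc2))     # 2*2 / (5+4) ≈ 0.44
print(jaccard(doc1, doc2))  # 2 / 7 ≈ 0.29
```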

  14. What’s wrong in doc-similarity? Overlap matching doesn’t consider: • Term frequency in a document: a doc that talks more of t should weight t more • Term scarcity in the collection: of is far commoner than baby bed, so rare terms should weigh more • Length of documents: scores should be normalized by document length

  15. A famous “weight”: tf-idf. tf_{t,d} = number of occurrences of term t in doc d. idf_t = log(n / n_t), where n_t = #docs containing term t and n = #docs in the indexed collection. tf-idf_{t,d} = tf_{t,d} · idf_t. Vector Space model
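
A compact sketch of the tf-idf weight exactly as defined above, on a toy three-document collection; the documents and the base-10 logarithm are illustrative choices.

```python
import math
from collections import Counter

docs = {                          # toy collection, purely illustrative
    "d1": "the cat sat on the mat".split(),
    "d2": "the dog ate the bone".split(),
    "d3": "the cat and dog play".split(),
}
n = len(docs)                                                    # #docs in the indexed collection
df = Counter(t for words in docs.values() for t in set(words))   # n_t for each term t

def tf_idf(term, doc_id):
    tf = docs[doc_id].count(term)        # occurrences of term t in doc d
    idf = math.log10(n / df[term])       # idf_t = log(n / n_t)
    return tf * idf

print(tf_idf("cat", "d1"))   # 1 * log10(3/2): a somewhat rare term gets weight
print(tf_idf("the", "d1"))   # term in every doc -> idf = log10(3/3) = 0
```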

  16. Sec. 6.3 Why distance is a bad idea

  17. Postulate: Documents that are “close together” in the vector space talk about the same things. But Euclidean distance is sensitive to vector length !! Easy to spam. The user query is a very short doc. A graphical example: docs d1, d2, d3, d4, d5 and a query plotted on term axes t1, t2, t3; cos(a) = (v · w) / (||v|| * ||w||). Sophisticated algos are needed to find the top-k docs for a query Q.

  18. Sec. 6.3 cosine(query, document) Dot product: cos(q,d) = (q · d) / (||q|| ||d||) = Σi qi di / (√(Σi qi²) · √(Σi di²)), where qi is the tf-idf weight of term i in the query and di is the tf-idf weight of term i in the document. cos(q,d) is the cosine similarity of q and d … or, equivalently, the cosine of the angle between q and d.

  19. Cosine for length-normalized vectors • For length-normalized vectors, cosine similarity is simply the dot product (or scalar product): cos(q,d) = q · d = Σi qi di, for q, d length-normalized.
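
A small sketch showing that, after length normalization, the cosine reduces to a plain dot product; the weight vectors are hypothetical.

```python
import math

def norm(v):
    return math.sqrt(sum(x * x for x in v))

def cosine(q, d):
    """Cosine of the angle between q and d (general, unnormalized case)."""
    return sum(qi * di for qi, di in zip(q, d)) / (norm(q) * norm(d))

def normalize(v):
    n = norm(v)
    return [x / n for x in v]

q = [0.0, 2.3, 1.1, 0.0]   # hypothetical tf-idf weights for the query
d = [1.2, 0.9, 0.0, 0.4]   # hypothetical tf-idf weights for a document

# For length-normalized vectors the cosine is just the dot product.
qn, dn = normalize(q), normalize(d)
dot = sum(qi * di for qi, di in zip(qn, dn))
assert abs(dot - cosine(q, d)) < 1e-9
print(cosine(q, d))
```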

  20. Cosine similarity among 3 docs How similar are the novels SaS: Sense and Sensibility, PaP: Pride and Prejudice, and WH: Wuthering Heights? Term frequencies (counts). Note: To simplify this example, we don’t do idf weighting.

  21. 3-documents example, contd. With log frequency weighting and then length normalization: cos(SaS,PaP) ≈ 0.789 × 0.832 + 0.515 × 0.555 + 0.335 × 0.0 + 0.0 × 0.0 ≈ 0.94; cos(SaS,WH) ≈ 0.79; cos(PaP,WH) ≈ 0.69
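
The arithmetic of this slide can be reproduced in a few lines. The raw counts below are the usual textbook term-frequency table (affection / jealous / gossip / wuthering) this example is built on; treat them as an assumption, since the table itself is not in the transcript.

```python
import math

# Assumed raw term counts per novel (standard textbook example).
counts = {
    "SaS": {"affection": 115, "jealous": 10, "gossip": 2, "wuthering": 0},
    "PaP": {"affection": 58,  "jealous": 7,  "gossip": 0, "wuthering": 0},
    "WH":  {"affection": 20,  "jealous": 11, "gossip": 6, "wuthering": 38},
}
terms = ["affection", "jealous", "gossip", "wuthering"]

def log_weight(tf):
    # Log frequency weighting: 1 + log10(tf) for tf > 0, else 0.
    return 1 + math.log10(tf) if tf > 0 else 0.0

def normalized_vector(doc):
    w = [log_weight(counts[doc][t]) for t in terms]
    length = math.sqrt(sum(x * x for x in w))
    return [x / length for x in w]

def cos(a, b):
    va, vb = normalized_vector(a), normalized_vector(b)
    return sum(x * y for x, y in zip(va, vb))

print(round(cos("SaS", "PaP"), 2))  # ≈ 0.94
print(round(cos("SaS", "WH"), 2))   # ≈ 0.79
print(round(cos("PaP", "WH"), 2))   # ≈ 0.69
```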

  22. Sec. 7.1.2 Storage • For every term, we store the IDF in memory in terms of n_t, which is actually the length of its posting list (so it is needed anyway). • For every docID d in the posting list of term t, we store its frequency tf_{t,d}, which is typically small and thus stored with unary/gamma codes.
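
A minimal sketch of unary and Elias-gamma codes, one way to store the small tf_{t,d} values mentioned above; the exact encoding convention is an assumption, not a prescribed format.

```python
def unary(n):
    """n >= 1 encoded as n-1 ones followed by a terminating zero."""
    return "1" * (n - 1) + "0"

def gamma(n):
    """Elias gamma: unary code of the binary length, then the offset bits."""
    binary = bin(n)[2:]          # e.g. 9 -> '1001'
    offset = binary[1:]          # drop the leading 1 -> '001'
    return unary(len(binary)) + offset

for tf in (1, 2, 5, 9):
    print(tf, unary(tf), gamma(tf))
# e.g. tf = 9 -> unary '111111110', gamma '1110' + '001' = '1110001'
```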

  23. Vector spaces and other operators • Vector space OK for bag-of-words queries • Clean metaphor for similar-document queries • Not a good combination with operators: Boolean, wild-card, positional, proximity • First generation of search engines • Invented before “spamming” web search

  24. Document ranking Top-k retrieval Reading 7

  25. Sec. 7.1.1 Speed-up top-k retrieval • The costly step is the computation of the cosine scores • Find a set A of contenders, with K < |A| << N • Set A does not necessarily contain the top K, but has many docs from among the top K • Return the top K docs in A, according to the score • The same approach is also used for other (non-cosine) scoring functions • We will look at several schemes following this approach

  26. Sec. 7.1.2 Possible Approaches • Consider docs containing at least one query term. Hence this means… • Take this further: • Only consider high-idf query terms • Champion lists: top scores • Only consider docs containing many query terms • Fancy hits: sophisticated ranking functions • Clustering

  27. Sec. 7.1.2 Approach #1: High-idf query terms only • For a query such as catcher in the rye • Only accumulate scores from catcher and rye • Intuition: in and the contribute little to the scores and so don’t alter rank-ordering much • Benefit: postings of low-idf terms have many docs, and these (many) docs get eliminated from the set A of contenders

  28. Approach #2: Champion Lists • Preprocess: assign to each term its m best documents • Search: • If |Q| = q terms, merge their preferred lists (at most m·q answers) • Compute the cosine between Q and these docs, and choose the top k. Empirically, m must be picked larger than k to work well. Nowadays search engines use tf-idf PLUS PageRank (PLUS other weights)
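
A sketch of the champion-list search phase with made-up postings; the precomputed per-term weights and the toy scoring function stand in for the real tf-idf/cosine machinery.

```python
import heapq

# Assumed precomputed data: for each term, its m best docs by weight (docID, weight), m = 3.
champion_lists = {
    "catcher": [(7, 2.1), (3, 1.9), (12, 1.5)],
    "rye":     [(3, 2.4), (9, 1.8), (7, 1.2)],
}

def top_k(query_terms, k, score):
    # Merge the preferred lists of the q query terms (at most m*q candidate docs) ...
    candidates = {doc for t in query_terms for doc, _ in champion_lists.get(t, [])}
    # ... then compute the full score only on those candidates and keep the top k.
    return heapq.nlargest(k, ((score(d, query_terms), d) for d in candidates))

# Toy scoring function standing in for cosine(query, doc).
weights = {t: dict(postings) for t, postings in champion_lists.items()}
score = lambda d, q: sum(weights[t].get(d, 0.0) for t in q)

print(top_k(["catcher", "rye"], k=2, score=score))   # [(4.3, 3), (3.3, 7)]
```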

  29. Approach #3: Docs containing many query terms • For multi-term queries, compute scores for docs containing several of the query terms • Say, at least 3 out of 4 • Imposes a “soft conjunction” on queries seen on web search engines (early Google) • Easy to implement in postings traversal

  30. Sec. 7.1.2 Example posting lists: Antony: 3 4 8 16 32 64 128; Brutus: 2 4 8 16 32 64 128; Caesar: 1 2 3 5 8 13 21 34; Calpurnia: 13 16 32. Requiring at least 3 of the 4 query terms, scores are only computed for docs 8, 16 and 32.
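
A sketch of the “at least 3 of 4 query terms” filter over the posting lists above; only the surviving docs would then receive a full score.

```python
from collections import Counter

postings = {
    "Antony":    [3, 4, 8, 16, 32, 64, 128],
    "Brutus":    [2, 4, 8, 16, 32, 64, 128],
    "Caesar":    [1, 2, 3, 5, 8, 13, 21, 34],
    "Calpurnia": [13, 16, 32],
}

def soft_conjunction(query_terms, min_match):
    # Count, for each doc, how many of the query terms it contains.
    hits = Counter(doc for t in query_terms for doc in postings[t])
    return sorted(doc for doc, c in hits.items() if c >= min_match)

# Only these docs would get a full cosine score.
print(soft_conjunction(["Antony", "Brutus", "Caesar", "Calpurnia"], min_match=3))
# -> [8, 16, 32]
```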

  31. Sec. 7.1.4 Complex scores • Consider a simple total score combining cosine relevance and authority • net-score(q,d) = PR(d) + cosine(q,d) • Can use some other linear combination than an equal weighting • Now we seek the top K docs by net score

  32. Approach #4: Fancy-hits heuristic • Preprocess: • Assign docIDs by decreasing PR weight • Define FH(t) = the m docs for t with the highest tf-idf weight • Define IL(t) = the rest (i.e. increasing docID = decreasing PR weight) • Idea: a document that scores high should be in FH or in the front of IL • Search for a t-term query: • First FH: take the docs common to all their FH lists, compute the score of these docs and keep the top-k docs. • Then IL: scan the ILs and check the common docs • Compute the score and possibly insert them into the top-k. • Stop when M docs have been checked or the PR score becomes smaller than some threshold.
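
A rough sketch of the fancy-hits search phase under the assumptions above (docIDs assigned by decreasing PageRank, FH(t) and IL(t) precomputed); the function names, the set-based representation and the stopping rule are illustrative.

```python
import heapq

def fancy_hits_search(query_terms, FH, IL, score, k, max_checked=1000):
    """FH[t]: set of the m docs for t with highest tf-idf weight; IL[t]: the remaining
    docIDs in increasing docID order (= decreasing PageRank); score(d): full score."""
    # Phase 1 (FH): score the docs common to all fancy-hit lists, keep the top k.
    common_fh = set.intersection(*(FH[t] for t in query_terms))
    top = heapq.nlargest(k, ((score(d), d) for d in common_fh))

    # Phase 2 (IL): scan the docs common to the inverted lists, best PageRank first.
    common_il = sorted(set.intersection(*(set(IL[t]) for t in query_terms)))
    for checked, doc in enumerate(common_il, start=1):
        top = heapq.nlargest(k, top + [(score(doc), doc)])
        if checked >= max_checked:   # or stop when PageRank falls below a threshold
            break
    return [doc for _, doc in top]
```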

  33. Sec. 7.1.6 Approach #5: Clustering (figure: a query, cluster leaders and their followers)

  34. Sec. 7.1.6 Cluster pruning: preprocessing • Pick √N docs at random: call these leaders • For every other doc, pre-compute its nearest leader • Docs attached to a leader: its followers • Likely: each leader has ~ √N followers.

  35. Sec. 7.1.6 Cluster pruning: query processing • Process a query as follows: • Given query Q, find its nearest leader L. • Seek K nearest docs from among L’s followers.
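
A sketch of cluster-pruning preprocessing and query processing with √N random leaders; the cosine helper stands in for whatever document similarity is actually used.

```python
import math
import random

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v)))

def cluster_pruning_index(docs, similarity=cosine):
    """docs: dict docID -> vector. Pick sqrt(N) random leaders, attach each doc to its nearest leader."""
    n = len(docs)
    leaders = random.sample(list(docs), max(1, int(math.sqrt(n))))
    followers = {L: [] for L in leaders}
    for d in docs:
        nearest = max(leaders, key=lambda L: similarity(docs[d], docs[L]))
        followers[nearest].append(d)
    return leaders, followers

def cluster_pruning_query(q, docs, leaders, followers, k, similarity=cosine):
    # Find the nearest leader, then the k nearest docs among its followers only.
    L = max(leaders, key=lambda l: similarity(q, docs[l]))
    return sorted(followers[L], key=lambda d: similarity(q, docs[d]), reverse=True)[:k]
```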

  36. Sec. 7.1.6 Why use random sampling • Fast • Leaders reflect data distribution

  37. Sec. 7.1.6 General variants • Have each follower attached to b1=3 (say) nearest leaders. • From query, find b2=4 (say) nearest leaders and their followers. • Can recur on leader/follower construction.

  38. Document ranking Relevance feedback Reading 9

  39. Sec. 9.1 Relevance Feedback • Relevance feedback: user feedback on relevance of docs in initial set of results • User issues a (short, simple) query • The user marks some results as relevant or non-relevant. • The system computes a better representation of the information need based on feedback. • Relevance feedback can go through one or more iterations.

  40. Sec. 9.1.1 Rocchio (SMART) • Used in practice: q_m = α·q_0 + β·(1/|D_r|)·Σ_{d_j in D_r} d_j - γ·(1/|D_nr|)·Σ_{d_j in D_nr} d_j • D_r = set of known relevant doc vectors • D_nr = set of known irrelevant doc vectors • q_m = modified query vector; q_0 = original query vector; α, β, γ: weights (hand-chosen or set empirically) • The new query moves toward relevant documents and away from irrelevant documents
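
A sketch of the Rocchio update under the SMART formula above; vectors are plain Python lists and the α, β, γ values are illustrative defaults.

```python
def rocchio(q0, relevant, non_relevant, alpha=1.0, beta=0.75, gamma=0.15):
    """q_m = alpha*q0 + beta*centroid(Dr) - gamma*centroid(Dnr); negative weights clipped to 0."""
    def centroid(vectors):
        if not vectors:
            return [0.0] * len(q0)
        return [sum(v[i] for v in vectors) / len(vectors) for i in range(len(q0))]

    cr, cnr = centroid(relevant), centroid(non_relevant)
    qm = [alpha * q0[i] + beta * cr[i] - gamma * cnr[i] for i in range(len(q0))]
    return [max(0.0, w) for w in qm]   # common convention: drop negative term weights

# Toy 4-term vocabulary: the modified query drifts toward the relevant docs.
q0  = [1.0, 0.0, 0.0, 0.0]
Dr  = [[0.8, 0.6, 0.0, 0.0], [0.9, 0.4, 0.1, 0.0]]   # known relevant doc vectors
Dnr = [[0.0, 0.0, 0.7, 0.7]]                          # known irrelevant doc vectors
print(rocchio(q0, Dr, Dnr))
```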

  41. Relevance Feedback: Problems • Users are often reluctant to provide explicit feedback • It’s often harder to understand why a particular document was retrieved after applying relevance feedback • There is no clear evidence that relevance feedback is the “best use” of the user’s time.

  42. Sec. 9.1.6 Pseudo relevance feedback • Pseudo-relevance feedback automates the “manual” part of true relevance feedback. • Retrieve a list of hits for the user’s query • Assume that the top k are relevant. • Do relevance feedback (e.g., Rocchio) • Works very well on average • But can go horribly wrong for some queries. • Several iterations can cause query drift.

  43. Sec. 9.2.2 Query Expansion • In relevance feedback, users give additional input (relevant/non-relevant) on documents, which is used to reweight terms in the documents • In query expansion, users give additional input (good/bad search term) on words or phrases

  44. Sec. 9.2.2 How to augment the user query? • Manual thesaurus (costly to generate) • E.g. MedLine: physician, syn: doc, doctor, MD • Global Analysis (static; all docs in collection) • Automatically derived thesaurus (co-occurrence statistics) • Refinements based on query-log mining • Common on the web • Local Analysis (dynamic) • Analysis of documents in the result set

  45. Query assist Would you expect such a feature to increase the query volume at a search engine?

  46. Quality of a search engine Paolo Ferragina Dipartimento di Informatica Università di Pisa Reading 8
