Lecture 2. Information Retrieval and Text Mining
Recap of the previous lecture:
Basic inverted indexes:
Structure: Dictionary and Postings
Key step in construction: Sorting
Boolean query processing
Simple optimization
Linear time merging
Overview of course topics
Plan for this lecture:
Finish basic indexing
What terms do we put in the index?
Query processing – speedups
Parsing a document: given a document to index, e.g., “Friends, Romans, countrymen.”, we first have to determine:
What format is it in?
What language is it in?
What character set is in use?
Each of these is a classification problem, which we will study later in the course.
But there are complications …
Documents being indexed can include documents from many different languages
A single index may have to contain terms of several languages.
Sometimes a document or its components can contain multiple languages/formats
e.g., a French email with a Portuguese PDF attachment.
What is a unit document?
An email with a zip containing documents?
Input: “Friends, Romans and Countrymen”
Output: tokens Friends, Romans, Countrymen
Each such token is now a candidate for an index entry, after further processing
But what are valid tokens to emit?
Issues in tokenization:
Finland’s capital → Finland? Finlands? Finland’s?
Hewlett-Packard → Hewlett and Packard as two tokens?
State-of-the-art: break up hyphenated sequence.
the hold-him-back-and-drag-him-away-maneuver?
San Francisco: one token or two? How do you decide it is one token?
Numbers and dates, e.g., Mar. 12, 1991
My PGP key is 324a3df234cb23e
Generally, don’t index numbers as text.
Will often index “meta-data” separately
Creation date, format, etc.
L'ensemble: one token or two?
L ? L’ ? Le ?
Want ensemble to match with un ensemble
German noun compounds are not segmented
Lebensversicherungsgesellschaftsangestellter
‘life insurance company employee’
Chinese and Japanese have no spaces between words:
Not always guaranteed a unique tokenization (a greedy segmentation sketch follows below)
Further complicated in Japanese, with multiple alphabets intermingled
Dates/amounts in multiple formats
End-user can express query entirely in hiragana!
Arabic (or Hebrew) is basically written right to left, but with certain items like numbers written left to right
Words are separated, but letter forms within a word form complex ligatures
استقلت الجزائر في سنة 1962 بعد 132 عاما من الاحتلال الفرنسي.
(reading order alternates: the text runs right to left, while the numerals 1962 and 132 run left to right)
‘Algeria achieved its independence in 1962 after 132 years of French occupation.’
With Unicode, the surface presentation is complex, but the stored form is straightforward
Need to “normalize” terms in indexed text as well as query terms into the same form
We want to match U.S.A. and USA
We most commonly implicitly define equivalence classes of terms
e.g., by deleting periods in a term
Alternative is to do limited expansion:
Enter: window Search: window, windows
Enter: windows Search: Windows, windows
Enter: Windows Search: Windows
Potentially more powerful, but less efficient
Reduce all letters to lower case
Possible exception: retain upper case in mid-sentence?
e.g., General Motors
Fed vs. fed
SAIL vs. sail
Often best to lower case everything, since users will use lowercase regardless of ‘correct’ capitalization
Ne’er vs. never: use language-specific, handcrafted “locale” to normalize.
Most common: detect/apply language at a pre-determined granularity: doc/paragraph.
U.S.A. vs. USA – remove all periods or use locale.
Handle synonyms and homonyms
Hand-constructed equivalence classes
e.g., car = automobile
color = colour
Rewrite to form equivalence classes
Index such equivalences
When the document contains automobile, index it under car as well (usually, also vice-versa)
Or expand query?
When the query contains automobile, look under car as well
Soundex: traditional class of heuristics to expand a query into phonetic equivalents
Language specific – mainly for names
More on this later ...
Reduce inflectional/variant forms to base form
am, are, is → be
car, cars, car’s, cars’ → car
the boy’s cars are different colors → the boy car be different color
Lemmatization implies doing “proper” reduction to dictionary headword form
Reduce terms to their “roots” before indexing
“Stemming” suggests crude affix chopping
e.g., automate(s), automatic, automation all reduced to automat.
For example:
for example compressed and compression are both accepted as equivalent to compress.
is reduced, after stemming, to:
for exampl compress and compress ar both accept as equival to compress
Porter’s algorithm: the commonest algorithm for stemming English
Results suggest it is at least as good as other stemming options
Conventions + 5 phases of reductions
phases applied sequentially
each phase consists of a set of commands
sample convention: Of the rules in a compound command, select the one that applies to the longest suffix.
(m>1) EMENT →
replacement → replac
cement → cement
Other stemmers exist, e.g., Lovins stemmer http://www.comp.lancs.ac.uk/computing/research/stemming/general/lovins.htm
Single-pass, longest suffix removal (about 250 rules)
Motivated by Linguistics as well as IR
Full morphological analysis – at most modest benefits for retrieval (in English)
Do stemming and other normalizations help?
Often very mixed results: they really help recall for some queries but harm precision on others
Many of the above features embody transformations that are language-specific and often application-specific
These are “plug-in” addenda to the indexing process
Both open source and commercial plug-ins available for handling these
Accents: résumé vs. resume.
Most important criterion:
How are your users likely to write their queries for these words?
Even in languages that standardly have accents, users often may not type them
German: Tuebingen vs. Tübingen
Should be equivalent
Need to “normalize” indexed text as well as query terms into the same form
Character-level alphabet detection and conversion
Tokenization not separable from this.
7月30日 (‘July 30’) vs. 7/30
Morgen gehe ich zum MIT … (German: ‘Tomorrow I am going to MIT’; is MIT the institute or the German word mit?)
These may be grouped by language. More on this in ranking/query processing.
Walk through the two postings simultaneously, in time linear in the total number of postings entries
If the list lengths are m and n, the merge takes O(m+n)
Can we do better?
Yes, if index isn’t changing too fast.
Suppose we’ve stepped through the lists until we process 8 on each list.
When we get to 16 on the top list, we see that its successor is 32.
But the skip successor of 8 on the lower list is 31, so we can skip ahead past the intervening postings.
More skips → shorter skip spans → more likely to skip. But lots of comparisons to skip pointers.
Fewer skips → fewer pointer comparisons, but then long skip spans → few successful skips.
Simple heuristic: for postings of length L, use √L evenly-spaced skip pointers.
This ignores the distribution of query terms.
Easy if the index is relatively static; harder if L keeps changing because of updates.
This definitely used to help; with modern hardware it may not (Bahle et al. 2002)
The cost of loading a bigger postings list can outweigh the gain from quicker in-memory merging
Want to answer queries such as “stanford university” – as a phrase
Thus the sentence “I went to university at Stanford” is not a match.
The concept of phrase queries has proven easily understood by users; about 10% of web queries are phrase queries
It no longer suffices to store only <term : docs> entries
Index every consecutive pair of terms in the text as a phrase
For example, the text “Friends, Romans, Countrymen” would generate the biwords: friends romans, romans countrymen
Each of these biwords is now a dictionary term
Two-word phrase query-processing is now immediate.
Longer phrases are processed as we did with wild-cards:
stanford university palo alto can be broken into the Boolean query on biwords:
stanford university AND university palo AND palo alto
Without examining the docs themselves, we cannot verify that the docs matching the above Boolean query actually contain the phrase.
Can have false positives!
Parse the indexed text and perform part-of-speech-tagging (POST).
Bucket the terms into (say) Nouns (N) and articles/prepositions (X).
Now deem any string of terms of the form NX*N to be an extended biword.
Each such extended biword is now made a term in the dictionary.
Example: catcher in the rye is tagged N X X N
Query processing: parse it into N’s and X’s
Segment query into enhanced biwords
Look up index
False positives, as noted before
Index blowup due to bigger dictionary
For extended biword index, parsing longer queries into conjunctions:
E.g., the query tangerine trees and marmalade skies is parsed into
tangerine trees AND trees and marmalade AND marmalade skies
Not a standard solution (for all biwords)
Store, for each term, entries of the form:
<number of docs containing term;
doc1: position1, position2 … ;
doc2: position1, position2 … ;
etc.>
Can compress position values/offsets
Nevertheless, this expands postings storage substantially
For example, the postings of be might look like:
<be: 1: 7, 18, 33, 72, 86, 231; 2: 3, 149; 4: 17, 191, 291, 430, 434; 5: 363, 367, …>
Which of docs 1, 2, 4, 5 could contain “to be or not to be”?
Extract inverted index entries for each distinct term: to, be, or, not.
Merge their doc:position lists to enumerate all positions with “to be or not to be”.
e.g., be: 1: 17, 19; 4: 17, 191, 291, 430, 434; 5: 14, 19, 101; …
Same general method for proximity searches
LIMIT! /3 STATUTE /3 FEDERAL /2 TORT
Here, /k means “within k words of”.
Clearly, positional indexes can be used for such queries; biword indexes cannot.
Exercise: Adapt the linear merge of postings to handle proximity queries. Can you make it work for any value of k?
Can compress position values/offsets as we did with docs in the last lecture
Nevertheless, this expands postings storage substantially
Need an entry for each occurrence, not just once per document
Index size depends on average document size
Average web page has <1000 terms
SEC filings, books, even some epic poems … easily 100,000 terms
Consider a term with frequency 0.1%: it occurs about once in a 1,000-term document, but about 100 times in a 100,000-term document
Rules of thumb: a positional index is 2-4 times as large as a non-positional index
Positional index size is 35-50% of the volume of the original text
Caveat: all of this holds for “English-like” languages
These two approaches can be profitably combined
For particular phrases (“Michael Jackson”, “Britney Spears”) it is inefficient to keep on merging positional postings lists
Even more so for phrases like “The Who”
Williams et al. (2004) evaluate a more sophisticated mixed indexing scheme
A typical web query mixture was executed in ¼ of the time of using just a positional index
It required 26% more space than having a positional index alone
MG 3.6, 4.3; MIR 7.2
Porter’s stemmer: http://www.tartarus.org/~martin/PorterStemmer/
H.E. Williams, J. Zobel, and D. Bahle. 2004. “Fast Phrase Querying with Combined Indexes”, ACM Transactions on Information Systems.
D. Bahle, H. Williams, and J. Zobel. Efficient phrase querying with an auxiliary index. SIGIR 2002, pp. 215-221.