
Chapter 7 Text Operations

  1. Chapter 7 Text Operations Hsin-Hsi Chen, Department of Computer Science and Information Engineering, National Taiwan University

  2. Logical View of a Document [Figure: a document passes through structure recognition (text + structure → full text), then normalization of accents and spacing, elimination of stopwords, detection of noun groups, and stemming, yielding the index terms; the indexing itself may be automatic or manual.]

  3. Document Preprocessing • Lexical analysis • Elimination of stopwords • Stemming of the remaining words • Selection of index terms • Construction of term categorization structures

  4. Lexical Analysis for Automatic Indexing • Lexical analysis: convert an input stream of characters into a stream of words or tokens. • What is a word or a token? Tokens consist of letters. • digits: most numbers are not good index terms; counterexamples: case numbers in a legal database, “B6” and “B12” in a vitamin database • hyphens • break hyphenated words: state-of-the-art → state of the art • keep hyphenated words as a token: “Jean-Claude”, “F-16”

  5. Lexical Analysis for Automatic Indexing (Continued) • punctuation marks: often used as parts of terms, e.g., OS/2, 510B.C. • case: usually not significant in index terms • Issues: recall and precision • breaking up hyphenated terms increases recall but decreases precision • preserving case distinctions enhances precision but decreases recall • commercial information systems usually take the recall-enhancing approach (numbers and words containing digits are index terms, and all are case insensitive)
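The tokenization decisions above (digits, hyphens, case) can be made explicit in code. A minimal sketch, not part of the original slides; the function name and flags are illustrative:

```python
import re

def tokenize(text, keep_hyphens=True, case_fold=True):
    """Convert a character stream into index-term candidates.

    keep_hyphens=True keeps "F-16" as one token (precision-oriented);
    False splits it into "f", "16" (recall-oriented).
    case_fold=True lowercases all tokens, the recall-enhancing choice
    the slide attributes to commercial systems.
    """
    if keep_hyphens:
        pattern = r"[A-Za-z0-9]+(?:-[A-Za-z0-9]+)*"
    else:
        pattern = r"[A-Za-z0-9]+"
    tokens = re.findall(pattern, text)
    return [t.lower() for t in tokens] if case_fold else tokens
```

With these defaults, digits are kept, so terms like “B12” remain searchable in a vitamin database.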

  6. Lexical Analysis for Query Processing • Tasks • depend on the design strategies of the lexical analyzer for automatic indexing (search terms must match index terms) • distinguish operators like Boolean operators, stemming or truncating operators, and weighting functions • distinguish grouping indicators like parentheses and brackets

  7. Stoplist (negative dictionary) • Avoid retrieving almost every item in a database regardless of its relevance. • Examples • conservative approach (ORBIT Search Service): and, an, by, from, of, the, with • (derived from the Brown corpus): 425 words: a, about, above, across, after, again, against, all, almost, alone, along, already, also, although, always, among, an, and, another, any, anybody, anyone, anything, anywhere, are, area, areas, around, as, ask, asked, asking, asks, at, away, b, back, backed, backing, backs, be, because, became, ... • Articles, prepositions, conjunctions, …

  8. Chinese Stop Words? • 一 Neu 58388 我 Nh 40332 不 D 39014 • 了 Di 31873 他 Nh 30025 也 D 29646 • 就 D 29211 人 Na 24269 都 D 20403 • 說 VE 19625 我們 Nh 18152 你 Nh 17298 • 要 D 15955 會 D 14066 很 Dfa 13013 • 大 VH 11577 能 D 11125 著 Di 11026 • 她 Nh 10776 還 D 9698 可以 D 9670 • 最 Dfa 9416 自己 Nh 9069 來 D 8992 • 所 D 8869 他們 Nh 8818 兩 Neu 8692 • 可 D 8508 為 VG 8369 好 VH 8304 • 又 D 8037 將 D 7858 更 D 7298 • 才 Da 7266 已 D 7256 ...

  9. Implementing Stoplists • approaches • examine the lexical analyzer output and remove any stopwords • every token must be looked up in the stoplist and removed from further analysis if found • a standard list-searching problem • remove stopwords as part of lexical analysis • the best implementation of stoplists
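A hash-based set makes the lookup cost constant per token. A minimal sketch; the word list is just the fragment of the Brown-corpus stoplist quoted on the earlier slide:

```python
# Fragment of the Brown-corpus-derived stoplist quoted above.
STOPLIST = frozenset(
    "a about above across after again against all almost alone along "
    "already also although always among an and another any are as at "
    "b back be because by from of the with".split()
)

def remove_stopwords(tokens, stoplist=STOPLIST):
    """Drop every token found in the stoplist; frozenset membership
    tests make this O(1) per token."""
    return [t for t in tokens if t not in stoplist]
```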

  10. Stemming • stem • the portion of a word left after the removal of its affixes • connect ← connected, connecting, connection, connections • benefits of stemming? • some favor the use of stemming • many Web search engines do not adopt any stemming algorithm • issues • correctness • retrieval performance • compression performance

  11. Stemmers • programs that relate morphologically similar indexing and search terms • stem at indexing time • advantage: efficiency and index-file compression • disadvantage: information about the full terms is lost • stem at search time, example (CATALOG system):
Look for: system users
Search Term: users
Term Occurrences
1. user 15
2. users 1
3. used 3
4. using 2
Which terms (0=none, CR=all):
The user selects the terms he wants by number.

  12. Conflation Methods • manual • automatic (stemmers) • affix removal: longest match vs. simple removal • successor variety • table lookup • n-gram • evaluation • correctness • retrieval effectiveness • compression performance
Term Stem
engineering engineer
engineered engineer
engineer engineer

  13. Successor Variety • Definition (successor variety of a string): the number of different characters that follow it in words in some body of text • Example: a body of text: able, axle, accident, ape, about; successor varieties of the prefixes of apple: 1st: 4 (b, x, c, p); 2nd: 1 (e)

  14. Successor Variety (Continued) • Idea: the successor variety of substrings of a term will decrease as more characters are added, until a segment boundary is reached, at which point the successor variety sharply increases. • Example: test word READABLE; corpus: ABLE, BEATABLE, FIXABLE, READ, READABLE, READING, RED, ROPE, RIPE
Prefix Successor Variety Letters
R 3 E, O, I
RE 2 A, D
REA 1 D
READ 3 A, I, blank
READA 1 B
READAB 1 L
READABL 1 E
READABLE 1 blank

  15. The Successor Variety Stemming Process • Determine the successor variety for a word. • Use this information to segment the word. • cutoff method: a boundary is identified whenever a cutoff value is reached • peak-and-plateau method: cut after a character whose successor variety exceeds that of the character immediately preceding it and the character immediately following it • complete word method: a segment is a complete word • entropy method • Select one of the segments as the stem.
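The two steps above (compute varieties, then segment) can be sketched directly. The function names are illustrative; the segmentation rule follows the peak-and-plateau definition on this slide:

```python
def successor_varieties(word, corpus):
    """For each prefix of `word`, count the distinct characters that
    follow that prefix in the corpus; end-of-word counts as a distinct
    successor ("blank")."""
    varieties = []
    for i in range(1, len(word) + 1):
        prefix = word[:i]
        successors = set()
        for w in corpus:
            if w.startswith(prefix):
                successors.add(w[i] if len(w) > i else "")  # "" = blank
        varieties.append(len(successors))
    return varieties

def peak_and_plateau_cuts(word, varieties):
    """A boundary falls after position i when its successor variety
    exceeds that of both neighbouring positions."""
    return [word[:i + 1]
            for i in range(1, len(varieties) - 1)
            if varieties[i] > varieties[i - 1] and varieties[i] > varieties[i + 1]]
```

On the READABLE example this reproduces the variety table and segments the word as READ | ABLE.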

  16. n-gram Stemmers • digram: a pair of consecutive letters • shared digram method: association measures are calculated between pairs of terms, e.g., Dice's coefficient S = 2C / (A + B), where A is the number of unique digrams in the first word, B the number of unique digrams in the second, and C the number of unique digrams shared by the two words.

  17. n-gram Stemmers (Continued) • Example
statistics => st ta at ti is st ti ic cs
unique digrams => at cs ic is st ta ti
statistical => st ta at ti is st ti ic ca al
unique digrams => al at ca ic is st ta ti
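The shared-digram computation for the example above, as a minimal sketch using Dice's coefficient S = 2C/(A+B):

```python
def digrams(word):
    """The set of unique consecutive letter pairs in a word."""
    return {word[i:i + 2] for i in range(len(word) - 1)}

def dice_digram_similarity(w1, w2):
    """S = 2C / (A + B): A and B are the unique-digram counts of the
    two words, C the number of digrams they share."""
    a, b = digrams(w1), digrams(w2)
    return 2 * len(a & b) / (len(a) + len(b))
```

For statistics (7 unique digrams) and statistical (8 unique digrams, 6 shared), S = 12/15 = 0.8.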

  18. n-gram Stemmers (Continued) • similarity matrix: determine the similarity measures for all pairs of terms in the database
word1 word2 word3 ... wordn-1
word2 S21
word3 S31 S32
...
wordn Sn1 Sn2 Sn3 … Sn(n-1)
• terms are clustered using a single-link clustering method • more a term-clustering procedure than a stemming one

  19. Affix Removal Stemmers • procedure: remove suffixes and/or prefixes from terms, leaving a stem, and transform the resultant stem, e.g., the Porter algorithm • example: plural forms
If a word ends in “ies” but not “eies” or “aies”, then “ies” → “y”
If a word ends in “es” but not “aes”, “ees”, or “oes”, then “es” → “e”
If a word ends in “s” but not “us” or “ss”, then “s” → NULL
• ambiguity
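The three plural rules transcribe directly into code; this sketch applies them in order, first match wins:

```python
def strip_plural(word):
    """Apply the slide's three plural-form rules in order."""
    if word.endswith("ies") and not word.endswith(("eies", "aies")):
        return word[:-3] + "y"    # queries -> query
    if word.endswith("es") and not word.endswith(("aes", "ees", "oes")):
        return word[:-2] + "e"    # bushes -> bushe (shows the ambiguity)
    if word.endswith("s") and not word.endswith(("us", "ss")):
        return word[:-1]          # cats -> cat
    return word                   # miss stays miss ("ss" is excluded)
```

The bushes → bushe result illustrates the ambiguity the slide mentions: simple suffix rules can produce non-words.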

  20. Affix Removal Stemmers (Continued) • longest-match stemmer: remove the longest possible string of characters from a word according to a set of rules • recoding: AxC → AyC, e.g., ki → ky • partial matching: only the n initial characters of stems are used in comparisons • different versions: Lovins, Salton, Dawson, Porter, … Students can refer to the rules listed in the appendix of the textbook (pp. 433-436)

  21. Index Term Selection (see Chapter 2)

  22. Fast Statistical Parsing of Noun Phrases for Document Indexing Chengxiang Zhai Laboratory for Computational Linguistics Carnegie Mellon University (ANLP’97, pp. 312-319)

  23. Phrases for Document Indexing • Indexing by single words • single words are often ambiguous and not specific enough for accurate discrimination of documents • bank terminology vs. terminology bank • Indexing by phrases • Syntactic phrases are almost always more specific than single words • Indexing by single words and phrases

  24. No significant improvement? • Fagan, Joel L., Experiments in Automatic Phrase Indexing for Document Retrieval: A Comparison of Syntactic and Non-syntactic Methods, Ph.D. thesis, Cornell University, 1987. • Lewis, D., Representation and Learning in Information Retrieval, Ph.D. thesis, University of Massachusetts, 1991. • Many syntactic phrases have very low frequency and tend to be over-weighted by normal weighting methods.

  25. Author’s points • A larger document collection may increase the frequency of most phrases, and thus alleviate the problem of low frequency. • The phrases are used only to supplement, not replace, the single words for indexing. • The new issue is how to parse gigabytes of text in practically feasible time. (133MHz DEC Alpha workstation: 8 hours/GB, 20 hours of training with 1GB of text.)

  26. Experiment Design • CLARIT commercial retrieval system • {original document set} ---->CLARIT NP Extractor ---->{Raw Noun Phrases} ---->Statistical NP Parser, Phrase Extractor ---->{Indexing Term Set} ---->CLARIT Retrieval Engine

  27. Different Indexing Units • example • [[[heavy construction] industry] group] (WSJ90) • single words • heavy, construction, industry, group • head modifier pairs • heavy construction, construction industry, industry group • full noun phrases • heavy construction industry group
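Given the bracketing above, the head-modifier pairs follow mechanically: in each constituent, the head of the left part modifies the head of the right part. A sketch assuming binary, right-headed NPs encoded as nested two-element lists (the representation is an assumption for illustration, not CLARIT's actual data structure):

```python
def head(np):
    """The head of a bare word is itself; of [modifier, head],
    the head of its right part."""
    return np if isinstance(np, str) else head(np[1])

def head_modifier_pairs(np):
    """Collect (modifier-head, head) pairs bottom-up."""
    if isinstance(np, str):
        return []
    left, right = np
    return (head_modifier_pairs(left)
            + head_modifier_pairs(right)
            + [(head(left), head(right))])
```

On [[[heavy construction] industry] group] this yields exactly the three pairs listed above.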

  28. Different Indexing Units (Continued) • WD-SET • single word only (no phrases, baseline) • WD-HM-SET • single word + head modifier pair • WD-NP-SET • single word + full NP • WD-HM-NP-SET • single word + head modifier + full NP

  29. Result Analysis • Collection: Tipster Disk 2 (250MB) • Query: TREC-5 ad hoc topics (251-300) • relevance feedback: top 10 documents returned from initial retrieval • evaluation • total number of relevant documents retrieved • highest level of precision over all the points of recall • average precision

  30. Effects of phrases with feedback and TREC-5

  31. Summary • When only one kind of phrase is used to supplement the single words, each can lead to a great improvement in precision. • When we combine the two kinds of phrases, the effect is a greater improvement in recall rather than precision. • How to combine and weight different phrases effectively becomes an important issue.

  32. Thesaurus Construction • IR thesaurus: coordinate indexing and retrieval; a list of terms (words or phrases) along with relationships among them, e.g., physics, EE, electronics, computer and control • INSPEC thesaurus (1979) example entry:
cesium (Cs) USE caesium (the preferred form)
computer-aided instruction
see also education (cross-referenced terms)
UF teaching machines (a set of alternatives)
BT educational computing (broader terms, cf. NT)
TT computer applications (root node/top term)
RT education, teaching (related terms)
CC C7810C (subject area)
FC C7810Cf (subject area)
• for indexer and searcher

  33. Roget Thesaurus • example: cowardly, adjective; ignobly lacking in courage: cowardly turncoats. Syns: chicken (slang), chicken-hearted, craven, dastardly, faint-hearted, gutless, lily-livered, pusillanimous, unmanly, yellow (slang), yellow-bellied (slang)

  34. Functions of thesauri • Provide a standard vocabulary for indexing and searching • Assist users with locating terms for proper query formulation • Provide classified hierarchies that allow the broadening and narrowing of the current query request

  35. Usage • IndexingSelect the most appropriate thesaurus entries for representing the document. • SearchingDesign the most appropriate search strategy. • If the search does not retrieve enough documents, the thesaurus can be used to expand the query. • If the search retrieves too many items, the thesaurus can suggest more specific search vocabulary.

  36. Features of Thesauri • Construction of phrases from individual terms • Coordination level • pre-coordination: phrases • phrases are available for indexing and retrieval • advantage: reduces ambiguity in indexing and searching • disadvantage: the searcher has to know the phrase formulation rules • post-coordination: words • phrases are constructed while searching • advantage: users do not worry about the exact word ordering • disadvantage: search precision may fall, e.g., library school vs. school library • intermediate level: phrases and single words • the higher the level of coordination, the greater the precision of the vocabulary but the larger the vocabulary size • length of phrases? usually two or three words, sometimes more

  37. Features of Thesauri (Continued) • Term relationships • Aitchison and Gilchrist (1972) • equivalence relationships • synonymy: trade names, popular and local usage, superseded terms • quasi-synonymy, e.g., harshness and tenderness • hierarchical relationships, e.g., genus-species (dog and German shepherd), BT vs. NT • nonhierarchical relationships, e.g., thing-part (bus and seat), thing-attribute (rose and fragrance)

  38. Features of Thesauri (Continued) • Wang, Vandendorpe, and Evens (1985) • parts-wholes, e.g., set-element, count-mass • collocation relations: words that frequently co-occur in the same phrase or sentence • paradigmatic relations: words that have the same semantic core, e.g., “moon” and “lunar” • taxonomy and synonymy • antonymy relations

  39. Features of Thesauri (Continued) • Number of entries for each term • homographs: words with multiple meanings • each homograph entry is associated with its own set of relations • problem: how to select between alternative meanings • Specificity of vocabulary • the precision associated with the component terms • a highly specific vocabulary promotes precision in retrieval (rules of phrase construction)

  40. Features of Thesauri (Continued) • Control on term frequency of class members • for statistical thesaurus construction methods • terms included in the same thesaurus class have roughly equal frequencies • the total frequency in each class should also be roughly similar • Normalization of vocabulary • terms should be in noun form • noun phrases should avoid prepositions unless they are commonly known • a limited number of adjectives should be used • singularity, ordering, spelling, capitalization, transliteration, abbreviations, ...

  41. Thesaurus Construction • manual thesaurus construction • define the boundaries of the subject area • collect the terms for each subarea; sources: indexes, encyclopedias, handbooks, textbooks, journal titles and abstracts, catalogues, relevant thesauri, vocabulary systems, ... • organize the terms and their relationships into structures • review (and refine) the entire thesaurus for consistency • automatic thesaurus construction • from a collection of document items • by merging existing thesauri

  42. Thesaurus Construction from Texts
1. Construction of vocabulary: normalization and selection of terms; phrase construction depending on the coordination level desired
2. Similarity computations between terms: identify the significant statistical associations between terms
3. Organization of vocabulary: organize the selected vocabulary into a hierarchy on the basis of the associations computed in step 2

  43. Construction of Vocabulary • Objective: identify the most informative terms (words and phrases) • Procedure
(1) Identify an appropriate document collection. The document collection should be sizable and representative of the subject area.
(2) Determine the required specificity for the thesaurus.
(3) Normalize the vocabulary terms. (a) Eliminate very trivial words such as prepositions and conjunctions. (b) Stem the vocabulary.
(4) Select the most interesting stems, and create interesting phrases for a higher coordination level.

  44. Stem Evaluation and Selection • selection by frequency of occurrence • each term may belong to a category of high, medium, or low frequency • terms in the mid-frequency range are the best for indexing and searching

  45. Stem Evaluation and Selection (Continued) • selection by discrimination value (DV) • the more discriminating a term, the higher its value as an index term • procedure • compute the average inter-document similarity in the collection • remove the term K from the indexing vocabulary, and recompute the average similarity • DV(K) = (average similarity without K) - (average similarity with K) • the DV for good discriminators is positive. From a retrieval perspective, terms with higher discrimination values are better.
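The DV procedure can be sketched over simple term-frequency vectors. Cosine is used here for the inter-document similarity, which the slide leaves unspecified; the function names are illustrative:

```python
import math

def avg_pairwise_similarity(vectors):
    """Mean cosine similarity over all document pairs."""
    def cos(u, v):
        dot = sum(u[k] * v.get(k, 0) for k in u)
        nu = math.sqrt(sum(x * x for x in u.values()))
        nv = math.sqrt(sum(x * x for x in v.values()))
        return dot / (nu * nv) if nu and nv else 0.0
    n = len(vectors)
    pairs = [(i, j) for i in range(n) for j in range(i + 1, n)]
    return sum(cos(vectors[i], vectors[j]) for i, j in pairs) / len(pairs)

def discrimination_value(term, vectors):
    """DV(K) = (avg similarity without K) - (avg similarity with K);
    positive for good discriminators."""
    without = [{t: f for t, f in d.items() if t != term} for d in vectors]
    return avg_pairwise_similarity(without) - avg_pairwise_similarity(vectors)
```

A term occurring in every document gets a negative DV (removing it spreads the documents apart is false; it actually makes them less similar, lowering the "without" average), while a term confined to one document gets a positive DV.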

  46. Phrase Construction Decrease the frequency of high-frequency terms and increase their value for retrieval • Salton and McGill procedure
1. Compute pairwise co-occurrence for high-frequency words.
2. If this co-occurrence is lower than a threshold, then do not consider the pair any further.
3. For pairs that qualify, compute the cohesion value:
COHESION(ti, tj) = co-occurrence-frequency / sqrt(frequency(ti) * frequency(tj))
or
COHESION(ti, tj) = size-factor * co-occurrence-frequency / (frequency(ti) * frequency(tj)), where size-factor is the size of the thesaurus vocabulary
4. If cohesion is above a second threshold, retain the phrase (vs. syntactic/semantic methods)
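Step 3's first cohesion formula in code; the frequency tables passed in are illustrative:

```python
import math

def cohesion(freq, co_freq, ti, tj):
    """COHESION(ti, tj): co-occurrence frequency divided by the
    geometric mean of the two individual term frequencies."""
    pair = co_freq.get((ti, tj), co_freq.get((tj, ti), 0))
    return pair / math.sqrt(freq[ti] * freq[tj])
```

For example, two terms with frequencies 100 and 64 that co-occur 40 times have cohesion 40 / sqrt(6400) = 0.5.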

  47. Phrase Construction (Continued) • Choueka procedure
1. Select the range of lengths allowed for each collocational expression, e.g., 2-6 words.
2. Build a list of all potential expressions from the collection with the prescribed length that have a minimum frequency.
3. Delete sequences that begin or end with a trivial word (e.g., prepositions, pronouns, articles, conjunctions, etc.).
4. Delete expressions that contain high-frequency nontrivial words.
5. Given an expression, evaluate any potential sub-expressions for relevance. Discard any that are not sufficiently relevant.
6. Try to merge smaller expressions into larger and more meaningful ones, e.g., abc and bcd → abcd.

  48. Similarity Computation • Cosinecompute the number of documents associated with both terms divided by the square root of the product of the number of documents associated with the first term and the number of documents associated with the second term. • Dicecompute the number of documents associated with both terms divided by the sum of the number of documents associated with one term and the number associated with the other.
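Both measures operate on the sets of documents containing each term. A minimal sketch; note that Dice's coefficient is conventionally written with a factor of 2 in the numerator:

```python
import math

def cosine_term_similarity(docs_i, docs_j):
    """|Di ∩ Dj| / sqrt(|Di| * |Dj|), where Di is the set of
    documents containing term i."""
    return len(docs_i & docs_j) / math.sqrt(len(docs_i) * len(docs_j))

def dice_term_similarity(docs_i, docs_j):
    """2 |Di ∩ Dj| / (|Di| + |Dj|)."""
    return 2 * len(docs_i & docs_j) / (len(docs_i) + len(docs_j))
```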

  49. Vocabulary Organization
Assumptions: (1) high-frequency words have broad meaning, while low-frequency words have narrow meaning; (2) if the density functions of two terms have the same shape, then the two words have similar meaning.
1. Identify a set of frequency ranges.
2. Group the vocabulary terms into different classes based on their frequencies and the ranges selected in step 1.
3. The highest-frequency class is assigned level 0, the next level 1, and so on.
4. Determine parent-child links between adjacent levels as follows: for each term t in level i, compute the similarity between t and every term in level i-1. Term t becomes the child of the most similar term in level i-1. If more than one term in level i-1 qualifies, each becomes a parent of t; in other words, a term is allowed to have multiple parents.
5. After all terms in level i have been linked to level i-1 terms, check the level i-1 terms and identify those that have no children. Propagate such terms to level i by creating an identical “dummy” term as the child of each.
6. Perform steps 4 and 5 for each level, starting with level 1.
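Step 4 (parent-child linking with ties allowed) in isolation, with a caller-supplied similarity function. This is a sketch of the single step, not the full six-step procedure, and the names are illustrative:

```python
def link_levels(children, parent_level, similarity):
    """For each term in level i, find the most similar term(s) in
    level i-1; ties yield multiple parents."""
    links = {}
    for t in children:
        scores = {p: similarity(t, p) for p in parent_level}
        best = max(scores.values())
        links[t] = sorted(p for p, s in scores.items() if s == best)
    return links
```

Any of the term-term measures from the previous slide (cosine or Dice over document sets) can serve as the `similarity` argument.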

  50. Merging Existing Thesauri • simple mergelink hierarchies wherever they have terms in common • complex merge • link terms from different hierarchies if they are similar enough. • similarity is a function of the number of parent and child terms in common
