
Distance functions and IE

These slides by William W. Cohen explore the use of distance functions in information extraction, focusing on record linkage and string distance metrics such as Levenshtein and Smith-Waterman.

Presentation Transcript


  1. Distance functions and IE William W. Cohen CALD

  2. Announcements • March 25 (Thurs) – talk from Carlos Guestrin (Assistant Prof in CALD as of fall 2004) on max-margin Markov nets • time constraints? • Writeups: • nothing today • “distance metrics for text” – three papers due next Monday, 3/22

  3. Record linkage: definition • Record linkage: determine if pairs of data records describe the same entity • I.e., find record pairs that are co-referent • Entities: usually people (or organizations or…) • Data records: names, addresses, job titles, birth dates, … • Main applications: • Joining two heterogeneous relations • Removing duplicates from a single relation

  4. Record linkage: terminology • The term “record linkage” is possibly co-referent with: • For DB people: data matching, merge/purge, duplicate detection, data cleansing, ETL (extraction, transfer, and loading), de-duping • For AI/ML people: reference matching, database hardening, object consolidation • In NLP: co-reference/anaphora resolution • Statistical matching, clustering, language modeling, …

  5. Motivation • Q: What does this have to do with IE? • A: Quite a lot, actually... Webfind (Monge & Elkan, 1995)

  6. Finding a technical paper c. 1995 • Start with citation: " Experience With a Learning Personal Assistant", T.M. Mitchell, R. Caruana, D. Freitag, J. McDermott, and D. Zabowski, Communications of the ACM, Vol. 37, No. 7, pp. 81-91, July 1994. • Find author’s institution (w/ INSPEC) • Find web host (w/ NETFIND) • Find author’s home page and (hopefully) the paper by browsing

  7. Automatically finding a technical paper c. 1995 with WebFind • Start with citation: " Experience With a Learning Personal Assistant", T.M. Mitchell, R. Caruana, D. Freitag, J. McDermott, and D. Zabowski, Communications of the ACM, Vol. 37, No. 7, pp. 81-91, July 1994. • Find author’s institution (w/ automated search against INSPEC) • Find web host (w/ auto search on NETFIND) • Find author’s home page and (hopefully) the paper by heuristic spidering

  8. The data integration problem

  9. The data integration problem • Control flow (modulo details about querying) • Extract (author, department) pairs from DB1 • Extract (department, www server) pairs from DB2 • Execute the two-step plan to get the paper: • author -> department -> wwwServer • two steps means matching (linking, integrating, deduping, ....) department names in DB1/DB2 • issues are completely different if the user is executing a one-step plan: • a one-step plan is retrieval
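
To make the two-step plan concrete, here is a minimal Python sketch. The relations, names, and the match() heuristic are all illustrative (not from WebFind); match() is a placeholder for the record-linkage step where a string distance metric would be plugged in.

```python
# Toy relations standing in for DB1 (author, department) and DB2 (department, www server).
db1 = [("T. Mitchell", "Computer Science Dept.")]
db2 = [("Dept. of Computer Science", "www.cs.cmu.edu")]

def match(d1, d2):
    # Placeholder for the record-linkage step: crude token overlap here;
    # this is exactly where a string distance (Levenshtein, Smith-Waterman,
    # Monge-Elkan, ...) would go instead.
    t1, t2 = set(d1.lower().split()), set(d2.lower().split())
    return len(t1 & t2) / max(len(t1), len(t2)) >= 0.5

def find_server(author):
    # Step 1: author -> department via DB1; step 2: department -> wwwServer via DB2,
    # joining the two relations on approximately-matching department names.
    for a, dept in db1:
        if a == author:
            for dept2, server in db2:
                if match(dept, dept2):
                    yield server

print(list(find_server("T. Mitchell")))   # ['www.cs.cmu.edu']
```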

  10. String distance metrics: Levenshtein • Edit-distance metrics • Distance is shortest sequence of edit commands that transform s to t. • Simplest set of operations: • Copy character from s over to t • Delete a character in s (cost 1) • Insert a character in t (cost 1) • Substitute one character for another (cost 1) • This is “Levenshtein distance”

  11. Levenshtein distance - example • distance(“William Cohen”, “Willliam Cohon”) • [alignment table on the slide, with columns: s, alignment, t, op, cost]

  12. Levenshtein distance - example • distance(“William Cohen”, “Willliam Cohon”) • [alignment table on the slide, now showing a gap; columns: s, alignment, t, op, cost]

  13. Computing Levenshtein distance - 1 • D(i,j) = score of best alignment from s1..si to t1..tj • D(i,j) = min of:
        D(i-1,j-1)      if si=tj   //copy
        D(i-1,j-1) + 1  if si!=tj  //substitute
        D(i-1,j) + 1               //insert
        D(i,j-1) + 1               //delete

  14. Computing Levenshtein distance - 2 • D(i,j) = score of best alignment from s1..si to t1..tj • D(i,j) = min of:
        D(i-1,j-1) + d(si,tj)  //subst/copy
        D(i-1,j) + 1           //insert
        D(i,j-1) + 1           //delete
      (simplify by letting d(c,d) = 0 if c=d, 1 else; also let D(i,0) = i, for i inserts, and D(0,j) = j)

  15. Computing Levenshtein distance - 3 • same recurrence, D(i,j) = min of:
        D(i-1,j-1) + d(si,tj)  //subst/copy
        D(i-1,j) + 1           //insert
        D(i,j-1) + 1           //delete
      [filled dynamic-programming table on the slide; the bottom-right cell = D(s,t)]

  16. Computing Levenshtein distance – 4 • D(i,j) = min of:
        D(i-1,j-1) + d(si,tj)  //subst/copy
        D(i-1,j) + 1           //insert
        D(i,j-1) + 1           //delete
      A trace indicates where the min value came from, and can be used to find the edit operations and/or a best alignment (there may be more than one).
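
As a concrete illustration of slides 13-16, here is a minimal Python sketch of the recurrence (not the lecture's own code). Recording which case achieved the min at each cell would give the trace mentioned above; that bookkeeping is omitted here.

```python
def levenshtein(s, t):
    # D[i][j] = score of best alignment of s[:i] with t[:j];
    # D[i][0] = i and D[0][j] = j cover pure deletes/inserts.
    n, m = len(s), len(t)
    D = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        D[i][0] = i
    for j in range(m + 1):
        D[0][j] = j
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = 0 if s[i - 1] == t[j - 1] else 1      # copy vs. substitute
            D[i][j] = min(D[i - 1][j - 1] + d,        # subst/copy
                          D[i - 1][j] + 1,            # insert
                          D[i][j - 1] + 1)            # delete
    return D[n][m]                                    # = D(s,t)

print(levenshtein("William Cohen", "Willliam Cohon"))  # 2: one insert, one substitution
```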

  17. Needleman-Wunsch distance • d(c,d) is an arbitrary distance function on characters (e.g. related to typo frequencies, amino acid substitutability, etc.) • example pair: “William Cohen” vs. “Wukkuan Cigeb” (adjacent-key typos) • D(i,j) = min of:
        D(i-1,j-1) + d(si,tj)  //subst/copy
        D(i-1,j) + G           //insert
        D(i,j-1) + G           //delete
      G = “gap cost”
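
A sketch of this generalization: the same dynamic program, but with an arbitrary per-character cost d and a gap cost G. The typo-aware cost function and its adjacency set below are made up for the example, not taken from the slides.

```python
def needleman_wunsch(s, t, d, G=1.0):
    # Levenshtein-style DP with an arbitrary character cost d(a, b)
    # and a uniform gap cost G for inserts/deletes.
    n, m = len(s), len(t)
    D = [[0.0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        D[i][0] = i * G
    for j in range(1, m + 1):
        D[0][j] = j * G
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            D[i][j] = min(D[i - 1][j - 1] + d(s[i - 1], t[j - 1]),  # subst/copy
                          D[i - 1][j] + G,                          # insert
                          D[i][j - 1] + G)                          # delete
    return D[n][m]

# Illustrative d: adjacent-key typos cost less than arbitrary substitutions.
ADJACENT = {("i", "u"), ("u", "i"), ("l", "k"), ("k", "l"), ("o", "i"), ("i", "o")}
def typo_cost(a, b):
    if a == b:
        return 0.0
    return 0.5 if (a.lower(), b.lower()) in ADJACENT else 1.0

print(needleman_wunsch("William Cohen", "Wukkuan Cigeb", typo_cost))
```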

  18. Smith-Waterman distance - 1 • D(i,j) = max of:
        0                      //start over
        D(i-1,j-1) - d(si,tj)  //subst/copy
        D(i-1,j) - G           //insert
        D(i,j-1) - G           //delete
      Distance is the maximum over all i,j in the table of D(i,j)

  19. Smith-Waterman distance - 2 • D(i,j) = max of:
        0                      //start over
        D(i-1,j-1) - d(si,tj)  //subst/copy
        D(i-1,j) - G           //insert
        D(i,j-1) - G           //delete
      with G = 1, d(c,c) = -2, d(c,d) = +1

  20. Smith-Waterman distance - 3 • same recurrence and costs as above (G = 1, d(c,c) = -2, d(c,d) = +1) • [worked example table on the slide]
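
A minimal sketch of the local-alignment version, using the example costs from slides 19-20 (G = 1, d(c,c) = -2, d(c,d) = +1). Because cells can restart at 0 and the score is the maximum over the whole table, only the best-matching substrings count; larger values are better here, so this is really a similarity-style score.

```python
def smith_waterman(s, t, G=1.0):
    def d(a, b):                       # costs from slide 19
        return -2.0 if a == b else 1.0

    n, m = len(s), len(t)
    D = [[0.0] * (m + 1) for _ in range(n + 1)]
    best = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            D[i][j] = max(0.0,                                      # start over
                          D[i - 1][j - 1] - d(s[i - 1], t[j - 1]),  # subst/copy
                          D[i - 1][j] - G,                          # insert
                          D[i][j - 1] - G)                          # delete
            best = max(best, D[i][j])   # score = max over all i,j in the table
    return best

print(smith_waterman("William W. Cohen", "Cohen, William"))
```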

  21. Smith-Waterman distance: Monge & Elkan’s WEBFIND (1996)

  22. Smith-Waterman distance in Monge & Elkan’s WEBFIND (1996) • Used a standard version of Smith-Waterman with hand-tuned weights for inserts and character substitutions • Split large text fields by separators like commas, etc., and explored different pairings (since S-W assigns a large cost to large transpositions) • Results were competitive with plausible competitors

  23. Smith-Waterman distance in Monge & Elkan’s WEBFIND (1996) • String s = A1 A2 ... AK, string t = B1 B2 ... BL • sim’ is editDistance scaled to [0,1] • Monge-Elkan’s “recursive matching scheme” is the average maximal similarity of Ai to Bj: sim(s,t) = (1/K) * Σ_{i=1..K} max_{j=1..L} sim’(Ai, Bj)
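
A sketch of the recursive matching scheme, reusing the levenshtein function from the earlier sketch. The slide only says sim' is edit distance scaled to [0,1]; the particular scaling below (normalize by the longer token and flip) is an assumption.

```python
def sim_prime(a, b):
    # Assumed scaling: edit distance normalized by the longer token, flipped
    # so that 1.0 means identical and 0.0 means maximally different.
    return 1.0 - levenshtein(a, b) / max(len(a), len(b))

def monge_elkan(s, t):
    # sim(s, t) = (1/K) * sum_i max_j sim'(A_i, B_j),
    # where s splits into tokens A_1..A_K and t into B_1..B_L.
    A, B = s.split(), t.split()
    if not A or not B:
        return 0.0
    return sum(max(sim_prime(a, b) for b in B) for a in A) / len(A)

print(monge_elkan("William W. Cohen", "Cohen, William W."))
```

Note that the scheme is not symmetric: it averages over the tokens of s and maximizes over the tokens of t.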

  24. Smith-Waterman distance: Monge & Elkan’s WEBFIND (1996) • [worked example on the slide, with per-token maximal similarities 0.51, 0.92, 0.5, 1.0]

  25. Results: S-W from Monge & Elkan

  26. Affine gap distances • Smith-Waterman fails on some pairs that seem quite similar: William W. Cohen vs. William W. ‘Don’t call me Dubya’ Cohen • Intuitively, a single long insertion is “cheaper” than a lot of short insertions • Intuitively, are springlest hulongru poinstertimon extisn’t “cheaper” than a lot of short insertions (the same sentence, scrambled by many short insertions)

  27. Affine gap distances - 2 • Idea: • Current cost of a “gap” of n characters: nG • Make this cost: A + (n-1)B, where A is cost of “opening” a gap, and B is cost of “continuing” a gap.

  28. Affine gap distances - 3 • For comparison, the single-table recurrence: D(i,j) = max of:
        D(i-1,j-1) + d(si,tj)  //subst/copy
        D(i-1,j) - 1           //insert
        D(i,j-1) - 1           //delete
      • With affine gaps, keep two extra tables:
        IS(i,j) = best score in which si is aligned with a ‘gap’:
          IS(i,j) = max( D(i-1,j) - A, IS(i-1,j) - B )
        IT(i,j) = best score in which tj is aligned with a ‘gap’:
          IT(i,j) = max( D(i,j-1) - A, IT(i,j-1) - B )
        D(i,j) = max( D(i-1,j-1) + d(si,tj), IS(i-1,j-1) + d(si,tj), IT(i-1,j-1) + d(si,tj) )
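
A Gotoh-style sketch of the three-table recurrence above. The gap costs A and B and the match/mismatch values are illustrative, not taken from the slides.

```python
NEG = float("-inf")

def affine_gap_score(s, t, d, A=2.0, B=0.5):
    # D[i][j]  = best score with s_i aligned to t_j
    # IS[i][j] = best score with s_i aligned to a gap
    # IT[i][j] = best score with t_j aligned to a gap
    n, m = len(s), len(t)
    D  = [[NEG] * (m + 1) for _ in range(n + 1)]
    IS = [[NEG] * (m + 1) for _ in range(n + 1)]
    IT = [[NEG] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        IS[i][0] = -A - (i - 1) * B            # s-prefix aligned entirely to gaps
    for j in range(1, m + 1):
        IT[0][j] = -A - (j - 1) * B            # t-prefix aligned entirely to gaps
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            IS[i][j] = max(D[i - 1][j] - A,    # open a gap (cost A)
                           IS[i - 1][j] - B)   # continue a gap (cost B)
            IT[i][j] = max(D[i][j - 1] - A,
                           IT[i][j - 1] - B)
            D[i][j] = max(D[i - 1][j - 1],     # enter the match state from D, IS, or IT
                          IS[i - 1][j - 1],
                          IT[i - 1][j - 1]) + d(s[i - 1], t[j - 1])
    return max(D[n][m], IS[n][m], IT[n][m])

match_mismatch = lambda a, b: 2.0 if a == b else -1.0   # illustrative values
print(affine_gap_score("William W. Cohen",
                       "William W. 'Don't call me Dubya' Cohen",
                       match_mismatch))
```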

  29. Affine gap distances - 4 • [state-transition diagram on the slide: states D, IS, IT; entering IS or IT from D costs A (opening a gap), staying in IS or IT costs B per character (continuing a gap), and edges back into D are scored by d(si,tj)]

  30. Affine gap distances – experiments (from McCallum, Nigam, Ungar KDD2000) • Goal is to match data like this: [example citation records shown on the slide]

  31. Affine gap distances – experiments (from McCallum,Nigam,Ungar KDD2000) • Hand-tuned edit distance • Lower costs for affine gaps • Even lower cost for affine gaps near a “.” • HMM-based normalization to group title, author, booktitle, etc into fields

  32. Affine gap distances – experiments
