
Text Categorization Moshe Koppel Lecture 5: Authorship Verification

Presentation Transcript


  1. Text Categorization, Moshe Koppel. Lecture 5: Authorship Verification, with Jonathan Schler and Shlomo Argamon

  2. Attribution vs. Verification • Attribution – Which of authors A1,…,An wrote doc X? • Verification – Did author A write doc X?

  3. Authorship Verification: Did the author of S also write X? Story: Ben Ish Chai, a 19th C. Baghdadi Rabbi, is the author of a corpus, S, of 500+ legal letters. Ben Ish Chai also published another corpus of 500+ legal letters, X, but denied authorship of X, despite external evidence that he wrote it. How can we determine if the author of S is also the author of X?

  4. Verification is Harder than Attribution In the absence of a closed set of alternate suspects to S, we’re never sure that we have a representative set of not-S documents. Let’s see why this is bad.

  5. Round 1: “The Lineup” D1,…,D5 are corpora written by other Rabbis of the same region and period as Ben Ish Chai. They will play the role of “impostors”.

  6. Round 1: “The Lineup” (cont.) • Learn a model for S vs. each of the impostors. • For each document in X, check if it is classed as S or an impostor. • If “many” are classed as impostors, exonerate S.

  7. Round 1: “The Lineup” (cont.) In fact, almost all are classified as S (i.e., many mystery documents seem to point to S as the “guilty” author). Does this mean S really is the author?
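Before seeing why this fails, here is a minimal sketch of the lineup protocol, assuming scikit-learn and a simple bag-of-words representation; the corpus variables, vectorizer settings, and feature choices are illustrative assumptions, not the setup actually used in the lecture.

```python
# Minimal sketch of the "lineup": train S vs. each impostor, then see how the
# documents of the disputed corpus X are classified. All settings are illustrative.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC

def lineup(S_docs, impostor_corpora, X_docs):
    """Return the fraction of (impostor, X-document) decisions that class the
    X document as S rather than as the impostor."""
    votes_for_S = 0
    for imp_docs in impostor_corpora:
        vec = TfidfVectorizer(max_features=5000)          # assumed feature space
        train = vec.fit_transform(S_docs + imp_docs)
        labels = [1] * len(S_docs) + [0] * len(imp_docs)  # 1 = S, 0 = impostor
        clf = LinearSVC().fit(train, labels)
        votes_for_S += int(clf.predict(vec.transform(X_docs)).sum())
    return votes_for_S / (len(impostor_corpora) * len(X_docs))
```

In the Ben Ish Chai experiment nearly every such decision goes to S, which is the outcome reported on the slide above.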

  8. Why “The Lineup” Fails No. This only shows that S is a better fit than these impostors, not that he is guilty. The real author may simply be some other person more similar to S than to (any of) these impostors. (One important caveat: suppose we had, say, 10000 impostors. That would be a bit more convincing.) Well, at least we can rule out these impostors.

  9. Round 2: Composite Sketch Does X Look Like S? Learn a model for S vs. X. If CV “fails” (so that we can’t distinguish S from X), S is probably guilty (esp. since we already know that we can distinguish S [and X] from each of the impostors).

  10. Round 2: Composite Sketch (cont.) In fact, we obtain 98% CV accuracy for S vs. X. So can we exonerate S?
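A minimal sketch of the composite-sketch test, again assuming scikit-learn; the feature space and fold count are assumptions. High cross-validation accuracy simply means S and X are easy to tell apart.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.model_selection import cross_val_score
from sklearn.svm import LinearSVC

def s_vs_x_cv_accuracy(S_docs, X_docs, folds=10):
    """Cross-validation accuracy for separating S documents from X documents."""
    vec = CountVectorizer(max_features=1000)              # assumed feature space
    data = vec.fit_transform(S_docs + X_docs)
    labels = [1] * len(S_docs) + [0] * len(X_docs)
    return cross_val_score(LinearSVC(), data, labels, cv=folds).mean()
```

A near-perfect score here only shows that some features separate the two corpora; the next slide explains why that is not evidence about authorship.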

  11. Why Composite Sketch Fails No. Superficial differences, due to: • thematic differences, • chronological drift, • different purposes or contexts, • deliberate ruses would be enough to allow differentiation between S and X even if they were by the same author. We call these differences “masks”.

  12. Example: House of Seven Gables This is a crucial point, so let’s consider an example where we know the author’s identity. With what CV accuracy can we distinguish House of Seven Gables from the known works of Hawthorne, Melville and Cooper (respectively)?

  13. Example: House of Seven Gables (cont.) In each case, we obtain 95+% CV accuracy (even though Hawthorne really wrote it).

  14. Example (continued) A small number of features allow House of Seven Gables to be distinguished from another Hawthorne work (The Scarlet Letter), for example the pronouns “he” and “she”. What happens when we eliminate features like those?

  15. Round 3: Unmasking • Learn models for X vs. S and for X vs. each impostor. • For each of these, drop the k best features (k = 5, 10, 15, …), i.e., those with the highest weight in the SVM, and learn again. • Compare the resulting accuracy curves (see the sketch below).
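A minimal sketch of computing one unmasking curve, assuming scikit-learn; the 250-feature bag of words, the fold count, and dropping k features per round by absolute SVM weight are illustrative assumptions rather than the lecture's exact settings.

```python
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.model_selection import cross_val_score
from sklearn.svm import LinearSVC

def unmasking_curve(A_docs, B_docs, rounds=10, k=5):
    """Record CV accuracy for A vs. B while repeatedly deleting the k features
    with the largest absolute SVM weight (the slide's "best" features)."""
    vec = CountVectorizer(max_features=250)               # assumed feature space
    X = vec.fit_transform(A_docs + B_docs).toarray()
    y = np.array([1] * len(A_docs) + [0] * len(B_docs))
    active = np.arange(X.shape[1])                        # surviving feature columns
    curve = []
    for _ in range(rounds):
        curve.append(cross_val_score(LinearSVC(), X[:, active], y, cv=10).mean())
        weights = np.abs(LinearSVC().fit(X[:, active], y).coef_[0])
        active = active[np.argsort(weights)[:-k]]         # drop the k strongest features
    return curve
```

The claim of the following slides is that same-author curves collapse quickly once the strongest features are removed, while different-author curves stay high.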

  16. House of Seven Gables (concluded) [Figure: unmasking curves for House of Seven Gables vs. Melville, Cooper, and Hawthorne]

  17. Does Unmasking Always Work? Experimental setup: • Several similar authors each with multiple books (chunked into approx. equal-length examples) • Construct unmasking curve for each pair of books • Compare same-author pairs to different-author pairs

  18. Unmasking: 19th C. American Authors (Hawthorne, Melville, Cooper)

  19. Unmasking: 19th C. English Playwrights (Shaw, Wilde)

  20. Unmasking: 19th C. American Essayists (Thoreau, Emerson)

  21. Experiment • 21 books; 10 authors (= 210 labelled book–author pairs) • Represent unmasking curves as vectors • Leave-one-book-out experiments • Use the training books to learn to separate same-author curves from different-author curves • Classify the left-out book (yes/no) for each author independently • Use “The Lineup” to filter false positives
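A minimal sketch of this leave-one-book-out protocol, reusing the unmasking_curve sketch above; turning each curve into a fixed-length vector of accuracies and drops, and the choice of logistic regression as the meta-classifier, are assumptions rather than the lecture's exact setup.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def curve_features(curve):
    """Represent an unmasking curve as its accuracies plus round-to-round drops."""
    return np.concatenate([curve, np.diff(curve)])

def leave_one_book_out(curves, labels, book_ids, held_out_book):
    """curves[i]: unmasking curve for pair i; labels[i]: 1 = same author;
    book_ids[i]: the book involved in pair i. Train on all other books,
    then predict same-author / different-author for the held-out book's pairs."""
    X = np.array([curve_features(c) for c in curves])
    y = np.array(labels)
    train = np.array([b != held_out_book for b in book_ids])
    clf = LogisticRegression(max_iter=1000).fit(X[train], y[train])
    return clf.predict(X[~train])
```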

  22. Results • 2 misclassified out of 210 • A simple rule that almost always works: if accuracy after 6 elimination rounds is lower than 89% and the second-highest accuracy drop between two consecutive iterations is greater than 16%, then the books are by the same author.
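The reported rule can be written down directly. A minimal sketch, assuming the curve is a list of per-round accuracies with curve[0] measured before any elimination; reading “drop in two consecutive iterations” as the drop between adjacent rounds is also an assumption.

```python
def same_author(curve):
    """Slide's rule: low accuracy after 6 elimination rounds AND a large
    second-highest drop between consecutive rounds => same author."""
    drops = [curve[i] - curve[i + 1] for i in range(len(curve) - 1)]
    second_largest_drop = sorted(drops, reverse=True)[1]
    return curve[6] < 0.89 and second_largest_drop > 0.16
```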

  23. Unmasking Ben Ish Chai

  24. Unmasking: Summary • This method works very well in general (provided X and S are both large). • Key is not how similar/different two texts are, but how robust that similarity/difference is to changes in the feature set.

  25. Now let’s try a much harder problem… • Suppose, instead of one candidate, we have 10,000 candidate authors – and we aren’t even sure if any of them is the real author. (This is two orders of magnitude more than has ever been tried before.) • Building a classifier doesn’t sound promising, but information retrieval methods might have a chance. • So, let’s try assigning an anonymous document to whichever author’s known writing is most similar (using the usual vector space/cosine model).
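A minimal sketch of this nearest-author baseline, assuming scikit-learn; it uses the character 4-gram features described on the next slide, with TF-IDF weighting as an added assumption.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def most_similar_author(candidate_texts, snippet):
    """candidate_texts: dict mapping author name -> that author's known text.
    Returns the candidate whose text is most cosine-similar to the snippet."""
    names = list(candidate_texts)
    vec = TfidfVectorizer(analyzer='char', ngram_range=(4, 4))  # character 4-grams
    known = vec.fit_transform([candidate_texts[n] for n in names])
    sims = cosine_similarity(vec.transform([snippet]), known)[0]
    return names[sims.argmax()]
```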

  26. IR Approach • We tried this on a corpus of 10,000 blogs, where the object was to attribute a short snippet from each blog. (Each attribution problem is handled independently.) • Our feature set consisted of character 4-grams.

  27. IR Approach (cont.) • 46% of snippets are correctly attributed.

  28. IR Approach • 46% is not bad but completely useless in most applications. • What we’d really like to do is figure out which attributions are reliable and which are not. • In an earlier attempt (KSA 2006), we tried building a meta-classifier that could solve that problem (but meta-classifiers are a bit fiddly).

  29. When does most similar = actual author? • We can generalize the unmasking idea. • Check whether the similarity between the snippet and an author’s known text is robust with respect to changes in the feature set. • If it is, that’s the author. • If not, we just say we don’t know. (If in fact none of the candidates wrote it, that’s the best answer.)

  30. Algorithm • Randomly choose a subset of features. • Find the most similar author (using that feature subset). • Iterate. • If S is most similar for at least k% of iterations, S is the author. Otherwise, say Don’t Know. (The choice of k trades off precision against recall.)
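A minimal sketch of this procedure, reusing the character 4-gram space from the earlier sketch; the 100 iterations, the 50% feature subsample, and the k = 90% threshold come from the next slide, while everything else (dense arrays, the random seed, TF-IDF weighting) is an illustrative assumption.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def robust_attribution(candidate_texts, snippet, iters=100, frac=0.5, k=0.9, seed=0):
    """Attribute the snippet only if one candidate is most similar in at least
    a fraction k of the iterations; otherwise return None ("Don't Know")."""
    names = list(candidate_texts)
    vec = TfidfVectorizer(analyzer='char', ngram_range=(4, 4))
    known = vec.fit_transform([candidate_texts[n] for n in names]).toarray()
    snip = vec.transform([snippet]).toarray()
    rng = np.random.default_rng(seed)
    n_features = known.shape[1]
    wins = np.zeros(len(names))
    for _ in range(iters):
        cols = rng.choice(n_features, int(frac * n_features), replace=False)
        sims = cosine_similarity(snip[:, cols], known[:, cols])[0]
        wins[sims.argmax()] += 1
    best = int(wins.argmax())
    return names[best] if wins[best] >= k * iters else None
```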

  31. Results • 100 iterations, 50% of features per iteration • Training text = 2,000 words; snippet = 500 words • With 1,000 candidates: 93.2% precision at 39.2% recall (k = 90).

  32. Results How often do we attribute a snippet not written by any candidate to somebody? (k = 90) • 10,000 candidates – 2.5% • 5,000 candidates – 3.5% • 1,000 candidates – 5.5% (The fewer the candidates, the greater the chance that some poor schnook will consistently be most similar.)

  33. Comments • Can give an estimate of the probability that A is the author. Almost all of the variance in recall/precision is explained by: • Snippet length • Known-text length • Number of candidates • Score (the number of iterations in which A is most similar) • The method is language-independent.

  34. So Far… • Have covered cases of many authors (closed or open set). • Unmasking covers cases of open set, few authors, lots of text. • Only uncovered problem is the ultimate problem: open set, few authors, little text. • Can we convert this case to our problem by adding artificial candidates?
