Using the Complexity of the Distribution of Lexical Elements as a Feature in Authorship Attribution

Leanne Seaward, Diana Inkpen, Amiya Nayak
University of Ottawa, Ottawa, Canada
lspra072@uottawa.ca, diana@site.uottawa.ca, anayak@site.uottawa.ca


Presentation Transcript


  1. Using the Complexity of the Distribution of Lexical Elements as a Feature in Authorship Attribution Leanne Seaward, Diana Inkpen, Amiya Nayak University of Ottawa Ottawa, Canada lspra072@uottawa.ca, diana@site.uottawa.ca, anayak@site.uottawa.ca

  2. Overview • Introduction to Authorship Attribution • BOW representation and features • Measuring Distribution • Introduction to Kolmogorov Complexity Measures (KCM) • Using KCMs as features in Authorship Attribution • Blog Dataset • Results • Conclusion • Future Work

  3. Introduction • Most authorship attribution methods use normalized counts of stylistic features to fingerprint an author; with the exception of n-grams, structure is ignored. • We propose quantifying the distribution of tokens using Kolmogorov Complexity Measures and using this as a feature to increase accuracy.

  4. Authorship Attribution • Stylometry is concerned with analyzing the linguistic style of text to determine authorship or genre. • If one assumes that an author has a consistent style, then the author of a text can be identified by analyzing that style.

  5. Features in Authorship Attribution Normalized counts: • word/sentence counts – words per sentence, number of sentences • part-of-speech counts – number of noun phrases, number of verbs • vocabulary richness – number of common/unique words. This treats text as a bag of words (BOW).

  6. Analyzing Distribution/Structure • When humans read text, structure is important. Inconsistencies in tone and style point to plagiarism or an attempt to deceive the reader (e.g., spam). • Suppose we could quantify the distribution of two token types, say common words and all other words.

  7. Structure vs. Normalized Count [Figure: two sequences of common words vs. all other words with the same normalized ratio (2/3) but different distributions]
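The idea on slide 7 can be sketched as follows: map each token to 1 if it is a common word and 0 otherwise, so that a text becomes a binary string whose structure a normalized count cannot see. The word list and the two example sequences below are invented for illustration, not taken from the paper.

```python
# Hypothetical common-word list for illustration only.
COMMON = {"the", "a", "is", "of", "and", "to", "in", "it"}

def to_binary(tokens):
    """Encode a token sequence as a binary string: 1 = common word, 0 = other."""
    return "".join("1" if t.lower() in COMMON else "0" for t in tokens)

# Both sequences contain 8 common words out of 12 tokens (ratio 2/3),
# but the common words are distributed very differently.
clustered = "the a is of and to in it whale ship sea storm".split()
spread = "the whale a ship is sea of storm and to in it".split()

b1, b2 = to_binary(clustered), to_binary(spread)
print(b1)  # 111111110000
print(b2)  # 101010101111
print(b1.count("1") == b2.count("1"))  # True: identical normalized count
```

A bag-of-words count treats both sequences identically; only a measure of the binary string's structure can tell them apart.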

  8. Quantifying distribution • Generic machine learning algorithms use a set of features in order to learn a classification problem. • Features must be measurable or quantifiable, e.g., weather = ‘sunny’ or length = 55 inches. • Can we reduce a distribution to a meaningful measure which captures information about that distribution?

  9. Distribution Complexity [Figure: two binary sequences, one with a complex/random distribution, one in which some pattern is evident] Can we quantify the complexity of the distribution?

  10. Kolmogorov Complexity • Kolmogorov Complexity is used to describe the complexity or degree of randomness of a binary string. It was independently developed by Andrey N. Kolmogorov, Ray Solomonoff, and Gregory Chaitin in the late 1960s. • The Kolmogorov Complexity of a binary string is the length of the shortest program which can output the string on a universal Turing machine and then stop.

  11. Approximating Kolmogorov Complexity • The Kolmogorov Complexity of a binary string can be estimated with any compression algorithm. This gives an upper bound on the true complexity (the string may be less complex under some other compression algorithm). The result is known as the Kolmogorov Complexity Measure, or KCM.
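As a concrete illustration of this approximation, any off-the-shelf compressor can stand in for C. The sketch below uses zlib, which is an assumption for illustration (the paper does not specify zlib), and follows slide 13 in normalizing the compressed length by the original length:

```python
import random
import zlib

def kcm(data: bytes) -> float:
    """Approximate Kolmogorov complexity as compressed length over original
    length: an upper bound on the true complexity, per slide 11."""
    return len(zlib.compress(data)) / len(data)

# A highly regular string compresses far better than an irregular one.
regular = b"01" * 500
rng = random.Random(42)  # fixed seed so the example is reproducible
irregular = bytes(rng.getrandbits(8) for _ in range(1000))

print(kcm(regular) < kcm(irregular))  # True
```

The choice of compressor only changes the tightness of the upper bound; any lossless algorithm yields a usable KCM.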

  12. Computing the Kolmogorov Complexity Measure Kc(x) = |C(x)| / |x| (the q term is dropped in practice) • x is the string to be compressed • Kc(x) – KCM with respect to compression algorithm C • C(x) – compressed representation of x • q – number of bits needed to code the compression algorithm (ignored in practice)

  13. Run-length Compression 001111000110110110101110101111111111011100011011 → KR(x) = 22/48 = 0.458 000000111111111111111000111111111111111110000000 → KR(x) = 5/48 = 0.104
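Slide 13's run-length figures can be reproduced by reading each maximal run of identical symbols as one unit of the encoding, then dividing the run count by the string length. This reading is inferred from the slide's arithmetic (22 runs and 5 runs over 48 bits), not spelled out in the transcript:

```python
from itertools import groupby

def run_length_kcm(s: str) -> float:
    """Run-length KCM: number of runs of identical symbols divided by the
    length of the string, as in the 22/48 and 5/48 examples on slide 13."""
    runs = sum(1 for _ in groupby(s))
    return runs / len(s)

complex_str = "001111000110110110101110101111111111011100011011"
simple_str = "000000111111111111111000111111111111111110000000"

print(round(run_length_kcm(complex_str), 3))  # 0.458
print(round(run_length_kcm(simple_str), 3))   # 0.104
```

The more patterned string compresses to fewer runs, so its measure is lower, matching the slide's values exactly.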

  14. Blog Corpus • “Blog” is short for “weblog” (a combination of “web” and “log”), an internet diary. Generally blogs are posted frequently through a website which supports such postings. • Moshe Koppel’s Blog Corpus is available for free download. • Contains 681,288 blog posts from 19,320 authors or bloggers (www.blogger.com). • This experiment extracted 19 authors, each of whom had over 37 posts of length over 1,000 words.

  15. Data Set A

  16. Data Set B

  17. Features

  18. Features Cont…

  19. Distributions of various token types [Figure: binary distributions for six token types: common word, noun, verb, unique word, adverb, slang]

  20. ARFF
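Slide 20 presumably showed the ARFF file fed to Weka. A hedged sketch of what such a file could look like is below; the attribute names and values are hypothetical, not taken from the paper:

```
% Hypothetical excerpt of an ARFF file for this task; attribute
% names and data values are illustrative only.
@relation blog_authorship

@attribute words_per_sentence numeric
@attribute noun_count numeric
@attribute kcm_common_word numeric
@attribute kcm_slang numeric
@attribute author {author1, author2, author3}

@data
18.2, 0.31, 0.458, 0.104, author1
```

The KCM attributes sit alongside the normalized-count attributes, so any Weka classifier can consume both feature types without modification.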

  21. Support Vector Machine Results

  22. Confusion matrix, Data Set A (both authors male, age 24)

  23. Conclusion • Accuracy is increased by 5-10% • KCM is trivial to add to a model which already computes normalized counts • Can be used as a feature in any generic machine learning algorithm

  24. Future Work • Future Research will focus on using this method to increase classification accuracy in music classification and plagiarism detection.

  25. References
Brill, E. (1995). “Transformation-Based Error-Driven Learning and Natural Language Processing: A Case Study in Part-of-Speech Tagging.” Computational Linguistics, 21(4), pp. 543-565.
Internet Slang Dictionary, www.noslang.com
Joachims, T. (1998). “Text Categorization with Support Vector Machines: Learning with Many Relevant Features.” In Machine Learning: ECML-98, Tenth European Conference on Machine Learning, pp. 137-142.
Keselj, V., Peng, F., Cercone, N., and Thomas, C. (2003). “N-gram-based Author Profiles for Authorship Attribution.” In Proceedings of the Conference Pacific Association for Computational Linguistics, PACLING'03, Halifax, Nova Scotia, Canada, pp. 255-264.
Koppel, M., Schler, J., Argamon, S., and Messeri, E. (2006). “Authorship Attribution with Thousands of Candidate Authors” (poster). In Proceedings of the 29th Annual International ACM SIGIR Conference on Research & Development in Information Retrieval.
Li, M. and Vitanyi, P. (1997). An Introduction to Kolmogorov Complexity and Its Applications, 2nd Edition, Springer Verlag, Berlin, pp. 1-188.
Manning, C. and Schütze, H. (1999). Foundations of Statistical Natural Language Processing, pp. 23-35, MIT Press.
Schler, J., Koppel, M., Argamon, S., and Pennebaker, J. (2006). “Effects of Age and Gender on Blogging.” In Proceedings of the 2006 AAAI Spring Symposium on Computational Approaches for Analyzing Weblogs.
Seaward, L. and Saxton, L.V. (2007). “Filtering Spam Using Kolmogorov Complexity Measures.” In Proceedings of the 2007 IEEE International Symposium on Data Mining and Information Retrieval (DMIR-07), Niagara Falls, May 21-23, 2007.
Stamatatos, E., Fakotakis, N., and Kokkinakis, G. (2001). “Computer-Based Authorship Attribution without Lexical Measures.” Computers and the Humanities, 35(2), pp. 193-214, Kluwer.
Stamatatos, E., Fakotakis, N., and Kokkinakis, G. (2000). “Automatic Text Categorization in Terms of Genre and Author.” Computational Linguistics, 26(4), pp. 461-485.
Uzuner, O. and Katz, B. (2005). “A Comparative Study of Language Models for Book and Author Recognition.” In Proceedings of the 2nd International Joint Conference on Natural Language Processing (IJCNLP-05).
Weka Project, http://www.cs.waikato.ac.nz/ml/weka/
Wiener, J. (2006). NLP Parts of Speech Tagger, http://jcay.com/python/scripts-and-programs/development-tools/nlp-part-of-speech-tagger.html
Witten, I.H. and Frank, E. (2005). Data Mining: Practical Machine Learning Tools and Techniques, pp. 341-410, 2nd Edition, Morgan Kaufmann, San Francisco.
