
Algorithmic Information Theory, Similarity Metrics and Google


Presentation Transcript


  1. Varun Rao Algorithmic Information Theory, Similarity Metrics and Google

  2. Algorithmic Information Theory • Kolmogorov Complexity • Information Distance • Normalized Information Distance • Normalized Compression Distance • Normalized Google Distance

  3. Kolmogorov Complexity • The Kolmogorov complexity of a string x is the length, in bits, of the shortest computer program for the fixed reference computing system that produces x as output.1 • Compare the first million bits of Pi (a short program generates them) with the first million bits of your favourite song recording (presumably no description much shorter than the data itself exists)
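
In symbols, with U the fixed reference (universal) machine and |p| the length in bits of a program p, the definition on this slide reads:

```latex
K(x) \;=\; \min \{\, |p| \;:\; U(p) = x \,\}
```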

  4. Information Distance • Given two strings x & y, Information Distance is the length of the shortest binary program that computes output y from input x, and also output x from input y 1 • ID minorizes all other computable distance metrics
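
Formally (following the Bennett et al. paper in the sources), the information distance is, up to an additive logarithmic term, the larger of the two conditional Kolmogorov complexities:

```latex
\mathrm{ID}(x,y) \;=\; \max \{\, K(x \mid y),\; K(y \mid x) \,\}
```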

  5. Normalized Information Distance • Roughly speaking, two objects are deemed close if we can significantly “compress” one given the information in the other, the idea being that if two pieces are more similar, then we can more succinctly describe one given the other.2 • NID is characterized as the most informative metric • Sadly, completely and utterly uncomputable
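
The normalization divides by the larger of the two unconditional complexities; this is the form given in "The similarity metric" cited in the sources:

```latex
\mathrm{NID}(x,y) \;=\; \frac{\max \{\, K(x \mid y),\; K(y \mid x) \,\}}{\max \{\, K(x),\; K(y) \,\}}
```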

  6. Normalized Compression Distance • But we do have (lossless) compressors • If C is a compressor, and C(x) is the compressed length of a string x, then NCD(x, y) = (C(xy) - min{C(x), C(y)}) / max{C(x), C(y)} • NCD gets closer to NID as the compressor approaches the ultimate compression, Kolmogorov complexity

  7. Normalized Compression Distance II • Basic process to compute NCD for x & y • Use the compressor to compute C(x) and C(y) • Concatenate x and y and compute C(xy) • Feed the resulting NCD values, as a similarity metric, into relatively simple clustering methods to group strings (a minimal code sketch follows below)
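
A minimal sketch of that process in Python, using bz2 as the compressor C; any lossless compressor could be swapped in, and a real application would read file contents rather than toy strings:

```python
import bz2

def C(data: bytes) -> int:
    """Compressed length of data in bytes, using bz2 as the compressor."""
    return len(bz2.compress(data))

def ncd(x: bytes, y: bytes) -> float:
    """Normalized Compression Distance between two byte strings."""
    cx, cy, cxy = C(x), C(y), C(x + y)
    return (cxy - min(cx, cy)) / max(cx, cy)

# A string should come out closer to a near-copy of itself than to unrelated text.
a = b"the quick brown fox jumps over the lazy dog " * 50
b = b"the quick brown fox jumps over the lazy cat " * 50
c = b"colourless green ideas sleep furiously tonight " * 50
print(ncd(a, b))  # shares most of its structure with a, so comparatively small
print(ncd(a, c))  # shares little structure with a, so larger
```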

  8. Normalized Compression Distance III • Using Bzip2 on various types of files

  9. Normalized Compression Distance IV • The evolutionary tree built from complete mammalian mtDNA sequences of 24 species2

  10. Normalized Compression Distance V • Clustering of Native-American, Native-African, and Native-European languages (translations of The Universal Declaration of Human Rights)2

  11. Normalized Compression Distance VI • Optical character recognition using NCD: with more elaborate clustering techniques it achieved an 85% success rate, compared with the 90%-95% of industry-standard OCR systems2

  12. What about semantic meaning? • Or what about how different a horse is from a car, or a hawk from a handsaw for that matter? • Compressors are semantically indifferent to their data • To bring in semantic relationships, turn to Google

  13. Google • A massive database containing lots of information about semantic relationships • The quick brown ___ ? • Use simple page counts as indicators of closeness • Use the relative number of hits as a measure of probability to create a Google distribution, i.e. p(x) = (hits for a search on x) / (total number of pages indexed), written out below
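
Following the slide's definition, and extending it to pairs of terms as the cited Google paper does, write f(x) for the number of pages containing the term x, f(x, y) for the number containing both x and y, and N for the total number of pages indexed:

```latex
p(x) = \frac{f(x)}{N}, \qquad p(x,y) = \frac{f(x,y)}{N}
```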

  14. Google II • Given that we can construct a distribution, we can (conceptually) construct a Google Shannon-Fano code, because after some normalization the distribution satisfies the Kraft inequality
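
Concretely, the cited paper assigns each search term the Shannon-Fano code length it would receive under this distribution; these "Google code lengths" then stand in for the Kolmogorov complexities of the earlier slides:

```latex
G(x) = \log \frac{1}{p(x)}, \qquad G(x,y) = \log \frac{1}{p(x,y)}
```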

  15. Normalized Google Distance • After all that hand waving, we can create a distance-like metric, NGD, that has all kinds of nice properties (see the formula below)
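
Written out in terms of raw page counts, as in the Cilibrasi-Vitányi paper cited in the sources (f(x), f(x, y) and N as above):

```latex
\mathrm{NGD}(x,y) \;=\; \frac{\max \{\, \log f(x),\; \log f(y) \,\} \;-\; \log f(x,y)}
                             {\log N \;-\; \min \{\, \log f(x),\; \log f(y) \,\}}
```

A minimal sketch of the same computation in Python; the page counts passed in the example are placeholders for illustration, not real Google figures:

```python
from math import log

def ngd(fx: float, fy: float, fxy: float, n: float) -> float:
    """Normalized Google Distance from raw page counts.

    fx, fy -- pages containing each term
    fxy    -- pages containing both terms
    n      -- total number of pages indexed (the normalizer)
    """
    lfx, lfy, lfxy = log(fx), log(fy), log(fxy)
    return (max(lfx, lfy) - lfxy) / (log(n) - min(lfx, lfy))

# Hypothetical counts: two terms that co-occur on many pages come out "close".
print(ngd(fx=9_000_000, fy=8_000_000, fxy=6_000_000, n=8_000_000_000))
```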

  16. Applying NGD • NGD as applied to 15 painting names by 3 Dutch artists

  17. Applying NGD II • Using SVM to learn the concept of primes2

  18. Applying NGD III • Using SVM to learn “electrical” terms 2

  19. Applying NGD IV • Using SVM to learn “religious” terms 2

  20. Sources • R. Cilibrasi and P. Vitányi, "Automatic Meaning Discovery Using Google". http://www.arxiv.org/pdf/cs.CL/0412098 • R. Cilibrasi and P. Vitányi, "Clustering by Compression", submitted to IEEE Trans. Information Theory. http://www.arxiv.org/abs/cs.CV/0312044 • C.H. Bennett, P. Gács, M. Li, P.M.B. Vitányi, W. Zurek, "Information Distance", IEEE Trans. Information Theory, 44:4 (1998), 1407-1423. • M. Li, X. Chen, X. Li, B. Ma, P. Vitányi, "The Similarity Metric", IEEE Trans. Information Theory, 50:12 (2004), 3250-3264. • "Algorithmic Information Theory", Wikipedia, accessed 25 January 2005. http://en.wikipedia.org/wiki/Algorithmic_information_theory • Greg Harfst, "Kolmogorov Complexity", accessed 25 January 2005. http://nms.lcs.mit.edu/~gch/kolmogorov.html
