Automatic Acquisition of Synonyms Using the Web as a Corpus

Svetlin Nakov, Sofia University "St. Kliment Ohridski"

nakov@fmi-uni-sofia.bg

3rd Annual South East European Doctoral Student Conference (DSC2008): Infusing Knowledge and Research in South East Europe

DSC 2008 – 26-27 June 2008, Thessaloniki, Greece

Introduction
  • We want to automatically extract all pairs of synonyms within a given text
  • Our goal:
    • Design an algorithm that can distinguish between synonyms and non-synonyms
  • Our approach:
    • Measure semantic similarity using the Web as a corpus
    • Synonyms are expected to have higher semantic similarity than non-synonyms

The Paper in One Slide
  • Measuring semantic similarity
    • Analyze the words' local contexts
    • Use the Web as a corpus
    • Similar contexts → similar words
    • TF.IDF weighting & reverse context lookup
  • Evaluation
    • 94 words (Russian fine arts terminology)
      • 50 synonym pairs to be found
    • 11pt average precision: 63.16%

Contextual Web Similarity
  • What is local context?
    • A few words before and after the target word
  • The words in the local context of a given word are semantically related to it
  • Need to exclude stop words: prepositions, pronouns, conjunctions, etc.
    • Stop words appear in all contexts
  • A sufficiently large corpus is needed

Same day delivery of fresh flowers, roses, and unique gift baskets from our online boutique. Flower delivery online by local florists for birthday flowers.
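As a sketch, extracting the local context of a target word from a snippet like the one above could look as follows (the stop-word list and the window size of 3 are illustrative assumptions, not taken from the paper):

```python
# Illustrative sketch of local-context extraction from a search snippet.
# The stop-word list and window size below are assumptions for the example.
STOP_WORDS = {"of", "and", "from", "our", "by", "for", "the", "a", "in"}

def local_context(snippet, target, window=3):
    """Collect words within `window` positions of `target`, skipping stop words."""
    words = [w.strip(".,").lower() for w in snippet.split()]
    context = []
    for i, w in enumerate(words):
        if w == target:
            lo, hi = max(0, i - window), min(len(words), i + window + 1)
            context += [c for c in words[lo:hi]
                        if c != target and c not in STOP_WORDS]
    return context

snippet = ("Same day delivery of fresh flowers, roses, and unique "
           "gift baskets from our online boutique.")
print(local_context(snippet, "flowers"))
# → ['delivery', 'fresh', 'roses', 'unique']
```

Aggregating such context words over many snippets yields the frequency profile used in the later slides.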

Contextual Web Similarity
  • Web as a corpus
    • The Web can be used as a corpus to extract the local context of a given word
      • The Web is the largest possible corpus
      • Contains large corpora in any language
    • Searching for a word in Google returns up to 1 000 text snippets
      • The target word appears together with its local context: a few words before and after it
      • Target language can be specified

Contextual Web Similarity
  • Web as a corpus
    • Example: Google query for "flower"

Contextual Web Similarity
  • Measuring semantic similarity
    • Given two words, their local contexts are extracted from the Web
      • A set of words and their frequencies
    • Semantic similarity is measured as similarity between these local contexts
      • Local contexts are represented as frequency vectors over a given set of words
      • The cosine between the frequency vectors in Euclidean space is calculated
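The cosine step can be sketched over word-frequency dictionaries as follows (the frequency values are hypothetical, not the paper's actual data):

```python
import math

def cosine(v1, v2):
    """Cosine similarity between two context-frequency vectors (word -> count)."""
    dot = sum(c * v2.get(w, 0) for w, c in v1.items())
    n1 = math.sqrt(sum(c * c for c in v1.values()))
    n2 = math.sqrt(sum(c * c for c in v2.values()))
    return dot / (n1 * n2) if n1 and n2 else 0.0

# Hypothetical context frequencies for two unrelated words:
flower = {"delivery": 11, "fresh": 9, "rose": 7, "gift": 3}
computer = {"software": 12, "hardware": 8, "delivery": 2}
print(round(cosine(flower, computer), 3))  # → 0.094 — low similarity, as expected
```

Synonyms should share much more of their context vocabulary, giving a cosine close to 1.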

Contextual Web Similarity
  • Example of context word frequencies

word: flower

word: computer

Contextual Web Similarity
  • Example of frequency vectors
  • Similarity = cosine(v1, v2)

v1: flower

v2: computer

TF.IDF Weighting
  • TF.IDF (term frequency times inverse document frequency)
    • Statistical measure in information retrieval
    • Shows how important a word is for a given document within a set of documents
    • Increases proportionally to the number of word's occurrences in the document
    • Decreases proportionally to the total number of documents containing the word
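The slides do not specify which TF.IDF variant is used, so the following is a sketch of the common tf × log(N/df) form (the toy corpus is illustrative):

```python
import math

def tf_idf(term, doc, corpus):
    """TF.IDF of `term` in `doc`; `corpus` is a list of tokenized documents."""
    tf = doc.count(term) / len(doc)                  # term frequency in the document
    df = sum(1 for d in corpus if term in d)         # number of documents with the term
    idf = math.log(len(corpus) / df) if df else 0.0  # inverse document frequency
    return tf * idf

corpus = [["flower", "rose", "flower"],
          ["computer", "software"],
          ["flower", "computer"]]
# "rose" occurs in fewer documents than "flower", so it is weighted higher:
print(tf_idf("rose", corpus[0], corpus) > tf_idf("flower", corpus[0], corpus))  # → True
```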

Reverse Context Lookup
  • Local contexts extracted from the Web can contain arbitrary "parasite" words like "online", "home", "search", "click", etc.
    • Internet terms appear in any Web page
    • Such words are not likely to be associated with the target word
  • Example (for the word flowers)
    • "send flowers online", "flowers here", "order flowers here"
  • Will the word "flowers" appear in the local context of "send", "online" and "here"?

Reverse Context Lookup
  • If two words are semantically related, then
    • Both of them should appear in the local contexts of each other
  • Let #(x, y) = the number of occurrences of x in the local context of y
  • For any word w and a word wc from its local context, we define their strength of semantic association p(w, wc) as follows:
    • p(w, wc) = min{ #(w, wc), #(wc, w) }
  • We use p(w, wc) as vector coordinates
  • We introduce a minimal occurrence threshold (e.g. 5) to filter words appearing just by chance
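A minimal sketch of this reverse-context check (the count table below is hypothetical):

```python
def association(counts, w, wc, threshold=5):
    """p(w, wc) = min{ #(w, wc), #(wc, w) }, zeroed below the occurrence threshold.

    counts[(x, y)] holds #(x, y): occurrences of x in the local context of y.
    """
    p = min(counts.get((w, wc), 0), counts.get((wc, w), 0))
    return p if p >= threshold else 0

# Hypothetical counts: "send" is frequent in the context of "flowers", but
# "flowers" almost never appears in the context of "send", so the asymmetric
# pair is filtered out, while the mutual pair "flower"/"rose" survives.
counts = {("send", "flowers"): 40, ("flowers", "send"): 1,
          ("rose", "flower"): 12, ("flower", "rose"): 9}
print(association(counts, "send", "flowers"))  # → 0 (parasite word filtered)
print(association(counts, "flower", "rose"))   # → 9
```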

Data Set
  • We use a list of 94 Russian words:
    • Terms extracted from texts in the subject of fine arts
    • Limited to nouns only
  • The data set:
  • There are 50 synonym pairs among these words
    • We expect our algorithms to find them

Experiments
  • We tested a few modifications of our contextual Web similarity algorithm
    • Basic algorithm (without modifications)
    • TF.IDF weighting
    • Reverse context lookup with different frequency thresholds

Experiments
  • RAND – random ordering of all the pairs
  • SIM – the basic algorithm for extraction of semantic similarity from the Web
    • Context size of 3 words
    • Without analyzing the reverse context
    • With lemmatization
  • SIM+TFIDF – modification of the SIM algorithm with TF.IDF weighting
  • REV2, REV3, REV4, REV5, REV6, REV7 – the SIM algorithm + “reverse context lookup” with frequency thresholds of: 2, 3, 4, 5, 6 and 7

Resources Used
  • We used the following resources:
    • Google Web search engine: extracted the first 1 000 results for 82 645 Russian words
    • Russian lemma dictionary: 1 500 000 wordforms and 100 000 lemmata
    • A list of 507 Russian stop words

Evaluation
  • Our algorithms arrange all pairs of words according to their semantic similarity
    • We expect the 50 synonym pairs to be at the top of the result list
  • We count how many synonyms are found in the top N results (e.g. top 5, top 10, etc.)
  • We measure precision and recall
  • We measure 11pt average precision to evaluate the results
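The 11pt measure is the standard interpolated average precision over the recall levels 0.0, 0.1, …, 1.0; a sketch with a toy ranking (the word pairs are illustrative):

```python
def avg_precision_11pt(ranked_pairs, synonym_pairs):
    """11-point interpolated average precision of a similarity-ranked list."""
    total = len(synonym_pairs)
    hits, points = 0, []
    for rank, pair in enumerate(ranked_pairs, start=1):
        if pair in synonym_pairs:
            hits += 1
            points.append((hits / total, hits / rank))  # (recall, precision)
    interp = []
    for level in range(11):                             # recall 0.0, 0.1, ..., 1.0
        r = level / 10
        ps = [p for rec, p in points if rec >= r]
        interp.append(max(ps) if ps else 0.0)           # interpolated precision
    return sum(interp) / 11

# Toy example: 2 synonym pairs among 3 ranked candidates
ranked = [("oil", "petroleum"), ("red", "blue"), ("car", "automobile")]
gold = {("oil", "petroleum"), ("car", "automobile")}
print(round(avg_precision_11pt(ranked, gold), 4))  # → 0.8485
```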

SIM Algorithm – Results

Precision and recall obtained by the SIM algorithm

Comparison of the Algorithms

Comparison of the algorithms (number of synonyms in the top results)


Comparison of the Algorithms (11pt Average Precision)

[Bar chart comparing RAND, SIM, SIM+TFIDF and REV2 … REV7 by 11pt average precision; labeled values: 1.15%, 58.98% and 63.16% (the best result)]

Results (Precision-Recall Graph)

Comparing the precision-recall graphs of the evaluated algorithms

Discussion
  • Our approach is original because it:
    • Measures semantic similarity automatically
    • Uses the Web as a corpus
      • Does not rely on any preexisting corpora
      • Does not require semantic resources like WordNet and EuroWordNet
    • Works for any language
      • Tested for Bulgarian and Russian
    • Uses reverse-context lookup and TF.IDF
      • Significant improvement in quality

Discussion
  • Good accuracy, but still far from 100%
  • Known problems of the proposed algorithms:
    • Semantically related words are not always synonyms
      • red – blue
      • wood – pine
      • apple – computer
    • Similar contexts do not always mean similar words (the distributional hypothesis does not always hold)
    • The Web as a corpus introduces noise
    • Google returns only the first 1 000 results

Discussion
  • Known problems of the proposed algorithms:
    • Google ranks news portals, travel agencies and retail sites higher than books, articles and forum messages
    • The local context always contains noise
    • The algorithm works with single words and does not capture phrases

Conclusion and Future Work
  • Conclusion
    • Our algorithms can distinguish between synonyms and non-synonyms
    • Accuracy should be improved
  • Future Work
    • Additional techniques to distinguish between synonyms and semantically related words
    • Improve the semantic similarity measure algorithm

References
  • Hearst M. (1991). "Noun Homograph Disambiguation Using Local Context in Large Text Corpora". In Proceedings of the 7th Annual Conference of the University of Waterloo Centre for the New OED and Text Research, Oxford, England, pages 1-22.
  • Nakov P., Nakov S., Paskaleva E. (2007a). “Improved Word Alignments Using the Web as a Corpus”. In Proceedings of RANLP'2007, pages 400-405, Borovetz, Bulgaria.
  • Nakov S., Nakov P., Paskaleva E. (2007b). “Cognate or False Friend? Ask the Web!”. In Proceedings of the Workshop on Acquisition and Management of Multilingual Lexicons, held in conjunction with RANLP'2007, pages 55-62, Borovetz, Bulgaria.
  • Sparck-Jones K. (1972). “A Statistical Interpretation of Term Specificity and its Application in Retrieval”. Journal of Documentation, volume 28, pages 11-21.
  • Salton G., McGill M. (1983), Introduction to Modern Information Retrieval, McGraw-Hill, New York.
  • Paskaleva E. (2002). “Processing Bulgarian and Russian Resources in Unified Format”. In Proceedings of the 8th International Scientific Symposium MAPRIAL, Veliko Tarnovo, Bulgaria, pages 185-194.
  • Harris, Z. (1954). "Distributional structure”. Word, 10, pages 146-162.
  • Lin D. (1998). "Automatic Retrieval and Clustering of Similar Words". In Proceedings of COLING-ACL'98, Montreal, Canada, pages 768-774.
  • Curran J., Moens M. (2002). "Improvements in Automatic Thesaurus Extraction". In Proceedings of the Workshop on Unsupervised Lexical Acquisition, SIGLEX 2002, Philadelphia, USA, pages 59-67.

References
  • Plas L., Tiedemann J. (2006). "Finding Synonyms Using Automatic Word Alignment and Measures of Distributional Similarity". In Proceedings of COLING/ACL 2006, Sydney, Australia.
  • Och F., Ney H. (2003). "A Systematic Comparison of Various Statistical Alignment Models". Computational Linguistics, 29 (1), 2003.
  • Hagiwara M., Ogawa Y., Toyama K. (2007). "Effectiveness of Indirect Dependency for Automatic Synonym Acquisition". In Proceedings of the CoSMo 2007 Workshop, held in conjunction with CONTEXT 2007, Roskilde, Denmark.
  • Kilgarriff A., Grefenstette G. (2003). "Introduction to the Special Issue on the Web as Corpus", Computational Linguistics, 29(3):333–347.
  • Inkpen D. (2007). "Near-synonym Choice in an Intelligent Thesaurus". In Proceedings of the NAACL-HLT, New York, USA.
  • Chen H., Lin M., Wei Y. (2006). "Novel Association Measures Using Web Search with Double Checking". In Proceedings of the COLING/ACL 2006, Sydney, Australia, pages 1009-1016.
  • Sahami M., Heilman T. (2006). "A Web-based Kernel Function for Measuring the Similarity of Short Text Snippets". In Proceedings of 15th International World Wide Web Conference, Edinburgh, Scotland.
  • Bollegala D., Matsuo Y., Ishizuka M. (2007). "Measuring Semantic Similarity between Words Using Web Search Engines", In Proceedings of the 16th International World Wide Web Conference (WWW2007), Banff, Canada, pages 757-766.
  • Sanchez D., Moreno A. (2005), "Automatic Discovery of Synonyms and Lexicalizations from the Web". Artificial Intelligence Research and Development, Volume 131, 2005.

Questions?

Automatic Acquisition of Synonyms Using the Web as a Corpus
