
What is Text Mining? What are the application areas?

What are the challenges?

What are the tools?

Prelude
  • Amount of information is growing exponentially

  • Majority of Information is stored in text documents:

    • journals, web pages, emails, reports, memos, social networks...

  • Extracting useful knowledge from all this information is vital


  • Need to automatically analyze and summarize text


Text Mining: Briefly



What is Text Mining?

The discovery by computer of genuinely new, previously unknown information, by automatically extracting information from a usually large amount of different un/semi-structured textual resources.


What are the important implications of this definition?

  • Genuinely new information.

  • Not merely finding patterns

  • Free naturally occurring text.

  • Not DB records, HTML, XML, …


What is Text Mining?

  • seeks to extract useful information from a collection of documents.

  • similar to data mining, which deals with structured numeric data, typically in a data warehouse

    • but here the data sources are unstructured or semi-structured documents.


What Is Text Mining?

“The objective of Text Mining is to exploit information contained in textual documents in various ways, including …discovery of patterns and trends in data, associations among entities, predictive rules, etc.”

“Another way to view text data mining is as a process of exploratory data analysis that leads to heretofore unknown information, or to answers for questions for which the answer is not currently known.”


What is Text Mining?

  • “…finding interesting regularities in large textual datasets…”

  • “…finding semantic and abstract information from the surface form of textual data…”

  • The process of deriving high quality information from text

Is this easy?


Text Mining is difficult

  • Abstract concepts are difficult to represent

  • “Countless” combinations of subtle, abstract relationships among concepts

  • Many ways to represent similar concepts

    • E.g. space ship, flying saucer, UFO

  • Concepts are difficult to visualize

  • High dimensionality

  • Tens or hundreds of thousands of features


Text Mining is difficult

  • Automatic learning methods are typically supervised

    • annotating training data is a time-consuming and expensive task.

  • Develop better unsupervised algorithms?

  • Use a small set of labeled examples more efficiently?


Text Mining is Difficult

  • Data collection is “free text”

    • Data is not well-organized

      • Semi-structured or unstructured

    • Natural language text contains ambiguities on many levels

      • Lexical, syntactic, semantic, and pragmatic

    • Learning techniques for processing text typically need annotated training examples

      • Consider bootstrapping techniques


Text Mining is Easy

  • Highly redundant data

    • …most of the methods count on this property

  • Just about any simple algorithm can get “good” results for simple tasks:

    • Pull out “important” phrases

    • Find “meaningfully” related words

    • Create some sort of summary from documents


How is Text Mining done?

  • structure the input text

    • pre-processing & representation

      • identification / extraction of representative features

  • derive patterns within the structured data

    • analysis

  • evaluation and interpretation of the output

    • evaluation

  • Typical tasks:

    • Categorization, clustering, concept/entity extraction, production of taxonomies, sentiment analysis, document summarization, and entity relation modeling


  • TM exploits techniques / methodologies from data mining, machine learning, information retrieval, corpus-based computational linguistics

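As a rough illustration of this pipeline, the sketch below structures a tiny made-up corpus into TF-IDF vectors (pre-processing & representation), derives patterns with k-means clustering (analysis), and inspects the top terms of each cluster (evaluation). It assumes scikit-learn is available; the documents, cluster count and other parameters are illustrative only.

```python
# Minimal text-mining pipeline sketch (assumes scikit-learn is installed).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

# Input: a tiny toy corpus standing in for journals, web pages, emails, ...
docs = [
    "the casino company offered to acquire the resort shares",
    "shareholders vote on the acquisition of the casino resort",
    "a new vaccine trial reports promising results against the virus",
    "public health officials monitor the spread of the new virus",
]

# 1) Structure the input text: bag-of-words / TF-IDF representation.
vectorizer = TfidfVectorizer(stop_words="english")
X = vectorizer.fit_transform(docs)                 # sparse document-term matrix

# 2) Derive patterns within the structured data: cluster the documents.
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0)
labels = kmeans.fit_predict(X)

# 3) Evaluate / interpret: look at the highest-weighted terms of each cluster centroid.
terms = vectorizer.get_feature_names_out()
for c in range(2):
    top = kmeans.cluster_centers_[c].argsort()[::-1][:3]
    members = [i for i, label in enumerate(labels) if label == c]
    print(f"cluster {c}: docs {members}, top terms: {[terms[i] for i in top]}")
```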


Sample Application Areas of Text Mining

  • Search engine technology: Find relevant documents among a vast repository of documents based on a word or phrase, which may include misspellings.

  • Analysis of survey data: Analyze information in open-ended survey questions.

  • Spam identification: The title line and contents of e-mails are analyzed to identify which are spam and which are legitimate.

  • Surveillance: Monitor telephone, internet and other communications for evidence of terrorism.

  • Call center routing: Route calls to help desks and technical support lines based on verbal answers to questions.

  • Public health early warning: The Global Public Health Intelligence Network (GPHIN) monitors global newspaper articles and other media to provide an early warning of potential public health threats, including disease epidemics such as SARS and chemical or radioactive threats.

  • Alias identification: The aliases of health care and other providers are analyzed to detect over-billing and fraud. For instance, a bill may have been submitted by John Smith, J. Smith and Smith, John. The same approaches may be used to identify abuse by claimants, where given claimants submit numerous insurance claims under different aliases.


Levels of text representations


Character level

  • Character level representation of a text consists of sequences of characters…

    • …a document is represented by a frequency distribution of sequences

    • Usually we deal with contiguous strings…

    • …each character sequence of length 1, 2, 3, … represents a feature, with its frequency

  • Lexical
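A minimal sketch of this character-level representation in plain Python: every contiguous character sequence of length 1 to 3 becomes a feature, weighted by its frequency. The example string and the maximum length are illustrative only.

```python
# Character n-gram frequency features (a sketch of the character-level representation).
from collections import Counter

def char_ngrams(text, n_max=3):
    """Return the frequency distribution of all contiguous character sequences of length 1..n_max."""
    text = text.lower()
    counts = Counter()
    for n in range(1, n_max + 1):
        counts.update(text[i:i + n] for i in range(len(text) - n + 1))
    return counts

features = char_ngrams("text mining")
print(features.most_common(5))   # the most frequent character sequences and their counts
```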


Good and bad sides

  • Representation has several important strengths:

    • …it is very robust since it avoids language morphology

      • (useful for e.g. language identification)

    • …it captures simple patterns on character level

      • (useful for e.g. spam detection, copy detection)

    • …because of redundancy in text data it could be used for many analytic tasks

      • (learning, clustering, search)

      • It is used as a basis for “string kernels” in combination with SVM for capturing complex character sequence patterns

  • …for deeper semantic tasks, the representation is too weak

  • Lexical


Word level

  • The most common representation of text used for many techniques

    • …there are many tokenization software packages which split text into words

  • Important to know:

    • A word is a well-defined unit in Western languages – e.g. Chinese has a different notion of the semantic unit

  • Lexical
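A minimal word-level sketch: a naive regular-expression tokenizer plus a word-frequency count. Real tokenization packages handle punctuation, abbreviations and hyphenation far more carefully, and this whitespace-oriented approach does not carry over to languages such as Chinese.

```python
# Naive word-level tokenization sketch (real tokenizers are much more careful).
import re
from collections import Counter

def tokenize(text):
    """Split text into lowercase word tokens with a simple regular expression."""
    return re.findall(r"[a-z0-9']+", text.lower())

tokens = tokenize("Text mining seeks to extract useful information from text documents.")
print(tokens)
print(Counter(tokens).most_common(3))   # word-frequency view of the document
```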


Words Properties

  • Relations among word surface forms and their senses:

    • Homonymy: same form, but different meaning (e.g. bank: river bank, financial institution)

    • Polysemy: same form, related meaning (e.g. bank: blood bank, financial institution)

    • Synonymy: different form, same meaning (e.g. singer, vocalist)

    • Hyponymy: one word denotes a subclass of another (e.g. breakfast, meal)

  • Word frequencies in texts follow a power-law distribution:

    • …a small number of very frequent words

    • …a large number of low-frequency words

  • Lexical

Stop words


  • Stop words are words that, from a non-linguistic point of view, do not carry information

    • …they have a mainly functional role

    • …usually we remove them to help the methods perform better

  • Stop words are language dependent – examples:


    • Dutch: de, en, van, ik, te, dat, die, in, een, hij, het, niet, zijn, is, was, op, aan, met, als, voor, had, er, maar, om, hem, dan, zou, of, wat, mijn, men, dit, zo, ...


  • Lexical
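A sketch of stop-word removal; the tiny English stop list below is illustrative only, and real systems use much larger language-specific lists.

```python
# Stop-word removal sketch; STOP_WORDS here is a tiny illustrative subset of an English list.
STOP_WORDS = {"the", "a", "an", "and", "or", "of", "to", "in", "is", "it", "that", "for", "on"}

def remove_stop_words(tokens):
    """Drop tokens that play a mainly functional role and carry little content."""
    return [t for t in tokens if t not in STOP_WORDS]

tokens = "the amount of information in the text documents is growing".split()
print(remove_stop_words(tokens))   # ['amount', 'information', 'text', 'documents', 'growing']
```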


Word character level normalization

  • Hassle which we usually avoid:

    • Since we have plenty of character encodings in use, it is often nontrivial to identify a word and write it in a unique form

    • …e.g. in Unicode the same word can be written in many ways – canonicalization of words:

  • Lexical
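As a small illustration of this canonicalization problem, the Python standard-library unicodedata module can normalize two different Unicode spellings of the same word to one canonical form (NFC is used here; the appropriate normal form depends on the application).

```python
# Unicode canonicalization sketch: two different encodings of the same word.
import unicodedata

composed   = "caf\u00e9"     # 'café' written with one precomposed character
decomposed = "cafe\u0301"    # 'café' written as 'e' plus a combining acute accent

print(composed == decomposed)                            # False: different code point sequences
print(unicodedata.normalize("NFC", composed) ==
      unicodedata.normalize("NFC", decomposed))          # True after normalization
```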


Stemming (1/2)

  • Different forms of the same word are usually problematic for text data analysis, because they have different spelling and similar meaning (e.g. learns, learned, learning,…)

  • Stemming is a process of transforming a word into its stem (normalized form)

    • …stemming provides an inexpensive mechanism to merge such variants

  • Lexical


Stemming (2/2)

  • For English, the Porter stemmer is mostly used: http://www.tartarus.org/~martin/PorterStemmer/

  • Example cascade rules used in the English Porter stemmer:

    • ATIONAL -> ATE (relational -> relate)

    • TIONAL -> TION (conditional -> condition)

    • ENCI -> ENCE (valenci -> valence)

    • ANCI -> ANCE (hesitanci -> hesitance)

    • IZER -> IZE (digitizer -> digitize)

    • ABLI -> ABLE (conformabli -> conformable)

    • ALLI -> AL (radicalli -> radical)

    • ENTLI -> ENT (differentli -> different)

    • ELI -> E (vileli -> vile)

    • OUSLI -> OUS (analogousli -> analogous)

  • Lexical
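The sketch below applies just the cascade rules listed above as plain suffix rewrites. The full Porter stemmer also checks a "measure" condition on the remaining stem and has several further rule steps, so this is only an illustration of the idea.

```python
# Suffix-rewrite sketch of the Porter cascade rules listed above
# (the full algorithm also checks a "measure" condition before applying a rule).
RULES = [
    ("ational", "ate"), ("tional", "tion"), ("enci", "ence"), ("anci", "ance"),
    ("izer", "ize"), ("abli", "able"), ("alli", "al"), ("entli", "ent"),
    ("eli", "e"), ("ousli", "ous"),
]

def apply_cascade(word):
    """Apply the first matching suffix rule, as in one step of the Porter cascade."""
    for suffix, replacement in RULES:
        if word.endswith(suffix):
            return word[: -len(suffix)] + replacement
    return word

for w in ["relational", "conditional", "hesitanci", "analogousli"]:
    print(w, "->", apply_cascade(w))   # relational -> relate, conditional -> condition, ...
```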


Phrase level

  • Instead of having just single words we can deal with phrases

  • We use two types of phrases:

    • Phrases as frequent contiguous word sequences

    • Phrases as frequent non-contiguous word sequences

    • …both types of phrases can be identified by a simple dynamic programming algorithm

  • The main effect of using phrases is to more precisely identify sense

  • Lexical
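A sketch of the contiguous case: frequent two-word phrases (bigrams) found by simple counting. The non-contiguous case and the dynamic-programming formulation are not shown, and the frequency threshold and example sentence are illustrative.

```python
# Frequent contiguous two-word phrases (bigrams) found by simple counting.
from collections import Counter

def frequent_bigrams(tokens, min_count=2):
    """Count contiguous word pairs and keep those occurring at least min_count times."""
    pairs = Counter(zip(tokens, tokens[1:]))
    return [(" ".join(pair), count) for pair, count in pairs.most_common() if count >= min_count]

tokens = ("text mining extracts information ; text mining finds patterns "
          "in large text collections").split()
print(frequent_bigrams(tokens))   # [('text mining', 2)]
```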


Google n-gram corpus

  • In September 2006 Google announced the availability of its n-gram corpus:

    • http://googleresearch.blogspot.com/2006/08/all-our-n-gram-are-belong-to-you.html#links

    • Some statistics of the corpus:

      • File sizes: approx. 24 GB compressed (gzip'ed) text files

      • Number of tokens: 1,024,908,267,229

      • Number of sentences: 95,119,665,584

      • Number of unigrams: 13,588,391

      • Number of bigrams: 314,843,401

      • Number of trigrams: 977,069,902

      • Number of fourgrams: 1,313,818,354

      • Number of fivegrams: 1,176,470,663

  • Lexical


Example: Google n-grams

3-grams with counts (excerpt):

ceramics collectables collectibles 55; ceramics collectables fine 130; ceramics collected by 52; ceramics collectible pottery 50; ceramics collectibles cooking 45; ceramics collection , 144; ceramics collection . 247; ceramics collection </S> 120; ceramics collection and 43; ceramics collection at 52; ceramics collection is 68; ceramics collection of 76; ceramics collection | 59; ceramics collections , 66; ceramics collections . 60; ceramics combined with 46; ceramics come from 69; ceramics comes from 660; ceramics community , 109; ceramics community . 212; ceramics community for 61; ceramics companies . 53; ceramics companies consultants 173; ceramics company ! 4432; ceramics company , 133; ceramics company . 92; ceramics company </S> 41; ceramics company facing 145; ceramics company in 181; ceramics company started 137; ceramics company that 87; ceramics component ( 76; ceramics composed of 85

4-grams with counts (excerpt):

serve as the incoming 92; serve as the incubator 99; serve as the independent 794; serve as the index 223; serve as the indication 72; serve as the indicator 120; serve as the indicators 45; serve as the indispensable 111; serve as the indispensible 40; serve as the individual 234; serve as the industrial 52; serve as the industry 607; serve as the info 42; serve as the informal 102; serve as the information 838; serve as the informational 41; serve as the infrastructure 500; serve as the initial 5331; serve as the initiating 125; serve as the initiation 63; serve as the initiator 81; serve as the injector 56; serve as the inlet 41; serve as the inner 87; serve as the input 1323; serve as the inputs 189; serve as the insertion 49; serve as the insourced 67; serve as the inspection 43; serve as the inspector 66; serve as the inspiration 1390; serve as the installation 136; serve as the institute 187

  • Lexical

N-gram here concerns words; in other contexts it is used for characters!


Part-of-Speech level

  • By introducing part-of-speech tags we introduce word types, enabling us to differentiate words by their function

    • For text-analysis part-of-speech information is used mainly for “information extraction” where we are interested in e.g. named entities which are “noun phrases”

    • Another possible use is reduction of the vocabulary (features)

      • …it is known that nouns carry most of the information in text documents

  • Part-of-Speech taggers are usually learned with an HMM algorithm on manually tagged data

  • Lexical
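A part-of-speech tagging sketch using NLTK's off-the-shelf tagger (which is not the HMM tagger described above). It assumes the nltk package and its tokenizer and tagger resources have been downloaded; keeping only noun tags illustrates the vocabulary-reduction idea.

```python
# Part-of-speech tagging sketch (assumes nltk plus its tokenizer/tagger resources,
# downloadable via nltk.download(), are installed).
import nltk

text = "The ceramics company offered to acquire the casino resort in New York."
tagged = nltk.pos_tag(nltk.word_tokenize(text))   # [(word, Penn Treebank tag), ...]
print(tagged)

# Vocabulary reduction: keep only nouns (tags starting with 'NN'),
# since nouns carry most of the information in text documents.
nouns = [word for word, tag in tagged if tag.startswith("NN")]
print(nouns)
```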


Part-of-Speech Table

  • Lexical



Part-of-Speech examples

  • Lexical



Taxonomies/thesaurus level

  • The main function of a thesaurus is to connect different surface word forms with the same meaning into one sense (synonyms)

    • …additionally we often use the hypernym relation to relate general-to-specific word senses

    • …by using synonyms and the hypernym relation we compact the feature vectors

  • The most commonly used general thesaurus is WordNet, which also exists for many other languages (e.g. EuroWordNet)

    • http://www.illc.uva.nl/EuroWordNet/

  • Lexical


WordNet – database of lexical relations

  • WordNet is the most well developed and widely used lexical database for English

    • …it consists of 4 databases (nouns, verbs, adjectives, and adverbs)

  • Each database consists of sense entries – each sense consists of a set of synonyms, e.g.:

    • musician, instrumentalist, player

    • person, individual, someone

    • life form, organism, being

  • Lexical
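A sketch of querying these sense entries through NLTK's WordNet interface, assuming nltk and its WordNet corpus are installed; the synonym sets and hypernyms echo the "musician" example above.

```python
# WordNet lookup sketch (assumes nltk and its WordNet corpus are installed).
from nltk.corpus import wordnet as wn

for sense in wn.synsets("musician"):
    print(sense.name(), "-", sense.definition())
    print("  synonyms:", [lemma.name() for lemma in sense.lemmas()])
    print("  hypernyms:", [hyper.name() for hyper in sense.hypernyms()])
```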


WordNet – excerpt from the graph


  • Lexical



The WordNet graph contains about 116k senses connected by 26 types of relations.


WordNet relations

  • Each WordNet entry is connected with other entries in the graph through relations

  • Relations in the database of nouns:


Vector-space model level

  • The most common way to deal with documents is first to transform them into sparse numeric vectors and then deal with them with linear algebra operations

    • …by this, we forget everything about the linguistic structure within the text

    • …this is sometimes called the “structural curse”, because forgetting about the structure does not harm the efficiency of solving many relevant problems

    • This representation is also referred to as “Bag-Of-Words” or “Vector-Space-Model”

    • Typical tasks on vector-space-model are classification, clustering, visualization etc.

  • Syntactic
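A sketch of the vector-space transformation with scikit-learn (assumed to be installed): each toy document becomes a high-dimensional sparse TF-IDF vector indexed by the vocabulary, much like the example document shown below.

```python
# Vector-space (bag-of-words) representation sketch (assumes scikit-learn is installed).
from sklearn.feature_extraction.text import TfidfVectorizer

docs = [
    "Trump makes bid for control of Resorts",
    "Casino owner offers to acquire all Class B common shares",
]

vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(docs)       # sparse document-term matrix (documents x vocabulary)

vocab = vectorizer.get_feature_names_out()
row = X[0].tocoo()                       # non-zero entries of the first document's vector
print({vocab[i]: round(weight, 3) for i, weight in zip(row.col, row.data)})
```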


Bag-of-words document representation

  • Syntactic


Example document and its vector representation

Original text:

  • TRUMP MAKES BID FOR CONTROL OF RESORTS. Casino owner and real estate developer Donald Trump has offered to acquire all Class B common shares of Resorts International Inc, a spokesman for Trump said. The estate of late Resorts chairman James M. Crosby owns 340,783 of the 752,297 Class B shares. Resorts also has about 6,432,000 Class A common shares outstanding. Each Class B share has 100 times the voting power of a Class A share, giving the Class B stock about 93 pct of Resorts' voting power.

Vector representation (high-dimensional sparse vector):

  • [RESORTS:0.624] [CLASS:0.487] [TRUMP:0.367] [VOTING:0.171] [ESTATE:0.166] [POWER:0.134] [CROSBY:0.134] [CASINO:0.119] [DEVELOPER:0.118] [SHARES:0.117] [OWNER:0.102] [DONALD:0.097] [COMMON:0.093] [GIVING:0.081] [OWNS:0.080] [MAKES:0.078] [TIMES:0.075] [SHARE:0.072] [JAMES:0.070] [REAL:0.068] [CONTROL:0.065] [ACQUIRE:0.064] [OFFERED:0.063] [BID:0.063] [LATE:0.062] [OUTSTANDING:0.056] [SPOKESMAN:0.049] [CHAIRMAN:0.049] [INTERNATIONAL:0.041] [STOCK:0.035] [YORK:0.035] [PCT:0.022] [MARCH:0.011]

  • Syntactic


Ontologies level

  • Ontologies are the most general formalism for describing data objects

    • …in recent years ontologies have become popular through the Semantic Web and the OWL standard

    • Ontologies can be of various complexity – from relatively simple, lightweight ones to heavyweight ones described with first-order theories

    • Ontologies can also be understood as very generic data models in which we can store information extracted from text

  • Semantic
