Unsupervised learning of natural languages

Eitan Volsky

Yasmine Meroz


Introduction

  • Grammar learning methods can be grouped into two kinds: supervised and unsupervised.

  • Roughly speaking, unsupervised methods use only plain, untagged sentences, while supervised methods are first initialized with structured sentences.

  • Other forms of supervision exist as well, for example, probabilistic grammars.

  • Supervised methods, however, are much more time consuming, and in many cases it is impossible to find a treebank or corpus suitable for a specific task.



The bootstrapping process

  • The process generates the syntactic structure of sentences, starting from scratch (when it is completely unsupervised).

  • The structure has to be useful, thus arbitrary, random or incomplete structures are avoided.

  • The system should try to minimize the amount of information it needs in order to learn structure.


The scope of the article

  • The article presents two unsupervised learning frameworks:

    • EMILE 4.1

    • ABL (Alignment-based learning)

  • We’ll present the frameworks and the algorithms that underlie them, and compare them on the ATIS and OVIS corpora.


EMILE 4.1

  • Some definitions first:

  • In the sentence: David makes tea

    “David … tea” is a “Context”

    “makes” is an “Expression”
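
A minimal sketch in Python (the names are mine, not EMILE's API) of how every (context, expression) pair of a tokenized sentence can be enumerated; a context is the pair (left part, right part) and the expression fills the gap between them:

    # Enumerate all (context, expression) pairs of a tokenized sentence.
    # A context is (left tokens, right tokens); the expression fills the gap.
    def context_expression_pairs(tokens):
        pairs = []
        for i in range(len(tokens)):
            for j in range(i + 1, len(tokens) + 1):
                context = (tuple(tokens[:i]), tuple(tokens[j:]))
                expression = tuple(tokens[i:j])
                pairs.append((context, expression))
        return pairs

    # For "David makes tea", one of the pairs is
    # ((('David',), ('tea',)), ('makes',)).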


Substitution classes - intuition

  • If a language has a CFG, then expressions which are generated from the same non-terminal can substitute each other in every context where that non-terminal is a valid constituent.

  • If we have a sufficiently rich sample, we can expect to find classes of expressions that cluster together.
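
To make the intuition concrete, here is a hedged sketch building on the helper above (the clustering EMILE actually performs is more involved), grouping expressions by the contexts they share:

    from collections import defaultdict

    # Expressions that appear in the same context are candidates for the
    # same substitution class.
    def substitution_classes(sentences):
        by_context = defaultdict(set)
        for s in sentences:
            for context, expression in context_expression_pairs(s.split()):
                by_context[context].add(expression)
        # Keep only contexts shared by more than one expression.
        return {c: e for c, e in by_context.items() if len(e) > 1}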


Primary and characteristic contexts and expressions

  • A grammatical type is defined as a pair <TC, TE>, where TC is a set of contexts and TE is a set of expressions.

  • Expressions and contexts from those sets are called primary.

  • A characteristic context for a type T is a context which appears only with expressions of type T; the same holds for characteristic expressions.


Example

  • “walk” can be both a noun and a verb, so it can be characteristic neither for noun phrases nor for verb phrases.

  • “thing” can only be a noun, thus it appears only in noun phrases.

  • “thing” is characteristic for the type “noun”.


Shallow languages within the Chomsky hierarchy

  • Shallow languages seem to form an independent category.

[Figure: the Chomsky hierarchy (regular ⊂ context-free ⊂ context-sensitive ⊂ type 0), with the shallow languages drawn as a class cutting across the hierarchy.]

Shallow languages - first criterion

  • A grammar G has context separability if each type of G has a characteristic context, and expression separability if each type of G has a characteristic expression.

  • A shallow language has to be both context- and expression-separable.


Shallow languages - second criterion

  • A shallow language has to have a sample set of sentences S that induces characteristic contexts and expressions for all types of GL; such a set is called a characteristic sample.

  • For all sentences s of this sample set:

    K(s) < log(|G|)

  where K(s) is the Kolmogorov complexity (the descriptive length) of s and |G| is the size of the grammar.


Why shallow languages?

  • If the sample is drawn under a simple distribution (one dominated by a recursively enumerable distribution), the last criterion guarantees that the sample can be learnt (its grammar induced) in time polynomial in |G|.

  • Shallow grammars can thus be learned efficiently from positive examples alone, which makes the poverty-of-the-stimulus argument, based on Gold's results, unconvincing.


Natural languages are shallow

  • It is claimed (though unproven) that natural languages are shallow.

  • Natural languages have large lexicons and relatively few rules.

  • Their shallowness ensures that if we sample enough sentences, the sample will be characteristic with high confidence.


How does EMILE really work?

  • Two phases:

    • Clustering

    • Rule Induction


Corpus

  • John makes tea.

  • John likes coffee.

  • John is eating.

  • John likes tea.

  • John likes eating.

  • John makes coffee.


Clustering
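
As an illustration of the clustering phase (a sketch, not EMILE's actual output), the substitution-class helper above groups the expressions of the toy corpus by shared context:

    corpus = ["John makes tea", "John likes coffee", "John is eating",
              "John likes tea", "John likes eating", "John makes coffee"]
    for context, expressions in substitution_classes(corpus).items():
        print(context, "->", expressions)
    # For example, the context (('John', 'likes'), ()) clusters the
    # expressions ('tea',), ('coffee',) and ('eating',) together.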


Identification of clusters - settings

  • The identification of clusters depends on the following settings:

    • Total_support%

    • Context_support%

    • Expression_support%

Suppose that:

Total_support% = Context_support% = 75%

Expression_support% = 80%
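
The precise semantics of the three thresholds are defined in the EMILE 4.1 papers; the following is only a loose interpretation showing how such settings could interact. A candidate block of contexts and expressions is accepted when the block as a whole, each context row, and each expression column are sufficiently supported by observed pairs:

    def accept_block(observed, contexts, expressions,
                     total_support=0.75, context_support=0.75,
                     expression_support=0.80):
        """observed: set of (context, expression) pairs seen in the corpus."""
        cells = [(c, e) for c in contexts for e in expressions]
        if sum(ce in observed for ce in cells) / len(cells) < total_support:
            return False                       # block as a whole too sparse
        for c in contexts:                     # each context row
            row = sum((c, e) in observed for e in expressions)
            if row / len(expressions) < context_support:
                return False
        for e in expressions:                  # each expression column
            col = sum((c, e) in observed for c in contexts)
            if col / len(contexts) < expression_support:
                return False
        return True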


Rule induction

  • T => s0 [T1] s1 [T2] s2 [T3] s3

  • EMILE attempts to transform the collection of derivation rules found into a CFG consisting of those rules.

  • [0] => what is a family fare

  • [19] => a family fare.

  • [0] => what is [19]
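
A sketch of the folding step (illustrative; the identifiers are mine): once a type such as [19] has been learned with the expression "a family fare", any rule whose right-hand side contains that expression can be rewritten to use the type symbol:

    def fold_expression(rhs, label, expression):
        # Replace one occurrence of `expression` in the rule body `rhs`
        # by the non-terminal symbol `[label]`.
        n = len(expression)
        for i in range(len(rhs) - n + 1):
            if tuple(rhs[i:i + n]) == tuple(expression):
                return rhs[:i] + ["[%d]" % label] + rhs[i + n:]
        return rhs

    print(fold_expression("what is a family fare".split(), 19,
                          "a family fare".split()))
    # -> ['what', 'is', '[19]']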


ABL (Alignment-Based Learning)

  • ABL is based on Harris’ principle of substitutability (1951):

  All constituents of the same kind can be replaced by each other.

  • ABL uses a reversed version of this principle:

  If parts of sentences can be substituted by each other, they are constituents of the same type.


The algorithm

  • The output of the algorithm is a labeled, bracketed version of the input corpus.

  • The model learns by comparing all sentences in the input corpus to each other in pairs.

  • Two phases:

    • Alignment learning

    • Selection learning


A comparison of two sentences

  • The comparison of two sentences falls into one of three categories:

    • All words in the two sentences are the same

    • The sentences are completely unequal

    • Some words are the same in both sentences and some are not


Alignment learning

  • What is a family fare

  • What is the payload of an African swallow ?

  • The unequal parts of the sentences are possible constituents.

  • {a family fare, the payload of an African swallow}
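
A minimal sketch of this step, using Python's standard difflib to find the equal parts of two tokenized sentences; the unequal gaps left between the matches are the hypothesized constituents (ABL's own alignment is based on the edit distance, described next):

    from difflib import SequenceMatcher

    def unequal_parts(s1, s2):
        a, b = s1.split(), s2.split()
        gaps, ai, bi = [], 0, 0
        for m in SequenceMatcher(a=a, b=b).get_matching_blocks():
            if ai < m.a or bi < m.b:           # a gap before this match
                gaps.append((" ".join(a[ai:m.a]), " ".join(b[bi:m.b])))
            ai, bi = m.a + m.size, m.b + m.size
        return gaps

    print(unequal_parts("what is a family fare",
                        "what is the payload of an African swallow ?"))
    # -> [('a family fare', 'the payload of an African swallow ?')]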


The edit distance

  • The edit distance is the minimum edit cost needed to transform one sentence into another (Wagner & Fischer, 1974).

  • The algorithm which finds the edit distance also finds the longest common subsequence, and it indicates how the remaining, unequal parts of the two sentences are linked.
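
For reference, the classic Wagner-Fischer dynamic program (a direct textbook implementation, not ABL's exact code):

    def edit_distance(a, b):
        # d[i][j] = cost of turning the first i tokens of a into the
        # first j tokens of b.
        m, n = len(a), len(b)
        d = [[0] * (n + 1) for _ in range(m + 1)]
        for i in range(m + 1):
            d[i][0] = i
        for j in range(n + 1):
            d[0][j] = j
        for i in range(1, m + 1):
            for j in range(1, n + 1):
                cost = 0 if a[i - 1] == b[j - 1] else 1
                d[i][j] = min(d[i - 1][j] + 1,          # deletion
                              d[i][j - 1] + 1,          # insertion
                              d[i - 1][j - 1] + cost)   # match/substitution
        return d[m][n]

    print(edit_distance("from dallas to boston".split(),
                        "from boston to dallas".split()))   # -> 2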


Example

  • Two possible alignments of the same sentence pair (brackets with the same index are linked as constituents of the same type):

From (San Francisco to)1 Dallas ()2

From ()1 Dallas (to San Francisco)2

From (San Francisco)1 to (Dallas)2

From (Dallas)1 to (San Francisco)2


Overlapping constituents

  • I didn’t take my passport.

  • I didn’t like this plane.

  • If {this plane} was already stored, “like this plane” overlaps with it, and we cannot assign them different types, because that would prevent us from inducing a CFG at a later stage.
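
A hypothetical helper (my definition of the conflict, not necessarily the article's): two bracket spans are incompatible with a single CFG parse when they cross without one nesting inside the other:

    def crosses(span1, span2):
        # Spans are half-open token ranges (begin, end).
        b1, e1 = span1
        b2, e2 = span2
        intersect = b1 < e2 and b2 < e1
        nested = (b1 <= b2 and e2 <= e1) or (b2 <= b1 and e1 <= e2)
        return intersect and not nested

    # In "I didn't like this plane", spans for "didn't like" and
    # "like this plane" cross:
    print(crosses((1, 3), (2, 5)))   # -> True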


Selection learning

  • In the selection learning phase, we try to get rid of the overlapping constituents by finding the best combination of constituents of each type.

  • Three ways to compute a constituent’s probability:

    • ABL : first-is-correct

    • ABL : leaf

    • ABL : branch


Selection learning (cont’)

  • After the probabilities of the overlapping constituents have been computed, the probability of each combination is computed as a geometric mean, using a Viterbi-style optimization to do this efficiently.
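
A sketch of the scoring only (the probability models themselves are ABL-specific, and the real system searches the combinations with a Viterbi-style dynamic program rather than enumerating them):

    import math

    def combination_score(probabilities):
        # Geometric mean of the constituent probabilities in a combination.
        logs = sum(math.log(p) for p in probabilities)
        return math.exp(logs / len(probabilities))

    # Of two competing bracketings, the one whose constituents have the
    # higher geometric mean is kept.
    print(combination_score([0.5, 0.1]) > combination_score([0.2, 0.2]))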


Theoretical comparison

  • ABL is much more greedy, and thus learns faster and better on small corpora, but cannot handle large corpora for efficiency reasons: it stores all possible constituents, and only then selects the best ones.

  • EMILE was developed for large corpora (more than 100K sentences) and is much less greedy: it adopts a grammar rule only when enough information has been found to support it.


Conclusions

  • Both frameworks work pretty well for unsupervised learning models.

  • Their underlying ideas match rather well.

  • It should be possible to develop a hybrid version, which uses the best qualities of both algorithms.

