
Machine Translation

Om Damani

(Ack: Material taken from Jurafsky & Martin, 2nd Ed., and Brown et al., 1993)

State of the Art

The spirit is willing but the flesh is weak

English-Russian Translation System

Дух охотно готов но плоть слаба

Russian-English Translation System

The vodka is good, but the meat is rotten

Babelfish: Spirit is willingly ready but flesh it is weak

Google: The spirit is willing but the flesh is week

State of the Art (English-Hindi) – March 19, 2009

The spirit is willing but the flesh is weak

Google English-Hindi Translation System

आत्मा पर शरीर दुर्बल है

Google Hindi-English Translation System

Spirit on the flesh is weak

Is State of the Art (English-Hindi) so bad?

Is state of the art so bad

Google English-Hindi Translation System

कला की हालत इतनी खराब है

Google Hindi-English Translation System

The state of the art is so bad

State of the english-hindi translation is not so bad

State of the english hindi translation is not so bad

Google English-Hindi Translation System

राज्य के अंग्रेज़ी हिन्दी अनुवाद का इतना बुरा नहीं है

Google Hindi-English Translation System

State of the English translation of English is not so bad

OK. Maybe it is __ bad.

State of the English-Hindi translation is not so bad

State of the English Hindi translation is not so bad

Google English-Hindi Translation System

राज्य में अंग्रेजी से हिंदी अनुवाद का इतना बुरा नहीं है

राज्य के अंग्रेज़ी हिन्दी अनुवाद का इतना बुरा नहीं है

Google Hindi-English Translation System

English to Hindi translation in the state is not so bad

OK. Maybe it is __ __ bad.

Direct Transfer: Limitations

कई बंगाली कवियों ने इस भूमि के गीत गाए हैं

Kai Bangali kaviyon ne is bhoomi ke geet gaaye hain

Morph: कई बंगाली कवि-PL,OBL ने इस भूमि के गीत {गाए है}-PrPer,Pl

Kai Bangali kavi-PL,OBL ne is bhoomi ke geet {gaaye hai}-PrPer,Pl

Lexical Transfer: Many Bengali poet-PL,OBL this land of songs {sing has}- PrPer,Pl

Local Reordering: Many Bengali poet-PL,OBL of this land songs {has sing}- PrPer,Pl

Final: Many Bengali poets of this land songs have sung

Intended: Many Bengali poets have sung songs of this land

Syntax Transfer (Analysis-Transfer-Generation)

Here phrases NP, VP etc. can be arbitrarily large

Syntax Transfer Limitations

He went to Patna -> Vah Patna gaya

He went to Patil -> Vah Patil ke pas gaya

The translation of 'went' depends on the semantics of the object of 'went'

Fatima eats salad with a spoon – what happens if you change 'spoon'?

Semantic properties need to be included in transfer rules

– Semantic Transfer

Interlingua Based Transfer

[Figure: UNL-style semantic graph for the sentence below – a central 'contact' node connected by relations agt (you), obj (farmer), pur (this), and plc to a region/taluka node carrying nam Manchar or nam Khatav]

For this, you contact the farmers of Manchar region or of Khatav taluka.

In theory: N analysis and N generation modules instead of N² transfer modules

In practice: an amazingly complex system to tackle N² language pairs

Difficulties in Translation – Language Divergence (Concepts from Dorr 1993; Text/Figures from Dave, Parikh and Bhattacharyya 2002)

Constituent Order

Preposition Stranding

Null Subject

Conflational Divergence

Categorial Divergence

Lost in Translation: We are talking mostly about syntax, not semantics or pragmatics

Image from http://inicia.es/de/rogeribars/blog/lost_in_translation.gif

You: Could you give me a glass of water?

Robot: Yes.

….wait..wait..nothing happens..wait…

…Aha, I see…

You: Will you give me a glass of water?

…wait…wait..wait..

Checkpoint
  • State of the Art
  • Different Approaches
  • Translation Difficulty
  • Need for a novel approach
Statistical Machine Translation: Most ridiculous idea ever

Consider all possible partitions of a sentence.

For a given partition,

Consider all possible translations of each part.

Consider all possible combinations of all possible translations

Consider all possible permutations of each combination

And somehow select the best partition/translation/permutation

कई बंगाली कवियों ने इस भूमि के गीत गाए हैं

Kai Bangali kaviyon ne is bhoomi ke geet gaaye hain

To this space have sung songs of many poets from Bangal

How many combinations are we talking about

Number of choices for an N-word sentence

N=20 ??

Number of possible chess games
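A rough back-of-envelope count (a sketch, not from the original slide): even before counting the translation options for each phrase, an N-word sentence has 2^(N-1) contiguous segmentations, and the resulting phrases can be permuted in up to N! ways.

import math

N = 20
segmentations = 2 ** (N - 1)      # ways to cut an N-word sentence into contiguous phrases
orderings = math.factorial(N)     # upper bound on permutations of the resulting phrases
print(segmentations * orderings)  # ~ 1.3e24, before multiplying in translation options per phrase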

How do we get the Phrase Table

इसके लिए आप मंचर क्षेत्र के किसानों से संपर्क कीजिए

For this you contact the farmers of Manchar region

Collect a large amount of bilingual parallel text.

For each sentence pair,

Consider all possible partitions of both sentences

For a given partition pair,

Consider all possible mappings between parts (phrases) on the two sides

Somehow assign a probability to each phrase pair
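The "somehow" is usually relative frequency over the extracted phrase pairs; a minimal sketch of that estimate (standard phrase-based SMT practice, not spelled out on the slide):

P(\bar{f} \mid \bar{e}) = \frac{\mathrm{count}(\bar{f}, \bar{e})}{\sum_{\bar{f}'} \mathrm{count}(\bar{f}', \bar{e})}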

Data Sparsity Problems in Creating Phrase Table

Sunil is eating mango -> Sunil aam khata hai

Noori is eating banana -> Noori kela khati hai

Sunil is eating banana ->

We need examples of everyone eating everything !!

We want to figure out that eating can be either khata hai or khati hai

And let the Language Model select from ‘Sunil kela khata hai’ and ‘Sunil kela khati hai’

Select well-formed sentences among all candidates using LM

Formulating the Problem

  • A language model to compute P(E)
  • A translation model to compute P(F|E)
  • A decoder, which is given F and produces the most probable E
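Putting the three pieces together gives the usual noisy-channel formulation (by Bayes' rule; P(F) is constant for a given input F, so it drops out of the argmax):

\hat{E} = \arg\max_{E} P(E \mid F) = \arg\max_{E} P(E)\, P(F \mid E)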

P(F|E) vs. P(E|F)

P(F|E) is the translation probability – we need to look at the generation process by which an <F,E> pair is obtained. Parts of F correspond to parts of E. With suitable independence assumptions, P(F|E) measures whether all parts of E are covered by F. E can be quite ill-formed.

It is OK if {P(F|E) for an ill-formed E} is greater than {P(F|E) for a well-formed E}: multiplication by P(E) should hopefully take care of it. We do not have that luxury when estimating P(E|F) directly – there we would need to ensure that well-formed E score higher.

Summary: for computing P(F|E), we may make several independence assumptions that are not valid. P(E) compensates for that.

We need to estimate P(It is raining | बारिश हो रही है) vs. P(rain is happening | बारिश हो रही है)

P(बारिश हो रही है|It is raining) = .02

P(बरसात आ रही है| It is raining) = .03

P(बारिश हो रही है|rain is happening) = .420
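To see how P(E) compensates, plug in some illustrative (made-up) language-model values: a fluent sentence like "It is raining" should get a much higher P(E) than "rain is happening", say 10^-4 vs. 10^-7. Then, even though the translation model prefers the literal gloss:

P(It is raining) * P(बारिश हो रही है | It is raining) = 10^-4 * .02 = 2 * 10^-6

P(rain is happening) * P(बारिश हो रही है | rain is happening) = 10^-7 * .42 = 4.2 * 10^-8

so the product still picks the well-formed translation.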

Checkpoint
  • From a parallel corpus, generate probabilistic phrase table
  • Given a sentence, generate various candidate translations using the phrase table
  • Evaluate the candidates using Translation and Language Models
What is the meaning of Probability of Translation
  • What is the meaning of P(F|E)
  • By Magic: you simply know P(F|E) for every (E,F) pair – counting in a parallel corpus
  • Or, each word in E generates one word of F, independent of every other word in E or F
  • Or, we need a ‘random process’ to generate F from E
  • A semantic graph G is generated from E and F is generated from G
    • We are no better off. We now have to estimate P(G|E) and P(F|G) for various G and then combine them – How?
    • We may have a deterministic procedure to convert E to G, in which case we still need to estimate P(F|G)
  • A parse tree TE is generated from E; TE is transformed to TF; finally TF is converted into F
    • Can you write the mathematical expression?
The Generation Process
  • Partition: Think of all possible partitions of the source language
  • Lexicalization: For a given partition, translate each phrase into the foreign language
  • Spurious insertion: add foreign words that are not attributable to any source phrase
  • Reordering: permute the set of all foreign words - words possibly moving across phrase boundaries

Try writing the probability expression for the generation process

We need the notion of alignment

Alignment

A function from target position to source position:

The alignment sequence is: 2,3,4,5,6,6,6

Alignment function A: A(1) = 2, A(2) = 3 ..

A different alignment function will give the sequence: 1,2,1,2,3,4,3,4 for A(1), A(2), ..

To allow spurious insertion, allow alignment with word 0 (NULL)

No. of possible alignments: (I+1)^J
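As a quick worked instance (using the sentence pair from the HMM example below, with I = 9 English words and J = 10 Hindi words):

(I+1)^J = (9+1)^{10} = 10^{10}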

IBM Model 1: Details
  • No assumptions so far: the decomposition P(F|E) = Σ_A P(F,A|E) is exact.
  • Choosing length: P(J|E) = P(J|E,I) = P(J|I) = ε (a small constant)
  • Choosing alignment: all alignments equiprobable, P(A|J,E) = 1/(I+1)^J
  • Translation probability: P(F|A,J,E) = Π_j t(f_j|e_{a_j})
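Combining the three choices gives the usual IBM Model 1 expression (a reconstruction of the formula the first bullet refers to, following Brown et al. 1993):

P(F \mid E) = \sum_{A} P(F, A \mid E) = \frac{\epsilon}{(I+1)^{J}} \sum_{A} \prod_{j=1}^{J} t(f_j \mid e_{a_j})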
HMM Alignment
  • All alignments are not equally likely
  • Can you guess what properties an alignment has?
  • Alignments tend to be locality preserving – neighboring words tend to get aligned together
  • We would like P(a_j) to depend on a_{j-1}
HMM Alignment: Details
  • P(F,A|J,E) decomposed as P(A|J,E)*P(F|A,J,E) in Model 1
  • Now we will decompose it differently
    • (J is implict, not mentioned in conditional expressions)
  • Alignment Assumption (Markov): the alignment probability of the j-th word, P(a_j), depends only on the alignment of the previous word, a_{j-1}
  • Translation assumption: the probability of the foreign word f_j depends only on the aligned English word e_{a_j}
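Under these two assumptions the joint decomposes as follows (the standard HMM-alignment form, which the worked example two slides below instantiates term by term):

P(F, A \mid E) = P(J \mid I) \prod_{j=1}^{J} P(a_j \mid a_{j-1}, I)\; P(f_j \mid e_{a_j})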
Computing the Alignment Probability
  • P(a_j | a_{j-1}, I) is written as P(i | i′, I)
  • Assume the probability does not depend on absolute word positions but on the jump-width (i − i′) between words: P(4 | 6, 17) = P(5 | 7, 17)
    • Note: denominator counts are collected over sentences of all lengths, but the sum is performed over only those jump-widths relevant to (i, i′) – for i′ = 6 (with I = 17), −5 to 11 is relevant
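In symbols, the jump-width parameterization (as in the standard HMM alignment model; c(·) are jump-width counts collected over the whole corpus):

P(i \mid i', I) = \frac{c(i - i')}{\sum_{i''=1}^{I} c(i'' - i')}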
HMM Model - Example

P(F,A|E) = P(J=10|I=9) * P(2|start,9) * P(इसके|this) * P(-1|2,9) * P(लिए|this) * P(2|1,9) * …. * P(0|4,9) * P(कीजिए|contact)

Enhancing the HMM model
  • Add NULL words in the English to which foreign words can align
  • Condition the alignment on the word class of the previous English word
  • Other suggestions ??
  • What is the problem in making more realistic assumptions
    • How to estimate the parameters of the model
Checkpoint
  • Generative Process is important for computing probability expressions
  • Model1 and HMM model
  • What about Phrase Probabilities
Training Alignment Models
  • Given a parallel corpus, for each (F,E) learn the best alignment A and the component probabilities:
    • t(f|e) for Model 1
    • lexicon probability P(f|e) and alignment probability P(a_j|a_{j-1},I) for the HMM model
  • How will you compute these probabilities if all you have is a parallel corpus?
Intuition : Interdependence of Probabilities
  • If you knew which words are probable translations of each other, then you could guess which alignments are probable and which are improbable
  • If you were given alignments with probabilities, then you could compute translation probabilities
  • Looks like a chicken-and-egg problem
  • Can you write equations expressing one in terms of the other?
Computing Alignment Probabilities
  • Alignment probability in terms of translation probabilities: P(A,F|J,E)

  • Compute P(A) in terms of P(A,F)
    • Note: Prior Prob. for all Alignments are equal. We are interested in posterior probabilities.
  • Can you specify translation probabilities in terms of alignment probabilities?
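For Model 1 the posterior over alignments has a simple form (a standard Model 1 identity, sketched here: since the prior over alignments is uniform, the ε/(I+1)^J factor cancels and only the translation-probability products remain):

P(A \mid F, E) = \frac{P(F, A \mid E)}{\sum_{A'} P(F, A' \mid E)} = \frac{\prod_{j} t(f_j \mid e_{a_j})}{\sum_{A'} \prod_{j} t(f_j \mid e_{a'_j})}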
Computing Translation probabilities

[Figure: three candidate alignments of the example sentence pair, with probabilities .5, .3 and .9; संपर्क is linked to 'contact' in the first two alignments, and 'contact' has 3, 2 and 1 links respectively]

If every alignment counted equally: P(संपर्क | contact) = 2/6

What if alignments had probabilities? Weight each count by the probability of the alignment it comes from:

P(संपर्क | contact) = (.5*1 + .3*1 + .9*0) / (.5*3 + .3*2 + .9*1) = .8/3

Note: It is not .7*1/3 + .5*1/2 + .9*0 ??

Expectation Maximization (EM) Algorithm
  • Used when we want a maximum likelihood estimate of the parameters of a model that depends on hidden variables
  • In the present case, the parameters are the translation probabilities, and the hidden variables are the alignments (whose expected values give the alignment probabilities)

Init: Start with an arbitrary estimate of parameters

E-step: compute the expected value of hidden variables

M-Step: Recompute the parameters that maximize the likelihood of the data, given the expected values of the hidden variables from the E-step

Working out alignments for a simplified Model 1
  • Ignore the NULL words
  • Assume that every English word aligns with some foreign word (just to reduce the number of alignments for the illustration)
Example of EM

Green house

Casa verde

The house

La casa

Init: Assume that any word can generate any word with equal prob:

P(la|house) = 1/3

E-Step

E-Step: [Figure: compute the posterior probability of each alignment from the current translation probabilities]

E-Step again

[Figure: updated probabilities after re-estimation – 1/3, 2/3, 2/3, 1/3]

Repeat till convergence
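A minimal runnable sketch of the same EM loop on this toy corpus, under the simplified Model 1 assumptions above (no NULL word; variable names are illustrative, not from the slides):

from collections import defaultdict

corpus = [("green house".split(), "casa verde".split()),
          ("the house".split(),   "la casa".split())]

e_vocab = {e for es, _ in corpus for e in es}
f_vocab = {f for _, fs in corpus for f in fs}

# Init: any English word can generate any foreign word with equal probability (1/3 here)
t = {(f, e): 1.0 / len(f_vocab) for e in e_vocab for f in f_vocab}

for iteration in range(10):
    count = defaultdict(float)   # expected count(f, e)
    total = defaultdict(float)   # expected count(e)
    # E-step: collect expected counts under the current t(f|e)
    for es, fs in corpus:
        for f in fs:
            norm = sum(t[(f, e)] for e in es)   # probability of f aligning anywhere in es
            for e in es:
                c = t[(f, e)] / norm            # posterior probability that f is aligned to e
                count[(f, e)] += c
                total[e] += c
    # M-step: re-estimate t(f|e) from the expected counts
    for (f, e) in t:
        t[(f, e)] = count[(f, e)] / total[e] if total[e] > 0 else 0.0

print(t[("casa", "house")])   # rises towards 1 as the iterations proceed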

Computing Translation Probabilities in Model 1
  • The EM algorithm is fine, but the naive version requires exponential computation
    • For each alignment we recompute the alignment probability
    • The translation probability is computed from all alignment probabilities
  • We need an efficient algorithm
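The efficient algorithm rests on a standard rearrangement: because Model 1 scores each foreign word independently, the sum over all (I+1)^J alignments factorizes into J small sums, so E-step counts can be collected word by word (this is exactly what the code sketch above exploits):

\sum_{A} \prod_{j=1}^{J} t(f_j \mid e_{a_j}) \;=\; \prod_{j=1}^{J} \sum_{i=0}^{I} t(f_j \mid e_i)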
Checkpoint
  • Use of EM algorithm for estimating phrase probabilities under IBM Model-1
  • An example
  • And an efficient algorithm
Generating Bi-directional Alignments
  • Existing models only generate uni-directional alignments
  • Combine two uni-directional alignments to get many-to-many bi-directional alignments
Combining Alignments

[Figure: precision/recall of different ways of combining the two directional alignments against a reference alignment – P=4/5=.8, R=4/7=.6; P=2/3=.67, R=2/7=.3; P=5/6=.83, R=5/7=.7; P=6/9=.67, R=6/7=.85]

A Different Heuristic from the Moses Site

GROW-DIAG-FINAL(e2f,f2e):
  neighboring = ((-1,0),(0,-1),(1,0),(0,1),(-1,-1),(-1,1),(1,-1),(1,1))
  alignment = intersect(e2f,f2e);
  GROW-DIAG(); FINAL(e2f); FINAL(f2e);

GROW-DIAG():
  iterate until no new points added
    for english word e = 0 ... en
      for foreign word f = 0 ... fn
        if ( e aligned with f )
          for each neighboring point ( e-new, f-new ):
            if ( ( e-new, f-new ) in union( e2f, f2e ) and
                 ( e-new not aligned and f-new not aligned ) )
              add alignment point ( e-new, f-new )

FINAL(a):
  for english word e-new = 0 ... en
    for foreign word f-new = 0 ... fn
      if ( ( ( e-new, f-new ) in alignment a ) and
           ( e-new not aligned or f-new not aligned ) )
        add alignment point ( e-new, f-new )

Proposed Changes: after growing the diagonal, align the shorter sentence first, and use alignments only from the corresponding directional alignment

Generating Phrase Alignments

premier beach vacation

प्रमुख समुद्र-तटीय

a premier beach vacation destination

एक प्रमुख समुद्र-तटीय गंतव्य है

Phrase Alignment Probabilities
  • We have been dealing with just one sentence pair.
  • In fact, we have been dealing with just one alignment – the most probable alignment
  • Such an alignment can easily have mistakes, and generate garbage phrases
  • Compute phrase alignment probabilities over entire corpus
IBM Model 3

Model 1 Generative story

  • Model 1 story seems bizarre
  • Who would first choose the sentence length, then align, and then generate?
  • A more likely story is:
  • - generate a translation for each word and then reorder
Model 3 Formula
  • Ignore NULL for a moment
  • Choosing fertility: n(φ_i | e_i)
  • Generating words: t(f_j | e_{a_j})
  • Aligning words: distortion d(j | a_j, I, J)
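Putting the three components together, still ignoring NULL (a simplified sketch of the Brown et al. 1993 Model 3; the full model also has combinatorial terms for ordering the translations of each fertile word):

P(F, A \mid E) \approx \prod_{i=1}^{I} n(\phi_i \mid e_i) \;\times\; \prod_{j=1}^{J} t(f_j \mid e_{a_j}) \;\times\; \prod_{j=1}^{J} d(j \mid a_j, I, J)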
Generating Spurious Words
  • Instead of using n(2|NULL) or n(1|NULL)
  • With probability p1, generate a spurious word every time a valid word is generated
  • Ensures that longer sentences generate more spurious words
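With this story, the number of spurious words φ_0 follows a binomial over the J − φ_0 "real" word slots (the standard NULL treatment in Brown et al. 1993, sketched here with p_0 = 1 − p_1):

\binom{J - \phi_0}{\phi_0} \; p_1^{\phi_0} \; p_0^{J - 2\phi_0}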

इसके लिए आप मंचर क्षेत्र के किसानों से संपर्क कीजिए

For this you contact the farmers of Manchar region


इसके लिए आप मंचर क्षेत्र के किसानों से संपर्क कीजिए

For this you contact the farmers of Manchar region


इसके लिए किसानों से मिलिये

For this you contact the farmers


इसके लिए आप मंचर क्षेत्र के किसानों से संपर्क कीजिए

For this you contact the farmers of Manchar region

OchNey03 Heuristic: Intuition
  • Decide the intersection
  • Extend it by adding alignments from the union if both the words in union alignment are not already aligned in the final alignment
  • Then add an alignment only if:
    • It already has an adjacent alignment in the final alignment, and,
    • Adding it will not cause any final alignment to have both horizontal and vertical neighbors as final alignments
SMT Example

कई बंगाली कवियों ने इस भूमि के गीत गाए हैं

Kai Bangali kaviyon ne is bhoomi ke geet gaaye hain

To this space have sung songs of many poets from Bangal

Translation Model - Notations
  • F: f_1, f_2, .., f_J ; E: e_1, e_2, .., e_I
  • P(F|E) is not the same as P(f_1 .. f_J | e_1 .. e_I)
    • What is P(फातिमा चावल खाती है | Fatima eats rice)?
  • P(F|E) = P(J, f_1 .. f_J | I, e_1 .. e_I)
    • We explicitly mention I and J only when needed
  • We will work with the above formulation instead of the alternative
    • P(F|E) = P(w_1(F)=f_1, .., w_J(F)=f_J, w_{J+1}(F)=$ | …)