
The Expectation Maximization (EM) Algorithm



  1. The Expectation Maximization (EM) Algorithm … continued! 600.465 - Intro to NLP - J. Eisner

  2. General Idea
• Start by devising a noisy channel: any model that predicts the corpus observations via some hidden structure (tags, parses, …)
• Initially guess the parameters of the model! Educated guess is best, but random can work
• Expectation step: Use current parameters (and observations) to reconstruct hidden structure
• Maximization step: Use that hidden structure (and observations) to reestimate parameters
• Repeat until convergence!
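
The E step / M step loop above can be written down in a few lines. This is a minimal sketch, not code from the lecture; e_step, m_step, and log_likelihood are hypothetical callables supplied by whatever model (HMM, PCFG, clustering) is being trained.

```python
def em(observations, init_params, e_step, m_step, log_likelihood,
       tol=1e-6, max_iters=100):
    """Generic EM loop: alternate E and M steps until the data
    log-likelihood stops improving (EM guarantees it never decreases)."""
    params = init_params
    prev_ll = float("-inf")
    for _ in range(max_iters):
        # E step: use current parameters (and observations) to reconstruct
        # the hidden structure, typically as expected counts.
        expected_counts = e_step(observations, params)
        # M step: use that hidden structure (and observations) to
        # re-estimate the parameters.
        params = m_step(expected_counts)
        ll = log_likelihood(observations, params)
        if ll - prev_ll < tol:
            break  # converged
        prev_ll = ll
    return params
```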

  3. General Idea
[Diagram: an initial guess of the unknown parameters (probabilities); the E step uses those parameters and the observed structure (words, ice cream) to guess the unknown hidden structure (tags, parses, weather); the M step uses that hidden structure to re-estimate the parameters.]

  4.–6. For Hidden Markov Models
[Same diagram as the previous slide, repeated on three animation slides: an initial guess of the unknown parameters (probabilities); the E step guesses the unknown hidden structure (tags, parses, weather) from the observed structure (words, ice cream); the M step re-estimates the parameters.]

  7. Grammar Reestimation
[Diagram: training trees (expensive and/or wrong sublanguage) feed a LEARNER, which produces a Grammar; a PARSER applies that grammar to test sentences (cheap, plentiful and appropriate); a scorer compares the parser's output against the correct test trees to measure accuracy.]

  8. EM by Dynamic Programming: Two Versions
• The Viterbi approximation
  • Expectation: pick the best parse of each sentence
  • Maximization: retrain on this best-parsed corpus
  • Advantage: Speed!
• Real EM (why slower?)
  • Expectation: find all parses of each sentence
  • Maximization: retrain on all parses in proportion to their probability (as if we observed fractional counts)
  • Advantage: p(training corpus) guaranteed to increase
  • Exponentially many parses, so don't extract them from the chart – need some kind of clever counting
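
The difference between the two versions is confined to the E step. A schematic sketch (the helpers best_parse, all_parses, posterior, and rule_counts are hypothetical placeholders passed in as arguments; in practice real EM gets its fractional counts from the chart via inside-outside rather than by enumerating parses):

```python
from collections import Counter

def viterbi_e_step(corpus, grammar, best_parse, rule_counts):
    """Viterbi approximation: charge all counts to the single best parse."""
    counts = Counter()
    for sentence in corpus:
        counts.update(rule_counts(best_parse(sentence, grammar)))
    return counts

def true_e_step(corpus, grammar, all_parses, posterior, rule_counts):
    """Real EM: fractional counts from every parse, each weighted by its
    posterior probability given the sentence.  Enumerating parses is
    exponential, which is why the real algorithm accumulates these counts
    inside the chart instead (the "clever counting" is inside-outside)."""
    counts = Counter()
    for sentence in corpus:
        for parse in all_parses(sentence, grammar):
            w = posterior(parse, sentence, grammar)
            for rule, c in rule_counts(parse).items():
                counts[rule] += w * c
    return counts
```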

  9. Viterbi for tagging
• Naïve idea: Give each word its highest-prob tag according to forward-backward.
• Do this independently of other words.
  • Det Adj 0.35
  • Det N 0.2
  • N V 0.45
• Oops! We get Det V.
• This is defensible, but not a coherent sequence.
• Might screw up subsequent processing (parsing, chunking …) that depends on coherence.

  10. Viterbi for tagging
• Naïve idea: Give each word its highest-prob tag according to forward-backward.
• Do this independently of other words.
  • Det Adj 0.35
  • Det N 0.2
  • N V 0.45
• Better idea: Use Viterbi to pick the single best tag sequence (best path).
• Can still use full EM to train the parameters, if time permits, but use Viterbi to tag once trained.
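
To make the contrast concrete, here is a tiny sketch that simply enumerates the three tag sequences from the slide. (Real forward-backward and Viterbi work on the lattice rather than enumerating whole sequences; the enumeration here is purely illustrative.)

```python
from collections import defaultdict

# Probabilities of whole tag sequences for a two-word sentence (the slide's numbers).
seq_probs = {("Det", "Adj"): 0.35, ("Det", "N"): 0.2, ("N", "V"): 0.45}

# Naive idea: give each word its individually most probable tag
# (its marginal, which is what forward-backward provides).
marginals = [defaultdict(float), defaultdict(float)]
for (t1, t2), p in seq_probs.items():
    marginals[0][t1] += p
    marginals[1][t2] += p
naive = tuple(max(m, key=m.get) for m in marginals)
print(naive)                      # ('Det', 'V') -- not a coherent sequence
print(seq_probs.get(naive, 0.0))  # 0.0

# Better idea: pick the single best whole sequence (what Viterbi finds).
best = max(seq_probs, key=seq_probs.get)
print(best, seq_probs[best])      # ('N', 'V') 0.45
```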

  11. Examples
• Finite-State case: Hidden Markov Models
  • "forward-backward" or "Baum-Welch" algorithm
  • Applications (compose these?):
    • explain ice cream in terms of underlying weather sequence
    • explain words in terms of underlying tag sequence
    • explain phoneme sequence in terms of underlying word
    • explain sound sequence in terms of underlying phoneme
• Context-Free case: Probabilistic CFGs
  • "inside-outside" algorithm: unsupervised grammar learning!
  • Explain raw text in terms of underlying PCFG
  • In practice, the local maximum problem gets in the way
  • But can improve a good starting grammar via raw text
• Clustering case: explain points via clusters

  12. Inside-Outside Algorithm
• How would we do the Viterbi approximation?
• How would we do full EM?

  13. PCFG
[Parse tree for "time flies like an arrow": S → NP VP; NP → time; VP → V PP; V → flies; PP → P NP; P → like; NP → Det N; Det → an; N → arrow]
p(tree | S) = p(S → NP VP | S) * p(NP → time | NP) * p(VP → V PP | VP) * p(V → flies | V) * …
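
In code, a tree's probability is just the product of the probabilities of the rules it uses. Below is a sketch of that product for the slide's parse of "time flies like an arrow"; the numbers in p_rule are made-up illustrative probabilities, not values from the lecture.

```python
from math import prod

# Hypothetical rule probabilities, each conditioned on its left-hand side.
p_rule = {
    ("S", ("NP", "VP")): 0.9,  ("NP", ("time",)): 0.01, ("VP", ("V", "PP")): 0.2,
    ("V", ("flies",)): 0.05,   ("PP", ("P", "NP")): 1.0, ("P", ("like",)): 0.1,
    ("NP", ("Det", "N")): 0.4, ("Det", ("an",)): 0.3,    ("N", ("arrow",)): 0.02,
}

# The slide's parse, listed as the rules it uses (root first).
tree_rules = [("S", ("NP", "VP")), ("NP", ("time",)), ("VP", ("V", "PP")),
              ("V", ("flies",)), ("PP", ("P", "NP")), ("P", ("like",)),
              ("NP", ("Det", "N")), ("Det", ("an",)), ("N", ("arrow",))]

p_tree = prod(p_rule[r] for r in tree_rules)  # p(this tree | S)
print(p_tree)
```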

  14. The dynamic programming computation of α. (β is similar but works back from Stop.)
[Trellis with weather states C and H over the observations – Day 1: 2 cones, Day 2: 3 cones, Day 3: 3 cones.
Arc weights: p(C|Start)*p(2|C) = 0.5*0.2 = 0.1; p(H|Start)*p(2|H) = 0.5*0.2 = 0.1; p(C|C)*p(3|C) = 0.8*0.1 = 0.08; p(H|H)*p(3|H) = 0.8*0.7 = 0.56; p(C|H)*p(3|C) = 0.1*0.1 = 0.01; p(H|C)*p(3|H) = 0.1*0.7 = 0.07.
Day 1: αC = 0.1, αH = 0.1
Day 2: αC = 0.1*0.08 + 0.1*0.01 = 0.009; αH = 0.1*0.07 + 0.1*0.56 = 0.063
Day 3: αC = 0.009*0.08 + 0.063*0.01 = 0.00135; αH = 0.009*0.07 + 0.063*0.56 = 0.03591]
Analogies to α, β in a PCFG? Call these αH(2) and βH(2), αH(3) and βH(3).
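
The same forward computation in a few lines of Python, using the transition and emission probabilities shown on the slide; it reproduces 0.1, 0.009, 0.063, 0.00135 and 0.03591 (up to floating-point rounding). The dictionary layout is just one convenient encoding.

```python
# Ice-cream HMM from the slide: weather states C and H, observations = cones per day.
trans = {"Start": {"C": 0.5, "H": 0.5},
         "C":     {"C": 0.8, "H": 0.1},
         "H":     {"C": 0.1, "H": 0.8}}
emit = {"C": {2: 0.2, 3: 0.1},
        "H": {2: 0.2, 3: 0.7}}
states = ["C", "H"]

obs = [2, 3, 3]  # Day 1: 2 cones, Day 2: 3 cones, Day 3: 3 cones

# Forward pass: alpha[t][s] = p(first t+1 observations, state s on day t+1)
alpha = [{s: trans["Start"][s] * emit[s][obs[0]] for s in states}]
for o in obs[1:]:
    prev = alpha[-1]
    alpha.append({s: sum(prev[r] * trans[r][s] for r in states) * emit[s][o]
                  for s in states})

for t, a in enumerate(alpha, start=1):
    print(f"Day {t}: alpha_C = {a['C']:.5f}, alpha_H = {a['H']:.5f}")
# Day 1: alpha_C = 0.10000, alpha_H = 0.10000
# Day 2: alpha_C = 0.00900, alpha_H = 0.06300
# Day 3: alpha_C = 0.00135, alpha_H = 0.03591
```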

  15. "Inside Probabilities"
[Same parse tree of "time flies like an arrow" and rule-probability product as on slide 13.]
• Sum over all VP parses of "flies like an arrow": βVP(1,5) = p(flies like an arrow | VP)
• Sum over all S parses of "time flies like an arrow": βS(0,5) = p(time flies like an arrow | S)

  16. Compute β Bottom-Up by CKY
[Same parse tree and rule-probability product as on slide 13.]
βVP(1,5) = p(flies like an arrow | VP) = …
βS(0,5) = p(time flies like an arrow | S) = βNP(0,1) * βVP(1,5) * p(S → NP VP | S) + …

  17. Compute β Bottom-Up by CKY
[CKY chart example with the weighted grammar: 1 S → NP VP, 6 S → Vst NP, 2 S → S PP, 1 VP → V NP, 2 VP → VP PP, 1 NP → Det N, 2 NP → NP PP, 3 NP → NP NP, 0 PP → P NP]

  18. Compute β Bottom-Up by CKY
[Same chart, with the weights turned into probabilities: 2^-1 S → NP VP, 2^-6 S → Vst NP, 2^-2 S → S PP, 2^-1 VP → V NP, 2^-2 VP → VP PP, 2^-1 NP → Det N, 2^-2 NP → NP PP, 2^-3 NP → NP NP, 2^-0 PP → P NP. One S parse of the whole sentence has inside probability 2^-22.]

  19. Compute β Bottom-Up by CKY
[Same probabilistic grammar and chart. Another S parse of the whole sentence has inside probability 2^-27.]

  20. The Efficient Version: Add as we go
[Same probabilistic grammar: 2^-1 S → NP VP, 2^-6 S → Vst NP, 2^-2 S → S PP, 2^-1 VP → V NP, 2^-2 VP → VP PP, 2^-1 NP → Det N, 2^-2 NP → NP PP, 2^-3 NP → NP NP, 2^-0 PP → P NP.]

  21. The Efficient Version: Add as we go
[Same probabilistic grammar and chart. The cell for the whole sentence accumulates βS(0,5) += βS(0,2) * βPP(2,5) * p(S → S PP | S), adding together the parses found so far (+2^-22, +2^-27).]

  22. Compute β probs bottom-up (CKY)
[Diagram: constituent X spanning (i,k) built from Y spanning (i,j) and Z spanning (j,k). Callouts: what if you changed + to max? what if you replaced all rule probabilities by 1?]
(need some initialization up here for the width-1 case)
for width := 2 to n (* build smallest first *)
  for i := 0 to n-width (* start *)
    let k := i + width (* end *)
    for j := i+1 to k-1 (* middle *)
      for all grammar rules X → Y Z
        βX(i,k) += p(X → Y Z | X) * βY(i,j) * βZ(j,k)
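
A direct transcription of this pseudocode into Python. The representation of the grammar as two dictionaries (lexical and binary rule probabilities) is an assumption made for the sketch, not something specified in the lecture.

```python
from collections import defaultdict

def inside_probs(words, lexical_rules, binary_rules):
    """Inside (beta) probabilities by CKY.

    lexical_rules: dict mapping (X, word) -> p(X -> word | X)
    binary_rules:  dict mapping (X, Y, Z) -> p(X -> Y Z | X)
    Returns beta with beta[(X, i, k)] = p(words[i:k] | X)."""
    n = len(words)
    beta = defaultdict(float)
    # Initialization for the width-1 case: lexical rules X -> word.
    for i, w in enumerate(words):
        for (X, word), p in lexical_rules.items():
            if word == w:
                beta[(X, i, i + 1)] += p
    for width in range(2, n + 1):            # build smallest first
        for i in range(0, n - width + 1):    # start
            k = i + width                    # end
            for j in range(i + 1, k):        # middle
                for (X, Y, Z), p in binary_rules.items():
                    beta[(X, i, k)] += p * beta[(Y, i, j)] * beta[(Z, j, k)]
    return beta
```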

  23. Inside & Outside Probabilities
[Parse tree of "time flies like an arrow today", with the VP covering words 1–5.]
• "outside" the VP: αVP(1,5) = p(time VP today | S)
• "inside" the VP: βVP(1,5) = p(flies like an arrow | VP)
• αVP(1,5) * βVP(1,5) = p(time [VP flies like an arrow] today | S)

  24. Inside & Outside Probabilities
[Same parse tree.]
• αVP(1,5) = p(time VP today | S); βVP(1,5) = p(flies like an arrow | VP)
• αVP(1,5) * βVP(1,5) = p(time flies like an arrow today & VP(1,5) | S)
• Dividing by βS(0,6) = p(time flies like an arrow today | S) gives p(VP(1,5) | time flies like an arrow today, S)

  25. Inside & Outside Probabilities
[Same parse tree, plus an ice-cream trellis inset: strictly analogous to forward-backward in the finite-state case!]
• αVP(1,5) = p(time VP today | S); βVP(1,5) = p(flies like an arrow | VP)
• So αVP(1,5) * βVP(1,5) / βS(0,6) is the probability that there is a VP here, given all of the observed data (words)

  26. Inside & Outside Probabilities
[Same parse tree.]
• αVP(1,5) = p(time VP today | S); βV(1,2) = p(flies | V); βPP(2,5) = p(like an arrow | PP)
• So αVP(1,5) * βV(1,2) * βPP(2,5) / βS(0,6) is the probability that there is VP → V PP here, given all of the observed data (words) … or is it?

  27. Inside & Outside Probabilities
[Same parse tree and trellis inset: strictly analogous to forward-backward in the finite-state case!]
• αVP(1,5) = p(time VP today | S); βV(1,2) = p(flies | V); βPP(2,5) = p(like an arrow | PP)
• So αVP(1,5) * p(VP → V PP) * βV(1,2) * βPP(2,5) / βS(0,6) is the probability that there is a VP → V PP here (at 1-2-5), given all of the observed data (words)
• Sum this probability over all position triples like (1,2,5) to get the expected count c(VP → V PP); reestimate the PCFG!
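
As a sketch, the expected count of a rule can be accumulated exactly as this slide says: sum α * p(rule) * β * β over all position triples, then divide by the sentence probability βS(0,n). The alpha/beta tables are assumed to be keyed as in the inside/outside sketches elsewhere in this transcript.

```python
def expected_rule_count(alpha, beta, rule_prob, X, Y, Z, n, root="S"):
    """Expected count of X -> Y Z in one sentence of length n:
    sum over all (i, j, k) of alpha_X(i,k) * p(X -> Y Z) * beta_Y(i,j)
    * beta_Z(j,k), divided by beta_root(0, n) = p(sentence)."""
    total = 0.0
    for i in range(n):                     # start of X
        for j in range(i + 1, n):          # split point between Y and Z
            for k in range(j + 1, n + 1):  # end of X
                total += (alpha[(X, i, k)] * rule_prob[(X, Y, Z)]
                          * beta[(Y, i, j)] * beta[(Z, j, k)])
    return total / beta[(root, 0, n)]
```

Re-estimating the PCFG then divides each rule's expected count, summed over the corpus, by the expected count of its left-hand side.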

  28. Compute β probs bottom-up
Summary: βVP(1,5) += p(V PP | VP) * βV(1,2) * βPP(2,5)
(inside of 1,5 += rule prob * inside of 1,2 * inside of 2,5)
i.e. p(flies like an arrow | VP) += p(V PP | VP) * p(flies | V) * p(like an arrow | PP)
[Tree fragment: VP(1,5) over V(1,2) "flies" and PP(2,5) "like an arrow".]

  29. Compute α probs top-down (uses β probs as well)
Summary: αV(1,2) += p(V PP | VP) * αVP(1,5) * βPP(2,5)
(outside of 1,2 += rule prob * outside of 1,5 * inside of 2,5)
i.e. αV(1,2) = p(time V like an arrow today | S) += p(time VP today | S) * p(V PP | VP) * p(like an arrow | PP)
[Full parse tree of "time flies like an arrow today".]

  30. Compute α probs top-down (uses β probs as well)
Summary: αPP(2,5) += p(V PP | VP) * αVP(1,5) * βV(1,2)
(outside of 2,5 += rule prob * outside of 1,5 * inside of 1,2)
i.e. αPP(2,5) = p(time flies PP today | S) += p(time VP today | S) * p(V PP | VP) * p(flies | V)
[Full parse tree of "time flies like an arrow today".]

  31. Compute β probs bottom-up
• When you build VP(1,5) from VP(1,2) and PP(2,5) during CKY, increment βVP(1,5) by p(VP → VP PP) * βVP(1,2) * βPP(2,5)
• Why? βVP(1,5) is the total probability of all derivations of "flies like an arrow" from VP, and we just found another. (See the earlier slide of the CKY chart.)
[Tree fragment: VP(1,5) over VP(1,2) "flies" and PP(2,5) "like an arrow".]

  32. Compute β probs bottom-up (CKY)
[Diagram: constituent X spanning (i,k) built from Y spanning (i,j) and Z spanning (j,k).]
for width := 2 to n (* build smallest first *)
  for i := 0 to n-width (* start *)
    let k := i + width (* end *)
    for j := i+1 to k-1 (* middle *)
      for all grammar rules X → Y Z
        βX(i,k) += p(X → Y Z) * βY(i,j) * βZ(j,k)

  33. Compute α probs top-down (reverse CKY)
[Diagram: constituent X spanning (i,k) with children Y spanning (i,j) and Z spanning (j,k).]
for width := n downto 2 (* "unbuild" biggest first *)
  for i := 0 to n-width (* start *)
    let k := i + width (* end *)
    for j := i+1 to k-1 (* middle *)
      for all grammar rules X → Y Z
        αY(i,j) += ???
        αZ(j,k) += ???

  34. Compute α probs top-down
• After computing β during CKY, revisit constituents in reverse order (i.e., bigger constituents first).
• When you "unbuild" VP(1,5) from VP(1,2) and PP(2,5):
  • increment αVP(1,2) by αVP(1,5) * p(VP → VP PP) * βPP(2,5)
  • increment αPP(2,5) by αVP(1,5) * p(VP → VP PP) * βVP(1,2)
• βVP(1,2) and βPP(2,5) were already computed on the bottom-up pass.
• αVP(1,2) is the total probability of all ways to generate VP(1,2) and all the outside words.
[Full parse tree of "time flies like an arrow today", with VP(1,5) built from VP(1,2) and PP(2,5).]

  35. Compute α probs top-down (reverse CKY)
[Diagram: constituent X spanning (i,k) with children Y spanning (i,j) and Z spanning (j,k).]
for width := n downto 2 (* "unbuild" biggest first *)
  for i := 0 to n-width (* start *)
    let k := i + width (* end *)
    for j := i+1 to k-1 (* middle *)
      for all grammar rules X → Y Z
        αY(i,j) += αX(i,k) * p(X → Y Z) * βZ(j,k)
        αZ(j,k) += αX(i,k) * p(X → Y Z) * βY(i,j)
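
A Python transcription of this outside pass, to pair with the inside pass sketched after slide 22. It assumes the same dictionary representation; the initialization αroot(0,n) = 1 is the standard base case and is not shown on the slide.

```python
from collections import defaultdict

def outside_probs(n, beta, binary_rules, root="S"):
    """Outside (alpha) probabilities by reverse CKY, given the inside
    (beta) table from the bottom-up pass.

    binary_rules: dict mapping (X, Y, Z) -> p(X -> Y Z | X)."""
    alpha = defaultdict(float)
    alpha[(root, 0, n)] = 1.0                # the root spans the whole sentence
    for width in range(n, 1, -1):            # "unbuild" biggest first
        for i in range(0, n - width + 1):    # start
            k = i + width                    # end
            for j in range(i + 1, k):        # middle
                for (X, Y, Z), p in binary_rules.items():
                    alpha[(Y, i, j)] += alpha[(X, i, k)] * p * beta[(Z, j, k)]
                    alpha[(Z, j, k)] += alpha[(X, i, k)] * p * beta[(Y, i, j)]
    return alpha
```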
