
Unsupervised language acquisition Carl de Marcken 1996


Presentation Transcript


  1. Unsupervised language acquisition, Carl de Marcken, 1996

  2. Viterbi-best parse of: T H E R E N T I S D U E. Lexicon (cost in bits): A, B, C, … 3 each; HE 4; HER 4; THE 3; HERE 5; THERE 5; RENT 7; IS 4; TIS 8; DUE 7; ERE 8.

  3. After character 1: best analysis: T, 3 bits.

  4. After character 2: (1,2) is not in the lexicon. (1,1)'s best analysis + (2,2), which exists: 3 + 3 = 6 bits (winner).

  5. After character 3: (1,3) is in the lexicon: THE, 3 bits. (1,1)'s best analysis + (2,3) HE, which exists: 3 + 4 = 7 bits (T-HE). (1,2)'s best analysis + (3,3): T-H-E, 6 + 3 = 9 bits. THE wins (3 bits).

  6. After character 4: (1,4) is not in the lexicon. (2,4) HER (4 bits), with the best analysis after (1), yields T-HER: 3 + 4 = 7 bits. (3,4) is not in the lexicon. Best up to 3 (THE) plus R yields THE-R, cost 3 + 3 = 6 bits (winner).

  7. Running tally so far: 1: T, 3; 2: T-H, 6; 3: THE, 3; 4: THE-R, 6.
  After character 5: (1,5) THERE: 5. (1,1) + (2,5) HERE = 3 + 5 = 8. (1,2) + (3,5) ERE = 6 + 8 = 14. (4,5) not in the lexicon. (1,4) + (5,5) = THE-R-E = 6 + 3 = 9. THERE is the winner (5 bits).
  After character 6: (1,6) not checked, because it exceeds the maximum lexicon entry length. (2,6) HEREN, (3,6) EREN, (4,6) REN, (5,6) EN: none in the lexicon. (1,5) + (6,6) = THERE-N = 5 + 3 = 8 (winner).

  8. After character 7: start with (3,7) ERENT: not in the lexicon. (1,3) + (4,7) RENT: THE-RENT = 3 + 7 = 10. ENT and NT not in the lexicon. (1,6) + (7,7) = THERE-N-T = 8 + 3 = 11. THE-RENT is the winner (10 bits).

  9. After character 8: start with RENTI: not in the lexicon; nor are ENTI, NTI, TI. (1,7) THE-RENT + (8,8) I = 10 + 3 = 13; the winner by default.
  After character 9: ENTIS is not in the lexicon, nor is NTIS. (1,6) THERE-N + (7,9) TIS = 8 + 8 = 16. (1,7) THE-RENT + (8,9) IS = 10 + 4 = 14. (1,8) THE-RENT-I + (9,9) S = 13 + 3 = 16. THE-RENT-IS is the winner (14 bits).

  10. After character 10: NTISD, TISD, ISD, SD not found. (1,9) THE-RENT-IS + (10,10) D = 14 + 3 = 17 (winner).
  After character 11: TISDU, ISDU, SDU, DU not found. (1,10) THE-RENT-IS-D + (11,11) U = 17 + 3 = 20. Winner: THE-RENT-IS-D-U (20).
  After character 12: ISDUE, SDUE, UE not found. (1,9) THE-RENT-IS + (10,12) DUE = 14 + 7 = 21. (1,11) THE-RENT-IS-D-U + (12,12) E = 20 + 3 = 23. Winner: THE-RENT-IS-DUE (21 bits).
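A minimal sketch of this dynamic program in Python. The lexicon and bit costs are the ones from the slides; the function and variable names are our own:

```python
import math

# Lexicon from the slides: entry -> cost in bits (single letters cost 3).
LEXICON = {"HE": 4, "HER": 4, "THE": 3, "HERE": 5, "THERE": 5,
           "RENT": 7, "IS": 4, "TIS": 8, "DUE": 7, "ERE": 8}
for c in "ABCDEFGHIJKLMNOPQRSTUVWXYZ":
    LEXICON.setdefault(c, 3)

def viterbi_parse(s, lexicon):
    """Cheapest segmentation of s into lexicon entries."""
    max_len = max(map(len, lexicon))          # never look back further than this
    best = [0.0] + [math.inf] * len(s)        # best[i] = min cost (bits) of s[:i]
    back = [0] * (len(s) + 1)                 # back[i] = where the last word starts
    for i in range(1, len(s) + 1):
        for j in range(max(0, i - max_len), i):
            word = s[j:i]
            if word in lexicon and best[j] + lexicon[word] < best[i]:
                best[i] = best[j] + lexicon[word]
                back[i] = j
    words, i = [], len(s)                     # walk the backpointers
    while i > 0:
        words.append(s[back[i]:i])
        i = back[i]
    return list(reversed(words)), best[len(s)]

print(viterbi_parse("THERENTISDUE", LEXICON))
# -> (['THE', 'RENT', 'IS', 'DUE'], 21.0)
```

The printed result matches the walkthrough: the cheapest parse is THE-RENT-IS-DUE at 21 bits.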

  11. Broad outline • Goal: take a large unbroken corpus (no indication of where word boundaries are) and find the best analysis of the corpus into words. • "Best"? Interpret the goal in the context of MDL (Minimum Description Length) theory. We begin with a corpus, and a lexicon which initially has all and only the individual characters (letters, or phonemes) as its entries.

  12. Iterate several times (e.g., 7 times):
  • Construct tentative new entries for the lexicon, with tentative counts; from the counts, calculate rough probabilities.
  • EM (Expectation/Maximization), iterated 5 times:
    • Expectation: find all possible occurrences of each lexical entry in the corpus; assign relative weights to each occurrence found, based on its probability; use this to assign (non-integral!) counts of words in the corpus.
    • Maximization: convert counts into probabilities.
  • Test each lexical entry to see whether the description length is better without it in the lexicon; if so, remove it.
  • Find the best parse (Viterbi parse), the one with the highest probability.
  A sketch of this loop in code appears below.
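A schematic of that loop in Python. Every helper name here is hypothetical; each one just labels the corresponding step on this slide:

```python
def learn_lexicon(corpus, n_outer=7, n_em=5):
    # Start from a lexicon containing only the individual characters.
    lexicon = initial_lexicon(corpus)                    # hypothetical helper
    for _ in range(n_outer):
        # 1. Tentative new entries with tentative counts -> rough probabilities.
        lexicon = add_tentative_entries(lexicon, corpus)
        # 2. EM, iterated a few times.
        for _ in range(n_em):
            counts = expectation(lexicon, corpus)        # forward/backward soft counts
            lexicon = maximization(counts)               # counts -> probabilities
        # 3. Drop entries that don't pay for themselves in description length.
        lexicon = prune_by_description_length(lexicon, corpus)
    # 4. Return the most probable (Viterbi) parse under the final lexicon.
    return viterbi_parse(corpus, lexicon)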

  13. T H E R E N T I S D U E • Lexicon: D, E, H, I, N, R, S, T, U • Counts: T 2, E 3, all others 1 • Total count: 12

  14. Step 0 • Initialize the lexicon with all of the symbols in the corpus (the alphabet, the set of phonemes, whatever it is). • Each symbol has a probability, which is simply its frequency. • There are no (non-trivial) chunks in the lexicon yet.

  15. Step 1 • 1.1 Create tentative members: TH HE ER RE EN NT TI IS SD DU UE • Give each of these a count of 1. • Now the total count of "words" in the corpus is 12 + 11 = 23. • Calculate new probabilities: pr(E) = 3/23; pr(TH) = 1/23. • The probabilities of the lexicon entries form a distribution (see the snippet below).
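In code, this step is a pair of counts (a sketch, using the corpus from slide 13; not de Marcken's implementation):

```python
from collections import Counter

corpus = "THERENTISDUE"
counts = Counter(corpus)                                        # T:2, E:3, all others 1
counts.update(corpus[i:i + 2] for i in range(len(corpus) - 1))  # 11 tentative bigrams
total = sum(counts.values())                                    # 12 + 11 = 23
probs = {w: n / total for w, n in counts.items()}
print(probs["E"], probs["TH"])                                  # 3/23 ≈ 0.130, 1/23 ≈ 0.043
```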

  16. Expectation/Maximization (EM), iterative • This is a widely used algorithm that does something important and almost miraculous: it finds good values for hidden parameters. • Expectation: find all occurrences of each lexical item in the corpus, using the Forward/Backward algorithm.

  17. Forward algorithm • Find all ways of parsing the corpus from the beginning to each point, and associate with each point the sum of the probabilities for all of those ways. We don’t know which is the right one, really.

  18. Forward • Start at position 1, after T: T HERENTISDUE. The only way to get there and put a word break there ("T HERENTISDUE") uses the word(?) "T", whose probability is 2/23. Forward(1) = 2/23. • Now, after position 2, after TH: there are 2 ways to get this: T H ERENTISDUE (a) or TH ERENTISDUE (b). (a) has probability 2/23 * 1/23 = 2/529 = 0.003781. (b) has probability 1/23 = 0.0435.

  19. So the Forward probability after letter 2 (after "TH") is 0.003781 + 0.0435 = 0.0472. After letter 3 (after "THE"), we have to consider the possibilities: (1) T-HE, (2) TH-E, and (3) T-H-E.

  20. (1) T-HE; (2) TH-E; (3) T-H-E • We calculate the probability of (1) as: prob of a break after position 1 ("T") = 2/23 = 0.0869, times prob(HE) (which is 1/23 = 0.0434) = 0.00378. • We combine cases (2) and (3), giving us for both together: prob of a break after position 2 (the H), already calculated as 0.0472, times prob(E): 0.0472 * 0.13 = 0.00616. • So Forward(3) = 0.00378 + 0.00616 = 0.00994.

  21. Forward • [Diagram: T H E, with two paths into the break after E: P1 (variants P1a, P1b) and P2.] The value of Forward here is the sum of the probabilities going by the two paths, P1 and P2.

  22. Forward • [Diagram: T H E, with paths P2 (variants P2a, P2b) and P3.] The value of Forward here is the sum of the probabilities going by the two paths, P2 and P3. You only need to look back (from where you are) the length of the longest lexical entry (which is now 2).

  23. Conceptually • We are computing for each break (between letters) what the probability is that there is a break there, by considering all possible chunkings of the (prefix) string, the string up to that point from the left. • This is the Forward probability of that break.
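A sketch of the Forward pass in Python, reusing the probs dictionary from the snippet under slide 15; the printed values match the worked example on slides 18-20:

```python
def forward(s, probs, max_len):
    """f[i] = summed probability of all ways of chunking the prefix s[:i]."""
    f = [0.0] * (len(s) + 1)
    f[0] = 1.0                                   # empty prefix: probability 1
    for i in range(1, len(s) + 1):
        # Only look back as far as the longest lexical entry.
        for j in range(max(0, i - max_len), i):
            word = s[j:i]
            if word in probs:
                f[i] += f[j] * probs[word]       # sum over all last-word choices
    return f

f = forward("THERENTISDUE", probs, 2)
print(f[1], f[2], f[3])   # ≈ 0.0870, 0.0472, 0.00994 (matches the slides)
```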

  24. Backward • We do exactly the same thing from right to left, giving us a Backward probability. [Diagram: the end of the string, … D U E, parsed from the right.]

  25. Now the tricky step: • T H E R E N T I S D U E • Note that we know the probability of the entire string (it's Forward(12), which is the sum of the probabilities of all the ways of chunking the string) = Pr(string). • What is the probability that -R- is a word, given the string?

  26. T H E R E N T I S D U E • That is, we’re wondering whether the R here is a chunk, or part of the chunk ER, or part of the chunk RE. It can’t be all three, but we’re not in a position (yet) to decide which it is. How do we count it? • We take the count of 1, and divide it up among the three options in proportion to their probabilities.

  27. T H E R E N T I S D U E • The probability that R is a word can be found in this expression: (a) count(R) = Forward(3) * pr(R) * Backward(5) / Pr(string), where Backward(5) is the backward probability of the suffix ENTISDUE. This is the fractional count that goes to R.
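A matching Backward pass, plus the fractional count for R, reusing forward and probs from the sketches above (the function and variable names are our own):

```python
def backward(s, probs, max_len):
    """b[i] = summed probability of all ways of chunking the suffix s[i:]."""
    b = [0.0] * (len(s) + 1)
    b[len(s)] = 1.0                              # empty suffix: probability 1
    for i in range(len(s) - 1, -1, -1):
        for j in range(i + 1, min(len(s), i + max_len) + 1):
            word = s[i:j]
            if word in probs:
                b[i] += probs[word] * b[j]       # first word, then the rest
    return b

s = "THERENTISDUE"
f, b = forward(s, probs, 2), backward(s, probs, 2)
# R is letter 4: the prefix THE ends at index 3, the suffix ENTISDUE starts at index 4.
count_R = f[3] * probs["R"] * b[4] / f[len(s)]   # fractional count for R as a word
```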

  28. Do this for all members of the lexicon • Compute Forward and Backward just once for the whole corpus, or for each sentence or subutterance if you have that information. • Compute the counts of all lexical items that conceivably could occur (in each sentence, etc.). • End of Expectation.

  29. We’ll go through something just like this again in a few minutes, when we calculate the Viterbi-best parse.

  30. Maximization • We now have a bunch of counts for the lexical items. None of the counts are integral (except by accident). • Normalize: take the sum of the counts over the lexicon = N, and calculate the frequency of each word = count(word)/N. • Set prob(word) = freq(word). (See the snippet below.)
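As a sketch (assuming counts maps each lexical entry to its fractional count):

```python
def maximization(counts):
    """Convert fractional counts into a probability distribution over the lexicon."""
    total = sum(counts.values())                 # N, the total (non-integral) count
    return {word: n / total for word, n in counts.items()}
```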

  31. Why "Maximization"? • Because the values for the probabilities that maximize the probability of the whole corpus are obtained by using the frequency values as the probabilities. • That's not obvious….

  32. Testing a lexical entry A lexical entry makes a positive contribution to the analysis iff the Description Length of the corpus is lower when we incorporate that lexical entry than when we don’t, all other things being equal. What is the description length (DL), and how is it calculated?
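In the usual two-part MDL accounting (a sketch of the general scheme, not necessarily de Marcken's exact formulation): DL = DL(lexicon) + the sum, over corpus tokens, of plog(word), where plog(w) = -log2 pr(w) is the cost in bits of pointing to w in the lexicon. An entry earns its keep only if the bits it saves in encoding the corpus exceed the bits it costs to store.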

  33. Approximately? • Since the plog is rarely an integer, you may have to round up to the next integer, but that's all. • So the more often something is used in the lexicon, the cheaper it is for the words that use it. • Just like in morphology.
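For instance, in code (a sketch; plog here means -log2 of the probability, rounded up to whole bits):

```python
import math

def plog_bits(prob):
    """Cost in bits of pointing to a lexical entry with this probability."""
    return math.ceil(-math.log2(prob))

print(plog_bits(3 / 23))   # E: -log2(3/23) ≈ 2.94 -> 3 bits
print(plog_bits(1 / 23))   # TH: -log2(1/23) ≈ 4.52 -> 5 bits
```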

  34. Let’s look at results • It doesn’t know whether it’s finding letters, letter chunks, morphemes, words, or phrases. • Why not? • Statistical learning is heavily structure-bound: don’t forget that! If the structure is there, it must be found.
