
CS626-449: Speech, NLP and the Web/Topics in AI


Presentation Transcript


  1. CS626-449: Speech, NLP and the Web/Topics in AI Pushpak Bhattacharyya CSE Dept., IIT Bombay Lecture-15: Probabilistic parsing; PCFG (contd.)

  2. Example of Sentence labeling: Parsing [S1 [S [S [VP [VB Come] [NP [NNP July]]]] [, ,] [CC and] [S [NP [DT the] [JJ UJF] [NN campus]] [VP [AUX is] [ADJP [JJ abuzz] [PP [IN with] [NP [ADJP [JJ new] [CC and] [VBG returning]] [NNS students]]]]]] [. .]]]
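
For readers who want to manipulate such a bracketed parse programmatically, the following sketch (assuming NLTK is installed; the string is just the slide's tree rewritten with round brackets) loads the tree and lists the rewrite rules it uses, which is exactly the information a PCFG attaches probabilities to.

    # Sketch: load the slide's bracketed parse with NLTK and list its rules.
    from nltk import Tree

    s = ("(S1 (S (S (VP (VB Come) (NP (NNP July)))) (, ,) (CC and)"
         " (S (NP (DT the) (JJ UJF) (NN campus))"
         " (VP (AUX is) (ADJP (JJ abuzz)"
         " (PP (IN with) (NP (ADJP (JJ new) (CC and) (VBG returning))"
         " (NNS students))))))"
         " (. .)))")

    t = Tree.fromstring(s)
    t.pretty_print()              # draw the parse tree as ASCII art
    for prod in t.productions():  # the CFG rules used in this parse
        print(prod)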

  3. Corpus • A collection of text, called a corpus, is used as a source of various language data • With annotation: more information, but manual annotation is labor intensive • Practice: label automatically, then correct manually • The famous Brown Corpus contains 1 million tagged words • Switchboard: another very famous corpus, with 2400 conversations, 543 speakers, many US dialects, annotated with orthography and phonetics

  4. Discriminative vs. Generative Model • W* = argmax_W P(W | SS) • Discriminative model: compute P(W | SS) directly • Generative model: compute P(W | SS) from P(W) · P(SS | W)
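
As a toy illustration of the generative route (a hedged sketch: SS is taken to be the observed speech signal, and the candidate transcriptions and all probabilities below are made up), W* is found by maximising P(W) · P(SS | W) over candidate word sequences.

    # Toy sketch of the generative decomposition W* = argmax_W P(W) * P(SS | W).
    # Candidates and probabilities are invented purely for illustration.
    p_w = {"recognise speech": 0.7,           # language-model prior P(W)
           "wreck a nice beach": 0.3}
    p_ss_given_w = {"recognise speech": 0.4,  # likelihood of the signal, P(SS | W)
                    "wreck a nice beach": 0.5}

    w_star = max(p_w, key=lambda w: p_w[w] * p_ss_given_w[w])
    print(w_star)   # "recognise speech": 0.7*0.4 = 0.28 beats 0.3*0.5 = 0.15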

  5. Notion of Language Models

  6. Language Models • N-grams: sequences of n consecutive words/characters • Probabilistic / Stochastic Context Free Grammars: • Simple probabilistic models capable of handling recursion • A CFG with probabilities attached to rules • Rule probabilities → how likely is it that a particular rewrite rule is used?
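
For instance, a bigram (n = 2) model can be estimated by relative frequency from a corpus. The sketch below uses a made-up three-sentence corpus, so the numbers are purely illustrative.

    # Sketch: maximum-likelihood bigram estimates, P(w2 | w1) = count(w1 w2) / count(w1).
    from collections import Counter

    corpus = ["the child played",
              "the teddy bear played",
              "the child hugged the teddy bear"]
    tokens = [w for sent in corpus for w in ["<s>"] + sent.split() + ["</s>"]]

    unigram_counts = Counter(tokens)
    bigram_counts = Counter(zip(tokens, tokens[1:]))

    def p_bigram(w2, w1):
        return bigram_counts[(w1, w2)] / unigram_counts[w1]

    print(p_bigram("child", "the"))   # 0.5: "the" occurs 4 times, twice followed by "child"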

  7. PCFGs • Why PCFGs? • Intuitive probabilistic models for tree-structured languages • Algorithms are extensions of HMM algorithms • Better than the n-gram model for language modeling.

  8. Formal Definition of PCFG • A PCFG consists of • A set of terminals {wk}, k = 1, …, V, e.g. {wk} = {child, teddy, bear, played, …} • A set of non-terminals {Ni}, i = 1, …, n, e.g. {Ni} = {NP, VP, DT, …} • A designated start symbol N1 • A set of rules {Ni → ζj}, where ζj is a sequence of terminals & non-terminals, e.g. NP → DT NN • A corresponding set of rule probabilities
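
These components map directly onto a small data structure. The sketch below encodes an illustrative grammar fragment built around the slide's example symbols; the particular rules and probabilities are assumptions, not taken from the lecture.

    # Sketch: a PCFG as plain data, following the definition above.
    terminals = {"child", "teddy", "bear", "played", "the"}
    nonterminals = {"S", "NP", "VP", "PP", "DT", "NN", "VBD"}
    start_symbol = "S"                        # the designated start symbol N1

    # Rules Ni -> zeta_j with their probabilities P(Ni -> zeta_j).
    rule_probs = {
        ("S",   ("NP", "VP")): 1.0,
        ("NP",  ("DT", "NN")): 0.2,
        ("NP",  ("NN",)):      0.5,
        ("NP",  ("NP", "PP")): 0.3,
        ("DT",  ("the",)):     1.0,
        ("NN",  ("child",)):   0.6,
        ("NN",  ("bear",)):    0.4,
        ("VP",  ("VBD",)):     1.0,
        ("VBD", ("played",)):  1.0,
    }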

  9. Rule Probabilities • Rule probabilities are such that Σj P(Ni → ζj) = 1 for every non-terminal Ni E.g., P(NP → DT NN) = 0.2, P(NP → NN) = 0.5, P(NP → NP PP) = 0.3 • P(NP → DT NN) = 0.2 means that 20% of the NP expansions in the training-data parses use the rule NP → DT NN
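
In practice such probabilities are obtained by relative-frequency (maximum-likelihood) estimation over a treebank: P(Ni → ζj) = count(Ni → ζj) / count(Ni). The sketch below uses made-up counts chosen to reproduce the slide's numbers.

    # Sketch: relative-frequency estimation of rule probabilities (counts are invented).
    from collections import defaultdict

    rule_counts = {
        ("NP", ("DT", "NN")): 200,
        ("NP", ("NN",)):      500,
        ("NP", ("NP", "PP")): 300,
    }

    lhs_totals = defaultdict(int)
    for (lhs, _), count in rule_counts.items():
        lhs_totals[lhs] += count

    rule_probs = {rule: count / lhs_totals[rule[0]]
                  for rule, count in rule_counts.items()}

    print(rule_probs[("NP", ("DT", "NN"))])   # 0.2, as on the slide
    # Every rule here expands NP, so the probabilities sum to 1, as required.
    assert abs(sum(rule_probs.values()) - 1.0) < 1e-9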

  10. Probability of a sentence • Notation: • wab – the subsequence wa … wb • Nj dominates wa … wb, or yield(Nj) = wa … wb (e.g., an NP node dominating "the sweet teddy bear") • Probability of a sentence: P(w1m) = Σt P(w1m, t) = Σt P(t) · P(w1m | t), where t ranges over parse trees of the sentence • If t is a parse tree for the sentence w1m, then P(w1m | t) = 1
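
Under a PCFG, the probability P(t) of a parse tree is the product of the probabilities of the rules used in it, so P(w1m) is obtained by summing P(t) over the parses whose yield is w1m. The sketch below makes this concrete with an assumed tiny grammar and a single hand-built tree; the tuple encoding and all numbers are illustrative.

    # Sketch: P(t) = product of the rule probabilities used in t;
    # P(w_1m) = sum of P(t) over parses t with yield(t) = w_1m.
    rule_probs = {
        ("S",   ("NP", "VP")): 1.0,
        ("NP",  ("DT", "NN")): 0.2,
        ("DT",  ("the",)):     1.0,
        ("NN",  ("bear",)):    0.5,
        ("VP",  ("VBD",)):     0.7,
        ("VBD", ("played",)):  1.0,
    }

    def tree_prob(tree):
        """tree is ("LABEL", [children]) for internal nodes, or a word string for leaves."""
        if isinstance(tree, str):
            return 1.0                        # terminals contribute no rule
        label, children = tree
        rhs = tuple(c if isinstance(c, str) else c[0] for c in children)
        prob = rule_probs[(label, rhs)]       # probability of the rule used at this node
        for child in children:
            prob *= tree_prob(child)
        return prob

    t = ("S", [("NP", [("DT", ["the"]), ("NN", ["bear"])]),
               ("VP", [("VBD", ["played"])])])
    parses = [t]                              # all parses of "the bear played" (just one here)
    print(sum(tree_prob(tt) for tt in parses))   # 0.2 * 0.5 * 0.7 = 0.07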
