
A Joint Segmenting and Labeling Approach for Chinese Lexical Analysis



  1. ECML PKDD 2008, Antwerp A Joint Segmenting and Labeling Approach for Chinese Lexical Analysis Xinhao Wang, Jiazhong Nie, Dingsheng Luo, and Xihong Wu Speech and Hearing Research Center, Department of Machine Intelligence, Peking University September 18th, 2008

  2. Cascaded Subtasks in NLP • Drawbacks of the pipeline: • Errors introduced by earlier subtasks propagate through the pipeline and are never recovered in downstream subtasks. • Information sharing among the subtasks is blocked by the pipeline design. • Typical cascade: Word Segmentation and Named Entity Recognition → POS Tagging → Chunking and Parsing → Word Sense Disambiguation.

  3. Researchers’ Efforts on Joint Processing • Reranking (Shi, 2007; Sutton, 2005; Zhang, 2003) • As an approximation of joint processing, it may miss the true optimum, which often lies outside the k-best list. • Treating multiple subtasks as a single one (Luo, 2003; Miller, 2000; Yi, 2005; Nakagawa, 2007; Ng, 2004) • The obstacle is the requirement of a corpus annotated with multi-level information. • Unified probabilistic models (Sutton, 2004; Duh, 2005) • Dynamic Conditional Random Fields (DCRFs) and Factorial Hidden Markov Models (FHMMs), which are trained jointly and perform all the subtasks at once. • Both DCRFs and FHMMs suffer from the absence of multi-level data annotation.

  4. A Unified Framework for Joint Processing • A WFST-based approach is presented to jointly perform a cascade of segmentation and labeling tasks. It has two notable features: • WFSTs offer a unified framework that can represent many widely used models, such as lexical constraints, n-gram language models, and Hidden Markov Models (HMMs); a unified transducer representation of multiple knowledge sources can thus be achieved. • Multiple WFSTs can be integrated into a single, fully composed WFST, which makes it possible to perform a cascade of subtasks with one-pass decoding.

  5. Weighted Finite State Transducers (WFSTs) • A WFST is a generalization of the finite-state automaton that realizes a weighted relation between strings. • Composition operation: example of WFST composition. Two simple WFSTs are shown in (a) and (b), in which states are represented by circles labeled with their unique numbers. Bold circles represent initial states, and double circles final states. The input label, output label, and weight of a transition t are marked as in(t):out(t)/weight(t). In (c), the composition of (a) and (b) is illustrated. A minimal sketch of the composition operation follows.
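
To make the composition operation concrete, here is a minimal sketch in Python. The arc-list encoding of a transducer is an illustrative assumption rather than the representation used in the paper or in any WFST toolkit, and epsilon transitions are deliberately ignored to keep the sketch short.

```python
from collections import defaultdict

def compose(A, B):
    """Compose transducers A and B (no epsilon handling).
    Each machine is a dict with 'start', 'finals' (a set), and
    'arcs': {state: [(in_label, out_label, weight, next_state)]}."""
    start = (A['start'], B['start'])
    arcs = defaultdict(list)
    stack, seen = [start], {start}
    while stack:
        qa, qb = stack.pop()
        for ia, oa, wa, na in A['arcs'].get(qa, []):
            for ib, ob, wb, nb in B['arcs'].get(qb, []):
                if oa == ib:                   # A's output feeds B's input
                    nxt = (na, nb)
                    arcs[(qa, qb)].append((ia, ob, wa + wb, nxt))
                    if nxt not in seen:
                        seen.add(nxt)
                        stack.append(nxt)
    finals = {(fa, fb) for fa in A['finals'] for fb in B['finals']} & seen
    return {'start': start, 'finals': finals, 'arcs': dict(arcs)}

# Toy example: A maps 'a' to 'x', B maps 'x' to 'y';
# the composition therefore maps 'a' to 'y' with weight 0.5 + 0.3.
A = {'start': 0, 'finals': {1}, 'arcs': {0: [('a', 'x', 0.5, 1)]}}
B = {'start': 0, 'finals': {1}, 'arcs': {0: [('x', 'y', 0.3, 1)]}}
print(compose(A, B)['arcs'])
```

Composition matches the output labels of the first machine against the input labels of the second and adds the weights, which is the mechanism by which the lexicon, language model, and HMM transducers are chained into a single machine.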

  6. Joint Chinese Lexical Analysis • The WFST-based approach • Uniform representation for multiple subtask models. • Integration of multiple models. • Tasks • Word segmentation, part-of-speech tagging, and person and location name recognition.

  7. Multiple Subtasks Modeling • An n-gram language model over word classes is adopted for word segmentation. • Hidden Markov Models (HMMs) are adopted for both name recognition and POS tagging (a toy Viterbi sketch follows below). • In name recognition, both Chinese characters and words serve as model units, and recognition is performed simultaneously with word segmentation.
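
As a concrete illustration of the HMM decoding used for POS tagging, here is a toy Viterbi tagger. The tag set, transition probabilities, and emission probabilities are invented for the example; the models reported in the paper are trained on the People's Daily corpus.

```python
import math

tags = ['N', 'V']
trans = {('<s>', 'N'): 0.7, ('<s>', 'V'): 0.3,
         ('N', 'N'): 0.4, ('N', 'V'): 0.6,
         ('V', 'N'): 0.8, ('V', 'V'): 0.2}
emit = {('N', 'dog'): 0.6, ('N', 'runs'): 0.1,
        ('V', 'dog'): 0.1, ('V', 'runs'): 0.7}

def viterbi(words):
    """Return the most probable tag sequence under the toy HMM."""
    # chart[i][t] = (best log prob of a path ending in tag t, backpointer)
    chart = [{t: (math.log(trans[('<s>', t)]
                           * emit.get((t, words[0]), 1e-9)), None)
              for t in tags}]
    for w in words[1:]:
        row = {}
        for t in tags:
            best = max(tags,
                       key=lambda p: chart[-1][p][0] + math.log(trans[(p, t)]))
            row[t] = (chart[-1][best][0] + math.log(trans[(best, t)])
                      + math.log(emit.get((t, w), 1e-9)), best)
        chart.append(row)
    # trace back from the best final tag
    t = max(tags, key=lambda x: chart[-1][x][0])
    path = [t]
    for row in reversed(chart[1:]):
        t = row[t][1]
        path.append(t)
    return list(reversed(path))

print(viterbi(['dog', 'runs']))   # ['N', 'V']
```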

  8. The Pipeline System vs. The Joint System [Figure: two system diagrams. Pipeline baseline: decode the best segmentation, then compose and decode each labeling model in turn, producing output stage by stage. Integrated analyzer: compose all models into a single WFST and decode once.]

  9. Simulation Setup • Corpus: People's Daily of China, annotated by the Institute of Computational Linguistics of Peking University • January-May 1998 is used as the training set • June 1998 is the test set • The first 2,000 sentences of the test set are taken as the development set

  10. The Statistical Significance Test • The approximate randomization approach (Yeh, 2000) is adopted to test the performance improvement produced by the joint processing (a sketch is given below). • The F1 value of word segmentation is the evaluation metric tested. • The responses each system produced for each sentence are shuffled and randomly reassigned to the two systems, and the significance level is then computed from the shuffled results. • 10 sets of 500 sentences each are randomly selected and tested. For all 10 sets, the significance level p-values are far smaller than 0.001.
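
A minimal sketch of the approximate randomization test, assuming per-sentence F1 scores for the two systems; the helper name and the toy inputs are hypothetical.

```python
import random

def approx_randomization(scores_a, scores_b, shuffles=1 << 20):
    """Approximate randomization test (Yeh, 2000): estimate the
    probability that the observed difference in mean score between
    two systems arose by chance, by randomly swapping the paired
    per-sentence responses."""
    n = len(scores_a)
    observed = abs(sum(scores_a) - sum(scores_b)) / n
    at_least = 0
    for _ in range(shuffles):
        diff = 0.0
        for a, b in zip(scores_a, scores_b):
            if random.random() < 0.5:    # swap the two systems' responses
                a, b = b, a
            diff += a - b
        if abs(diff) / n >= observed:
            at_least += 1
    # add-one smoothing of the p-value, following Yeh (2000)
    return (at_least + 1) / (shuffles + 1)

# Toy usage with hypothetical per-sentence F1 scores (far fewer
# shuffles than the 2^20 of the real test, to keep the demo fast):
p = approx_randomization([0.92, 0.88, 0.95], [0.90, 0.85, 0.91],
                         shuffles=10_000)
print(p)
```

Summing per-sentence F1 values is a simplification here; in practice the corpus-level statistic would be recomputed from shuffled per-sentence counts.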

  11. Discussions • This approach retains the full search space and chooses the optimal result based on the multi-level knowledge sources, rather than reranking k-best candidates. • The models for each subtask are trained separately, while decoding is conducted jointly. This avoids the need for a corpus annotated with multi-level information. • When a segmentation task precedes a labeling task, the WFST-based approach naturally enforces the consistency constraint imposed by the segmentation. • The unified framework of WFSTs makes it easy to apply the presented analyzer in other natural language applications that are also based on WFSTs, such as speech recognition and machine translation.

  12. Conclusion • In this research, within the unified framework of WFSTs, a joint processing approach is presented that performs a cascade of segmentation and labeling subtasks. • It has been demonstrated that joint processing is superior to the traditional pipeline manner. • The findings suggest two directions for future research: • More linguistic knowledge can be integrated into the analyzer, such as organization name recognition and shallow parsing. • Since rich linguistic knowledge plays an important role in hard tasks such as ASR and MT, incorporating the integrated analyzer there may lead to promising performance improvements.

  13. Thank you for your attention!

  14. Uniform Representation (1) Lexicon WFSTs. (a) is the FSA representing an input example; (b) is the FST representing a toy dictionary. A sketch of such a lexicon transducer follows.
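
A minimal sketch of how a toy dictionary might be encoded as a lexicon FST, assuming a simple arc-list representation; the dictionary, the encoding, and the loop-back design are illustrative, not the paper's construction.

```python
EPS = '<eps>'

def lexicon_fst(words):
    """Build arcs {state: [(in_char, out_label, next_state)]} that
    read a word's characters and emit the word on the first arc
    (epsilon elsewhere). Word-final arcs loop back to state 0, so
    the machine accepts any sequence of dictionary words."""
    arcs = {0: []}
    nxt = 1
    for w in words:
        state = 0
        for i, ch in enumerate(w):
            out = w if i == 0 else EPS            # emit the word up front
            dest = 0 if i == len(w) - 1 else nxt  # loop back at word end
            arcs.setdefault(state, []).append((ch, out, dest))
            if dest != 0:
                state, nxt = nxt, nxt + 1
    return arcs

# Two toy "words" over characters; composing this with an input FSA
# yields every way of tiling the input with dictionary words.
print(lexicon_fst(['ab', 'a']))
```

Composed with the input FSA of (a), every accepting path of the result corresponds to one possible segmentation of the input into dictionary words.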

  15. Uniform Representation (2) The WFSA representing a toy bigram language model, where un(w1) denotes the unigram probability of w1; bi(w1, w2) and back(w1) respectively denote the bigram probability of w2 and the backoff weight given the word history w1. A sketch of backoff scoring follows.
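
A minimal sketch of the backoff scoring that such a WFSA encodes, with invented probabilities stored the way a tropical-semiring machine would carry them, as negative log weights: a known bigram is scored as bi(w1, w2), and an unseen one backs off to back(w1) + un(w2).

```python
import math

# Toy model parameters (probabilities, invented for the example).
unigram = {'w1': 0.6, 'w2': 0.4}
bigram = {('w1', 'w2'): 0.5}
backoff = {'w1': 0.8, 'w2': 1.0}

def score(history, word):
    """Negative log probability of `word` given `history`,
    taking the bigram arc if it exists, the backoff path otherwise."""
    if (history, word) in bigram:
        return -math.log(bigram[(history, word)])
    return -math.log(backoff[history]) - math.log(unigram[word])

print(score('w1', 'w2'))   # known bigram: -log 0.5
print(score('w2', 'w1'))   # backoff path: -log(1.0 * 0.6)
```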

  16. Uniform Representation (3) [Figure: the person-name sub-model (the CNAME class), with states for the surname, the first character of the given name, and the second character of the given name.] POS WFSTs. (a) is the WFST representing the relationship between words and POS tags; (b) is the WFSA representing a toy POS bigram model.

  17. The Statistical Significance Test • The approximate randomization approach (Yeh, 2000). • The responses each system produced for each sentence are shuffled and randomly reassigned to the two systems, and the significance level is then computed from the shuffled results. • The number of shuffles is fixed at 2^20. • Since the test set contains more than 21,000 sentences, using 2^20 shuffles to approximate the 2^21000 possible shuffles is no longer reasonable. Thus, 10 sets of 500 sentences each are randomly selected and tested. • For all 10 selected sets, the significance level p-values are far smaller than 0.001.
