
Meta-level Statistical Machine Translation System

Sajad Ebrahimi, Kourosh Meshgi, Shahram Khadivi and Mohammad Ebrahim Shiri Ahmad Abady. Human Language Technology Lab, Amirkabir University of Technology. IJCNLP 2013, Nagoya, Japan.



  1. Meta-level Statistical Machine Translation System
Sajad Ebrahimi, Kourosh Meshgi, Shahram Khadivi and Mohammad Ebrahim Shiri Ahmad Abady
Human Language Technology Lab, Amirkabir University of Technology
IJCNLP 2013, Nagoya, Japan


  3. 1. Introduction
• Traditional approaches to system combination need multiple structurally different SMT systems.
• In this research, we focus on a single SMT system.
• We introduce a meta-level SMT which can learn how to reduce or modify translation errors.
• To do this, we utilize an ensemble learning algorithm called Stacking.
• The basic idea:
  • a collection of base-level SMTs is generated to obtain a meta-level corpus;
  • then a meta-level SMT is trained on this corpus.
• We address the issue of how to adapt Stacking to SMT.

  4. 2. Background
2.1 Log-linear model and statistical machine translation
• Given a source string $S$, the goal of SMT is to find a target string $\hat{T}$ from all possible translations: $\hat{T} = \arg\max_{T} \Pr(T \mid S) = \arg\max_{T} \sum_{m=1}^{M} \lambda_m h_m(T, S)$
• In meta-SMT, given a machinery output $T'$ (the base system's translation), the goal is to find a target sentence: $\hat{T} = \arg\max_{T} \Pr(T \mid T')$
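To make the log-linear decision rule concrete, here is a minimal sketch of n-best rescoring; the feature names, weights, and candidate list are illustrative inventions, not the paper's actual models.

    def loglinear_score(features, weights):
        """Log-linear score of one candidate: sum_m lambda_m * h_m(T, S)."""
        return sum(weights[name] * value for name, value in features.items())

    def decode(candidates, weights):
        """Return the candidate translation with the highest score.
        Each candidate is (target_string, {feature_name: log_value})."""
        return max(candidates, key=lambda c: loglinear_score(c[1], weights))

    # Hypothetical two-entry n-best list for one source sentence.
    weights = {"tm": 0.3, "lm": 0.5, "word_penalty": -0.1}
    nbest = [
        ("that is perfect .", {"tm": -2.1, "lm": -4.0, "word_penalty": 4}),
        ("that is great .", {"tm": -2.6, "lm": -3.9, "word_penalty": 4}),
    ]
    best, _ = decode(nbest, weights)
    print(best)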

  5. 2.2 Stacking for Classification
Overview
• Proposed by Wolpert (1992): learn a meta-level (or level-1) classifier based on the output of base-level (or level-0) classifiers, estimated via cross-validation as follows:
• Define a data set $D = \{(x_i, y_i)\}_{i=1}^{N}$, where $x_i$ is a feature vector and $y_i$ its class value.
• J-fold cross-validation: randomly split $D$ into $J$ disjoint, almost equal parts $D_1, \dots, D_J$.
• $\{A_1, \dots, A_K\}$: a set of $K$ different learning algorithms.

  6. 2.2 Stacking for Classification
Overview
• Define $D_j$ as the test set and $D^{(-j)} = D \setminus D_j$ as the training set of the j-th fold; $D^{meta}$ denotes the full meta-level data set.
• At each j-th step, given the learning algorithms, we invoke each of them on $D^{(-j)}$ to induce the classifiers $C_k^{(-j)}$ and apply them to the test part $D_j$.
• The concatenated predictions plus the original class value give one meta-level instance: $((C_1^{(-j)}(x_i), \dots, C_K^{(-j)}(x_i)), y_i)$.
• At the end of the entire cross-validation: $D^{meta} = \bigcup_{j=1}^{J} D_j^{meta}$.

  7. 2.2 Stacking for Classification
Overview
• $D^{meta}$ is applied to a learning algorithm to induce the meta-level classifier $C^{meta}$.
• Finally, all the learning algorithms $A_k$ are applied to the entire data set $D$, inducing the final base-level classifiers $C_k$ to be used at runtime.
• To classify a new instance, the concatenated predictions of all base-level classifiers form a meta-level vector that is assigned a class value by the meta-level classifier $C^{meta}$.
We adapt stacking to SMT in a principled way…
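For concreteness, here is a minimal sketch of the stacking procedure above using scikit-learn style estimators; the function names and the choice of scikit-learn are our assumptions, not part of the original slides.

    import numpy as np
    from sklearn.base import clone
    from sklearn.model_selection import KFold

    def build_meta_dataset(X, y, base_learners, n_folds=10):
        """J-fold cross-validation: on each fold j, fit every base learner
        on D^(-j) and record its predictions on the held-out part D_j.
        The concatenated predictions, paired with the original class
        values y, form the meta-level data set D^meta."""
        meta_X = np.zeros((len(X), len(base_learners)))
        for train_idx, test_idx in KFold(n_splits=n_folds).split(X):
            for k, learner in enumerate(base_learners):
                model = clone(learner).fit(X[train_idx], y[train_idx])
                meta_X[test_idx, k] = model.predict(X[test_idx])
        return meta_X, y

    def train_stacking(X, y, base_learners, meta_learner, n_folds=10):
        """Induce the level-1 classifier from D^meta, then refit all
        level-0 classifiers on the entire data set for runtime use."""
        meta_X, meta_y = build_meta_dataset(X, y, base_learners, n_folds)
        meta_clf = clone(meta_learner).fit(meta_X, meta_y)
        final_bases = [clone(l).fit(X, y) for l in base_learners]
        return final_bases, meta_clf

    def classify(x, final_bases, meta_clf):
        """Concatenated base-level predictions -> meta-level vector -> class."""
        z = np.array([[clf.predict([x])[0] for clf in final_bases]])
        return meta_clf.predict(z)[0]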

  8. 3. Adapting Stacking to SMT
• We adapt stacking to SMT as follows: [architecture diagram: new source sentences are translated by the base-level SMTs, and the meta-level SMT maps their outputs to the final target sentences]

  9. 3.1 Training base-level SMTs
• We train 5 phrase-based SMT systems on the training parts and obtain the results of these systems on the corresponding test sets. We need these results for the next step.
3.2 Training meta-level SMTs
• We gather the n-best outputs of the base-level SMTs on the corresponding test sets to:
  • build a meta-level corpus from these outputs along with the correct human translations;
  • then train a meta-SMT on this new corpus (see the sketch after this slide).
• We train our meta-SMTs on 10 meta-level corpora, progressively created from the n-best outputs of the base-level systems, n = 1, …, 10.
• We call these systems meta-SMT (1-best), meta-SMT (2-best), and so on.
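A minimal sketch of this corpus construction, under explicit assumptions: train_smt() and decode_nbest() are hypothetical wrappers around an external phrase-based toolkit such as Moses, not real APIs, and the fold arithmetic ignores remainders for brevity.

    def build_meta_corpus(src, ref, train_smt, decode_nbest, n_folds=5, n_best=1):
        """Split the bitext into folds; for each fold, train a base-level
        SMT on the remaining folds, decode the held-out fold, and pair
        every n-best hypothesis with its human reference translation."""
        fold_size = len(src) // n_folds
        meta_src, meta_ref = [], []
        for j in range(n_folds):
            lo, hi = j * fold_size, (j + 1) * fold_size
            # D^(-j): everything outside the j-th fold
            system = train_smt(src[:lo] + src[hi:], ref[:lo] + ref[hi:])
            for s, r in zip(src[lo:hi], ref[lo:hi]):
                for hyp in decode_nbest(system, s, n_best):
                    meta_src.append(hyp)  # machine output = meta source side
                    meta_ref.append(r)    # human reference = meta target side
        return meta_src, meta_ref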

  10. 3.3 Tuning meta-level SMTs
• To build a meta-level development set, we tune the 5 base-level SMT systems on the tuning parts and obtain the results of these systems on the corresponding test sets.
• Finally, a meta-level development set is created by gathering these outputs, paired with the correct human translations, to tune the meta-level SMTs.

  11. 4. Experiments
4.1 Data
• The corpus used for the training and cross-validation process is the Verbmobil project corpus.

  12. 4. Experiments
4.2 Experimental setup
• GIZA++ => bi-directional word alignment
• SRILM => language model training
• case-insensitive BLEU => quality measurement
• Moses decoder => phrase-based SMT (both base-level and meta-level)
• MERT => tuning the feature weights on the development data
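Since case-insensitive BLEU is the quality measure, here is a self-contained sketch of corpus-level BLEU-4 with lowercasing before n-gram matching; in practice a toolkit scorer (e.g. the one shipped with Moses) would be used, and this simplified version is our own.

    import math
    from collections import Counter

    def ngram_counts(tokens, n):
        return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

    def bleu(hypotheses, reference_lists, max_n=4):
        """Case-insensitive corpus-level BLEU with multiple references:
        clipped n-gram precisions combined geometrically, times the
        brevity penalty against the closest reference length."""
        clipped = [0] * max_n
        totals = [0] * max_n
        hyp_len = ref_len = 0
        for hyp, refs in zip(hypotheses, reference_lists):
            h = hyp.lower().split()
            rs = [r.lower().split() for r in refs]
            hyp_len += len(h)
            ref_len += min((abs(len(r) - len(h)), len(r)) for r in rs)[1]
            for n in range(1, max_n + 1):
                max_ref = Counter()
                for r in rs:
                    max_ref |= ngram_counts(r, n)  # per-n-gram max over refs
                counts = ngram_counts(h, n)
                clipped[n - 1] += sum(min(c, max_ref[g]) for g, c in counts.items())
                totals[n - 1] += max(len(h) - n + 1, 0)
        if 0 in clipped or hyp_len == 0:
            return 0.0
        log_prec = sum(math.log(c / t) for c, t in zip(clipped, totals)) / max_n
        bp = 1.0 if hyp_len > ref_len else math.exp(1 - ref_len / hyp_len)
        return 100.0 * bp * math.exp(log_prec)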

  13. 4. Experiments
4.3 Evaluation
• BLEU (%) scores of the baseline SMT and the meta-SMTs on the Verbmobil test set, which contains 250 sentences with four reference translations. [scores table not preserved in the transcript]

  14. 4. Experiments
4.3 Evaluation: some examples
• Delete a wrong word:
  EN: that is perfect . then we have talked about everything . goodbye .
  FA (main): آن عالی است . پس ما همه چیز درباره ‌ اش صحبت کردیم میبینم . خداحافظ .
  FA (meta): آن عالی است . پس ما همه چیز دیروز صحبت کردیم . خداحافظ .
• Translate an untranslated word:
  EN: I think we will take the Metropol hotel . could you reserve two single rooms ?
  FA (main): من فکر میکنم ما را هتل Metropol . میتوانیم شما دو رزرو roomsمجزا ؟
  FA (meta): من فکر میکنم ما را هتل Metropol . میتوانیم شما دو رزرو اتاقها بیندازم تک ؟
  EN: yes , I would suggest the flight at a quarter past seven .
  FA (main): بله ، من را پیشنهاد میکنم flightیک ربع بعد از ساعت هفت .
  FA (meta): بله ، من را پیشنهاد میکنم پرواز یک ربع بعد از ساعت هفت .

  15. 4. Experiments
4.3 Evaluation: some examples
• Rephrasing and reordering:
  EN: the best thing would be for us to take the subway from our hotel to the station .
  FA (main): بهترین چیز برای ما خواهد بود تارااز متروهتل ما تا ایستگاه .
  FA (meta): بهترین چیز برای ما خواهد بود تا از هتل ما تا ایستگاهمترو .

  16. 4. Experiments
4.3 Evaluation
• Two factors possibly contribute to these results:
  • performing cross-validation on the training set;
  • re-optimization of the system.
• We performed two experiments to investigate the effect of each factor:
  • Straight1 => test the approach without any cross-validation process, but with the development set obtained from stacking.
  • Straight2 => build meta-level SMTs tuned with a development set obtained directly from the baseline SMT (i.e., without performing cross-validation on it).

  17. 4. Experiments
4.3 Evaluation
• Comparison of Stacking, Straight1 and Straight2. [comparison table not preserved in the transcript]

  18. 4. Experiments
4.3 Evaluation
• After analyzing the results, it can be concluded that both factors, i.e., cross-validation and re-optimizing the system with the stacking-based development set, are important for outperforming the baseline SMT system, since using both factors consistently leads to the best results.
• We conducted statistical significance tests using the paired bootstrap resampling method proposed by Koehn (2004) to measure the reliability of the conclusion that the meta-SMTs are really better than the baseline SMT. All stacking-based meta-SMTs are better than the baseline SMT at the 99% confidence level (a sketch of the test follows below).
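A minimal sketch of paired bootstrap resampling in the spirit of Koehn (2004); the function and parameter names are ours, and `metric` can be any corpus-level scorer, e.g. the BLEU sketch on slide 12.

    import random

    def paired_bootstrap(hyps_a, hyps_b, ref_lists, metric,
                         n_samples=1000, seed=0):
        """Resample the test set with replacement n_samples times and
        count how often system A outscores system B; a win rate of at
        least 0.99 means A is better at the 99% confidence level."""
        rng = random.Random(seed)
        idx = list(range(len(ref_lists)))
        wins = 0
        for _ in range(n_samples):
            sample = [rng.choice(idx) for _ in idx]
            score_a = metric([hyps_a[i] for i in sample],
                             [ref_lists[i] for i in sample])
            score_b = metric([hyps_b[i] for i in sample],
                             [ref_lists[i] for i in sample])
            if score_a > score_b:
                wins += 1
        return wins / n_samples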

  19. 5. Related Work
• Xiao et al. (2010) presented a general solution for the adaptation of bagging and boosting to SMT. Their results showed that ensemble learning algorithms are promising in SMT.
• Simard et al. (2007a) trained a "mono-lingual" phrase-based SMT system on the output of an RBMT system, using the source side of the phrase-based system's training set and the corresponding human-translated (manually post-edited) references.
• Béchara et al. (2011) designed a full phrase-based SMT pipeline that included a translation step and a post-editing step, using a novel context-aware approach.

  20. 6. Conclusion and future work
• We have presented a simple and effective approach to translation error modification by building a meta-level SMT using a meta-level corpus that is created from the original corpus by cross-validation.
• Experimental results showed that such a meta-SMT can fix many translation errors that occur in the baseline translations.
• As future work, we plan to develop a technique for combining multiple SMT systems using the stacked generalization algorithm.

  21. 6. Conclusion and future work
• Moreover, we are running more tests with different language pairs and larger corpora.
• We will also apply our framework under different SMT paradigms such as hierarchical phrase-based SMT and syntax-based SMT.

  22. 7. References
Almut Silja Hildebrand and Stephan Vogel. 2008. Combination of machine translation systems via hypothesis selection from combined n-best lists. In Proc. of the 8th AMTA Conference, pages 254-261.
Evgeny Matusov, Nicola Ueffing and Hermann Ney. 2006. Computing consensus translation from multiple machine translation systems using enhanced hypotheses alignment. In Proc. of EACL 2006, pages 33-40.
Antti-Veikko Rosti, Spyros Matsoukas and Richard Schwartz. 2007. Improved word-level system combination for machine translation. In Proc. of the 45th Annual Meeting of the Association for Computational Linguistics, pages 312-319.
Michel Simard, Cyril Goutte, and Pierre Isabelle. 2007a. Statistical phrase-based post-editing. In Proc. of NAACL-HLT 2007, pages 508-515.
Hanna Béchara, Yanjun Ma, and Josef van Genabith. 2011. Post-editing for a statistical MT system. In Proc. of MT Summit XIII, pages 308-315.
Tong Xiao, Jingbo Zhu, Muhua Zhu, and Huizhen Wang. 2010. Boosting-based system combination for machine translation. In Proc. of ACL 2010, pages 739-748.
Philipp Koehn. 2004. Statistical significance tests for machine translation evaluation. In Proc. of EMNLP 2004, pages 388-395.
David H. Wolpert. 1992. Stacked generalization. Neural Networks, 5(2):241-259.
Leo Breiman. 1996b. Bagging predictors. Machine Learning, 24(2):123-140.

  23. THANK YOU
Amirkabir University of Technology
