Quality estimation (QE) and error analysis of machine translation (MT) output remain active areas in natural language processing (NLP) research. Many recent efforts have focused on machine learning (ML) systems to estimate MT quality, translation errors, post-editing speed or post-editing effort. As the accuracy of such ML tasks relies on the availability of corpora, there is an increasing need for large corpora of machine translations annotated with translation errors, together with error annotation guidelines that yield consistent annotations. Drawing on previous work on translation error taxonomies, we present the SCATE (Smart Computer-aided Translation Environment) MT error taxonomy, which is hierarchical in nature and is based upon the familiar notions of accuracy and fluency. In the SCATE annotation framework, we annotate fluency errors in the target text and accuracy errors in both the source and target text, while linking the source and target annotations. We also propose a novel method for alignment-based inter-annotator agreement (IAA) analysis and show that this method can be used effectively on large annotation sets. Using the SCATE taxonomy and guidelines, we create the first corpus of MT errors for the English-Dutch language pair, consisting of statistical machine translation (SMT) and rule-based machine translation (RBMT) errors. This corpus is a valuable resource not only for NLP tasks in this field but also for future studies of the relationship between MT errors and post-editing effort. Finally, we analyse the error profiles of the SMT and RBMT systems used in this study and compare the quality of these two different MT architectures based on the error types.