Evaluating Automatic Metrics with Incremental Machine Translation Systems

G Wu, SB Cohen, R Sennrich - arXiv preprint arXiv:2407.03277, 2024 - arxiv.org
We introduce a dataset of commercial machine translations, collected weekly over six years across 12 translation directions. Since commercial systems are commonly refined through human A/B testing, we assume they improve over time, which enables us to evaluate machine translation (MT) metrics by how consistently they prefer more recent translations. Our study confirms several previous findings in MT metrics research and demonstrates the dataset's value as a testbed for metric evaluation. We release our code at https://github.com/gjwubyron/Evo
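
The released code is at the link above; as a minimal sketch of the evaluation idea (not the authors' implementation), the snippet below scores hypothetical weekly snapshots of one system's output against a reference with sentence-level BLEU via sacrebleu, then quantifies a metric's preference for more recent translations as the Kendall rank correlation between snapshot time and metric score. The `weekly_snapshots` data, the choice of BLEU, and the use of Kendall's tau are assumptions for illustration only.

```python
# Sketch: if commercial MT systems improve over time, a good metric should
# tend to rank later translations above earlier ones. We measure that
# tendency as the Kendall rank correlation between time order and score.
from scipy.stats import kendalltau
import sacrebleu

# Hypothetical data: one reference and a system's output for the same
# source segment, collected at successive weekly snapshots.
reference = "The cat sat on the mat."
weekly_snapshots = [
    "Cat sat on mat.",          # earliest snapshot
    "The cat sat on mat.",      # one year later
    "The cat sat on the mat.",  # two years later
]

# Score each snapshot against the reference with a segment-level metric
# (sentence BLEU here; any segment-level MT metric could be substituted).
scores = [
    sacrebleu.sentence_bleu(hyp, [reference]).score
    for hyp in weekly_snapshots
]

# A metric that prefers more recent translations yields a positive tau;
# a tau near 1 means it ranks the snapshots in chronological order.
tau, _ = kendalltau(range(len(scores)), scores)
print(f"scores: {scores}")
print(f"Kendall tau (time vs. score): {tau:.2f}")
```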