MT Series: Assessing Human Parity in Machine Translation
28 May 2020, 5 pm CEST
If human evaluation is considered the gold standard in MT assessment, which factors must be taken into account when designing a human–machine parity evaluation so that its results are reliable? We have invited the authors of the paper A Set of Recommendations for Assessing Human–Machine Parity in Language Translation to share the results of their empirical analysis and their recommendations.
In this one-hour session, you will learn about:
- Assessing Human–Machine Parity in Language Translation: Results & Recommendations, Sheila Castilho (ADAPT Centre, Dublin City University), Antonio Toral (Center for Language and Cognition, University of Groningen)
- MT Trends 2020, TAUS