The objective of the NIST Open Machine Translation (OpenMT) evaluation series is to support research in, and help advance the state of the art of, machine translation (MT) technologies, that is, technologies that translate text between human languages. Input may include all forms of text. The goal is for the output to be an adequate and fluent translation of the original.
The MT evaluation series started in 2001 as part of the DARPA TIDES program. In their current form, the evaluations are driven and coordinated by NIST as NIST OpenMT. They provide an important contribution to the direction of research efforts and the calibration of technical capabilities in MT. The OpenMT evaluations are intended to be of interest to all researchers working on the general problem of automatic translation between human languages. To this end, they are designed to be simple, to focus on core technology issues, to be fully supported, and to be accessible to all those wishing to participate. The most recently completed NIST OpenMT evaluation, MT09, took place in June 2009. MT09 featured three language pairs, the second cycle of a progress test, and, for the first time, system combination categories. Results of past NIST OpenMT and DARPA TIDES MT evaluations, as well as resources specific to each evaluation, can be accessed via the year-specific links at the bottom of this page.
In the fall of 2015, OpenMT introduced an MT Challenge.
Email firstname.lastname@example.org with questions for NIST related to MT.
To request to be added to NIST's MT mailing list, email email@example.com.