| Author(s): | Brian A. Weiss; Craig I. Schlenoff; Gregory A. Sanders; Michelle P. Steves; Sherri Condon; Jon Phillips; Dan Parvaz |
| Title: | Performance Evaluation of Speech Translation Systems |
| Published: | May 28, 2008 |
| Abstract: | One of the most challenging tasks for uniformed service personnel serving in foreign countries is effective verbal communication with the local population. To remedy this problem, several companies and academic institutions have been funded to develop machine translation systems as part of the DARPA TRANSTAC (Spoken Language Communication and Translation System for Tactical Use) program. The goal of this program is to demonstrate capabilities to rapidly develop and field free-form, two-way translation systems that would enable speakers of different languages to communicate with one another in real-world tactical situations. DARPA has mandated that each TRANSTAC technology be evaluated numerous times throughout the life of the program and has tasked the National Institute of Standards and Technology (NIST) to lead this effort. This paper describes the experimental design methodology and test procedures from the most recent evaluation, conducted in July 2007, which focused on English to/from Iraqi Arabic. |
| Conference: | 6th Language Resources and Evaluation Conference (LREC) |
| Proceedings: | Proceedings of the 6th Language Resources and Evaluation Conference |
| Dates: | May 28-30, 2008 |
| Keywords: | Performance Evaluation, SCORE Evaluation Framework, Utility, Machine Translation |
| Research Areas: | Metrology and Standards for Manufacturing Systems and Data |