Performance Analysis of the 2017 NIST Language Recognition Evaluation
Seyed Omid Sadjadi, Timothée N. Kheyrkhah, Craig S. Greenberg, Douglas A. Reynolds, Elliot Singer, Lisa Mason, Jaime Hernandez-Cordero
The 2017 NIST language recognition evaluation (LRE) was held in the autumn of 2017. Similar to past LREs, the basic task in LRE17 was language detection, with an emphasis on discriminating closely related languages (14 in total) selected from 5 language clusters, namely Arabic, Chinese, English, Iberian, and Slavic. Compared to previous LREs, LRE17 featured several new aspects including i) audio data extracted from online videos (AfV), ii) a small, yet representative, Dev set for system training and development, iii) system outputs in the form of log-likelihood scores (as opposed to log-likelihood ratios), and iv) a normalized cross-entropy performance measure as an alternative metric. A total of 18 teams from 25 research and industrial organizations participated in the evaluation and submitted 79 valid system outputs under the fixed and open training scenarios that were first introduced in LRE15. In this paper, we report a deeper analysis of system performance broken down by multiple factors such as data source and gender per language, as well as a cross-year (i.e., LRE15 versus LRE17) performance comparison of leading systems to measure progress over the 2-year period. In addition, we present a comparison of primary versus ``single'' best submissions to understand the effect of fusion on overall performance.