Summary of the 2015 NIST Language Recognition i-Vector Machine Learning Challenge

Published: June 24, 2016

Author(s)

Audrey N. Tong, Craig S. Greenberg, Alvin F. Martin, Desire Banse, John M. Howard, G. R. Doddington, Danilo B. Romero, Douglas A. Reynolds, Lisa Mason, Tina Kohler, Jaime Hernandez-Cordero, Elliot Singer, Alan McCree

Abstract

In 2015, NIST coordinated the first language recognition evaluation (LRE) to use i-vectors as input, with the goals of attracting researchers from outside the speech processing community to the language recognition problem, exploring new machine learning ideas for language recognition, and improving performance. This evaluation, the Language Recognition i-Vector Machine Learning Challenge, ran over a period of four months and was well received, drawing 56 participants and over 3,500 submissions and surpassing the participation levels of all previous traditional-track LREs. Forty-six of the 56 participants outperformed the baseline system, with the best system achieving approximately 55% relative improvement over the baseline.
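Because the challenge supplied fixed-dimensional i-vectors rather than raw audio, participating systems reduce to classifiers over those vectors. As an illustration only (not the challenge's actual baseline, whose details are not described here), the sketch below shows a simple cosine-scoring classifier that compares a test i-vector to a length-normalized mean i-vector per language; the function names, dimensions, and language labels are hypothetical.

```python
import numpy as np

def train_language_means(ivectors, labels):
    """Compute a unit-length mean i-vector per language.

    ivectors: (n_segments, dim) array of training i-vectors
    labels:   list of language labels, one per segment
    """
    means = {}
    for lang in set(labels):
        idx = [i for i, l in enumerate(labels) if l == lang]
        mean = ivectors[idx].mean(axis=0)
        means[lang] = mean / np.linalg.norm(mean)  # normalize for cosine scoring
    return means

def classify(ivector, language_means):
    """Score a test i-vector against each language mean by cosine similarity."""
    iv = ivector / np.linalg.norm(ivector)
    scores = {lang: float(iv @ mean) for lang, mean in language_means.items()}
    return max(scores, key=scores.get), scores

# Illustrative usage with random data; dimensions and labels are arbitrary.
rng = np.random.default_rng(0)
train_iv = rng.normal(size=(100, 400))                     # 100 segments, 400-dim i-vectors
train_labels = ["eng" if i < 50 else "spa" for i in range(100)]
means = train_language_means(train_iv, train_labels)
predicted, scores = classify(rng.normal(size=400), means)
print(predicted, scores)
```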
Conference Dates: June 21-24, 2016
Conference Location: Bilbao, Spain
Conference Title: Odyssey 2016: The Speaker and Language Recognition Workshop
Pub Type: Conferences

Keywords

language recognition, i-vector, benchmark test