Improved Speech Emotion Recognition using Transfer Learning and Spectrogram Augmentation
Published
Author(s)
Sarala Padi, Omid Sadjadi, Ram D. Sriram, Dinesh Manocha
Abstract
Automatic speech emotion recognition (SER) is a challenging task that plays a crucial role in natural human-computer interaction. One of the main challenges in SER is data scarcity, i.e., insufficient amounts of carefully labeled data to build and fully explore complex deep learning models for emotion classification. This paper aims to address this challenge using a transfer learning strategy combined with spectrogram augmentation. Specifically, we propose a transfer learning approach that leverages a pre-trained residual network (ResNet) model, including a statistics pooling layer, from a speaker recognition system trained on large amounts of speaker-labeled data. The statistics pooling layer enables the model to efficiently process variable-length input, thereby eliminating the need for sequence truncation, which is commonly used in SER systems. In addition, we adopt a spectrogram augmentation technique to generate additional training data samples by applying random time-frequency masks to log-mel spectrograms to mitigate overfitting and improve the generalization of emotion recognition models. We evaluate the effectiveness of our proposed approach on the interactive emotional dyadic motion capture (IEMOCAP) dataset. Experimental results indicate that the transfer learning and spectrogram augmentation approaches improve the SER performance, and when combined achieve state-of-the-art results.
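The two ingredients the abstract describes — random time-frequency masking of log-mel spectrograms, and statistics pooling that maps variable-length frame sequences to a fixed-size embedding — can be illustrated with a minimal NumPy sketch. This is not the paper's implementation; mask widths, counts, and the mean-fill value are illustrative assumptions.

```python
import numpy as np

def spec_augment(log_mel, num_freq_masks=1, num_time_masks=1,
                 max_freq_width=8, max_time_width=10, rng=None):
    """Apply random time-frequency masks to a log-mel spectrogram.

    log_mel: array of shape (n_mels, n_frames). Masked bands/frames
    are filled with the spectrogram mean. Hyperparameter values here
    are illustrative, not the paper's exact settings.
    """
    rng = rng or np.random.default_rng()
    out = log_mel.copy()
    fill = out.mean()
    n_mels, n_frames = out.shape
    for _ in range(num_freq_masks):
        w = int(rng.integers(0, max_freq_width + 1))   # mask width in mel bands
        f0 = int(rng.integers(0, max(1, n_mels - w)))  # random start band
        out[f0:f0 + w, :] = fill
    for _ in range(num_time_masks):
        w = int(rng.integers(0, max_time_width + 1))   # mask width in frames
        t0 = int(rng.integers(0, max(1, n_frames - w)))
        out[:, t0:t0 + w] = fill
    return out

def statistics_pooling(frame_features):
    """Pool frame-level features of shape (dim, n_frames) into a
    fixed-length utterance embedding [mean; std], so utterances of
    any duration yield the same embedding size."""
    mean = frame_features.mean(axis=1)
    std = frame_features.std(axis=1)
    return np.concatenate([mean, std])
```

For example, two utterances with 120 and 300 frames of 40-dimensional features both pool to an 80-dimensional vector, which is what lets the network accept variable-length input without truncation.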
Proceedings Title
International Conference on Multimodal Interaction (ICMI ’21)
Conference Dates
October 18-22, 2021
Conference Location
Montreal, Canada
Conference Title
ACM International Conference on Multimodal Interaction
Padi, S., Sadjadi, O., Sriram, R. and Manocha, D. (2021), Improved Speech Emotion Recognition using Transfer Learning and Spectrogram Augmentation, International Conference on Multimodal Interaction (ICMI ’21), Montreal, Canada, [online], https://tsapps.nist.gov/publication/get_pdf.cfm?pub_id=932172 (Accessed October 10, 2025)