Acoustic and Lexical Representations for Affect Prediction in Spontaneous Conversations.

Title: Acoustic and Lexical Representations for Affect Prediction in Spontaneous Conversations.
Publication Type: Journal Article
Year of Publication: 2015
Authors: Cao, H, Savran, A, Verma, R, Nenkova, A
Journal: Comput Speech Lang
Volume: 29
Issue: 1
Pagination: 203-217
Date Published: 2015 Jan 01
ISSN: 0885-2308
Abstract

In this article we investigate which representations of acoustics and word usage are most suitable for predicting dimensions of affect (AROUSAL, VALENCE, POWER, and EXPECTANCY) in spontaneous interactions. Our experiments are based on the AVEC 2012 challenge dataset. For lexical representations, we compare corpus-independent features based on psychological word norms of emotional dimensions with corpus-dependent representations. We find that a corpus-dependent bag-of-words approach, using the mutual information between words and emotion dimensions, is by far the best representation. For the analysis of acoustics, we zero in on the question of granularity. We confirm on our corpus that utterance-level features are more predictive than word-level features. Further, we study more detailed representations in which the utterance is divided into regions of interest (ROI), each with a separate representation. We introduce two ROI representations, which significantly outperform less informed approaches. In addition, we show that acoustic models of emotion can be improved considerably by taking annotator agreement into account and training the model on a smaller but more reliable dataset. Finally, we discuss the potential for improving prediction by combining the lexical and acoustic modalities. Simple fusion methods do not lead to consistent improvements over the lexical classifiers alone, but they do improve over the acoustic models.
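As a rough illustration of the lexical representation the abstract singles out, the sketch below (Python; hypothetical names and toy data, not the authors' implementation) weights a bag-of-words vocabulary by the mutual information between each word's presence and a binarized affect dimension, such as high versus low arousal.

# Illustrative sketch only: MI-weighted bag of words for a binarized affect label.
# The function names (mutual_information, mi_weighted_bow) and the toy data are assumptions.
import math
from collections import Counter

def mutual_information(word_present, labels):
    """MI (in nats) between a binary word-presence indicator and a binary label."""
    n = len(labels)
    joint = Counter(zip(word_present, labels))
    p_w = Counter(word_present)
    p_y = Counter(labels)
    mi = 0.0
    for (w, y), c in joint.items():
        p_wy = c / n
        mi += p_wy * math.log(p_wy / ((p_w[w] / n) * (p_y[y] / n)))
    return mi

def mi_weighted_bow(utterances, labels):
    """Return {word: MI with the label}, computed over whitespace-tokenized utterances."""
    vocab = {w for u in utterances for w in u.split()}
    weights = {}
    for word in vocab:
        presence = [int(word in u.split()) for u in utterances]
        weights[word] = mutual_information(presence, labels)
    return weights

# Toy usage: binary arousal labels (1 = high, 0 = low) for four utterances.
utts = ["that is wonderful news", "i am so excited", "it was a quiet day", "nothing much happened"]
arousal = [1, 1, 0, 0]
print(sorted(mi_weighted_bow(utts, arousal).items(), key=lambda kv: -kv[1])[:5])

In this sketch the MI scores simply rank or weight vocabulary items; how such weights enter the final feature vector for each affect dimension is a design choice left open here.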

DOI: 10.1016/j.csl.2014.04.002
Alternate Journal: Comput Speech Lang
PubMed ID: 25382936
PubMed Central ID: PMC4219625
Grant List: R01 MH073174 / MH / NIMH NIH HHS / United States