Temporal Bayesian Fusion for Affect Sensing: Combining Video, Audio, and Lexical Modalities.

Title: Temporal Bayesian Fusion for Affect Sensing: Combining Video, Audio, and Lexical Modalities
Publication Type: Journal Article
Year of Publication: 2015
Authors: Savran, A, Cao, H, Nenkova, A, Verma, R
Journal: IEEE Trans Cybern
Volume: 45
Issue: 9
Pagination: 1927-41
Date Published: 2015 Sep
ISSN: 2168-2275
Keywords: Bayes Theorem, Emotions, Facial Expression, Humans, Pattern Recognition, Automated, Video Recording
Abstract

The affective state of people changes in the course of conversations, and these changes are expressed externally in a variety of channels, including facial expressions, voice, and spoken words. Recent advances in automatic sensing of affect through cues in individual modalities have been remarkable, yet emotion recognition is far from a solved problem. Recently, researchers have turned their attention to multimodal affect sensing in the hope that combining different information sources would provide large improvements. However, reported results fall short of these expectations, indicating only modest benefits and occasionally even degradation in performance. We develop temporal Bayesian fusion for continuous, real-valued estimation of the valence, arousal, power, and expectancy dimensions of affect by combining video, audio, and lexical modalities. Our approach provides substantial gains in recognition performance compared to previous work. This is achieved by using a powerful temporal prediction model as the prior in Bayesian fusion and by incorporating uncertainties about the unimodal predictions. The temporal prediction model exploits time correlations in the affect sequences and employs estimated temporal biases to control the affect estimates at the beginning of conversations. In contrast to other recent methods for combining modalities, our model is simpler: it does not model relationships between modalities and involves only a few interpretable parameters to be estimated from the training data.
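The abstract describes fusing per-frame unimodal affect predictions, weighted by their uncertainties, under a temporal prior. The sketch below is a minimal illustration of that general idea only, not the paper's model: the Gaussian precision-weighted combination, the AR(1) temporal prior, the coefficient rho, the process-noise constant, and the initial-bias handling are all assumptions made here for demonstration, whereas in the paper the corresponding quantities are estimated from training data.

    import numpy as np

    def fuse_gaussian(means, variances, prior_mean, prior_var):
        """Precision-weighted fusion of independent Gaussian unimodal
        predictions with a Gaussian temporal prior (illustrative only)."""
        precisions = [1.0 / v for v in variances] + [1.0 / prior_var]
        weighted = [m / v for m, v in zip(means, variances)] + [prior_mean / prior_var]
        post_var = 1.0 / sum(precisions)
        post_mean = post_var * sum(weighted)
        return post_mean, post_var

    def temporal_bayesian_fusion(video, audio, lexical, variances, rho=0.9, bias=0.0):
        """Sequentially fuse per-frame predictions for one affect dimension
        (e.g. valence).

        video, audio, lexical: arrays of per-frame real-valued predictions.
        variances: per-modality prediction variances (the uncertainties).
        rho, bias: assumed AR(1) coefficient and starting bias of the
        temporal prior; these stand in for the parameters the paper
        estimates from training data.
        """
        T = len(video)
        est = np.empty(T)
        est_var = np.empty(T)
        prior_mean, prior_var = bias, 1.0  # prior at the start of the conversation
        for t in range(T):
            est[t], est_var[t] = fuse_gaussian(
                [video[t], audio[t], lexical[t]], variances, prior_mean, prior_var)
            # propagate the fused estimate through a simple AR(1) temporal model
            prior_mean = rho * est[t] + (1 - rho) * bias
            prior_var = rho ** 2 * est_var[t] + 0.1  # assumed process noise
        return est

Under these assumptions, modalities with larger prediction variance contribute less to each fused estimate, and the temporal prior pulls early-conversation estimates toward the bias term, mirroring the role the abstract attributes to the estimated temporal biases.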

DOI: 10.1109/TCYB.2014.2362101
Alternate Journal: IEEE Trans Cybern
PubMed ID: 25347894
Grant List: R01-MH-073174 / MH / NIMH NIH HHS / United States