SEQUENTIAL MODELING AND CROSS-MODAL ATTENTION FOR EMOTION RECOGNITION IN HUMAN INTERACTIONS
Publisher
Nazarbayev University School of Engineering and Digital Sciences
Abstract
Emotion recognition in human interactions is a central research direction in affective computing and is vital to developing emotionally intelligent systems. In this paper, a context-aware multimodal emotion recognition framework is presented that combines sequential modeling with cross-modal attention, enabling more effective integration of textual, audio, and visual modalities. The methodology employs modality-specific Bidirectional LSTM (BiLSTM) encoders to learn temporal dependencies within each modality. Text is embedded using Word2Vec, audio features are extracted using Librosa, and visual data is encoded into feature vectors using OpenFace.
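The sketch below illustrates the general idea of a modality-specific BiLSTM encoder as described in the abstract; it is not the author's implementation, and the feature dimensions, hidden size, and mean-pooling step are illustrative assumptions only.

import torch
import torch.nn as nn

class ModalityEncoder(nn.Module):
    """Encodes a sequence of per-token/per-frame features into one utterance vector."""
    def __init__(self, input_dim: int, hidden_dim: int = 128):
        super().__init__()
        # Bidirectional LSTM captures temporal dependencies in both directions.
        self.bilstm = nn.LSTM(input_dim, hidden_dim,
                              batch_first=True, bidirectional=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, input_dim)
        outputs, _ = self.bilstm(x)   # (batch, seq_len, 2 * hidden_dim)
        return outputs.mean(dim=1)    # simple temporal pooling (assumed, for illustration)

# Hypothetical input dimensions for the three modalities.
text_encoder = ModalityEncoder(input_dim=300)    # e.g., Word2Vec embeddings
audio_encoder = ModalityEncoder(input_dim=34)    # e.g., Librosa acoustic features
visual_encoder = ModalityEncoder(input_dim=709)  # e.g., OpenFace descriptors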
To accommodate the variation in each modality's relevance across emotional contexts, a learnable cross-modal attention mechanism was applied, allowing the model to dynamically fuse the modality-specific embeddings and attend to the most informative cues in each interaction. The model was trained and evaluated on the IEMOCAP and MELD datasets, both of which contain multimodal conversations spanning a variety of emotions. Experimental results show improved performance over baseline fusion strategies while retaining computational efficiency and interpretability. By integrating sequential modeling with cross-modal attention, the framework offers a strong yet scalable solution for emotion recognition in real-world human interactions.
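As one possible reading of the fusion step, the following sketch weights the three modality embeddings with a learnable attention score; the scoring function and dimensions are assumptions, not the paper's exact formulation.

import torch
import torch.nn as nn

class CrossModalAttentionFusion(nn.Module):
    """Learns per-modality relevance weights and fuses the embeddings."""
    def __init__(self, embed_dim: int):
        super().__init__()
        # One scalar relevance score per modality embedding, learned end-to-end.
        self.score = nn.Linear(embed_dim, 1)

    def forward(self, text_e, audio_e, visual_e):
        # Stack modality embeddings: (batch, 3, embed_dim)
        stacked = torch.stack([text_e, audio_e, visual_e], dim=1)
        # Softmax over the modality axis yields dynamic, context-dependent weights.
        weights = torch.softmax(self.score(stacked), dim=1)  # (batch, 3, 1)
        return (weights * stacked).sum(dim=1)                # fused vector (batch, embed_dim)

# Usage with the encoders sketched above (embed_dim = 2 * hidden_dim = 256).
fusion = CrossModalAttentionFusion(embed_dim=256)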
Citation
Mukhamadiyeva, A. (2025). Sequential Modeling and Cross-Modal Attention for Emotion Recognition in Human Interactions. Nazarbayev University School of Engineering and Digital Sciences
Creative Commons license
Except where otherwise noted, this item's license is described as Attribution 3.0 United States
