SEQUENTIAL MODELING AND CROSS-MODAL ATTENTION FOR EMOTION RECOGNITION IN HUMAN INTERACTIONS

dc.contributor.author: Mukhamadiyeva, Aigerim
dc.date.accessioned: 2025-06-02T10:10:41Z
dc.date.available: 2025-06-02T10:10:41Z
dc.date.issued: 2025-05-08
dc.description.abstract: Emotion recognition in human interactions is a central avenue of research in affective computing and is vital to developing emotionally intelligent systems. This paper presents a context-aware multimodal emotion recognition framework that combines sequential modeling with cross-modal attention, enabling more effective integration of textual, audio, and visual modalities. The methodology employs modality-specific Bidirectional LSTM (BiLSTM) encoders to learn temporal dependencies within each modality. Text is embedded with Word2Vec, audio is transformed into features with Librosa, and visual data is encoded into vectors with OpenFace. To accommodate the varying relevance of each modality across emotional contexts, a learnable cross-modal attention mechanism dynamically fuses the modality-specific embeddings, attending to the most informative cues in each interaction. The model was trained and evaluated on the IEMOCAP and MELD datasets, both of which consist of multimodal conversations spanning a variety of emotions. Experiments show a performance increase over baseline fusion strategies while retaining computational efficiency and interpretability. By integrating sequential modeling with cross-modal attention, the framework offers a strong yet scalable solution for emotion recognition in real-world human interactions.
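Since the thesis file itself is embargoed, the following is a minimal, self-contained PyTorch sketch of the pipeline the abstract describes: one BiLSTM encoder per modality, with a learnable cross-modal attention layer that weights and fuses the resulting embeddings before classification. It is an illustrative reconstruction, not the author's code; the feature dimensions (300-d Word2Vec text, Librosa audio, OpenFace visual), the scalar-score attention form, and the six-class output are all assumptions.

```python
# Illustrative sketch only: reconstructed from the abstract, not the
# embargoed thesis code. Dimensions and attention form are assumptions.
import torch
import torch.nn as nn


class ModalityEncoder(nn.Module):
    """BiLSTM over one modality's feature sequence; returns a summary vector."""

    def __init__(self, in_dim: int, hid_dim: int):
        super().__init__()
        self.lstm = nn.LSTM(in_dim, hid_dim, batch_first=True, bidirectional=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, time, in_dim); h: (2, batch, hid_dim) for one BiLSTM layer
        _, (h, _) = self.lstm(x)
        # Concatenate the final forward and backward hidden states
        return torch.cat([h[0], h[1]], dim=-1)  # (batch, 2 * hid_dim)


class CrossModalAttentionFusion(nn.Module):
    """Learnable attention over modality embeddings: a shared linear layer
    scores each modality per example, and softmax weights fuse them, so the
    most informative modality can dominate for a given interaction."""

    def __init__(self, emb_dim: int):
        super().__init__()
        self.score = nn.Linear(emb_dim, 1)

    def forward(self, embeddings: list[torch.Tensor]) -> torch.Tensor:
        stacked = torch.stack(embeddings, dim=1)             # (batch, n_mod, emb_dim)
        weights = torch.softmax(self.score(stacked), dim=1)  # (batch, n_mod, 1)
        return (weights * stacked).sum(dim=1)                # (batch, emb_dim)


class EmotionClassifier(nn.Module):
    # Placeholder dimensions: 300-d Word2Vec text, 128-d Librosa audio
    # features, 709-d OpenFace visual features, six emotion classes.
    def __init__(self, text_dim=300, audio_dim=128, visual_dim=709,
                 hid_dim=128, n_classes=6):
        super().__init__()
        self.text = ModalityEncoder(text_dim, hid_dim)
        self.audio = ModalityEncoder(audio_dim, hid_dim)
        self.visual = ModalityEncoder(visual_dim, hid_dim)
        self.fuse = CrossModalAttentionFusion(2 * hid_dim)
        self.head = nn.Linear(2 * hid_dim, n_classes)

    def forward(self, text, audio, visual):
        embs = [self.text(text), self.audio(audio), self.visual(visual)]
        return self.head(self.fuse(embs))  # (batch, n_classes) logits


if __name__ == "__main__":
    model = EmotionClassifier()
    text = torch.randn(4, 20, 300)    # 4 utterances, 20 tokens each
    audio = torch.randn(4, 50, 128)   # 50 audio frames per utterance
    visual = torch.randn(4, 30, 709)  # 30 video frames per utterance
    print(model(text, audio, visual).shape)  # torch.Size([4, 6])
```

Scoring each modality embedding with a shared linear layer and softmax-normalising across modalities is one simple way to let the fusion weights vary per example, matching the "dynamically fuse" behaviour the abstract describes; the actual thesis may use a different attention parameterisation.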
dc.identifier.citation: Mukhamadiyeva, A. (2025). Sequential Modeling and Cross-Modal Attention for Emotion Recognition in Human Interactions. Nazarbayev University School of Engineering and Digital Sciences.
dc.identifier.uri: https://nur.nu.edu.kz/handle/123456789/8688
dc.language.iso: en
dc.publisher: Nazarbayev University School of Engineering and Digital Sciences
dc.rights: Attribution 3.0 United States
dc.rights.uri: http://creativecommons.org/licenses/by/3.0/us/
dc.subject: Multimodal emotion recognition
dc.subject: Cross-Modal Attention
dc.subject: IEMOCAP
dc.subject: MELD
dc.subject: Sequential Modeling
dc.subject: BiLSTM
dc.subject: Deep Learning
dc.subject: type of access: embargo
dc.title: SEQUENTIAL MODELING AND CROSS-MODAL ATTENTION FOR EMOTION RECOGNITION IN HUMAN INTERACTIONS
dc.type: Master's thesis

Files

Original bundle

Name: Thesis_Aigerim_Mukhamadiyeva.pdf
Size: 5.41 MB
Format: Adobe Portable Document Format
Description: Master's thesis
Access status: Embargo until 2027-12-01