EAV: EEG-Audio-Video Dataset for Emotion Recognition in Conversational Contexts
| Field | Value | Language |
| --- | --- | --- |
| dc.contributor.author | Lee Min-Ho | |
| dc.contributor.author | Shomanov Adai | |
| dc.contributor.author | Begim Balgyn | |
| dc.contributor.author | Kabidenova Zhuldyz | |
| dc.contributor.author | Nyssanbay Aruna | |
| dc.contributor.author | Yazici Adnan | |
| dc.contributor.author | Lee Seong-Whan | |
| dc.date.accessioned | 2025-08-26T10:06:01Z | |
| dc.date.available | 2025-08-26T10:06:01Z | |
| dc.date.issued | 2024-09-19 | |
| dc.description.abstract | Understanding emotional states is pivotal for the development of next-generation human-machine interfaces. Human behavior in social interactions engages psycho-physiological processes that are shaped by perceptual inputs; efforts to understand brain function and human behavior could therefore catalyze the development of AI models with human-like attributes. In this study, we introduce a multimodal emotion dataset comprising 30-channel electroencephalography (EEG), audio, and video recordings from 42 participants. Each participant engaged in a cue-based conversation scenario eliciting five distinct emotions: neutral, anger, happiness, sadness, and calmness. Each participant contributed 200 interactions, encompassing both listening and speaking, for a cumulative total of 8,400 interactions across all participants. We evaluated baseline emotion recognition performance for each modality using established deep neural network (DNN) methods. The Emotion in EEG-Audio-Visual (EAV) dataset is the first public dataset to incorporate these three primary modalities for emotion recognition in a conversational context. We anticipate that this dataset will contribute significantly to the modeling of the human emotional process, from both fundamental neuroscience and machine learning viewpoints. | en |
| dc.identifier.citation | Lee Min-Ho; Shomanov Adai; Begim Balgyn; Kabidenova Zhuldyz; Nyssanbay Aruna; Yazici Adnan; Lee Seong-Whan. (2024). EAV: EEG-Audio-Video Dataset for Emotion Recognition in Conversational Contexts. Scientific Data. https://doi.org/10.1038/s41597-024-03838-4 | en |
| dc.identifier.doi | 10.1038/s41597-024-03838-4 | |
| dc.identifier.uri | https://doi.org/10.1038/s41597-024-03838-4 | |
| dc.identifier.uri | https://nur.nu.edu.kz/handle/123456789/10126 | |
| dc.language.iso | en | |
| dc.publisher | Springer Science and Business Media LLC | |
| dc.rights | All rights reserved | en |
| dc.source | (2024) | en |
| dc.title | EAV: EEG-Audio-Video Dataset for Emotion Recognition in Conversational Contexts | en |
| dc.type | article | en |
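As a quick consistency check on the figures reported in the abstract (42 participants, 200 interactions each, 8,400 in total, five emotion classes, three modalities), the minimal Python sketch below enumerates the dataset's composition. All names here are illustrative assumptions, not the dataset's actual file layout or loading API.

```python
# Sketch of the EAV dataset composition as described in the abstract.
# Constant names are illustrative assumptions, not part of the dataset's API.
EMOTIONS = ["neutral", "anger", "happiness", "sadness", "calmness"]
MODALITIES = ["eeg", "audio", "video"]  # 30-channel EEG, speech audio, face video
N_PARTICIPANTS = 42
INTERACTIONS_PER_PARTICIPANT = 200  # listening and speaking turns combined

total_interactions = N_PARTICIPANTS * INTERACTIONS_PER_PARTICIPANT
assert total_interactions == 8400  # matches the cumulative total in the abstract

# Each interaction is recorded in all three modalities, so a loader would
# yield one (eeg, audio, video, emotion_label) tuple per interaction.
print(f"{total_interactions} interactions x {len(MODALITIES)} modalities")
```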
Files
Original bundle
- Name: 10.1038_s41597-024-03838-4.pdf
- Size: 1.6 MB
- Format: Adobe Portable Document Format