EEG2FACE: EEG-DRIVEN EMOTIONAL 3D FACE RECONSTRUCTION


Publisher

Nazarbayev University School of Engineering and Digital Sciences

Abstract

The increasing use of 3D facial avatars in digital communication highlights the critical challenge of accurately capturing and replicating genuine emotional expressions. While most existing methods rely on visual data to recreate facial dynamics, the potential to decode these dynamics directly from brain activity remains largely unexplored. In this work, we propose a novel machine-learning framework that reconstructs 3D facial expressions from EEG signals alone. Using synchronized 3D pseudo-ground-truth extracted from the EAV dataset as supervision, our model decodes EEG signals into dynamic 3D face meshes that faithfully replicate the corresponding facial expressions. This approach bridges deep learning and neuroscience, presenting a first-of-its-kind system for neural-signal-to-3D reconstruction. Our findings establish a robust baseline for EEG-driven facial expression synthesis, with broad implications for generative modeling, representation learning, and brain-computer interface technologies. The model and code are publicly available at https://github.com/zizimars/EEG2Face.
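
The abstract does not detail the architecture, so the following is only a rough PyTorch sketch of how EEG-to-mesh regression with pseudo-ground-truth supervision could be wired up. All module names, layer sizes, the EEG channel count, and the vertex count (5023, borrowed from the common FLAME face template) are illustrative assumptions, not the authors' design; the actual implementation is at the GitHub link above.

# Hypothetical sketch (not the authors' released code) of an EEG-to-3D-mesh
# regression pipeline as described in the abstract. Shapes and sizes are
# assumptions for illustration only.
import torch
import torch.nn as nn

class EEGEncoder(nn.Module):
    """Encodes a windowed EEG signal (batch, channels, time) into a latent vector."""
    def __init__(self, n_channels: int = 30, latent_dim: int = 256):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(n_channels, 64, kernel_size=7, padding=3),
            nn.GELU(),
            nn.Conv1d(64, 128, kernel_size=5, padding=2),
            nn.GELU(),
            nn.AdaptiveAvgPool1d(1),  # pool features over the time axis
        )
        self.proj = nn.Linear(128, latent_dim)

    def forward(self, eeg: torch.Tensor) -> torch.Tensor:
        return self.proj(self.conv(eeg).squeeze(-1))

class MeshDecoder(nn.Module):
    """Maps the latent vector to the 3D vertex positions of a face mesh."""
    def __init__(self, latent_dim: int = 256, n_vertices: int = 5023):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(latent_dim, 512), nn.GELU(),
            nn.Linear(512, n_vertices * 3),
        )
        self.n_vertices = n_vertices

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        return self.mlp(z).view(-1, self.n_vertices, 3)

# One training step: supervise predicted vertices with synchronized
# 3D pseudo-ground-truth meshes (dummy tensors stand in for real data).
encoder, decoder = EEGEncoder(), MeshDecoder()
optimizer = torch.optim.Adam(
    list(encoder.parameters()) + list(decoder.parameters()), lr=1e-4
)

eeg_batch = torch.randn(8, 30, 500)    # (batch, channels, samples) -- dummy EEG window
gt_vertices = torch.randn(8, 5023, 3)  # pseudo-ground-truth meshes -- dummy data

pred = decoder(encoder(eeg_batch))
loss = nn.functional.mse_loss(pred, gt_vertices)  # per-vertex L2 loss
optimizer.zero_grad()
loss.backward()
optimizer.step()

A per-vertex L2 loss against the pseudo-ground-truth is the simplest plausible training signal for such a pipeline; the paper may well use a different loss or a sequence model over EEG windows.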

Citation

Kabidenova, Zh. (2025). EEG2Face: EEG-driven emotional 3D face reconstruction. Nazarbayev University School of Engineering and Digital Sciences.

Creative Commons license

Except where otherwise noted, this item's license is described as Attribution-NonCommercial-NoDerivs 3.0 United States.