DSpace Repository

EXPLAINABLE DEEP LEARNING FOR BRAIN-COMPUTER INTERFACES

dc.contributor.author Mun, Vladislav
dc.date.accessioned 2022-06-20T05:40:01Z
dc.date.available 2022-06-20T05:40:01Z
dc.date.issued 2022-05
dc.identifier.citation "Mun, V. (2022). Explainable Deep Learning for Brain-Computer Interfaces (Unpublished master's thesis). Nazarbayev University, Nur-Sultan, Kazakhstan" en_US
dc.identifier.uri http://nur.nu.edu.kz/handle/123456789/6286
dc.description.abstract A Brain-Computer Interface (BCI) is a continuously evolving technological framework that has been steadily gaining popularity over the past few decades. By recording brain activity and structure through various means, such as electrical potential recording, Magnetic-Resonance Imaging (MRI), or Near-Infrared Spectroscopy (NIRS), BCIs enable communication between a human and an external computing device. This opens up a wide range of applications, such as rehabilitation, prosthesis control, and the management and diagnosis of disorders such as Attention-Deficit Hyperactivity Disorder (ADHD). However, building a fast yet reliable BCI model remains a major challenge. Further complications include evaluating BCI models on datasets with strong spatial smearing (noise), along with the general intractability of non-linear (black-box) classifiers. This study addresses these problems. First, existing BCI models are analyzed across multiple BCI datasets, and a custom deep learning architecture with performance comparable to state-of-the-art BCI classifiers is proposed. Second, the practical feasibility of Layer-wise Relevance Propagation (LRP) for BCI is explored. Understanding the reasoning behind a model's feature selection may yield novel insights into neuroplasticity and subject-to-subject analysis. Furthermore, the study investigates the pruning potential of LRP, demonstrating efficient removal of unnecessary network complexity from the model. Finally, the study discusses directions for further development and testing of BCI systems, including the construction and practical feasibility of a virtual environment for prosthesis training and patient rehabilitation. en_US
dc.language.iso en en_US
dc.publisher Nazarbayev University School of Engineering and Digital Sciences en_US
dc.rights Attribution-NonCommercial-ShareAlike 3.0 United States
dc.rights.uri http://creativecommons.org/licenses/by-nc-sa/3.0/us/
dc.subject BCI en_US
dc.subject Brain-Computer Interface en_US
dc.subject Research Subject Categories::TECHNOLOGY en_US
dc.subject Type of access: Gated Access en_US
dc.subject Magnetic-Resonance Imaging en_US
dc.subject MRI en_US
dc.subject Near-Infrared Spectroscopy en_US
dc.subject NIRS en_US
dc.title EXPLAINABLE DEEP LEARNING FOR BRAIN-COMPUTER INTERFACES en_US
dc.type Master's thesis en_US
workflow.import.source science