EXPLAINABLE DEEP LEARNING FOR BRAIN-COMPUTER INTERFACES

dc.contributor.author: Mun, Vladislav
dc.date.accessioned: 2022-06-20T05:40:01Z
dc.date.available: 2022-06-20T05:40:01Z
dc.date.issued: 2022-05
dc.description.abstract: A Brain-Computer Interface (BCI) is a continuously evolving technological framework that has been steadily gaining popularity over the past few decades. By recording brain activity and structure through means such as electrical potential recording, Magnetic Resonance Imaging (MRI), or Near-Infrared Spectroscopy (NIRS), BCIs enable communication between a human and an external computing device. This opens a very wide range of possible applications, such as rehabilitation, prosthesis control, and the management or diagnosis of disorders such as Attention-Deficit Hyperactivity Disorder (ADHD). However, building a fast yet reliable BCI model remains one of the biggest challenges even today. Further complications include evaluating different BCI models on datasets with strong spatial smearing (noise), along with the general problem of non-linear classifiers being uninterpretable (black-box). This study therefore addresses these problems. First, a general analysis of existing BCI models on multiple BCI datasets is given, followed by a proposal of a custom deep learning architecture with performance comparable to state-of-the-art BCI classifiers. Second, the practical feasibility of Layer-wise Relevance Propagation (LRP) in the field of BCI is explored: knowing the reasoning behind a model's feature selection may lead to novel insights with respect to neuroplasticity and subject-to-subject analysis. Furthermore, the study investigates the pruning potential of LRP, showcasing efficient removal of unnecessary network complexity from the model. Finally, the study discusses ideas for the further development and testing of BCI systems, including the practical feasibility and construction of a virtual environment for prosthesis training and patient rehabilitation.
dc.identifier.citation: "Mun, V. (2022). Explainable Deep Learning for Brain-Computer Interfaces (Unpublished master's thesis). Nazarbayev University, Nur-Sultan, Kazakhstan"
dc.identifier.uri: http://nur.nu.edu.kz/handle/123456789/6286
dc.language.iso: en
dc.publisher: Nazarbayev University School of Engineering and Digital Sciences
dc.rights: Attribution-NonCommercial-ShareAlike 3.0 United States
dc.rights.uri: http://creativecommons.org/licenses/by-nc-sa/3.0/us/
dc.subject: BCI
dc.subject: Brain-Computer Interface
dc.subject: Research Subject Categories::TECHNOLOGY
dc.subject: type of access: gated access
dc.subject: Magnetic-Resonance Imaging
dc.subject: MRI
dc.subject: Near-Infrared Spectroscopy
dc.subject: NIRS
dc.title: EXPLAINABLE DEEP LEARNING FOR BRAIN-COMPUTER INTERFACES
dc.type: Master's thesis
workflow.import.source: science
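A note on the method named in the abstract: Layer-wise Relevance Propagation (LRP) redistributes a classifier's output score backwards through the network so that each input feature (for example, an EEG channel or time sample) receives a relevance value, which is also what makes relevance-guided pruning possible. The following is a minimal, hypothetical NumPy sketch of the LRP epsilon-rule for a single dense layer; it only illustrates the general technique and does not reproduce the thesis's actual architecture, data, or LRP variant.

import numpy as np

def lrp_epsilon(a, W, b, R_out, eps=1e-6):
    # Redistribute the relevance R_out of a dense layer's outputs onto its inputs
    # using the epsilon-rule: R_i = a_i * sum_j W_ij * R_j / (z_j + eps*sign(z_j)).
    # a: (n_in,) input activations, W: (n_in, n_out), b: (n_out,), R_out: (n_out,)
    z = a @ W + b                               # forward pre-activations z_j
    z = z + eps * np.where(z >= 0, 1.0, -1.0)   # epsilon stabiliser avoids division by ~0
    s = R_out / z                               # relevance per unit of pre-activation
    return a * (W @ s)                          # relevance assigned to each input feature

# Toy usage (hypothetical shapes): relevance of 4 input features for output class 0.
rng = np.random.default_rng(0)
a = rng.standard_normal(4)
W = rng.standard_normal((4, 2))
b = np.zeros(2)
R_out = np.array([1.0, 0.0])                    # start from the class score to be explained
print(lrp_epsilon(a, W, b, R_out))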

Files

Original bundle

Name: Thesis - Vladislav Mun.pdf
Size: 11.78 MB
Format: Adobe Portable Document Format
Description: Thesis
Name: Presentation - Vladislav Mun.pdf
Size: 1.94 MB
Format: Adobe Portable Document Format
Description: Presentation