SUBJECT-INDEPENDENT BRAIN–COMPUTER INTERFACES BASED ON DEEP CONVOLUTIONAL NEURAL NETWORKS

dc.contributor.author: Kwon, O-Yeon
dc.contributor.author: Lee, Min-Ho
dc.contributor.author: Guan, Cuntai
dc.contributor.author: Lee, Seong-Whan
dc.date.accessioned: 2021-07-01T05:47:42Z
dc.date.available: 2021-07-01T05:47:42Z
dc.date.issued: 2020-10
dc.description.abstract: For a brain-computer interface (BCI) system, a calibration procedure is required for each individual user before he or she can use the BCI. This procedure takes approximately 20-30 min to collect enough data to build a reliable decoder. Building a calibration-free, or subject-independent, BCI is therefore an important research goal. In this article, we construct a large motor imagery (MI)-based electroencephalography (EEG) database and propose a subject-independent framework based on deep convolutional neural networks (CNNs). The database comprises 54 subjects performing left- and right-hand MI on two different days, resulting in 21,600 MI trials. In our framework, we formulate the discriminative feature representation as a combination of the spectral-spatial input, which embeds the diversity of the EEG signals, and a feature representation learned by the CNN through a fusion technique that integrates a variety of discriminative brain signal patterns. To generate spectral-spatial inputs, we first select discriminative frequency bands using an information-theoretic observation model that measures the power of the features in the two classes. From these discriminative frequency bands, spectral-spatial inputs that capture the unique characteristics of the brain signal patterns are generated and then transformed into covariance matrices as the input to the CNN. In the feature-representation stage, the spectral-spatial inputs are individually trained through the CNN and then combined by a concatenation fusion technique. We demonstrate that the classification accuracy of our subject-independent (or calibration-free) model outperforms that of subject-dependent models using various methods [common spatial pattern (CSP), common spatiospectral pattern (CSSP), filter bank CSP (FBCSP), and Bayesian spatio-spectral filter optimization (BSSFO)]. (An illustrative code sketch of this pipeline is given below, after the metadata listing.)
dc.identifier.citation: Kwon, O. Y., Lee, M. H., Guan, C., & Lee, S. W. (2020). Subject-Independent Brain–Computer Interfaces Based on Deep Convolutional Neural Networks. IEEE Transactions on Neural Networks and Learning Systems, 31(10), 3839–3852. https://doi.org/10.1109/tnnls.2019.2946869
dc.identifier.issn: 2162-2388
dc.identifier.uri: http://nur.nu.edu.kz/handle/123456789/5486
dc.language.iso: en
dc.publisher: IEEE Transactions on Neural Networks and Learning Systems
dc.rights: Attribution-NonCommercial-ShareAlike 3.0 United States
dc.rights.uri: http://creativecommons.org/licenses/by-nc-sa/3.0/us/
dc.subject: Type of access: Open Access
dc.subject: brain-computer interface
dc.subject: computer
dc.title: SUBJECT-INDEPENDENT BRAIN–COMPUTER INTERFACES BASED ON DEEP CONVOLUTIONAL NEURAL NETWORKS
dc.type: Article
workflow.import.source: science
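
Illustrative pipeline sketch

The abstract describes a multi-step pipeline: discriminative frequency bands are selected with an information-theoretic criterion, band-specific spectral-spatial inputs are transformed into covariance matrices, each input is passed through its own CNN, and the branch features are fused by concatenation before classification. The sketch below illustrates that flow in Python with NumPy, SciPy, and PyTorch. It is only a minimal illustration under assumed settings: the band candidates, the band-power separation score standing in for the paper's information-theoretic measure, and the layer sizes of the hypothetical BranchCNN and FusionCNN modules are not the authors' implementation.

# A minimal, illustrative sketch (not the authors' code) of the pipeline in the
# abstract: score candidate frequency bands, build per-band spectral-spatial
# covariance inputs, run one small CNN branch per band, and fuse the branch
# features by concatenation. Band candidates, the band-power separation score
# (a stand-in for the paper's information-theoretic criterion), and all network
# sizes are assumptions for demonstration only.
import numpy as np
from scipy.signal import butter, filtfilt
import torch
import torch.nn as nn

def bandpass(eeg, low, high, fs=250, order=4):
    """Zero-phase band-pass filter; eeg has shape (trials, channels, samples)."""
    b, a = butter(order, [low / (fs / 2), high / (fs / 2)], btype="band")
    return filtfilt(b, a, eeg, axis=-1)

def band_score(eeg, labels, low, high, fs=250):
    """Discriminability proxy: separation of log band-power between the two classes."""
    logpow = np.log(np.mean(bandpass(eeg, low, high, fs) ** 2, axis=(1, 2)))
    c0, c1 = logpow[labels == 0], logpow[labels == 1]
    return abs(c0.mean() - c1.mean()) / (c0.std() + c1.std() + 1e-12)

def covariance_inputs(eeg, bands, fs=250):
    """Trace-normalized spatial covariance per trial and band -> (trials, bands, C, C)."""
    covs = []
    for low, high in bands:
        x = bandpass(eeg, low, high, fs)
        c = np.einsum("tcs,tds->tcd", x, x)                # channel-by-channel covariance
        c /= np.trace(c, axis1=1, axis2=2)[:, None, None]  # normalize by total power
        covs.append(c)
    return np.stack(covs, axis=1).astype(np.float32)

class BranchCNN(nn.Module):
    """One small CNN branch per spectral-spatial input (illustrative layer sizes)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4), nn.Flatten())
        self.out_dim = 8 * 4 * 4

class FusionCNN(nn.Module):
    """Concatenation fusion: per-band branch features joined before classification."""
    def __init__(self, n_bands, n_classes=2):
        super().__init__()
        self.branches = nn.ModuleList([BranchCNN() for _ in range(n_bands)])
        self.classifier = nn.Linear(n_bands * self.branches[0].out_dim, n_classes)

    def forward(self, x):                                  # x: (batch, bands, C, C)
        feats = [b.net(x[:, i:i + 1]) for i, b in enumerate(self.branches)]
        return self.classifier(torch.cat(feats, dim=1))

# Example usage on synthetic data (the pooled 54-subject MI trials would replace this).
eeg = np.random.randn(32, 62, 1000)                        # trials x channels x samples
labels = np.random.randint(0, 2, size=32)                  # left- vs. right-hand MI
candidates = [(4, 8), (8, 13), (13, 30)]                   # theta, alpha, beta (assumed)
bands = sorted(candidates, key=lambda b: -band_score(eeg, labels, *b))[:2]
model = FusionCNN(n_bands=len(bands))
logits = model(torch.from_numpy(covariance_inputs(eeg, bands)))  # (32, 2) class scores

Concatenating the per-band branch features before a single classifier mirrors the concatenation fusion described in the abstract; in the paper's setting, the branches operate on spectral-spatial covariance inputs built from the pooled multi-subject database rather than the synthetic arrays used here.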

Files

Original bundle
Name: 08897723.pdf
Size: 2.98 MB
Format: Adobe Portable Document Format
Description: Article

License bundle
Name: license.txt
Size: 6.28 KB
Format: Item-specific license agreed to upon submission