SUBJECT-INDEPENDENT BRAIN–COMPUTER INTERFACES BASED ON DEEP CONVOLUTIONAL NEURAL NETWORKS
dc.contributor.author | Kwon, O-Yeon | |
dc.contributor.author | Lee, Min-Ho | |
dc.contributor.author | Guan, Cuntai | |
dc.contributor.author | Lee, Seong-Whan | |
dc.date.accessioned | 2021-07-01T05:47:42Z | |
dc.date.available | 2021-07-01T05:47:42Z | |
dc.date.issued | 2020-10 | |
dc.description.abstract | For a brain-computer interface (BCI) system, a calibration procedure is required for each individual user before he/she can use the BCI, and collecting enough data to build a reliable decoder takes approximately 20-30 min. Building a calibration-free, or subject-independent, BCI is therefore of considerable interest. In this article, we construct a large motor imagery (MI)-based electroencephalography (EEG) database and propose a subject-independent framework based on deep convolutional neural networks (CNNs). The database comprises 54 subjects performing left- and right-hand MI on two different days, resulting in 21 600 trials for the MI task. In our framework, the discriminative feature representation combines spectral-spatial inputs, which embed the diversity of the EEG signals, with feature representations learned by the CNN, using a fusion technique that integrates a variety of discriminative brain signal patterns. To generate the spectral-spatial inputs, we first identify discriminative frequency bands with an information-theoretic observation model that measures the power of the features in the two classes. From these discriminative frequency bands, spectral-spatial inputs that capture the unique characteristics of the brain signal patterns are generated and then transformed into covariance matrices that serve as the input to the CNN. For feature representation, each spectral-spatial input is trained individually through the CNN, and the resulting representations are combined by a concatenation fusion technique. We demonstrate that the classification accuracy of our subject-independent (or calibration-free) model outperforms that of subject-dependent models using various methods [common spatial pattern (CSP), common spatiospectral pattern (CSSP), filter bank CSP (FBCSP), and Bayesian spatio-spectral filter optimization (BSSFO)]. | en_US |
dc.identifier.citation | Kwon, O. Y., Lee, M. H., Guan, C., & Lee, S. W. (2020). Subject-Independent Brain–Computer Interfaces Based on Deep Convolutional Neural Networks. IEEE Transactions on Neural Networks and Learning Systems, 31(10), 3839–3852. https://doi.org/10.1109/tnnls.2019.2946869 | en_US |
dc.identifier.issn | 2162-2388 | |
dc.identifier.uri | http://nur.nu.edu.kz/handle/123456789/5486 | |
dc.language.iso | en | en_US |
dc.publisher | IEEE Transactions on Neural Networks and Learning Systems | en_US |
dc.rights | Attribution-NonCommercial-ShareAlike 3.0 United States | * |
dc.rights.uri | http://creativecommons.org/licenses/by-nc-sa/3.0/us/ | * |
dc.subject | Type of access: Open Access | en_US |
dc.subject | brain-computer interface | en_US |
dc.subject | computer | en_US |
dc.title | SUBJECT-INDEPENDENT BRAIN–COMPUTER INTERFACES BASED ON DEEP CONVOLUTIONAL NEURAL NETWORKS | en_US |
dc.type | Article | en_US |
workflow.import.source | science |
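The abstract above describes a three-step pipeline: selecting discriminative frequency bands with an information-theoretic criterion, turning band-pass-filtered EEG into channel-covariance matrices (the spectral-spatial inputs), and fusing per-band CNN features by concatenation before classification. The sketch below illustrates that pipeline under stated assumptions: the sampling rate, candidate bands, mutual-information band scoring, and the tiny PyTorch CNN are placeholders for illustration, not the architecture or the exact observation model from the paper.

```python
# Minimal, illustrative sketch of the spectral-spatial pipeline outlined in the abstract.
# Sampling rate, candidate bands, and the tiny CNN are assumptions, not the paper's settings.
import numpy as np
import torch
import torch.nn as nn
from scipy.signal import butter, filtfilt
from sklearn.feature_selection import mutual_info_classif

FS = 250  # assumed sampling rate in Hz


def bandpass(x, low, high, fs=FS, order=4):
    """Zero-phase band-pass filter applied along the time axis."""
    b, a = butter(order, [low / (fs / 2), high / (fs / 2)], btype="band")
    return filtfilt(b, a, x, axis=-1)


def select_bands(trials, labels, candidate_bands, k=3):
    """Rank candidate bands by the mutual information between their log band-power
    features and the class labels (a stand-in for the information-theoretic
    observation model described in the abstract)."""
    scores = []
    for low, high in candidate_bands:
        filtered = bandpass(trials, low, high)              # (n_trials, n_ch, n_samples)
        power = np.log(np.var(filtered, axis=-1) + 1e-12)   # (n_trials, n_ch)
        scores.append(mutual_info_classif(power, labels, random_state=0).mean())
    order = np.argsort(scores)[::-1][:k]
    return [candidate_bands[i] for i in order]


def covariance_inputs(trials, bands):
    """Build one trace-normalized channel-covariance matrix per trial and band."""
    inputs = []
    for low, high in bands:
        filtered = bandpass(trials, low, high)
        covs = np.einsum("tcs,tds->tcd", filtered, filtered)
        covs /= np.trace(covs, axis1=1, axis2=2)[:, None, None]
        inputs.append(covs.astype(np.float32))
    return inputs  # list of (n_trials, n_ch, n_ch) arrays, one per selected band


class BandCNN(nn.Module):
    """Tiny CNN over one covariance matrix; its output vector is fused later."""
    def __init__(self, n_feat=16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4), nn.Flatten(),
            nn.Linear(8 * 4 * 4, n_feat), nn.ReLU(),
        )

    def forward(self, x):
        return self.net(x.unsqueeze(1))  # add a singleton "image channel" axis


class FusionModel(nn.Module):
    """Concatenation fusion of per-band CNN features followed by a linear classifier."""
    def __init__(self, n_bands, n_classes=2, n_feat=16):
        super().__init__()
        self.branches = nn.ModuleList([BandCNN(n_feat) for _ in range(n_bands)])
        self.classifier = nn.Linear(n_bands * n_feat, n_classes)

    def forward(self, band_inputs):
        feats = [branch(x) for branch, x in zip(self.branches, band_inputs)]
        return self.classifier(torch.cat(feats, dim=1))


if __name__ == "__main__":
    # Synthetic stand-in data: 40 trials, 20 channels, 2 s at 250 Hz, 2 classes.
    rng = np.random.default_rng(0)
    trials = rng.standard_normal((40, 20, 2 * FS))
    labels = rng.integers(0, 2, size=40)

    candidates = [(4, 8), (8, 12), (12, 16), (16, 24), (24, 32)]
    bands = select_bands(trials, labels, candidates, k=3)
    band_inputs = [torch.from_numpy(c) for c in covariance_inputs(trials, bands)]

    model = FusionModel(n_bands=len(bands))
    logits = model(band_inputs)
    print(logits.shape)  # torch.Size([40, 2])
```

Concatenation fusion simply stacks each branch's feature vector side by side, so the final linear layer can weigh evidence from all selected frequency bands jointly, which is the role the abstract attributes to the fusion step.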