We developed a wearable ear-EEG acquisition tool and directly compared its performance with a conventional 32-channel scalp-EEG setup on a multi-class speech imagery classification task. Our system uses Riemannian tangent-space projections of EEG covariance matrices as input features to a multi-layer extreme learning machine (MLELM). Ten subjects participated in an experiment consisting of six sessions spanning three days. Both setups achieved classification accuracy significantly above the chance level (20%). Classification accuracy averaged across all ten subjects was 38.2% for ear-EEG and 43.1% for scalp-EEG, with maximum accuracies of 43.8% and 55.0%, respectively. An analysis of variance (ANOVA) showed that six of the ten subjects had no significant difference in performance between ear-EEG and scalp-EEG.
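The tangent-space feature extraction step can be sketched as follows. This is a minimal NumPy illustration of projecting SPD covariance matrices onto the tangent space at a reference point and vectorizing the upper triangle; it uses the Euclidean mean of the covariances as the reference (libraries such as pyriemann typically use the Riemannian mean) and omits the MLELM classifier, so all names and shapes here are illustrative assumptions rather than the authors' exact pipeline.

```python
import numpy as np

def spd_logm(C):
    # matrix logarithm of a symmetric positive-definite matrix via eigendecomposition
    w, V = np.linalg.eigh(C)
    return V @ np.diag(np.log(w)) @ V.T

def spd_invsqrt(C):
    # inverse matrix square root of a symmetric positive-definite matrix
    w, V = np.linalg.eigh(C)
    return V @ np.diag(w ** -0.5) @ V.T

def tangent_space_features(covs, C_ref):
    """Project SPD covariance matrices onto the tangent space at C_ref and
    vectorize the upper triangle (off-diagonal entries scaled by sqrt(2))."""
    n = C_ref.shape[0]
    iu = np.triu_indices(n)
    weights = np.where(iu[0] == iu[1], 1.0, np.sqrt(2.0))
    R = spd_invsqrt(C_ref)
    feats = [spd_logm(R @ C @ R)[iu] * weights for C in covs]
    return np.array(feats)

# toy example: random "EEG" epochs -> sample covariances -> tangent-space features
rng = np.random.default_rng(0)
epochs = rng.standard_normal((12, 8, 256))           # 12 trials, 8 channels, 256 samples
covs = np.array([x @ x.T / x.shape[1] for x in epochs])
C_ref = covs.mean(axis=0)                             # Euclidean mean as reference (a simplification)
X = tangent_space_features(covs, C_ref)
print(X.shape)                                        # (12, 36): n*(n+1)/2 features per trial
```

Each trial thus yields a fixed-length feature vector that a standard classifier, such as the MLELM used here, can consume directly.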

Related publications

1. N. Kaongoen, J. Choi, S. Jo, "Speech-imagery-based Brain-Computer Interface system using ear-EEG," Journal of Neural Engineering, 18(1), 8016023, Feb 2021  [LINK] [PDF]