Speech Generation using BCI
*1, *2, *3
1 Computer Engineering, University of Northampton, United Kingdom.
2 Electronics and Communication Engineering, University of Northampton, United Kingdom.
3 Electronics and Communication Engineering, Arab Academy for Science & Technology, Egypt.
Academic Editor: Evanthia Bernitsas

Abstract:

In this project, we aimed to develop a Steady-State Visually Evoked Potential (SSVEP)-based brain-computer interface (BCI) speller system to enable communication for individuals with severe physical disabilities. SSVEPs are signals recorded over the occipital lobe of the brain in response to flickering visual stimuli, and we utilized them to create an accessible and reliable solution for people with paralysis.

Our methodology involved several key components: a flicker-based data-gathering interface, signal processing and AI modeling, a prediction and auto-complete program, and a text-to-speech (TTS) API. We first worked with publicly available datasets and then gathered our own dataset using a g.tec Unicorn Hybrid Black 8-channel EEG headset, before developing an AI model based on an unsupervised Transformer to accurately interpret the user's intended selections.
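To make the pipeline concrete, the sketch below shows a minimal frequency-detection baseline for an SSVEP epoch, not the Transformer model described in this work: each speller target flickers at a known frequency, and the target whose fundamental (and first harmonic) carries the most spectral power in the occipital channels is taken as the user's selection. The sampling rate, epoch length, and stimulus frequencies are illustrative assumptions, not values from our recordings.

```python
import numpy as np

# Assumed parameters for illustration: 8-channel EEG at 250 Hz,
# 4-second epochs, and four example flicker frequencies.
FS = 250
STIM_FREQS = [8.0, 10.0, 12.0, 15.0]

def detect_ssvep_target(epoch: np.ndarray, fs: int = FS,
                        stim_freqs=STIM_FREQS) -> int:
    """Return the index of the stimulus frequency with the most
    spectral power, given an epoch of shape (n_channels, n_samples)."""
    # Average channels into one signal (a crude spatial filter).
    signal = epoch.mean(axis=0)
    # Power spectrum via FFT.
    freqs = np.fft.rfftfreq(signal.size, d=1.0 / fs)
    power = np.abs(np.fft.rfft(signal)) ** 2
    # Score each candidate frequency by the power at its fundamental
    # and first harmonic, within a small tolerance band.
    scores = []
    for f in stim_freqs:
        band = (np.abs(freqs - f) < 0.25) | (np.abs(freqs - 2 * f) < 0.25)
        scores.append(power[band].sum())
    return int(np.argmax(scores))

# Usage with synthetic data: a noisy 12 Hz oscillation across all
# channels should map to index 2 in STIM_FREQS.
t = np.arange(0, 4.0, 1.0 / FS)
fake_epoch = np.tile(np.sin(2 * np.pi * 12.0 * t), (8, 1))
fake_epoch += 0.5 * np.random.randn(*fake_epoch.shape)
print(detect_ssvep_target(fake_epoch))  # expected: 2
```

In the full system, this classification step is replaced by the Transformer-based model, and its output character feeds the prediction/auto-complete program and the TTS API.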

The results of our project demonstrate the effectiveness of our SSVEP speller system. Our AI model achieved 93% accuracy on the publicly available BETA dataset, outperforming state-of-the-art methods. Additionally, the prediction and auto-complete functionalities in our fully functional website further enhanced the user experience. We believe our work can provide a lifeline for severely disabled individuals, giving them a voice and improving their quality of life by reducing social and mental isolation.

Keywords: Steady-State Visually Evoked Potential, Brain-Computer Interface, Occipital Lobe, Text-to-Speech
