“Speak your mind” – Gaming headset to help people with speech loss

A student at the University of Bath’s ‘ART-AI’ centre has revealed a system that uses a gaming headset to allow people with speech loss to talk again using their thoughts.

The headset could have a positive impact on the lives of people who have suffered speech loss and currently have to rely on predictive text-to-speech systems to communicate. The tech allows users to produce speech simply by imagining saying a word.

Before we get ahead of ourselves though, the researchers behind the system emphasise that it is still early days and that, currently, the system is slow and frustrating to use, at only around 10 words per minute.

The team is now investigating whether a commercially available gaming headset, rather than a specialist unit, can be used to monitor brainwaves.

The research used a lightweight EEG (electroencephalography) system to detect brainwaves, which were then processed by a computer using neural networks and deep learning to distinguish speech from the user’s other thoughts.

The prototype software can now detect around 16 isolated English phonemes (units of spoken sound) and achieves accuracy comparable to that of bulkier, more expensive research-grade EEG machines.
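The article doesn’t describe the model itself, but a deep-learning phoneme classifier of this kind typically maps a short window of multi-channel EEG to one of the 16 phoneme classes. Below is a minimal sketch in PyTorch; the 14-channel count (typical of consumer headsets), the window length, and the architecture are illustrative assumptions, not the team’s actual system.

```python
# Minimal sketch of an EEG phoneme classifier. Assumptions: 14 channels,
# 256-sample windows, 16 phoneme classes. Not the team's actual model.
import torch
import torch.nn as nn

class EEGPhonemeNet(nn.Module):
    def __init__(self, n_channels=14, n_classes=16):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(n_channels, 32, kernel_size=7, padding=3),  # temporal filters
            nn.BatchNorm1d(32),
            nn.ReLU(),
            nn.MaxPool1d(4),
            nn.Conv1d(32, 64, kernel_size=5, padding=2),
            nn.BatchNorm1d(64),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),  # collapse the time axis to one vector
        )
        self.classifier = nn.Linear(64, n_classes)

    def forward(self, x):  # x: (batch, channels, samples)
        return self.classifier(self.features(x).squeeze(-1))

model = EEGPhonemeNet()
window = torch.randn(1, 14, 256)     # one two-second window at 128 Hz
phoneme_logits = model(window)       # scores over the 16 phonemes
print(phoneme_logits.argmax(dim=1))  # index of the most likely phoneme
```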

The system has been developed by PhD researcher Scott Wellington, who started the project while working at SpeakUnique, alongside colleagues at the University of Edinburgh, and who is now continuing his research at the University of Bath’s Centre for Doctoral Training in Accountable, Responsible and Transparent AI.

Scott explained: “A current constraint of existing text-to-speech systems, like the one Rob Burrow uses, is that the user still has to type in what they want to say for the device to then say it. As you can imagine, this can be inconvenient, slow, and a source of deep frustration for people with MND. My hope is that speech neuroprostheses may provide some answer to this.”

Scott said: “This device doesn’t read your thoughts exactly – you have to consciously imagine saying the word for it to work, so users don’t have to worry about all of their private thoughts being vocalised.

“It works more like hyper predictive text – it will be much quicker for the user to select the correct word they want to say from the list.

“The previous work I did as a Speech Scientist is by far the best thing I’ve ever done in my life, with real-world impact.

“It’s why I’m wanting to pursue developing brain-computer interfaces for speech, especially for people who have lost their speech due to MND, or similar neurodegenerative conditions.

“This field is very exciting and fast-moving – I believe a solution is just around the corner.”
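The “hyper predictive text” analogy suggests one plausible decoding step: combine the classifier’s per-phoneme probabilities into scores for candidate words, then show the user a short ranked list to pick from. A minimal sketch of that idea follows; the vocabulary, phoneme inventory, and probabilities below are made up purely for illustration.

```python
# Rank candidate words from per-phoneme probabilities (illustrative only).
import math

# Probability the classifier assigns to each phoneme at each time step.
phoneme_probs = [
    {"k": 0.6, "g": 0.3, "t": 0.1},  # step 1
    {"ae": 0.7, "eh": 0.3},          # step 2
    {"t": 0.8, "d": 0.2},            # step 3
]

# Tiny stand-in vocabulary mapped to phoneme sequences.
vocabulary = {"cat": ["k", "ae", "t"], "get": ["g", "eh", "t"], "cad": ["k", "ae", "d"]}

def word_score(phonemes):
    # Sum of log-probabilities; unseen phonemes get a small floor probability.
    return sum(math.log(step.get(p, 1e-6)) for step, p in zip(phoneme_probs, phonemes))

ranked = sorted(vocabulary, key=lambda w: word_score(vocabulary[w]), reverse=True)
print(ranked)  # candidate list shown to the user, best guess first: ['cat', 'cad', 'get']
```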

The team has now tested several different machine learning models for speech classification with the EEG data and has published the library of recordings, making it freely available for other researchers around the world to use.
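The article doesn’t name the models that were compared, but a typical benchmark over recordings like these would cross-validate a few standard classifiers on features extracted from the EEG windows. A hedged sketch using scikit-learn; the synthetic features below stand in for whatever feature extraction the published recordings would actually need.

```python
# Compare several classifiers on EEG-derived features. The random features
# below are placeholders; real use would extract them from the recordings.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(320, 64))     # 320 windows x 64 features each
y = rng.integers(0, 16, size=320)  # 16 phoneme labels

models = {
    "logistic regression": LogisticRegression(max_iter=1000),
    "linear SVM": SVC(kernel="linear"),
    "small MLP": MLPClassifier(hidden_layer_sizes=(64,), max_iter=500),
}

for name, clf in models.items():
    scores = cross_val_score(clf, X, y, cv=5)  # 5-fold cross-validation
    print(f"{name}: mean accuracy {scores.mean():.2f}")
```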

Scott presented his work at the Interspeech Virtual Conference on 29 October 2020.