The portable, noninvasive system could help those unable to speak communicate with others through thought. (Image credit: University of Technology Sydney via YouTube)
Researchers from the GrapheneX-UTS Human-centric Artificial Intelligence Centre at the University of Technology Sydney have developed a portable, noninvasive system that uses mind-reading AI to turn thoughts into text. The technology could help those unable to speak communicate with others, or serve as a human-machine interface for controlling devices such as robotic limbs.
The platform uses an EEG (electroencephalogram) headset that records the wearer's electrical brain activity, which is then segmented into distinct units that capture specific characteristics and patterns. This is done using an AI model known as DeWave, which translates the EEG signals into words and sentences after being trained on large amounts of EEG data. Because the signals are picked up through a headset rather than from electrodes implanted in the brain, they are noisier; however, the DeWave model helps mitigate that noise and has even surpassed previous benchmarks for EEG-to-text translation.
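To make that pipeline concrete, here is a minimal sketch of the idea described above: raw EEG is sliced into windows, and each window is matched to its nearest entry in a learned codebook, producing discrete tokens that a downstream language model could decode into words. This is not the DeWave code; the function names, window sizes, and the random codebook are all hypothetical placeholders.

```python
# Minimal sketch (not the authors' code) of segmenting EEG into windows and
# mapping each window to a discrete token via nearest-neighbor codebook lookup.
# All names, shapes, and the random "codebook" are hypothetical placeholders.
import numpy as np

def segment_eeg(signal: np.ndarray, window: int = 256, step: int = 128) -> np.ndarray:
    """Slice a (channels, samples) EEG recording into overlapping windows."""
    starts = range(0, signal.shape[1] - window + 1, step)
    return np.stack([signal[:, s:s + window] for s in starts])  # (n_windows, channels, window)

def quantize(windows: np.ndarray, codebook: np.ndarray) -> np.ndarray:
    """Map each window's flattened features to the index of its nearest codebook entry."""
    feats = windows.reshape(len(windows), -1)                   # (n_windows, channels*window)
    dists = np.linalg.norm(feats[:, None, :] - codebook[None, :, :], axis=-1)
    return dists.argmin(axis=1)                                 # discrete token ids

# Toy usage: an 8-channel, 1-second recording at 1024 Hz and a random 512-entry codebook.
rng = np.random.default_rng(0)
eeg = rng.standard_normal((8, 1024))
codebook = rng.standard_normal((512, 8 * 256))
tokens = quantize(segment_eeg(eeg), codebook)
print(tokens[:10])  # token ids a trained language model would decode into text
```

In a real system the codebook and the decoder would be learned jointly from large EEG corpora rather than drawn at random, as noted above.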
"The model is more adept at matching verbs than nouns. However, when it comes to nouns, we saw a tendency towards synonymous pairs rather than precise translations, such as 'the man' instead of 'the author," states Ph. D student Yiqun Duan. "We think this is because when the brain processes these words, semantically similar words might produce similar brain wave patterns. Despite the challenges, our model yields meaningful results, aligning keywords and forming similar sentence structures."
The system's translation accuracy, scored with BLEU-1 (an algorithm for evaluating the quality of machine-translated text), currently sits at around 40%. The researchers hope to raise that figure to a level comparable with traditional language translation or speech recognition programs, which currently sit at around 90%.
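For a sense of what that metric measures, here is a quick illustration (not the study's evaluation code) using NLTK's standard BLEU implementation: with weights (1, 0, 0, 0), only unigram overlap counts, so a synonym swap like "the man" for "the author" is penalized. The example sentences are made up.

```python
# BLEU-1 scores unigram precision: the fraction of hypothesis words that
# also appear in the reference (with clipped counts).
from nltk.translate.bleu_score import sentence_bleu

reference = [["the", "author", "walked", "into", "the", "room"]]
hypothesis = ["the", "man", "walked", "into", "the", "room"]

# Weights (1, 0, 0, 0) put all mass on 1-grams, giving BLEU-1.
score = sentence_bleu(reference, hypothesis, weights=(1, 0, 0, 0))
print(f"BLEU-1: {score:.2f}")  # 5 of 6 unigrams match -> about 0.83
```

A score of ~40% thus means roughly four in ten decoded words match the target sentence, which is why the researchers are aiming for the ~90% typical of mature translation and speech recognition systems.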
Have a story tip? Message me at: http://twitter.com/Cabe_Atwell