(Image Credit: jarmoluk/pixabay)
Researchers at the University of Texas at Austin say they have created a decoder algorithm that observes a person's brain activity through fMRI scans to reconstruct their thoughts or what they hear. Although other teams have reconstructed language using brain implants, this is reportedly the first noninvasive technique to do so. The decoder could pave the way toward brain-computer interfaces for those unable to speak or type.
Three volunteers listened to 16 hours of podcasts and radio stories while an fMRI scanner recorded their brain activity, and those recordings were used to train the algorithm. From the 16 hours of training data, the algorithm learned to predict what the fMRI readings would look like for a given stretch of language, even language that never appeared in the training audio. To decode a new recording, the algorithm generates candidate word sequences, predicts the brain readings each candidate would produce, and compares those predictions to the actual fMRI recording; the words it outputs are the candidate whose predicted reading most closely resembles the real one.
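To make that matching step concrete, here is a minimal Python sketch of the idea. It is not the team's actual model: the `word_pattern` and `predict_response` functions are hypothetical stand-ins for the encoding model fit on the 16 hours of training data, and the candidate lists are invented.

```python
import numpy as np

N_VOXELS = 50  # toy voxel count; real scans cover tens of thousands

def word_pattern(word):
    """Fixed pseudo-random voxel pattern for a word (illustrative only)."""
    seed = abs(hash(word)) % (2**32)
    return np.random.default_rng(seed).standard_normal(N_VOXELS)

def predict_response(words):
    """Stand-in encoding model: predict the fMRI reading a word sequence
    would evoke. In the study this role is played by a model fit on the
    16 hours of training recordings."""
    return sum(word_pattern(w) for w in words) / max(len(words), 1)

def decode_step(candidates, observed):
    """Pick the candidate whose predicted reading correlates best with
    the observed fMRI reading."""
    return max(candidates,
               key=lambda ws: np.corrcoef(predict_response(ws), observed)[0, 1])

# Simulated example: the "observed" reading is the response to the true words.
true_words = ["the", "dog", "ran", "home"]
observed = predict_response(true_words)
candidates = [true_words,
              ["a", "cat", "sat", "down"],
              ["rain", "fell", "all", "night"]]
print(decode_step(candidates, observed))  # -> ['the', 'dog', 'ran', 'home']
```

In the published work, a language model proposes the candidate word sequences and a beam search keeps the best-scoring ones as the story unfolds; the sketch collapses that to a single selection step.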
The team then measured the algorithm's success by scoring how closely its decoded language matched the stimulus presented to the participant. As a baseline, they also scored language generated the same way but without reference to any fMRI recording. Finally, they compared the two sets of scores and tested whether the difference between them was statistically significant.
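The shape of that evaluation can be sketched in a few lines, again with hypothetical pieces: the `similarity` function below is a simple word-overlap stand-in rather than the metrics the team used, and the example sentences are invented.

```python
import numpy as np
from scipy import stats

def similarity(decoded, stimulus):
    """Illustrative word-overlap (Jaccard) similarity; a stand-in for the
    standard language-similarity metrics used in studies like this one."""
    a, b = set(decoded.lower().split()), set(stimulus.lower().split())
    return len(a & b) / max(len(a | b), 1)

stimuli = ["i watched a dog running through the yard",
           "she asked me to stand near the door"]
decoded = ["i saw a dog run across the yard",       # fMRI-guided output
           "she told me to wait by the door"]
null    = ["the stock market closed higher today",  # generated with no fMRI
           "bake the bread at low heat for an hour"]

guided_scores = [similarity(d, s) for d, s in zip(decoded, stimuli)]
null_scores = [similarity(n, s) for n, s in zip(null, stimuli)]

# One-sided test: are the fMRI-guided scores reliably above the baseline?
t, p = stats.ttest_ind(guided_scores, null_scores, alternative="greater")
print(f"guided={np.mean(guided_scores):.2f}  null={np.mean(null_scores):.2f}  p={p:.3f}")
```

With only two segments the p-value means little; a real analysis would pool scores from many held-out segments per participant.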
Their results show that the technique can reconstruct an entire story from fMRI recordings that closely matches the story in the audio recording. The decoder isn't perfect, though: it struggles to preserve pronouns and frequently confuses the first and third person. Because it is noninvasive, it could support more real-world applications than implant-based techniques, although the high cost and inconvenience of MRI machines remain a problem.
Despite being trained only on volunteers listening to spoken language, the decoder also recreated stimuli that involved no speech at all: it reconstructed the meaning of a silent film the participants watched and of stories they imagined telling. That discovery could help scientists better understand how different brain regions handle language. However, because the decoder captures semantics rather than individual words, its overall success may be difficult to measure.
The team also tested whether the decoder would work without a participant's cooperation. During some tests, the researchers instructed participants to perform unrelated mental tasks while the audio played, which made the decoder's output inaccurate. It's also worth noting that the decoder must be extensively trained on an individual before it can read their thoughts with any precision.
Have a story tip? Message me at: http://twitter.com/Cabe_Atwell