The SonicASL system relies on Doppler technology to detect tiny echoes from an individual performing sign language. (Image Credit: University at Buffalo)
Researchers at the University at Buffalo recently developed SonicASL, a system that uses modified noise-canceling headphones linked to a smartphone to detect and translate American Sign Language. The system relies on the Doppler effect to pick up tiny echoes in acoustic soundwaves reflected off an individual's moving hands. In indoor and outdoor experiments, SonicASL proved 93.8% accurate on a test vocabulary of 42 words, including "love," "space," and "camera." Under the same conditions, it was 90.6% accurate on simple sentences, such as "Nice to meet you."
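To see why a moving hand leaves a measurable trace in an echo, consider the basic Doppler relationship: a reflector moving at speed v shifts a probe tone of frequency f0 by roughly 2·v·f0/c. The sketch below illustrates the scale of that shift; the 20 kHz carrier is an assumed value for illustration, as the article does not state SonicASL's actual operating frequency.

```python
# Doppler shift of a reflected tone: a hand moving toward the speaker/mic pair
# compresses the echo's wavelength, raising its frequency. For a reflector
# moving at speed v, the round-trip shift is approximately f_d = 2 * v * f0 / c.
# The 20 kHz carrier below is an assumption, not SonicASL's documented value.

SPEED_OF_SOUND = 343.0  # m/s, air at ~20 °C


def doppler_shift(hand_speed_mps: float, carrier_hz: float) -> float:
    """Frequency shift (Hz) of an echo off a reflector moving at hand_speed_mps."""
    return 2.0 * hand_speed_mps * carrier_hz / SPEED_OF_SOUND


# A signing hand moving at ~0.5 m/s against a 20 kHz probe tone:
shift = doppler_shift(0.5, 20_000)
print(f"{shift:.1f} Hz")  # roughly 58.3 Hz
```

Even slow hand motion produces a shift of tens of hertz, which is comfortably detectable against a stable carrier.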
"SonicASL is an exciting proof-of-concept that could eventually help greatly improve communication between deaf and hearing populations," says corresponding author Zhanpeng Jin, Ph.D., associate professor in the Department of Computer Science and Engineering at UB.
However, more work is needed before the technology can be commercialized, including expanding SonicASL's vocabulary and enabling it to recognize facial expressions.
Overall, SonicASL aims to address communication barriers between deaf or hard-of-hearing individuals and hearing people, especially in places where sign language isn't widely known. Today, such communication typically requires a camera setup or a sign language interpreter. Even then, video recordings can be misused, and an interpreter isn't always available when needed.
Noise-canceling headphones feature an outward-facing microphone designed to pick up the surrounding noise. Then, the headphones generate an anti-sound, which cancels out the external noise.
"We added an additional speaker next to the outward-facing microphone. We wanted to see if the modified headphone could sense moving objects, similar to radar," says co-lead author Yincheng Jin (no relation), a Ph.D. candidate in Jin's lab.
As a result, the microphone-speaker pair can detect hand movements. The data is transmitted to the SonicASL smartphone app, where an algorithm recognizes the signed words and sentences. The app then translates the hand signs into audio for the hearing individual, who listens to it over the earphones.
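As a deliberately tiny stand-in for that recognition step, one could represent each sign's echo as a feature vector (say, binned Doppler-spectrogram energies) and label a new observation by its nearest stored template. The article does not describe SonicASL's actual classifier, and the vectors below are invented purely for illustration.

```python
import numpy as np

# Toy nearest-template recognizer. Feature vectors here are made up; a real
# system would extract them from the echo signal, and SonicASL's actual
# algorithm is not described in this article.

templates = {
    "love":   np.array([0.9, 0.1, 0.4]),
    "space":  np.array([0.2, 0.8, 0.5]),
    "camera": np.array([0.5, 0.5, 0.9]),
}


def classify(features: np.ndarray) -> str:
    """Return the template word closest (Euclidean distance) to the features."""
    return min(templates, key=lambda w: np.linalg.norm(templates[w] - features))


print(classify(np.array([0.85, 0.15, 0.35])))  # "love"
```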
"We tested SonicASL under different environments, including office, apartment, corridor and sidewalk locations," says co-lead author Yang Gao, Ph.D., who completed the research in Jin's lab before becoming a postdoctoral scholar at Northwestern University. "Although it has seen a slight decrease in accuracy as overall environmental noises increase, the overall accuracy is still quite good, because the majority of the environmental noises do not overlap or interfere with the frequency range required by SonicASL."
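Gao's point about non-overlapping frequencies can be demonstrated directly: most environmental sound sits far below a near-ultrasonic probe tone, so a band filter around the carrier discards the noise while leaving the echo intact. The 20 kHz carrier and 19–21 kHz band here are assumptions for illustration; the article doesn't give SonicASL's exact band.

```python
import numpy as np

# Mix a (hypothetical) 20 kHz probe tone with low-frequency "speech" noise,
# then keep only a band around the carrier via FFT masking. The chatter
# vanishes because it never occupied the probe's band to begin with.

fs = 48_000
t = np.arange(fs) / fs                        # 1 s of audio
probe = np.sin(2 * np.pi * 20_000 * t)        # echo band of interest
chatter = 0.8 * np.sin(2 * np.pi * 300 * t)   # low-frequency ambient noise
mix = probe + chatter

spectrum = np.fft.rfft(mix)
freqs = np.fft.rfftfreq(len(mix), 1 / fs)

# Zero everything outside 19-21 kHz, then invert the FFT:
spectrum[(freqs < 19_000) | (freqs > 21_000)] = 0
recovered = np.fft.irfft(spectrum)

# The 300 Hz chatter is gone; the probe tone survives nearly untouched.
print(round(float(np.corrcoef(recovered, probe)[0, 1]), 3))
```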
These soundwaves are generated from signing the phrase "I need help." (Image Credit: University at Buffalo)
SonicASL could also be adapted to support sign languages other than ASL.
"Different sign languages have diverse features, with their own rules for pronunciation, word formation and word order," he says. "For example, the same gesture may represent different sign language words in different countries. However, the key functionality of SonicASL is to recognize various hand gestures representing words and sentences in sign languages, which are generic and universal. Although our current technology focuses on ASL, with proper training of the algorithmic model, it can be easily adapted to other sign languages."
Language translation devices have progressively improved over the past few years, but are they good for humanity? Communicating in a different language without knowing how such a language works could potentially become problematic. That's because word meanings aren't always the same in other languages. This could have huge implications for international business and relations.
Machine translation also struggles to convey the implications behind what a person says, since those implications differ across cultures and speakers. People also communicate differently, relying more or less heavily on inference depending on the language.
Technologies can alter how we learn different languages, just like how calculators changed our approach toward learning mathematics. However, the need to learn languages hasn't changed, and deep cross-linguistic and cross-cultural knowledge shouldn't be outsourced to technology.
Have a story tip? Message me at: http://twitter.com/Cabe_Atwell