Microsoft translation demonstration, screen cap. (via Microsoft)
Computer user interfaces have done exceptionally well using typed words and haptic gestures, but the technology has lagged behind when it comes to voice interaction. In 1979, hidden Markov modeling introduced a better method of matching the waveforms of spoken words against recordings for speech-to-text recognition. The technology built on this method improved slowly but reached a plateau, at its best still misrecognizing 20% to 25% of words. In the meantime, significant improvements in the translation of typed words gave rise to services like Google Translate and Bing Translator that can convert words, phrases and web pages from one language to another.
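At the heart of HMM-based recognition is the idea of finding the most likely sequence of hidden states (sounds or words) behind a sequence of observed acoustic measurements, typically with the Viterbi algorithm. The sketch below is a minimal, self-contained illustration; the states, observations and every probability in it are invented for the example and bear no relation to a real acoustic model.

```python
# Toy sketch of Viterbi decoding over a hidden Markov model: given a
# sequence of (fake) acoustic observations, recover the most probable
# hidden-state path. All probabilities are illustrative, not trained.

def viterbi(obs, states, start_p, trans_p, emit_p):
    """Return (probability, path) of the best hidden-state sequence."""
    # Initialize with the first observation.
    V = [{s: (start_p[s] * emit_p[s][obs[0]], [s]) for s in states}]
    for o in obs[1:]:
        prev, cur = V[-1], {}
        for s in states:
            # Best predecessor for state s at this time step.
            prob, path = max(
                (prev[p][0] * trans_p[p][s] * emit_p[s][o], prev[p][1] + [s])
                for p in states
            )
            cur[s] = (prob, path)
        V.append(cur)
    return max(V[-1].values())

# Two made-up hidden states: silence vs. speech, observed as coarse
# "low" / "high" energy readings.
states = ("sil", "speech")
start_p = {"sil": 0.7, "speech": 0.3}
trans_p = {"sil": {"sil": 0.6, "speech": 0.4},
           "speech": {"sil": 0.3, "speech": 0.7}}
emit_p = {"sil": {"low": 0.8, "high": 0.2},
          "speech": {"low": 0.2, "high": 0.8}}

prob, path = viterbi(["low", "high", "high"], states, start_p, trans_p, emit_p)
print(path)  # ['sil', 'speech', 'speech']
```

A real recognizer works the same way in spirit, but over thousands of phoneme states and continuous acoustic feature vectors rather than two states and two symbols.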
Microsoft has now improved on both of these technologies using computerized learning systems, based on neural networks, that improve speech recognition and allow speech-to-text-to-speech translation, producing output in the user's own voice and preserving its cadence. So far, the program delivers translations of full sentences from English to Mandarin in just a few seconds.
The speech recognition was improved by using a new type of computerized learning method called a deep neural network (DNN), a refined artificial neural network that uses mathematical models inspired by the low-level circuits in the brain to model learning and behavior. The technique was developed by researchers at Microsoft and the University of Toronto. Using DNNs, Microsoft researchers were able to reduce the error rate of their speech recognition software to between 12.5% and 14%. Accuracy is expected to improve further as more data is fed into the system, allowing for more learning. More accurate speech recognition, in turn, provides a cleaner feed into the Bing translation software.
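The "deep" in deep neural network refers to stacking several layers of weighted sums and nonlinearities between the acoustic input and the output classes. The toy forward pass below shows that layered structure in miniature; the weights, the three-value "feature vector" and the three output classes are all invented for illustration, nothing here is trained.

```python
import math

# Minimal sketch of a deep feed-forward network's forward pass: stacked
# dense layers with ReLU nonlinearities, ending in a softmax over three
# hypothetical sound classes. All weights are made-up illustrative numbers.

def relu(v):
    return [max(0.0, x) for x in v]

def layer(x, W, b):
    # One dense layer: out[j] = sum_i x[i] * W[j][i] + b[j]
    return [sum(xi * wji for xi, wji in zip(x, Wj)) + bj
            for Wj, bj in zip(W, b)]

def softmax(v):
    m = max(v)  # subtract max for numerical stability
    e = [math.exp(x - m) for x in v]
    s = sum(e)
    return [x / s for x in e]

x = [0.5, -1.2, 0.3]  # fake acoustic feature vector
h1 = relu(layer(x, [[0.2, -0.1, 0.4], [0.7, 0.3, -0.5]], [0.1, 0.0]))
h2 = relu(layer(h1, [[0.6, -0.2], [-0.3, 0.8]], [0.0, 0.1]))
probs = softmax(layer(h2, [[1.0, 0.2], [-0.4, 0.9], [0.3, -0.7]],
                      [0.0, 0.0, 0.0]))
print(probs)  # three probabilities summing to 1
```

Training adjusts those weights from large amounts of labeled speech, which is why the article notes that accuracy keeps improving as more data is fed in.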
Translation then happens in two stages. First, each English word is translated into its Mandarin equivalent; then the words are reordered to fit Mandarin grammar.
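The two stages can be sketched with a toy example. The four-entry dictionary and the single reordering rule below are invented for illustration (a real statistical system learns both from large parallel corpora), but the rule reflects a genuine difference between the languages: Mandarin places time expressions before the verb rather than at the end of the sentence.

```python
# Toy illustration of two-stage translation: word-for-word substitution,
# then reordering. Dictionary and rule are hypothetical, minimal examples.

LEXICON = {"I": "我", "saw": "看见", "him": "他", "yesterday": "昨天"}
TIME_WORDS = {"昨天"}  # "yesterday"

def translate_words(sentence):
    # Stage 1: substitute each word (unknown words pass through unchanged).
    return [LEXICON.get(w, w) for w in sentence.split()]

def reorder(tokens):
    # Stage 2: move a sentence-final time word to just after the subject,
    # since Mandarin puts time expressions before the verb.
    if tokens and tokens[-1] in TIME_WORDS:
        return tokens[:1] + tokens[-1:] + tokens[1:-1]
    return tokens

result = "".join(reorder(translate_words("I saw him yesterday")))
print(result)  # 我昨天看见他
```

English word order alone would yield 我看见他昨天, which is ungrammatical; the reordering stage is what produces natural Mandarin.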
After translation comes the task of speaking the result in the user's own voice, complete with inflection. To do this, the computer learns from an hour-long session with the user and then manipulates stock recordings, also made by the user, to pronounce the translated text as speech. The software achieves proper cadence with the help of hours of speech recorded by a native Mandarin speaker. This is the first software to personalize speech-to-speech translation in this manner.
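One common way to reuse a speaker's recordings like this is concatenative synthesis: stitch together stored snippets of the speaker's own voice, one per sound unit. The sketch below reduces that idea to labeled strings; the unit names and file names are invented, and a real system would operate on waveform frames, smooth the joins, and adjust pitch and duration to match the Mandarin speaker's cadence.

```python
# Toy sketch of concatenative synthesis: assemble output speech from a
# (hypothetical) inventory of clips recorded by the user. Real systems
# work on audio frames and smooth transitions between units.

USER_UNITS = {  # made-up inventory from the user's recording session
    "ni": "clip_ni.wav",
    "hao": "clip_hao.wav",
}

def synthesize(units):
    """Return the playlist of user-voice clips for the requested units."""
    missing = [u for u in units if u not in USER_UNITS]
    if missing:
        raise KeyError(f"no recording for units: {missing}")
    return [USER_UNITS[u] for u in units]

print(synthesize(["ni", "hao"]))  # ['clip_ni.wav', 'clip_hao.wav']
```

Because every clip comes from the user's own session, the concatenated output carries the user's voice even though the words being spoken are in Mandarin.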
Microsoft’s chief researcher Rick Rashid demonstrated the translator to thunderous applause at Microsoft Research Asia’s 21st Century Computing event in Tianjin, China in late October. Although Rashid stated that the software has not yet been used to translate any conversation outside Microsoft offices, he commented, “We don’t yet know the limits on accuracy of this technology—it is really too new. As we continue to ‘train’ the system with more data, it appears to do better and better.”
Cabe