I happened across an embedded, open-source speech recognition toolkit in my internet travels recently: CMU's PocketSphinx. The software is compact and efficient enough to fit on small uCs. And after watching the video demonstration, it seems to be fairly accurate.
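For the curious, a one-shot file decode with the PocketSphinx C API looks roughly like the sketch below (based on the project's tutorial-era API). The model paths and file names are assumptions that depend on which models your install ships, and exact function signatures have shifted between releases, so treat this as a sketch rather than gospel:

/* Rough sketch: decode one raw audio file with PocketSphinx.
 * Typical build line (per the project's own tutorial):
 *   gcc test_ps.c $(pkg-config --cflags --libs pocketsphinx sphinxbase) \
 *       -DMODELDIR=\"$(pkg-config --variable=modeldir pocketsphinx)\"
 * Model file names below are assumptions; use whatever your install provides. */
#include <stdio.h>
#include <pocketsphinx.h>

int main(void)
{
    /* Point the decoder at an acoustic model, language model, and
     * pronunciation dictionary -- the three things you must get right. */
    cmd_ln_t *config = cmd_ln_init(NULL, ps_args(), TRUE,
        "-hmm",  MODELDIR "/en-us/en-us",
        "-lm",   MODELDIR "/en-us/en-us.lm.bin",
        "-dict", MODELDIR "/en-us/cmudict-en-us.dict",
        NULL);
    if (config == NULL) {
        fprintf(stderr, "failed to create config\n");
        return 1;
    }

    ps_decoder_t *ps = ps_init(config);
    if (ps == NULL) {
        fprintf(stderr, "failed to init decoder\n");
        return 1;
    }

    /* goforward.raw: a 16 kHz, 16-bit mono PCM clip (the tutorial's test file). */
    FILE *fh = fopen("goforward.raw", "rb");
    if (fh == NULL) {
        fprintf(stderr, "failed to open audio file\n");
        return 1;
    }

    ps_decode_raw(ps, fh, -1);  /* -1: decode the whole file */

    int32 score;
    const char *hyp = ps_get_hyp(ps, &score);
    printf("Recognized: %s\n", hyp ? hyp : "(nothing)");

    fclose(fh);
    ps_free(ps);
    cmd_ln_free_r(config);
    return 0;
}

Note how much of that is just wiring up model paths before any recognition happens, which brings me to my complaint below.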
At about 2 minutes into the PyCon 2010 talk, you'll notice that David has some trouble running the software. This is the problem I have with every modern software development environment. Tools are hypercritical about syntax. Command lines are long and error-prone. And development environments are ad hoc and kludgy. A lot of time is spent just setting up the environment before development has even begun. Even WinCE, a long-established embedded OS, has so many esoteric nuances and buried options to click that I'd rather work in Assembly. I liken software development to knowing where to hit an old TV to make it work. If you inherited this TV from your grandmother, would you know where to hit it? But you'd definitely know how to operate it, like all TVs. So, like everyone would, you'd throw it out and buy an easier-to-use one. SDKs need to go in a more futuristic direction.
Enough ranting.
Anyone work with CMU/PocketSphinx? How does it compare to the speech recognition in the Android operating system? I know Android is not as lightweight as some basic RTOSes, but that functionality has billions of Google dollars behind it. Still, how far apart are the two options, given that my Android phone converts my speech to garbage text far too often?
Cabe
Check out CMU Sphinx on SourceForge.