At the CEWIT 2013 Conference in October, I spoke about the challenges of developing an exclusively voice-driven user interface. In preparation, I researched forecasts for the wearable computing market and was amazed at how widely scattered they were. In the graph below you can see that forecasts for device shipments through 2018 range from close to 500 million all the way down to less than 100 million. Although the discrepancy is significant, it's safe to conclude that the wearable computing market is going to grow. How much, how fast, and how long will it take? Who knows? What we do know is that it will grow!
I also looked at the evolution of head-mounted computers and was fascinated by how far we've come: from Steve Mann's backpack computer in the '80s to the fully self-contained head-wearable computers of today, like the Motorola HC1 and Google Glass. You can now put a smart device on your head, speak to it, and it listens. Or does it?
As we began developing the voice-driven UI that drives one of our products, we came across two major challenges related to the speech engine:
Features – What are they, and are they obvious to the user?
Phrasing – What can the user say to make the application work, and how should they say it? (See the sketch after this list.)
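To make the phrasing challenge concrete, here is a minimal sketch (in Python, which is not necessarily what we used) of why fixed phrase lists are fragile: a single intent needs many registered utterances before a grammar-driven recognizer feels natural, and anything the user says outside that list simply fails. The `INTENTS` table and `match_intent` function below are invented for illustration only; they are not our product's speech engine.

```python
# A toy illustration of the phrasing problem, not the engine described
# in this post: each intent must enumerate every utterance it accepts,
# and anything the user says outside the list simply fails.
# All intent names and phrasings here are invented for illustration.

INTENTS = {
    "TAKE_PHOTO": [
        "take a picture",
        "take a photo",
        "capture image",
        "snap a photo",
    ],
    "ZOOM_IN": [
        "zoom in",
        "magnify",
        "enlarge view",
    ],
}


def match_intent(utterance: str) -> str | None:
    """Map a recognized utterance to an intent, or None if no phrasing matches."""
    normalized = utterance.strip().lower()
    for intent, phrasings in INTENTS.items():
        if normalized in phrasings:
            return intent
    return None


if __name__ == "__main__":
    # "photograph this" fails because nobody anticipated that phrasing;
    # that is exactly the gap users fall into with a voice-only UI.
    for spoken in ("Take a photo", "snap a photo", "photograph this"):
        print(f"{spoken!r} -> {match_intent(spoken)}")
```

Even this toy version hints at the scaling problem: every new feature multiplies the phrasings that have to be anticipated, tested, and kept from colliding with one another.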
These two challenges may sound easy to solve, but in our experience it took years to get the UI to a place where it addresses them effectively. Now, take these two speech-engine challenges and add a head-wearable computer to the mix, where voice is the only input and a micro-display and mini speaker are the only feedback as the user interacts with the computer. This combination posed several new challenges, including:
Read more here