Anne Taylor, part of the Microsoft Accessibility team, is looking to use AI to benefit disabled people. (Image credit: Microsoft)
Microsoft has been on an AI kick as of late, and one of its most significant announcements at this year's Build 2018 conference was about using AI (artificial intelligence) for social challenges; more specifically, its AI for Accessibility program, which looks to put tools into the hands of developers to build AI solutions for the disabled.
According to a Microsoft press release, “Already we’re witnessing this as people with disabilities expand their use of computers to hear, see and reason with impressive accuracy. At Microsoft, we’ve been putting to work stronger solutions such as real-time speech-to-text transcription, visual recognition services, and predictive text functionality. AI advances like these offer enormous potential by enabling people with vision, hearing, cognitive, learning, mobility disabilities and mental health conditions do more in three specific scenarios: employment, modern life, and human connection.”
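To make the "visual recognition services" piece concrete, here is a minimal sketch of calling Microsoft's Computer Vision API (part of Azure Cognitive Services) to describe an image in plain language, the kind of building block an accessibility app like Seeing AI is built on. The region, subscription key, and image URL are placeholders you would supply from your own Azure subscription, and the v2.0 endpoint version is my assumption, not something named in the announcement:

```python
# Minimal sketch: asking Microsoft's Computer Vision API to describe an
# image in plain English. The key, region, and image URL are placeholders.
import requests

SUBSCRIPTION_KEY = "<your-computer-vision-key>"
ENDPOINT = "https://<your-region>.api.cognitive.microsoft.com/vision/v2.0/describe"

def describe_image(image_url):
    """Return the API's best natural-language caption for an image."""
    response = requests.post(
        ENDPOINT,
        headers={"Ocp-Apim-Subscription-Key": SUBSCRIPTION_KEY},
        json={"url": image_url},
    )
    response.raise_for_status()
    captions = response.json()["description"]["captions"]
    return captions[0]["text"] if captions else "no description available"

print(describe_image("https://example.com/photo.jpg"))
```

A caption like that, read aloud by a screen reader, is exactly the sort of feature the seed grants are meant to put within reach of smaller development teams.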
The company's AI for Accessibility program outlines three ways the model will be beneficial. First, it will provide seed grants to developers, academic institutions, and nongovernmental organizations to help advance AI projects and produce the tools needed to do so. Second, it will identify the projects that hold the most promise and throw more money at them, even providing Microsoft AI experts to bring those projects to scale. Lastly, it will chuck those project designs over to Microsoft's program partners to incorporate into platform-level services, inherently empowering others to "maximize the accessibility of their offerings."
The AI for Accessibility program will have initial funding of $25 million and will run for five years, an excellent start for technology that will ultimately enable the disabled to take advantage of everything life has to offer.
Microsoft is using the Kinect to help boost Azure with refined technology. (Image credit: Evan-Amos via Wikipedia)
Staying on the topic of AI and Build 2018, Microsoft also announced Project Kinect for Azure, a program designed to use updated Kinect technology powered by an Azure brain, which Microsoft CEO Satya Nadella says "will enable developers to make the intelligent edge more perceptive than ever before." Azure AI will back a refined ToF (Time of Flight) depth sensor, which will reportedly give intelligent edge devices greater precision while consuming less power.
Project Kinect for Azure features a refined Kinect ToF depth sensor with a wider field of view and higher resolution. (Image credit: Microsoft)
The project features an improved Kinect module with a higher-resolution depth sensor at 1024 x 1024, up from the Xbox One's 640 x 480. Dual lasers integrated into the platform should let it operate in full sunlight, making it possible to use the device outdoors.
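Some quick back-of-the-envelope math shows why that resolution jump matters for the edge story; note that the 30 fps frame rate and 16-bit depth values below are my assumptions, not published specs:

```python
# Back-of-the-envelope comparison of raw depth data rates.
# ASSUMPTIONS (not published specs): 30 fps and 16 bits per depth pixel.

FPS = 30                 # assumed frame rate
BYTES_PER_PIXEL = 2      # assumed 16-bit depth values

def raw_rate_mb_s(width, height, fps=FPS, bpp=BYTES_PER_PIXEL):
    """Raw depth-stream bandwidth in megabytes per second."""
    return width * height * bpp * fps / 1e6

old = raw_rate_mb_s(640, 480)    # Xbox One Kinect resolution
new = raw_rate_mb_s(1024, 1024)  # Project Kinect for Azure resolution

print(f"640 x 480:   {old:6.1f} MB/s")   # ~18.4 MB/s
print(f"1024 x 1024: {new:6.1f} MB/s")   # ~62.9 MB/s
print(f"Increase:    {new / old:.1f}x")  # ~3.4x more raw data
```

Roughly 3.4 times more raw data per second is part of the argument for processing frames on the device itself rather than streaming everything to the cloud.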
Microsoft states, “Project Kinect for Azure unlocks countless new opportunities to take advantage of Machine Learning, Cognitive Services, and IoT Edge. We envision that Project Kinect for Azure will result in new AI solutions from Microsoft and our ecosystem of partners, built on the growing range of sensors integrating with Azure AI services.”
The key words there are IoT Edge, meaning developers will be able to use the new sensor in any number of IoT projects with the added benefit of Azure AI behind it. It will be interesting to see what people make with the platform, and where Microsoft takes it with its Intelligent Edge program.
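For a feel of what "IoT plus Azure AI" looks like in practice, here is a minimal sketch of a device pushing depth-derived telemetry to Azure IoT Hub. It uses the azure-iot-device Python SDK, which is my choice of client, not something the announcement names, and the connection string and read_depth_frame() helper are hypothetical placeholders:

```python
# Minimal sketch: publishing depth-derived telemetry to Azure IoT Hub.
# Requires the azure-iot-device package (pip install azure-iot-device).
# The connection string and read_depth_frame() are placeholders.
import json
import time

from azure.iot.device import IoTHubDeviceClient, Message

CONNECTION_STRING = "HostName=<your-hub>.azure-devices.net;DeviceId=<id>;SharedAccessKey=<key>"

def read_depth_frame():
    """Hypothetical stand-in for summarizing a 1024 x 1024 depth frame
    on the device; here it just returns a fake nearest-object reading."""
    return {"min_distance_mm": 742, "pixels_in_range": 51234}

def main():
    client = IoTHubDeviceClient.create_from_connection_string(CONNECTION_STRING)
    client.connect()
    try:
        for _ in range(10):           # send ten sample readings
            summary = read_depth_frame()
            msg = Message(json.dumps(summary))
            msg.content_type = "application/json"
            msg.content_encoding = "utf-8"
            client.send_message(msg)  # only the compact summary goes out
            time.sleep(1)
    finally:
        client.shutdown()

if __name__ == "__main__":
    main()
```

The point of the pattern is that the heavy per-pixel work stays on the device and only a compact summary crosses the network, which is exactly the "intelligent edge" pitch.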
Have a story tip? Message me at: cabe(at)element14(dot)com