Augmented reality gives us a new way to interact with technology, and ambitious companies are clamoring to be first in the field. One such concept, being developed by Microsoft Research, is called MirageTable. The system lets the user interact with objects from both the real and virtual worlds on a tabletop.
For instance, a person could set up a series of virtual bowling pins, using only one real pin as a model to clone the others, and then knock them over with a virtual ball. The researchers developed MirageTable with the idea that two people could interact with each other in the same space without actually being together (think of it as something like Star Trek’s Holodeck). To do this, the researchers used an Acer H5360 3D stereoscopic projector (1280 × 720) to display objects, as well as the other person, onto a curved screen. A Kinect positioned on top of the screen captures the objects being projected and also tracks the eye position of each user, giving that user the correct perspective on what’s in front of them. To view the objects in an augmented-reality 3D environment, each user wears a pair of Nvidia 3D shutter glasses, which make the projected objects appear spatially registered with the real world. Any object can be scanned and then cloned for interaction by either party in both the real and virtual space.
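The eye-tracked, perspective-correct rendering described above boils down to projecting each virtual point onto the screen along the line of sight from the viewer's tracked eye. Here is a minimal sketch of that idea in plain Python; the coordinate frame (x/y in the screen plane, z toward the viewer, in metres) and all the numbers are illustrative assumptions, not values from the actual system.

```python
def project_to_screen(point, eye, screen_z=0.0):
    """Project a 3D point onto the screen plane (z = screen_z) along the
    ray from the tracked eye position through the point, so the image
    looks perspective-correct from that eye.

    Coordinate convention (an assumption for this sketch): x/y lie in
    the screen plane, z points toward the viewer, units are metres.
    """
    px, py, pz = point
    ex, ey, ez = eye
    # Parameter t where the eye->point ray crosses the screen plane.
    t = (screen_z - ez) / (pz - ez)
    return (ex + t * (px - ex), ey + t * (py - ey))

# A virtual pin floating 0.2 m in front of the screen is drawn at a
# different on-screen spot for each eye position -- that shift is what
# the shutter glasses and head tracking turn into a stable 3D illusion.
left_view = project_to_screen((0.0, 0.1, 0.2), eye=(-0.1, 0.0, 0.6))
right_view = project_to_screen((0.0, 0.1, 0.2), eye=(0.1, 0.0, 0.6))
```

As the tracked eye moves left, the drawn point slides right on the screen, so the virtual object appears fixed in space rather than glued to the display.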
Virtual Bowling (via Microsoft)
Free-hand interaction with virtual objects in MirageTable (no trackers, gloves, or other hardware were required) was achieved with software that represents all real-world objects as proxy particles, which are constantly updated and used as collision geometry in the virtual world. To process all of the dynamic physics being constantly updated, the team relied on Nvidia’s GeForce GTX 580 along with its PhysX physics engine. This gives each person the ability to interact with both environments at the same time. The researchers admit there are still limitations to overcome: the Kinect (at present) can capture only the front of an object, not all sides, which leaves ‘gaps’ that make for bad texturing. Another problem is that users can only scoop or catch objects from below rather than grasping or picking them up, though the team hopes to improve on these limitations with further development. I for one am very impressed by what they have already accomplished with MirageTable. What will its full capabilities be in the future, even if only as a gaming platform?
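The proxy-particle idea described above can be sketched in a few lines: back-project each depth-camera pixel into a 3D point, then let the physics step test those points against virtual shapes. This is only a toy illustration of the technique, not the team's implementation; the camera intrinsics, the 3×3 depth frame, and the sphere test are all made-up stand-ins.

```python
def depth_to_particles(depth, fx=285.0, fy=285.0, cx=1.0, cy=1.0):
    """Back-project a depth image into 3D 'proxy particles'.
    fx/fy/cx/cy are illustrative pinhole-camera intrinsics, not real
    Kinect calibration. depth[v][u] is in metres; 0 means no reading."""
    particles = []
    for v, row in enumerate(depth):
        for u, z in enumerate(row):
            if z > 0:
                particles.append(((u - cx) * z / fx, (v - cy) * z / fy, z))
    return particles

def sphere_collisions(particles, center, radius):
    """Return the particles inside a virtual sphere -- the kind of
    per-frame collision query a physics engine would run against a
    virtual ball so the user's hand can push it around."""
    sx, sy, sz = center
    r2 = radius * radius
    return [p for p in particles
            if (p[0] - sx) ** 2 + (p[1] - sy) ** 2 + (p[2] - sz) ** 2 <= r2]

# Toy 3x3 depth frame: a flat table 1.0 m away with a bump (a hand or a
# real bowling pin) rising to 0.8 m in the middle.
depth = [[1.0, 1.0, 1.0],
         [1.0, 0.8, 1.0],
         [1.0, 1.0, 1.0]]
particles = depth_to_particles(depth)
hits = sphere_collisions(particles, center=(0.0, 0.0, 0.8), radius=0.05)
```

Because the particles are rebuilt from every depth frame, any real object placed on the table instantly becomes collision geometry, which is why no gloves or trackers are needed.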