The Infinity by Nine system showing how the peripheral images extend the main image. (via MIT)
A group at MIT’s Media Lab has come up with a computationally efficient method of creating a 3D experience around standard video. The setup, called Infinity by Nine, includes three ceiling-mounted projectors that project onto screens on the ceiling and side walls to provide a heightened level of immersion with any piece of footage.
The system uses software that generates and renders off-focus peripheral video to accompany the central television image. Using optical flow tracking, color analysis, and pattern-aware out-coloring algorithms, it generates images that extend the primary visual information to further engage the viewer. All of this happens in real time and requires no predetermined footage, though custom video could maximize the effectiveness of the Infinity by Nine system.
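MIT hasn’t published the out-coloring algorithm itself, but the core idea of extending a frame’s edges into blurred side panels can be sketched in a few lines. The following is a toy NumPy version under assumed simplifications: it mirrors and stretches the outer quarter of the frame into each peripheral panel and defocuses it with a box blur, skipping the optical flow and pattern analysis the real system uses. The function names (`box_blur`, `peripheral_panels`) and parameters are illustrative, not from MIT’s implementation.

```python
import numpy as np

def box_blur(img, radius):
    """Separable box blur via cumulative sums (a cheap stand-in for
    optical defocus of the peripheral image)."""
    k = 2 * radius + 1
    out = img.astype(float)
    for axis in (0, 1):
        pad = [(0, 0)] * out.ndim
        pad[axis] = (radius + 1, radius)
        c = np.cumsum(np.pad(out, pad, mode="edge"), axis=axis)
        hi = [slice(None)] * out.ndim
        lo = [slice(None)] * out.ndim
        hi[axis] = slice(k, None)
        lo[axis] = slice(None, -k)
        out = (c[tuple(hi)] - c[tuple(lo)]) / k
    return out

def peripheral_panels(frame, panel_width=60, blur_radius=8):
    """Mirror and stretch the outer quarter of the frame into blurred
    left/right side panels -- a toy version of peripheral out-coloring."""
    h, w, _ = frame.shape
    src = max(1, w // 4)  # strip of the frame to extend outward
    # Map each panel column back into the edge strip (stretching it).
    stretch = (np.arange(panel_width) * src) // panel_width
    left = frame[:, :src][:, ::-1][:, stretch]    # reflection of left edge
    right = frame[:, -src:][:, ::-1][:, stretch]  # reflection of right edge
    return box_blur(left, blur_radius), box_blur(right, blur_radius)
```

Because the panels only need to be plausible at peripheral-vision acuity, this kind of stretch-and-blur extrapolation per frame is cheap enough to run in real time, which is the point of the MIT design.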
The off-focus, blurry peripherals cost nothing in immersion, since human peripheral vision has reduced acuity anyway, which means the system can run on existing consumer-ready hardware. The team used open-source computer vision toolkits to analyze the primary video frame by frame. By tracking pixel position and velocity and using luminance and pattern-aware histograms, the peripheral images are rendered using the original image’s colors. The MIT team also used commonly available GPUs, minimizing both the required computing power and the rendering time. Lastly, the system synchronizes the rendered images with camera motion to complete the 3D viewing experience.
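The "tracking pixel position and velocity" step is what lets the panels pan along with the camera. As a rough illustration, assuming nothing about MIT’s actual flow code, here is a brute-force block-matching estimate of a single dominant motion vector between two grayscale frames (the `estimate_shift` name and `max_shift` parameter are hypothetical):

```python
import numpy as np

def estimate_shift(prev, curr, max_shift=4):
    """Estimate one dominant (dx, dy) between two grayscale frames by
    brute-force block matching, so that curr[y, x] ~= prev[y + dy, x + dx].
    A toy stand-in for the optical flow tracking that keeps the
    peripheral panels synchronized with camera motion."""
    h, w = prev.shape
    m = max_shift
    core = curr[m:h - m, m:w - m].astype(float)
    best, best_err = (0, 0), np.inf
    for dy in range(-m, m + 1):
        for dx in range(-m, m + 1):
            cand = prev[m + dy:h - m + dy, m + dx:w - m + dx].astype(float)
            err = np.mean((core - cand) ** 2)
            if err < best_err:
                best, best_err = (dx, dy), err
    return best
```

A production system would compute dense per-pixel flow on the GPU rather than one global vector, but the principle is the same: match each region of the new frame against the previous one and reuse the recovered motion to shift the rendered peripherals.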
Test subjects report a definite increase in engagement with what they see on the primary screen. Some have even reported synaesthetic effects, like seeing on-screen flames or explosions and feeling heat. These results, along with footage tailored to the Infinity by Nine system, could make the technology promising for cinema, computer gaming, and any other visual platform where user engagement is desired.
Cabe