Three-dimensional video conferencing seems like something found only in sci-fi movies, but a research team from Queen’s University’s Human Media Lab has built a life-sized working model from a few readily available electronic components. The team, led by Professor Roel Vertegaal, designed the 3D video conferencing pod, called ‘Telehuman’, around Microsoft’s Kinect sensor.
The design uses a translucent acrylic cylinder, approximately 5.6 feet tall with a diameter of 29.5 inches, mounted on a plywood platform as the Telehuman’s display screen. Six Kinect sensors are arranged in a ring on top of the acrylic screen to capture a person’s image from the front, back, and both sides, producing a real-time 3D image at 30 fps. A DepthQ projector (working in conjunction with an Nvidia 3D Vision kit) sits in the bottom of the screen’s base and is aimed upward at a convex mirror, which spreads the 720p projected image of the other user across the entire screen.
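To put that capture geometry in concrete terms, here is a minimal Python sketch (mine, not the Human Media Lab’s; only the dimensions come from the article) of where six evenly spaced, outward-facing Kinects would sit around the rim of a 29.5-inch cylinder:

```python
import math

# Dimensions from the article (inches): a 29.5 in diameter cylinder,
# roughly 5.6 ft (about 67 in) tall, with the Kinects on the top rim.
DIAMETER = 29.5
HEIGHT = 5.6 * 12
NUM_SENSORS = 6

def sensor_ring(n=NUM_SENSORS, radius=DIAMETER / 2):
    """Place n outward-facing sensors evenly around the rim.

    Six sensors spaced 60 degrees apart cover the full 360 degrees
    around a user, which is what lets the pod see the front, back,
    and both sides at once.
    """
    ring = []
    for i in range(n):
        angle = 2 * math.pi * i / n
        x = radius * math.cos(angle)
        y = radius * math.sin(angle)
        ring.append((round(x, 2), round(y, 2), round(math.degrees(angle), 1)))
    return ring

for i, (x, y, facing) in enumerate(sensor_ring()):
    print(f"Kinect {i}: x={x} in, y={y} in, z={HEIGHT:.1f} in, facing {facing} deg")
```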
The images captured by the Kinect sensors are sent to a series of PCs (one for every two Kinect sensors) that process the image data, along with the user’s distance and position relative to the screen, and broadcast the results over a gigabit LAN connection to the corresponding party in the conference. The Telehuman is based on Human Media Lab’s BodiPod 3D imaging system, which gives researchers a cut-away 3D view of the human body. Unlike the Telehuman, however, the BodiPod has a gestural interface that lets users manipulate images of human anatomy: a ‘peel’ gesture removes a layer of the anatomy displayed on the screen, while other gestures control how deep the anatomical image is cut using ‘proximity-based slicing’. Both systems share the same technological base and prove that real-time 3D imaging systems are no longer just an aspect of science fiction.
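The article doesn’t describe the wire format, but the capture-PC side of that pipeline might look something like the Python sketch below; the `grab_depth_frame` placeholder, the remote address, and the framing header are all invented for illustration:

```python
import socket
import struct
import zlib

# Hypothetical sketch of the capture-PC side of the pipeline. Each PC
# handles two Kinects, tags every depth frame with a sensor ID and frame
# number, and streams the result to the far-end pod over the gigabit LAN.
REMOTE_POD = ("192.168.1.50", 9000)  # assumed address of the remote pod

def grab_depth_frame(sensor_id: int) -> bytes:
    """Placeholder for a real Kinect capture call (e.g. via libfreenect)."""
    raise NotImplementedError("swap in a real Kinect depth-frame grab")

def stream_frames(sensor_ids=(0, 1)):
    with socket.create_connection(REMOTE_POD) as sock:
        frame_no = 0
        while True:
            for sid in sensor_ids:
                raw = grab_depth_frame(sid)   # 640x480, 16-bit depth
                payload = zlib.compress(raw)  # keep under the link budget
                # 9-byte header: sensor ID, frame number, payload length.
                header = struct.pack("!BII", sid, frame_no, len(payload))
                sock.sendall(header + payload)
            frame_no += 1
```

As a back-of-the-envelope check on why a gigabit link matters here: six Kinects pushing raw 640x480, 16-bit depth at 30 fps works out to roughly 885 Mbit/s, nearly saturating the link on its own, so some compression or culling along these lines seems plausible.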
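As for the BodiPod’s ‘proximity-based slicing’, the exact mapping isn’t given, but the basic idea (the user’s distance from the screen selects how deep the cut-away goes) can be sketched in a few lines of Python; the thresholds and layer names below are invented for the example:

```python
# Hypothetical BodiPod-style slicing: walking closer to the cylinder
# cuts deeper into the anatomical model. Range and layers are assumed.
LAYERS = ["skin", "muscle", "skeleton", "organs"]
NEAR_M, FAR_M = 0.5, 2.0  # assumed interaction range in meters

def slice_depth(user_distance_m):
    """Map distance to a 0..1 slicing depth: closer means deeper."""
    clamped = min(max(user_distance_m, NEAR_M), FAR_M)
    return 1.0 - (clamped - NEAR_M) / (FAR_M - NEAR_M)

def layer_for(user_distance_m):
    depth = slice_depth(user_distance_m)
    return LAYERS[min(int(depth * len(LAYERS)), len(LAYERS) - 1)]

for d in (2.0, 1.5, 1.0, 0.6):
    print(f"{d} m away -> slice depth {slice_depth(d):.2f} -> {layer_for(d)}")
```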
Cabe