Surgeons of the future might use a system that recognizes hand gestures as commands to control a robotic scrub nurse or tell a computer to display medical images of the patient during an operation. The ‘vision-based hand gesture recognition’ technology could have other applications, including the coordination of emergency response activities during disasters.

Surgeons routinely need to review medical images and records during surgery, but stepping away from the operating table to touch a keyboard and mouse can delay the surgery and increase the risk of spreading infection-causing bacteria. The new approach is a system that uses a camera and specialized algorithms to recognize hand gestures as commands to instruct a computer or robot.

“While it will be very difficult using a robot to achieve the same level of performance as an experienced nurse who has been working with the same surgeon for years, often scrub nurses have had very limited experience with a particular surgeon, maximizing the chances for misunderstandings, delays and sometimes mistakes in the operating room. In that case, a robotic scrub nurse could be better,” said Juan Pablo Wachs, an assistant professor of industrial engineering at Purdue University.

The Purdue researcher has developed a prototype robotic scrub nurse, working with faculty in the university's School of Veterinary Medicine. Wachs is developing advanced algorithms that isolate the hands and apply ‘anthropometry’: predicting the position of the hands based on knowledge of where the surgeon's head is. The tracking is achieved through a camera, in this case Microsoft's Kinect, mounted over the screen used for visualization of images.

“Eventually we also want to integrate voice recognition, but the biggest challenges are in gesture recognition,” said Wachs.
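To make the anthropometry idea concrete, here is a minimal sketch of how a tracked head position could constrain the search for the hands. This is an illustration only, not the Purdue algorithm: the function names and the body-proportion ratios are assumptions chosen for demonstration.

```python
# Illustrative sketch of anthropometry-based hand localization.
# Assumption: the head has already been tracked in the camera image,
# giving its center (head_x, head_y) and height in pixels.
# The proportion constants below are rough, assumed values.

def hand_search_region(head_x, head_y, head_height_px):
    """Predict a bounding box (left, top, right, bottom) in image
    coordinates where the hands are likely to appear, based on the
    head position and simple body proportions."""
    arm_reach = 3.0 * head_height_px        # assumed horizontal reach per side
    top = head_y + 0.5 * head_height_px     # just below the chin
    bottom = head_y + 4.0 * head_height_px  # roughly waist level
    return (head_x - arm_reach, top, head_x + arm_reach, bottom)

def is_plausible_hand(point, region):
    """Check whether a candidate hand detection falls inside the
    anthropometrically predicted region, filtering out false positives."""
    x, y = point
    left, top, right, bottom = region
    return left <= x <= right and top <= y <= bottom

# Example: head centered at (320, 100), 40 pixels tall.
region = hand_search_region(320, 100, 40)
print(region)                          # (200.0, 120.0, 440.0, 260.0)
print(is_plausible_hand((300, 200), region))  # True: inside the region
print(is_plausible_hand((50, 50), region))    # False: far from the body
```

In a real pipeline, a detector would scan only the predicted region (or discard detections outside it), which both speeds up processing and rejects hand-like false positives elsewhere in the scene.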