Photograph of the University of Glasgow's Round Reading Room
These days, almost everyone owns a smartphone, and the cameras on these phones have helpful tools that let anyone take a decent picture. Yet there are still plenty of ways to mess up a photo. Stephen Brewster, a researcher in human-computer interaction at the University of Glasgow, UK, is developing a new camera interface, the point of interaction between the phone and its user, that helps you get pictures right the first time.
The interface uses the sensors and processing power in a smartphone to give you more information before you take a picture. For example, the accelerometers that measure movement can detect whether your hands are shaking or whether the shot is aligned with the horizon, and the phone can respond with warning guidelines on the screen or even a vibration. Because many phones have only one built-in camera, Brewster's team has also extended the phones' face detection to help you frame arm's-length self-portraits with friends: as you point the camera towards yourself you cannot see the picture, but the phone vibrates once for each face it has in its sights.
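The accelerometer checks described above can be sketched in a few lines. This is a minimal illustration, not Brewster's actual implementation: the threshold values and function names are assumptions, and a real phone app would read a live sensor stream rather than a list of samples.

```python
import math
from statistics import pstdev

def roll_degrees(ax, ay):
    """Roll angle relative to gravity (portrait orientation assumed):
    0 degrees means the phone is level with the horizon."""
    return math.degrees(math.atan2(ax, ay))

def shake_level(samples):
    """Standard deviation of acceleration magnitude over recent samples;
    a high value suggests shaking hands."""
    mags = [math.sqrt(x * x + y * y + z * z) for x, y, z in samples]
    return pstdev(mags)

def pre_shot_warnings(samples, tilt_limit=3.0, shake_limit=0.5):
    """Return a list of problems detected before the shutter fires.
    'samples' is a list of (ax, ay, az) accelerometer readings in m/s^2;
    the limits here are illustrative guesses, not published values."""
    ax, ay, _ = samples[-1]
    problems = []
    if abs(roll_degrees(ax, ay)) > tilt_limit:
        problems.append("tilted")
    if shake_level(samples) > shake_limit:
        problems.append("shaky")
    return problems
```

A steady, level phone (gravity straight down the y-axis) produces no warnings, while jittery, tilted readings trigger both checks.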
Brewster said, “You’ve got to get it right the first time because the event has gone, and if you’ve got a really bad photo, you’ve lost it.” That’s why he has devised a “traffic-light system” that indicates the quality of a shot before you take it. A green light tells you the shot looks good, while a red or amber light means you may want to recompose it. Brewster plans to present his system at the Electronic Imaging conference in San Francisco in January, and also plans to release a version of the interface as an Android app by the end of the year. Smartphones may never give you the quality of a digital camera with a good lens, but Brewster is in talks with a major camera manufacturer about incorporating some of his ideas into their products.
Sam Hasinoff, a software engineer at Google, is working on a solution to another problem: the balancing act between a photo’s exposure time and its depth of field, or how much of the scene is in focus. A small aperture (the adjustable opening that limits how much light passes through the lens) keeps the whole scene sharp but needs a long exposure, risking a motion-blurred subject, while a larger aperture is fast but leaves you with a fuzzy background. Hasinoff’s technique takes multiple wide-aperture photos focused at different depths and combines them into an image equivalent to a small-aperture photo, but captured in a fraction of the time. His method, called “light-efficient photography”, automatically calculates which combination of photos will produce the desired picture for a chosen exposure.
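The core idea of combining wide-aperture shots focused at different depths can be approximated with a simple focal-stack merge: for each pixel, keep the value from the frame where that pixel looks sharpest. This is a generic sketch, not Hasinoff's published algorithm, and it uses a crude gradient-magnitude sharpness measure where a real system would use something more robust.

```python
import numpy as np

def sharpness(img):
    """Per-pixel sharpness proxy: magnitude of local intensity gradients."""
    gy, gx = np.gradient(img.astype(float))
    return np.hypot(gx, gy)

def merge_focal_stack(frames):
    """Given same-size grayscale frames focused at different depths,
    keep each pixel from the frame where it appears sharpest."""
    stack = np.asarray(frames, dtype=float)
    scores = np.stack([sharpness(f) for f in stack])
    best = np.argmax(scores, axis=0)          # winning frame per pixel
    rows, cols = np.indices(best.shape)
    return stack[best, rows, cols]
```

Feeding in one frame with a sharp edge and one that is uniformly flat, the merge keeps the sharp frame's pixels wherever its gradients dominate.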
Hasinoff said, “If either the scene or camera is moving, our method will record less motion blur, leading to a sharper and more pleasing photo.” Even though some of the processing still requires a PC, Hasinoff thinks the technique could be implemented directly in existing cameras. The researchers say future cameras could boast all of these methods, performing both pre- and post-processing to help give you the best possible picture. “That would be the perfect solution,” Brewster said.