This project has gotten off to a very slow, very rocky start, but it is still alive - barely. There are still some key parts on order, but I have been able to get some image processing sorted out.
Here is a little description of how the big-screen touch digitizer will work:
This diagram shows an outer frame that holds a camera in each top corner.
The picture in the middle is a big-screen TV driven by a Raspberry Pi.
The field of view of the left camera is the yellow translucent area.
The field of view of the right camera is the green translucent area.
The point where the red and blue lines meet is an example of where a finger might get detected by both cameras.
The left camera's detection system can only indicate the angle alpha (a) to the finger.
The right camera's detection system can only indicate the angle beta (b) to the finger.
Knowing alpha, beta, and the distance r between the cameras allows dl and dr to be calculated, which in turn allows x and y to be calculated, so we know where on the Pi display the finger is.
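For reference, here is a quick Python sketch of that calculation. It assumes alpha and beta are measured from the line joining the two cameras, with the origin at the left camera and y pointing down toward the bottom of the screen (those conventions are my placeholders, so adjust to match the diagram). It is just the law of sines applied to the camera-camera-finger triangle:

```python
import math

def triangulate(alpha, beta, r):
    """Locate the finger from the two camera angles.

    Assumed conventions (placeholders, adjust to match the diagram):
      alpha -- angle at the left camera, from the baseline (radians)
      beta  -- angle at the right camera, from the baseline (radians)
      r     -- distance between the two cameras

    Returns (x, y) with the origin at the left camera, x running along
    the baseline toward the right camera, y running down the screen.
    """
    # The apex angle of the camera-camera-finger triangle is
    # pi - alpha - beta, and sin(pi - a - b) == sin(a + b).
    gamma = alpha + beta

    # Law of sines: dl / sin(beta) = r / sin(gamma), likewise for dr.
    dl = r * math.sin(beta) / math.sin(gamma)   # left camera to finger
    dr = r * math.sin(alpha) / math.sin(gamma)  # right camera to finger
                                                # (the dr in the diagram;
                                                # not needed for x, y below)
    # Resolve the left-camera ray into screen coordinates.
    x = dl * math.cos(alpha)
    y = dl * math.sin(alpha)
    return x, y

if __name__ == "__main__":
    # A finger dead centre, 45 degrees from each camera, cameras 1 m apart:
    print(triangulate(math.radians(45), math.radians(45), 1.0))  # (0.5, 0.5)
```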
To obtain alpha and beta, we only need to look at a single row of pixels in each camera's field of view.
The following video demonstrates how this can be done with a camera and a Raspberry Pi:
The location of the finger in that row is really just an angle, measured from one side of the camera's field of view to the other.
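Something along these lines could do the conversion from pixel column to angle. The crude threshold detector, the field-of-view figure, and the linear pixel-to-angle mapping are all placeholders, not the method in the video:

```python
import numpy as np

# Assumed horizontal field of view in degrees -- a placeholder figure
# (it happens to match the Raspberry Pi Camera Module v2).
FOV_DEG = 62.2

def finger_angle(row, threshold=50):
    """Estimate the finger angle from one row of grayscale pixels.

    row       -- 1-D numpy array of pixel intensities (one scan line)
    threshold -- deviation from background treated as "finger" (placeholder)

    Returns the angle in degrees from the left edge of the field of view,
    or None if nothing is detected in this row.
    """
    # Treat the pixels that differ most from the row's typical level
    # as the finger (a deliberately crude detector).
    background = np.median(row)
    deviation = np.abs(row.astype(int) - background)
    if deviation.max() < threshold:
        return None

    # Centroid of the deviating pixels = the finger's column.
    cols = np.nonzero(deviation >= threshold)[0]
    finger_col = cols.mean()

    # Map columns 0..N-1 linearly onto 0..FOV degrees.  This ignores
    # lens distortion; a real build would calibrate this mapping.
    return finger_col / (len(row) - 1) * FOV_DEG

if __name__ == "__main__":
    # Fake scan line: dim background with a bright blob near 3/4 width.
    row = np.full(640, 20, dtype=np.uint8)
    row[470:490] = 200
    print(finger_angle(row))  # about 46.7 degrees
```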
This angular information from each camera will be sent to another computer to figure out the x-y position of the finger.
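One simple option for that hand-off (just a sketch; the address, port, and JSON format are placeholders, not a settled design) is to fire each reading off as a UDP packet:

```python
import json
import socket
import time

# Placeholder address of the machine doing the triangulation.
HOST, PORT = "192.168.1.50", 5005
CAMERA_ID = "left"  # "right" on the other Pi

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

def send_angle(angle_deg):
    """Send one reading (or None for 'no finger seen') as JSON over UDP."""
    packet = json.dumps({"camera": CAMERA_ID,
                         "angle": angle_deg,
                         "t": time.time()})
    sock.sendto(packet.encode(), (HOST, PORT))
```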
I don't have two cameras yet, but I can start building the camera frame and the computer that will compute the finger's location.