Previous posts
Post 2 - Installing OpenCV - Prerequisites
EyePrints - Post 4 - Installing Pygaze
To complete the eye-tracker, I will install a tool (based on PyGaze) that performs eye tracking using a common webcam as a source. I preferred a USB camera over the Raspberry Pi Camera because its longer cable allows more flexibility in how it can be mounted.
The webcam eyetracker requires an additional library (libwebcam.py). To keep things simple, I just copied the library into the PyGaze package directory:
$ cp ~/pygaze/additional_libraries/libwebcam.py /usr/local/lib/python2.7/dist-packages/pygaze/
To install the webcam eye tracker, download and unzip the source code:
$ wget https://github.com/esdalmaijer/webcam-eyetracker/archive/master.zip
$ unzip master.zip
The source code does not work out of the box: it was not written to run on a Raspberry Pi, so please replace the original file with the file attached to this post.
The software works in a relatively straightforward way. Every image that the webcam produces is analyzed to find the dark bits in the image. This makes sense, because the pupil is usually one of the darkest parts of the image of your face. Of course, exactly how dark your pupil is can differ depending on the environment. Therefore, you will have to tell the software what 'dark' actually means. You do so by setting a threshold value: a single number below which everything is considered dark.
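The thresholding idea can be sketched in a few lines of NumPy. This is not the tool's actual code, just a minimal illustration of the principle: every pixel below the threshold is a candidate pupil pixel, and the average position of those pixels approximates the pupil centre.

```python
import numpy as np

# Hypothetical 3x3 grayscale frame (values 0-255); the 'pupil' is the 30
frame = np.array([[200, 190, 180],
                  [185,  30, 175],
                  [195, 188, 182]], dtype=np.uint8)

threshold = 50                # everything below this value counts as 'dark'
dark = frame < threshold      # boolean mask of candidate pupil pixels

# Coordinates of the dark pixels; their mean approximates the pupil centre
ys, xs = np.nonzero(dark)
print(int(xs.mean()), int(ys.mean()))  # -> 1 1 (the centre pixel)
```

In a real frame the threshold has to be tuned to the lighting conditions, which is exactly what the calibration step described above lets you do.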
However, there could be other dark parts in the image. The easiest way to prevent incorrect pupil detection is to specify where in each image the software is allowed to look. Basically, it needs to know where the pupil is, and how big the area around the pupil is in which it can look for the pupil. The easiest way to achieve this is by directly telling the software where your pupil is. Alternatively, one could write a sophisticated face-detection algorithm that finds your face in the image and then knows where to look for the pupil. This, however, has the disadvantage that the entire face would have to be present in the image, which is not necessarily the case.
After indicating the pupil location, you can increase or reduce the size of the 'pupil bounding rect', the enclosure outside of which the software ignores everything. You can set its limits to anything (and you can even deactivate it). The larger the bounding rect, the higher the risk of false pupil detection; the smaller the bounding rect, the higher the risk of losing the pupil if you move too fast. After setting the rect, you can test whether your settings are good by moving and gazing around a bit, and you can adjust the threshold if needed (or go back to any earlier step).
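The bounding rect simply restricts the dark-pixel search to a region around the last known pupil position, so dark features elsewhere in the frame (eyebrows, shadows) are ignored. A hedged sketch of that idea, again in plain NumPy rather than the tool's own code, with made-up coordinates:

```python
import numpy as np

# Hypothetical 120x160 frame: bright face, two dark patches
frame = np.full((120, 160), 200, dtype=np.uint8)
frame[10:14, 10:14] = 20    # a dark distractor (e.g. eyebrow shadow)
frame[60:66, 80:86] = 20    # the pupil we actually want

threshold = 50
# Bounding rect around the last known pupil position: (x, y, width, height)
rx, ry, rw, rh = 70, 50, 40, 30

roi = frame[ry:ry + rh, rx:rx + rw]      # search only inside the rect
ys, xs = np.nonzero(roi < threshold)
if xs.size:
    # Convert the ROI coordinates back to full-frame coordinates
    px, py = rx + int(xs.mean()), ry + int(ys.mean())
    print(px, py)  # -> 82 62: the pupil, not the distractor
```

The trade-off described above is visible here: a rect that is too small would lose the pupil after a fast head movement, while a rect large enough to cover the top-left patch would pick up the distractor.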
As suggested by the author, I removed the infrared filter from the camera and illuminated my face with a bunch of infrared LEDs.
Then I launched the application:
$ cd ~/webcam-eyetracker-master
$ python GUITest.py
And it seems to work!