This was an interesting exercise: getting simple camera capability on the BeagleBone Black directly (without USB) for low-resolution imaging, which is ideal for some machine vision use-cases, robotics and movement detection. The lower resolution means there is less data to process, and it opens up the possibility of connecting multiple cameras (e.g. for stereo vision).
Here is an example image taken with the BBB.
This image was 160x120 pixels, and has been slightly corrected (although it is still quite washed out) and resized to 640x480 using bilinear scaling on a desktop, although this could also be done by the BBB – see here.
This is the actual unaltered image from the camera – the software saves it in .png format:
So, as expected, the quality is not great (and it is probably not even well focussed – I didn’t check), but it is adequate for some use-cases. In the photo, Bert is about 55cm from the camera, the top of his hair is 18cm from the floor and the furthest end of the blue block is 45cm from the camera. Also, the image is currently flipped left-to-right; this is an easy fix, of course. Note that some work will need to be done to make this a usable camera system.
How does it work?
There is not much to the implementation – it is a low-cost camera module (OV9655) – £9.58 from ebay (or £2.56 or less with no PCB) – connected to a buffer and then to the BBB. In other words, much the same approach as for the ADC here. The same buffer board was used.
This is a photo of the entire layout:
This is a close-up of the camera board. Orienting it with the connector at the top gave an image the correct way up (but left-right flipped, as mentioned earlier):
As you can see in the photo, some slight modifications need to be done – these are detailed further below. Note that the camera is the same one that is available on the STM32F4DIS-CAM module which is more expensive, but which won’t need the modifications.
Some buffering is needed as mentioned earlier and this is achieved with a 74LVC244A device and MC74VHC1GT50 (see the ADC post for information).
The diagram here shows the overall system:
On startup, the application initializes the camera (using I2C) and then starts off the PRU software.
The PRU code is very similar to the ADC example: it sits and waits for a command, then captures data from the camera and dumps it to shared RAM, ready for the Linux-hosted application (called cam_app in the diagram) to pick up. There is a slight complication here in that there is not much RAM available; just 12 Kbytes are shared between the PRU and the ARM core. This means it is not possible to transfer the entire image in one go; I had to grab the image in four portions. The cam_app code sends a number between 0 and 3 to select the desired portion of the image. Fixing this properly would require a driver to change the RAM allocation.
The code dumps the data into a cairo buffer, and writes a file in png format. Although it works, the code (attached) is extremely untidy and is just a quick prototype.
As mentioned, a low-cost camera module was used. It requires some decoupling capacitors; without them, expect it not to work at all. Even with decoupling, the camera only just worked with the extreme wiring shown in the photo earlier (time to make a PCB).
The STM32F4DIS-CAM documentation was used as a reference. The cheap ebay board was missing 1nF capacitors. The two photos here show where they need to be soldered (0603 sized capacitors).
The rear of the camera board requires two 1nF capacitors, where the yellow outlines are shown in the photo here. This is an unmodified board photo. The 1nF capacitors are just piggy-backed on top of the existing capacitors.
Below is a photo of the front of the camera board (unmodified board photo). Another 1nF capacitor is required at the yellow outline again piggy-backed.
At the red outline (i.e. across the two pins of the header), a 10nF and a 1uF capacitor were both piggy-backed. Something higher than 1uF would be preferable.
At the blue circle in the diagram above, a 32MHz clock needs to be fed in; a 3.3V oscillator in a 7x5mm package worked fine. It can be seen glued upside-down in one of the photos earlier.
Buffers are needed for all the data lines (there are 8, marked D2..9), and preferably for the VSYNC, HREF and PCLK pins too. The table here shows the wiring from the BBB point of view, and how it is assigned to the camera (through the buffers). All pins are inputs apart from the buffer enabling pin. HDMI is disabled.
There was a lot of high-frequency noise on the VSYNC line. I tried to reduce it with a series resistor, but it didn’t have much effect (I didn’t measure it in any detail). I don’t know if it is related to the PCB layout or (more likely!) the wiring. After a while I gave up trying to resolve it on such a poorly constructed layout (a PCB is required), so the solution for this particular layout was to perform some de-glitching in software, which worked well.
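The de-glitching idea can be sketched as a simple persistence filter (this is my own minimal illustration, not the attached PRU code): a level change on VSYNC is only accepted once the new level has been seen for several consecutive samples, so short glitches are rejected.

```c
/* Persistence de-glitch filter: the reported level only changes after the
   raw input has held the new level for STABLE_COUNT consecutive samples. */
#define STABLE_COUNT 4

typedef struct {
    int level;      /* current accepted (filtered) level */
    int candidate;  /* last differing raw sample seen */
    int count;      /* consecutive samples matching candidate */
} deglitch_t;

static int deglitch(deglitch_t *d, int raw)
{
    if (raw == d->level) {
        d->count = 0;            /* back at the accepted level: reset */
    } else if (raw == d->candidate) {
        if (++d->count >= STABLE_COUNT)
            d->level = raw;      /* new level persisted long enough: accept */
    } else {
        d->candidate = raw;      /* first sample at a new level */
        d->count = 1;
    }
    return d->level;
}
```

A single-sample glitch (0,1,0,...) leaves the output unchanged, while a genuine edge propagates after STABLE_COUNT samples of delay, which is acceptable given how slowly VSYNC changes relative to the sampling rate.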
The longer term plan is to connect up at least two of these cameras to the BBB for additional flexibility, such as stereo vision capability. It would be easy to do, since the buffers have an enable pin.
The camera is capable of 1Mpixel imaging, but for now I set it to the very low-resolution QQVGA (160x120 pixels) mode, which is still useful for some applications and reduces the need for a driver to allocate more RAM.
Once the camera has been configured for QQVGA mode via the I2C-like interface, PCLK runs at 2MHz continuously.
The camera configuration is stored in camctrl.c, and is just an unannotated array of (register, data) pairs. However, it can be decoded using the camera datasheet.
The video is comprised of frames, and each frame contains 120 lines. Each line contains 160 pixels.
Each pixel is in RGB565 format, which means that 16 bits (two bytes) are used per pixel; the two bytes of each pixel are sent one after the other. To summarize, each line contains 160 x 2 = 320 bytes.
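Unpacking an RGB565 pixel into 8-bit R, G, B values looks like this (a sketch of my own; the byte order shown, high byte first, is an assumption and depends on how the camera is configured):

```c
/* Decode one RGB565 pixel (two bytes, assumed high byte first) into
   8-bit R, G, B components. */
static void rgb565_to_rgb888(unsigned char hi, unsigned char lo,
                             unsigned char *r, unsigned char *g,
                             unsigned char *b)
{
    unsigned int p  = ((unsigned int)hi << 8) | lo;
    unsigned int r5 = (p >> 11) & 0x1f;   /* 5 bits of red   */
    unsigned int g6 = (p >> 5)  & 0x3f;   /* 6 bits of green */
    unsigned int b5 = p & 0x1f;           /* 5 bits of blue  */
    /* Expand to 8 bits by replicating the top bits into the low bits,
       so that full scale maps to 255 rather than 248/252 */
    *r = (unsigned char)((r5 << 3) | (r5 >> 2));
    *g = (unsigned char)((g6 << 2) | (g6 >> 4));
    *b = (unsigned char)((b5 << 3) | (b5 >> 2));
}
```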
In order to capture images, it is important to understand how the camera signals the start and end of frame, and the format of data for each line within a frame. The detail is described here.
Start of frame
The start of each frame begins with VSYNC going low. The diagram here shows VSYNC going low, and then after about 2.8msec the first line of data appears (i.e. when HREF goes low).
The diagram shows all the interesting signals; PCLK, VSYNC, and HREF. Only some of the data signals (D7,8,9) are shown. Note that signals D2..D9 will be used for the eight bits (D0 and D1 are not used in all modes).
The diagram here shows the data per line. At 2MHz the PCLK period is of course 0.5usec. Once HREF goes low, the line data is read. 320 bytes are read as mentioned earlier, per line of 160 pixels. Each burst of line data is 160usec long (i.e. 320bytes x 0.5usec) followed by 640usec delay until the next line.
The diagram below shows a zoomed-in view of the beginning of each line. It can be seen that each byte needs to be read on the rising edge of PCLK.
End of frame
The diagram below shows the last few lines before end of the frame. It can be seen that after the last line in a frame (i.e. 120th line) there is approximately a 990usec delay until VSYNC goes high to indicate that the frame is complete.
Not shown on the diagram above, VSYNC will stay high for about 800usec and then go low for the next start of frame indication.
Building and Running the Software
The code can be built by issuing
make clean followed by
make. (If you make a change to the C source, type
make clean first, because my makefile is broken - I have to fix it someday!) (Note - make sure the has been compiled and installed first.) This will build three things: the cam_app software, the PRU code and a .dtbo file which is used to configure the pins. Copy the built .dtbo file into the
HDMI will need to be disabled (or, modify the code to use different pins available to PRU#0 - the code currently uses PRU#1). HDMI disablement is mentioned on the ADC page.
To run the code, first type
source install_cam_cape.sh, which will execute some commands to configure the pins using the .dtbo file.
Then, just type
The code will capture an image and dump it to a file called
It would be great to integrate the camera and buffer into a single module so that multiple cameras could be connected up to the BBB (and perhaps also a serializer for wiring convenience and distance). Or, a board with two cameras spaced eye-distance apart for 3D vision processing.