Raspberry Pi 3 Camera Bundle - Review

RoadTest: Raspberry Pi 3 Camera Bundle

Author: davide.bellizia

Creation date:

Evaluation Type: Development Boards & Tools

Did you receive all parts the manufacturer stated would be included in the package?: True

What other parts do you consider comparable to this product?:

What were the biggest problems encountered?:

Detailed Review:

Introduction
All the features of a Linux computer in the size of a small packet of nuts: the Raspberry Pi 3 Model B is a must-have for a newbie approaching the world of Single Board Computers (SBCs), as well as a nice backup and standalone product for pro users. In addition, the PiCamera V2 module makes it even more enjoyable, opening the gates of the computer vision world to many users with (almost) simple coding in Python! During my PhD I attended a multimedia elaboration class, which gave me a brief introduction to computer vision, but I never did a hands-on example to really absorb the knowledge about this wonderful branch of digital signal processing.

Unboxing

The Raspberry Pi 3 + PiCamera V2 bundle comes in two boxes. The Raspberry Pi 3 Model B's box contains only the board itself, with all its features. No cables or power supply unit are provided in the box, which has actually been the standard for this product since its first appearance.

The PiCamera V2 module is provided in a separate box, which contains a small printed circuit board (PCB) implementing the "eye" of the Raspberry Pi 3.

 

The board

The board comprises the following components and features:

  • Main processor: Broadcom BCM2837, with a 1.2GHz 64-bit quad-core ARM Cortex-A53 CPU and a 400MHz dual-core VideoCore IV GPU;
  • 1GB of 900MHz RAM, shared between CPU and GPU;
  • 40-pin header (GPIO);
  • Ethernet port (RJ45, 10/100Mbps);
  • 4 USB 2.0 ports;
  • 3.5mm composite TRRS jack, for stereo audio + video;
  • HDMI port;
  • Micro-USB power jack;
  • On-board 802.11n Wi-Fi + Bluetooth 4.1;
  • Display Serial Interface (DSI, flat connector);
  • Camera Serial Interface (CSI, flat connector);
  • microSD slot.


The microSD slot is empty, but a card is necessary to have a functional system: this storage takes the place of the HDD in a regular PC, holding the Operating System (OS) and data. It is highly recommended to buy a >8GB class 10 microSD. The class 10 requirement comes from the fact that the OS and other critical components are stored on and loaded from this tiny storage, so it has to provide enough read/write speed to keep the system usable. Nowadays a good-quality 16GB class 10 card can be found for about 10EUR, which is very affordable. I used a Kodak 16GB class 10 microSDHC-I card. There are a lot of OSes compatible with the Raspberry Pi 3 Model B. Since I prefer the classic Linux environment, I used Raspbian, a Debian-based distribution designed for the Raspberry Pi platform.

It should also be pointed out that the whole system is provided without any case or cooling system. The latter can be very useful in some kinds of usage, since the unit can reach fairly high temperatures, with a real risk of failure due to thermal problems. The board is powered through a micro-USB port, making it very easy to use a normal PSU from a recent Android smartphone or tablet. A 5V 2000mA power supply is recommended to ensure the system works correctly. A good choice could be to buy a kit with PSU + case + fan + heatsink, which can be found easily.

I set the whole system up in a "headless" configuration, since I am far away from my desk in Italy, and I did my best (may Google be on your side!) to get everything working. In a "headless" configuration the board has no monitor, mouse or keyboard attached, so an additional step is required: manually setting up the Wi-Fi configuration in the wpa_supplicant.conf file, which contains the information for the available WPA connections. This file can be found on the microSD card, inside the classic 'etc' folder. The folder, and thus the file, is accessible with a proper OS, since the microSD is formatted as ext4, the classic Linux file system; I used Ubuntu MATE to manually edit the configuration file, which is read during the boot of the OS. Once the system is running and connected to the Wi-Fi (or through an Ethernet cable), it can easily be reached over SSH, and after installing a VNC server it is possible to open a remote session (I prefer VNC or MobaXterm). A remote session lets you control the whole system through an emulated window instead of a real screen, using the mouse/touchpad and keyboard of your host PC as the mouse and keyboard of the Raspberry Pi. None of these extra steps are needed if you have a real screen, mouse and keyboard, which is highly recommended for newbies.
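For reference, the headless Wi-Fi setup boils down to a few lines in wpa_supplicant.conf. A minimal example might look like the following (the SSID, passphrase and country code are placeholders to adapt to your own network):

```
country=IT
ctrl_interface=DIR=/var/run/wpa_supplicant GROUP=netdev
update_config=1

network={
    ssid="YourNetworkName"
    psk="YourPassphrase"
}
```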

The PiCamera V2 module is built on a small PCB (2.5cm x 2.5cm) carrying the Sony IMX219 8-megapixel sensor and the additional circuitry for communicating with the Raspberry Pi. Attached to the board there is the flat cable for the CSI connector, which is very easy to install and does not require any tool. The PiCamera is not enabled by default: it has to be enabled via "sudo raspi-config" from a terminal, which also requires a reboot of the system (the reboot is very fast on this version of the Raspberry Pi!).


Example of usage

This RoadTest has been conducted using mainly the PiCamera V2 module, which is a very interesting and useful add-on for the main Raspberry Pi 3 board. The presence of a camera opens up the possibility of using this small but powerful SBC as a Computer Vision development platform, usable in a huge number of applications: security, surveillance, etc.

In the Linux community, and more broadly among computer vision developers, it is common to adopt the Open Source Computer Vision Library, also known as OpenCV, for operations such as image processing, video processing, face recognition, etc. The OpenCV library is open source and free, but it needs to be compiled on the machine, for the sake of portability. It contains a huge number of functions for computer vision and image/video processing, plus useful data structures for easy handling of such data. The complete installation and compilation of the OpenCV repository takes almost two hours and stresses the Raspberry Pi; it is strongly recommended to mount a cooling system during this operation. At the end of the whole process it is possible to use OpenCV functions and classes from Python 2 and C/C++. I decided to adopt Python 2.7: it is new to me, and this bundle reinforced my curiosity about learning this widely used scripting language.

 

As a newbie, the first thing to do with a camera is to capture a picture with a script, reported as follows:

 

#Library for the PiCamera module
import picamera
from time import sleep

#Instantiate the camera
camera = picamera.PiCamera()

#Capture the image and save it
camera.capture('test.jpg')

#Close the camera
camera.close()

 

This is the basic script for capturing a shot. It should be noted that this script gives no feedback: no image or video of what the camera actually sees is shown on the screen. To better introduce the usage of the OpenCV library, we will make use of the array class. In the next example the capture operation fills a picamera.array, which is in fact a matrix of triplets. Each triplet is a pixel, and it expresses the intensity of the three color components in this encoding (BGR). This time a window named "frame" will appear, showing what the PiCamera sees. When the 'q' key is pressed, the image is captured and saved to 'capture.jpg'.
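Independently of the camera, the BGR triplet layout can be seen with a plain NumPy array. The array below is a synthetic 2x2 "image" built for illustration, not a real capture:

```python
import numpy as np

# A tiny 2x2 "image": each pixel is a (B, G, R) triplet of 8-bit intensities
frame = np.zeros((2, 2, 3), dtype=np.uint8)

# Set the top-left pixel to pure red: in BGR order that is (0, 0, 255)
frame[0, 0] = (0, 0, 255)

print(frame[0, 0])   # the triplet of one pixel: blue, green, red
print(frame.shape)   # (2, 2, 3): rows, columns, channels
```

This is exactly the layout of stream.array after a capture with the 'bgr' format, only with a 240x320 resolution instead of 2x2.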

 

import cv2
import picamera
import picamera.array

with picamera.PiCamera() as camera:
    camera.resolution = (320, 240)
    with picamera.array.PiRGBArray(camera) as stream:
        while True:
            # stream.array now contains the image data in BGR order
            camera.capture(stream, 'bgr', use_video_port=True)
            cv2.imshow('frame', stream.array)

            if cv2.waitKey(1) & 0xFF == ord('q'):
                camera.capture('capture.jpg')
                cv2.destroyAllWindows()
                break

            # reset the stream before the next capture
            stream.seek(0)
            stream.truncate()

 

The next step is to do a little processing, detecting motion between two consecutive frames. The aim is to create a "difference" frame, containing black pixels where nothing changed and green pixels where a meaningful change in brightness (movement) has been detected between the two frames. The time distance between the frames has been set to 0.1s:

 

import cv2
import picamera
import picamera.array
import numpy as np
import time

with picamera.PiCamera() as camera:
    camera.resolution = (320, 240)

    # Wait for the analog gain to settle on a value higher than 1
    while camera.analog_gain <= 1:
        time.sleep(0.1)
    # Now fix the exposure and white balance values
    camera.shutter_speed = camera.exposure_speed
    camera.exposure_mode = 'off'
    g = camera.awb_gains
    camera.awb_mode = 'off'
    camera.awb_gains = g
    camera.video_stabilization = True

    with picamera.array.PiRGBArray(camera) as streamDiff:
        with picamera.array.PiRGBArray(camera) as stream:
            with picamera.array.PiRGBArray(camera) as stream2:
                while True:
                    # stream.array now contains the image data in BGR order
                    camera.capture(stream, 'bgr', use_video_port=True)
                    time.sleep(0.1)

                    # stream2.array now contains the image data in BGR order
                    camera.capture(stream2, 'bgr', use_video_port=True)

                    # streamDiff.array now contains the squared difference of
                    # stream and stream2 (cast to int to avoid uint8 overflow)
                    streamDiff.array = (stream.array.astype(np.int32)
                                        - stream2.array.astype(np.int32)) ** 2

                    # raw estimation of brightness among streamDiff.array's pixels
                    A = 0
                    for i in range(240):
                        for j in range(320):
                            bright = (int(streamDiff.array[i, j, 0])
                                      + int(streamDiff.array[i, j, 1])
                                      + int(streamDiff.array[i, j, 2])) / 3
                            if bright > 50:
                                A = A + 1
                                streamDiff.array[i][j][:] = (0, 255, 0)
                            else:
                                streamDiff.array[i][j][:] = (0, 0, 0)

                    cv2.imshow('Image 1', stream.array)
                    cv2.imshow('Image 2', stream2.array)
                    cv2.imshow('DIFFERENCE', streamDiff.array.astype(np.uint8))

                    # Decision rule for the motion detection
                    TH = 4800
                    if A > TH:
                        print('Motion detected')
                    else:
                        print('Motion not detected')

                    if cv2.waitKey(1) & 0xFF == ord('q'):
                        cv2.destroyAllWindows()
                        break

                    # reset the streams before the next capture
                    stream.seek(0)
                    stream.truncate()
                    stream2.seek(0)
                    stream2.truncate()
                    streamDiff.seek(0)
                    streamDiff.truncate()
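As an aside, the per-pixel Python loop above is quite slow at 320x240; the same green/black mask and changed-pixel count can be computed in one shot with NumPy. This is only a sketch on synthetic frames, independent of the camera; the motion_mask helper and its default thresholds are names introduced here for illustration:

```python
import numpy as np

def motion_mask(frame1, frame2, bright_th=50, count_th=4800):
    """Green/black motion mask and a motion flag for two BGR frames."""
    # squared difference, cast to int to avoid uint8 wrap-around
    diff = (frame1.astype(np.int32) - frame2.astype(np.int32)) ** 2
    # mean of the three channel differences for each pixel
    bright = diff.mean(axis=2)
    moved = bright > bright_th
    mask = np.zeros_like(frame1)
    mask[moved] = (0, 255, 0)        # green where a change was detected
    return mask, int(moved.sum()) > count_th

# Synthetic check: two flat frames where a 100x100 region changes
a = np.full((240, 320, 3), 100, dtype=np.uint8)
b = a.copy()
b[:100, :100] = 200                  # 10000 "moving" pixels
mask, motion = motion_mask(a, b)
print(motion)                        # True: 10000 changed pixels > 4800
```

In the script above, the same call on stream.array and stream2.array would replace the whole double loop.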

 

OpenCV has a large number of functions for computing the matching between two pictures. The cv2.matchTemplate function compares two pictures with a chosen matching algorithm. Among them, Pearson's correlation coefficient can be very useful for detecting changes between two frames: since it is normalized, a detection threshold can be set with a very easy trial-and-error procedure.

 

import cv2
import picamera
import picamera.array
import time

with picamera.PiCamera() as camera:
    camera.resolution = (320, 240)

    # Wait for the analog gain to settle on a value higher than 1
    while camera.analog_gain <= 1:
        time.sleep(0.1)
    # Now fix the exposure and white balance values
    camera.shutter_speed = camera.exposure_speed
    camera.exposure_mode = 'off'
    g = camera.awb_gains
    camera.awb_mode = 'off'
    camera.awb_gains = g
    camera.video_stabilization = True

    with picamera.array.PiRGBArray(camera) as stream:
        with picamera.array.PiRGBArray(camera) as stream2:
            while True:
                # stream.array now contains the image data in BGR order
                camera.capture(stream, 'bgr', use_video_port=True)
                time.sleep(0.05)
                camera.capture(stream2, 'bgr', use_video_port=True)

                # both frames have the same size, so the result is a single
                # normalized correlation coefficient
                match = cv2.matchTemplate(stream.array, stream2.array,
                                          cv2.TM_CCOEFF_NORMED)

                if match[0][0] < 0.995:
                    print('Motion detected! %f' % match[0][0])
                else:
                    print('Frame still...')

                cv2.imshow('Image 1', stream.array)

                if cv2.waitKey(1) & 0xFF == ord('q'):
                    break

                # reset the streams before the next capture
                stream.seek(0)
                stream.truncate()
                stream2.seek(0)
                stream2.truncate()

            cv2.destroyAllWindows()
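For intuition, what TM_CCOEFF_NORMED returns for two same-size frames is essentially Pearson's correlation coefficient between the mean-subtracted images. A pure-NumPy sketch of the formula follows; the pearson helper and the synthetic random frame are introduced here only for illustration:

```python
import numpy as np

def pearson(a, b):
    """Pearson correlation coefficient between two equally sized images."""
    a = a.astype(np.float64) - a.mean()
    b = b.astype(np.float64) - b.mean()
    return (a * b).sum() / np.sqrt((a * a).sum() * (b * b).sum())

# Synthetic frames: random noise and a slightly shifted copy of it
np.random.seed(0)
frame = np.random.randint(0, 256, (240, 320, 3)).astype(np.uint8)
shifted = np.roll(frame, 5, axis=1)   # a small horizontal shift

print(pearson(frame, frame))    # 1.0 for identical frames
print(pearson(frame, shifted))  # well below the 0.995 threshold
```

Because the coefficient is normalized to [-1, 1], the 0.995 threshold in the script above transfers between scenes without retuning for brightness.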

 

As the last part of these examples with the Raspberry Pi 3 Model B, I also introduce a simple face tracking script. The script first collects a sample image of the face to track (which has to be smaller than the final frame, to collect "less noise"); then, once the live video stream is collected, the Raspberry Pi first searches for the reference face in the captured frame and then tracks it. In this case we are interested not only in the matching value, but also in the area where we have a reasonable match. With this approach, the red rectangle will always be located around the best matching point, which will reasonably be the face you want to track.

 

import cv2
import picamera
import picamera.array
import time

with picamera.PiCamera() as camera:
    camera.resolution = (128, 128)

    # Wait for the analog gain to settle on a value higher than 1
    while camera.analog_gain <= 1:
        time.sleep(0.1)
    # Now fix the exposure and white balance values
    camera.shutter_speed = camera.exposure_speed
    camera.exposure_mode = 'off'
    g = camera.awb_gains
    camera.awb_mode = 'off'
    camera.awb_gains = g
    camera.video_stabilization = True

    with picamera.array.PiRGBArray(camera) as stream:
        # Collect the 128x128 reference face: press 't' to take the template
        while True:
            camera.capture(stream, 'bgr', use_video_port=True)
            cv2.imshow('Image 1', stream.array)
            if cv2.waitKey(1) & 0xFF == ord('t'):
                break
            stream.seek(0)
            stream.truncate()

        camera.resolution = (320, 240)
        with picamera.array.PiRGBArray(camera) as stream2:
            while True:
                time.sleep(0.05)
                # stream2.array now contains the image data in BGR order
                camera.capture(stream2, 'bgr', use_video_port=True)

                # search for the 128x128 template inside the 320x240 frame
                # (matchTemplate takes the image first, then the template)
                match = cv2.matchTemplate(stream2.array, stream.array,
                                          cv2.TM_CCOEFF_NORMED)
                min_val, max_val, min_loc, max_loc = cv2.minMaxLoc(match)
                print(max_val)

                # draw a red rectangle around the best matching point
                top_left = max_loc
                h, w = (70, 70)
                bottom_right = (top_left[0] + w, top_left[1] + h)
                cv2.rectangle(stream2.array, top_left, bottom_right,
                              (0, 0, 255), 4)

                cv2.imshow('Image 1', stream.array)
                cv2.imshow('Image 2', stream2.array)
                cv2.imshow('Result', match)

                if cv2.waitKey(1) & 0xFF == ord('q'):
                    break

                # reset the stream before the next capture
                stream2.seek(0)
                stream2.truncate()

            cv2.destroyAllWindows()

 

Summary

In this RoadTest, the Raspberry Pi 3 Model B + PiCamera V2 Module bundle has been tested. The whole bundle is suitable as a beginner platform for a lot of projects (built around the Raspberry Pi itself), but it can be even more useful as a first step into image/video processing for computer vision. The bundle is almost ready to use to start coding and processing data from the camera, and the Python support makes a newbie's life very easy.
