Raspberry Pi 3 Camera Bundle - Review

Table of contents

RoadTest: Raspberry Pi 3 Camera Bundle

Author: ilyasskure

Creation date:

Evaluation Type: Development Boards & Tools

Did you receive all parts the manufacturer stated would be included in the package?: True

What other parts do you consider comparable to this product?: BeagleBone Black, Arduino, Netduino.

What were the biggest problems encountered?: There were no major problems, although using the camera functions from MATLAB was quite challenging.

Detailed Review:

I applied for the Raspberry Pi camera bundle RoadTest so I could test it with my students in our programming class, since I was teaching MATLAB this semester. We made 3D-printed cases for the Pi and the camera. Below are some of the projects we completed using this module.

3D representation of the object

We simply cannot create a "true" 3D model from only a couple of 2-dimensional images; mathematically, there is an infinite number of possible solutions. The more 2D images we have, taken from different projection angles, the better we can approximate the 3D shape. By taking around 40 photos of an object rotating on a fixed axis with a stationary camera module, we can build a reasonable 3D representation of the object. Capturing the 40 photos takes a while, and Zephyr, the image-processing software we are using, also takes a while to create a model from the images. We can use a simple model to shorten that time, but the limited resolution of the Raspberry Pi camera remains a constraint.
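The capture plan above can be sketched in a few lines. This is a minimal Python illustration (the project's actual code is MATLAB), assuming the 40 photos are spaced evenly over one full turn of the turntable:

```python
# Sketch of an evenly spaced capture schedule for turntable photogrammetry.
# Assumes N photos are triggered per full 360-degree revolution.

def capture_angles(n_photos: int) -> list:
    """Return the turntable angle (degrees) at which each photo is taken."""
    step = 360.0 / n_photos
    return [i * step for i in range(n_photos)]

angles = capture_angles(40)
print(f"{len(angles)} photos, one every {angles[1] - angles[0]:.1f} degrees")
# 40 photos, one every 9.0 degrees
```

With 40 photos, adjacent views differ by only 9 degrees, which gives Zephyr plenty of overlap between neighboring images to match features.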


 

Hardware:

  1. Raspberry Pi 3 Model B
  2. Raspberry Pi Camera V2
  3. Ethernet cable
  4. Micro USB cable
  5. Micro SD card
  6. Computer running MATLAB


A simple 12-inch turntable from Amazon serves this purpose. We need it to keep a constant distance for the photographs as the object rotates: the camera and light source stay still while the model moves with the turntable.


 

Software:

  • Our MATLAB code has two main functions.
  • The LiveFeed function connects to the camera, shows a live feed, and displays a figure with two buttons: one to take a picture and one to exit.
  • The ShutdownRasPi function properly disconnects and shuts down the Raspberry Pi without crashing MATLAB.
  • For details, see the following code:

 

function RaspberryPi
LiveFeed
ShutdownRasPi
end

function LiveFeed
clear;
clc;
global x;      % 1 while the live feed should keep running
global y;      % set to 1 by the "Take Photo" button callback
global count;  % number of photos saved so far
clear rpi;
clear cam;

% Connect to the Pi over the network and configure the camera board.
rpi = raspi('rasppi.local','pi','raspberry');
cam = cameraboard(rpi, 'Resolution', '640x480', 'FrameRate', 89, 'Rotation', 180, 'Brightness', 50, ...
    'ExposureMode', 'auto', 'AWBMode', 'auto', 'MeteringMode', 'average', 'VideoStabilization', 'on');
disp('');
disp('Raspberry Pi 3 Model B connected.');

CameraValue = 'Turn Camera Live Feed: ON or OFF? ';
UserCommand = input(CameraValue,'s');
if isequal(UserCommand, 'ON') || isequal(UserCommand, 'on')
    count = 0;
    y = 0;  % no photo requested yet; avoids an undefined check in the loop
    figure('pos',[200 700 100 200])
    uicontrol('Style', 'pushbutton', 'String', 'Take Photo', 'Position', [0 100 100 100], ...
        'Units','normalized', 'Callback', 'global y; y=1;', 'BackgroundColor','black', ...
        'ForegroundColor','white', 'FontSize',12);
    uicontrol('Style', 'pushbutton', 'String', 'Done', 'Position', [0 0 100 100], ...
        'Units','normalized', 'Callback', 'close all; global x; x=0;', 'BackgroundColor','black', ...
        'ForegroundColor','red', 'FontSize',12);
    hold on;
    x = 1;
elseif isequal(UserCommand, 'OFF') || isequal(UserCommand, 'off')
    x = 0;
    close all;
    disp('Please re-run the program and enter one of the choices.');
end
disp('');
disp('Please wait...');

while x
    img = snapshot(cam);   % grab the latest camera frame
    imagesc(img);
    drawnow;
    figure(2)
    set(gca,'XTick',[])
    set(gca,'YTick',[])
    set(gca,'Position',[0 0 1 1])
    if y == 1              % "Take Photo" was pressed
        temp = ['fig',num2str(count),'.png'];
        saveas(gca,temp);
        count = count + 1;
        fprintf('Photo Number: %d \n',count);
        y = 0;
    end
    if count == 50
        disp('This is the max number of photos allowed on 3DF Zephyr Free.');
        close all;
        x = 0;
    end
end
close all;
end

function ShutdownRasPi
global y;
Shutdown = 'Shutdown "Raspberry Pi 3 Model B"? Yes or No?  ';
SHUT = input(Shutdown,'s');
if any(strcmpi(SHUT, {'yes','y'}))       % case-insensitive match
    y = 1;
elseif any(strcmpi(SHUT, {'no','n'}))
    y = 0;
end
if isequal(y,1)
    disp('');
    disp('You chose to shut down the Raspberry Pi');
    h = raspberrypi;
    h.execute('sudo shutdown -h now');   % halt the Pi over SSH
    disp('');
    disp('shutting down...');
    disp('');
    disp('Safe to unplug Raspberry Pi');
else
    disp('');
    disp('You chose NOT to shut down the Raspberry Pi');
    disp('');
    disp('NOTE: Make sure to shut down before unplugging the Raspberry Pi');
end
close;
disp('');
disp('NOTE: Your photos are stored in this root folder');
clear;
end

 

Conclusion:

Where we need to be (Hardware): 

  • We need to create a softbox effect; otherwise, the shadows in overlapping images will wreak havoc on the final model. The lighting in our classroom is harsh and comes from high up, so we need a diffuse fill light that can rotate with the camera.
  • We need a mount for the camera and light that can change angle and possibly height, positioned around the turntable.

Where we need to be (Software):

  • Starting from the defaults of the software we are using, we will certainly adjust the parameters to our needs, but for the presentation we use a fairly simple object without too many details.
    • If you are using Zephyr to capture an object by moving around it (like a statue), we suggest taking several orbits around the object at different heights. This can be seen as a special case of the aerial-photogrammetry application, where the orbit plays the role of the strip. If the acquisition follows strict planning, you can apply the same rules as in the aerial case. Otherwise, as a rule of thumb, 3 orbits of 24 images each will typically suffice.
    • https://www.3dflow.net/technology/documents/photogrammetry-how-to-acquire-pictures/

We need the camera to take a picture, save it to a directory with a unique name for each image, and get ready for the next photo.
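One simple way to get unique names is to pick the lowest unused index in the output directory. This is a minimal Python sketch of that idea (the project's code is MATLAB, and the `fig0000.png` naming pattern here is a hypothetical choice, not the report's exact scheme):

```python
import os
import tempfile

def next_photo_name(directory: str, prefix: str = "fig", ext: str = ".png") -> str:
    """Return the next unused path of the form fig0000.png in `directory`."""
    count = 0
    while True:
        name = f"{prefix}{count:04d}{ext}"
        path = os.path.join(directory, name)
        if not os.path.exists(path):
            return path
        count += 1

# Usage: each saved photo gets the lowest unused index, so restarting the
# program never overwrites earlier captures.
with tempfile.TemporaryDirectory() as d:
    first = next_photo_name(d)
    open(first, "wb").close()          # pretend the camera saved a photo here
    second = next_photo_name(d)
    print(os.path.basename(first), os.path.basename(second))
    # fig0000.png fig0001.png
```

Because the index is re-derived from the files on disk, this also survives a MATLAB restart, unlike a counter held only in a workspace variable.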

 

 

 

Live camera feed that recognizes faces

The goal of the project was to create a live camera feed that recognizes faces using MATLAB, a Raspberry Pi 3, and a Raspberry Pi camera module. The program does this using MATLAB's built-in vision.CascadeObjectDetector and a camera connected to the Raspberry Pi.

 

Introduction

This project used MATLAB, a Raspberry Pi 3, and a camera module for the Raspberry Pi. The main function used in this program is vision.CascadeObjectDetector, which detects objects using the Viola-Jones algorithm. We used it to find faces and mark them as such in the program. Once the program is running, it displays every face in the shot surrounded by a yellow box. The program maintains a continuous feed along with the locations of the recognized faces. If someone new enters the picture, the program recognizes them too and displays a new box around their face. The program impressively tracked all five of our faces at once, and a demonstration of it catching three faces can be seen in the figure below.

[Images: face-detection demo with detected faces boxed]

 

 


The code continuously detects faces in images captured by the Pi camera inside a while loop. We use the vision.CascadeObjectDetector object, which implements the built-in Viola-Jones algorithm to detect different people's faces. Viola-Jones combines rectangular (Haar-like) features evaluated over the pixels of an image, an integral image for fast evaluation, boosted classifiers, and a cascade of stages to decide whether a given region shows a face or another requested object. The step function then returns a matrix of bounding-box positions for the faces found by the detector. Next, the insertObjectAnnotation function draws a rectangle around each face at the given position and labels it. The loop keeps updating in real time for as long as the Pi camera is running.
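The speed of Viola-Jones comes largely from the integral image (summed-area table), which lets any rectangular feature be evaluated with four array lookups. This is a hedged pure-Python sketch of that building block, not MATLAB's actual internals:

```python
def integral_image(img):
    """Summed-area table: ii[y][x] = sum of img[0..y][0..x]."""
    h, w = len(img), len(img[0])
    ii = [[0] * w for _ in range(h)]
    for y in range(h):
        row_sum = 0
        for x in range(w):
            row_sum += img[y][x]
            ii[y][x] = row_sum + (ii[y - 1][x] if y > 0 else 0)
    return ii

def rect_sum(ii, x0, y0, x1, y1):
    """Sum of pixels in the rectangle [x0..x1] x [y0..y1] via 4 lookups."""
    total = ii[y1][x1]
    if x0 > 0: total -= ii[y1][x0 - 1]
    if y0 > 0: total -= ii[y0 - 1][x1]
    if x0 > 0 and y0 > 0: total += ii[y0 - 1][x0 - 1]
    return total

# A two-rectangle Haar-like feature: bright left half minus dark right half.
img = [[9, 9, 1, 1],
       [9, 9, 1, 1]]
ii = integral_image(img)
feature = rect_sum(ii, 0, 0, 1, 1) - rect_sum(ii, 2, 0, 3, 1)
print(feature)  # 36 - 4 = 32
```

A large positive response from a feature like this suggests an edge between a bright and a dark region, which is the kind of evidence the cascade accumulates when deciding whether a window contains a face.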

 

Conclusion

The goal of this project was to detect faces in a live video feed coming off a Raspberry Pi, with MATLAB as the source of the code. The most important part of the program was MATLAB's vision.CascadeObjectDetector. The main problem we encountered was that the detector sometimes mistakes objects that look nothing like a face for faces. We solved this simply by adjusting the detector's parameters in MATLAB. Once fully tuned for face detection, the program proved great at finding faces and more.

For example, this program could be used to count how many people are in a room, or, more likely, serve as the starting point for other programs that require a continuous camera feed rather than a single picture. Applications include automated defense systems, active building security, finding wanted people in specific areas, facial-recognition advertising, and so on. A popular extension of our current application would be coupling this software with facial recognition to advertise to individual interests in malls, restaurants, and of course online. This form of advertising would be very valuable to businesses: imagine going to the mall, using a kiosk, and instantly seeing a welcome sign with your name and the shoes you googled last week. There are many ways to apply our simple algorithm in commercial and private settings.

Lastly, we have demonstrated the elegance and simplicity of this program and how easy it has become to use such powerful technology. MATLAB was first introduced as a tool for matrix mathematics, focused primarily on linear algebra. It has since proven to be an extremely powerful programming tool for algorithms and visualization. The biggest takeaway from this project for all of us was the importance of technology and robotics in today's world, and how we can be a part of it.

 

 

 

Object identification: isolating a yellow cone in a live feed of images from the Pi camera

Introduction

The initial goal of this project was to write code that detects yellow cones in a field and draws a box around each one's location, so that another robot could travel to it and retrieve it. The Raspberry Pi and its camera took continuous pictures to feed into the code. The original plan was to take the range of pixel values of the yellow cone in the latest image from the camera and black out every pixel outside that range. We would then sum the values of all pixels in the blacked-out image: with no yellow cone present, the sum would be zero or very low, while with a cone present it would be significantly higher. In that case we would take the highest and lowest x and y pixel coordinates, draw a square box around the cone, display it on the original image, and then on the camera feed. This original plan did not work with the Raspberry Pi and the camera. The next approach was color segmentation: we implemented code that finds the color that stands out most in the picture and isolates the rest of the image from that color, which results in the bright yellow cone being found. Once found, we saturated it blue, so any recognized yellow cone appears green.
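The original threshold-and-bound plan can be sketched in Python (the project's code is MATLAB, and the RGB bounds for "yellow" here are hypothetical values that would need tuning against real camera images):

```python
def find_yellow_box(img, lo=(180, 180, 0), hi=(255, 255, 120)):
    """Return ((x_min, y_min), (x_max, y_max)) around in-range pixels, or None.

    `img` is a list of rows of (r, g, b) tuples. Pixels outside [lo, hi] are
    treated as "blacked out"; the box spans the extreme coordinates of the rest.
    """
    xs, ys = [], []
    for y, row in enumerate(img):
        for x, (r, g, b) in enumerate(row):
            if lo[0] <= r <= hi[0] and lo[1] <= g <= hi[1] and lo[2] <= b <= hi[2]:
                xs.append(x)
                ys.append(y)
    if not xs:                      # no in-range pixels: no cone in frame
        return None
    return (min(xs), min(ys)), (max(xs), max(ys))

# A tiny 4x4 frame with a 2x2 yellow patch in the lower right corner.
Y, K = (240, 220, 30), (10, 10, 10)
frame = [[K, K, K, K],
         [K, K, K, K],
         [K, K, Y, Y],
         [K, K, Y, Y]]
print(find_yellow_box(frame))  # ((2, 2), (3, 3))
```

Counting the in-range pixels (`len(xs)`) gives the "sum test" the plan describes: near zero with no cone, significantly higher when one is present.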

 

Results and Discussion

The program ran on a separate computer in MATLAB and communicated with the Pi wirelessly via SSH, allowing the Pi to potentially be mounted on a mobile robot. The first line of code sets up this connection. The code correctly identified the color and shape of the cone and marked it visually on the camera feed. Pixels matching the exact parameters of the yellow (brighter) color were treated as separate from the rest of the image: the code took an image, separated the brighter colors from the dimmer ones, and saturated one group.

[image]

[image]

This screenshot shows the identified yellow cone; it appears green because the program shades it blue when the bright yellow color is identified.

[image]

This is the frame the program would flicker to from the first image above.

 

Summary and Conclusion

The program correctly identified the yellow cones in the images fed in through the live feed, since the color parameters fit the cone exactly, but the displayed image flickered between the original feed image and the new image with the identified cone. To solve this, we tried changing the settings of the Pi camera, but the problem turned out to be in how we wrote the code: it switched between variables too rapidly, and we did not find a fix within the time we had to complete the project.

  The benefit of using color segmentation over hard-coding the pixel values of the cone is that we can now identify multiple cones in a field, which fits better with the original goal of helping the VEX Robotics team in competition. We started from a MathWorks color-identification example, but its color ranges were not as specific as we wanted. The MathWorks code identified five different colors and separated them to determine whether a certain color was present, but not the shape of the object with that color. We modified the code by deleting the extra color identifications so that it only compared and contrasted two colors, allowing for more accurate cone identification.
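The two-color comparison can be sketched as a nearest-reference-color test. This is a minimal Python illustration, with hypothetical reference colors for the cone and the background (the report's actual MATLAB code and thresholds are not reproduced here):

```python
def nearest_of_two(pixel, ref_a, ref_b):
    """Return 'a' if pixel is closer (squared RGB distance) to ref_a, else 'b'."""
    da = sum((p - r) ** 2 for p, r in zip(pixel, ref_a))
    db = sum((p - r) ** 2 for p, r in zip(pixel, ref_b))
    return "a" if da <= db else "b"

# Hypothetical references: cone yellow vs. grass-field green.
CONE = (240, 220, 30)
FIELD = (40, 120, 40)
print(nearest_of_two((230, 210, 50), CONE, FIELD))  # 'a' (cone-like)
print(nearest_of_two((60, 110, 60), CONE, FIELD))   # 'b' (background)
```

Classifying every pixel against just two references, rather than five, is what sharpens the segmentation: each pixel must land in one of two clusters, so cone pixels cannot be absorbed by an unrelated color class.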

 

 

Road Test Summary

This road test was a great opportunity for us to implement real-life projects using the Raspberry Pi and camera module. It helped me show my students what they are capable of doing with MATLAB and the Raspberry Pi. I am quite happy and grateful to have been chosen as a roadtester, and I would personally like to thank element14 for giving us this chance.
