Intel Neural Compute Stick 2 - Review

RoadTest: Intel Neural Compute Stick 2

Author: carmelito

Evaluation Type: Development Boards & Tools

Did you receive all parts the manufacturer stated would be included in the package?: True

What other parts do you consider comparable to this product?: NVIDIA Jetson Nano (https://developer.nvidia.com/embedded/jetson-nano)

What were the biggest problems encountered?: Documentation for the latest version of OpenVINO (2021.2) was not up to the mark, with steps missing for installation on the Raspberry Pi. I have documented the steps for face detection on an image and a video in the review below, so that it is easy to follow along.

Detailed Review:

Firstly, I would like to thank the element14 RoadTest team for selecting my application for this road test. The main reason for applying was to help my dad and the surrounding farms identify a disease called rugose spiralling whitefly, which impacts coconut trees' growth and productivity in terms of bearing coconuts; you can read more about it here: http://nrcb.res.in/documents/Factsheet/whitefly.pdf. Basically, the disease affects the coconut leaves and causes a white, fungus-like layer on them, which turns black in a couple of weeks; this reduces the coconut yield by at least 60%, based on my dad's estimates from last year. As part of an upcoming project on early detection, so that preventive measures can be taken, I plan to use the OpenVINO toolkit in addition to TensorFlow to create a machine learning model that detects whether a coconut tree leaf is affected by the disease.

 

Unboxing

The unboxing experience was good; the Intel Neural Compute Stick 2 came in a well-packaged element14 box with brown paper wrapped around it to protect the blue box in the picture below from any damage.

image

 

Opening the blue box, I was presented with the Intel Neural Compute Stick 2, which is entirely made from aluminium, which I am guessing is a great idea for heat dissipation. In addition, the box also contained a getting started guide with the URL developer.movidius.com/start, but this did not load a page in the browser, which meant that after googling I landed on the following page as the top result: https://movidius.github.io/ncsdk/.

Also, getting the cap off the USB connector required a lot of force, as the cap seems to have two drops of glue on the top, in addition to a locking mechanism below, which I have seen on some regular USB storage drives. This meant I didn't want to take the risk of putting the cap back on the Intel Neural Compute Stick 2.

image

The first thing I did after unboxing the Intel Neural Compute Stick 2 was plug it into my laptop running Ubuntu 20.04 LTS and run dmesg. Here is a screenshot of the output; refer to the usb 1-2 section.

image
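As a quick sanity check alongside dmesg, you can also look for the stick with lsusb. A minimal sketch, assuming the NCS2 enumerates under the Intel Movidius USB vendor ID 03e7 (which is the ID I believe Movidius devices use):

#look for the Movidius Myriad X device on the USB bus
lsusb | grep -i 03e7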

 

Features of the Intel Neural Compute Stick 2

 

The Intel Neural Compute Stick 2 is a plug-and-play development kit for AI inferencing, launched in Q4 2018. It is built on the Intel Movidius Myriad X VPU, which features 16 programmable SHAVE cores plus the Neural Compute Engine, a dedicated hardware accelerator for deep neural network inference, and is the first in its class to include one. The stick offers plug-and-play simplicity on a laptop/desktop or on single board computers like the Raspberry Pi, support for common frameworks, and out-of-the-box sample applications; any platform with a USB port can be used to prototype and operate without a cloud compute dependence. The Intel NCS2 delivers 4 trillion operations per second, an 8x performance boost over the previous generation, putting deep learning prototyping within reach of a laptop, a single board computer, or any platform with a USB port. The 16 SHAVE cores and an ultra-high-throughput intelligent memory fabric together make the Myriad X VPU an industry leader for on-device deep neural networks and computer vision applications.

image

Here are some hardware specifications

  • Processor: Intel Movidius Myriad X Vision Processing Unit (VPU) 4GB
  • Processor Base Frequency: 700 MHz
  • Dimensions: 2.85 in. x 1.06 in. x 0.55 in. (72.5 mm x 27 mm x 14 mm)
  • Supported frameworks: TensorFlow*, Caffe*, Apache MXNet*, Open Neural Network Exchange (ONNX*), PyTorch*, and PaddlePaddle* via an ONNX conversion
  • Connectivity: USB 3.0 Type-A
  • Operating temperature: 0° C to 40° C
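One thing worth noting on the connectivity front: the stick is a USB 3.0 device, but the Raspberry Pi 3 B+ used later in this review only has USB 2.0 ports, so the stick will run at the slower link speed there. A quick way to check the negotiated speed is lsusb in tree mode (480M indicates USB 2.0, 5000M indicates USB 3.0):

#show the USB topology along with negotiated link speeds
lsusb -t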

 

For software, the Intel Neural Compute Stick 2 supports the Intel Distribution of OpenVINO toolkit.

 

  • The Intel Distribution of OpenVINO Toolkit is used to develop multiplatform computer vision solutions, from smart cameras and video surveillance to robotics, transportation, and more. It lets you develop applications and solutions that emulate human vision. Based on convolutional neural networks (CNNs), the toolkit extends workloads across Intel hardware (including accelerators) and maximizes performance.
  • The OpenVINO toolkit is a free download for developers and data scientists to fast-track the development of high-performance computer vision and deep learning for vision applications. The kit enables deep learning on hardware accelerators and easy heterogeneous execution across multiple types of Intel platforms.
  • This includes the Intel Deep Learning Deployment Toolkit with a Model Optimizer and Inference Engine, along with optimized computer vision libraries and functions for OpenCV and OpenVX.
  • This comprehensive toolkit supports a full range of vision solutions, speeding computer vision workloads, streamlining deep learning deployments, and enabling easy, heterogeneous execution across Intel platforms from edge to cloud. In combination with Intel's diverse AI portfolio, the OpenVINO toolkit provides the power to scale computer vision solutions; the wide range of advanced silicon allows solution providers to match the performance, cost, and power efficiency required at any node in an AI architecture.

At the time of writing this road test, the version of OpenVINO I am using is 2021.2.

 

Datasheet link - https://www.intel.com/content/dam/support/us/en/documents/boardsandkits/neural-compute-sticks/NCS2_Datasheet-English.pdf

OpenVINO -https://docs.openvinotoolkit.org/latest/documentation.html

reference: https://ark.intel.com/content/www/us/en/ark/products/140109/intel-neural-compute-stick-2.html

 

To start, I flashed the latest version of Raspbian (2021-01-11-raspios-buster-armhf.img) onto a 16GB SD card for the Raspberry Pi 3 B+.

 

Setting up OpenVINO on the Raspberry Pi 3 B+

I tried to follow this link: https://docs.openvinotoolkit.org/latest/openvino_docs_install_guides_installing_openvino_raspbian.html, but it seems the steps mentioned there are not sufficient to get the Intel Neural Compute Stick 2 working with an image for face detection, so I have detailed the steps below based on the command-line history on my Pi.

#Update the Raspberry Pi and install cmake
sudo apt-get update
sudo apt-get upgrade
sudo apt-get install cmake
cmake --version
#In my case, when writing this roadtest, the version of cmake was 3.13.4

 

#Download the OpenVINO package
cd Downloads
wget  https://storage.openvinotoolkit.org/repositories/openvino/packages/2021.2/l_openvino_toolkit_runtime_raspbian_p_2021.2.185.tgz
#create a folder to unpack the tar in the opt folder
sudo mkdir -p /opt/intel/openvino
sudo tar -xf l_openvino_toolkit_runtime_raspbian_p_2021.2.185.tgz --strip 1 -C /opt/intel/openvino
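Before moving on, it is worth confirming the package unpacked where we expect it; the setupvars.sh script used in the next step should be present:

#confirm the toolkit unpacked correctly
ls /opt/intel/openvino/bin/setupvars.sh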

 

 

#Set up the environment variables
source /opt/intel/openvino/bin/setupvars.sh
echo "source /opt/intel/openvino/bin/setupvars.sh" >> ~/.bashrc
#Set up the USB rules for the Intel Neural Compute Stick 2
sudo usermod -a -G users "$(whoami)"
#Log out and back in (or reboot) so the group change takes effect
#Activate Intel Neural Compute Stick 2 USB usage
sh /opt/intel/openvino/install_dependencies/install_NCS_udev_rules.sh

 

Now plug your Intel Neural Compute Stick 2 into the Raspberry Pi.

#create a build folder
cd ~
mkdir build
cd build
#Run cmake to prepare the build files
cmake -DCMAKE_BUILD_TYPE=Release -DCMAKE_CXX_FLAGS="-march=armv7-a" /opt/intel/openvino/deployment_tools/inference_engine/samples/cpp
#To check that the Intel Neural Compute Stick is connected to the Pi and all is good
make -j2 hello_query_device
./armv7l/Release/hello_query_device

Here is a screenshot of the output of the above command; if MYRIAD is listed among the available devices, all is good and you are ready to move on to the next step.

image

Setting up and running the Face Detection example in an image

 

#Clone the Open Model Zoo repo, which has the examples we will run below
cd ~
git clone --depth 1 https://github.com/openvinotoolkit/open_model_zoo
cd open_model_zoo/tools/downloader
python3 -m pip install -r requirements.in
#Now use the downloader.py script to get the demo model for face detection
#(run it from the build folder, so the model lands under ~/build/intel/ where the demo command below expects it)
cd ~/build
python3 ~/open_model_zoo/tools/downloader/downloader.py --name face-detection-adas-0001
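The downloader drops models into an intel/ folder relative to where it is run, so after the step above the face detection model (the .xml topology plus the .bin weights that make up the OpenVINO IR) should be here:

#the model files the demo below needs
ls ~/build/intel/face-detection-adas-0001/FP16/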

 

Download an image with multiple faces (one with a single face should also do the trick) and run the command below. In my case I downloaded an image with multiple expressions from a free photos site, https://www.pexels.com/photo/collage-photo-of-woman-3812743/, and uploaded it to the Downloads folder of the Pi via an FTP client (FileZilla). I have also attached the image below the road test in case you want to try the same one.

 

./armv7l/Release/object_detection_sample_ssd -m /home/pi/build/intel/face-detection-adas-0001/FP16/face-detection-adas-0001.xml -d MYRIAD -i ~/Downloads/photo3812743.jpeg

Here is the screenshot of the above command, to the left is the image used.

image

An out_0.bmp file should now have been created in the build folder; here is a screenshot after I FTP'ed it over to my laptop from the Pi.

 

image
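If you would rather not set up an FTP client, scp from the laptop works just as well for pulling the result off the Pi; a quick sketch, assuming your Pi is reachable as raspberrypi.local (adjust the user/hostname to your setup):

#run this from the laptop to copy the output image out of the Pi's build folder
scp pi@raspberrypi.local:build/out_0.bmp .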

 

Setting up and running the Face Detection example for a Video

For the face detection example on a video, we need to use object_detection_demo, which lives in the demos folder of open_model_zoo, so I created another build folder to prepare and make the build files and then ran the example. Run the following commands:

 

cd ~
mkdir build2
cd build2
cmake -DCMAKE_BUILD_TYPE=Release -DCMAKE_CXX_FLAGS="-march=armv7-a" ~/open_model_zoo/demos
make -j2 object_detection_demo

 

I downloaded a video with a few faces moving in it from a free-to-use video site, https://www.pexels.com/video/video-of-people-walking-855564/ (video by Pixabay), and then FTP'ed the video to the Downloads folder of the Pi. I have attached the sample video to the roadtest below just in case you would like to use the same one. Alternatively, you can also get the sample videos from the Intel repos on GitHub using wget, for example https://github.com/intel-iot-devkit/sample-videos/blob/master/face-demographics-walking.mp4; these are the videos that you see in the video tutorials by Intel.
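If you go the GitHub route, make sure to grab the raw file rather than the HTML page; a quick sketch using GitHub's standard raw/ URL pattern:

cd ~/Downloads
#use the raw file URL so wget fetches the video itself, not the GitHub page around it
wget https://github.com/intel-iot-devkit/sample-videos/raw/master/face-demographics-walking.mp4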

 

./armv7l/Release/object_detection_demo -m /home/pi/build/intel/face-detection-adas-0001/FP16/face-detection-adas-0001.xml -i ~/Downloads/VideoOfPeopleWalking.mp4 -at ssd -d MYRIAD

Here is a picture of the command output. Note that you will have to run this command physically on the Pi or via VNC, and not over ssh, since the demo opens a display window to render the video.

image

Here is the video output

In a similar fashion, I also tried the vehicle detection and pedestrian detection examples on a video; a sketch of the steps is below.
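I have not reproduced my full command history for those two, but the flow mirrors the face detection example. A sketch, assuming the Open Model Zoo model names vehicle-detection-adas-0002 and pedestrian-detection-adas-0002 (run downloader.py with --print_all to confirm the exact names) and a placeholder video file:

#download the models (run from ~/build2 so they land next to the demo build)
cd ~/build2
python3 ~/open_model_zoo/tools/downloader/downloader.py --name vehicle-detection-adas-0002
python3 ~/open_model_zoo/tools/downloader/downloader.py --name pedestrian-detection-adas-0002
#then point object_detection_demo at the corresponding .xml and your own video (cars.mp4 is a placeholder)
./armv7l/Release/object_detection_demo -m ~/build2/intel/vehicle-detection-adas-0002/FP16/vehicle-detection-adas-0002.xml -i ~/Downloads/cars.mp4 -at ssd -d MYRIAD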

 

Text detection in an image

To detect text in an image, use the following commands. For the image, I took a picture of the Raspberry Pi 3 Model B+ box and the getting started leaflet that came in the Intel Neural Compute Stick 2 box.

cd ~/build2
make -j2 text_detection_demo
python3 ~/open_model_zoo/tools/downloader/downloader.py --name text-detection-0003
python3 ~/open_model_zoo/tools/downloader/downloader.py --name text-recognition-0012

 

Here is the image used, which I took on my mobile and uploaded to the Pi 3 B+.

image

 

./armv7l/Release/text_detection_demo -m_td /home/pi/build2/intel/text-detection-0003/FP16/text-detection-0003.xml -m_tr /home/pi/build2/intel/text-recognition-0012/FP16/text-recognition-0012.xml -i ~/Downloads/TextRecog.jpg -d_tr MYRIAD -d_td MYRIAD -r

This command took over 25 seconds to run; here is a screenshot of the output, showing the text detected.

image

 

In addition, in the next couple of weeks I will post another blog with the progress of my project on identifying the disease on coconut leaves, which I described in the introduction of the roadtest.

 

Conclusion

I wanted to start off by saying that this has been one of the most interesting roadtests I have done. Based on the demos I tried using the Raspberry Pi 3 Model B+, the possibilities for putting together projects that run inference applications at the edge are limitless. For a quick peek at what is possible, my suggestion is to check out the video series at https://software.intel.com/content/www/us/en/develop/hardware/neural-compute-stick/training.html. In addition, since the stick has a USB interface and a small form factor, it is easy to use with multiple devices/operating systems.

 

For the price-to-performance ratio, I had to give the Intel Neural Compute Stick 2 a 10/10, because I think $69 is a great price for such a device; based on the number of projects posted on the internet and on the element14 and hackster.io sites, this is a great device for prototyping and familiarizing yourself with AI on the edge, especially if you are on a budget.

 

Now, with respect to documentation and demo videos/software, there is a lot of it on the Intel, OpenVINO, and GitHub sites. But when I tried the latest version of OpenVINO (2021.2) with the latest version of Raspbian (2021-01-11-raspios-buster), I ran into some issues getting the demo examples up and running; the forums were helpful in answering most of them. I have documented the commands with screenshots in the roadtest, so that if someone comes across this post it will be easy to follow along. This meant I had to drop some points on "demo software was of good quality", and gave it a 7/10. I also plan to try the same steps on my friend's Intel NUC 8th Gen running Ubuntu 20.04 LTS when I get my hands on it, and report back.

 

In addition, my apologies for posting the road test a few weeks late; I had a medical emergency at home that I had to attend to.

 

 

Anonymous
  • Nice review. I too went with running OpenVINO Release 2021.2 and ran into some issues getting some of the examples running on the Raspberry Pi 4. I did not find the forums that useful, although I do have many posts there as well as on the GitHub repos. I was directed to use the older 2020 release instead, but was determined to use the 2021.2 release; the Open Model Zoo demos had enough differences that I did not want to flip back and forth. However, for running with a RealSense camera, the 2020 version is required, since they have not updated the tool to support OpenVINO 2021. There is still more work to do on my end though.