Author: rscasny | Date Created: 19 Aug 2020 9:48 PM | Last Updated: 12 Oct 2021 5:36 PM
AI Prototyping on the Edge with the Intel Neural Compute Stick 2

IoT applications typically include any application that monitors a physical environment. Examples include airplane telemetry, wildlife tracking, controlling blinds via voice commands, and surveillance using drones. Many such applications require Artificial Intelligence (AI) capabilities, including image, audio, and video analysis. AI is an umbrella term, and these AI systems are trained using Machine Learning Algorithms (MLAs).

 

The term "Edge AI" is loaned from edge computing, which implies computation closer to the data source. In AI terms, it generally means any action that happens outside data centers or bulky computers, such as drones, cellphones, and autonomous vehicles. These Edge AI devices have diverse physical size and are designed or supported by multiple vendors. In this article, we will focus on pocket-sized platforms that individuals and small companies purchase and use.

 

In such Edge use cases, processing latency dictates that data not be dispatched to the cloud; instead, algorithms must run locally with sufficient computing power. Edge devices such as vehicles or drones often lack that computational power on their own, which calls for dedicated hardware like the Intel® Neural Compute Stick 2 (NCS2), employed as a neural network inference accelerator that provides the additional performance.



 

Figure 1: Intel® Neural Compute Stick 2

 

The Movidius VPU is chiefly designed to execute AI workloads built on inference, that is, already-trained models. An NVIDIA GPU serves the same function and can additionally handle training. The choice thus depends on whether the device under consideration will work in execute-only mode or also needs to refine or re-train its models. Both are valid options, as long as such tasks are completed within a reasonable period.

 

In a typical work cycle, developers train AI models and transfer the results to the NCS2, which is connected to a low-cost host computer such as a Raspberry Pi dedicated to its immediate task. Intel-provided use case examples include image classification, object detection, and motion detection. AI can even help stabilize video.

 

Processing on the edge device offers the significant advantage of avoiding the time and cost of uploading streams of data, such as photos of every vehicle that drives past a security gate. Computer vision on the device reads the number plate characters and checks them locally; the outcome either raises the boom gate (authorized vehicle) or keeps it closed (unauthorized or unwelcome entity). With a connected device, only updates to the registered number plate database (a compact dataset) need to be pushed to the edge. If business requirements call for log entries or secured areas, a compact data log can be transmitted from the edge device back to the server, while the expensive processing is done locally on the NCS2.

 

AI Prototyping


The Intel® Movidius™ Myriad™ X VPU powers the Intel NCS2 and is the first VPU to be equipped with a neural compute engine, a dedicated hardware neural network inference accelerator that delivers additional performance. Paired with the Intel Distribution of the OpenVINO toolkit, which supports a broad range of networks, the Intel NCS2 provides developers with exceptional prototyping flexibility.

 

Working with Neural Compute Stick 2

With just a laptop and the Intel® NCS2, developers can get AI and computer vision applications up and running in minutes. The Intel® NCS2 runs from a standard USB 3.0 port and needs no additional hardware, enabling users to seamlessly convert PC-trained models and then deploy them natively to a broad range of devices. Internet or cloud connectivity becomes unnecessary.

 

Intel® has simplified project deployment on embedded devices through its OpenVINO toolkit, which is designed to profile, tune, and then deploy convolutional neural networks. It targets applications that need real-time inference at low power. This software toolset simplifies deployment across various Intel AI solutions and supports models in ONNX, Caffe, TensorFlow, and MXNet formats.

 

OpenVINO, the primary development toolkit for the NCS2 and other Intel hardware, allows the development and deployment of machine vision solutions that deliver high inference speed and accuracy. OpenVINO combines camera processing, computer vision acceleration tools, and optimized deep learning computation for heterogeneous execution environments. This means that CNN-based solutions built with the toolkit can maximize performance by spreading their workloads across Intel hardware (including CPUs, GPUs, FPGAs, VPUs, and IPUs) using a single standard API. Compared with the older NCSDK, OpenVINO also allows CNN-based inference at the edge, but with better pre-optimized kernels and calls to the OpenCV API.

The OpenVINO development workflow depicted in Figure 2 starts with training a CNN model in one of the machine learning (ML) libraries. The Model Optimizer is then used to produce the Intermediate Representation (IR) of the model graph. The IR consists of two files: a topology description in XML format and a binary file containing the model weights. The Inference Engine reads and loads the IR and runs inference, providing a unified API that spans multiple Intel platforms. User applications integrate this API to execute deep learning inference on the model IR.


Figure 2: The workflow diagram of the OpenVINO toolkit
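
As a minimal sketch of the deployment end of this workflow (the file names face-detection.xml, face-detection.bin, and image.jpg are placeholders, not files shipped with this article), the OpenVINO build of OpenCV can load an IR and run inference on the NCS2 through the DNN module:

import cv2

# Load the IR produced by the Model Optimizer (placeholder file names)
net = cv2.dnn.readNet("face-detection.xml", "face-detection.bin")

# Route execution through the Inference Engine and target the NCS2 (Myriad VPU)
net.setPreferableBackend(cv2.dnn.DNN_BACKEND_INFERENCE_ENGINE)
net.setPreferableTarget(cv2.dnn.DNN_TARGET_MYRIAD)

# Prepare an input blob and run a forward pass
frame = cv2.imread("image.jpg")
blob = cv2.dnn.blobFromImage(frame, size=(300, 300))  # input size depends on the model
net.setInput(blob)
detections = net.forward()
print(detections.shape)

The same pattern works for any IR pair produced by the Model Optimizer; only the input size and the parsing of the output change from model to model.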

 

Getting Started with the Neural Compute Stick 2 and Inference on the Raspberry Pi

We will now learn how to use the OpenVINO toolkit in conjunction with OpenCV for faster deep learning inference on a Raspberry Pi. TinyYOLO, a compact version of the You Only Look Once (YOLO) deep learning model, is used as an example. This model is heavy for a Raspberry Pi on its own, but with the Neural Compute Stick 2 we can achieve a much better frame rate than with the Raspberry Pi alone.

We need a Raspberry Pi 4 Model B, a USB cable, and the Neural Compute Stick 2. A Raspberry Pi camera is optional and only needed if you are not loading video from disk.


Figure 3: Intel NCS2 on Raspberry Pi 4

 

 

Let us first install a few dependencies required by OpenCV and OpenVINO:

 

$ sudo apt-get update && sudo apt-get upgrade

$ sudo apt-get install build-essential cmake unzip pkg-config

 

Next, install a selection of image and video I/O libraries; these are needed for working with image and video files:

 

$ sudo apt-get install libjpeg-dev libpng-dev libtiff-dev

$ sudo apt-get install libavcodec-dev libavformat-dev libswscale-dev libv4l-dev

$ sudo apt-get install libxvidcore-dev libx264-dev

$ sudo apt-get install libgtk-3-dev

$ sudo apt-get install libcanberra-gtk*

 

Next, we need two packages which contain numerical optimizations for OpenCV:

 

$ sudo apt-get install libatlas-base-dev gfortran

 

Run the following command to install the Python 3 headers:

 

$ sudo apt-get install python3-dev

 

Install OpenVINO's optimized OpenCV on the Raspberry Pi by downloading the Raspbian runtime package:

 

$ wget https://download.01.org/opencv/2020/openvinotoolkit/2020.1/l_openvino_toolkit_runtime_raspbian_p_2020.1.023.tgz

 

We will now unpack the archive and rename the extracted folder for better readability:

 

$ tar -xf l_openvino_toolkit_runtime_raspbian_p_2020.1.023.tgz

$ mv l_openvino_toolkit_runtime_raspbian_p_2020.1.023 openvino

 

Configure OpenVINO for use with the Raspberry Pi:

 

$ nano ~/.bashrc

Add the following lines to the end of the .bashrc file and save it:

# OpenVINO

source ~/openvino/bin/setupvars.sh

 

Close the editor and source the file:

 

$ source ~/.bashrc

 

Next, we need to add the current user to the Raspbian users group:

 

$ sudo usermod -a -G users "$(whoami)"

 

Reboot the Pi:

 

$ sudo reboot

 

Reopen the terminal and set the USB rules:

 

$ cd ~

$ sh openvino/install_dependencies/install_NCS_udev_rules.sh

 

It is good practice to create a virtual environment so the project's libraries stay isolated from the system-wide installation. We will now create a virtual environment for this OpenVINO project:

 

$ wget https://bootstrap.pypa.io/get-pip.py

$ sudo python3 get-pip.py

$ sudo pip install virtualenv virtualenvwrapper

$ sudo rm -rf ~/get-pip.py ~/.cache/pip

$ nano ~/.bashrc

 

Include the following lines in the .bashrc file:

 

# virtualenv

export WORKON_HOME=$HOME/.virtualenvs

export VIRTUALENVWRAPPER_PYTHON=/usr/bin/python3

source /usr/local/bin/virtualenvwrapper.sh

VIRTUALENVWRAPPER_ENV_BIN_DIR=bin

 

We will now source the file:

 

$ source ~/.bashrc

 

We will now create a virtual environment and name it NCS2. The following command creates a Python 3 virtual environment:

 

$ mkvirtualenv NCS2 -p python3

 

Activate this virtual environment and install the libraries needed for this project:

 

$ workon NCS2

$ pip install numpy

$ pip install "picamera[array]"

$ pip install imutils

$ pip install pillow

 

After the setup is complete, run the following command to prepare the Movidius NCS2 environment with OpenVINO:

 

$ source ~/openvino/bin/setupvars.sh
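
At this point you can quickly confirm that the stick is visible to OpenVINO by listing the available inference devices from Python (a minimal check, assuming the 2020.1 runtime installed above); "MYRIAD" should appear in the output when the NCS2 is plugged in and the udev rules are installed:

from openvino.inference_engine import IECore

# List the devices the Inference Engine can see;
# "MYRIAD" indicates the Neural Compute Stick 2 is detected.
ie = IECore()
print(ie.available_devices)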

 

You can download the project source code and the model from here.

 

Unzip the file after downloading and open a terminal. Navigate to the directory containing ncs2.py and type the following in the terminal:

 

$ cd /home/pi/Downloads/ncs2

$ workon NCS2

$ source ~/openvino/bin/setupvars.sh

$ python ncs2.py


Figure 4: Image detection on Raspberry Pi

 

The video is loaded from disk and object detection begins. This video is included in the project download. You can, however, use your own video; to do so, provide the video file location in line 37 of the source file:

 

vs = cv2.VideoCapture("/home/pi/Downloads/ncs2/videos/vid.mp4")

 

If you prefer to use a camera instead of a video file, set cam = True in line 15 of the source file. The USB camera is the default. If you prefer the Raspberry Pi camera, comment out line 31 and uncomment line 32 of the source file:

 

vs = VideoStream(src=0).start()  #USB CAMERA

#vs = VideoStream(usePiCamera=True).start() # RASPBERRY PI CAMERA

 

We can achieve a frame rate of 5 to 6 frames per second (fps) on a video loaded from disk, and 10 to 12 fps when a USB camera is used. Without the Neural Compute Stick 2, the frame rate is noticeably slower, not even reaching 1 fps. If you want to verify these numbers on your own setup, a simple frame counter around the processing loop is enough, as sketched below.
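
The following is a generic measurement sketch, not part of the provided ncs2.py; it reuses the video path from above and leaves a placeholder where the TinyYOLO inference call would go:

import time
import cv2

vs = cv2.VideoCapture("/home/pi/Downloads/ncs2/videos/vid.mp4")
frames = 0
start = time.time()
while True:
    ok, frame = vs.read()
    if not ok:
        break
    # ... run the TinyYOLO inference on `frame` here ...
    frames += 1
elapsed = time.time() - start
print("Approx. FPS: %.1f" % (frames / elapsed))
vs.release()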

 

Conclusion

Intel® has also introduced a complete AI development kit, combining an Intel® Core™ processor, an Intel® Movidius™ Myriad™ X Vision Processing Unit (VPU), and integrated Intel graphics for high-performance, low-power AI workloads. These three hardware engines can run diverse AI workloads and together deliver substantial raw AI capability for contemporary PCs. The kit comprises an Intel® NUC pre-loaded with Windows 10, AI development tools, and code samples to help developers expedite new AI applications. Tutorials are included.

Top Comments

  • gpolder, over 4 years ago (+2): I would use the NCS2 to change the agricultural world. For instance, early disease detection in field and greenhouse crops.
  • DAB, over 4 years ago (+2): Could be interesting to look at spectral processing of agriculture and vegetation.
  • albertabeef, over 4 years ago (+1): Great overview, thanks.