In a recent comment about the upcoming PYNQ-Z2: Embedded Vision Workshops, beacon_dave referred to a project by Jeff Johnson that used an Intel Movidius NCS with a PYNQ-Z1: Setting-up-the-PYNQ-Z1-for-the-Intel-Movidius-Neural-Compute-Stick.
I roadtested a PYNQ-Z2 last year (PYNQ-Z2 Dev Board: Python Productivity for Zynq - Review) and I'm planning to attend the workshops to learn how to build custom overlays for the PYNQ-Z2. More recently I've been using an Ultra96-v2, also with PYNQ, so my PYNQ-Z2 has been offline for quite a while. I decided that it would be good to dust it off and get it set up with the new v2.5 image in preparation for the workshops.
I have the original Intel Movidius Neural Compute Stick (NCS) that I've used with Raspberry Pis, but I've never tried it with the PYNQ-Z2. This looked like a fun exercise and a good way to check out my PYNQ-Z2 setup.
The PYNQ-Z1 and PYNQ-Z2 are reasonably equivalent in terms of the FPGA, memory, and peripherals. The primary differences are in the audio processing and the GPIO pinouts, so I didn't expect to need to port any of the examples. My main concern was that, because this project is two years old, there would probably be software incompatibilities to resolve.
Fred27 did a great blog post on setting up the PYNQ-Z2 for the workshops so I won't repeat any of that: PYNQ-Z2 - Pre-workshop setup.
The setup for the NCS is very straightforward based on Jeff's great documentation.
- Install the dependencies (if installing from the PYNQ terminal you don't need "sudo" because you are logged in as root); only the dependencies that are not already included in the PYNQ image need to be installed
apt-get install -y libprotobuf-dev libleveldb-dev libsnappy-dev
apt-get install -y libopencv-dev libhdf5-serial-dev
apt-get install -y protobuf-compiler byacc libgflags-dev
apt-get install -y libgoogle-glog-dev liblmdb-dev libxslt-dev
- Create workspace directory in /home/xilinx
- cd /home/xilinx
- mkdir workspace
- cd workspace
- Install the NC SDK (in API-only mode)
- git clone https://github.com/movidius/ncsdk
- cd ncsdk/api/src
- make install
- Install the NC App Zoo
- git clone -b ncsdk1 https://github.com/movidius/ncappzoo
The one stumbling block that I encountered, as expected, is branch matching. The GitHub repository for the NC SDK has two primary branches (master, which is SDK version 1, and ncsdk2), while the NC App Zoo has three (master, ncsdk1, and ncsdk2). The two SDK versions are not compatible, so the application branch must match the SDK branch you installed. The NC App Zoo supports both the original NCS and the newer NCS 2; the NCS 2 does not use the SDK at all but instead uses the OpenVINO toolkit, which is what the App Zoo's master branch targets, while the other two branches support the two SDK versions. For now I decided to use the ncsdk1 versions to match the NCS examples that I am using, which is why I needed to specify the branch explicitly when cloning the NC App Zoo above.
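With the SDK installed, a quick sanity check is to enumerate the attached sticks from Python. This is a minimal sketch of my own using the NCSDK v1 API (it is not part of Jeff's project):

from mvnc import mvncapi as mvnc

# list the attached NCS devices; an empty list usually means a USB power issue
devices = mvnc.EnumerateDevices()
if len(devices) == 0:
    print("No NCS devices found - check the powered hub connection")
else:
    device = mvnc.Device(devices[0])   # take the first stick
    device.OpenDevice()
    print("Found %d device(s); opened device 0 OK" % len(devices))
    device.CloseDevice()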
pynq-ncs-yolo
Jeff created a YOLO project derived from yoloNCS specifically for the PYNQ-Z1, with a repository on GitHub. The repository contains Python example files and Jupyter notebooks to run YOLO detection with three types of input: single image, HDMI, and webcam. Output goes either to the notebook or to the PYNQ HDMI output port.
- Install pynq-ncs-yolo
- cd /home/xilinx/jupyter_notebooks
- git clone https://github.com/fpgadeveloper/pynq-ncs-yolo.git
- Download the prebuilt graph file
- cd pynq-ncs-yolo
- wget "http://fpgadeveloper.com/downloads/2018_04_19/graph"
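To give an idea of what the notebooks do with that graph file, here is a hedged sketch of a single-image inference using the NCSDK v1 API. The file name dog.jpg and the exact preprocessing (448x448 RGB scaled to 0..1) are my assumptions rather than code lifted from the repository:

import numpy as np
import cv2
from mvnc import mvncapi as mvnc

device = mvnc.Device(mvnc.EnumerateDevices()[0])
device.OpenDevice()

# load the compiled YOLO graph downloaded above
with open('graph', mode='rb') as f:
    graph = device.AllocateGraph(f.read())

# YOLO expects a 448x448 RGB frame with pixel values scaled to 0..1
img = cv2.cvtColor(cv2.imread('dog.jpg'), cv2.COLOR_BGR2RGB)
img = cv2.resize(img, (448, 448)).astype(np.float32) / 255.0

graph.LoadTensor(img.astype(np.float16), 'user object')
output, userobj = graph.GetResult()   # raw YOLO output vector
print(output.shape)                   # box decoding is handled by the notebooks

graph.DeallocateGraph()
device.CloseDevice()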
Here is a screenshot of the included notebooks:
Here's a picture of my setup:
I needed to use a powered USB hub to connect the NCS to the PYNQ-Z2. I also use it to attach a Logitech C525 webcam. I am using an Apeman A80 action cam as the HDMI video source. The HDMI output goes to a Dell U2518 2K monitor.
I apologize in advance for not doing a live demo, but I haven't worked out how to capture the HDMI output yet.
Here is a run-through of the HDMI notebook. I have the camera set to 1080p @ 60 fps. My understanding is that the base overlay does not let you set the HDMI input and output resolutions independently: the received input mode is copied to the output, as the sketch below shows.
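For reference, this is roughly how the base overlay's HDMI path is set up in PYNQ v2.x; a sketch rather than the notebook's exact code, but note that the output is configured from hdmi_in.mode, which is why the input resolution carries through:

from pynq.overlays.base import BaseOverlay

base = BaseOverlay("base.bit")
hdmi_in = base.video.hdmi_in
hdmi_out = base.video.hdmi_out

hdmi_in.configure()                 # lock onto the incoming 1080p60 stream
hdmi_out.configure(hdmi_in.mode)   # the output mode is copied from the input
hdmi_in.start()
hdmi_out.start()
hdmi_in.tie(hdmi_out)              # pass frames straight through to the output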
Since YOLO uses a 448x448 input size, you can either cut a 448x448 window out of the input frame or resize the whole frame down to that size. The notebook uses both methods for comparison, as illustrated below.
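Here is a small illustration of the two methods applied to a 1920x1080 frame (my own example, with a stand-in array in place of a captured HDMI frame):

import cv2
import numpy as np

frame = np.zeros((1080, 1920, 3), dtype=np.uint8)   # stand-in for a captured HDMI frame

# Method 1: cut a 448x448 window out of the centre of the frame (no scaling)
y0 = (frame.shape[0] - 448) // 2
x0 = (frame.shape[1] - 448) // 2
cropped = frame[y0:y0 + 448, x0:x0 + 448]

# Method 2: resize the whole frame down to 448x448 (keeps the full field of view)
resized = cv2.resize(frame, (448, 448))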
So the 60 fps input rate is maintained.
Just under 3 fps for classification.
Here is a static capture of the monitor screen.
Confidence level of 68% that I'm a person.
About 10% slower for classification.
Here is another static capture of the monitor screen.
A little less confident - 49%. I did not check for repeatability.
Looks like I can have some fun while waiting for the workshops to start. I'll probably try some of the examples in the NC App Zoo.