VectorBlox SDK Setup
In this part we'll install the VectorBlox SDK and test the example networks on the board. The VectorBlox AI accelerator uses an SDK that leverages the OpenVINO framework to convert floating-point AI networks designed in frameworks such as Keras, TensorFlow 1/2, PyTorch, or ONNX into a quantized version that can run on the FPGA fabric accelerator. As of 13/10/2023 the VectorBlox SDK only supports Ubuntu 16.04/18.04/20.04. WSL can also be used for AI network development.
For this project, WSL2 with the Ubuntu 20.04 WSL image from the Windows Store was used. Some additional software packages have to be installed as shown below.
Using the VectorBlox IP on the PolarFire SoC Video Kit is a multi-step process.
First, an FPGA design has to be programmed into the PolarFire SoC fabric. Next, a compatible Yocto image needs to be flashed to the on-board eMMC or SD card, and the SDK has to be loaded onto the Yocto image. A new project folder is then created using the framework of choice, and the AI network is compiled and converted into a .vnnx file on the host. Finally, if all of the network's ops (instructions) are supported and the conversion of the floating-point network succeeds, the last step is integration with the video pipeline by adding the pre-processing and post-processing steps. Before all that, the SDK has to be installed on a host PC for development.
1. VectorBlox FPGA design
As of Oct 18 2023, Libero 2023.1 with a valid VectorBlox license must be used to compile the reference design. Compilation takes about 40 minutes on a Ryzen 7; Libero compilation for PolarFire is single-threaded.
Clone the repo:
git clone https://github.com/Microchip-Vectorblox/VectorBlox-SoC-Video-Kit-Demo
Then launch Libero and source the MPFS_VIDEO_KIT_VECTORBLOX_DESIGN.tcl file. This creates the VectorBlox Accelerator Libero project. Once the project is compiled and programmed on the board, the next step is to install the SDK on the PolarFire SoC Video Kit.
The board was already pre-programmed with the latest Yocto image as shown in the previous article.
Once the FPGA design is programmed, connect an Ethernet cable to either of the two Gigabit network ports. The board is configured in DHCP mode, so it will automatically lease an IP address.
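Once the board has leased an address, it can be reached over SSH. A minimal sketch, assuming the stock Yocto image's root login; the address below is a placeholder, not the real one:

```shell
# Placeholder address: read the real one from the board's serial console
# with `ip -4 addr show`, or from your router's DHCP lease table.
BOARD_IP=192.168.1.50
SSH_CMD="ssh root@${BOARD_IP}"   # stock Yocto images log in as root
echo "$SSH_CMD"
# then run, e.g.:  $SSH_CMD 'uname -a'
```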
Now we can move on to download the SDK on board.
2. Install VectorBlox SDK on Polarfire Video SOC
At this point we can download and unzip the sample networks from https://vector-blox-model-zoo.s3.us-west-2.amazonaws.com/Releases/ModelZoo/samples_V1000_1.4.4.zip to the root directory.
wget --no-check-certificate https://vector-blox-model-zoo.s3.us-west-2.amazonaws.com/Releases/ModelZoo/samples_V1000_1.4.4.zip
unzip samples_V1000_1.4.4.zip
Download and unzip the VectorBlox SDK from https://github.com/Microchip-Vectorblox/VectorBlox-SDK/archive/refs/tags/release-v1.4.4.1.zip and navigate to the soc-video-c example:
wget --no-check-certificate https://github.com/Microchip-Vectorblox/VectorBlox-SDK/archive/refs/tags/release-v1.4.4.1.zip
unzip release-v1.4.4.1.zip
cd VectorBlox-SDK-release-v1.4.4.1/example/soc-video-c
Two HDMI cables were connected to the PolarFire SoC Video Kit (Rx/Tx). The RX cable was plugged into a laptop which is used as the video source.
To test that the HDMI is working correctly, run
make hdmi
which toggles the frame buffers (for ~1 sec) to verify the HDMI setup.
The demo does not support the camera as a video feed, so only the framebuffer data from the HDMI RX is used.
3. Install VectorBlox SDK on Host
To develop and port AI networks such as ImageNet classifiers, MobileNet, or similar, the SDK must be installed on a host PC.
Install GIT LFS
curl -s https://packagecloud.io/install/repositories/github/git-lfs/script.deb.sh | sudo bash
sudo apt update && sudo apt install git-lfs
git lfs install
If you run into clock-synchronization issues, update the clock as below:
sudo hwclock --hctosys
Installing the VectorBlox SDK requires some additional steps which are not clearly laid out in the GitHub repo.
git clone https://github.com/Microchip-Vectorblox/VectorBlox-SDK
cd VectorBlox-SDK/
chmod +x install_dependencies.sh
chmod +x install_venv.sh
chmod +x setup_vars.sh
Install dependencies first.
./install_dependencies.sh
Next, install the virtual environment. This will download a number of frameworks and take a couple of minutes.
./install_venv.sh
The install_venv.sh script creates the virtual environment, so it only needs to be run once. In subsequent sessions, simply activate the environment as below.
Next start the virtual environment and export the SDK location:
cd vbx_env
source bin/activate
export VBX_SDK=/home/user/VectorBlox-SDK
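For later sessions the whole install does not need to be repeated; only the environment has to be re-activated. A sketch, assuming the SDK was cloned to $HOME/VectorBlox-SDK (adjust the path to your setup):

```shell
# Re-activation for a new shell session (paths are assumptions)
export VBX_SDK="$HOME/VectorBlox-SDK"
# source "$VBX_SDK/vbx_env/bin/activate"   # uncomment once the venv exists
TUTORIALS_DIR="$VBX_SDK/tutorials"          # per-framework conversion scripts
echo "Tutorials live in: $TUTORIALS_DIR"
```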
At this point, one can compile the examples, copy them to the PolarFire SoC Video Kit board, and run them.
The top level SDK folder contains the following folders:
/example
/tutorials
/drivers
/fw
/app_notes
The /fw folder contains the binaries that are used to program the SPI flash.
The /tutorials folder contains example networks for PyTorch, Keras, TF1/TF2, Darknet and ONNX.
The /example folder contains 4 sub-folders:
/python
/sim-c
/soc-c
/soc-video-c
The /python and /sim-c folders contain scripts to simulate the developed networks.
The /soc-c folder is used to run networks on still images.
The /soc-video-c folder uses additional pre- and post-processing steps for individual networks.
The framework does not have any Python support for running networks on the board, so all on-board code currently needs to be developed in C.
4. AI Network on Polarfire Video SOC
Test the installation by switching to the onnx tutorial directory and compiling MNIST:
cd ../tutorials/onnx
chmod +x mnist.sh
./mnist.sh
This will pass the number-7 test image from the folder through the network and print the expected results.
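A successful run leaves a converted model behind. A quick sanity check that the conversion produced a .vnnx artifact (the file name mnist.vnnx is an assumption based on the tutorial's network name):

```shell
# Check for the converted model; mnist.vnnx is an assumed output name
MODEL=mnist.vnnx
if [ -f "$MODEL" ]; then
    echo "conversion produced $MODEL"
else
    echo "no $MODEL found - rerun the tutorial script"
fi
```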
Testing MobileNets
Next, download a test image to the `/home/root` directory and rename it:
wget https://as2.ftcdn.net/v2/jpg/00/97/58/97/1000_F_97589769_t45CqXyzjz0KXwoBZT9PRaWGHRk5hQqQ.jpg
mv 1000_F_97589769_t45CqXyzjz0KXwoBZT9PRaWGHRk5hQqQ.jpg cat.jpg
./run-model ../../fw/firmware.bin ~/samples_V1000_1.4.4/mobilenet-v2.vnnx cat.jpg CLASSIFY
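The same invocation pattern works for the other downloaded samples. A sketch wrapping it in a small (hypothetical) helper; only mobilenet-v2 is confirmed to be in the sample set, and detection networks would need a different post-processing argument than CLASSIFY:

```shell
# Hypothetical helper wrapping the run-model invocation so other
# sample networks can be tried with a single argument.
run_sample() {
    net="$1"
    # On the board, drop the echo to actually execute the command.
    echo "./run-model ../../fw/firmware.bin $HOME/samples_V1000_1.4.4/${net}.vnnx cat.jpg CLASSIFY"
}
run_sample mobilenet-v2
```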
5. VectorBlox-SoC-Video-Kit-Demo
The VectorBlox SDK comes loaded with a demo composed of multiple AI networks. To start the app follow the instructions on the repo by issuing
make overlay
make
./run-video-model
The networks can be switched on the fly by pressing Enter.
[Gallery: Face recognition]
We have now completed the setup of the PolarFire SoC Video Kit: updated the Yocto image, programmed the FPGA fabric with the VectorBlox AI accelerator, installed the VectorBlox SDK on both the FPGA SoC and the host computer, tested the demo networks, and verified network connectivity and the HDMI pipeline. The last step will be to implement a custom network with the VectorBlox SDK. Before we delve into that, we'll take a look at the FPGA design examples that come with the kit.
References
[1] https://github.com/Microchip-Vectorblox/VectorBlox-SoC-Video-Kit-Demo
[2] https://github.com/Microchip-Vectorblox/VectorBlox-SDK/tree/master/example/soc-video-c