RoadTest: Intel Neural Compute Stick 2
Author: jomoenginer
Creation date:
Evaluation Type: Development Boards & Tools
Did you receive all parts the manufacturer stated would be included in the package?: True
What other parts do you consider comparable to this product?: NVIDIA Jetson Nano, Google Coral USB Accelerator.
What were the biggest problems encountered?: The documentation at times did not properly reflect the latest version of OpenVINO. Names used in the examples changed in the newer version of OpenVINO, so the posted examples were no longer valid. Also, installing OpenVINO on an ODROID C4 took a lot of work since the toolkit had to be compiled on the device, which at times took days to complete. There is a Docker example for Raspbian posted on the Intel site, but following the instructions did not result in a working container. Much research, and posting in the Intel Developer forum, was needed to get the container to build properly, but I was still not able to get an example to open a .MP4 video.
Detailed Review:
The Intel Neural Compute Stick 2 (NCS 2) is a USB-based device that can be used with a PC or a single-board computer such as a Raspberry Pi to offload the processing of AI and Machine Learning tasks. With its Intel Movidius Myriad X Vision Processing Unit, a built-in Neural Compute Engine for deep neural network (DNN) inferencing, 16 Streaming Hybrid Architecture Vector Engine (SHAVE) programmable cores, and a rich set of APIs and tools, the NCS 2 is a small, versatile device that can add extra processing power to Edge devices that was not easily attainable previously. The NCS 2 can be deployed for robotic object identification, in small satellites in space, in safety applications that track vehicle and pedestrian traffic, and in other interesting environments.
https://software.intel.com/content/www/us/en/develop/tools/openvino-toolkit/pretrained-models.html
https://docs.openvinotoolkit.org/latest/index.html
https://github.com/openvinotoolkit/openvino/wiki/GettingStarted
https://docs.openvinotoolkit.org/latest/openvino_docs_install_guides_installing_openvino_raspbian.html
https://storage.openvinotoolkit.org/repositories/openvino/packages/2021.2/
https://www.intel.com/content/www/us/en/support/articles/000055220/boards-and-kits.html
https://docs.openvinotoolkit.org/latest/openvino_docs_IE_DG_inference_engine_intro.html
https://docs.openvinotoolkit.org/latest/openvino_docs_IE_DG_Samples_Overview.html
https://docs.openvinotoolkit.org/latest/omz_models_intel_index.html
https://docs.openvinotoolkit.org/latest/omz_models_public_index.html
https://github.com/openvinotoolkit/open_model_zoo
Reference:
Raspbian Version:
PRETTY_NAME="Raspbian GNU/Linux 10 (buster)"
NAME="Raspbian GNU/Linux"
VERSION_ID="10"
VERSION="10 (buster)"
OpenVINO Version: 2021.2
1. Download the Raspbian OpenVINO package and place it in the Downloads folder, or another folder of your choice:
https://storage.openvinotoolkit.org/repositories/openvino/packages/2021.2/l_openvino_toolkit_runtime_raspbian_p_2021.2.185.tgz
NOTE: In this instance, version 2021.2 was used
2. Change into the Downloads folder where the OpenVINO package is located
cd ~/Downloads
3. Create an ‘openvino’ folder under ‘/opt’
Ex:
sudo mkdir -p /opt/intel/openvino
4. Unpack the OpenVINO archive to the openvino folder created previously
sudo tar -xf l_openvino_toolkit_runtime_raspbian_p_2021.2.185.tgz --strip 1 -C /opt/intel/openvino
5. Install ‘CMake’ version 3.7.2 or higher.
sudo apt-get update
sudo apt-get install cmake

$ cmake --version
cmake version 3.13.4

CMake suite maintained and supported by Kitware (kitware.com/cmake).
6. Set the OpenVINO Environment Variables:
source /opt/intel/openvino/bin/setupvars.sh
Optional:
Add the setupvars.sh script to the user's '.bashrc' to set the Environment Variables whenever a terminal is opened
echo "source /opt/intel/openvino/bin/setupvars.sh" >> ~/.bashrc
1. Add the Linux user to the users group
sudo usermod -a -G users "$(whoami)"
2. Run the NCS UDEV Rules script to add the NCS2 to the udev rules
sh /opt/intel/openvino/install_dependencies/install_NCS_udev_rules.sh
Updating udev rules...
Udev rules have been successfully installed.
This should result in a ‘97-myriad-usbboot.rules’ file located under ‘/etc/udev/rules.d/’ with the following content:
SUBSYSTEM=="usb", ATTRS{idProduct}=="2150", ATTRS{idVendor}=="03e7", GROUP="users", MODE="0660", ENV{ID_MM_DEVICE_IGNORE}="1" SUBSYSTEM=="usb", ATTRS{idProduct}=="2485", ATTRS{idVendor}=="03e7", GROUP="users", MODE="0660", ENV{ID_MM_DEVICE_IGNORE}="1" SUBSYSTEM=="usb", ATTRS{idProduct}=="f63b", ATTRS{idVendor}=="03e7", GROUP="users", MODE="0660", ENV{ID_MM_DEVICE_IGNORE}="1"
1. Create a build folder in a location the user has write permission to.
Ex:
mkdir ~/inference_engine_cpp_samples_build && cd ~/inference_engine_cpp_samples_build
2. Run CMake to generate the build files
cmake -DCMAKE_BUILD_TYPE=Release -DCMAKE_CXX_FLAGS="-march=armv7-a" /opt/intel/openvino/deployment_tools/inference_engine/samples/cpp
3. Run make to build the Object Detection Sample
make -j2 object_detection_sample_ssd
NOTE: This will create an ‘./armv7l/Release’ folder under the build folder where the executable is located.
Optional:
a. Build the Query Device Example to ensure the NCS2 is identified properly by the Raspberry Pi
make -j2 hello_query_device
b. Run the Query Device example to confirm the MYRIAD device is identified properly.
./armv7l/Release/hello_query_device
Available devices:
    Device: MYRIAD
    Metrics:
        DEVICE_THERMAL : UNSUPPORTED TYPE
        RANGE_FOR_ASYNC_INFER_REQUESTS : { 3, 6, 1 }
        SUPPORTED_CONFIG_KEYS : [ PERF_COUNT EXCLUSIVE_ASYNC_REQUESTS LOG_LEVEL VPU_MYRIAD_PLATFORM CONFIG_FILE VPU_MYRIAD_FORCE_RESET DEVICE_ID VPU_CUSTOM_LAYERS VPU_PRINT_RECEIVE_TENSOR_TIME VPU_HW_STAGES_OPTIMIZATION MYRIAD_ENABLE_FORCE_RESET MYRIAD_CUSTOM_LAYERS MYRIAD_ENABLE_RECEIVING_TENSOR_TIME MYRIAD_ENABLE_HW_ACCELERATION ]
        SUPPORTED_METRICS : [ DEVICE_THERMAL RANGE_FOR_ASYNC_INFER_REQUESTS SUPPORTED_CONFIG_KEYS SUPPORTED_METRICS OPTIMIZATION_CAPABILITIES FULL_DEVICE_NAME AVAILABLE_DEVICES ]
        OPTIMIZATION_CAPABILITIES : [ FP16 ]
        FULL_DEVICE_NAME : Intel Movidius Myriad X VPU
    Default values for device configuration keys:
        PERF_COUNT : NO
        EXCLUSIVE_ASYNC_REQUESTS : NO
        LOG_LEVEL : LOG_NONE
        VPU_MYRIAD_PLATFORM : ""
        CONFIG_FILE : ""
        VPU_MYRIAD_FORCE_RESET : NO
        DEVICE_ID : ""
        VPU_CUSTOM_LAYERS : ""
        VPU_PRINT_RECEIVE_TENSOR_TIME : NO
        VPU_HW_STAGES_OPTIMIZATION : YES
        MYRIAD_ENABLE_FORCE_RESET : NO
        MYRIAD_CUSTOM_LAYERS : ""
        MYRIAD_ENABLE_RECEIVING_TENSOR_TIME : NO
        MYRIAD_ENABLE_HW_ACCELERATION : YES
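The same check can also be done from Python with the Inference Engine API included in the runtime package (a minimal sketch, assuming the OpenVINO environment variables have been set with setupvars.sh):

import sys

# Available once setupvars.sh has been sourced (it adds the package to PYTHONPATH).
from openvino.inference_engine import IECore

ie = IECore()
# 'MYRIAD' should be listed when the NCS 2 is attached and the udev rules are installed.
print('Available devices:', ie.available_devices)
if 'MYRIAD' not in ie.available_devices:
    sys.exit('NCS 2 (MYRIAD) device not found')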
4. Use the ‘downloader.py’ script to download the face-detection-adas-0001 model from the Open Model Zoo examples.
NOTE: Install of Open Model Zoo is covered later in the review.
a. Create a folder for the models.
mkdir ~/models && cd ~/models
b. Get a list of face-detection examples using the ‘downloader.py’ script.
~/open_model_zoo/tools/downloader/downloader.py --print_all | grep face-detection
face-detection-0200
face-detection-0202
face-detection-0204
face-detection-0205
face-detection-0206
face-detection-adas-0001
face-detection-retail-0004
face-detection-retail-0005
face-detection-retail-0044
c. Download the face-detection-adas-0001 model
NOTE: Use the ‘--precision’ option to specify the precision of the model. In this case it is FP16.
pi@raspberrypi:~/models $ python3 ~/open_model_zoo/tools/downloader/downloader.py --name face-detection-adas-0001 --precision FP16
################|| Downloading face-detection-adas-0001 ||################
========== Downloading /home/pi/models/intel/face-detection-adas-0001/FP16/face-detection-adas-0001.xml
... 100%, 220 KB, 413 KB/s, 0 seconds passed
========== Downloading /home/pi/models/intel/face-detection-adas-0001/FP16/face-detection-adas-0001.bin
... 100%, 2056 KB, 1744 KB/s, 1 seconds passed
d. The model should download into the ‘intel/face-detection-adas-0001/FP16’ folder, relative to where the downloader.py script is run.
NOTE: to specify an output directory, add the following option to the downloader.py script run:
--output_dir DIR
5. Download an image containing people's faces for the example.
Note: ImageMagick can be installed and its ‘mogrify’ tool used to convert image formats, such as from JPEG to BMP.
Ex:
mogrify -format bmp faces.jpeg
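Alternatively, since OpenCV is bundled with the OpenVINO runtime, the conversion can be done with a couple of lines of Python (a small sketch; the 'faces.jpeg' and 'faces.bmp' filenames are just the ones assumed here):

import cv2

# Read the JPEG and rewrite it as BMP; OpenCV infers the format from the file extension.
img = cv2.imread('faces.jpeg')
cv2.imwrite('faces.bmp', img)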
Faces Image used
6. Run the Object Detection example
./armv7l/Release/object_detection_sample_ssd -m /home/pi/models/intel/face-detection-adas-0001/FP16/face-detection-adas-0001.xml -d MYRIAD -i ~/Downloads/faces.bmp
[ INFO ] InferenceEngine:
        API version ............ 2.1
        Build .................. 2021.2.0-1877-176bdf51370-releases/2021/2
        Description ....... API
Parsing input parameters
[ INFO ] Files were added: 1
[ INFO ]     /home/pi/Downloads/faces.bmp
[ INFO ] Loading Inference Engine
[ INFO ] Device info:
        MYRIAD
        myriadPlugin version ......... 2.1
        Build ........... 2021.2.0-1877-176bdf51370-releases/2021/2
[ INFO ] Loading network files:
        /home/pi/models/intel/face-detection-adas-0001/FP16/face-detection-adas-0001.xml
[ INFO ] Preparing input blobs
[ INFO ] Batch size is 1
[ INFO ] Preparing output blobs
[ INFO ] Loading model to the device
[ INFO ] Create infer request
[ WARNING ] Image is resized from (474, 252) to (672, 384)
[ INFO ] Batch size is 1
[ INFO ] Start inference
[ INFO ] Processing output blobs
[0,1] element, prob = 0.999512    (280,83)-(338,162) batch id : 0 WILL BE PRINTED!
[1,1] element, prob = 0.990234    (161,89)-(216,160) batch id : 0 WILL BE PRINTED!
[2,1] element, prob = 0.986816    (40,84)-(97,163) batch id : 0 WILL BE PRINTED!
[3,1] element, prob = 0.980957    (399,83)-(456,162) batch id : 0 WILL BE PRINTED!
[4,1] element, prob = 0.0268555    (322,91)-(365,141) batch id : 0
[5,1] element, prob = 0.0258789    (445,118)-(476,177) batch id : 0
[6,1] element, prob = 0.0249023    (424,84)-(479,150) batch id : 0
[7,1] element, prob = 0.0219727    (353,88)-(393,145) batch id : 0
[8,1] element, prob = 0.0209961    (-6,227)-(28,258) batch id : 0
[9,1] element, prob = 0.0180664    (452,141)-(477,164) batch id : 0
[10,1] element, prob = 0.0170898    (323,140)-(339,160) batch id : 0
[11,1] element, prob = 0.0161133    (313,137)-(342,162) batch id : 0
[12,1] element, prob = 0.0161133    (230,32)-(399,226) batch id : 0
[13,1] element, prob = 0.015625    (295,78)-(324,110) batch id : 0
[14,1] element, prob = 0.015625    (304,78)-(335,104) batch id : 0
[15,1] element, prob = 0.015625    (452,99)-(477,122) batch id : 0
[16,1] element, prob = 0.015625    (451,110)-(477,133) batch id : 0
[17,1] element, prob = 0.015625    (451,130)-(478,152) batch id : 0
[18,1] element, prob = 0.015625    (296,135)-(335,162) batch id : 0
[19,1] element, prob = 0.015625    (455,151)-(477,174) batch id : 0
[20,1] element, prob = 0.015625    (29,196)-(48,230) batch id : 0
[21,1] element, prob = 0.015625    (53,212)-(102,254) batch id : 0
[22,1] element, prob = 0.015625    (165,211)-(217,254) batch id : 0
[23,1] element, prob = 0.015625    (399,158)-(463,283) batch id : 0
[24,1] element, prob = 0.0146484    (324,152)-(340,173) batch id : 0
[25,1] element, prob = 0.0146484    (172,67)-(200,91) batch id : 0
[26,1] element, prob = 0.0146484    (452,88)-(477,111) batch id : 0
[27,1] element, prob = 0.0146484    (31,204)-(77,249) batch id : 0
[28,1] element, prob = 0.0146484    (433,15)-(483,84) batch id : 0
[29,1] element, prob = 0.0146484    (242,130)-(283,180) batch id : 0
[30,1] element, prob = 0.0146484    (-6,175)-(31,225) batch id : 0
[31,1] element, prob = 0.0146484    (-5,196)-(33,244) batch id : 0
[32,1] element, prob = 0.0146484    (280,212)-(331,255) batch id : 0
[33,1] element, prob = 0.0136719    (414,80)-(432,100) batch id : 0
[34,1] element, prob = 0.0136719    (322,134)-(338,151) batch id : 0
[35,1] element, prob = 0.0136719    (456,69)-(479,89) batch id : 0
[36,1] element, prob = 0.0136719    (418,78)-(449,106) batch id : 0
[37,1] element, prob = 0.0136719    (454,79)-(478,101) batch id : 0
[38,1] element, prob = 0.0136719    (305,148)-(334,173) batch id : 0
[39,1] element, prob = 0.0136719    (131,90)-(168,141) batch id : 0
[40,1] element, prob = 0.0136719    (-2,109)-(31,161) batch id : 0
[41,1] element, prob = 0.0136719    (318,135)-(370,175) batch id : 0
[42,1] element, prob = 0.0136719    (433,136)-(485,174) batch id : 0
[43,1] element, prob = 0.0136719    (299,210)-(349,255) batch id : 0
[44,1] element, prob = 0.0136719    (315,219)-(375,255) batch id : 0
[45,1] element, prob = 0.0136719    (-16,183)-(75,266) batch id : 0
[46,1] element, prob = 0.0126953    (179,83)-(193,96) batch id : 0
[47,1] element, prob = 0.0126953    (458,120)-(474,138) batch id : 0
[48,1] element, prob = 0.0126953    (459,133)-(474,150) batch id : 0
[49,1] element, prob = 0.0126953    (198,141)-(216,159) batch id : 0
[50,1] element, prob = 0.0126953    (8,195)-(24,219) batch id : 0
[51,1] element, prob = 0.0126953    (41,204)-(60,224) batch id : 0
[52,1] element, prob = 0.0126953    (57,79)-(88,106) batch id : 0
[53,1] element, prob = 0.0126953    (175,79)-(199,110) batch id : 0
[54,1] element, prob = 0.0126953    (184,79)-(208,109) batch id : 0
[55,1] element, prob = 0.0126953    (441,139)-(469,163) batch id : 0
[56,1] element, prob = 0.0126953    (56,148)-(88,172) batch id : 0
[57,1] element, prob = 0.0126953    (180,148)-(210,173) batch id : 0
[58,1] element, prob = 0.0126953    (287,147)-(314,174) batch id : 0
[59,1] element, prob = 0.0126953    (323,156)-(342,186) batch id : 0
[60,1] element, prob = 0.0126953    (-3,200)-(17,227) batch id : 0
[61,1] element, prob = 0.0126953    (128,199)-(151,234) batch id : 0
[62,1] element, prob = 0.0126953    (153,196)-(173,231) batch id : 0
[63,1] element, prob = 0.0126953    (310,203)-(334,237) batch id : 0
[64,1] element, prob = 0.0126953    (2,212)-(32,239) batch id : 0
[65,1] element, prob = 0.0126953    (152,204)-(200,247) batch id : 0
[66,1] element, prob = 0.0126953    (128,217)-(176,252) batch id : 0
[67,1] element, prob = 0.0126953    (137,226)-(166,253) batch id : 0
[68,1] element, prob = 0.0126953    (444,66)-(478,127) batch id : 0
[69,1] element, prob = 0.0126953    (19,85)-(75,150) batch id : 0
[70,1] element, prob = 0.0126953    (242,88)-(280,142) batch id : 0
[71,1] element, prob = 0.0126953    (-4,131)-(31,181) batch id : 0
[72,1] element, prob = 0.0126953    (131,130)-(172,177) batch id : 0
[73,1] element, prob = 0.0126953    (96,221)-(158,255) batch id : 0
[74,1] element, prob = 0.0126953    (412,-18)-(522,71) batch id : 0
[75,1] element, prob = 0.0117188    (189,73)-(203,86) batch id : 0
[76,1] element, prob = 0.0117188    (190,83)-(204,97) batch id : 0
[77,1] element, prob = 0.0117188    (290,80)-(308,100) batch id : 0
[78,1] element, prob = 0.0117188    (461,92)-(474,108) batch id : 0
[79,1] element, prob = 0.0117188    (461,104)-(474,117) batch id : 0
[80,1] element, prob = 0.0117188    (73,140)-(92,159) batch id : 0
[81,1] element, prob = 0.0117188    (304,144)-(316,158) batch id : 0
[82,1] element, prob = 0.0117188    (310,140)-(329,159) batch id : 0
[83,1] element, prob = 0.0117188    (87,152)-(103,172) batch id : 0
[84,1] element, prob = 0.0117188    (52,206)-(72,225) batch id : 0
[85,1] element, prob = 0.0117188    (77,209)-(92,222) batch id : 0
[86,1] element, prob = 0.0117188    (325,208)-(339,223) batch id : 0
[87,1] element, prob = 0.0117188    (123,216)-(138,232) batch id : 0
[88,1] element, prob = 0.0117188    (379,23)-(399,52) batch id : 0
[89,1] element, prob = 0.0117188    (172,59)-(201,79) batch id : 0
[90,1] element, prob = 0.0117188    (460,56)-(480,77) batch id : 0
[91,1] element, prob = 0.0117188    (183,68)-(207,96) batch id : 0
[92,1] element, prob = 0.0117188    (163,116)-(194,156) batch id : 0
[93,1] element, prob = 0.0117188    (188,137)-(221,162) batch id : 0
[94,1] element, prob = 0.0117188    (202,139)-(229,162) batch id : 0
[95,1] element, prob = 0.0117188    (332,137)-(354,165) batch id : 0
[96,1] element, prob = 0.0117188    (417,148)-(448,173) batch id : 0
[97,1] element, prob = 0.0117188    (199,154)-(218,185) batch id : 0
[98,1] element, prob = 0.0117188    (322,166)-(343,196) batch id : 0
[99,1] element, prob = 0.0117188    (65,190)-(99,216) batch id : 0
[100,1] element, prob = 0.0117188    (19,197)-(38,230) batch id : 0
[101,1] element, prob = 0.0117188    (37,200)-(65,232) batch id : 0
[102,1] element, prob = 0.0117188    (60,202)-(84,236) batch id : 0
[103,1] element, prob = 0.0117188    (70,203)-(99,234) batch id : 0
[104,1] element, prob = 0.0117188    (265,197)-(286,233) batch id : 0
[105,1] element, prob = 0.0117188    (321,202)-(346,241) batch id : 0
[106,1] element, prob = 0.0117188    (345,197)-(367,236) batch id : 0
[107,1] element, prob = 0.0117188    (-4,212)-(20,236) batch id : 0
[108,1] element, prob = 0.0117188    (121,208)-(143,242) batch id : 0
[109,1] element, prob = 0.0117188    (15,228)-(39,252) batch id : 0
[110,1] element, prob = 0.0117188    (77,11)-(125,53) batch id : 0
[111,1] element, prob = 0.0117188    (332,66)-(365,126) batch id : 0
[112,1] element, prob = 0.0117188    (83,116)-(115,178) batch id : 0
[113,1] element, prob = 0.0117188    (-5,153)-(30,204) batch id : 0
[114,1] element, prob = 0.0117188    (447,140)-(477,206) batch id : 0
[115,1] element, prob = 0.0117188    (14,176)-(58,222) batch id : 0
[116,1] element, prob = 0.0117188    (40,190)-(106,247) batch id : 0
[117,1] element, prob = 0.0117188    (412,192)-(466,245) batch id : 0
[118,1] element, prob = 0.0117188    (429,219)-(484,256) batch id : 0
[119,1] element, prob = 0.0117188    (398,193)-(535,262) batch id : 0
[120,1] element, prob = 0.0117188    (176,88)-(361,213) batch id : 0
[121,1] element, prob = 0.0107422    (179,72)-(194,85) batch id : 0
[122,1] element, prob = 0.0107422    (52,80)-(72,100) batch id : 0
[123,1] element, prob = 0.0107422    (303,82)-(317,97) batch id : 0
[124,1] element, prob = 0.0107422    (314,83)-(328,96) batch id : 0
[125,1] element, prob = 0.0107422    (425,80)-(443,101) batch id : 0
[126,1] element, prob = 0.0107422    (166,90)-(184,112) batch id : 0
[127,1] element, prob = 0.0107422    (168,135)-(182,148) batch id : 0
[128,1] element, prob = 0.0107422    (55,144)-(69,158) batch id : 0
[129,1] element, prob = 0.0107422    (86,141)-(102,160) batch id : 0
[130,1] element, prob = 0.0107422    (292,145)-(306,158) batch id : 0
[131,1] element, prob = 0.0107422    (461,146)-(474,161) batch id : 0
[132,1] element, prob = 0.0107422    (76,151)-(93,171) batch id : 0
[133,1] element, prob = 0.0107422    (200,151)-(217,172) batch id : 0
[134,1] element, prob = 0.0107422    (54,187)-(69,201) batch id : 0
[135,1] element, prob = 0.0107422    (43,198)-(58,211) batch id : 0
[136,1] element, prob = 0.0107422    (9,206)-(23,223) batch id : 0
[137,1] element, prob = 0.0107422    (88,208)-(102,223) batch id : 0
[138,1] element, prob = 0.0107422    (112,206)-(125,223) batch id : 0
[139,1] element, prob = 0.0107422    (122,204)-(139,228) batch id : 0
[140,1] element, prob = 0.0107422    (133,208)-(148,221) batch id : 0
[141,1] element, prob = 0.0107422    (144,202)-(161,225) batch id : 0
[142,1] element, prob = 0.0107422    (337,208)-(352,222) batch id : 0
[143,1] element, prob = 0.0107422    (361,26)-(388,46) batch id : 0
[144,1] element, prob = 0.0107422    (460,44)-(478,68) batch id : 0
[145,1] element, prob = 0.0107422    (162,59)-(189,79) batch id : 0
[146,1] element, prob = 0.0107422    (185,59)-(213,77) batch id : 0
[147,1] element, prob = 0.0107422    (163,66)-(189,95) batch id : 0
[148,1] element, prob = 0.0107422    (293,69)-(324,91) batch id : 0
[149,1] element, prob = 0.0107422    (163,77)-(188,109) batch id : 0
[150,1] element, prob = 0.0107422    (194,78)-(217,107) batch id : 0
[151,1] element, prob = 0.0107422    (158,84)-(200,138) batch id : 0
[152,1] element, prob = 0.0107422    (279,121)-(305,156) batch id : 0
[153,1] element, prob = 0.0107422    (329,128)-(353,154) batch id : 0
[154,1] element, prob = 0.0107422    (319,122)-(355,160) batch id : 0
[155,1] element, prob = 0.0107422    (61,136)-(97,162) batch id : 0
[ INFO ] Image out_0.bmp created!
[ INFO ] Execution successful
[ INFO ] This sample is an API example, for any performance measurements please use the dedicated benchmark_app tool
7. An out_0.bmp file should have been created with the detected faces highlighted
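For comparison, the same model can also be driven from Python with the Inference Engine API. The following is a minimal sketch, not one of the shipped samples, with the model and image paths assumed to match the steps above:

import cv2
import numpy as np
from openvino.inference_engine import IECore

model_xml = '/home/pi/models/intel/face-detection-adas-0001/FP16/face-detection-adas-0001.xml'

ie = IECore()
net = ie.read_network(model=model_xml)  # the .bin file is located next to the .xml
input_name = next(iter(net.input_info))
output_name = next(iter(net.outputs))
_, _, h, w = net.input_info[input_name].input_data.shape  # 1x3x384x672 for this model

exec_net = ie.load_network(network=net, device_name='MYRIAD')

frame = cv2.imread('/home/pi/Downloads/faces.bmp')
# Resize to the network input size and reorder HWC -> NCHW.
blob = cv2.resize(frame, (w, h)).transpose((2, 0, 1))[np.newaxis, ...]
result = exec_net.infer({input_name: blob})

# The output shape is [1, 1, N, 7]; each row is
# [image_id, label, confidence, xmin, ymin, xmax, ymax] with normalized coordinates.
for det in result[output_name].reshape(-1, 7):
    if det[2] > 0.5:
        xmin, ymin = int(det[3] * frame.shape[1]), int(det[4] * frame.shape[0])
        xmax, ymax = int(det[5] * frame.shape[1]), int(det[6] * frame.shape[0])
        cv2.rectangle(frame, (xmin, ymin), (xmax, ymax), (0, 255, 0), 2)

cv2.imwrite('out_py.bmp', frame)

The rows above the 0.5 threshold correspond to the four 'WILL BE PRINTED!' detections in the C++ sample output above.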
https://github.com/intel-iot-devkit/sample-videos/
1. Download Open Model Zoo from GitHub
cd ~/
git clone --depth 1 https://github.com/openvinotoolkit/open_model_zoo
2. Install the Open Model Zoo requirements
cd open_model_zoo/tools/downloader
python3 -m pip install -r requirements.in
Looking in indexes: https://pypi.org/simple, https://www.piwheels.org/simple
Requirement already satisfied: pyyaml in /home/pi/.local/lib/python3.7/site-packages (from -r requirements.in (line 1)) (5.4.1)
Requirement already satisfied: requests in /usr/lib/python3/dist-packages (from -r requirements.in (line 2)) (2.21.0)
3. Verify which models are supported by a demo by viewing the demo's models.lst file
Ex:
pi@raspberrypi:~/open_model_zoo/demos $ cat object_detection_demo/models.lst
# This file can be used with the --list option of the model downloader.
# For -at ssd
face-detection-adas-????
face-detection-retail-????
pedestrian-and-vehicle-detector-adas-????
pedestrian-detection-adas-????
pelee-coco
person-detection-????
person-detection-retail-0013
retinanet-tf
vehicle-detection-adas-????
vehicle-license-plate-detection-barrier-????
# For -at yolo
yolo-v3-tf
yolo-v3-tiny-tf
4. The downloader.py script can be used to list the models available for download.
~/open_model_zoo/tools/downloader/downloader.py --print_all | grep face-detection
face-detection-0200
face-detection-0202
face-detection-0204
face-detection-0205
face-detection-0206
face-detection-adas-0001
face-detection-retail-0004
face-detection-retail-0005
face-detection-retail-0044
5. Use the downloader.py script to get a model to use with a demo
Ex:
cd /home/pi/models
python3 ~/open_model_zoo/tools/downloader/downloader.py --name face-detection-adas-0001
################|| Downloading face-detection-adas-0001 ||################
========== Downloading /home/pi/models/intel/face-detection-adas-0001/FP32/face-detection-adas-0001.xml
... 100%, 220 KB, 418 KB/s, 0 seconds passed
========== Downloading /home/pi/models/intel/face-detection-adas-0001/FP32/face-detection-adas-0001.bin
... 100%, 4113 KB, 2451 KB/s, 1 seconds passed
========== Downloading /home/pi/models/intel/face-detection-adas-0001/FP16/face-detection-adas-0001.xml
... 100%, 220 KB, 413 KB/s, 0 seconds passed
========== Downloading /home/pi/models/intel/face-detection-adas-0001/FP16/face-detection-adas-0001.bin
... 100%, 2056 KB, 1972 KB/s, 1 seconds passed
========== Downloading /home/pi/models/intel/face-detection-adas-0001/FP16-INT8/face-detection-adas-0001.xml
... 100%, 509 KB, 765 KB/s, 0 seconds passed
========== Downloading /home/pi/models/intel/face-detection-adas-0001/FP16-INT8/face-detection-adas-0001.bin
... 100%, 1074 KB, 1298 KB/s, 0 seconds passed
Note: To download a specific floating-point precision of the model, the '--precision' option can be used:
python3 ~/open_model_zoo/tools/downloader/downloader.py --name face-detection-adas-0001 --precision FP16
################|| Downloading face-detection-adas-0001 ||################
========== Downloading /home/pi/models/intel/face-detection-adas-0001/FP16/face-detection-adas-0001.xml
... 100%, 220 KB, 400 KB/s, 0 seconds passed
========== Downloading /home/pi/models/intel/face-detection-adas-0001/FP16/face-detection-adas-0001.bin
... 100%, 2056 KB, 2008 KB/s, 1 seconds passed
The '--output_dir' option can be used to specify the directory to download the model to
Ex:
python3 downloader.py --name person-detection-0201 --output_dir ~/models
6. Download video files to use with the demos.
Ex:
wget https://github.com/intel-iot-devkit/sample-videos/raw/master/person-bicycle-car-detection.mp4
wget https://github.com/intel-iot-devkit/sample-videos/raw/master/face-demographics-walking-and-pause.mp4
wget https://github.com/intel-iot-devkit/sample-videos/raw/master/face-demographics-walking.mp4
7. Run the Face Detection demo
./armv7l/Release/object_detection_demo -m ~/models/intel/face-detection-adas-0001/FP16/face-detection-adas-0001.xml -i ~/models/face-demographics-walking.mp4 -at ssd -d MYRIAD
Face Detection Video
8. Run the object_detection_demo example with a Vehicle Detection model
python3 downloader.py --name vehicle-detection-0200 --output_dir ~/models
./armv7l/Release/object_detection_demo -m ~/models/intel/vehicle-detection-0200/FP16/vehicle-detection-0200.xml -i ~/models/person-bicycle-car-detection.mp4 -at ssd -d MYRIAD
Vehicle Detection Video
9. Run the object_detection_demo example with a Person Detection model
python3 downloader.py --name person-detection-0201 --output_dir ~/models
./armv7l/Release/object_detection_demo -m ~/models/intel/person-detection-0201/FP16/person-detection-0201.xml -i ~/models/person-bicycle-car-detection.mp4 -at ssd -d MYRIAD
Person Detection Video - Over Head
10. Run the object_detection_demo example with the Person Detection model on a video of people walking toward the camera
./armv7l/Release/object_detection_demo -m ~/models/intel/person-detection-0201/FP16/person-detection-0201.xml -i ~/models/face-demographics-walking.mp4 -at ssd -d MYRIAD
Person Detection Walking Toward Camera
11. OMZ Retail Pedestrian Tracking
NOTE: This demo ran a bit slow on the Raspberry Pi, which is why the video appears slow
pi@raspberrypi:~/open_model_zoo/demos/build $ ./armv7l/Release/pedestrian_tracker_demo --m_det /home/pi/models/intel/person-detection-retail-0002/FP16/person-detection-retail-0002.xml -m_reid /home/pi/models/intel/person-reidentification-retail-0286/FP16/person-reidentification-retail-0286.xml -d_det MYRIAD -d_reid MYRIAD -i /home/pi/Downloads/store-aisle-detection.mp4
InferenceEngine:
    API version ......... 2.1
    Build ........... 2021.2.0-1877-176bdf51370-releases/2021/2
Loading device MYRIAD
MYRIAD
myriadPlugin version ......... 2.1
    Build ........... 2021.2.0-1877-176bdf51370-releases/2021/2
[ WARN:0] global ../opencv/modules/videoio/src/cap_gstreamer.cpp (919) open OpenCV | GStreamer warning: unable to query duration of stream
[ WARN:0] global ../opencv/modules/videoio/src/cap_gstreamer.cpp (956) open OpenCV | GStreamer warning: Cannot query video position: status=1, value=0, duration=-1
To close the application, press 'CTRL+C' here or switch to the output window and press ESC key
Mean core utilization: 22.2% 20.1% 22.0% 20.5%
Memory mean usage: 0.5 GiB
Mean swap usage: 0.3 GiB
Execution successful
Pedestrian Tracker Demo - Retail
12. Text Detection
Image used
./armv7l/Release/text_detection_demo -m_td ~/models/intel/text-detection-0003/FP16/text-detection-0003.xml -m_tr ~/models/intel/text-recognition-0012/FP16/text-recognition-0012.xml -i ~/Downloads/intelncs2.JPG -d_tr MYRIAD -d_td MYRIAD -r
InferenceEngine:
    API version ......... 2.1
    Build ........... 2021.2.0-1877-176bdf51370-releases/2021/2
[ INFO ] Parsing input parameters
[ INFO ] Loading Inference Engine
[ INFO ] Device info:
[ INFO ] MYRIAD
myriadPlugin version ......... 2.1
    Build ........... 2021.2.0-1877-176bdf51370-releases/2021/2
[ INFO ] Loading network files
[ INFO ] Starting inference
To close the application, press 'CTRL+C' here or switch to the output window and press ESC or Q
387,123,220,123,220,72,387,72,neural
371,148,314,148,314,126,371,126,stick
389,148,374,148,374,126,389,126,2
309,149,220,149,220,128,309,128,compute
The words identified were:
- neural
- stick
- 2
- compute
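Each raw line printed with the '-r' option consists of eight comma-separated values, the x,y coordinates of the four corners of the detected text region, followed by the recognized word. A quick parsing sketch in Python (the input lines are copied from the run above):

# Parse raw text_detection_demo '-r' output: 8 quad corner coordinates, then the word.
raw_lines = [
    '387,123,220,123,220,72,387,72,neural',
    '309,149,220,149,220,128,309,128,compute',
]
for line in raw_lines:
    parts = line.split(',')
    xs = map(int, parts[0:8:2])  # x coordinates of the four corners
    ys = map(int, parts[1:8:2])  # y coordinates of the four corners
    corners = list(zip(xs, ys))
    word = parts[8]
    print(word, corners)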
There are Python as well as C++ Open Model Zoo sample demos; however, I was not able to get the Python demos to run on the Raspberry Pi 4.
These are the steps used for this review to attempt to run the Python examples.
1. The Python demos require Rust to be installed. Run the following to install the Rust compiler.
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh

  stable-armv7-unknown-linux-gnueabihf installed - rustc 1.50.0 (cb75ad5db 2021-02-10)

Rust is installed now. Great!

To get started you need Cargo's bin directory ($HOME/.cargo/bin) in your
PATH environment variable. Next time you log in this will be done automatically.
2. Add the following to the user .bashrc, or run it from the command line, to ensure the compiler is referenced properly.
source $HOME/.cargo/env
3. Install the Python dependencies from the requirements.txt file
~/open_model_zoo/demos/python_demos $ python3 -m pip install -r requirements.txt
Successfully installed absl-py-0.11.0 cachetools-4.2.1 cycler-0.10.0 flake8-3.8.4 flake8-import-order-0.18.1 google-auth-1.27.0 google-auth-oauthlib-0.4.2 grpcio-1.35.0 iniconfig-1.1.1 joblib-1.0.1 kiwisolver-1.3.1 markdown-3.3.3 matplotlib-3.3.4 motmetrics-1.2.0 nibabel-3.2.1 numpy-1.20.1 pandas-1.2.2 pillow-8.1.0 pluggy-0.13.1 protobuf-3.15.2 py-1.10.0 py-cpuinfo-7.0.0 pyasn1-0.4.8 pyasn1-modules-0.2.8 pycodestyle-2.6.0 pyflakes-2.2.0 pytest-6.2.2 pytest-benchmark-3.2.3 rsa-4.7.1 scikit-learn-0.24.1 scipy-1.6.1 setuptools-53.0.0 tensorboard-2.4.1 tensorboard-plugin-wit-1.8.0 tensorboardX-2.1 threadpoolctl-2.1.0 tokenizers-0.10.1 toml-0.10.2 tqdm-4.57.0 xmltodict-0.12.0
4. Running one of the demos, such as 'object_detection_demo.py', results in a 'No module named 'ngraph'' error even though the environment variables are set.
~/open_model_zoo/demos $ python3 ./python_demos/object_detection_demo/object_detection_demo.py -h
Traceback (most recent call last):
  File "./python_demos/object_detection_demo/object_detection_demo.py", line 32, in <module>
    from models import *
  File "/home/pi/open_model_zoo/demos/python_demos/common/models/__init__.py", line 19, in <module>
    from .yolo import YOLO
  File "/home/pi/open_model_zoo/demos/python_demos/common/models/yolo.py", line 18, in <module>
    import ngraph
ModuleNotFoundError: No module named 'ngraph'
5. After posting an issue on the Open Model Zoo GitHub repo, it was found that 'ngraph' has not been ported to the Raspberry Pi build, so the YOLO examples, which use 'ngraph', will not work.
To work around this, comment out the import of YOLO in the '__init__.py' file.
Ex:
/open_model_zoo/demos/python_demos/common/models/__init__.py

"""
 Copyright (C) 2020 Intel Corporation

 Licensed under the Apache License, Version 2.0 (the "License");
 you may not use this file except in compliance with the License.
 You may obtain a copy of the License at

      http://www.apache.org/licenses/LICENSE-2.0

 Unless required by applicable law or agreed to in writing, software
 distributed under the License is distributed on an "AS IS" BASIS,
 WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 See the License for the specific language governing permissions and
 limitations under the License.
"""

from .ssd import SSD
#from .yolo import YOLO
from .faceboxes import FaceBoxes
from .centernet import CenterNet
from .retinaface import RetinaFace
6. Run the Object Detection Python demo
pi@raspberrypi:~/open_model_zoo/demos/python_demos/object_detection_demo $ python3 object_detection_demo.py -m ~/models/intel/vehicle-detection-0200/FP16/vehicle-detection-0200.xml -at ssd -i ~/Downloads/car-detection.mp4 -d MYRIAD
[ INFO ] Initializing Inference Engine...
[ INFO ] Loading network...
[ INFO ] Reading network from IR...
[ INFO ] Use SingleOutputParser
[ INFO ] Loading network to MYRIAD plugin...
[ WARN:0] global ../opencv/modules/videoio/src/cap_gstreamer.cpp (919) open OpenCV | GStreamer warning: unable to query duration of stream
[ WARN:0] global ../opencv/modules/videoio/src/cap_gstreamer.cpp (956) open OpenCV | GStreamer warning: Cannot query video position: status=1, value=0, duration=-1
[ INFO ] Starting inference...
To close the application, press 'CTRL+C' here or switch to the output window and press ESC key
Latency: 73.6 ms
FPS: 11.4
Vehicle Detection Python Demo
https://medium.com/analytics-vidhya/jupyter-lab-on-raspberry-pi-22876591b227
https://jupyterlab.readthedocs.io/en/stable/getting_started/installation.html
1. Install dependencies
sudo apt-get update
sudo apt-get install python3-pip
sudo pip3 install setuptools
sudo apt install libffi-dev
2. Use Pip to install Jupyter Lab
pip3 install jupyterlab
3. Set PATH to include the local Python bin location
export PATH="$HOME/.local/bin:$PATH"
4. Create a directory for the Jupyter Notebooks
mkdir ~/notebooks
5. Launch Jupyter Lab
jupyter lab --notebook-dir=~/notebooks/
[I 2021-02-23 18:42:11.852 ServerApp] jupyterlab | extension was successfully linked.
[I 2021-02-23 18:42:11.871 ServerApp] Writing notebook server cookie secret to /home/pi/.local/share/jupyter/runtime/jupyter_cookie_secret
[I 2021-02-23 18:42:11.934 LabApp] JupyterLab extension loaded from /home/pi/.local/lib/python3.7/site-packages/jupyterlab
[I 2021-02-23 18:42:11.935 LabApp] JupyterLab application directory is /home/pi/.local/share/jupyter/lab
[I 2021-02-23 18:42:11.944 ServerApp] jupyterlab | extension was successfully loaded.
[I 2021-02-23 18:42:11.945 ServerApp] Serving notebooks from local directory: /home/pi/notebooks
[I 2021-02-23 18:42:11.945 ServerApp] Jupyter Server 1.4.1 is running at:
[I 2021-02-23 18:42:11.945 ServerApp] http://localhost:8888/lab?token=71b83a83097167e96116a8e5a12b6ea0e8f520197312178f
[I 2021-02-23 18:42:11.945 ServerApp]  or http://127.0.0.1:8888/lab?token=71b83a83097167e96116a8e5a12b6ea0e8f520197312178f
[I 2021-02-23 18:42:11.946 ServerApp] Use Control-C to stop this server and shut down all kernels (twice to skip confirmation).
[C 2021-02-23 18:42:12.061 ServerApp]
    To access the server, open this file in a browser:
        file:///home/pi/.local/share/jupyter/runtime/jpserver-9642-open.html
    Or copy and paste one of these URLs:
        http://localhost:8888/lab?token=71b83a83097167e96116a8e5a12b6ea0e8f520197312178f
     or http://127.0.0.1:8888/lab?token=71b83a83097167e96116a8e5a12b6ea0e8f520197312178f
6. When Jupyter Lab starts, the workspace will be blank, and either a Notebook or a Console page can be created.
7. Create a Jupyter Notebook to test the environment.
NOTE: In this instance, the person-vehicle-bike-detection-crossroad-0078 model was used
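A notebook cell along the following lines will run that model on the NCS 2; this is a sketch using OpenCV's dnn module, assuming the model files and a 'people.jpg' test image sit next to the notebook (it mirrors the openvino_fd_myriad.py script used in the Docker example later in this review):

import cv2 as cv

# Load the IR model files (assumed to be in the notebook directory).
net = cv.dnn.readNet('person-vehicle-bike-detection-crossroad-0078.xml',
                     'person-vehicle-bike-detection-crossroad-0078.bin')
# Run the inference on the NCS 2.
net.setPreferableTarget(cv.dnn.DNN_TARGET_MYRIAD)

frame = cv.imread('people.jpg')  # hypothetical test image name
blob = cv.dnn.blobFromImage(frame, size=(1024, 1024), ddepth=cv.CV_8U)
net.setInput(blob)
out = net.forward()

# Each detection row is [image_id, label, confidence, xmin, ymin, xmax, ymax].
for detection in out.reshape(-1, 7):
    if float(detection[2]) > 0.5:
        xmin = int(detection[3] * frame.shape[1])
        ymin = int(detection[4] * frame.shape[0])
        xmax = int(detection[5] * frame.shape[1])
        ymax = int(detection[6] * frame.shape[0])
        cv.rectangle(frame, (xmin, ymin), (xmax, ymax), color=(0, 255, 0))

cv.imwrite('out.png', frame)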
8. The result is a green box drawn around each of the people identified in the image.
Running Jupyter Notebook
This example shows how to install and run an OpenVINO Docker container on a Raspberry Pi 4.
Example followed
https://www.intel.com/content/www/us/en/support/articles/000055220/boards-and-kits.html
1. Create a directory for the Docker files
mkdir ~/docker && cd ~/docker
2. Install Docker on the Raspberry Pi.
Note: The Docker convenience script needs to be used to install Docker on the Raspberry Pi. Ensure ‘armhf’ is used as the architecture.
The instructions are located at the following link:
https://docs.docker.com/engine/install/debian/
Ex:
a. Get the 'get-docker.sh' script and run it to install Docker
curl -fsSL https://get.docker.com -o get-docker.sh
sudo sh get-docker.sh
b. To use Docker as a non-root user without having to use sudo, add the user to the docker group.
sudo usermod -aG docker pi
3. Create a Dockerfile based on the one listed at the web link above.
Ex:
NOTE: The OpenVINO release and related models were changed to reflect Release 2021.2
Dockerfile
FROM balenalib/rpi-raspbian:latest

ARG DOWNLOAD_LINK=https://download.01.org/opencv/2021/openvinotoolkit/2021.2/l_openvino_toolkit_runtime_raspbian_p_2021.2.185.tgz
ARG INSTALL_DIR=/opt/intel/openvino
ARG BIN_FILE=https://download.01.org/opencv/2021/openvinotoolkit/2021.2/open_model_zoo/models_bin/3/person-vehicle-bike-detection-crossroad-0078/FP16/person-vehicle-bike-detection-crossroad-0078.bin
ARG WEIGHTS_FILE=https://download.01.org/opencv/2021/openvinotoolkit/2021.2/open_model_zoo/models_bin/3/person-vehicle-bike-detection-crossroad-0078/FP16/person-vehicle-bike-detection-crossroad-0078.xml
ARG IMAGE_FILE=https://cdn.pixabay.com/photo/2018/07/06/00/33/person-3519503_960_720.jpg

RUN apt-get update && apt-get install -y --no-install-recommends \
    apt-utils \
    automake \
    cmake \
    cpio \
    gcc \
    g++ \
    libatlas-base-dev \
    libstdc++6 \
    libtool \
    libusb-1.0.0-dev \
    lsb-release \
    make \
    python3-pip \
    python3-numpy \
    python3-scipy \
    libgtk-3-0 \
    pkg-config \
    libavcodec-dev \
    libavformat-dev \
    libswscale-dev \
    sudo \
    udev \
    unzip \
    vim \
    git \
    libgtk2.0-dev \
    x11-apps \
    mpv \
    mplayer \
    wget && \
    rm -rf /var/lib/apt/lists/*

RUN pip3 install --no-cache-dir setuptools && \
    pip3 install --no-cache-dir jupyter

RUN mkdir -p $INSTALL_DIR && cd $INSTALL_DIR && \
    wget -c $DOWNLOAD_LINK && \
    tar xf l_openvino_toolkit_runtime_raspbian_p*.tgz --strip 1 -C $INSTALL_DIR

# add USB rules
RUN sudo usermod -a -G users "$(whoami)" && \
    /bin/bash -c "source $INSTALL_DIR/bin/setupvars.sh && \
    sh $INSTALL_DIR/install_dependencies/install_NCS_udev_rules.sh"

# build Object Detection sample
RUN echo "source /opt/intel/openvino/bin/setupvars.sh" >> ~/.bashrc && \
    mkdir /root/Downloads && \
    cd $INSTALL_DIR/deployment_tools/inference_engine/samples/c/ && \
    /bin/bash -c "source $INSTALL_DIR/bin/setupvars.sh && \
    ./build_samples.sh && \
    wget --no-check-certificate $BIN_FILE -O /root/Downloads/person-vehicle-bike-detection-crossroad-0078.bin && \
    wget --no-check-certificate $WEIGHTS_FILE -O /root/Downloads/person-vehicle-bike-detection-crossroad-0078.xml && \
    wget --no-check-certificate $IMAGE_FILE -O /root/Downloads/walk.jpg "

RUN echo "import cv2 as cv\n\
# Load the model.\n\
net = cv.dnn.readNet('person-vehicle-bike-detection-crossroad-0078.xml',\
'person-vehicle-bike-detection-crossroad-0078.bin')\n\
# Specify target device.\n\
net.setPreferableTarget(cv.dnn.DNN_TARGET_MYRIAD)\n\
# Read an image.\n\
frame = cv.imread('walk.jpg')\n\
if frame is None:\n\
    raise Exception('Image not found!')\n\
# Prepare input blob and perform an inference.\n\
blob = cv.dnn.blobFromImage(frame, size=(1024, 1024), ddepth=cv.CV_8U)\n\
net.setInput(blob)\n\
out = net.forward()\n\
# Draw detected faces on the frame.\n\
for detection in out.reshape(-1, 7):\n\
    confidence = float(detection[2])\n\
    xmin = int(detection[3] * frame.shape[1])\n\
    ymin = int(detection[4] * frame.shape[0])\n\
    xmax = int(detection[5] * frame.shape[1])\n\
    ymax = int(detection[6] * frame.shape[0])\n\
    if confidence > 0.5:\n\
        cv.rectangle(frame, (xmin, ymin), (xmax, ymax), color=(0, 255, 0))\n\
# Save the frame to an image file.\n\
cv.imwrite('out.png', frame)\n\
print('Detection results in out.png')" >> /root/Downloads/openvino_fd_myriad.py
4. To build the docker image, run the following from the docker folder
docker buildx build --platform=linux/arm/v7 . -t openvino-rpi
[+] Building 988.8s (11/11) FINISHED
 => [internal] load build definition from Dockerfile                         0.3s
 => => transferring dockerfile: 3.35kB                                       0.1s
 => [internal] load .dockerignore                                            0.1s
 => => transferring context: 2B                                              0.0s
 => [internal] load metadata for docker.io/balenalib/rpi-raspbian:latest     2.0s
 => [1/7] FROM docker.io/balenalib/rpi-raspbian:latest@sha256:566d3d4aec    51.9s
 => => resolve docker.io/balenalib/rpi-raspbian:latest@sha256:566d3d4aec9    0.0s
 => => sha256:bbf526ff29798ef56e20c818b8fe7e0f5d5fc90af0d 4.87kB / 4.87kB    0.0s
 => => sha256:566d3d4aec9b3cef82ddc79a5d98cbf1959cae17d28 1.78kB / 1.78kB    0.0s
 => => sha256:20e06662a7a272b3f858829f8818f2d6b625c7ff 38.45MB / 38.45MB    23.0s
 => => sha256:bbe977187d35e14e35f03180c4454e5c9f478f8029492e1 304B / 304B    0.8s
 => => sha256:1c62e9e51efea3549a93dc53e4e2dc4bd9458a2adcd5ca3 254B / 254B    0.6s
 => => sha256:ab4813588625cdf2dcda2120b677ba4b51f92fccf343d94 907B / 907B    1.0s
 => => sha256:70c0b6627ae89014d108c15c5294ec61d905c719698b308 176B / 176B    1.1s
 => => sha256:0483f1a5a01b2281a03e51d86ca0a935dac0a66fb8eb899 412B / 412B    1.3s
 => => sha256:8cc1424b602b64448ae4963be5840ee01b6a3181 15.34MB / 15.34MB    13.4s
 => => extracting sha256:20e06662a7a272b3f858829f8818f2d6b625c7ff45870cc    10.3s
 => => extracting sha256:bbe977187d35e14e35f03180c4454e5c9f478f8029492e10    0.0s
 => => extracting sha256:1c62e9e51efea3549a93dc53e4e2dc4bd9458a2adcd5ca39    0.0s
 => => extracting sha256:ab4813588625cdf2dcda2120b677ba4b51f92fccf343d947    0.0s
 => => extracting sha256:70c0b6627ae89014d108c15c5294ec61d905c719698b308a    0.0s
 => => extracting sha256:0483f1a5a01b2281a03e51d86ca0a935dac0a66fb8eb8998    0.0s
 => => extracting sha256:8cc1424b602b64448ae4963be5840ee01b6a3181b39ddeee    1.2s
 => [2/7] RUN apt-get update && apt-get install -y --no-install-recomme   678.8s
 => [3/7] RUN pip3 install --no-cache-dir setuptools && pip3 instal       136.6s
 => [4/7] RUN mkdir -p /opt/intel/openvino && cd /opt/intel/openvino &&    41.8s
 => [5/7] RUN sudo usermod -a -G users "$(whoami)" && /bin/bash -c "so      2.1s
 => [6/7] RUN echo "source /opt/intel/openvino/bin/setupvars.sh" >> ~/.b   23.3s
 => [7/7] RUN echo "import cv2 as cv\nnet = cv.dnn.readNet('person-vehicl   1.4s
 => exporting to image                                                      49.9s
 => => exporting layers                                                     49.8s
 => => writing image sha256:e67a6636f3d3536e496f235f906f6df3d1849675bb116    0.0s
 => => naming to docker.io/library/openvino-rpi
5. List installed docker images
$ docker images
REPOSITORY     TAG       IMAGE ID       CREATED      SIZE
openvino-rpi   latest    e67a6636f3d3   9 days ago   910MB
6. List docker containers
$ docker container ls -a
CONTAINER ID   IMAGE          COMMAND                  CREATED      STATUS                   PORTS     NAMES
e91bb46c4068   openvino-rpi   "/usr/bin/entry.sh /…"   8 days ago   Exited (0) 2 days ago              crazy_feynman
7. If needed, start the container if it is not running
docker start crazy_feynman
8. Log into the container if needed
docker exec -it crazy_feynman bash
9. Ensure the following environment variables in the container are set:
export ngraph_DIR=/home/pi/openvino/build/ngraph
export InferenceEngine_DIR=/home/pi/openvino/build
export PYTHONPATH=/home/pi/openvino/bin/armv7l/Release/lib/python_api/python3.7
export LD_LIBRARY_PATH=/home/pi/openvino/bin/armv7l/Release/lib/
export OpenCV_DIR=/usr/local/lib/cmake/opencv4
10. Install Open Model Zoo
git clone https://github.com/opencv/open_model_zoo.git
11. Build the OpenVINO Inference Engine samples
root@raspberrypi:/opt/intel/openvino/deployment_tools/inference_engine/samples/cpp# ./build_samples.sh
Build completed, you can find binaries for all samples in the /root/inference_engine_cpp_samples_build/armv7l/Release subfolder.
12. Run the 'hello_query_device' demo to verify the NCS 2 is identified in the container.
root@raspberrypi:~/inference_engine_cpp_samples_build/armv7l/Release# ./hello_query_device
Available devices:
    Device: MYRIAD
    Metrics:
        DEVICE_THERMAL : UNSUPPORTED TYPE
        RANGE_FOR_ASYNC_INFER_REQUESTS : { 3, 6, 1 }
        SUPPORTED_CONFIG_KEYS : [ PERF_COUNT EXCLUSIVE_ASYNC_REQUESTS LOG_LEVEL VPU_MYRIAD_PLATFORM CONFIG_FILE VPU_MYRIAD_FORCE_RESET DEVICE_ID VPU_CUSTOM_LAYERS VPU_PRINT_RECEIVE_TENSOR_TIME VPU_HW_STAGES_OPTIMIZATION MYRIAD_ENABLE_FORCE_RESET MYRIAD_CUSTOM_LAYERS MYRIAD_ENABLE_RECEIVING_TENSOR_TIME MYRIAD_ENABLE_HW_ACCELERATION ]
        SUPPORTED_METRICS : [ DEVICE_THERMAL RANGE_FOR_ASYNC_INFER_REQUESTS SUPPORTED_CONFIG_KEYS SUPPORTED_METRICS OPTIMIZATION_CAPABILITIES FULL_DEVICE_NAME AVAILABLE_DEVICES ]
        OPTIMIZATION_CAPABILITIES : [ FP16 ]
        FULL_DEVICE_NAME : Intel Movidius Myriad X VPU
    Default values for device configuration keys:
        PERF_COUNT : NO
        EXCLUSIVE_ASYNC_REQUESTS : NO
        LOG_LEVEL : LOG_NONE
        VPU_MYRIAD_PLATFORM : ""
        CONFIG_FILE : ""
        VPU_MYRIAD_FORCE_RESET : NO
        DEVICE_ID : ""
        VPU_CUSTOM_LAYERS : ""
        VPU_PRINT_RECEIVE_TENSOR_TIME : NO
        VPU_HW_STAGES_OPTIMIZATION : YES
        MYRIAD_ENABLE_FORCE_RESET : NO
        MYRIAD_CUSTOM_LAYERS : ""
        MYRIAD_ENABLE_RECEIVING_TENSOR_TIME : NO
        MYRIAD_ENABLE_HW_ACCELERATION : YES
13. Run the 'object_detection_sample_ssd_c' demo from the container
root@raspberrypi:~/inference_engine_c_samples_build/armv7l/Release# ./object_detection_sample_ssd_c -m ~/Downloads/person-vehicle-bike-detection-crossroad-0078.xml -i ~/Downloads/walk.jpg -d MYRIAD
[ INFO ] InferenceEngine: 2.1.2021.2.0-1877-176bdf51370-releases/2021/2
[ INFO ] Parsing input parameters
[ INFO ] Files were added: 1
[ INFO ]     /root/Downloads/walk.jpg
[ INFO ] Loading Inference Engine
[ INFO ] Device info:
        MYRIAD
        myriadPlugin version ......... 2.1
        Build ......... 2021.2.0-1877-176bdf51370-releases/2021/2
[ INFO ] Loading network:
        /root/Downloads/person-vehicle-bike-detection-crossroad-0078.xml
[ INFO ] Preparing input blobs
[ WARNING ] Image is resized from (960, 640) to (1024, 1024)
[ INFO ] Batch size is 1
[ INFO ] Preparing output blobs
[ INFO ] Loading model to the device
[ INFO ] Create infer request
[ INFO ] Start inference
[ INFO ] Processing output blobs
[0, 1] element, prob = 0.999023    (467, 86)-(609, 579) batch id : 0 WILL BE PRINTED!
[1, 1] element, prob = 0.998047    (266, 100)-(402, 553) batch id : 0 WILL BE PRINTED!
[2, 1] element, prob = 0.256104    (390, 112)-(493, 445) batch id : 0
[3, 1] element, prob = 0.151123    (361, 127)-(423, 394) batch id : 0
[4, 1] element, prob = 0.132446    (465, 123)-(541, 398) batch id : 0
[5, 1] element, prob = 0.111450    (358, 167)-(401, 347) batch id : 0
[6, 1] element, prob = 0.108826    (898, -2)-(945, 124) batch id : 0
[7, 1] element, prob = 0.102539    (474, 92)-(547, 201) batch id : 0
[8, 1] element, prob = 0.099854    (917, -6)-(957, 140) batch id : 0
[9, 1] element, prob = 0.096008    (502, 94)-(566, 175) batch id : 0
[10, 1] element, prob = 0.095520    (587, 261)-(618, 371) batch id : 0
[11, 1] element, prob = 0.095093    (345, 71)-(428, 218) batch id : 0
[12, 1] element, prob = 0.093750    (889, 0)-(927, 89) batch id : 0
[13, 1] element, prob = 0.093201    (346, 158)-(391, 291) batch id : 0
[14, 1] element, prob = 0.090820    (414, 72)-(492, 256) batch id : 0
[15, 1] element, prob = 0.087708    (361, 111)-(403, 253) batch id : 0
[16, 1] element, prob = 0.086487    (891, 112)-(959, 322) batch id : 0
[17, 1] element, prob = 0.085510    (461, 100)-(516, 230) batch id : 0
[18, 1] element, prob = 0.084778    (918, 115)-(960, 235) batch id : 0
[19, 1] element, prob = 0.084473    (532, 123)-(610, 400) batch id : 0
[20, 1] element, prob = 0.084106    (336, 141)-(371, 231) batch id : 0
[21, 1] element, prob = 0.083435    (387, 157)-(436, 327) batch id : 0
[22, 1] element, prob = 0.083191    (566, 133)-(682, 330) batch id : 0
[23, 1] element, prob = 0.082947    (329, 172)-(383, 326) batch id : 0
[24, 1] element, prob = 0.082825    (573, 227)-(613, 374) batch id : 0
[25, 1] element, prob = 0.082458    (289, 61)-(367, 277) batch id : 0
[26, 1] element, prob = 0.082153    (387, 81)-(436, 226) batch id : 0
[27, 1] element, prob = 0.081238    (317, 56)-(458, 292) batch id : 0
[28, 1] element, prob = 0.081177    (907, 3)-(934, 74) batch id : 0
[29, 1] element, prob = 0.081177    (924, 0)-(952, 77) batch id : 0
[30, 1] element, prob = 0.081177    (654, 168)-(727, 372) batch id : 0
[31, 1] element, prob = 0.081116    (495, 121)-(572, 220) batch id : 0
[32, 1] element, prob = 0.080872    (422, 270)-(471, 427) batch id : 0
[33, 1] element, prob = 0.080750    (468, 8)-(532, 91) batch id : 0
[34, 1] element, prob = 0.080261    (872, 0)-(920, 120) batch id : 0
[35, 1] element, prob = 0.079468    (514, 97)-(548, 162) batch id : 0
[36, 1] element, prob = 0.079407    (326, 71)-(377, 190) batch id : 0
[37, 1] element, prob = 0.079102    (508, 9)-(555, 116) batch id : 0
[38, 1] element, prob = 0.078796    (942, -2)-(960, 56) batch id : 0
[39, 1] element, prob = 0.078064    (623, 291)-(660, 376) batch id : 0
[40, 1] element, prob = 0.077820    (550, 111)-(568, 169) batch id : 0
[41, 1] element, prob = 0.077637    (354, 312)-(402, 482) batch id : 0
[42, 1] element, prob = 0.077454    (316, 462)-(381, 564) batch id : 0
[43, 1] element, prob = 0.077148    (340, 60)-(397, 177) batch id : 0
[44, 1] element, prob = 0.077026    (375, 138)-(421, 270) batch id : 0
[45, 1] element, prob = 0.076721    (579, 295)-(603, 367) batch id : 0
[46, 1] element, prob = 0.076599    (322, 21)-(379, 150) batch id : 0
[47, 1] element, prob = 0.076477    (592, 162)-(658, 378) batch id : 0
[48, 1] element, prob = 0.076416    (499, 4)-(537, 82) batch id : 0
[49, 1] element, prob = 0.076416    (502, 134)-(528, 207) batch id : 0
[50, 1] element, prob = 0.076172    (523, 0)-(570, 94) batch id : 0
[51, 1] element, prob = 0.076111    (484, 123)-(520, 221) batch id : 0
[52, 1] element, prob = 0.076111    (338, 175)-(373, 280) batch id : 0
[53, 1] element, prob = 0.075806    (375, 56)-(527, 284) batch id : 0
[54, 1] element, prob = 0.075562    (570, 67)-(616, 208) batch id : 0
[55, 1] element, prob = 0.075317    (837, 178)-(912, 412) batch id : 0
[56, 1] element, prob = 0.075195    (376, 44)-(423, 175) batch id : 0
[57, 1] element, prob = 0.075134    (536, 0)-(586, 119) batch id : 0
[58, 1] element, prob = 0.074951    (366, 172)-(393, 245) batch id : 0
[59, 1] element, prob = 0.074341    (886, 33)-(960, 242) batch id : 0
[60, 1] element, prob = 0.074158    (494, 124)-(540, 189) batch id : 0
[61, 1] element, prob = 0.073914    (445, 6)-(507, 89) batch id : 0
[62, 1] element, prob = 0.073914    (419, 87)-(466, 221) batch id : 0
[63, 1] element, prob = 0.073792    (351, 123)-(386, 221) batch id : 0
[64, 2] element, prob = 0.077087    (425, 182)-(515, 244) batch id : 0
[65, 2] element, prob = 0.073425    (397, 172)-(483, 235) batch id : 0
[66, 2] element, prob = 0.065186    (382, 159)-(470, 212) batch id : 0
[67, 2] element, prob = 0.064209    (182, 4)-(273, 63) batch id : 0
[68, 2] element, prob = 0.062561    (449, 163)-(498, 280) batch id : 0
[69, 2] element, prob = 0.062317    (462, 175)-(523, 261) batch id : 0
[70, 2] element, prob = 0.061371    (398, 252)-(477, 311) batch id : 0
[71, 2] element, prob = 0.060974    (423, 210)-(482, 290) batch id : 0
[72, 2] element, prob = 0.060150    (442, 212)-(497, 254) batch id : 0
[73, 2] element, prob = 0.055206    (415, 211)-(471, 255) batch id : 0
[74, 2] element, prob = 0.055023    (419, 235)-(472, 277) batch id : 0
[75, 2] element, prob = 0.054230    (399, 245)-(457, 283) batch id : 0
[76, 2] element, prob = 0.052734    (198, -1)-(262, 32) batch id : 0
[77, 2] element, prob = 0.051880    (432, 191)-(479, 254) batch id : 0
[78, 2] element, prob = 0.051727    (444, 190)-(499, 235) batch id : 0
[79, 2] element, prob = 0.051544    (517, 267)-(609, 323) batch id : 0
[80, 2] element, prob = 0.050812    (214, 195)-(307, 250) batch id : 0
[ INFO ] Image out_0.bmp created!
[ INFO ] Execution successful
14. The 'out_0.bmp' can be copied from the container to the host using the following command
pi@raspberrypi:~ $ docker cp crazy_feynman:/root/inference_engine_c_samples_build/armv7l/Release/out_0.bmp .
This will be covered in a separate post.
This will be covered in a separate post due to computer issues.
My apologies for posting this after the due date. I ran into an issue with my laptop and ended up having to start over with some of the testing I had performed, and then other family commitments prevented me from posting this RoadTest on time, so it is a couple of weeks late.
My initial intent was to test the Intel Neural Compute Stick 2 with OpenVINO on an Ubuntu 20.04 VM, on a Raspberry Pi 4 running the latest Raspbian OS, and in a Docker container running on the Raspberry Pi with a RealSense D415 Depth Camera. The laptop issue caused me to have to start over with the VM testing, and there were compatibility issues between the Intel software and the Raspberry Pi that caused me to realign my focus. Namely, some of the Python examples in the Open Model Zoo will not run on the Raspberry Pi with the newer version of OpenVINO since they have not been ported over. Also, I was running the latest OpenVINO, Release 2021.2, but then ran into a compatibility issue between the RealSense software and the 2021.2 version of OpenVINO. Following a suggestion in the GitHub issue, I downgraded to OpenVINO 2020.3; however, this failed to compile as well. The only place I was able to run the RealSense D415 camera with OpenVINO was on a Windows 10 system. Overall, there were some interesting examples and demos provided to test the Intel Neural Compute Stick 2 and its associated software; however, the compatibility issues experienced with the Intel software and the NCS 2 added a bit of complication and frustration. I still plan to implement the NCS 2 in a robot configuration, but that will have to be another post at another time.
Top Comments
Thanks. The Raspberry Pi 4 I used only had 2GB of RAM, so some of the examples, such as the Pedestrian Tracker, may have worked better with a 4GB Pi.
I have an ODROID C4 that I was using as well but that…
The OpenVINO Toolkit uses OpenCV, but I did not do a comparison with a plain OpenCV configuration. There are OpenCV examples in the mix, so that could be a place to start, but it was not my focus.