I just received my UnitV2, which I described in a previous post, M5Stack UnitV2 - Standalone AI Camera for Edge Computing, and thought I'd do a quick run-through of the out-of-box (OOB) applications on the device.
Initial connection and operation are reasonably straightforward. The board provides an Ethernet-over-USB interface, as shown in the block diagram:
Note: there are a couple of mistakes in the diagram - DDR3 memory size is 128MB and NAND Flash is 512MB.
Connecting to the device's USB-C port therefore provides Ethernet/IP connectivity. It is necessary to install a device driver on the host computer (I'm using Windows 10). First, download and extract the SR9900.infs_amd64 driver.
When you first connect the device over USB, a "USB 10/100 LAN" device will appear in Device Manager under "Other devices". Update this device with the downloaded driver, and an "SR9900 USB2.0 to Fast Ethernet Adapter" will then appear under "Network adapters".
Now you should be able to connect to the UnitV2 with the IP address 10.254.239.1 or the domain name unitv2.py.
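As a quick sanity check before opening a browser, you can verify the link from the host with a few lines of Python. This is just a sketch; the assumption that the OOB framework serves its UI at the root path ("/") is mine:

```python
import socket
import urllib.request

# Resolve the device hostname; fall back to the documented fixed IP.
try:
    addr = socket.gethostbyname("unitv2.py")
except socket.gaierror:
    addr = "10.254.239.1"
print(f"UnitV2 address: {addr}")

# Fetch the root page to confirm the device's web server is responding.
with urllib.request.urlopen(f"http://{addr}/", timeout=5) as resp:
    print(f"HTTP {resp.status} from http://{addr}/")
```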
To examine the file system setup on the UnitV2, I used PuTTY to connect over SSH. The default username is "m5stack" and the password is "12345678".
71% of the rootfs on the Flash Memory is being used. There is a blank 16GB SD card installed for additional storage.
An older Linux kernel is in use: version 4.9, the same kernel that shipped with Debian "Stretch".
The OOB applications are located in the /home/m5stack/payload/bin directory:
The associated neural network models are located in the /home/m5stack/payload/models directory:
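If you'd rather script this poking around than do it interactively in PuTTY, the same checks can be run over SSH from Python with the paramiko library (pip install paramiko on the host). A minimal sketch, using the default credentials above:

```python
import paramiko

# Connect to the UnitV2 using the default credentials.
client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect("10.254.239.1", username="m5stack", password="12345678")

# Kernel version, rootfs usage, and the OOB application/model directories.
for cmd in ("uname -r",
            "df -h /",
            "ls /home/m5stack/payload/bin",
            "ls /home/m5stack/payload/models"):
    _, stdout, _ = client.exec_command(cmd)
    print(f"$ {cmd}\n{stdout.read().decode()}")

client.close()
```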
To run the OOB Application Framework you need to open unitv2.py in a browser window:
The Framework defaults to starting with a real-time VGA camera stream. The image above is a picture of my previous Beagles that hangs just above my monitors. You can configure which app is the default, as shown below. You can also switch to Jupyter Notebooks for interactive development; I'll cover that in a separate post.
There are 13 example applications in the OOB Framework, so I thought I'd demo the Object Recognition example. It offers 2 different models, yolo_20 and nanodet_80; I tried both and they give similar results. I found a collection of pet photos at https://www.pexels.com/search/pets/ and opened it on my second monitor to test the object recognition.
Sorry for the shaky video - some of the mis-classification is due to the positioning of the camera relative to the image on the monitor.
The classifier did pretty well with dogs, cats, and a bird:
{gallery} Pet Classifier
The classifier had some problems with the Beagle: it sometimes wanted to classify the chair instead, presumably because of its distinct material ribbing. The lizard is mis-classified because there is no lizard class in the model. The dog with the glasses classifies correctly if I pull the camera back from the image. And the classifier doesn't seem able to separate people from animals when they overlap.
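If you want to consume the recognition results from a host or microcontroller rather than watch the web UI, M5Stack documents that the built-in functions also stream their results as JSON over the serial port. Here's a minimal sketch using pyserial; the port name and the exact JSON fields shown are assumptions on my part and will vary with your setup and the selected function:

```python
import json
import serial  # pip install pyserial

# Port name is an assumption: adjust for your host
# (e.g. "COM4" on Windows, "/dev/ttyUSB0" on Linux).
ser = serial.Serial("/dev/ttyUSB0", 115200, timeout=1)

while True:
    line = ser.readline().decode("utf-8", errors="ignore").strip()
    if not line:
        continue
    try:
        result = json.loads(line)
    except json.JSONDecodeError:
        continue  # skip partial or non-JSON lines
    # Field names like "obj", "type", and "prob" are assumptions based on
    # the object-recognition output; check the M5Stack docs for your function.
    for obj in result.get("obj", []):
        print(obj.get("type"), obj.get("prob"))
```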
M5Stack has a demo video of the full set of OOB applications: https://m5stack.oss-cn-shenzhen.aliyuncs.com/video/Product_example_video/Unit/UnitV2_video_en.mp4