Running Machine Vision on the Arduino Portenta + Vision Shield
What you see in this video is the Arduino Portenta running a machine vision algorithm to classify empty spaces and Lion / Jaguar figures. I wrote two detailed posts explaining how to achieve this quickly and accurately; see the links below, and pay special attention to the second one, because the training / correction / retraining process is what will allow you to reach these levels of accuracy.
The Portenta offers several options for working with #tinyML: you can take the Mbed approach with TensorFlow Lite, or you can use OpenMV, which is nicely integrated with this board. I selected the second approach for two reasons. First, Edge Impulse provides good support for this environment; second, OpenMV implements an impressive set of old-school computer vision techniques. Some of them are quite important for my project because they let me detect patterns in Jaguar fur. You get a lot of useful tools such as blob detection, circle detection, and several features for handling background changes.
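To give a concrete idea of what those classical features look like in practice, here is a minimal OpenMV (MicroPython) sketch that runs blob and circle detection on the Vision Shield's grayscale camera. The thresholds and size limits are illustrative guesses, not the values used in the project.

```python
# Minimal OpenMV sketch: blob and circle detection on the Vision Shield's
# grayscale camera. Thresholds and radii are placeholders and need tuning.
import sensor
import time

sensor.reset()
sensor.set_pixformat(sensor.GRAYSCALE)   # Vision Shield camera is monochrome
sensor.set_framesize(sensor.QVGA)        # 320x240
sensor.skip_frames(time=2000)            # let the sensor settle

clock = time.clock()

while True:
    clock.tick()
    img = sensor.snapshot()

    # Dark patches (e.g. rosette-like fur markings) as blobs; (0, 60) is a guess.
    for blob in img.find_blobs([(0, 60)], pixels_threshold=50, area_threshold=50):
        img.draw_rectangle(blob.rect())

    # Roughly circular patterns; threshold controls how strict detection is.
    for c in img.find_circles(threshold=2500, r_min=4, r_max=30, r_step=2):
        img.draw_circle(c.x(), c.y(), c.r())

    print(clock.fps())
```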
No matter which method you choose, inference runs on the M7 core, which is fast enough for the job: in this example you can see a solid 3.4 frames per second with close to 100% classification accuracy. The best part is that the model can classify Jaguars (real Jaguars) and even Jaguars in posters, without those samples being included in the training set.
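For reference, this is roughly how an image classifier exported from Edge Impulse runs under OpenMV. The trained.tflite and labels.txt file names follow Edge Impulse's OpenMV export; the windowing and scan parameters here are illustrative assumptions.

```python
# Sketch of running an Edge Impulse image classifier with OpenMV's tf module.
# File names match the Edge Impulse OpenMV export; adjust if yours differ.
import sensor
import time
import tf

sensor.reset()
sensor.set_pixformat(sensor.GRAYSCALE)
sensor.set_framesize(sensor.QVGA)
sensor.set_windowing((240, 240))         # square crop for the model input
sensor.skip_frames(time=2000)

net = "trained.tflite"
labels = [line.rstrip('\n') for line in open("labels.txt")]

clock = time.clock()

while True:
    clock.tick()
    img = sensor.snapshot()

    # Classify the frame; obj.output() returns one confidence score per label.
    for obj in tf.classify(net, img, min_scale=1.0, scale_mul=0.8,
                           x_overlap=0.5, y_overlap=0.5):
        predictions = sorted(zip(labels, obj.output()),
                             key=lambda p: p[1], reverse=True)
        print(predictions[0], "FPS:", clock.fps())
```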
There are many tools for working with tinyML, but I think Edge Impulse offers particularly good tools for understanding the dataset (see the graphical representation of features in the second post). Data is everything when working with ML, and understanding data quality is key to reaching these levels of accuracy. Edge Impulse is free for developers, and it also helps you keep the iterative process under control with its version control and other tools. It is also quite a productive tool, because you can use predefined models that are optimized to run on MCUs. In my other posts I developed my own models from scratch; in this case, however, the predefined models (transfer learning, for instance) work pretty well. You also have the option to edit these models by changing a few parameters or by directly editing the Keras model in expert mode.
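As a rough illustration of what editing the model in expert mode can look like, here is a minimal Keras transfer-learning sketch. The base model (MobileNetV2), input size, and class count are placeholder assumptions, not the exact code Edge Impulse generates.

```python
# Minimal Keras transfer-learning sketch, similar in spirit to what you can
# edit in Edge Impulse's expert mode. Base model, input size and class count
# are placeholders.
import tensorflow as tf

NUM_CLASSES = 3                 # e.g. empty / lion / jaguar (placeholder)
INPUT_SHAPE = (96, 96, 3)       # placeholder input size

base = tf.keras.applications.MobileNetV2(
    input_shape=INPUT_SHAPE, include_top=False,
    weights="imagenet", alpha=0.35)
base.trainable = False          # freeze the pretrained feature extractor

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dropout(0.1),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])

model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.0005),
              loss="categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```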
For the details, please review these two short posts, which will walk you through the entire process, from data gathering to deployment and running the model on the Portenta:
- https://www.wildedge.info/post/machine-vision-on-the-arduino-portenta-and-vision-shield-lora-part-i
- https://www.wildedge.info/post/machine-vision-on-the-arduino-portenta-and-vision-shield-lora-part-ii
Cheers,
Carlos.