Hi vertical farmers,
In the previous post we presented our image acquisition system, capable of acquiring crop images and transmitting them to our vision system.
In this post we discuss the vision system as it stands at the current stage of the project: the design stage.
The description of the system database, originally intended for this post, is still at an early stage and will be postponed to the next one.
Vision System
Our vision system is a set of computer vision algorithms capable of detecting the plant characteristics for which it was trained. These characteristics may be, for example, dry leaves, undersized plants, or sick plants. In fact, it can be any visual characteristic, provided that we train the system with pictures showing the characteristic we want to detect.
The advantages of this system are:
- Low cost
- Versatility. It can detect a multitude of visual characteristics.
- Hardware simplicity. It can be streamlined to a webcam and a PC.
Conceptual Description
The proposed methodology uses appearance-based features extracted from RGB cameras. The methodology starts with the offline learning process represented in figure 1.
The offline learning design requires the selection of a feature detector algorithm such as SURF, STAR or MSER. These feature detectors are widely used and produced promising results in our early trials. One requirement in the selection is the use of invariant descriptors (e.g., invariant to scale and rotation).
Figure 1. Offline learning process (training).
After selecting the feature detector, a set of features is extracted from the available observations, and a clustering method such as K-Means is used to compute the center of mass of each word, creating the vocabulary used by the bag-of-words algorithm.
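The vocabulary-building step can be sketched as follows. This is a minimal illustration, assuming descriptors have already been extracted by a feature detector; here random vectors stand in for real SURF-style 64-dimensional descriptors, and the function name is ours, not from our code base.

```python
import numpy as np

def build_vocabulary(descriptors, k, iters=20, seed=0):
    """Cluster feature descriptors with K-Means; the k cluster
    centers ("centers of mass") become the visual vocabulary."""
    rng = np.random.default_rng(seed)
    # Initialise the centers with k randomly chosen descriptors.
    centers = descriptors[rng.choice(len(descriptors), size=k, replace=False)]
    for _ in range(iters):
        # Assign every descriptor to its nearest center.
        dists = np.linalg.norm(descriptors[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Move each center to the mean of its assigned descriptors.
        for j in range(k):
            members = descriptors[labels == j]
            if len(members):
                centers[j] = members.mean(axis=0)
    return centers

# Stand-in for descriptors extracted by a detector such as SURF (64-D vectors).
rng = np.random.default_rng(1)
descriptors = rng.normal(size=(200, 64))
vocabulary = build_vocabulary(descriptors, k=10)
print(vocabulary.shape)  # (10, 64): one 64-D "word" per cluster
```

In practice a library implementation (e.g., OpenCV's K-Means) would replace this loop, but the principle is the same: the cluster centers are the words of the vocabulary.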
With a valid vocabulary, the words present in each observation are extracted and a search tree (or word-probability model) is generated.
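The word-extraction step amounts to vector quantisation: each descriptor of an image is mapped to its nearest vocabulary word, and the image is summarised as a word-frequency histogram. A minimal sketch, with illustrative names and random stand-in data:

```python
import numpy as np

def to_histogram(descriptors, vocabulary):
    """Map each descriptor to its nearest vocabulary word and
    return a normalised word-frequency histogram for the image."""
    dists = np.linalg.norm(descriptors[:, None, :] - vocabulary[None, :, :], axis=2)
    words = dists.argmin(axis=1)  # nearest word index per descriptor
    hist = np.bincount(words, minlength=len(vocabulary)).astype(float)
    return hist / hist.sum()      # frequencies sum to 1

rng = np.random.default_rng(2)
vocabulary = rng.normal(size=(10, 64))  # stand-in vocabulary (10 words)
image_desc = rng.normal(size=(50, 64))  # stand-in descriptors of one image
hist = to_histogram(image_desc, vocabulary)
print(hist.shape, round(hist.sum(), 6))  # (10,) 1.0
```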
The offline learning process should use well-chosen datasets to avoid the following pitfalls. Samples too similar to the expected plant can limit the number of extracted words and bias the matching toward a higher rate of false positives. On the other hand, samples of very different plants will reduce performance, as important words will not be associated correctly and information will be lost.
At this point, the vision system is ready to use. Its common operation is referred to here as the online process.
In short, the online process uses probabilistic methods to match an input model (the words extracted from a given sample) against a set of virtual plants constructed from the training dataset. The words extracted from the measured input image are used to estimate its similarity to the training images. The algorithm's output corresponds to the matched model's class/designation (figure 2).
Figure 2. Online process (common operation for detection).
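As a rough illustration of the online matching, a nearest-neighbour stand-in for the probabilistic matching described above: the query image's word histogram is compared (here by cosine similarity) against the histograms of the trained virtual plants, and the most similar model's class is returned. The class labels and histograms below are made up for the example.

```python
import numpy as np

def classify(query_hist, model_hists, labels):
    """Return the label of the training model whose word histogram
    is most similar (cosine similarity) to the query histogram."""
    sims = [np.dot(query_hist, m) / (np.linalg.norm(query_hist) * np.linalg.norm(m))
            for m in model_hists]
    return labels[int(np.argmax(sims))]

# Toy word histograms for two "virtual plants" (illustrative labels).
healthy = np.array([0.6, 0.3, 0.1])
dry     = np.array([0.1, 0.2, 0.7])
query   = np.array([0.2, 0.2, 0.6])  # word profile closer to the "dry leaves" model
print(classify(query, [healthy, dry], ["healthy", "dry leaves"]))  # dry leaves
```

A probabilistic formulation would replace the similarity score with a likelihood per class, but the decision structure is the same.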
With the vision system we expect to be able to detect and evaluate:
- Plant growth and development stages
- Plant illnesses
- Plant disorders
This concludes the introduction of the vision system. In a few weeks we should return to this topic to present a dataset collected in our vertical farm and some initial outputs from the online process.
And that's it for now. If you have any questions or comments please feel free to reply to this post.
Thanks for following and keep connected!