In the previous blog I explored the Brainium platform, but I didn't cover any of the AI features available. Considering how much relevance AI has lately been given for this product, I thought it deserved a blog of its own.
At the time of writing, out of the box, the Brainium platform offers two AI-assisted features, available at the Edge (on the SmartEdge Agile device): Motion Recognition and Predictive Maintenance. This is achieved using the SmartEdge's AI Model executor software, which is capable of processing the real-time data coming from the inertial measurement sensor (accelerometer + gyroscope), applying the AI models to it, detecting patterns of motion and/or vibration right on the device, and sending the results (alarms) back to the portal.
The philosophy of the solution is to use a "zero coding" approach for implementing AI monitoring, and the Brainium AI Studio tool is the key to making this possible. Before describing what the AI Studio is and what you can do with it, let's briefly introduce the two AI features available, which will give us a clue as to when and how they can be used.
Motion Recognition
As shown in the first blog, the SmartEdge Agile includes a combined inertial (accelerometer + gyroscope) sensor. The data from this sensor allows accurate three-dimensional motion analysis, which is the basis of motion recognition. Thanks to improvements in power consumption and accuracy, inertial motion recognition represents a valid alternative to visual motion recognition.
Once a set of known motion patterns has been recognised and classified, it can later be used as a model for detecting the same (or very similar) patterns in a live stream of data. One application for motion recognition is as a motion-based user interface. Another interesting application is as a Human Activity Recognition tool (i.e. walking, sitting, standing, lying, etc.), where the aim is to predict human movement from the sensor's data stream.
Predictive Maintenance
Predictive maintenance has been around for several decades: it monitors mechanical condition, equipment efficiency and other parameters, and attempts to derive the approximate time of a functional failure. It employs various techniques, such as vibration analysis, acoustic emissions, oil and wear debris analysis, ultrasonics, thermography, performance evaluation and others, to assess the equipment condition.
The idea behind it is pretty simple: by observing streams of data collected from sensors, it is possible to build a model of "typical operation" for any kind of machinery, assuming the observations happen over a reasonable amount of time and are repeated at regular intervals. Changes in the data can then be correlated with anomalies within the machines: analysing such changes can reveal valuable information about the current health of the machinery, determining whether maintenance work may be required before the machinery actually fails.
For the Brainium solution, Predictive Maintenance focuses on vibration analysis. Data is collected from the inertial sensors and checked for anomalies, on the basis that whenever one or more parts are unbalanced, misaligned, loose, eccentric, dimensionally out of tolerance, damaged or reacting to some external force, higher vibration levels will occur.
Machine Learning primer
Before digging deeper into how Brainium AI is implemented, it is beneficial to learn some basic nomenclature about AI and Machine Learning algorithms. This is an extremely simplified introduction to the topic, here only to give some "context" to the information given later.
Machine learning is the sub-field of AI devoted to the design of algorithms that enable learning from data. These algorithms build a mathematical model of sample data in order to make predictions or decisions, without being explicitly programmed to perform the task. We cannot provide all the preconditions in the program; the algorithm is designed in such a way that it learns by itself.
There are three different types of machine learning strategies:
- supervised learning: the model is built using input training data that are "labelled" (the known output or outcome of the processing). The mapping between input and output inferred during the training is later used to predict the output from "live stream" data;
- unsupervised learning: the model is built by identifying commonalities in training data that are not labelled (the input doesn't have an associated known output). Such commonalities are used to identify patterns in the data that allow them to be organised into groups or sets;
- reinforcement learning: the model is built using a "trial-and-error" learning strategy for finding a solution (or taking an action), improved by employing a reward/punishment feedback system that helps the model learn from past experience and converge to such a solution.
Typically, supervised learning is applied to regression and classification problems. The most used algorithms are: Linear Regression, Logistic Regression, Support Vector Machines (SVM), Neural Networks, Decision Trees, Naive Bayes, Nearest Neighbor.
Unsupervised learning is used to find the underlying structure of a dataset and to summarise it into groups (example applications are clustering, compression, anomaly detection, autoencoding, etc.). The most used algorithms are: K-means clustering, Hierarchical clustering, Competitive Learning (K-SOM), Association rules.
Reinforcement learning is all about learning from the environment and becoming more accurate over time. Some typical applications for reinforcement learning are self-driving cars, computer-played board games and robotic hands. Amongst the most used algorithms are: Markov Decision Process, Dynamic Programming, Temporal-Difference, Deep Q Networks.
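To make the supervised/unsupervised distinction more concrete, here is a minimal, generic sketch (my own illustration using scikit-learn, not related to Brainium's internals): the same toy dataset is used to train a supervised classifier with known labels, and then grouped by an unsupervised clustering algorithm with no labels at all.

```python
# Minimal illustration of supervised vs unsupervised learning
# (generic scikit-learn example, unrelated to Brainium's internals).
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.cluster import KMeans

# Toy "sensor" features: two clouds of 2D points.
rng = np.random.default_rng(42)
still = rng.normal(loc=0.0, scale=0.1, size=(50, 2))   # low vibration
active = rng.normal(loc=1.0, scale=0.1, size=(50, 2))  # high vibration
X = np.vstack([still, active])

# Supervised: labels are known, the model learns the input -> label mapping.
y = np.array([0] * 50 + [1] * 50)
clf = KNeighborsClassifier(n_neighbors=3).fit(X, y)
print(clf.predict([[0.9, 1.1]]))  # -> [1], i.e. "active"

# Unsupervised: no labels, the model groups similar samples by itself.
km = KMeans(n_clusters=2, n_init=10).fit(X)
print(km.labels_[:5], km.labels_[-5:])  # cluster ids discovered from the data
```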
How Brainium implements AI features
Neither Motion Recognition nor Predictive Maintenance is a new technique, but with the advent of AI and Machine Learning they are being enriched with autonomous learning and recognition capabilities.
The two features use different machine learning algorithms to process data and obtain their results. The learning approach is also different: Motion Recognition is a typical supervised machine learning problem, and uses a classifier algorithm to compute its predicted result, while Predictive Maintenance is an unsupervised machine learning problem, and uses clustering algorithms to compute its results.
In the Motion Recognition training pipeline, data from the inertial sensors goes through multiple steps: quality check, labelling, augmentation and the training of real-time algorithms. The choice of algorithm used for the classification of real-time data depends on factors like the dataset size and/or the data domain. Among the traditional machine learning algorithms, the ones most frequently employed by Brainium AI when the dataset is small are Dynamic Time Warping, K-means, Hidden Markov Models and Logistic Regression. For large datasets, the choice falls on Deep Learning algorithms, as they show better performance. In the current pipeline, Brainium uses several Convolutional Neural Network architectures; to train such networks, a combination of pre-trained weights followed by fine-tuning is used.
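To give an idea of how a "small dataset" classifier like Dynamic Time Warping works, here is a minimal sketch (again my own illustration, not Brainium's implementation): DTW computes a distance between two time series that tolerates shifts and stretches in time, and a 1-nearest-neighbour rule labels a new recording with the label of the closest training recording.

```python
# Minimal Dynamic Time Warping + 1-NN sketch (illustrative only,
# not Brainium's actual implementation).
import numpy as np

def dtw_distance(a: np.ndarray, b: np.ndarray) -> float:
    """DTW distance between two 1-D series, classic dynamic programming."""
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j],      # insertion
                                 cost[i, j - 1],      # deletion
                                 cost[i - 1, j - 1])  # match
    return cost[n, m]

def classify(sample, train_set):
    """Label a sample with the label of its DTW-nearest training series."""
    return min(train_set, key=lambda t: dtw_distance(sample, t[0]))[1]

# Toy motion templates: a "wave" and a "shake" accelerometer trace.
t = np.linspace(0, 2 * np.pi, 50)
train = [(np.sin(t), "wave"), (np.sign(np.sin(5 * t)), "shake")]
print(classify(np.sin(t + 0.3), train))  # -> "wave", despite the time shift
```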
For Predictive Maintenance, the AI algorithm provides a tool for the continuous classification of vibration patterns, which are dynamically detected and labelled, becoming part of the set of recognised patterns (the patterns the machine has learnt).
More specifically, Brainium uses an unsupervised learning algorithm, and the model is built using time series analysis and vibration analysis techniques, together with a custom data stream clustering approach. The acceleration stream is processed using sliding feature frames. For each feature frame, metrics like standard deviation and root mean square (RMS) signal value are computed and used as input for the stream clustering algorithm. Vibration patterns are obtained as the output of this algorithm.
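Based on that description, a minimal sketch of the feature extraction step might look like the following (the frame length and hop size are my assumptions, not Brainium's actual values):

```python
# Sketch of the feature-frame extraction described above: for each sliding
# window of the acceleration stream, compute standard deviation and RMS,
# to be fed to a stream clustering algorithm.
# Frame length and hop size are my assumptions, not Brainium's values.
import numpy as np

def feature_frames(accel: np.ndarray, frame_len: int = 128, hop: int = 64):
    """Yield (std, rms) feature pairs over sliding windows of the stream."""
    for start in range(0, len(accel) - frame_len + 1, hop):
        frame = accel[start:start + frame_len]
        std = frame.std()
        rms = np.sqrt(np.mean(frame ** 2))
        yield std, rms

# Example: synthetic acceleration magnitude stream.
stream = np.random.default_rng(0).normal(0.0, 0.05, 1024)
features = np.array(list(feature_frames(stream)))
print(features.shape)  # (15, 2): one (std, rms) pair per frame
```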
AI Studio
After this bird's-eye view of the AI features and their implementation in Brainium, it is interesting to discover whether the "zero code" approach has managed to effectively hide this complexity while still providing a powerful solution.
Regardless of the type of AI model you are going to create, the starting point is always the generation of the training dataset. Normally this would be a long and tedious task, where the data needs to be collected, organised and adjusted in order to get the best results from the learning process, but AI Studio removes most of the burden, providing users with a simplified workflow that guides them through the stages of the learning pipeline.
After creating the workspace for the AI feature of choice, there are different ways to create the training dataset.
For Motion Recognition, being an instance of supervised learning, the user first needs to create a motion (the label, i.e. the known output of the training), then provide the sample data by performing the movement with the SmartEdge Agile device while it is being recorded. For each motion, the user needs to repeat the recording many times, to help improve the reliability of the model (Brainium uses a quality indicator for the model, called maturity, which improves with the quality and the number of the recordings).
This process can be repeated to add more motions. Once all the motions needed have been recorded, a recognition model can be generated. The newly generated model is then available for the creation of AI Rules and AI Widgets.
Predictive Maintenance, on the other hand, is an instance of unsupervised learning, so the learning process won't involve any labelling: the user only needs to define which SmartEdge Agile device will be used for the learning, and this will start the collection and analysis of data from the device. Any vibration pattern or spike present in the data stream is identified and automatically labelled (the user can change the label to make it more meaningful). Once all the necessary patterns have been detected and recorded, the user can stop the initial learning and generate the model. Just as with Motion Recognition, the model can be used for AI Rules and AI Widgets. Once deployed to a SmartEdge Agile device, continuous learning will start.
Testing Brainium Predictive Maintenance AI
After exploring several aspects of this platform, it is time to give it a go and test one of the features. I have chosen to test the Predictive Maintenance feature; more precisely, I wanted to see how well the vibration recognition worked.
To do so, I have tried to reproduce a use case typical for this kind of maintenance tool: monitoring a motor's vibration during operation. I set up a very basic test bench, using a 12V DC motor driven by a PWM signal generated by a Raspberry Pi 2 GPIO pin.
The motor has been secured using a PCB clamp and some bi-adhesive tape, and the SmartEdge Agile device has been attached to the PCB clamp using cable ties. The photos below show how the test environment has been set up. I have fixed a big washer on top of the motor, aiming to "amplify" the vibrations generated by the motor.
The motor is driven using a 50% duty cycle PWM signal (from the GPIO19 pin of the Raspberry Pi 2), and the circuit used is shown below.
Once the circuit is wired and working, a Python script is used to control the motor's operation (a sketch of such a script is shown after the list below). For the machine learning training, the assumption is that the normal operation of the motor would encompass the following static and dynamic states:
- motor off (static)
- motor transition off to on (dynamic)
- motor on (static)
- motor transition on to off (dynamic)
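A minimal sketch of such a control script, assuming the standard RPi.GPIO library and a suitable driver circuit on GPIO19 (the PWM carrier frequency is my choice; the original script may differ):

```python
# Sketch of a motor-control script for the test runs (assumes RPi.GPIO;
# the original script may differ). Drives GPIO19 with a 50% duty-cycle
# PWM signal, runs the motor for 1 minute, then stops it.
import time
import RPi.GPIO as GPIO

MOTOR_PIN = 19      # BCM numbering, as used in the test circuit
PWM_FREQ_HZ = 1000  # carrier frequency: my assumption
DUTY_CYCLE = 50     # 50% duty cycle, as in the test
RUN_TIME_S = 60     # each training run lasts 1 minute

GPIO.setmode(GPIO.BCM)
GPIO.setup(MOTOR_PIN, GPIO.OUT)
pwm = GPIO.PWM(MOTOR_PIN, PWM_FREQ_HZ)

try:
    pwm.start(DUTY_CYCLE)  # motor off -> on (dynamic), then on (static)
    time.sleep(RUN_TIME_S)
    pwm.stop()             # motor on -> off (dynamic), then off (static)
    time.sleep(5)          # let vibrations settle before the next run
finally:
    GPIO.cleanup()
```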
Initial learning
For the initial learning, the motor is started repeatedly and run each time for 1 minute. The vibration patterns are discovered as the learning progresses. The result of the initial learning is pretty good: all the states listed above are detected, and no other "spurious" pattern has appeared. Each repeated run of the motor keeps detecting the same patterns, marking them as stable vibration patterns.
This is a good baseline to generate the model with.
{gallery} Initial learning and model creation |
---|
Initial Learning |
Model Creation |
AI Widget and Rules Creation
With the model created, we now need to add Widgets and Rules if we want to monitor our motor. Before any AI-related Widget or Rule can be created, we need to specify which AI Workspace and AI Model we will be using. The creation of the AI Widget, besides allowing the monitoring of the vibration prediction events, marks the start of the continuous learning on the device targeted by the Widget: from that moment on, any new pattern detected will show in the list of patterns (in the AI Workspace), and can later be used to "grow" the model.
{gallery} AI Widget and Rules creation |
---|
Selection of AI Workspace and Model |
AI Widget Creation 1 of 2 |
AI Widget Creation 2 of 2 |
AI Widget visualisation |
AI Rules creation |
The AI Widget (as shown in the screenshot) lists not only the known vibration patterns, but also any newly recognised ones, and will also show any spike anomaly. Spikes are easily generated, especially in my test setting: it is enough to lightly tap on the desk for a spike to be detected (this also explains the numerous spikes shown in the screenshot, due to my not-so-gentle handling of the PCB clamp!).
The last operation to perform, before actually starting to test the detection of anomalies, is the creation of rules. As said before, a rule specifies the "trigger/threshold" the SmartEdge Agile device needs to react to, causing it to send alarms back to the Brainium platform if the condition is met. For the test, I have added 2 rules for the alerts: new detected pattern and extended dynamic pattern. The new detected pattern event covers any quasi-stationary change (for static patterns) and quickly varying transitions or anomalies (for dynamic patterns), while the extended dynamic pattern event covers all the anomalies that cause a prolonged transitional pattern.
Anomaly Testing
Now that everything has been set up, it is time to test the anomaly detection. To cause such an anomaly, I have chosen a very scientific approach: using a pencil with a small rubber on the tip to "mess about" with the washer attached to the motor. In a real-life scenario, this kind of anomaly would be caused by some foreign object coming into contact with the motor, like some dirt (admittedly quite a big object, considering the size of the pencil compared to the motor).
Armed with my pencil, I have applied some force, which slowed down the rotation speed, kept it that way for a few seconds, and finally removed the pencil. The results are visible in the screenshot below.
As can be seen, the interaction generated 3 new patterns, as expected: 2 dynamic patterns (steady speed to lower speed, and lower speed back to steady speed) and 1 static pattern (the lower speed maintained while the pencil was touching the washer), so the detection algorithm's behaviour seems quite consistent, and it worked quite well. The patterns have been recorded and the alarms received, and several spikes (not shown in the screenshot above) have been detected as well, consistent with me touching and tapping the PCB clamp.
The video below shows one of the motor runs used to check whether the 4 standard patterns, used for the training and to generate the model, were detected correctly by the SmartEdge Agile.
After the test, I wanted to check if I could get some actual data regarding the vibration pattern, using the World Acceleration Widget to record the data stream (with the device tracking rate set to extreme). The aim was to see if I could get good enough data to perform some processing and gain more insight into the vibration pattern and, more importantly, into the anomalies.
I have created the widget and recorded the data of a sample, undisturbed motor run. The graph of the data readings is shown below.
From the graph, I can identify the 4 vibration patterns (from the left: motor off, motor off to on, motor on and finally motor on to off). The impression I got, for the motor on pattern, is that the data has been collected at too low a sampling frequency (undersampled). Typically, the vibration spectrum can extend from a few Hz up to 10 kHz. For our test (motor's max speed 6000 RPM, duty cycle 50%), the speed of the motor should be about 3000 RPM, which gives a fundamental vibration frequency of 50 Hz; limiting the spectrum to the 5th harmonic gives a bandwidth of 250 Hz.
The Nyquist-Shannon sampling theorem tells us that, to correctly sample a signal, we need to make sure the sampling frequency is at least double the maximum frequency component of the sampled signal, which in our case means a sampling frequency of at least 500 Hz.
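For reference, the back-of-the-envelope arithmetic (with the rough assumption that the motor speed scales linearly with the duty cycle):

```python
# Back-of-the-envelope check of the required sampling rate.
max_rpm = 6000          # motor's rated max speed
duty = 0.5              # 50% PWM duty cycle (rough speed proxy)
fundamental_hz = max_rpm * duty / 60  # ~3000 RPM -> 50 Hz
bandwidth_hz = fundamental_hz * 5     # up to the 5th harmonic: 250 Hz
nyquist_rate_hz = 2 * bandwidth_hz    # minimum sampling rate: 500 Hz
print(fundamental_hz, bandwidth_hz, nyquist_rate_hz)  # 50.0 250.0 500.0
```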
But the only data we have access to is what comes from the widget, and unfortunately it looks like the samples are collected at a rate of roughly 10 Hz, well below what we would need to extract any meaningful information from spectrum analysis!
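For what it's worth, this is how the effective rate can be estimated from any stream of timestamped readings (the timestamps below are made up for illustration):

```python
# Estimate the effective sampling rate from a list of sample timestamps
# (in seconds). How the timestamps are obtained depends on the tool;
# the values below are invented for illustration only.
import numpy as np

timestamps = np.array([0.00, 0.11, 0.20, 0.31, 0.40, 0.52, 0.60])
rate_hz = 1.0 / np.median(np.diff(timestamps))
print(f"estimated sampling rate: {rate_hz:.1f} Hz")  # ~10 Hz
```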
Obviously, this "cut-down" data rate is probably due to some bandwidth optimisation done by the Brainium team, to reduce the amount of data that needs to be transmitted to the cloud from the SmartEdge Agile, and most of the time this is probably the most efficient choice. Still, it feels quite limiting not to be able to retrieve the data we know is available on the device, or at least to be able to set the sampling rate for the data sent back to the widget.
I know Brainium is quite customisable, so the development team would probably be able to change this if required. But that would involve engaging with AVNET, while I think there would be value in making this option available to all users regardless.
This concludes my exploration of the SmartEdge Agile device and Brainium platform. The blogs of this series are part of the Roadtest I am currently involved with, where I will write up my impressions of this product. Thank you for reading, and I hope you found the blogs enjoyable and the information useful.
Fabio
From the same series:
AI to the Edge - Part 1: Introducing the SmartEdge Agile device