In the previous blog, I explored the SmartEdge Agile device, focusing on its hardware features. The device is the solution's actual interface with the world: its many sensors scan the surrounding environment, producing a flow of data. But where does that data go?
In this blog, I will take a look at the software side of the solution, which deals with the management of the device and of the data it produces: the Brainium software stack.
The Brainium Software Stack
The software side of the solution, provided by Brainium, offers a complete device-to-cloud IoT platform, which also includes AI capabilities. The backbone of the platform is the Octonion IoT framework.
Brainium is developed as an application built on top of this framework, adding AI to the IoT management functionality already available through Octonion’s platform. The main components of this architecture are:
- device
- gateway
- cloud
Looking at the picture illustrating the architecture, Brainium appears to be quite a flexible solution, capable of integrating both with the major cloud players in the industry and with custom private cloud infrastructure.
If your hands are already itching at the thought of the possibilities, just take a deep breath: for the roadtest, or if you only buy the Agile device, you can only use the 6-month trial of the Microsoft Azure-backed cloud infrastructure. If you want to explore any other integration path, or investigate any further customisation option, you will have to get in touch with an AVNET representative. After all, let’s not forget the focus of this product is to engage with manufacturers, not makers/hackers.
Let’s take a closer look at the individual components.
The Device
As already seen, the SmartEdge Agile provides the meta-sensing functionality. Out of the box, the device can collect information about:
- temperature (internal)
- acceleration (absolute value)
- humidity
- angular velocity (absolute value)
- pressure
- IR light
- visible light
- magnetic field (absolute value)
- proximity
- sound level
- world acceleration (absolute value)
The management of the device is performed by the Brainium firmware, which is based on the Octonion Embedded Framework. Besides managing the sensors and collecting their data, the framework is responsible for establishing a secure communication link with the platform gateway (leveraging the Bluetooth 5 BLE secure connection features of the Nordic SoC) and for providing the OTA firmware update service. Moreover, the firmware offers some “smart monitoring” features for processing at the edge, like condition-based alarm triggering, and includes the Brainium AI model executor for AI-based alarms.
This is another trend that has regained momentum in the past few years: as the power of these devices increases, more and more processing tasks are delegated to the edge devices. And there is a name for it: edge computing (sometimes also called fog computing). Despite the new names, there is nothing really new here: the older amongst us will clearly recognise the resemblance to the recurring cycle of centralised vs distributed computing. But this is a digression; let’s move on.
A device can be managed only via the Brainium portal. Using the portal, each of the device’s sensors can be independently enabled/disabled, and the sampling frequency can be set as well (called the “device tracking transport rate”, with allowed values Low, Medium, High and Extreme).
The Gateway
Probably due to power-efficiency constraints, the device can only use BLE to connect to the outside world, with no TCP/IP stack, which means that, on its own, it would not be able to connect to the Internet and communicate with the server in the cloud. It is the task of the gateway to facilitate such a connection, by providing secure device-to-gateway and gateway-to-cloud links. As such, this is a fairly simple component, with very few configuration settings to tweak.
{gallery} Brainium Gateway |
---|
Gateway App: Main screen |
Gateway App: Settings screen |
Brainium supports Android, iOS and Linux (Raspbian Stretch for Raspberry Pi) as operating systems for the gateway application. I am using an Android smartphone as gateway. The only hardware requirements for the gateway are the ability to support BLE (in order to connect to the devices) and, obviously, the availability of either a mobile data or Wi-Fi connection to the Internet. Bluetooth 5 BLE should guarantee a stable connection between devices and gateway up to 25 meters apart, although indoors I was able to get a decent connection only at about 10 meters (with the gateway and the device located on different floors).
Each gateway can connect to at most 2 devices. I suppose this limitation applies only during the "trial" period, with the gateway actually able to manage many devices at once. Overall, the gateway application is quite robust, and the connection with the device is stable. Occasionally it loses the connection to the device and is not able to reconnect until the app is restarted (this behaviour was quite frequent during the beta testing, but since then, after several device firmware and gateway app updates, it has only happened to me a couple of times).
The Cloud
This is the component that does all the heavy lifting. All the core services are hosted in the cloud, and amongst them it is worth mentioning the following:
- Identity Management
- IoT Device Management and Monitoring
- Storage Management
- AI Model Builder
- UI interface - Portal
- Platform API interface
I haven't mentioned anything about security and certificate management, because this service actually cuts across all the other services. The security features include OAuth 2.0 and multi-factor authentication, X.509 certificates and SSL/TLS encryption.
For the trial users, all the cloud infrastructure is hosted on Microsoft Azure, but I have been assured the Brainium platform cloud components can be deployed on, and/or integrated with, other cloud platforms, like Amazon AWS.
The only components directly reachable by the user are the Portal and the API interface, so I will focus on them in the rest of the article. The other components are visible only if they expose a UI through the Portal; if they don't, you become aware of their presence only when they fail.
The Portal
No matter how many components are at work behind the scenes, the user will always perceive the platform through its user interface. When I logged into the Brainium portal for the first time, I found the interface to be quite clean and easy to use. I also found the philosophy of organising logically linked functionalities into workspaces to be a big help when taking my first steps. The workspaces are: Projects, Equipment and AI Studio. Each of them manages and presents a different view of the platform, trying to avoid overlap: the Equipment workspace is all about physical platform management (i.e. adding/removing gateways and devices), the Projects workspace is the operational view of the system (i.e. managing what data to collect, creating monitoring rules, visualising the collected data), and AI Studio is the tool that deals with the creation of AI models and their training data sets.
{gallery} Brainium Portal |
---|
Portal: The different workspaces, logically grouped |
Equipment workspace: Gateways |
Equipment workspace: Devices |
Project View: Devices |
Project View: Widgets |
If, on the one hand, the clear separation of functionalities is a big help, on the other hand it can make "operational workflows" (e.g. setting up an alarm for a device, monitoring a sensor, etc.) a little less intuitive. It is not a big deal, as you get used to it once you have gone through the process a couple of times. But let me explain what I mean with an example.
Everything related to the operational view of the platform belongs to the Projects workspace; more precisely, you need to create a Project. The Project, to do anything meaningful with it, needs to have at least one device assigned to it, so that the user can manage, monitor and/or create alarms for that device. But devices, in order to be "assignable" to a project, need to be added to the system first, and this can only be done from a different workspace: Equipment. So, if you haven't added any devices to your system yet and you create a Project, in the middle of your Project setup you need to abandon what you were doing, switch workspace to Equipment -> Devices, add a new device, switch back to your Project, and then retry assigning a device, which is not ideal. It would have been more user-friendly to include a "shortcut" in the Project: an "Add new device" entry in the drop-down list of assignable devices, which would automatically take you to the "Add device" workflow defined in the Equipment workspace to perform the action, and then take you back to the Project workspace where you left off.
Let's move on and see what you can do with a Device once it is assigned to a Project (by the way, a Project can have multiple Devices assigned to it, but a Device can only be assigned to one Project). Fundamentally, there are two groups of actions that can be performed on a Device: data collection and event monitoring.
Data collection and tracking can be set up using "Widgets". A Widget provides a visual representation of the collected data (either as a number or a plot) on the Portal, and moreover allows the recording of the collected data, which is stored in the cloud and can be downloaded at a later date as CSV files (the picture on the side shows a few widgets: note the temperature is measured inside the device case, and is noticeably higher than the room temperature, due to the charging of the battery).
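As a quick illustration, a recording downloaded as CSV can be post-processed with a few lines of Python. The column names below are my own assumptions for the sketch, not the actual Brainium export format:

```python
import csv
import io

# Hypothetical CSV export -- the actual column names in a Brainium
# recording may differ.
sample_csv = """timestamp,sensor,value
2019-06-01T12:00:00Z,temperature,30.8
2019-06-01T12:00:10Z,temperature,31.0
2019-06-01T12:00:20Z,temperature,31.2
"""

def average_value(csv_text: str, sensor: str) -> float:
    """Average all readings for the given sensor."""
    rows = csv.DictReader(io.StringIO(csv_text))
    values = [float(r["value"]) for r in rows if r["sensor"] == sensor]
    return sum(values) / len(values)

print(average_value(sample_csv, "temperature"))  # 31.0
```

The same approach scales to a real export: point `csv.DictReader` at the downloaded file instead of the inline string.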
Event monitoring is performed by creating event rules on the target device. There are two kinds of rules available: Smart Rules and AI Rules.
A Smart Rule basically specifies which sensor to monitor, the type of condition, and the threshold value that triggers an alarm (an example of a Smart Rule is shown on the left-hand side of the picture).
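Conceptually, a Smart Rule reduces to a simple threshold check evaluated on the device. The sketch below is my own illustration of the idea, not Brainium code; the names and the rule structure are assumptions:

```python
from dataclasses import dataclass

@dataclass
class SmartRule:
    """Illustrative stand-in for a Smart Rule: a sensor, a comparison
    type and a threshold value."""
    sensor: str
    condition: str  # "above" or "below"
    threshold: float

def triggers_alarm(rule: SmartRule, sensor: str, value: float) -> bool:
    """Return True when a reading from the matching sensor crosses
    the rule's threshold."""
    if sensor != rule.sensor:
        return False
    if rule.condition == "above":
        return value > rule.threshold
    return value < rule.threshold

rule = SmartRule(sensor="temperature", condition="above", threshold=35.0)
print(triggers_alarm(rule, "temperature", 36.2))  # True
print(triggers_alarm(rule, "temperature", 30.0))  # False
```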
AI Rules are slightly different, as they refer to an AI Model, which must have been previously created using AI Studio. There are two types of AI model that can be used to monitor devices and to create rules: Motion Recognition and Predictive Maintenance.
I will not go into the details of AI Studio and how the models are created and used; for the moment, it will suffice to say that both models use data from the Inertial Measurement Unit (accelerometer/gyroscope) sensor to identify either movement or vibration patterns.
When the collected data matches the rule threshold, an alarm is reported back to the server and signalled on the portal.
All the processing related to the triggering of alarms is delegated to the device (in line with the principles of edge computing).
The API
Typically, a system does not live in isolation, but needs to interact and integrate with the other systems around it. The Brainium platform is no exception, and defines some API interfaces to allow other systems to tap into its information and data.
There are two sets of APIs available, one based on the REST architectural style and the other based on the MQTT protocol over WebSocket (the full documentation for both APIs can be found on the Brainium website). The REST API is useful for getting information about devices, gateways, projects, widgets, alerts, recordings and motions. The data accessed via this API is not real-time, but historical. If you want access to real-time data, you can use the MQTT API, which delivers the data through its subscription model.
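To give an idea of what consuming the real-time stream looks like on the client side, here is a minimal handler for one telemetry message. The JSON payload shape is an assumption for illustration only; the real Brainium message schema may differ:

```python
import json

def handle_telemetry(raw: str) -> tuple[str, float]:
    """Extract the sensor name and reading from a telemetry payload.
    The field names here are hypothetical, not the documented schema."""
    msg = json.loads(raw)
    return msg["sensor"], float(msg["value"])

# A message as it might arrive on a subscribed topic (hypothetical shape).
sample = json.dumps({
    "deviceId": "agile-01",
    "sensor": "temperature",
    "value": 31.2,
    "timestamp": "2019-06-01T12:00:00Z",
})

print(handle_telemetry(sample))  # ('temperature', 31.2)
```

In a real integration this function would be called from the message callback of an MQTT-over-WebSocket client subscribed to the relevant topic.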
Both APIs are secured using an access token, provided through the portal, to grant authorization. One important thing to note is that the APIs allow only "read-only" access to the Brainium resources: you can list the devices, but you cannot add or remove them, and the same applies to all the other objects.
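For example, a REST call is authorized by passing the access token along with the request. The endpoint path and the exact header format below are assumptions for the sketch (check the official API documentation for the real ones); the request is only built here, not sent:

```python
import urllib.request

API_TOKEN = "my-access-token"  # obtained from the Brainium portal
# Hypothetical endpoint -- the real base URL and path may differ.
URL = "https://example.invalid/api/v1/devices"

# Build an authorized GET request (not actually sent in this sketch).
req = urllib.request.Request(
    URL,
    headers={"Authorization": f"Bearer {API_TOKEN}"},
    method="GET",
)

print(req.get_header("Authorization"))  # Bearer my-access-token
```

Sending it would be a matter of calling `urllib.request.urlopen(req)` and decoding the JSON response; with read-only access, GET is the only verb you need.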
While this could be a choice made to preserve the security of the platform, it is very limiting in terms of integration with 3rd-party software, as it makes Brainium available only as a data source and does not allow its management.
I believe the published APIs are not the only ones available for the platform; I'm sure there must be more (and more "powerful") ones to help with the customisation of the solution, but I suspect they are available only to those who decide to engage with AVNET to build a solution to take to market.
This concludes the overview of the Brainium software stack. So far, I have purposefully only touched on the AI features of the solution, without going into any detail. This is because the "AI side" will be the subject of my next blog.
From the same series:
AI to the Edge - Part 1: Introducing the SmartEdge Agile device