AI to the Edge - Part 2 : Introducing the Brainium Platform

gecoz
29 Mar 2019

  • The Brainium Software Stack
    • The Device
    • The Gateway
    • The Cloud
      • The Portal
      • The API

 

 

In the previous blog, I explored the features of the SmartEdge Agile device, focusing on its hardware. The device is the solution's actual interface with the world: its many sensors scan the surrounding environment, producing a flow of data. But where does that data go?

 

In this blog, I will take a look at the software part of the solution, which deals with the management of the device and the data it produces: the Brainium software stack.

 

 

The Brainium Software Stack

[Image: the Brainium architecture]

 

The software side of the solution, provided by Brainium, offers a complete device-to-cloud IoT platform, which also includes AI capability. The backbone of the platform is the Octonion IoT framework.

 

Brainium is developed as an application built on top of this framework, adding AI to the IoT management functionality already available through Octonion's platform. The main components of this architecture are:

 

  • device
  • gateway
  • cloud

 

Looking at the picture illustrating the architecture, Brainium appears to be quite a flexible solution, capable of integrating both with the major cloud providers in the industry and with custom private cloud infrastructure.

 

If your hands are already itching at the thought of the possibilities, take a deep breath: for the roadtest, or if you only buy the Agile device, you can use only the 6-month trial of the Microsoft Azure-backed cloud infrastructure. If you want to explore any other integration path, or investigate further customisation options, you will have to get in touch with an Avnet representative. After all, let's not forget that the focus of this product is to engage manufacturers, not makers/hackers.

 

Let's take a closer look at each of the components.

 

The Device

As already seen, the SmartEdge Agile provides the meta-sensing functionality. Out of the box, the device can collect information about:

 

  • temperature (internal)
  • acceleration (absolute value)
  • humidity
  • angular velocity (absolute value)
  • pressure
  • IR light
  • visible light
  • magnetic field (absolute value)
  • proximity
  • sound level
  • world acceleration (absolute value)

 

The management of the device is performed by the Brainium firmware, which is based on the Octonion Embedded Framework. Besides managing the sensors and collecting their data, the framework is responsible for establishing a secure communication link with the platform gateway (leveraging the Nordic SoC's Bluetooth 5 BLE secure connection features) and for providing the OTA firmware update service. Moreover, the firmware offers some "smart monitoring" features for processing at the edge, like condition-based alarm triggering, and includes the Brainium AI model executor for AI-based alarms.

 

This is another trend that has regained momentum in the past few years: as the power of these devices increases, more and more processing tasks are delegated to the edge devices. And there is a name for it: edge computing (sometimes also called fog computing). Despite the new name, there is nothing really new here: the older amongst us will clearly see the resemblance to the recurring cycle of centralised vs distributed computing. But this is a digression; let's move on.

 

A device can be managed only via the Brainium portal. Using the portal, each of the device's sensors can be independently enabled/disabled, and the sampling frequency can be set as well (called the "device tracking transport rate", with allowed values Low, Medium, High and Extreme).
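To make that configuration model concrete, here is a small Python sketch of the per-sensor settings the portal exposes. The class and field names are my own invention, not Brainium's; only the four transport rate values come from the portal.

```python
from dataclasses import dataclass

# The four sampling rates offered by the portal's "device tracking transport rate".
ALLOWED_RATES = ("Low", "Medium", "High", "Extreme")

@dataclass
class SensorConfig:
    """Hypothetical model of the per-sensor settings shown in the Brainium portal."""
    name: str
    enabled: bool = False
    transport_rate: str = "Low"

    def __post_init__(self):
        # Reject any rate the portal would not offer.
        if self.transport_rate not in ALLOWED_RATES:
            raise ValueError(f"transport rate must be one of {ALLOWED_RATES}")

# Enable two sensors at different sampling rates, as you would via the portal UI.
config = [
    SensorConfig("temperature", enabled=True, transport_rate="Low"),
    SensorConfig("acceleration", enabled=True, transport_rate="High"),
]
print([c.name for c in config if c.enabled])  # -> ['temperature', 'acceleration']
```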

 

The Gateway

Probably due to power efficiency constraints, the device can only use BLE to connect to the outside world, with no TCP/IP stack. This means that, on its own, it cannot connect to the Internet and communicate with the server in the cloud. It is the task of the gateway to facilitate that connection, by providing secure device-to-gateway and gateway-to-cloud links. As such, this is a fairly simple component, with very few configuration settings to tweak.
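Conceptually, the gateway is just a relay between the BLE link and the cloud. The sketch below simulates that role in plain Python, with in-memory queues standing in for the BLE and internet connections; the message format and function names are hypothetical, not taken from the real gateway app.

```python
import json
import queue

# Simulated links: the real gateway app bridges BLE notifications to an
# HTTPS/MQTT uplink; here both sides are plain in-memory queues.
ble_link = queue.Queue()    # device -> gateway (stand-in for BLE notifications)
cloud_link = queue.Queue()  # gateway -> cloud (stand-in for the internet uplink)

def relay(device_id: str) -> int:
    """Drain pending BLE readings and forward them to the cloud, tagged with the device id."""
    forwarded = 0
    while not ble_link.empty():
        reading = ble_link.get()
        cloud_link.put(json.dumps({"device": device_id, **reading}))
        forwarded += 1
    return forwarded

# Simulate two readings notified by the SmartEdge Agile over BLE.
ble_link.put({"sensor": "temperature", "value": 28.4})
ble_link.put({"sensor": "pressure", "value": 1013.2})
print(relay("agile-01"))  # -> 2
```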

 

 

[Gallery "Brainium Gateway": Gateway App main screen; Gateway App settings screen]

 

 

 

Brainium supports Android, iOS and Linux (Raspbian Stretch for Raspberry Pi) as operating systems for the gateway application. I am using an Android smartphone as the gateway. The only hardware requirements for the gateway are BLE support (in order to connect to the devices) and, obviously, either a mobile data or Wi-Fi connection to the internet. Bluetooth 5 BLE should guarantee a stable connection between device and gateway up to 25 metres apart, although indoors I was only able to get a decent connection at about 10 metres (with the gateway and the device located on different floors).

Each gateway's connectivity is limited to 2 devices. I suppose this limitation applies only to the trial period, with the gateway actually able to manage many devices at once. Overall, the gateway application is quite robust, and the connection with the device is stable. Occasionally it loses the connection to the device and cannot reconnect until the app is restarted (this behaviour was quite frequent during beta testing, but since then, after several device firmware and gateway app updates, it has only happened to me a couple of times).

 

The Cloud

This is the component that does all the heavy lifting. All the core services are hosted in the cloud, and amongst them it is worth mentioning the following:

 

  • Identity Management
  • IoT Device Management and Monitoring
  • Storage Management
  • AI Model Builder
  • UI interface - Portal
  • Platform API interface

 

I haven't mentioned anything about security and certificate management, because this service actually cuts across all the other services. The security features include OAuth 2.0 and multi-factor authentication, X.509 certificates and SSL/TLS encryption.

 

For trial users, all the cloud infrastructure is hosted on Microsoft Azure, but I have been assured the Brainium platform's cloud components can be deployed on, and/or integrated with, other cloud platforms, such as Amazon AWS.

 

The only components directly reachable by the user are the Portal and the API, so I will focus on them in the rest of the article. The other components are visible only if they expose a UI through the Portal; if they don't, you become aware of their presence only when they fail.

 

The Portal

No matter how many components are at work behind the scenes, the user will always perceive the platform through its user interface. When I logged into the Brainium portal for the first time, I found the interface quite clean and easy to use. I also found the organisation of logically linked functionality into workspaces a big help when taking my first steps. The workspaces are Projects, Equipment and AI Studio. Each manages and presents a different view of the platform, trying to avoid overlap: the Equipment workspace is all about physical platform management (i.e. adding/removing gateways and devices), the Projects workspace is the operational view of the system (i.e. managing what data to collect, creating monitoring rules, visualising the collected data), and AI Studio is the tool that deals with creating AI models and their training data sets.

 

[Gallery "Brainium Portal": the different workspaces, logically grouped; Equipment workspace: Gateways; Equipment workspace: Devices; Project View: Devices; Project View: Widgets]

If, on one hand, the clear separation of functionality is a big help, on the other hand it can make "operational workflows" (e.g. setting up an alarm for a device, monitoring a sensor, etc.) a little less intuitive. It is not a big deal, as you get used to it once you go through the process a couple of times. But let me explain what I mean with an example.

 

Everything related to the operational view of the platform belongs to the Projects workspace; more precisely, you need to create a Project. To do anything meaningful, the Project needs at least one device assigned to it, so that you can manage, monitor and/or create alarms for that device. But for devices to be assignable to a Project, they first need to be added to the system, and this can only be done from a different workspace: Equipment. So if you create a Project before adding any devices, in the middle of your Project setup you have to abandon what you were doing, switch to Equipment->Devices, add a new device, switch back to your Project and then retry assigning the device, which is not ideal. It would have been more user-friendly to include a shortcut in the Project: an "Add new device" entry in the drop-down list of assignable devices, which would take you to the "Add device" workflow defined in the Equipment workspace to perform the action, and then return you to the Project workspace where you left off.

 

Let's move on and see what you can do with a Device once it is assigned to a Project (by the way, a Project can have multiple Devices assigned to it, but a Device can only be assigned to one Project). Fundamentally, there are two groups of actions that can be performed on a Device: data collection and event monitoring.

 

 

[Image: a few data widgets on the Portal]

Data collection and tracking can be set up using "Widgets". A Widget provides a visual representation of the collected data (either as a number or a plot) on the Portal, and also allows the collected data to be recorded and stored in the cloud, to be downloaded later as CSV files (the picture on the side shows a few widgets: note the temperature is measured inside the device case, and is quite a bit higher than the room temperature due to the battery charging).
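As an illustration of what you can do with a downloaded recording, here is a short Python sketch that post-processes a CSV export. The column names are invented for the example; the real export layout may differ.

```python
import csv
import io

# Hypothetical sample of a widget recording exported as CSV.
# Column names are assumptions, not the documented Brainium export format.
sample = """timestamp,sensor,value
2019-03-28T10:00:00Z,temperature,31.2
2019-03-28T10:01:00Z,temperature,31.5
2019-03-28T10:02:00Z,temperature,31.1
"""

rows = list(csv.DictReader(io.StringIO(sample)))
values = [float(r["value"]) for r in rows if r["sensor"] == "temperature"]

# Average internal temperature over the recording window.
print(round(sum(values) / len(values), 2))  # -> 31.27
```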

 

 

 

 

Event monitoring is performed by creating event rules on the target device. There are 2 kinds of rules available: Smart Rules and AI Rules.

 

[Images: Smart Rule and AI Rule setup]

 

A Smart Rule basically specifies which sensor is used, and the type and threshold value of the event that triggers an alarm (an example of a Smart Rule is illustrated on the left side of the picture).
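The logic of such a rule is easy to picture in code. The following Python sketch models a threshold-based Smart Rule; the class and condition names are my own, not the platform's.

```python
from dataclasses import dataclass

@dataclass
class SmartRule:
    """Hypothetical model of a Smart Rule: a sensor, a comparison type, a threshold."""
    sensor: str
    condition: str   # "above" or "below"
    threshold: float

    def triggers(self, sensor: str, value: float) -> bool:
        """Return True when a reading from the monitored sensor crosses the threshold."""
        if sensor != self.sensor:
            return False
        if self.condition == "above":
            return value > self.threshold
        return value < self.threshold

# Alarm when the sound level exceeds 80 (units as reported by the device).
rule = SmartRule("sound level", "above", 80.0)
print(rule.triggers("sound level", 85.3))  # -> True
print(rule.triggers("sound level", 60.0))  # -> False
```

On the real device this check runs in the firmware at the edge, so only the alarm, not the raw data stream, needs to reach the cloud.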

 

AI Rules are slightly different, as they refer to an AI Model, which must have been previously created using AI Studio. There are 2 types of AI model that can be used to monitor devices and create rules: Motion Recognition and Predictive Maintenance.

 

I will not go into the details of AI Studio and how the models are created and used; for the moment, it suffices to say that both models use data from the Inertial Measurement Unit (accelerometer/gyroscope) to identify either movement or vibration patterns.

 

When the collected data match the rule's threshold, an alarm is reported back to the server and signalled on the portal.

 

All the processing related to the triggering of alarms is delegated to the device (in line with the principles of edge computing).

 

The API

Typically, a system does not live in isolation, but needs to interact and integrate with the other systems around it. The Brainium platform is no exception, and defines some API interfaces to allow other systems to tap into its information and data.

[Images: Brainium API documentation]

There are 2 sets of API available: one based on the REST architectural style, and the other based on the MQTT protocol over WebSocket (the full documentation for both can be found on the Brainium website). The REST API is useful for getting information about devices, gateways, projects, widgets, alerts, recordings and motions. The data accessed via this API is not real-time, but historical. If you want access to real-time data, you can use the MQTT API, which provides it through its subscription model.
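As a sketch of how a client could call such a token-secured, read-only REST API, here is a small Python helper that builds an authorized GET request. The base URL, the endpoint path and the use of the Bearer scheme are placeholders and assumptions of mine; consult the official Brainium API documentation for the real endpoints and auth header format.

```python
# Placeholder host: deliberately not the real Brainium API address.
BASE_URL = "https://example.invalid/api"

def authorized_request(path: str, token: str) -> tuple[str, dict]:
    """Build the URL and headers for a token-authorized, read-only GET request."""
    headers = {
        # Assumption: a Bearer-style token header; check the real API docs.
        "Authorization": f"Bearer {token}",
        "Accept": "application/json",
    }
    return f"{BASE_URL}/{path.lstrip('/')}", headers

# The token would be obtained from the Brainium portal.
url, headers = authorized_request("/devices", token="MY-PORTAL-TOKEN")
print(url)  # -> https://example.invalid/api/devices
```

The same token would then be passed to an HTTP client (e.g. `requests.get(url, headers=headers)`) to fetch the historical data.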

Both APIs are secured using an access token to grant authorization, which is provided by the portal. One important thing to note is that the API allows only "read-only" access to Brainium resources: you can list the devices, but you cannot add or remove them, and the same applies to all the other objects.

 

While this could be a choice made to preserve the security of the platform, it is very limiting in terms of integration with 3rd-party software, as it makes Brainium available only as a data source and does not allow for its management.

 

I believe the published APIs are not the only ones available for the platform: I'm sure there must be more of them (and more "powerful" ones) to help with the customisation of the solution, but I suspect they are available only to those who decide to engage with Avnet to build a solution to take to market.

 

This concludes the overview of the Brainium software stack. So far, I have purposefully only touched on the AI features of the solution, without going into any detail. This is because the "AI side" will be the subject of my next blog.

 

 

 

From the same series:

AI to the Edge - Part 1: Introducing the SmartEdge Agile device

AI to the Edge - Part 3 : AI Studio


Top Comments

  • dubbie over 6 years ago +2
    Fabio, Very comprehensive. It does still seem overwhelmingly complex. At the moment I cannot see how I might fit something like this into my mobile robots but I'm sure it must be possible and will probably…
  • gecoz over 6 years ago in reply to DAB

    Thank you DAB.

    Fabio
  • DAB over 6 years ago

    Very good update.

    Well explained.

    DAB
  • gecoz over 6 years ago in reply to kiri-ll

    World acceleration is the name used by Brainium for 3-axis acceleration in world ENU (East, North, Up) coordinates, with gravity excluded. This is different from the "basic" acceleration, which uses the device's coordinates and includes gravity.
  • kiri-ll over 6 years ago

    World acceleration? What is it?
  • gecoz over 6 years ago in reply to dubbie

    Hi Dubbie,

    Thank you. Undoubtedly, these kinds of systems are inherently complex, but I hope my description didn't scare you off! I have tried to give more of a system view, rather than simply show the typical end-user scenario.

    One of the goals of the Brainium platform is exactly what you are after: to hide all the complexity, giving users the power to do what they need without being bogged down by the details. In your case, for example, your mobile robot would incorporate the SmartEdge Agile device somehow (it is a small device, so I'm sure there will be a corner where it can fit), then all you would need is a smartphone nearby, with the Gateway software installed and a mobile data connection, and you are ready to collect all your data and manage it from the Brainium portal.

    The problem, I think, is: do you have a use case for this device? Do you need all this power, all those sensors and the added AI features, or could you do all you need with just a couple of sensors and a microcontroller?

    Fabio