AVNET Azure Sphere MT3620 Starter Kit - Review

Table of contents

RoadTest: AVNET Azure Sphere MT3620 Starter Kit

Author: roger.livesey@bcs.org

Creation date:

Evaluation Type: Development Boards & Tools

Did you receive all parts the manufacturer stated would be included in the package?: True

What other parts do you consider comparable to this product?: The FiPy from Pycom (I attended a 1-day seminar at their London office), and the Particle IoT platform.

What were the biggest problems encountered?: Minor issues with IoT Connectivity not working in Visual Studio (simple manual workaround supplied within hours) and an application template that was not found at the documented link (new link to working template again supplied within hours).

Detailed Review:

Summary

This road test opens with this overall summary, followed by a brief background of the writer and a description of the intended project.  Experience with the technical training and software development follows, then observations on the deployment of IoT systems in general, and a conclusion.

 

The MT3620 is a fine piece of silicon with all the attributes you would expect of an IoT device.  The development kit is well manufactured and shows an attention to detail one would expect from a major supplier.  I started this road test with the idea of building a CCTV security system - not hugely IoT territory, but enough to both exercise the kit and gain experience with it.  It soon became apparent, however, that the implementation of an IoT system of any useful size is neither a hardware nor software problem, but one of organisation.  Yes, the individual unit has to do its job reliably, safely and without interference from malicious actors, but there are two more complex issues that spring to my mind.  The first is the organisation of the units themselves and the second is entitlements.  I try to develop these ideas in the final sections.

 

Background

The writer's background is 10 years of software development in a traditional DP environment, then 10 years in the building controls industry as an employee of Johnson Controls, a further 15 years in the same industry as a self-employed consultant, and the last 13 years as a CRM consultant/developer before being forced to retire in 2013 following a plane crash.  Since then, I have designed and built a number of hardware prototypes, driven by individual iOS apps, on a hobby basis.  These include a CNC machine, a voice-controlled robotic arm, a scuba diver surface location transponder (which identifies the diver on an iPad map for pickup by the dive boat), an RFID tag printer and wand, a digital oscilloscope, and ongoing fun with a Petoi Nybble robotic kitten.  The writer has a BA in Computer Science, is a Chartered Engineer, a Chartered Information Technology Practitioner and a full Member of the British Computer Society.

 

The Project

I already had an analog CCTV camera lying around that I had thought of connecting to a Raspberry Pi to create a security system.  The project that immediately came to mind for the Avnet Azure Sphere kit was to mount the camera over the front door, send camera frames up to the web, do face recognition in Azure and open the door lock automatically.  I live in a house converted into 4 flats, and it already has an intercom and door-opening system that I could break into.  I could also develop an iOS app to give a remote visual and audio feed.  Yes, this has all been done before, but there would be enough content to develop an interesting road test.

 

I investigated analog CCTV to digital conversion and the only item I could find that was cheap and cheerful (Video Experimenter from nootropic design) only supported capturing frames in black and white.  I could convert my existing analog CCTV camera to an Internet Protocol camera, but this would be more expensive than buying a new IP CCTV camera designed for the job.

 

It quickly became clear that this project would not be exercising the kit in doing what it has been designed for, and that this was really a software project with a relay output to control a door lock.  I worked through the video training and lab exercises and came to the conclusion that implementing IoT with Avnet Azure Sphere was neither a hardware nor software problem, but an organisational one.  The MT3620 chip has all the capabilities one might need, and if you really run out of GPIOs then I2C expanders cost nothing.  So I decided to defer my project and write this review based on my experience with the software.

 

Technical Training

This was executed by following the excellent series of videos published on hackster.io.  The package arrived and I was surprised that I was not charged any VAT or import duty, given that the shipping documents included an invoice for US$75.  Earlier this year I was charged 20% VAT plus a £12 handling fee on an electronic device shipped from the States.  The documentation, in both video and PDF format, is more than adequate (classic British understatement for very good).  I'm running Visual Studio 2019 v16.2.3 under Windows 10 under VMware Fusion 11.1.1 under macOS Catalina 10.15 beta on a MacBook Pro, and the device connected without incident.

 

When checking for an Azure Sphere account I found two Windows programs: MS Azure Command Prompt - V2.9, which appears in the documentation to be the one to use (but doesn't have the azsphere executable present), and MS Azure Developer Command Prompt Preview, which works.  I managed to create an account and sign in, although you need to ignore the helpful, if somewhat ominous, warning "No Azure Sphere tenants found under the selected AAD user" - fine if you know what you are doing, but these are still early days.  My notes indicate that Labs 0 and 1 completed without incident, and that I was able to shut down the VM overnight and then pick everything up again simply with azsphere login and reconnecting the device.  Labs 2 and 3 then completed without issue.

 

At this point we start creating device names, user names, capabilities and a whole bunch of certificates that may or may not be required in the future.  I have covered my observations on this below.  I had an issue where the References section did not appear in the Solution Explorer window of Visual Studio, nor in a Search initiated from the same window.  I randomly went into Team Explorer and clicked on AvnetStarterKitReferenceDesign.sln, at which point the app_manifest.json file appeared in the edit window.  Going back to Solution Explorer I could now find References, but had to right-click and choose Add Reference before getting the error below.  I posted the issue in the forum and had a response within hours that the Add Reference function in Visual Studio was not working, detailing a very simple workaround that involved finding some magic long numbers and putting them into app_manifest.json.

 

Up to this point I had been using IE11, but Azure IoT Central complained and demanded the use of MS Edge.  The link in the documentation to the template file was giving the message "Template ID could not be found" and again, within hours of my reporting this road block in the forum, a new link was supplied.  Cutting and pasting commands that span two lines in the PDF (in my environment anyway) to the command line results in the command being cut short and the second line being ignored - not a serious issue, but something for the reader to be aware of.

 

In Lab 5, using the azsphere utility, we create a second certificate to upload to Azure IoT Central.  I'm not sure whether these certificates are required after they have been uploaded; however, the alert user will note that the first time we did this the filename ValidationCertificate.cer was used, and the second time ValidationCertification.cer.  This is no doubt to avoid the novice making a mess of things, but it raises the question of whether these certificates have any value once they have been uploaded.  If not, and they have outlived their usefulness, then reusing the name ValidationCertificate.cer every time you need one would at least tell the user what the file is.

 

I eventually completed Lab 5 after fixing a typo: I had labelled the toggle switch added in Azure IoT Central appLED, but used appLed in the code.  I requested and received my course completion certificate, although I was somewhat bemused by my 85.71% score when the answers all looked correct (no grade inflation on this site).

 

[Image: course completion certificate]

 

Software Development

I have not used the Visual Studio 2019 IDE before, yet found it easy to use and relatively intuitive.  Most of my IoT device development has been with the Arduino IDE and the online Mbed Compiler, which provide little in the way of a debug environment.  Visual Studio compares more with Apple's Xcode and appears to have similar overall functionality, as one might expect.  The sample code supplied by Avnet was well written and fully commented, and I experienced only the issues mentioned above, which were rectified within hours of being raised.  The greatest area of difficulty during my own development of systems has been communication between the devices.  You need to implement a protocol that has an indication of the start of a new packet, a byte count, the payload itself and some form of CRC check.  Then you need to acknowledge correctly received packets so that they can be removed from the transmitter's stack, or ask for them to be retransmitted.  This is required whether you are using Bluetooth, LoRa or any form of network communication, and with Avnet Azure Sphere this is all done for you.  The logic to control the operation of the device itself should, in my experience, be trivial to implement, enabling rapid development and deployment of IoT systems.
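To make the framing concrete, here is a minimal sketch of the kind of protocol described above: a start marker, a byte count, the payload and a CRC.  This is generic illustration code, not Avnet or Azure Sphere code; the start byte, the CRC-8 polynomial and the frame layout are all my own assumptions.

```c
#include <stdint.h>
#include <stddef.h>

#define FRAME_START 0x7E  /* assumed start-of-packet marker */

/* CRC-8 with polynomial 0x07 -- a common simple choice, assumed here */
static uint8_t crc8(const uint8_t *data, size_t len)
{
    uint8_t crc = 0;
    for (size_t i = 0; i < len; i++) {
        crc ^= data[i];
        for (int b = 0; b < 8; b++)
            crc = (crc & 0x80) ? (uint8_t)((crc << 1) ^ 0x07)
                               : (uint8_t)(crc << 1);
    }
    return crc;
}

/* Build a frame: [start][count][payload...][crc].
 * Returns total frame length, or 0 if the buffer is too small. */
static size_t frame_packet(const uint8_t *payload, uint8_t count,
                           uint8_t *out, size_t out_cap)
{
    if (out_cap < (size_t)count + 3) return 0;
    out[0] = FRAME_START;
    out[1] = count;
    for (uint8_t i = 0; i < count; i++) out[2 + i] = payload[i];
    out[2 + count] = crc8(out, (size_t)count + 2);  /* CRC over header+payload */
    return (size_t)count + 3;
}

/* Receiver side: check start marker, byte count and CRC before accepting,
 * then acknowledge (or request retransmission) as the text describes. */
static int frame_valid(const uint8_t *frame, size_t len)
{
    if (len < 3 || frame[0] != FRAME_START) return 0;
    if ((size_t)frame[1] + 3 != len) return 0;
    return crc8(frame, len - 1) == frame[len - 1];
}
```

A corrupted byte anywhere in the frame changes the CRC, so the receiver can reject the frame and ask for a retransmit; this is the bookkeeping that the Azure Sphere communication stack takes off your hands.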

 

Deployment

As mentioned in the Summary above, in my considered opinion the deployment of just a few devices, let alone hundreds and thousands, requires a high degree of organisation.  The devices need a naming convention that indicates where they can be found and what they are doing, their configuration and their software version.  The naive observer might imagine that all devices are the same and interchangeable - after all, isn't that what IoT is all about?
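As a sketch of what such a naming convention might look like, the helper below builds a name encoding location, function and serial number.  The fields and the format are entirely hypothetical, purely to illustrate the idea:

```c
#include <stdio.h>

/* Hypothetical convention: <site>-<building>-<function>-<serial>,
 * e.g. "LON-B2-HVAC-00417".  Field choices are illustrative only. */
static int make_device_name(char *out, size_t cap,
                            const char *site, const char *building,
                            const char *function, unsigned serial)
{
    int n = snprintf(out, cap, "%s-%s-%s-%05u",
                     site, building, function, serial);
    return (n > 0 && (size_t)n < cap) ? n : -1;  /* -1 if truncated */
}
```

A fixed, parseable format like this lets tooling cross-check the deployed fleet programmatically, rather than relying on ad-hoc names in a spreadsheet.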

In the early 90s the writer was called in to a project where the documentation had got out of hand.  This was for the building management system in a medical research complex of six or seven multi-storey buildings spread over a several-acre site.  The control system comprised around 10,000 individual systems.  Some were massive air handling units, chillers and boilers, but the bulk were for the control of individual cells, with each and every system requiring its own individual Description of Operation.  These documents had to include the equipment tags of all the physical hardware (sensors, dampers, actuators, etc.) as assigned by the consulting engineers, plus the software addresses of each control point.  In theory, the majority of the small systems fell into one of two types: those with HEPA filters on the inlet side to create a sterile environment inside the cell, and those with the filter on the outlet side to prevent anything naughty getting out.  The controls engineers had produced their documents by cloning a master copy and then using search-and-replace.  However, it turned out that nearly all of the 9,900+ systems had their own peculiarity, so the cloned documentation contained errors and was rejected by the consulting engineers as unfit for purpose.  The solution was to start from scratch and agree a standard format and wording with the consulting engineers.  The documents were still generated manually (this was the early 90s), but then analysed programmatically and cross-checked against a number of databases, with an exception sheet generated for manual review.

I mention this because the writer sees no reason why the deployment of an IoT system would not share the same fate.  Devices are deployed in the field; then, on viewing the results, the users have new requirements and the software must be changed.  Sensors become unavailable and alternatives with slightly different characteristics must be used, made in another country (cue for Brexit joke).  In theory, this is all managed by having one master copy of the software, with individuality configured through the device twin.  However, personal experience dictates that this Utopia will be difficult to achieve, and that multiple versions and configurations of both the software and the physical devices will need to be managed.  And all this on a scale that probably exceeds the capability of the humble spreadsheet (and who owns the most up-to-date copy of that anyway?).
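The master-copy-plus-device-twin approach mentioned above might look something like this in an Azure IoT device twin, with per-unit individuality carried in the desired properties.  The properties/desired/reported structure is the standard twin layout; the individual property names below are my own illustrative inventions:

```json
{
  "properties": {
    "desired": {
      "telemetryIntervalSecs": 30,
      "sensorVariant": "BME280-rev2",
      "firmwareChannel": "stable"
    },
    "reported": {
      "firmwareVersion": "1.4.2",
      "sensorVariant": "BME280-rev2"
    }
  }
}
```

In principle the back end sets desired values per device and the device reports what it is actually running, so drift between the two is at least visible; the organisational problem is keeping thousands of these twins honest.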

Moving on, I would ask the reader to consider entitlements - who can do what to whom, when and under which circumstances.  Users of building management systems fall into all sorts of categories: basic environmental engineers and security staff, and those with the specialist disciplines and certification required to operate large items of equipment such as generators and high-voltage switchgear.  Entitlements are also the subject of heated discussion for users of CRM systems (but I digress).  The important thing is that all the actors in an IoT system must be identified (programmers, administrators, end-users in different disciplines) and their capabilities defined.  Actors may be given different capabilities in different parts of the system, where a part can be defined by the geographical location of the device or by engineering discipline.  For example, administrators may have limited access to change settings for devices in their base region, yet only be able to monitor devices operating elsewhere as a second pair of eyes.  The devices themselves (especially given the capability of the Avnet device) may be performing separate functions that are accessed by, say, chemists and physicists, who are naturally protective of who can interfere with their own data and experiments.  This is not intended as a treatise on entitlements, but simply an attempt to highlight that this is an important aspect of deployment.  It is much easier to build an entitlements model into the base system than to retrofit one later.  The writer's experience is that entitlements are very project-specific, and that the central software needs to provide the tools to build a model, rather than supplying a model that has limited customisation.
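The region-scoped capability idea above can be sketched as a tiny data model.  This is purely illustrative, with all names hypothetical; a real system would build a far richer model from primitives like these:

```c
#include <stdbool.h>
#include <string.h>

/* An actor holds grants, each pairing a capability with a scope
 * (a region or discipline); "*" means everywhere. */
typedef enum { CAP_MONITOR, CAP_ADJUST, CAP_ADMIN } capability_t;

typedef struct {
    capability_t cap;
    const char *scope;
} grant_t;

typedef struct {
    const char *name;
    const grant_t *grants;
    size_t n_grants;
} actor_t;

/* True if the actor has the capability in the given scope. */
static bool is_entitled(const actor_t *a, capability_t cap, const char *scope)
{
    for (size_t i = 0; i < a->n_grants; i++) {
        if (a->grants[i].cap != cap) continue;
        if (strcmp(a->grants[i].scope, "*") == 0 ||
            strcmp(a->grants[i].scope, scope) == 0)
            return true;
    }
    return false;
}
```

With grants like {CAP_ADJUST, "uk-south"} and {CAP_MONITOR, "*"}, an administrator can change settings only in their base region while monitoring everywhere else, matching the second-pair-of-eyes example above.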

 

Conclusion

The MT3620 obviously has a high capability, but my personal concern would be that it might be over-specified, and I question the need for two programmable CPUs.  My experience with Arduino and ESP32 is that you don't run out of steam, but often run out of memory.  Both of the ARM Cortex-M4F subsystems are well configured and appear to have dedicated GPIO/UART blocks, so the MT3620 is quite a heavyweight in terms of what it is capable of doing, especially when all the heavy lifting of communication is being handled by a third ARM Cortex-A7 subsystem.  In my opinion, this is not a device designed to control a single humble water meter.  This capability will come at a price differential that is no doubt trivial per unit, although when deploying tens of thousands of units it will mount up.

 

https://docs.microsoft.com/en-us/azure-sphere/hardware/mt3620-product-status

 

I also note that there appears to be a focus on the unit's ability to communicate over WiFi.  During my 1-day seminar on the Pycom FiPy development platform, Vodafone were invited to extol the virtues of LTE-M and NB-IoT for communication, and their ability to connect with water meters located several metres below the pavements of Barcelona.  It is clear that the mobile network has far greater coverage than WiFi, and that lower frequencies have deeper ground and building penetration.  The only reference I could find on the internet regarding Avnet and LTE-M was one covering the marriage of Avnet's SmartEdge Agile IoT devices with Octonion's Brainium software platform, which leaves me wondering whether the Azure Sphere device also has mobile network capability built in, or whether I have missed something fundamental.

Given that, by its nature, an IoT system will be geographically dispersed, I would suggest that Azure IoT Central provide a map view for user navigation.  By assigning each unit a latitude and longitude, dropping an annotation onto the map would be trivial to implement (my own dive app transmits the newly surfaced diver's GPS location and emergency status back to the dive boat for display on an iPad).  Allocating fixed elements of the device twin would enable the status of the device to be displayed on the map in colour: green normal, red alarm, yellow out-of-limits and blue offline, for example.  Clicking on the device could then bring up the Device Page in Azure IoT Central directly, without the need for further navigation.  In my experience, satellite images provide less clutter and higher contrast, enabling devices to be seen more easily against the background.  The dispatching of a service engineer to a known GPS coordinate would also be facilitated.
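The suggested status-to-colour mapping for map pins is trivial to express in code.  A sketch, with the colour scheme taken from the text above and the type names my own:

```c
/* Device status as it might be reported via the device twin. */
typedef enum {
    STATUS_NORMAL,
    STATUS_ALARM,
    STATUS_OUT_OF_LIMITS,
    STATUS_OFFLINE
} device_status_t;

/* A map annotation: identity, position and current status. */
typedef struct {
    const char *device_id;
    double lat;
    double lon;
    device_status_t status;
} map_pin_t;

/* Colour scheme from the text: green normal, red alarm,
 * yellow out-of-limits, blue offline. */
static const char *pin_colour(device_status_t s)
{
    switch (s) {
    case STATUS_ALARM:         return "red";
    case STATUS_OUT_OF_LIMITS: return "yellow";
    case STATUS_OFFLINE:       return "blue";
    default:                   return "green";
    }
}
```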

 

This RoadTest ends with a number of questions still open, and I hope that the teams at both Azure and Avnet will fill in the gaps.  They must also feel free to challenge any of my observations in what has been a limited exposure to their work.

Anonymous
  • When you are connected to the cloud, your device has infinite memory available. The local processing power will create the differentiation. The data rates of the sensors on the Azure Sphere are low compared to the communication bandwidth, so RAM can be used to buffer and process (compress/integrate/fuse). I am not aware of any application that could not be supported because of memory limitations.

  • I like your well-written Road Test. It does raise interesting concerns about memory vs. CPU resources. This problem seems pervasive in the industry at large. Even PCs come overpowered, with scant memory/RAM available to utilise the processor(s). My guess is that the processors are more noticed by the average user than the amount of RAM.

    Thanks