
Forest Defender: Protecting Our Forests with Machine Learning

elegantdesign
1 Dec 2019

  • Introduction
    • Importance of the Earth's Forests
    • Protecting Our Forests
    • Operational Overview
  • Building the Forest Defender
    • Hardware
      • Hardware List
      • Hardware Modifications
    • Software
      • Embedded Software
        • Training a Machine Learning Model for the Azure Sphere
        • Recording ADC Measurements
      • Adding Azure Cloud Services
        • Sending Notifications with Azure Cloud Services
      • Adding a Mobile App
    • Adding a Case
  • Improvements
  • Final Thoughts
  • Acknowledgements

Introduction

Importance of the Earth's Forests

image

Forests across the Earth are incredibly important. Covering almost a third of the Earth's land surface, forests provide habitats for countless animals, supply many types of raw resources, naturally clean the soil and water, prevent erosion, mitigate climate change, and, perhaps most importantly, produce much of the oxygen we breathe. It is often said that forests are the lungs of the Earth. To be even clearer: forests are essential to human survival. Yet forests face countless natural and man-made threats today.

 

Threats to Our Forests

Wildfires are a major threat to forests and can be started both by natural causes, such as lightning or volcanoes, and by humans, for example through arson, discarded cigarette butts, or downed electrical lines. Much of the world was horrified when great swathes of the Amazon Rainforest were burning in late 2019. Other threats include tree diseases, clearing land for agriculture, and excavating forests to mine resources. Illegal logging is another major threat, fueling a multi-billion dollar industry (source) and removing an estimated 14.7 million cubic meters of timber from forests (source). Knowing all this, we must protect our forests.

 

Protecting Our Forests

Our forests need protecting in a way that is cheap, reliable, and effective. Early detection contributes greatly to wildfire management: the longer a wildfire goes undetected, the more it grows, and the more resources are needed to extinguish it (source, pp. 362). Similarly, the longer illegal logging goes undetected, the more damage loggers can do to our forests. It is therefore necessary to provide an early-warning detection system for threats to our forests. While there are methods of monitoring forests for signs of wildfire, many of them are outdated, and existing systems for detecting illegal logging are impractical, requiring high processing power and energy, or constant human surveillance (examples).

The Forest Defender I created provides a robust solution to these issues: it continuously monitors audio and analyzes it with machine learning to detect the sounds of fire or illegal logging.

Forest Defender has the following features:

  • Built on the Microsoft Azure Sphere for security and reliability.
    The Azure Sphere is specifically designed for security, featuring secure hardware, a custom OS designed to combat Internet of Things threats, and automatic security updates.
  • On-device audio classification using machine learning.
    The machine learning model runs directly on the Azure Sphere, which reduces the communication bandwidth needed, and therefore power consumption. Additionally, it reduces detection latency, allowing fires or logging to be detected in under a second once sounds of either event are heard. There is also the potential to update the models on-device, allowing the monitoring device to become better over time.
  • Connected to Azure Cloud Services to allow near real-time notifications to be sent to law enforcement or the fire department.
    Real-time notifications are important so that law enforcement or the fire department can react and prevent logging or extinguish any fires.

 

Operational Overview

The Forest Defender device generally follows the flow in the schematic below. The Mic 2 essentially measures air pressure, which the Azure Sphere samples 16,000 times a second. The Sphere continuously runs an audio classifier on this data to determine whether the sounds of an event are detected. There are three categories: background_noise, fire, and chainsaw. A prediction of background_noise is ignored, while a prediction of fire or chainsaw causes an event to be sent to the connected Azure IoT Hub, which then triggers the configured Logic App (via Azure Event Grid) to send a notification as an email. A mobile app also allows the devices owned by the IoT Hub to be viewed, tested to ensure they are working correctly, and configured.
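This per-chunk detection flow can be sketched in a few lines. This is illustrative Python only (the actual firmware is written in C), and `classify_chunk` and `send_telemetry` are hypothetical stand-ins for the on-device model and the IoT Hub client:

```python
SAMPLE_RATE = 16_000   # ADC samples per second from the Mic 2
CHUNK_SIZE = 512       # samples per classification window
ALERT_LABELS = {"fire", "chainsaw"}  # background_noise is ignored

def handle_chunk(chunk, classify_chunk, send_telemetry):
    """Classify one window of audio and forward alert-worthy events."""
    label = classify_chunk(chunk)          # on-device ML model
    if label in ALERT_LABELS:
        send_telemetry({"event": label})   # IoT Hub -> Event Grid -> Logic App -> email
    return label
```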

image

Building the Forest Defender

Building a Forest Defender can be accomplished in a few steps. First, the hardware is prepared to record audio data. Next, the embedded software for the Azure Sphere is configured and loaded onto the device. This software performs the audio classification, and communicates with Azure Cloud Services. After that, the Azure Cloud Services are commissioned and configured. Finally, a mobile app is developed which allows the list of Forest Defender devices to be viewed and tested.

 

Hardware

Preparing the hardware for the Forest Defender requires purchasing the list of items below, and then modifying the Mic 2 so it works with the Azure Sphere.

 

Hardware List

The hardware needed to build a Forest Defender consists of:

 

Item Name | Purpose | Link
Avnet Azure Sphere MT3620 Starter Kit | Secure microcontroller kit which runs the audio classification and communicates with Azure Cloud Services. | Buy Link
Mikroe Mic 2 | Microphone click which fits into click slot #1 of the Azure Sphere Starter Kit. | Buy Link
SMD Resistor 150 Ohm | Used to modify the Mic 2 so it outputs the correct voltage. | Buy Link
USB-A to Micro-USB | Used to connect the Azure Sphere Starter Kit to your computer. |
Soldering Iron | For modifying the Mic 2. |
Soldering Wire | For modifying the Mic 2. |
3D Printer | Useful for printing the case. |

 

For the most part, the selection of these components fell into place naturally. The Avnet Azure Sphere Starter Kit has two mikroBUS slots where a variety of click boards with a standard pinout can be placed to add sensors or functionality to the Starter Kit. I knew I needed to add audio recording to the setup, so I looked at the Mic Click and the Mic 2 Click. After considering both, I ultimately chose the Mic 2 Click because it offered better audio capture: the Mic 2 has an omnidirectional microphone, whereas the Mic Click has just a top-port microphone, and the Mic 2 can record a wider range of frequencies. The SMD resistor, soldering iron, and soldering wire were not something I planned on needing, but I came across an issue, described in the next section, that required them.

 

Hardware Modifications

image

After committing to the Mic 2 Click, I found out that while the Azure Sphere Starter Kit is ready to use right out of the box, the Mic 2 Click needs some adjusting. While researching, I came across the MT3620 Hardware Notes, which state that when the input pins are configured for use with the ADC, the input voltage cannot exceed 2.5V. However, the Mic 2 Click operates on 3.3V (or 5V) and outputs 0 - 3.3V to the ADC depending on the intensity of the sound it senses. I was pretty disappointed after reading this, because it meant I couldn't just plug the Mic 2 Click into the slot on the Starter Kit and start using it.
In fact, I wasn't sure I was going to be able to use the Mic 2 Click at all, but after studying the Mic 2 schematic, I figured out a solution: since the current consumption of the Mic 2 mainly depends on two relatively constant components, the electret condenser microphone and the op-amp, a simple voltage divider can be used to step the Mic 2 supply down to 2.5V. This also limits the Mic 2 output voltage to 2.5V, so the input limit of the Azure Sphere ADC will not be exceeded. Additionally, I found that both the electret condenser microphone and the op-amp can function on 2.5V, but the programmable potentiometer that is used to adjust the op-amp gain needs at least 2.7V. So after adding the voltage divider, the gain is no longer adjustable. I hoped this would not be a big issue, however, as the default gain should work for recording and classifying general audio.

Now I just needed to figure out what value of resistor to use to bring the Mic 2 voltage to an acceptable level. Looking again at the Mic 2 schematic, the microphone datasheet, and the op-amp datasheet, I calculated the impedance of the Mic 2 to be 439 Ohms, which means a 150 Ohm resistor in series before the Mic 2 circuit should reduce its supply to about 2.5V. Conveniently, the Mic 2 uses a 0 Ohm resistor to select between 3.3V and 5V, so I could simply replace this resistor with a 150 Ohm one (see the red circle in the picture) and be good to go! After adding the new resistor to the Mic 2 circuit, it was time to move on to the Azure Sphere software.
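As a quick sanity check of the arithmetic above (a sketch, not part of the project code): treating the Mic 2 as a roughly constant load, the voltage across it with a series resistor is V * Z / (Z + R).

```python
# Series-resistor check for the Mic 2 supply rail.
V_SUPPLY = 3.3    # volts, original Mic 2 rail
Z_MIC2 = 439.0    # ohms, calculated Mic 2 circuit impedance
R_SERIES = 150.0  # ohms, resistor replacing the 0-ohm jumper

v_mic2 = V_SUPPLY * Z_MIC2 / (Z_MIC2 + R_SERIES)
print(f"Mic 2 rail: {v_mic2:.2f} V")  # about 2.46 V, under the 2.5 V ADC limit
```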

 

Software

The software is a key part of the Forest Defender device and consists of three parts: the code running on the Azure Sphere, written in C; the connected services running in the cloud; and the mobile app used to view the list of devices and test that each one is working. All the code for the project can be found in the Forest Defender Repository. I started by creating the code for the Sphere.

 

Embedded Software

After unboxing the Azure Sphere Starter Kit and following the directions in the documentation to set up the SDK, claim the device, and configure the WiFi connection, I loaded up the HelloWorld Sample just to test that everything was working. Once I knew that the Azure Sphere was working and I could modify code to run on it, I turned to figuring out how to tackle the machine learning portion of the project.

 

Training a Machine Learning Model for the Azure Sphere

I initially considered running the audio classification in the cloud, but quickly discarded that idea for the reasons mentioned in the Introduction. I then looked into TensorFlow Lite for Microcontrollers, a version of TensorFlow (a machine learning library) built specifically for microcontrollers, but that library requires C++, which the Azure Sphere doesn't support. After more research, and almost getting to the point of giving up, I came across the Embedded Learning Library (ELL) developed by Microsoft. ELL is also created specifically for running machine learning models on smaller devices, and one of the developers, Chris Lovett, had put together an example of how to cross-compile an ELL model to run on the Cortex-A7. It was perfect!

I was able to get the example running on the Azure Sphere, and then I started figuring out how to train and compile my own ELL model.

As an overview, training and building an ELL model generally consists of:

  • installing the ELL library dependencies and compiling the ELL library,
  • gathering training data,
  • defining and compiling a featurizer,
  • defining and compiling a classification model,
  • converting the training data into features,
  • training the model,
  • converting the model to the ELL format,
  • testing the model to ensure it has good accuracy, and
  • downloading the model and loading it onto the Azure Sphere.

Each of these steps is explained in more detail in the Python notebook I created for this project, which you can experiment with (no installation required) by opening it here:

Open In Colab.

By far, gathering training data took the most time. I began with the AudioSet dataset, but it turned out to be quite noisy, with a lot of mislabeled data. This impacted the machine learning model quite negatively, limiting it to an accuracy of only 50%. After switching to data downloaded from Freesound.org (and writing a custom script to do so), I was able to create a much better model, with accuracy above 80%.
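The accuracy figures above come from testing on held-out labeled clips. As a minimal sketch of that check (illustrative Python; the real evaluation runs on the compiled ELL model in the project notebook, and `always_bg` is just a toy classifier):

```python
def accuracy(model, samples):
    """Fraction of (features, true_label) pairs the model predicts correctly."""
    correct = sum(1 for feats, label in samples if model(feats) == label)
    return correct / len(samples)

# Toy example: a "classifier" that always predicts background_noise
always_bg = lambda feats: "background_noise"
data = [([0.1], "background_noise"), ([0.9], "fire"), ([0.5], "chainsaw")]
print(accuracy(always_bg, data))  # 1 of 3 correct
```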

Finally, I ran the ELL model on the Azure Sphere and after a few tweaks here and there, it worked quite well. See below for an example prediction created by the Sphere. The next step was to test it on data gathered from the ADC.

image

 

Recording ADC Measurements

Adding data recorded from the ADC (which the microphone is connected to) is not as simple as it sounds, because the ADC needs to be sampled at regular intervals while the rest of the application continues to work. This meant using a separate thread to collect data from the ADC; without one, the application could get stuck waiting on network I/O and miss many samples. Additionally, after collecting one "chunk" of audio data (512 samples for this project), the audio classifier needs to be run on it. Because there is no guarantee the classifier can immediately process each chunk once it is ready, a series of buffers temporarily stores the audio data until the machine learning model can process it. This effectively decouples audio recording from audio classification. Once I had the microphone plus audio classification working, it was time to connect the application to Azure Cloud Services.
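The buffering scheme can be sketched roughly as follows. This is illustrative Python (the firmware implements it in C with a dedicated sampling thread), and the class and method names are hypothetical:

```python
from collections import deque

CHUNK_SIZE = 512  # samples per classification window

class ChunkBuffer:
    """Accumulates ADC samples into fixed-size chunks ready for the classifier."""
    def __init__(self, max_chunks=4):
        self.current = []
        self.ready = deque(maxlen=max_chunks)  # oldest chunk dropped if full

    def add_sample(self, sample):
        # Called from the sampling thread at 16 kHz.
        self.current.append(sample)
        if len(self.current) == CHUNK_SIZE:
            self.ready.append(self.current)
            self.current = []

    def next_chunk(self):
        # Called from the classifier loop whenever it is free.
        return self.ready.popleft() if self.ready else None
```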

 

Adding Azure Cloud Services

If you would like to follow along with the next few steps, clone the Forest Defender repository from here: Forest Defender. You will also need to enable development on the Azure Sphere and build and deploy the code. See the documentation of the Hello World Sample for instructions on how to do this.

Connecting the Azure Sphere to Azure IoT Hub is relatively straightforward. I looked at the Azure IoT Sample to see how to work with the Azure Sphere IoT SDK, and followed the instructions in the documentation to set up an Azure Device Provisioning Service and an Azure IoT Hub. The Device Provisioning Service automatically registers new devices claimed by one's tenant with the IoT Hub, and the Hub enables communication with each device.

There is a bit more setup required to associate the Azure Sphere application with your Azure IoT Hub. I simply followed the Azure IoT Sample instructions for this part and everything went smoothly.

 

Sending Notifications with Azure Cloud Services

In the event that a fire or chainsaw is detected, the system should notify law enforcement immediately. This would normally be implemented with some sort of integration with a government server. To simulate that, I instead decided to send an email every time an event was received. Due to the flexibility of Azure Cloud Services, this can easily be replaced with a call to a government server API later on.

To hook up the Azure Cloud Services to send an email, I mainly followed this excellent tutorial, with one small change: instead of using the "Device Created" event type with Event Grid, I used "Device Telemetry", which triggers whenever a telemetry message is received by the IoT Hub.

 

Adding a Mobile App

While receiving a notification is important, I imagine maintainers of the Forest Defender system would occasionally like to verify that a particular device is online, and check that it is still working by simulating an event. This functionality should also be available in the field, in case government agents need to physically check on a device. So I decided to develop a mobile app for viewing and testing the Forest Defender devices associated with an IoT Hub.

To make the app work on as wide a range of devices as possible, I chose to develop it with Flutter, an open-source UI SDK developed by Google that allows a single codebase to run on Android and iOS devices. The app has three screens: one to view the list of all devices, one to view information about a particular device, and one to update configuration settings. Currently the only configurable setting is the cool-down period between detecting consecutive events (when multiple events are detected during the cool-down period, only the first one is reported). You can see some screenshots of the app below. Unfortunately I only have one Azure Sphere device, so the list of devices isn't very long, but the app should work with a much larger list.

image image image
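The cool-down behavior described above can be sketched as follows (illustrative Python; the device implements this in its C firmware, and the names here are hypothetical):

```python
class EventReporter:
    """Reports only the first detection within each cool-down window."""
    def __init__(self, cooldown_s):
        self.cooldown_s = cooldown_s
        self.last_report = None

    def should_report(self, now_s):
        """True only if no event was reported within the last cooldown_s seconds."""
        if self.last_report is None or now_s - self.last_report >= self.cooldown_s:
            self.last_report = now_s
            return True
        return False
```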

The biggest challenge in putting the app together was determining how to access the IoT Hub endpoints and format requests for information. Finding documentation for the endpoints that return the desired information was difficult, and figuring out how to authenticate a request was not simple (mostly because I'm not that familiar with crypto libraries), although the documentation was very clear. Dart, the language Flutter uses, doesn't have an Azure IoT Hub SDK, so I largely had to create one. I read the documentation for how to build a Shared Access Key signature and eventually was able to query the IoT Hub for a list of devices. Once that was solved, everything else was pretty straightforward.
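For reference, requests to the IoT Hub REST API are authorized with a shared access signature (SAS) token derived from the policy key. A minimal sketch in Python (the app itself does this in Dart), following the token format described in the Azure IoT Hub security documentation:

```python
import base64
import hashlib
import hmac
import time
import urllib.parse

def generate_sas_token(resource_uri, key_b64, policy_name, ttl_s=3600):
    """Build an IoT Hub SAS token from a shared-access-policy primary key."""
    expiry = int(time.time()) + ttl_s
    uri = urllib.parse.quote(resource_uri, safe="")
    to_sign = f"{uri}\n{expiry}".encode()               # string that gets signed
    key = base64.b64decode(key_b64)                     # key is stored base64-encoded
    sig = base64.b64encode(
        hmac.new(key, to_sign, hashlib.sha256).digest()
    ).decode()
    return (f"SharedAccessSignature sr={uri}"
            f"&sig={urllib.parse.quote(sig, safe='')}"
            f"&se={expiry}&skn={policy_name}")
```

The resulting token goes into the `Authorization` header of each request; the endpoint and key values come from the IoT Hub portal pages shown below.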

If you would like to set up the app yourself, navigate to the forest_defender_app/lib folder in the Forest Defender repository and open main.dart. Two variables need filling out: iotHubEndpoint and sharedAccessKey. The iotHubEndpoint can be found by navigating to your IoT Hub overview page and copying the value under "Hostname".

image

To get the sharedAccessKey, click on "Shared access policies" in the left hand menu, then "iothubowner" in the page that loads, and finally copy the "Primary key".

image

Once the variables are filled out in main.dart, if you haven't already, install Flutter. Then follow these directions to build the app and load it onto an iOS device, and these steps, step 1, step 2, step 3, if you have an Android device.

 

Adding a Case

I designed a water-resistant case for the Forest Defender with a small lip at the top for water to run off. The Azure Sphere simply slides (like a drawer) into the bottom of the case, so it's very easy to insert or remove. The bottom seals to the top with an o-ring to ensure no water finds its way in through any gaps. Unfortunately, the device cannot be completely waterproof, because a hole is needed for the microphone to sense audio. I also didn't have time to 3D print the case, so instead take a look at this render.

image

 

Improvements

I chose some approaches in this project for convenience rather than for how they would be done if the Forest Defender were placed into production. One big difference is that the mobile app would normally not communicate directly with the IoT Hub, but with a backend server. That way, authentication can be handled by the backend in an automated and more secure way, and the server can cache some information to reduce calls to the IoT Hub.

Another issue, which I considered but ultimately decided was outside the scope of the project, is networking. In many of the areas where the Forest Defender would be deployed, there is likely little to no cell reception, and of course no WiFi access. This means an alternative means of communicating with each device would need to be implemented. A workable approach would be to create a mesh network using low-power wide-area network (LPWAN) technologies, distributing the Forest Defender devices throughout a forest with a high-power base transmitter at the forest edge that can communicate with a nearby GSM network.

Additionally, there is the issue of power. The devices would need to generate sufficient power to last years in the field. I have not examined the power consumption of the Azure Sphere plus Mic 2, or optimized the code to reduce power usage, but a simple improvement would be to limit audio recording and classification to once every few minutes. Furthermore, small solar panels could be built into the case to let the device gather energy throughout the day.

 

Final Thoughts

At the end of the whole project, I am pretty happy with how it turned out, although I didn't accomplish everything I had hoped to. I wanted to work a solar cell and rechargeable battery into the case to make the project truly independent, and I wanted to 3D print and test the case, but I didn't have time. Working with the Azure Sphere had a bit of a learning curve, and getting the audio classifier to work also took some effort, but in the end I think it was worth it!

 

Acknowledgements

Thank you to Chris Lovett for showing how to use the Embedded Learning Library on the Azure Sphere.


Top Comments

  • elegantdesign
    elegantdesign over 5 years ago in reply to javagoza +3
    Thank you very much! I appreciate the kind words. I learned a lot over the course of the project and had great fun doing it, so I think it was a success.
  • javagoza
    javagoza over 5 years ago +2
    Lot of interesting references in your project. It shows that you have thoroughly investigated the subject. You have plenty of reasons to be happy with your project. Congratulations!
  • three-phase
    three-phase over 5 years ago +1
    Great project and blog, you have accomplished quite a lot here. Do you plan on doing any further developments? Kind regards
  • firdausbinali
    firdausbinali over 4 years ago

    Greetings.

     

    Dear Mr Jeremy. My name is Firdaus from Malaysia. I am interested in learning more about your projects here (Forest Defender, and Safe Sound on Hackster.io). I plan to do an almost identical project, except I would like to capture the audio signal of a moving timing belt. There will be two main sounds: the AC motor and the timing belt. I will use ML to eliminate the motor sound, and I need to collect the timing belt sound (its frequency) on my PC. I would appreciate your opinion on this.

     

    Thank you.

  • elegantdesign
    elegantdesign over 5 years ago in reply to kevinkeryk

    Thank you for the compliments!

    Figuring out the Embedded Learning Library was an interesting challenge. One major issue I forgot to mention in the post was that I initially developed an audio classification model that was too large (~170 kB). The code compiled successfully, but when I went to load it onto the Sphere, an error popped up telling me I needed to enable debugging mode. Having already done that, it took me a bit to realize what the problem probably was.

    Due to some bad planning on my part, I left the write-up to the last minute, so there are some details I missed out putting in, including a video I wanted to do. If there is interest I will put together a video, but it will have to wait a couple weeks because I had to pack everything away since I am moving.

  • bwilless
    bwilless over 5 years ago

    Nice project! 

  • kevinkeryk
    kevinkeryk over 5 years ago

    This project writeup looks really well done and I am amazed by the use of the Embedded Learning Library from Microsoft.  I am curious if you put together a video that shows your creation in action?

  • elegantdesign
    elegantdesign over 5 years ago in reply to three-phase

    Thank you.

    At the moment, it's just a proof of concept. I don't plan to enhance this specific project beyond perhaps what I mentioned in the Improvements section, unless there were interest from a government or environmental organization. It would be hard to deploy these at any scale on my own, so any further improvements would simply be to learn how things could be done differently.
