The following webinar is now available for On-Demand Viewing:
Edge Impulse enables developers to create the next generation of intelligent device solutions with embedded Machine Learning. Machine Learning at the very edge will enable valuable use of the 99% of sensor data that is discarded today due to cost, bandwidth, or power constraints. Edge Impulse enables the easy collection of real sensor data, live signal processing from raw data to neural networks, and testing and deployment to any target device. You can sign up for a free developer account and get started with the ST IoT Discovery board or the Arduino Nano 33 BLE Sense. Its open-source SDKs allow you to collect data from or deploy code to any device. TinyML enables exciting applications on extremely low-power MCUs. For example, you can detect human motion from just 10 minutes of training data, detect spoken keywords, and classify audio patterns from the environment in real time.
Jenny Plunkett from Edge Impulse gave a fantastic presentation. She is a self-described Texas Longhorn and software engineer, now working as a User Success Engineer at Edge Impulse. Since graduating from The University of Texas, she's been working in the IoT space, from customer engineering and developer support for Arm Mbed to consulting engineering for the Pelion platform.
She was supported during the Q&A by Daniel Situnayake. Daniel is a founding TinyML engineer at Edge Impulse and the co-author of the definitive book on TinyML: "TinyML: Machine Learning with TensorFlow Lite on Arduino and Ultra-Low-Power Microcontrollers." He previously worked at Google as a Developer Advocate for TensorFlow Lite, enabling developers to deploy machine learning to edge devices, from phones to SoCs. He was also Developer Advocate for Dialogflow, a tool for building conversational AI.
{tabbedtable} Tab Label | Tab Content |
---|---|
Page 1 |
Q&A Session:
I've built a voice-controlled faucet where you can change the water temperature using voice commands, but it only works well in my bathroom. Is it possible with Edge Impulse to automatically create new samples by adding random noise or mixing pre-recorded noise sources with actual samples? It would be very interesting to have a preprocessor that simulates different types of acoustics and generates new samples.
Nice, that sounds like an awesome project! Edge Impulse supports data augmentation, which introduces random changes to your data to make your model more robust. We have a blog post about it here: https://www.edgeimpulse.com/blog/make-the-most-of-limited-datasets-using-audio-data-augmentation We'll be extending the capabilities of this feature over time.
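For readers who want to experiment with augmentation offline before uploading data, a rough sketch of noise mixing might look like the following. This is a generic illustration, not Edge Impulse's internal augmentation code, and the SNR-based mixing approach is just one common technique.

```python
import numpy as np

def augment_audio(sample, noise, snr_db=10.0, rng=None):
    """Mix a background-noise clip into an audio sample at a target SNR.

    `sample` and `noise` are 1-D float arrays of equal length.
    Illustrative only; not Edge Impulse's internal implementation.
    """
    rng = rng or np.random.default_rng()
    # Scale the noise so the mix hits the requested signal-to-noise ratio
    sample_power = np.mean(sample ** 2)
    noise_power = np.mean(noise ** 2) + 1e-12
    scale = np.sqrt(sample_power / (noise_power * 10 ** (snr_db / 10)))
    mixed = sample + scale * noise
    # Jitter the overall amplitude slightly for extra variety
    mixed *= rng.uniform(0.9, 1.1)
    return mixed
```

Running this over each clip with several noise sources and SNR values multiplies the effective size of a small dataset.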
How will Edge Impulse solutions revolutionize medication and treatment of patients?
There are a ton of applications for embedded machine learning in healthcare and medicine. We've already seen developers use Edge Impulse to build things like wearable sensors that can help predict disease and train models to identify possible oral cancer.
Is the machine always on? And does it send only the processed data, or only data when there is a change?
You can choose to run the model whenever you want, so for example it's possible to run it periodically if you want to save energy. It doesn't have to send any data to the cloud in order to run the model.
Is this better suited to edge computing?
Yes, it's designed primarily for edge computing (everything from microcontrollers up to embedded linux)
I am interested in Edge Impulse; however, I am waiting for it to support the Sony Spresense. Any concrete plan for that? The Spresense has 6 cores and ultra-low power consumption. It is a building block for my project www.wildedge.info
We are hoping to support the Spresense in the future! I can't share more than that.
I am building a project using EI to simulate an Industrial IoT use case. Since I am WFH, I am using my washing machine, trying to identify its state: whether it's washing, spinning, drawing water, or stopped. I believe data gathering is the most difficult part. I would like some insights on how to gather data easily and efficiently.
You're right that collecting a dataset is often the hardest part! My top tip would be that you should come up with a plan for exactly what data you need to collect, and under which conditions, before you start.
Hi! On larger edge devices (e.g. Raspberry Pi) it's possible
Good question! It's definitely possible—for example, some microcontroller platforms might support Over-the-Air firmware updates, and you can include new models in these updates. You could also send a new model over the network and write it to flash as part of your own application.
Case: I will have noise/bad information at the beginning and end of the recording. Let's say the dog starts to run after 10 seconds, and I then need another 15 seconds to stop the dog and recover the board from its collar. So the file will have garbage for the first 10 seconds and the last 15 seconds. I hope that explains it well enough. Questions: How can I identify the bad parts of the file, and how can I cut them out to clean the file? Could I use the graphs/tools on your web page in some way to do that?
Yes, you can crop and split any data sample from within Edge Impulse. Here's a guide: https://www.edgeimpulse.com/blog/crop-split-data
What about the size of the controller? How feasible is deploying it in a walking stick?
Definitely! If you're designing your own PCB, you could use any 32-bit microcontroller with enough RAM and flash. If you're looking to use a pre-built dev board for prototyping, the Arduino Nano could be a good fit since it is nice and small.
Edge Impulse modelling pipelines for TinyML are a great accelerator for development. But the real-world environment typically drifts from the model over time. Is it possible to tune the model continuously using Edge Impulse?
Yes, our solution is fully API-driven. If your edge devices have internet access in the field, it is possible to send additional data samples, retrain the model, and download the updated model.
How much power would a board running TinyML be expected to draw if it was, say, running in standby mode looking for those particular cases when the data behaves differently from the expected way?
A lot of this is a function of the processor's standby and active power consumption profiles and how often it performs inferencing and data acquisition. For low-power computer vision applications, inferencing could take tens to hundreds of milliseconds to several seconds, depending on the application, image resolution, color (RGB or grayscale), type of neural network, and processor being utilized. For image classification, you can get inferencing time estimations as part of the metrics displayed when a model is trained within Edge Impulse Studio.
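To turn those inferencing-time estimates into a first-order power budget, a simple duty-cycle calculation works. The numbers below are hypothetical, not measurements of any specific board:

```python
def average_current_ma(active_ma, sleep_ma, inference_ms, period_ms):
    """Estimate average current for a duty-cycled inferencing loop:
    the device wakes for `inference_ms` out of every `period_ms`,
    sleeping the rest of the time. All values are illustrative."""
    duty = inference_ms / period_ms
    return active_ma * duty + sleep_ma * (1.0 - duty)

# e.g. 10 mA active, 0.01 mA sleep, a 200 ms inference every 10 s:
# average_current_ma(10, 0.01, 200, 10_000)  ->  0.2098 mA average draw
```

Dividing a battery's mAh capacity by this average current gives a rough runtime estimate before any field testing.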
Do you do Kalman filtering on sensor data ?
No, but you could implement one using your own custom DSP block: https://docs.edgeimpulse.com/docs/custom-blocks
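As an illustration of the kind of logic such a custom block could wrap, here is a minimal scalar Kalman filter. The noise parameters `q` and `r` are arbitrary illustrative values you would tune for your sensor:

```python
def kalman_1d(measurements, q=1e-3, r=0.1):
    """Minimal scalar Kalman filter: estimates a slowly varying value
    from noisy readings. q = process noise, r = measurement noise.
    Illustrative sketch; tune q and r for your actual sensor."""
    x, p = measurements[0], 1.0  # initial state estimate and covariance
    estimates = []
    for z in measurements:
        p += q                   # predict: covariance grows with process noise
        k = p / (p + r)          # Kalman gain
        x += k * (z - x)         # update estimate toward the measurement
        p *= (1.0 - k)           # shrink covariance after the update
        estimates.append(x)
    return estimates
```

A multi-axis IMU would typically use the full matrix form, but the scalar version conveys the predict/update structure.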
How is the power consumption typically influenced by adding ML processes to an Embedded System?
Power consumption will be influenced by how often the inferencing engine is called. This can be anywhere from every few seconds to every few hours, depending on the use case.
Can I use input previously collected data (csv format) on Edge Impulse to train my TinyML model?
Yes! Check out our data uploader tutorial here: https://docs.edgeimpulse.com/docs/cli-uploader
What is the pricing model for Edge Impulse for a developer who wants to use it for building applications?
Our free tier offering works for the majority of applications to start with, and we usually start offering the enterprise subscription with larger enterprise use. Please contact us at hello@edgeimpulse.com so we can discuss with you further!
Is there a way to directly analyse the power consumption of a model on an embedded processor without deploying it at the edge or do field tests need to be performed for each model we develop?
At the present time it would have to be directly analyzed with real-world measurements. However, you could use the inferencing metrics available within Edge Impulse Studio to get a first approximation of the inferencing time and feed that into a power consumption model.
I have come across use cases where the power consumption of embedded devices can be tapped to break the underlying encryption of data. How do we secure against such threats in the TinyML landscape?
Even with embedded machine learning, there should be an overall consideration toward ensuring system-level security for the device being created, from secure key storage to ensuring that security keys are never transported in the clear. If tampering with the physical device is a grave concern (likely more so with a smart meter than with a smart plug), then this has to be considered as part of the overall hardware design strategy to mitigate physical attack vectors.
How can a camera be interfaced to the Arduino Nano for classifying images?
Here's a tutorial to get you started with the OV7670 CMOS VGA Camera Module and the Arduino Nano 33 BLE Sense: https://blog.arduino.cc/2020/06/24/machine-vision-with-low-cost-camera-modules/ , and then you can follow our image classification tutorial here: https://docs.edgeimpulse.com/docs/image-classification.
Do you have an App note on the signal processing you do on different types of signals?
All of our processing blocks are available to view here: https://github.com/edgeimpulse/processing-blocks . A tutorial on how to create your own custom blocks can be viewed here: https://docs.edgeimpulse.com/docs/custom-blocks
Hi there! I've built a wearable device that won a major national-level hackathon, and I wanted to know if I can use Edge Impulse to detect various activities and also run minor signal processing algorithms in real time to show the user outputs instantly? Thanks in advance!
Yes, follow our getting started tutorials here! https://docs.edgeimpulse.com/docs
Does TinyML need an internet connection for use?
Edge Impulse is a SaaS platform and needs an internet connection to run. However, the resulting trained model library deployed to the device does not need an internet connection; classification is done at the edge.
I have developed a machine olfaction device, an electronic nose, using an array of gas sensors. To get precise raw data and to reduce noise, is it possible to deploy TinyML via Edge Impulse to rapidly classify the odor pattern of each object sensed?
Yes, absolutely. One way to get data in from your sensors into Edge Impulse is to use the data forwarder engine via the Edge Impulse CLI. This just requires a UART (Tx/Rx) connection from your device and you can directly get your gas sensor data into the Edge Impulse studio. You could then perform a series of classifications on that data (see the accelerometer example that's already available).
I really like that we can do AI with microcontrollers. This is a great start to making gadgets to monitor movements when we are physically training at home, for example. My question is this: can we make Python-based applications directly on the Arduino Nano 33?
Hi, check out CircuitPython: https://circuitpython.org/board/arduino_nano_33_ble/ - you can then deploy your Edge Impulse model to the device as a library that can be imported with python
Does Edge Impulse have an annotation tool?
We provide tools to label, crop, and split your data; however, we do not have AI-powered labeling/annotation - https://docs.edgeimpulse.com/docs/cli-uploader
Hi Jenny/Tariq, the Nano 33 BLE Sense is 32-bit Cortex-M4 based. What would be the minimal-specification MCU that could be used for TinyML? For example, could I drop down to an 8-bit MCU? Thank you
A lot of this again depends on the application (is it doing measurement and sensing, speech analysis, or computer vision?). Any 32-bit Cortex-M0/M3/M4/M7-class device is a good baseline.
Can we use edge impulse for sound detection?
Yes! Check out our tutorial here: https://docs.edgeimpulse.com/docs/audio-classification
How is the labelling of the data performed? Is there an initial unsupervised approach to help in the creation of a dataset?
You will need to analyze data you have already collected and label it yourself (or with the cropping/splitting/uploader tools in Edge Impulse); there is currently no unsupervised approach in the Edge Impulse studio.
How long do the models need to train to perform fairly well on not-so-powerful devices?
It really depends on the use case, the type of data, and what you want to accomplish; some models can be trained to a high accuracy in just 30 epochs.
Can we collect data from my phone and deploy the model on my hardware?
Check out the tutorial here: https://docs.edgeimpulse.com/docs/using-your-mobile-phone
How can Edge Impulse be integrated with LPWAN technologies like LoRa/LoRaWAN? Because of the low bandwidth, it would be very useful to send only the result of the model, and if needed send a command to the device to gather either image or audio.
Yes! https://www.edgeimpulse.com/blog/adding-machine-learning-to-your-lorawan-device
Is there a limit to the amount of sensors that can be connected to a tinyML board? Or is it just the matter of power usage?
It depends on which board you are using, and how much power you wish to draw. Feel free to provide more specifics on the Edge Impulse forum and we can discuss further: https://forum.edgeimpulse.com/
How do I use Edge Impulse on non-supported devices like the ESP32 with the MPU9250 gyro sensor? If it is not supported, how do I help the community to set it up?
You can deploy your trained Edge Impulse model as a C++ library and integrate your model into your application code for the ESP32, check out these forum posts: https://forum.edgeimpulse.com/search?q=esp32
Is there documentation available on how Edge Impulse selects a particular signal processing algorithm for inference? Is it mentioned in the APIs generated? Also, is the user allowed to make changes during the API generation process, like in MATLAB?
You can choose a signal processing block that we provide in the Edge Impulse studio, or you can use your data as-is ("raw data"), here's our current processing blocks in the studio: https://github.com/edgeimpulse/processing-blocks
In what form are collected data stored? And can I generate graphs from these data?
You can collect data however you would like, and then upload to the studio, then you can view the raw data as a graph on the Data Acquisition page of your project: https://docs.edgeimpulse.com/docs/cli-uploader
Like the Human nose, using array of gas sensors is very possible to develop an artificial nose (electronic nose). Is it possible to integrate TinyML by edge impulse to build electronic nose?
Yes, absolutely. One way to get data in from your sensors into Edge Impulse is to use the data forwarder engine via the Edge Impulse CLI. This just requires a UART (Tx/Rx) connection from your device and you can directly get your gas sensor data into the Edge Impulse studio. You could then perform a series of classifications on that data (see the accelerometer example that's already available).
is it possible to write your own processing and learning blocks?
Yes, the custom signal processing block tutorial is here: https://docs.edgeimpulse.com/docs/custom-blocks , and you can edit the Python code of any learning block in the studio by selecting "Switch to Keras (expert) mode".
Hi Dan, is Edge Impulse basically an easier to use, graphical alternative to something like Google Colab? With built-in ability to write to supported microcontrollers
Yes, spot on
What Neural Network classifier is used? What flexibility of design do we have? Is an optimizer used to determine the best model?
TensorFlow & Keras, and all blocks are customizable. Yes, an optimizer is coming soon: the EON Tuner.
Do the window size and window increase parameters affect the efficiency of the model?
Depends on what you're trying to accomplish with your model, your dataset, etc.
I would like to use Edge Impulse to make a system that uses a microphone to listen to the noise produced while a car is running and, from it, determine if there are any problems (e.g. flat wheels, non-functioning shock absorbers, worn tires, engine problems, etc.) by having a neural network analyze the spectral components of the sound produced. Could this be interesting?
Check out our audio classification tutorial! https://docs.edgeimpulse.com/docs/audio-classification
Smart watches have a lot of sensors, which I'd like to use in my project. Are there any smart watch platforms (like Wear OS) that can be programmed using Edge Impulse?
If your smart watch platform can use a C++ library or WebAssembly, etc. then you can deploy your Edge Impulse model to your watch |
Page 2 |
can the models be run considering a Low-Power Device? Like a Board running on battery, waking on specific events/times to collect and send, etc?
Yes
I am working on a vibration/sound-alerting blind walking stick which gets trained on terrains using sensor data. Is it possible to do transfer learning with online data on-device, to improve the stick based on the terrain the person is walking on?
I think the best way to do this would be to archive the sampled data locally on your processor then upload the data to Edge Impulse for training purposes either in CSV or JSON format.
I am curious about implementing this for Image Processing. I am currently using ImageJ for processing images and would be interested to see where I can use Edge Impulse.
Try us out! The Edge Impulse online documentation has a tutorial for "adding sight to your sensors" which gets you bootstrapped very quickly on performing low-power computer vision. See https://docs.edgeimpulse.com/docs/image-classification for more info on getting started. At the present time, you can use the OpenMV Cam H7 Plus or Himax WE-I Plus hardware to get the best out-of-box experience for embedded platforms. If you don't have either of these, you can even use your mobile phone initially for some early experimentation!
We are planning to use Edge Impulse to classify ECG signals. Edge Impulse can be used for biosignals, but does the Arduino Nano 33 BLE Sense have the computational capability for such a project? Also, could you talk about how an existing dataset could be imported to Edge Impulse?
One way to get data from your sensors into Edge Impulse is to use the data forwarder via the Edge Impulse CLI. This just requires a UART (Tx/Rx) connection from your device, and you can directly get your ECG data into the Edge Impulse studio. You could then perform a series of classifications on that data (see the accelerometer example that's already available).
My 13 year old son has learned some basics of traditional programming at school. I explained the basics of machine learning to him, and his reaction was “well that makes more sense!” I wonder if kids find this method more intuitive in their data-filled world. Do you see machine learning, and Edge Impulse specifically, having a future in the schools and teaching kids?
Definitely, and I believe the world needs more experts in embedded systems which in some respects is becoming a bit of a lost art. Getting more kids exposed to STEM and embedded systems with platforms such as Edge Impulse will help ensure a future pipeline of embedded systems engineers and scientists!
Is only C or C++ used for compiling?
Check out this tutorial here: https://docs.edgeimpulse.com/docs/running-your-impulse-locally
How are the custom blocks integrated in Edge impulse pipeline? Python scripts or what other options?
Hi, check out our custom blocks tutorial here: https://docs.edgeimpulse.com/docs/custom-blocks
I didn't see Raspberry Pi boards on the list of supported boards. Is it easy to add one, or is it better to adapt our model to the compatible boards?
It's easy to add your deployed C inferencing library onto any device! Check out our forum for others in our community using a raspberry pi: https://forum.edgeimpulse.com/
can it be used to diagnose biosignals?
Absolutely, we have customers doing biosignal analysis on a variety of topics: from sleep stage detection to COVID onset detection.
Just to make it clear, is the collection stage done sending data from the device to the cloud or could it be from the device to a PC (serial/usb) and then to the cloud?
Training data collection can be done however you'd like, as long as you can upload it to the Studio: https://docs.edgeimpulse.com/docs/cli-uploader
Hi. Is it possible to add support for the Avnet Ultra96 board in your system? Ultra96 is an Arm-based, Xilinx Zynq UltraScale+ MPSoC development board.
The C++ Library export will run on almost anything with a C++ compiler, so you should be able to get this to work pretty quickly. See https://github.com/edgeimpulse/example-standalone-inferencing
I'm not too well versed in ML and TinyML. I'm assuming that 8/16 bit processors, including AVR8, PIC8, and 8051(and other 8/16 bit chipsets) aren't too powerful and can't run TF/TinyML, but how much power would be suitable? I'd assume ARM would be more suitable, and many boards supported include Cortex M0+, but is there any "loose" chip strength requirements to run TinyML?
Depends on the use case. E.g. gesture detection on accelerometer is very doable on M0+, audio on M4, vision M7 and A-class.
Can we use TinyML with some 32-bit MCU, plug in a mic, and have the MCU identify who is talking based on just hearing their voice? How would you go about training/getting data for something like this?
Yeah, just try it out by uploading data of two different people speaking using your phone (see Data Acquisition tab in Edge Impulse) and train a classifier. Should be pretty quick.
if I have a frozen_graph.pb, can I use ei to convert the model to tflite?
No.
Can we deploy more than one model for two different functionalities on the same micro controller?
Not two completely different models at the moment out of the box, but you can mix and match learning blocks (e.g. both classifier + anomaly detection).
How would I implement a "do not know" category? Applied to your example this morning, instead of forcing the answer into the three categories you mentioned give a e.g. circular motion a "none of the above" answer ?
The machine learning model will output classifications that are "uncertain" when they do not correspond with any of the trained classes. You can also train another category/ML class to be "still motion" or something that is dissimilar to any other labeled motion in your dataset.
How to implement Computer vision projects using TinyML?
Check out our blog post here: https://www.edgeimpulse.com/blog/computer-vision
What kinds of use cases are there for TinyML in agriculture? Is it possible to deploy in rural areas?
Hi! Check out what our community has done with ElephantEdge: https://www.edgeimpulse.com/blog/smartparks
Could you detect sudden impact using the Arduino Nano 33 BLE Sense board, and then train the board using Edge Impulse to detect abnormal shock? For example, if we were to create an airbag system.
Yes, but if the use case is simple, e.g. here by just measuring total impact, I'd program it out rather than use ML for it.
Can I go as low as 2-bit quantization for my model weights?
Not in Edge Impulse at the moment
Is Edge Impulse free to use?
Yes! Sign up here: https://edgeimpulse.com/
I'm also working on an early warning system for forest fires. What's your opinion on implementing TinyML?
Good use case! One of our partners (IRNAS) is doing fire detection using Edge Impulse in electricity poles, so we would be interested to see what you'd come up with. |