Today’s products are becoming more sophisticated as they better understand the world around them. Using AI and sophisticated algorithms, sound and images can be analyzed in real time, and that intelligence enables better contextual awareness for security and surveillance. These AI networks are now being applied to a wider variety of tasks without requiring cloud resources. Xilinx MPSoCs make this possible by processing AI networks efficiently at the edge while offering a standard Linux software environment (Ubuntu) and the popular AI frameworks (Keras, TensorFlow, PyTorch) that AI and embedded developers already know.
Product teams are also discovering that AI networks processing different types of sensor data, such as microphone and camera streams, can take on more sophisticated tasks and produce more reliable results. This workshop brings together vision and sound AI network models that, when used together, enable more intelligent products that make better application decisions. Specifically, it shows how to combine sound detection (with localization) and vision detection at the localized position in our reference design, so that higher-level applications, such as security and surveillance, can act on events the system both sees and hears. We will demonstrate a reference design that detects a dog bark, swivels the camera toward the sound’s location, and uses the vision model to identify what is there.
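The bark-then-look flow above can be sketched as a simple event handler: classify a sound, and if it is a confident detection, pan the camera to the estimated direction of arrival and run vision inference there. This is a minimal illustrative sketch only; the names (`SoundEvent`, `handle_event`, `azimuth_deg`) are hypothetical and do not come from the Aaware or ComputEra APIs.

```python
from dataclasses import dataclass

@dataclass
class SoundEvent:
    label: str          # e.g. "dog_bark" from the sound classifier
    confidence: float   # classifier score in [0, 1]
    azimuth_deg: float  # direction of arrival from the mic array

def handle_event(event, pan_camera, detect_objects, threshold=0.8):
    """On a confident sound event, point the camera at the sound's
    direction and return what the vision model detects there."""
    if event.label != "dog_bark" or event.confidence < threshold:
        return None                   # ignore weak or irrelevant events
    pan_camera(event.azimuth_deg)     # swivel toward the sound source
    return detect_objects()           # e.g. a YoloNano-style detector

# Minimal stand-ins so the sketch runs end to end:
camera_angles = []
result = handle_event(
    SoundEvent("dog_bark", 0.93, azimuth_deg=42.0),
    pan_camera=camera_angles.append,
    detect_objects=lambda: ["dog"],
)
```

In the reference design, the classifier, localization, and detector each run as accelerated models on the MPSoC; the sketch only shows how their outputs could be chained.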
What you will learn by attending:
- What makes the Xilinx MPSoC unique regarding neural network processing at the edge
- How to use Aaware Sonus AI to tune sound classification models (how to retrain models with additional environmental background noise)
- How to use accelerated Aaware sound classification models together with localization
- How to use ComputEra Vision Accelerator to detect objects in real time using YoloNano
- How to access the Aaware sound and ComputEra vision reference design
Target applications:
- Security & Surveillance
- Point of Sale (POS)