
 

Today’s products are becoming more capable as they better understand the world around them. Using AI and sophisticated algorithms, sound and images can be analyzed in real time, and that intelligence enables better contextual awareness for security and surveillance. These AI networks are now being applied to a wider variety of tasks and no longer require cloud resources. Xilinx MPSoCs make this possible by processing these AI networks more efficiently at the edge while offering standard Linux software (Ubuntu) and popular AI framework environments (Keras, TensorFlow, PyTorch) that AI and embedded developers are familiar with.

 

Product teams are also discovering that AI networks processing different types of sensor data, such as audio from microphones and video from cameras, can take on more sophisticated tasks and produce more reliable results. This workshop brings together vision and sound AI network models that, when used together, enable more intelligent products that make better application decisions. More specifically, the workshop will explain how to combine sound detection (with localization) and vision detection at the localized position in our reference design, so that higher-level applications such as security and surveillance can act on events they both see and hear. We will demonstrate a reference design that detects a dog bark, swivels the camera toward that location, and uses the vision model to detect what is there.
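To make the event flow concrete, here is a minimal sketch of the bark-to-camera pipeline described above. Every function name in it is a hypothetical placeholder standing in for the Aaware sound stack, the pan/tilt control, and the ComputEra YoloNano detector; the reference design's actual APIs differ.

```python
# Minimal sketch of the sound-to-vision event flow described above.
# Every function below is a hypothetical placeholder, not the actual
# Aaware or ComputEra API used in the reference design.

def detect_sound_event():
    """Stand-in for Aaware sound classification + localization.
    Returns (label, confidence, azimuth_in_degrees)."""
    return "dog_bark", 0.92, 135.0  # stubbed example output

def pan_camera(azimuth_degrees):
    """Stand-in for the pan/tilt control that swivels the camera."""
    print(f"Panning camera to {azimuth_degrees:.0f} degrees")

def detect_objects():
    """Stand-in for ComputEra YoloNano inference on the current frame.
    Returns a list of (label, confidence) detections."""
    return [("dog", 0.88)]

def handle_audio_event():
    label, confidence, azimuth = detect_sound_event()
    if label == "dog_bark" and confidence > 0.8:
        pan_camera(azimuth)            # point the camera at the sound source
        detections = detect_objects()  # visually confirm what is there
        print("Visual detections:", detections)

if __name__ == "__main__":
    handle_audio_event()
```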

 

What you will learn by attending:

 

  • What makes the Xilinx MPSoC unique for neural network processing at the edge
  • How to use Aaware Sonus AI to tune sound classification models, i.e., how to retrain models with additional environmental background noise (see the noise-mixing sketch after this list)
  • How to use accelerated Aaware sound classification models together with localization
  • How to use ComputEra Vision Accelerator to detect objects in real time using YoloNano
  • How to access the Aaware sound and ComputEra vision reference design
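
For the retraining item above, the sketch below only illustrates the general idea of mixing environmental background noise into training clips at a chosen signal-to-noise ratio. It uses plain NumPy and is not the Aaware Sonus AI API; the workshop covers the actual tooling.

```python
# Generic illustration of augmenting training audio with background noise
# at a target signal-to-noise ratio (SNR). This is not the Aaware Sonus AI
# API; it only shows the underlying idea behind retraining with
# additional environmental background noise.

import numpy as np

def mix_at_snr(clean, noise, snr_db):
    """Mix a noise clip into a clean clip at the requested SNR (in dB)."""
    # Tile or trim the noise so it matches the clean clip length.
    if len(noise) < len(clean):
        noise = np.tile(noise, int(np.ceil(len(clean) / len(noise))))
    noise = noise[:len(clean)]

    clean_power = np.mean(clean ** 2)
    noise_power = np.mean(noise ** 2) + 1e-12
    # Scale the noise so clean_power / scaled_noise_power == 10^(snr_db / 10).
    scale = np.sqrt(clean_power / (noise_power * 10 ** (snr_db / 10)))
    mixed = clean + scale * noise
    # Normalize to avoid clipping when writing back to fixed-point audio.
    return mixed / max(1.0, np.max(np.abs(mixed)))

# Example with synthetic data standing in for real recordings.
rng = np.random.default_rng(0)
bark_clip = rng.standard_normal(16000)     # 1 s "dog bark" placeholder at 16 kHz
street_noise = rng.standard_normal(48000)  # 3 s environmental noise placeholder
augmented = mix_at_snr(bark_clip, street_noise, snr_db=10.0)
```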

 

 

Target markets:

 

  • Security & Surveillance
  • Robotics
  • Conferencing
  • Kiosks
  • Point of Sale (POS)

 


 

The Presenters:

Chris Eddington, CTO and Founder, Aaware Inc

Chris is a seasoned entrepreneur of products based on embedded algorithm, signal processing, and machine learning technologies, with dozens of successful products launched over the last 30 years. His current work at Aaware is developing complete edge solutions for sound source localization, detection, and separation, along with an integrated deep neural network acceleration platform for sound artificial intelligence that enables true real-time, multi-sensor sound source localization, detection, separation, and classification, including speech recognition, speaker diarization, and speaker verification.

Alan Mishchenko, Chief Architect at ComputEra

Alan is the chief architect at ComputEra and a Research Scientist at UC Berkeley. He holds a PhD in Computer Science, has over 20 years of experience in R&D, and has over 200 publications. He is known for his work in logic synthesis and formal verification and as the main developer of the open-source CAD tool ABC. He was part of the Berkeley team that won first place in the Hardware Model Checking Competition (HWMCC) in 2008 and 2017. His research interests include hardware design, machine learning, FPGA-based CNN acceleration, compilation, and quantization.