BEGIN:VCALENDAR
VERSION:2.0
METHOD:PUBLISH
BEGIN:VEVENT
ORGANIZER:MAILTO:
DTSTART:20210729T160000Z
DTEND:20210729T170000Z
LOCATION:On24
SUMMARY:Sound & Vision AI: Adding Eyes and Ears to Surveillance
DESCRIPTION;ENCODING=QUOTED-PRINTABLE:Today’s products are becoming more sophisticated as they better understand the world around them. Using AI and sophisticated algorithms, sound and images can be analyzed in real time, and that intelligence enables better contextual awareness for security and surveillance. These AI networks are now being used for a wider variety of tasks and do not require cloud resources. Xilinx MPSoCs make this possible by processing these AI networks more efficiently at the edge while offering standard Linux software (Ubuntu) and popular AI frameworks (Keras, TensorFlow, PyTorch) that AI and embedded developers are familiar with. Product teams are also discovering that these AI networks, processing different types of sensor data such as microphone and camera streams, can take on more sophisticated tasks and produce more reliable results. This workshop brings together vision and sound AI network models that, when used together, enable more intelligent products that make better application decisions. More specifically, it will explain how to combine sound detection (with localization) and vision detection of the localized area in our reference design, enabling higher-level applications such as security and surveillance to leverage events the system sees and hears. We will demonstrate a reference design that detects a dog bark, swivels the camera to that location, and uses the vision model to identify what is there.
What you will learn by attending: What makes the Xilinx MPSoC unique for neural network processing at the edge; How to use Aaware Sonus AI to tune sound classification models (retraining models with additional environmental background noise); How to use accelerated Aaware sound classification models together with localization; How to use the ComputEra Vision Accelerator to detect objects in real time using YoloNano; How to access the Aaware sound and ComputEra vision reference design. The Presenters: Chris Eddington, CTO and Founder, Aaware Inc: Seasoned entrepreneur of products based on embedded algorithm, signal processing, and machine learning technologies, with dozens of successful products launched over the last 30 years. His current work at Aaware is developing complete edge solutions for sound source localization, detection, and separation, plus an integrated deep neural network acceleration platform for sound AI that enables true real-time, multi-sensor sound source localization, detection, separation, and classification, including speech recognition, speaker diarization, and speaker verification. Alan Mishchenko, Chief Architect at ComputEra: Alan is the chief architect at ComputEra and a research scientist at UC Berkeley. He holds a PhD in Computer Science, has over 20 years of experience in R&D, and has over 200 publications. He is known for his work in logic synthesis and formal verification, and as the main developer of the open-source CAD tool ABC. He was part of the Berkeley team that won first place in the Hardware Model Checking Competition (HWMCC) in 2008 and 2017. His research interests include hardware design, machine learning, FPGA-based CNN acceleration, compilation, and quantization.
X-ALT-DESC;FMTTYPE=text/html:<html><body><p style="margin:0;"><a href="https://community.element14.com/resized-image/__size/420x105/__key/telligent-evolution-extensions-calendar-calendarfiles/00-00-00-00-71/contentimage_5F00_203248.png">contentimage_5F00_203248.png</a></p><p style="margin:0;padding:0px;">&nbsp;</p><p style="margin:0;">Today’s products are becoming more sophisticated as they better understand the world around them. Using AI and sophisticated algorithms, sound and images can be analyzed in real time, and that intelligence enables better contextual awareness for security and surveillance. These AI networks are now being used for a wider variety of tasks and do not require cloud resources. Xilinx MPSoCs make this possible by processing these AI networks more efficiently at the edge while offering standard Linux software (Ubuntu) and popular AI frameworks (Keras, TensorFlow, PyTorch) that AI and embedded developers are familiar with.</p><p style="margin:0;padding:0px;">&nbsp;</p><p style="margin:0;">Product teams are also discovering that these AI networks, processing different types of sensor data such as microphone and camera streams, can take on more sophisticated tasks and produce more reliable results. This workshop brings together vision and sound AI network models that, when used together, enable more intelligent products that make better application decisions. More specifically, it will explain how to combine sound detection (with localization) and vision detection of the localized area in our reference design, enabling higher-level applications such as security and surveillance to leverage events the system sees and hears. We will demonstrate a reference design that detects a dog bark, swivels the camera to that location, and uses the vision model to identify what is there.</p><p style="margin:0;padding:0px;">&nbsp;</p><p style="margin:0;"><strong>What you will learn by attending:</strong></p><ul><li>What makes the Xilinx MPSoC unique for neural network processing at the edge</li><li>How to use Aaware Sonus AI to tune sound classification models (retraining models with additional environmental background noise)</li><li>How to use accelerated Aaware sound classification models together with localization</li><li>How to use the ComputEra Vision Accelerator to detect objects in real time using YoloNano</li><li>How to access the Aaware sound and ComputEra vision reference design</li></ul><p style="margin:0;padding:0px;">&nbsp;</p><p style="margin:0;"><iframe loading="lazy" allowfullscreen src="https://players.brightcove.net/1362235890001/NkxiVJdjx_default/index.html?videoId=6265195274001" frameborder="0"></iframe></p><h1><span style="color:#3334ca;">The Presenters:</span></h1><table border="1" class="jiveBorder" style="border:1px solid #ffffff;width:100%;"><tbody><tr><td style="border:1px solid #ffffff;width:50%;padding:6px;"><a href="https://community.element14.com/resized-image/__size/221x176/__key/telligent-evolution-extensions-calendar-calendarfiles/00-00-00-00-71/contentimage_5F00_203250.png">contentimage_5F00_203250.png</a></td><td style="border:1px solid #ffffff;width:50%;padding:6px;"><a href="https://community.element14.com/resized-image/__size/174x174/__key/telligent-evolution-extensions-calendar-calendarfiles/00-00-00-00-71/contentimage_5F00_203251.png">contentimage_5F00_203251.png</a></td></tr><tr><td style="border:1px solid #ffffff;width:50%;padding:6px;"><strong>Chris Eddington, CTO and Founder, Aaware Inc</strong></td><td style="border:1px solid #ffffff;width:50%;padding:6px;"><strong>Alan Mishchenko, Chief Architect at ComputEra</strong></td></tr><tr><td style="border:1px solid #ffffff;width:50%;padding:6px;"><p style="margin:0;">Seasoned entrepreneur of products based on embedded algorithm, signal processing, and machine learning technologies, with dozens of successful products launched over the last 30 years. His current work at Aaware is developing complete edge solutions for sound source localization, detection, and separation, plus an integrated deep neural network acceleration platform for sound AI that enables true real-time, multi-sensor sound source localization, detection, separation, and classification, including speech recognition, speaker diarization, and speaker verification.</p></td><td style="border:1px solid #ffffff;width:50%;padding:6px;">Alan is the chief architect at ComputEra and a research scientist at UC Berkeley. He holds a PhD in Computer Science, has over 20 years of experience in R&amp;D, and has over 200 publications. He is known for his work in logic synthesis and formal verification, and as the main developer of the open-source CAD tool ABC. He was part of the Berkeley team that won first place in the Hardware Model Checking Competition (HWMCC) in 2008 and 2017. His research interests include hardware design, machine learning, FPGA-based CNN acceleration, compilation, and quantization.</td></tr></tbody></table></body></html>
CLASS:PUBLIC
END:VEVENT
END:VCALENDAR