element14 Community
RoadTests & Reviews

Avnet UltraZed-EV Starter Kit Road Test - Project

ralphjy
28 Aug 2020

I have two basic criteria that have to be met before I will apply for a roadtest:

  1. The item being roadtested needs to align with my interests, and it needs to provide sufficient value relative to the amount of effort required to roadtest it.  This could range from simple, lower-value items that don't require a lot of time invested to very complex, high-value items that require significant effort and time.
  2. I need to believe that I have the capability (knowledge and equipment) to complete the roadtest within the 60-day roadtest window.  In some cases this would require learning new software and possibly acquiring additional equipment or peripherals.

 

The roadtest for the UltraZed-EV Starter Kit was an opportunity that I had been wanting for a while.  I've been interested in developing a video surveillance and security system for my house, and I've got various bits and pieces of a system that I've created over the last 5-6 years.  My primary camera is an HD pan/tilt IP camera at the front of the house.  It has a 270 degree horizontal view that lets me see my front door, front yard, driveway, and up and down the street, so I can monitor package deliveries, visitors and the mail.  I also have a front doorbell camera and a couple of indoor cameras.  I use an NVR (network video recorder) for local archiving and monitoring.  In addition, I can monitor the cameras using various mobile and browser apps, and I have a dedicated monitor that displays video streams using a Raspberry Pi.  I currently use PIR sensors for intrusion detection and I've been thinking of adding microwave sensors for broad-area coverage.  It would be nice if I could aggregate all of these inputs and add AI to create an intelligent NVR.  I'd like to do detection and classification with the outdoor cameras and correlate the camera outputs with the sensors.

 

The UltraZed-EV SOM and carrier card in the starter kit are ideal for this application because their combined feature set covers all of the elements that I would need:

  1. Gigabit Ethernet to interface with the IP cameras, NVR, sensors and network storage
  2. SATA interface for high bandwidth local storage
  3. VCU that can handle up to 8 HD video streams @ 30 frames per second
  4. High performance FPGA that enables implementation of AI engines and other hardware acceleration
  5. USB3 interface for peripherals (USB drive, HD webcam)
  6. PS DisplayPort interface for local monitoring
  7. PL PMOD interfaces for wireless (WiFi/Bluetooth) interface to sensors - I don't currently have this piece of hardware

 


 

Of course, I knew that there was no way I could even come close to implementing this system in the 60-day span of a roadtest.  I think that for me this will be a 6-month project!  So, what did I hope to be able to accomplish?  I think that a good demonstration of the UltraZed-EV's capabilities would include the following:

  1. Implement video input processing from two simultaneous RTSP video streams
  2. Display the streams on an FHD DisplayPort monitor
  3. Store streams on a SATA SSD drive
  4. Perform detection and classification on streams using the Xilinx DPU (Deep Learning Processor) with Vitis-AI
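The first three goals could conceivably be combined into a single GStreamer pipeline.  This is only an untested sketch of what such a pipeline might look like: the camera URLs and the /mnt/sata mount point are placeholders, and the availability of the compositor element depends on the gst-plugins version built into the PetaLinux image.

```shell
# Untested sketch: two RTSP sources, each tee'd to a SATA recording and
# to the VCU hardware decoder, then composited side-by-side onto the
# DisplayPort output.  CAM1/CAM2 and /mnt/sata are placeholders.
gst-launch-1.0 \
  compositor name=mix sink_1::xpos=960 ! kmssink bus-id=fd4a0000.zynqmp-display \
  rtspsrc location="rtsp://CAM1" ! rtph264depay ! h264parse ! tee name=t1 \
    t1. ! queue ! video/x-h264,stream-format=byte-stream ! filesink location=/mnt/sata/cam1.h264 \
    t1. ! queue ! omxh264dec ! videoscale ! video/x-raw,width=960,height=540 ! mix. \
  rtspsrc location="rtsp://CAM2" ! rtph264depay ! h264parse ! tee name=t2 \
    t2. ! queue ! video/x-h264,stream-format=byte-stream ! filesink location=/mnt/sata/cam2.h264 \
    t2. ! queue ! omxh264dec ! videoscale ! video/x-raw,width=960,height=540 ! mix.
```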

 

I am in the final week of the roadtest and I have to admit that I will not be able to demonstrate a functioning system with even those limited goals.  I had hoped that my previous experience with video processing with IP cameras, and with using the DNNDK with an MPSoC FPGA on an Ultra96v2, would have been sufficient to allow me to finish within the roadtest window.  Unfortunately, the devil is in the details, and I think, as many of us have experienced with these advanced hardware capabilities, there are a LOT of details that you need to get correct.  None of this was unanticipated - it's just taking a lot more time to get through.

 

I was able to demonstrate all the capabilities that I will need by using the reference designs and I verified that I could build all of the reference designs from the initial Vivado TCL scripts through the PetaLinux build.  So, where am I?  And where have I had problems?

 

Problems processing RTSP streams using GStreamer

This is a problem area that surprised me, as I've been using RTSP streams for quite a few years.  My video sources are all unique (either by vendor or model) because I acquired them over time rather than all at once.  These are all sources that I've used with VLC (ffmpeg) on Linux (primarily Ubuntu) and Windows, and also with omxplayer on various Raspberry Pis.  The NVR is also recording using RTSP streams from the cameras.

 

I encountered problems on the UltraZed-EV using a simple GStreamer pipeline to receive, decode, and display the camera RTSP stream.  One of the cameras works correctly, but the other 3 cameras and the NVR all have the same issue: the image initially displays but never updates after that.  I verified that I could get all of these sources to work using a similar GStreamer pipeline in an Ubuntu VM and on an Ubuntu laptop.  Of course, Ubuntu is using a software decoder and a different display sink, so it's not quite an apples-to-apples comparison.  Even though I've set the capabilities the same at the camera end, I can see that the negotiated capabilities are not precisely the same, which I'm sure is due to the firmware on each camera and what parameters it is sending.  In the working case I'm getting a frame rate but not a frame size, and in the non-working case I'm getting a frame size but not a frame rate.  I would have expected GStreamer to be able to handle both of these cases, so this may not be the issue.
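When caps negotiation is the suspect, GStreamer's own tooling can show what each source actually advertises and what gets agreed at each link.  These commands are a debugging sketch, assuming gst-discoverer-1.0 is present in the PetaLinux image; the camera URL is the non-working one from later in this post.

```shell
# Ask the camera what it advertises (codec, resolution, framerate).
gst-discoverer-1.0 -v "rtsp://admin:adminpw@10.0.0.210:554/11"

# Re-run the failing pipeline with caps-negotiation logging turned up,
# then grep the log for the caps agreed at each pad link.
GST_DEBUG=GST_CAPS:5 gst-launch-1.0 rtspsrc location="rtsp://admin:adminpw@10.0.0.210:554/11" \
  ! rtph264depay ! h264parse ! omxh264dec ! kmssink bus-id=fd4a0000.zynqmp-display \
  2>&1 | grep -i caps | head -50
```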

 

For those of you who are familiar with GStreamer, here is a representative pipeline (this is the working camera; the non-working case differs only in the rtspsrc location):

gst-launch-1.0 -v rtspsrc location="rtsp://admin:adminpw@10.0.0.212:554/cam/realmonitor?channel=1&subtype=0" ! rtph264depay ! h264parse ! omxh264dec ! kmssink bus-id="fd4a0000.zynqmp-display" fullscreen-overlay=true

 

Interestingly enough, if I save the video stream from the non-working camera to a file, playback of the file works!

gst-launch-1.0 -v rtspsrc location="rtsp://admin:adminpw@10.0.0.210:554/11" ! rtph264depay ! h264parse ! video/x-h264,stream-format=byte-stream ! filesink location=/media/usb/videos/dump210.h264

gst-launch-1.0 -v filesrc location=/media/usb/videos/dump210.h264 ! h264parse ! omxh264dec ! kmssink bus-id=fd4a0000.zynqmp-display fullscreen-overlay=true

 

I've requested some help on the UltraZed forum and I'm sure that I'll get this resolved.  I've never had to do serious debugging of GStreamer pipelines - I suspect because software codecs are more tolerant of timing issues.  It will probably turn out to be a problem with a simple fix.  I'll stick to using only the "working" camera until I can resolve this issue.
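If the missing framerate in the negotiated caps does turn out to matter, one untested thing worth trying is forcing it with a capsfilter after the parser, so downstream elements see a complete caps set.  The 15/1 rate here is a placeholder for whatever the camera is actually configured to send.

```shell
# Untested workaround sketch: inject the framerate the camera's caps omit.
gst-launch-1.0 -v rtspsrc location="rtsp://admin:adminpw@10.0.0.210:554/11" \
  ! rtph264depay ! h264parse ! video/x-h264,framerate=15/1 \
  ! omxh264dec ! kmssink bus-id=fd4a0000.zynqmp-display fullscreen-overlay=true
```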

 

 

Problems integrating Vitis-AI (DPU) with the UZ7EV_EVCC VCU TRD design

I posted earlier about trying out the Vitis-AI tutorial that Mario Bergeron posted on Hackster.io.  That particular tutorial used the DNNDK Vitis flow for the DPU.  He also posted a tutorial that used the newer VART (Vitis AI Runtime) flow for the DPU.  The hardware part of the two flows is very similar, but the target deployment and initialization are quite different due to the setup of the runtime environment.  I was able to successfully run through all the example applications again using the pre-built image, and I also successfully built the sample platform which integrates the UZ7EV_EVCC OOB design with the DPU.  When I went to modify the hardware design to use the VCU TRD design, I realized that while I thought I understood the Vitis AI flow, I really didn't know how to use it.  Vitis and Vitis AI have "simplified", integrated and automated not only the software flow but also the configuration of hardware elements like the DPU at build time.  Unfortunately, I am a relative newcomer to Vitis, and most of these tutorials don't start at the "beginning" but, understandably, at the unique part of the flow.  So I had to figure out platforms and how to build and configure .spfm and .xpfm files.  This is where I am today, trying to work my way through all of it.  I understand at a high level what I need to do - but the devil is in the details.
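For anyone else lost in the same place, the rough shape of the flow as I currently understand it looks like this.  All file names here (vcu_trd.xsa, uz7ev_vcu.xpfm, dpu.xo, dpu_conf.ini) are illustrative placeholders, not artifacts from a real build.

```shell
# Rough sketch of the 2019.2 Vitis hardware flow (file names are placeholders).

# 1. Wrap the exported Vivado hardware (the VCU TRD design, as an .xsa)
#    into a Vitis platform; the generated .xpfm/.spfm files describe the
#    hardware and software sides of the platform respectively.
xsct -eval "platform create -name uz7ev_vcu -hw vcu_trd.xsa; platform generate"

# 2. Link the DPU kernel into the platform with v++; the DPU architecture
#    configuration is applied here at build time via the config file.
v++ --link --target hw --platform uz7ev_vcu.xpfm \
    --config dpu_conf.ini dpu.xo -o dpu.xclbin
```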

 

This is typically where I encounter the challenges of developing with the Xilinx toolset, and this time has been no different.  If you make any mistakes in setting up or configuring the development environment, the effect on productivity is devastating.  I run 6 different VMs just to be able to run different tool versions, and I live in constant fear that an unintended update to the OS will break the tools.  I understand that this is the price of entry to advanced development, but until you get sufficiently far up the learning curve, mistakes will significantly slow your development time.  I created a new VM to start clean with the Vitis and Vitis AI flow using Ubuntu 18.04.2 and the 2019.2 toolset.  Setting up and configuring the VM took me more than half a day.  And now I see that designs are moving to the 2020.1 toolset.

 

One impediment to progress is the capability of my development computer.  I am using an older Win10 3GHz i7 with 8 processors and 64GB of memory.  It has been taking me 3-4 hours to build new Vitis AI projects.  That wouldn't be an issue except that, with the learning curve, it takes me multiple tries to get a working configuration.  The configuration that I'm currently focused on is getting the one good camera working with the VCU and DPU using the Vitis AI Runtime (VART).  I think that I've been suffering from Covid-19 related fatigue the last few weeks and have decided to reduce the amount of time I'm putting into the project, as I've been making some careless mistakes.

 

Since I've been using the Vitis AI projects that I mentioned earlier, I thought I would demo a modified version of one of them on the hardware.  In addition to the DNNDK and VART tutorials, Mario also did one using Python with Vitis AI for face detection and tracking on the Ultra96v2: https://www.hackster.io/AlbertaBeef/face-detection-and-tracking-in-python-on-ultra96-v2-02d104 .  I modified it to run on the UltraZed-EV using RTSP as the input source instead of a webcam.  I discovered a couple of things: 1) Facial recognition isn't going to work unless I can implement a tracking zoom on the camera.  You'll see in the demos that the field of view of my driveway camera is so large that face detection and tracking aren't effective until the subject is within 10-15 feet of the camera.  2) I may not be able to use my doorbell camera with AI.  I run all of my cameras hardwired except for the doorbell camera, which, because of its location, runs on 2.4GHz WiFi.  I found that when there is a lot of video activity, I drop enough frames to make the detection unusable.  I'll need to see if I can fix that.
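To put a number on the WiFi camera's frame drops, one diagnostic worth trying is swapping the display sink for fpsdisplaysink, which reports rendered vs. dropped frame counts.  This is a sketch, assuming fpsdisplaysink is built into the image; the doorbell camera URL is a placeholder.

```shell
# Measure delivered frame rate and drops from the WiFi doorbell camera.
# fpsdisplaysink logs rendered/dropped counts and current/average fps
# once per second (fps-update-interval is in milliseconds).
gst-launch-1.0 -v rtspsrc location="rtsp://DOORBELL_CAM" ! rtph264depay ! h264parse \
  ! omxh264dec ! fpsdisplaysink video-sink="kmssink bus-id=fd4a0000.zynqmp-display" \
    text-overlay=false fps-update-interval=1000
```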

 

I think for my final implementation I'll switch to using an SSD (Single Shot Detector) model for my main camera, like the one used in the Video_Analysis example application.  Maybe I'll try resnet50 first.

 

Here are a few videos showing typical use cases (recorded using my iPhone pointed at the monitor screen - I can never get a reasonable image of the monitor using my HD webcam).

 

Driveway Face Detection 1

[video]

 

Driveway Face Detection 2

[video]

 

Workroom Face Tracking

[video]

 

Apologies for the fan noise - I forgot to turn off the audio, and the iPhone was right above the UltraZed-EV board.  I suppose I could have edited it out.

 

The choppiness in the video is because I am not using the VCU for decoding with the DPU yet.  I'm not sure if that is also the cause of the detection/tracking delay.  I am using the multi-threaded example with 4 threads.

 

Summary

I don't have an estimate as to when I will have a working project but I will definitely post it when I'm done.

 

I discovered that Xilinx did an Embedded Vision Reference Platform using the ZCU104 board that is remarkably similar to what I want to implement.  It's funny how difficult it is to discover these designs.  I'm going to take some time out to evaluate that design and see if I can port it, or otherwise reproduce the relevant parts, on the UltraZed-EV.  Here is the link to the project: https://github.com/Xilinx/Embedded-Reference-Platforms-User-Guide/blob/master/Docs/overview.md


 

Time to work on writing the roadtest review......

 

 

 

Links to previous posts for this roadtest:

  1. Avnet UltraZed-EV Starter Kit Road Test- the adventure begins.....
  2. Avnet UltraZed-EV Starter Kit Road Test - VCU TRD
  3. Avnet UltraZed-EV Starter Kit Road Test - VCU TRD continued
  4. Avnet UltraZed-EV Starter Kit Road Test - Port PYNQv2.5
  5. Avnet UltraZed-EV Starter Kit Road Test - Port PYNQv2.5 continued
  6. Avnet UltraZed-EV Starter Kit Road Test - Vitis AI
  7. Avnet UltraZed-EV Starter Kit Road Test - Overview
  8. Avnet UltraZed-EV Starter Kit Road Test - GStreamer difficulties
  9. Avnet UltraZed-EV Starter Kit Road Test - Network Performance Test
  10. Avnet UltraZed-EV Starter Kit Road Test - SATA Performance Test