
Blog 5: Computer Vision based on PYNQ

pandoramc
25 Jul 2023

Table of Contents

  • PYNQ
  • Computer Vision
  • An example
  • Conclusion

PYNQ

According to its webpage, PYNQ is an open-source project from AMD that makes it easier to use adaptive computing platforms. A wide variety of boards is now supported; among them, the Ultra96-V2 offers a way to use Python for specialized, hardware-accelerated applications. The first step on the to-do list is to have an image that provides the operating system and the drivers needed to access the PL through overlays and through devices such as cameras. You can build one according to your project requirements, use the manufacturer's prebuilt image, or use one of the community images with bug fixes or enhanced features. You can get some images from this link.

You only need a few connections to start developing on the PYNQ image. I recommend being careful with your network infrastructure, for example at schools, since ports may be restricted and you will not be able to download the content required by the computer vision extensions.


After the system boots, you can connect the board over the USB cable and browse to the IP address 192.168.3.1 to reach a Jupyter Notebook. When prompted for credentials, enter xilinx for both the username and the password. You can also get a JupyterLab interface via the 192.168.3.1/lab URL, which is the interface I prefer. To minimize errors and maximize compatibility, I used the PYNQ v2.5 image, and I configured the WiFi interface using common/wifi.ipynb.

Computer Vision

PYNQ has a community for sharing applications, but this time I was interested in the Computer Vision repository. Image handling is powered by OpenCV, which comes pre-installed on PYNQ. While OpenCV provides software libraries for computer vision, xfOpenCV is based on overlays. An overlay is a special kind of file that reprograms the PL in order to accelerate the computation. To install the base libraries, execute the following line in a terminal:

sudo pip3 install git+https://github.com/ComputerVision.git

This form of the command was based on a blog post, since the --upgrade flag raises an error with Cython.

NOTE: If you hit that error anyway, you can run sudo pip3 install --upgrade cython and then add the --upgrade flag to the command above.

This brings only a few overlays, related to image dilation, image filtering, and image thresholding, but you are free to get new overlays from your own designs or from third-party applications such as those in the PYNQ Community.
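To make it concrete what the dilation overlay actually computes, here is a pure-NumPy software reference (my own sketch, not part of the accelerated flow): each output pixel is the maximum over its 3x3 neighbourhood, which is what cv2.dilate with a 3x3 ones kernel does in software and what the PL overlay does in hardware.

```python
import numpy as np

def dilate3x3(img):
    """Software reference for 3x3 dilation: each output pixel is the
    maximum over its 3x3 neighbourhood (zero-padded borders)."""
    h, w = img.shape
    padded = np.zeros((h + 2, w + 2), img.dtype)
    padded[1:-1, 1:-1] = img
    out = np.zeros_like(img)
    # Take the element-wise maximum over the nine shifted windows.
    for dy in range(3):
        for dx in range(3):
            np.maximum(out, padded[dy:dy + h, dx:dx + w], out)
    return out

# A single bright pixel grows into a 3x3 block after dilation.
img = np.zeros((5, 5), np.uint8)
img[2, 2] = 255
out = dilate3x3(img)
```

This per-pixel neighbourhood structure is exactly why the operation maps so well to the PL: every output pixel depends only on a small, fixed window of inputs.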

An example

The most popular example is Sobel filtering for edge detection. To check this application, I wrote the following cells in a Jupyter notebook.

import cv2 #NOTE: This needs to be loaded first

# Load filter2D + dilate overlay
from pynq import Overlay
bs = Overlay("/usr/local/lib/python3.6/dist-packages/pynq_cv/overlays/xv2Filter2DDilate.bit")
bs.download()
import pynq_cv.overlays.xv2Filter2DDilate as xv2

# Load Xlnk memory manager
from pynq import Xlnk
Xlnk.set_allocator_library('/usr/local/lib/python3.6/dist-packages/pynq_cv/overlays/xv2Filter2DDilate.so')
mem_manager = Xlnk()

Here we load the PL bitstream file and download it to the device; this turns off the red light on the board, indicating the logic has been programmed. A way to move data between the PS and the PL is also required, so the memory manager and the additional APIs must be imported into the project.

import cv2

camera = cv2.VideoCapture(0)

width = 1280
height = 720
camera.set(cv2.CAP_PROP_FRAME_WIDTH,width)
camera.set(cv2.CAP_PROP_FRAME_HEIGHT,height)

We need a capture device, in this case a Logitech C270 camera, to see the environment and start processing images. According to the camera specs, the maximum capture mode is 1280x720 at 30 FPS.
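As a quick back-of-the-envelope check (my own arithmetic, assuming 8-bit grayscale frames after colour conversion), the data rate the camera delivers at this resolution is modest, which hints that USB acquisition rather than raw bandwidth will be the bottleneck later on:

```python
# Rough data-rate estimate for the capture settings above.
# Assumption: 8-bit grayscale frames (1 byte per pixel) after conversion.
width, height, fps = 1280, 720, 30
bytes_per_frame = width * height           # 921,600 bytes per grayscale frame
rate_mb_s = bytes_per_frame * fps / 1e6    # megabytes per second at full rate
print(f"{bytes_per_frame} B/frame, ~{rate_mb_s:.1f} MB/s at {fps} FPS")
```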

import numpy as np
import time

kernelD   = np.ones((3,3),np.uint8)
frame_out = np.ones((height,width),np.uint8)
xFin      = mem_manager.cma_array((height,width),np.uint8)
xFbuf     = mem_manager.cma_array((height,width),np.uint8)
xFout     = mem_manager.cma_array((height,width),np.uint8)
kernelVoid = np.zeros(0)

frameSize = (width, height)
out = cv2.VideoWriter('output_video.avi',cv2.VideoWriter_fourcc(*'XVID'), 30.0, frameSize, 0)
kernelF  = np.array([[1.0,2.0,1.0],[0.0,0.0,0.0],[-1.0,-2.0,-1.0]],np.float32) #Sobel Hor filter

# font
font = cv2.FONT_HERSHEY_SIMPLEX
# org
org = (50, 50)
# fontScale
fontScale = 1
# Blue color in BGR
color = (255, 0, 0)
# Line thickness of 2 px
thickness = 2


def saveVideo():
    nFrames = 120
    tFrame = 0
    
    for _ in range(nFrames):
        ret, frame_in = camera.read()
        frame_in_gray = cv2.cvtColor(frame_in,cv2.COLOR_RGB2GRAY)
        
        xFin[:]    = frame_in_gray[:]

        start = time.time()
        
        xv2.filter2D(xFin, -1, kernelF, xFbuf, borderType=cv2.BORDER_CONSTANT)
        xv2.dilate(xFbuf, kernelVoid, xFout, borderType=cv2.BORDER_CONSTANT)    
        
        time_hw_total = time.time() - start
        tFrame = tFrame + nFrames / time_hw_total
        
        frame_out = np.ones((height,width),np.uint8)
        frame_out[:] = xFout[:]
        
        frame_out = cv2.putText(frame_out, str(int(nFrames / time_hw_total)) + "FPS", org, font, 
                   fontScale, color, thickness, cv2.LINE_AA)
        
        out.write(frame_out)
    out.release()
    print("Frames per second:  " + str(tFrame/nFrames))

saveVideo()

For this test I processed frame by frame, and the speed is ridiculous: according to the code and the time measurements, we get about 18K frames per second of processing. I did not take image acquisition into account, since that part is not accelerated at this point. After acquisition and processing, a video is saved as evidence, and a demo of this functionality is shown below.

[Demo video]

Conclusion

This hardware is really impressive: acquiring images from a USB camera takes me more time than the processing itself. A different camera interface could be used to improve the latency, but unfortunately there are no quick options in my region. Despite the unavailability of specialized devices, I am still working with this awesome AMD platform to bring better results to my microscopy system. It will be a platform for improving observational skills and for building widely applicable computer vision systems.

    saadtiwana_int over 2 years ago in reply to pandoramc

    I see. Well, my concern is that if we need to use the old Vitis tool versions for compatibility with the older PYNQ versions then it will be a step backwards. I hope it doesn't come down to that :)

    I am surprised Xilinx (AMD) is not updating the PYNQ-ComputerVision repository actively...the last update was 4 years ago!

    prashanthgn.engineer over 2 years ago

    Great blog

    pandoramc over 2 years ago in reply to saadtiwana_int

I tried to use the 2.6 and 3.0 versions without success. The 2.7 ran well, but the file organization changed from 2.5. I am working with some different overlays, and the Computer Vision repo mentions that correct operation is only guaranteed up to 2.5; otherwise the overlays must be updated. I understand that the PYNQ platform improves some features of the interface and that backward compatibility is limited in some cases. I am still testing this limitation, trying to make the newest generation of overlays and functions work on old versions.

    saadtiwana_int over 2 years ago

    Thanks for sharing this. 

I notice that you used the 2.5 image for maximum compatibility. That probably explains why you could get the hardware acceleration to work with the PYNQ-ComputerVision overlays while I could not (I used PYNQ v3.0.1).

    My question: Would you still be able to compile overlays for PYNQ v2.5 with the latest Vitis/Vivado toolchain?
