PTPIII - Final Project - Ultra-Gimbal

saadtiwana_int · 5 Sep 2023
Tags: ultra96-v2, video processing, winners, Graduation project, amd, final project, Path to Programmable III

Foreword

I have some past experience of working with FPGAs. I am by no means an expert, but I learnt what I needed to get the job done for some medium-complexity projects. All my previous FPGA projects ran some IP in the FPGA fabric with bare-metal code or FreeRTOS on the soft or hard processor. They worked well for me, but they always took a very long time to complete (at my skill level, anyway). This extra time and effort has naturally pushed me away from prototyping and trying out ideas on FPGAs whenever another platform (microcontrollers, Jetson devices, or PCs) could do the job.

For this reason, this time around I set myself a goal to find easier ways to prototype and try out ideas on FPGAs. If I can simplify this process for myself, I can spend more time building awesome projects!

Goal

I started out with the idea of building a 2-axis gimbal platform for imaging, with the Ultra96 controlling the motors and sensors as well as doing the image processing on the live video coming from the camera. I chose azimuth and pitch as the two degrees of freedom, since that is the most common configuration for 2-axis systems.

Over its course, the project evolved to work around some of the obstacles and limitations I encountered.

In this blog post I want to share my journey on this project.

The Development Process

Since ease-of-development was one of my main considerations, I immediately gravitated towards wanting to use PYNQ for this project. It's something I came across recently but never had the chance to explore much in the past.

http://www.pynq.io

PYNQ promises to bring Python simplicity to FPGA development, making it easier for developers to unleash an FPGA's potential without complex hardware description languages or low-level application code. It runs on top of Linux, but provides a familiar Jupyter-based environment, which feels more welcoming to many of us! There is also a vibrant community around PYNQ, meaning many people are willing to help and to share their own creations. All of this makes PYNQ a good candidate for prototyping ideas. For example, if you have an idea for a cool image processing algorithm that you want to try out on the FPGA, you can do that much more easily using PYNQ, in a matter of hours instead of days to weeks. Then, once the algorithm shows promise, you can identify bottlenecks using the profiling tools available within PYNQ notebooks and look into moving the slowest parts into programmable-logic accelerators.
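
To give a flavour of what this looks like in practice, here is a minimal sketch of loading an overlay from a notebook cell. The bitstream name and the printed IP list are generic placeholders, not this project's design:

# Minimal sketch of PYNQ development in a notebook cell.
# "base.bit" is a placeholder bitstream name, not this project's design.
from pynq import Overlay

overlay = Overlay("base.bit")     # programs the PL with the chosen bitstream
print(overlay.ip_dict.keys())     # lists the IP blocks the overlay exposes to Python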

Motion

For the two axes of (rotational) motion, I decided to use two RMD-L-12025 motors. These are "direct-drive" motors (meaning no gearing) with built-in servo controllers, controlled over a CAN interface. This is helpful in my case since the real-time servo control is off-loaded to the motors' onboard controllers: my application does not need to process encoder data and control MOSFET switching tens of thousands of times per second. Instead, it only needs to issue high-level motion commands at lower update rates. This should let me use something not very deterministic (Linux/PYNQ) and still control the motion effectively. In the future I want to build the motor controllers inside the FPGA too, but I knew that would be a major project in its own right, not possible in the current project's timeframe.

I decided to use direct-drive motors instead of geared ones since the lack of gearing means the platform's own inertia actually helps with stabilization, provided it is balanced well in all axes. This is especially helpful when operating in environments with high-frequency vibrations and movements. The other benefit is that you avoid the backlash present in most geared systems, so the only limit on angular positioning accuracy is the encoders of the direct-drive system.

The main obstacle to using these motors in my system was that the default PYNQ image does not include a CAN controller device (even though the hard processor on the Zynq has two of them). However, the image does expose SPI devices, which gave me the idea of using the commonly available MCP2515 modules for this task. These modules provide a CAN interface over an SPI bus.

Inertial Measurement Sensor

I needed an inertial measurement sensor to get the platform's orientation in 3D space. My first idea was to use the LSM6DSL-based IMU "click" module sent with the PTP-3 kit from element14. To use it, I had to install the spidev library from the terminal (accessible through the PYNQ environment):

sudo pip3 install spidev

After this, I could see the SPI devices in the /dev folder in Linux:

image

After this I could use the spidev library inside my Jupyter notebook to talk to the LSM6DSL Inertial Measurement Unit (IMU) connected to "spidev0.0".

image
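
For reference, a minimal sketch of this kind of spidev access. The register addresses are from the LSM6DSL datasheet, and the bus/device numbers assume the click 1 position maps to /dev/spidev0.0; this is illustrative rather than my exact notebook code:

import spidev

spi = spidev.SpiDev()
spi.open(0, 0)                 # /dev/spidev0.0 (click position 1)
spi.max_speed_hz = 1000000
spi.mode = 0b11                # LSM6DSL supports SPI mode 3

WHO_AM_I = 0x0F                # identity register, should read back 0x6A

def read_reg(addr):
    # Setting the MSB of the address marks the transaction as a read
    resp = spi.xfer2([addr | 0x80, 0x00])
    return resp[1]

print(hex(read_reg(WHO_AM_I)))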

However, once I got it working, I realized that only the SPI bus on click position 1 of the click mezzanine board is usable with the default PYNQ image. Since I was going to need an SPI bus for my MCP2515 (for the CAN interface), I had to let the LSM6DSL board go.

Luckily I had an MPU6050 IMU board, which works over an I2C interface, so I set about getting I2C working. This turned out to be fairly easy: the I2C devices were already appearing under /dev in Linux.

image

To use them, I installed the "python3-smbus" package:

sudo apt install python3-smbus

I also installed the i2c-tools package, which provides handy utilities for inspecting what is present on the I2C buses:

sudo apt install i2c-tools

I was able to find my MPU6050 IMU connected to i2c-3

image

The address 0x68 is the default address of the MPU6050, which is why we see it here. After this it was simple to talk to the MPU6050 using the smbus library:

image
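
A minimal sketch of that kind of access. The register addresses are from the MPU6050 register map, the device is assumed at 0x68 on bus 3, and this is not the full notebook code:

import smbus

bus = smbus.SMBus(3)           # i2c-3, where the MPU6050 showed up
MPU_ADDR = 0x68

bus.write_byte_data(MPU_ADDR, 0x6B, 0x00)   # PWR_MGMT_1: clear the sleep bit

def read_word(reg):
    # Sensor registers are big-endian 16-bit two's complement values
    high = bus.read_byte_data(MPU_ADDR, reg)
    low = bus.read_byte_data(MPU_ADDR, reg + 1)
    val = (high << 8) | low
    return val - 65536 if val & 0x8000 else val

accel_x = read_word(0x3B)      # ACCEL_XOUT_H
gyro_x = read_word(0x43)       # GYRO_XOUT_H
print(accel_x, gyro_x)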

After this I proceeded to write the full code to initialize the MPU6050 and then retrieve the acceleration and gyro values from it.

This is where I hit another major roadblock: I found out that the I2C bus would crash if I used it too heavily. I spent almost two days trying to figure out the reason and, more importantly, some way to circumvent it, but I couldn't find one. I did verify that it wasn't the IMU itself malfunctioning.

As a result, I had to change my project plans so that I would not have to query the IMU too often.

Motors & CANBUS interface

As mentioned earlier, I had decided to use two RMD-L-12025 motors from a company called MyActuator. These motors are controlled over CAN, which meant I needed a CAN bus interface in my system.

Since there was no CAN interface in the default PYNQ image, the only option I had (short of creating a custom image) was to use an MCP2515 module. These modules, based on Microchip's MCP2515 IC, provide a CAN interface over an SPI bus and are very useful if your controller does not have one. (To be clear, the Zynq processor on the Ultra96 actually has two CAN controllers; they just aren't configured for use in the PYNQ image.)

I had already sorted out the SPI communication part earlier; however, getting the MCP2515 to work wasn't as simple as I had hoped. All the examples I could find on the internet were for Arduino/ESP32 platforms and used libraries that I couldn't easily port to Python.

Eventually, I came up with a workaround. I connected an ESP32 to the MCP2515, set up a basic example using the "ACAN2515" library, and then used my trusty Saleae Logic Pro to sniff the communications between the ESP32 and the MCP2515. This gave me a good starting point: I copied the initialization sequence from the sniffed SPI data and it worked perfectly.

image

On the other hand, writing the functions to read and write the CAN buffers of the MCP2515 took more time to understand, since the chip has multiple transmit and receive buffers that the code has to cater for. It took me quite some time to get the MCP2515 code working 100%.
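
To give an idea of the low-level traffic involved, here is a rough sketch of MCP2515 register access over spidev. The command bytes are from the MCP2515 datasheet; the helper names are mine and the module is assumed on /dev/spidev0.0:

import spidev
import time

spi = spidev.SpiDev()
spi.open(0, 0)                 # MCP2515 module on /dev/spidev0.0 (assumption)
spi.max_speed_hz = 1000000

CMD_RESET, CMD_READ, CMD_WRITE = 0xC0, 0x03, 0x02

def mcp_reset():
    spi.xfer2([CMD_RESET])
    time.sleep(0.01)           # give the chip time to re-initialize

def mcp_read_reg(addr):
    return spi.xfer2([CMD_READ, addr, 0x00])[2]

def mcp_write_reg(addr, value):
    spi.xfer2([CMD_WRITE, addr, value])

mcp_reset()
print(hex(mcp_read_reg(0x0E)))   # CANSTAT: reads 0x80 (configuration mode) after reset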

Once this was done, I moved on to writing the higher-level functions for sending absolute and relative position commands to the RMD-L-12025 motors and retrieving the position data. This was relatively simple now that my CAN interface was solid; the manufacturer provides a document detailing their CAN protocol, which was straightforward to use.

Voltage-Level Translations

I want to mention here that having the click mezzanine board was very helpful, since the Zynq processing system IOs on the Ultra96-v2 run at voltages lower than 3.3V. The click mezzanine board has voltage level translators, which allowed me to use my 3.3V devices seamlessly.

Mechanics

Naturally, I needed a physical body/chassis to hold all the parts together, so I designed the necessary parts in SolidWorks.

image

I then printed the parts on a 3D printer and assembled them.

image image

It took me ~3 days of design and print iterations to get to the current state.  While there are still improvements to be made, the design is workable and I am happy with it for now.

Experiment - Platform Auto Levelling system

At this point, with all the basic building blocks built and tested, I put together my first application. My original plan was to build a stabilized gimbal that continuously uses the IMU data to generate corrections to the motors' positions, with the objective of keeping the platform pointed in a particular direction (pitch, yaw). However, since I was having issues with my I2C bus, I could not query the IMU often enough. As a result, I changed the experiment to level the camera platform based on the initial pitch angle from the IMU.
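
The core of the levelling step is estimating the pitch angle from the accelerometer's gravity vector and commanding the pitch motor by the opposite angle. A minimal, self-contained sketch of that math; the sensor and motor plumbing live in the attached notebook, and the raw counts below are hypothetical:

import math

def pitch_from_accel(ax, ay, az):
    # Pitch estimated from the gravity vector; only valid when the platform is (nearly) static
    return math.degrees(math.atan2(-ax, math.sqrt(ay * ay + az * az)))

# Hypothetical raw MPU6050 counts converted to g (16384 LSB/g at +/-2g full scale)
ax, ay, az = 2840 / 16384.0, 120 / 16384.0, 16100 / 16384.0
correction = -pitch_from_accel(ax, ay, az)   # angle to command to the pitch motor to level the platform
print(round(correction, 2))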

Here's a short video demonstration of this working:


Adding the YouTube link for the video, just in case.


The full code for this application, including all the building blocks from the SPI interface, MCP2515 code, and motor command functions, as well as the IMU code, is in the Jupyter notebook below. You will notice that while the application is simple, the building blocks required a LOT of code to get everything working. The good thing is that, now that these building blocks are in place, writing more high-level applications will be very easy.

The Jupyter notebook with all the code is attached below.

PTP3_PROJ_PLATFORM_LEVELLING.zip

Attaching the Jupyter notebook as a PDF for convenience:

Experiment - Keypoint based Stabilization

Moving towards the goal of getting stabilized video out of my gimbal, another aspect to explore was stabilizing the video coming off the platform. Mechanical/active/optical stabilization is good but has its limitations. To take stabilization to the next level, you would usually add some form of image-processing-based stabilization on top of the mechanical/active/optical stabilization.

To explore this, I wrote an algorithm based on a keypoint-matching technique. The concept is simple: you take a reference image and calculate its keypoints using one of the popular algorithms. Then, for each incoming image, you also extract keypoints and match them against the reference keypoints to obtain a transformation that aligns the new image with the original image's position on the screen. This is a simple algorithm but gives fascinating results.
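
A condensed sketch of that idea using OpenCV's ORB detector and homography estimation; the detector choice, parameters, and file name here are illustrative, not necessarily what the attached notebook uses:

import cv2
import numpy as np

orb = cv2.ORB_create(1000)
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

ref = cv2.imread("reference.jpg", cv2.IMREAD_GRAYSCALE)     # reference frame (placeholder file)
ref_kp, ref_des = orb.detectAndCompute(ref, None)

def stabilize(frame_gray):
    # Warp the incoming frame so its keypoints line up with the reference frame's
    kp, des = orb.detectAndCompute(frame_gray, None)
    matches = matcher.match(ref_des, des)                    # query = reference, train = new frame
    if len(matches) < 10:
        return frame_gray                                    # too few matches to estimate motion
    src = np.float32([kp[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([ref_kp[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)     # maps new-frame points onto the reference
    if H is None:
        return frame_gray
    h, w = ref.shape
    return cv2.warpPerspective(frame_gray, H, (w, h))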

I implemented and ran this algorithm on the Ultra96-v2 and obtained decent results. I was using a USB-C camera with a C-mount 35mm lens, which gave a very narrow field of view. The results of the stabilization were very good. Here's a short video to demonstrate this:


YouTube link to the same video:


In the video you can see that even when the camera changes position, the algorithm keeps the new frame aligned to the original frame's position. Even without any hardware acceleration, the code ran at 10-15 fps on the Ultra96-v2.

Using the profiling tools available within the Jupyter environment, I was able to identify which parts were taking the most time. It was no surprise that the image processing functions (keypoint extraction and applying image transformations) dominated the runtime, making them good candidates for acceleration in the PL. Unfortunately I found out that the OpenCV-for-PYNQ library isn't compatible with the version of PYNQ I was using (v3.0), so I would need to recompile the library for my version or build my own accelerators. This is something I plan to learn as I work on this project further.
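
For reference, the same kind of hotspot breakdown can also be produced with Python's built-in profiler from a notebook cell. A sketch reusing the stabilize() function from the example above, with a synthetic frame standing in for a real camera frame:

import cProfile
import pstats
import numpy as np

# Profile a single stabilization call to see where the time goes
# (keypoint detection, matching, warping); random data stands in for a camera frame
frame_gray = np.random.randint(0, 255, (480, 640), dtype=np.uint8)

profiler = cProfile.Profile()
profiler.enable()
stabilize(frame_gray)
profiler.disable()
pstats.Stats(profiler).sort_stats("cumulative").print_stats(10)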

All the code for this is in the attached Jupyter notebook below:

PTP3_PROJ_KeypointStabilizer.zip

Jupyter notebook as PDF for convenience:

Experiment - Keypoint based Moving video stabilization

The algorithm in the last experiment is good if you want to observe one particular (fixed) location from a distance, which naturally limits it to very specific use cases. As a natural progression, I wanted to try a variation in which the reference frame is constantly updated, so that the stabilized video follows the camera while the high-frequency jitters and movements are still smoothed out by the keypoint matching. This would be a good algorithm to integrate into the gimbal's stabilization loop, providing additional feedback on camera movement.
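
A sketch of that variation, reusing the stabilize() example and its orb/ref globals from earlier; the refresh interval is an assumption, not a value from the attached notebook:

REFRESH_EVERY = 15     # assumed: adopt a new reference frame every 15 frames

def process_stream(frames):
    # Follow the camera by periodically refreshing the reference, while the
    # keypoint alignment still cancels jitter between refreshes
    global ref, ref_kp, ref_des
    for i, frame_gray in enumerate(frames):
        if i % REFRESH_EVERY == 0:
            ref = frame_gray
            ref_kp, ref_des = orb.detectAndCompute(ref, None)
        yield stabilize(frame_gray)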

A short video demonstration is below:


In the video you can occasionally see black edges appearing at the borders of the frame. These happen when the actual frame undergoes a sudden motion, and they are proof of the algorithm doing its job.

Here is the full Jupyter notebook code:

PTP3_MovingVideoStabilizer.zip

as well as the PDF print for convenience:

Safety

One major concern I had in this project was that the motors I am using have a HUGE amount of torque (~10 N·m peak). This amount of torque can cause serious injury and/or destroy the mechanics as well as the electronics. This is a real risk because I am not using any slip rings, which means that if the motors were to spin uncontrollably (a wrong motion command?), they could tear the whole setup apart, potentially destroying the mounted electronics and camera. To address this, I implemented a few measures:

First of all, I discovered that the motors allow setting an angle limit, such that if I command a move beyond that limit, they simply will not go past it.
So I first moved the pitch motor to zero pitch angle (verified with an inclinometer)

image image

and then set that encoder position as the motor's zero using the basic utility from the manufacturer.

image


Then I checked the "safe" range of motion by moving the pitch axis in each direction, and concluded that 35 degrees of pitch up or down was safe (while still providing enough range of motion for my use case).

image

So I then set ±35 degrees as the maximum allowable angle on the motor. I verified this afterwards by trying to move the motor out of this range via position commands, which it refused to do. This gave me some sense of safety.

image

I also tried to reduce the values for Max Acceleration, Max Speed and Max Torque Current using the motor's utility, to make things safer while still getting decent motion performance. I had mixed results with this, and it is something I need to tune further in the future.

I did this process first for the pitch motor and later, once the azimuth motor and its mounting were ready, repeated it for the azimuth motor.

The other thing I did was to incorporate slots in the mechanical parts' design to allow physically limiting the range of motion. This is a backup in case the motors do, for some reason, try to go rogue. However, I have to say I am not sure my 3D-printed design can withstand the motors' torque; a machined metal part would be the way to go for this in the future.

One more safety-related decision, contrary to my original plan, was not to mount the Ultra96-v2 on the moving platform itself. This way, if something goes wrong, at least my precious Ultra96 board stays out of harm's way. The platform on which the IMU board and camera are mounted is also removable, and I only installed it after doing some "dry runs" whenever I added new code.

Major Issues Faced/Challenges

- The first issue I faced was that the PYNQ image only allows using one SPI device (position 1 on the click mezzanine board). This became an issue because I wanted to use the SPI-based IMU click board (provided with the PTP3 kit) alongside the MCP2515-based board for CAN bus communications. Since the MCP2515 also uses SPI, I had to keep only one of the two. I naturally chose the MCP2515, since I had an I2C-based alternative for the IMU (the MPU6050).

- After getting the MPU6050 working in PYNQ, I found out that if I queried the I2C bus too frequently, it simply crashed, so badly that the device even disappeared from /dev. I spent a few days trying to debug the issue but couldn't find a fix. It was a major problem for me since I was depending on this IMU for stabilizing my platform, which requires querying it a few hundred times per second. My guess is that, since the I2C lines go through an I2C mux on the Ultra96-v2, some other process on the image periodically queries other sensors (PMICs?) and at some point there is a conflict, crash, or race condition. I had to change my project plans because of this issue.

- The motors I am using are big and powerful, but I underestimated the rotational inertia seen by the azimuth motor. I had a LOT of trouble trying to tune the azimuth motor to prevent the whole top platform from going into uncontrolled oscillations.

Lessons learnt/Future plans

I learnt a lot of things over the course of this project. Here, however, I want to talk about what I would do differently if I were to start over, or as I continue working on the project in the future.

  • Use of the "PYNQ MicroBlaze Subsystem": this is the way to go for offloading IO-intensive tasks or those requiring hard timing, which is exactly what I need for my IMU-based stabilization algorithm.
    PYNQ MicroBlaze Subsystem — Python productivity for Zynq (Pynq)
  • Custom PYNQ overlays for video processing speed-up: I definitely need to learn how to build custom image-processing overlays to accelerate the parts of the algorithms that take the most time.
  • Better mechanical design: I found out that the motors are too heavy for my 3D-printed parts, even though I designed them thinking they would be strong enough. A stronger structure would mean less flex and fewer oscillations.
  • Moving forward, I would use the CAN-based MTi-680 from Xsens as my Inertial Measurement Unit (IMU). I received one of these units for a RoadTest last year and was super impressed by its performance. I couldn't use it for this project because I did not have the mating connector needed to use the MTi-680 on its own. The MTi-680 also does the sensor fusion onboard, which gives VERY good performance!

Conclusion

One aspect I am very pleased about is achieving the "ease-of-development" objective I set for myself at the beginning of the project. Using PYNQ really helped me try out several ideas I had for video stabilization and tracking; these would have taken a LOT of effort to prototype via the normal route of building things directly in programmable logic. This quick prototyping ability should allow me to finalize the vision algorithms and eventually select the functions that can benefit from a speed-up in the FPGA fabric. This will be my path of least resistance towards building some very neat smart camera platforms. I'm already excited about the projects I will build using this methodology in the future!

Thank you to AMD and Element14 for providing such a good opportunity to learn more about development with FPGAs!


Top Comments

  • saadtiwana_int, over 2 years ago, in reply to cghaba:
    Hi cghaba, thank you for your kind comment. My bad for not explaining it very well (I wasn't in the best of health when writing the post so I seem to have missed good explanations). You're right, SPI…
Comments
  • javagoza, over 2 years ago:

    I enjoyed reading about your project. Previously, I worked on a project that utilized OpenCV to monitor the status of building windows:

     Window Opening Monitor with ArUco - Final device 

    I found your use of corner recognition techniques to be quite interesting. Thank you for sharing.

  • saadtiwana_int, over 2 years ago, in reply to javagoza:

    Hi javagoza, thank you for reading. I just had a look at the project link you shared above. It's actually a very good approach for controlled environments where you need very high reliability. In my case, though, when looking out at the "uncontrolled" environment, there is no option but to utilize whatever features you can extract from the environment itself. The interesting thing is that some of these algorithms not only extract the corners/features, but also generate descriptors for each feature so that they can be matched across images. This is the major enabler for the stabilization algorithms I used.

    My plan was to feed the movement information extracted from the video to the platform stabilization algorithm. However, I need to sort out some of the issues standing in the way first. Above all, I need to get the tuning of the yaw/azimuth motor right so it doesn't oscillate, and also get the IMU data at a fast sampling rate reliably. I plan to continue working on this and will post an update blog sometime in the coming months!

    Thanks and Regards,
    Saad
