My Path to Learn Robotics #2 - Calibrating RealSense Sensors

yosoufe
14 Mar 2021

Hello,

 

  • Introduction
  • Calibration using checkerboard pattern
  • Calibration from the CAD
  • Conclusion
  • Next Step

 

Introduction

In the previous blog (My Path to Learn Robotics) I demoed a first attempt at 3D scanning using RealSense sensors. So far we have seen that the visualization is not perfectly consistent: the objects are fixed in the real world, yet they move in the visualization as the sensors move. This is what we have so far.

 

[GIF: the point cloud drifting in the visualization while the sensors move]

One source of error is that the two sensors are not calibrated properly: in the demo above, the displacement between the two sensors is ignored. At least, that is how I thought it worked.

 

I used two methods to find the pose of the sensors relative to each other.

 

Calibration using checkerboard pattern

A very kind person wrote a script for exactly this purpose and opened a pull request on GitHub with it. You can find the script here: https://github.com/IntelRealSense/librealsense/pull/4355/files

 

All I needed was a checkerboard, which I downloaded from somewhere, printed, and then measured the size of each cell. For me it is around 29.8 millimetres.

 

I held the checkerboard in front of the cameras so that it was in the field of view of both at once, and took pictures of it in different orientations. The script helps with this.

 

The script calibrates the left imagers of both sensors to find the relation between the point cloud from the D435i and the pose frame from the T265. It uses OpenCV's fish-eye calibration tools. Here is a pair of sample images of the same scene from the two cameras:

 

[Image: the checkerboard as seen by the first camera's left imager]

[Image: the same scene as seen by the second camera's left imager]

Using multiple pairs of such images, OpenCV estimates the pose of these two cameras relative to each other.
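Conceptually, the relative pose falls out of the board poses seen by each camera: both cameras observe the same board at the same instant, so the shared board frame links the two camera frames. A minimal numpy sketch of that composition, with made-up board poses standing in for what a calibration routine would report:

```python
import numpy as np

def pose(R, t):
    """Build a 4x4 homogeneous transform from a 3x3 rotation and a translation."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

def rot_x(a):
    """Rotation about the x axis by angle a (radians)."""
    c, s = np.cos(a), np.sin(a)
    return np.array([[1, 0, 0],
                     [0, c, -s],
                     [0, s, c]])

# Hypothetical poses of the same board in each camera's frame.
T_board_in_cam1 = pose(rot_x(0.1), [0.05, 0.00, 0.60])
T_board_in_cam2 = pose(rot_x(0.1), [0.02, 0.03, 0.58])

# Chain through the shared board frame: cam1 -> board -> cam2.
T_cam1_to_cam2 = T_board_in_cam2 @ np.linalg.inv(T_board_in_cam1)
```

In practice the script averages this estimate over many board orientations, which is why multiple image pairs are needed.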

 

The RealSense SDK offers an API to get the intrinsic parameters of the cameras, as well as the extrinsics between cameras within the same sensor. It is quite powerful. The script also uses this feature of the SDK to compute the final relative pose.

 

Using this tool, I get the following relative pose:

t265_to_d435 = np.array([[ 0.99998,  0.00667, -0.00037, -0.01005],
                         [ 0.00667, -0.99986,  0.01551,  0.02636],
                         [-0.00026, -0.01551, -0.99988, -0.01577],
                         [ 0.     ,  0.     ,  0.     ,  1.     ]])

 

As you can see, the rotation part makes sense, but the translation part is a bit weird if you compare it with the image from the previous blog.
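The rotation part can be sanity-checked numerically: a proper rotation must be orthonormal with determinant +1, and this one should be close to a 180-degree flip about the x axis, i.e. diag(1, -1, -1), since the sensors face the same way with flipped y and z axes. A small check (reusing the matrix from the script's output):

```python
import numpy as np

# The matrix reported by the calibration script.
t265_to_d435 = np.array([[ 0.99998,  0.00667, -0.00037, -0.01005],
                         [ 0.00667, -0.99986,  0.01551,  0.02636],
                         [-0.00026, -0.01551, -0.99988, -0.01577],
                         [ 0.     ,  0.     ,  0.     ,  1.     ]])

R = t265_to_d435[:3, :3]

# A proper rotation is orthonormal with determinant +1.
orthonormality_error = np.linalg.norm(R @ R.T - np.eye(3))
determinant = np.linalg.det(R)

# Distance to the expected 180-degree flip about x.
flip_about_x = np.diag([1.0, -1.0, -1.0])
distance_to_flip = np.linalg.norm(R - flip_about_x)
```

Both errors come out small, which is why the rotation part looks plausible while the translation remains suspect.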

 

Actually, using this matrix the results did not change much. That might be because the script uses OpenCV's fish-eye library while one of the cameras is not a fish-eye camera. I did not create a new GIF, since the result was no better than what I had at the beginning.

 

 

So let's try a different way.

 

Calibration from the CAD

 

Very simply, I can look at the datasheets and at my CAD file to find the translation between these cameras. The datasheets state that the origin of the T265 is the midpoint between its two imagers, and that the origin of the D435i's point cloud is the center of its left imager. So let's measure that in the CAD.

 

[Image: measuring the offset between the T265 origin and the D435i left imager in the CAD model]

 

So this time I am using the following:

t265_to_d435 = np.array([[1.,  0.,  0., 0.00840],
                         [0., -1.,  0., 0.029],
                         [0.,  0., -1., 0.],
                         [0.,  0.,  0., 1.]])
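One way to sanity-check a matrix like this is to push a few points through it and see whether they land where intuition says they should. A small sketch (the point coordinates are made up for illustration):

```python
import numpy as np

# CAD-derived transform: a 180-degree flip about x plus the measured
# offsets between the two sensor origins.
t265_to_d435 = np.array([[1.,  0.,  0., 0.00840],
                         [0., -1.,  0., 0.029],
                         [0.,  0., -1., 0.],
                         [0.,  0.,  0., 1.]])

def transform_points(T, points):
    """Apply a 4x4 homogeneous transform to an (N, 3) array of points."""
    homogeneous = np.hstack([points, np.ones((len(points), 1))])
    return (T @ homogeneous.T).T[:, :3]

# A point one metre straight ahead in one frame should land one metre
# away along the flipped z axis, shifted by the CAD offsets.
cloud = np.array([[0.0, 0.0, 1.0],
                  [0.1, 0.2, 0.5]])
transformed = transform_points(t265_to_d435, cloud)
```

The same helper applies the transform to a whole point cloud at once, which is exactly what the reconstruction code needs to do per frame.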

 

Again, I did not get better results; the point cloud is no better than before.

 

Conclusion

So I guess the error in the sensors, and in the T265 in particular, is much larger than the calibration error. Or there is a terrible bug in my code, which is very possible. I took a look and did not find anything; you may also have a look. It is here: https://github.com/yosoufe/SelfStudyRobotics/blob/master/myApp/tests/test_reconstruct.py

 

Next Step

It seems that the T265 does not give me any better results. Now my target is to learn about point cloud registration methods, point cloud feature extractors, and feature descriptors.
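As a preview of where this is going: the core of many registration methods, including the per-iteration step of ICP, is the least-squares rigid alignment of two point sets with known correspondences, which has a closed-form SVD solution (the Kabsch algorithm). A minimal numpy sketch on synthetic data:

```python
import numpy as np

def rigid_align(source, target):
    """Least-squares rotation R and translation t mapping source onto
    target, given known point correspondences (Kabsch algorithm).
    ICP alternates this step with a nearest-neighbour search that
    guesses the correspondences."""
    src_c = source - source.mean(axis=0)
    tgt_c = target - target.mean(axis=0)
    U, _, Vt = np.linalg.svd(src_c.T @ tgt_c)
    # Guard against a reflection in the SVD solution.
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = target.mean(axis=0) - R @ source.mean(axis=0)
    return R, t

# Synthetic check: rigidly move a toy cloud and recover the motion.
rng = np.random.default_rng(0)
cloud = rng.standard_normal((50, 3))
angle = 0.3
R_true = np.array([[np.cos(angle), -np.sin(angle), 0],
                   [np.sin(angle),  np.cos(angle), 0],
                   [0, 0, 1]])
t_true = np.array([0.1, -0.2, 0.05])
moved = cloud @ R_true.T + t_true

R_est, t_est = rigid_align(cloud, moved)
```

With exact correspondences this recovers the true motion to machine precision; the hard part, and the point of feature extractors and descriptors, is finding those correspondences between two real scans.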

 

Please let me know what you think. Thanks for reading this far; I hope these blogs are worth your time. Let me know if you have any questions.

 

Some Links:

  • Previous Blog:
    • My Path to Learn Robotics