<?xml version="1.0" encoding="UTF-8" ?>
<?xml-stylesheet type="text/xsl" href="https://community.element14.com/cfs-file/__key/system/syndication/rss.xsl" media="screen"?><rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:slash="http://purl.org/rss/1.0/modules/slash/" xmlns:wfw="http://wellformedweb.org/CommentAPI/"><channel><title>Robotics</title><link>https://community.element14.com/technologies/robotics/</link><description>element14&amp;#39;s premier discussion group for all things relating to robotics. Follow this group and participate in events, blogs, and discussions.</description><dc:language>en-US</dc:language><generator>Telligent Community 12</generator><item><title>Forum Post: RE: Little Dewey (Drone:1) from the Silent Running movie</title><link>https://community.element14.com/technologies/robotics/f/forum/11610/little-dewey-drone-1-from-the-silent-running-movie/230225</link><pubDate>Thu, 28 Aug 2025 09:24:00 GMT</pubDate><guid isPermaLink="false">93d5dcb4-84c2-446f-b2cb-99731719e767:238cc6df-d724-43a7-a137-2d68f0a5a012</guid><dc:creator>RichBarr</dc:creator><description>That’s awesome! The drones from Silent Running are such iconic designs, and building one yourself must have been an incredible project. I’d love to hear more about how you approached the build and what materials or tech you used.</description></item><item><title>Forum Post: RE: I need a programming code for your PICAM 2 color tracking video.</title><link>https://community.element14.com/technologies/robotics/f/forum/40181/i-need-a-programming-code-for-your-picam-2-color-tracking-video/226586</link><pubDate>Mon, 27 Jan 2025 15:54:00 GMT</pubDate><guid isPermaLink="false">93d5dcb4-84c2-446f-b2cb-99731719e767:27449d44-a86b-4e25-8316-a3b55e0027a9</guid><dc:creator>GavinHarris</dc:creator><description>I don&amp;#39;t have direct access to the program from the video you linked, but I can guide you in creating an object-tracking system. 
You can use Python with OpenCV to track objects and capture images when they are detected. Start by using methods like background subtraction or color-based detection, and for more advanced tracking, employ algorithms such as SORT or YOLO. Let me know if you’d like help with a specific part of the code! [quote userid=&amp;quot;402694&amp;quot; url=&amp;quot;~/technologies/robotics/f/forum/40181/i-need-a-programming-code-for-your-picam-2-color-tracking-video&amp;quot;]I&amp;#39;m working on a project where I have to track object for taking images. kindly share your program of this video. https://www.youtube.com/watch?v=ljNE1D995Wo I shall be very thankful to you for this.[/quote]</description></item><item><title>Forum Post: RE: I need a programming code for your PICAM 2 color tracking video.</title><link>https://community.element14.com/technologies/robotics/f/forum/40181/i-need-a-programming-code-for-your-picam-2-color-tracking-video/223309</link><pubDate>Sat, 17 Aug 2024 10:16:00 GMT</pubDate><guid isPermaLink="false">93d5dcb4-84c2-446f-b2cb-99731719e767:da5aedc9-1c96-44ed-8c92-900cb1bf05aa</guid><dc:creator>beacon_dave</dc:creator><description>The supporting files can be found here: https://github.com/thebenheckshow/173-tbhs-auto-tracking-camera As linked from the original episode pages here: /challenges-projects/element14-presents/benheck/ben-heck-exclusive/w/documents/19280/ben-heck-s-auto-tracking-camera-part-2-episode----episode-174 /challenges-projects/element14-presents/benheck/ben-heck-exclusive/w/documents/19202/ben-heck-s-auto-tracking-camera-part-1-episode----episode-173</description></item><item><title>Forum Post: RE: I need a programming code for your PICAM 2 color tracking video.</title><link>https://community.element14.com/technologies/robotics/f/forum/40181/i-need-a-programming-code-for-your-picam-2-color-tracking-video/223306</link><pubDate>Sat, 17 Aug 2024 03:41:00 GMT</pubDate><guid 
isPermaLink="false">93d5dcb4-84c2-446f-b2cb-99731719e767:380d61e1-54ae-41bc-b2b1-ee90e0da7500</guid><dc:creator>CarlosQueens</dc:creator><description>Did you find something in the end?</description></item><item><title>Wiki Page: Featured Content Triptych Setup Doc</title><link>https://community.element14.com/technologies/robotics/w/setup/26672/featured-content-triptych-setup-doc</link><pubDate>Thu, 30 May 2024 19:11:00 GMT</pubDate><guid isPermaLink="false">93d5dcb4-84c2-446f-b2cb-99731719e767:fb6a2af6-f1ad-440c-9d30-17d44668fcb6</guid><dc:creator>pchan</dc:creator><description>Team HyperShock Multicomp Pro is proud to sponsor Team HyperShock in this season&amp;#39;s BattleBots competition. Learn more in the behind-the-scenes footage and the AMA! Video 5 Recorded AMA Optical Sensors Quiz Optical sensors are used for many different tasks, including sensing distance, color, ambient light, and even pressure. Test your knowledge with our quiz on optical sensors.</description></item><item><title>Wiki: Setup</title><link>https://community.element14.com/technologies/robotics/w/setup</link><pubDate>Thu, 30 May 2024 19:11:00 GMT</pubDate><guid isPermaLink="false">93d5dcb4-84c2-446f-b2cb-99731719e767:0b98f076-aadf-43ac-843d-d0c97d1663ef</guid><dc:creator /><description /></item><item><title>Wiki: Quiz</title><link>https://community.element14.com/technologies/robotics/w/quiz</link><pubDate>Thu, 30 May 2024 19:02:00 GMT</pubDate><guid isPermaLink="false">93d5dcb4-84c2-446f-b2cb-99731719e767:c6bfa3f8-d0ae-4e0f-abc4-d0dfbddc7169</guid><dc:creator /><description>Quiz</description></item><item><title>Wiki Page: Quiz</title><link>https://community.element14.com/technologies/robotics/w/quiz</link><pubDate>Thu, 30 May 2024 19:01:00 GMT</pubDate><guid isPermaLink="false">93d5dcb4-84c2-446f-b2cb-99731719e767:0ef52b7c-1653-4a4c-9a97-ede09a64ae83</guid><dc:creator>pchan</dc:creator><description /></item><item><title>Wiki Page: Team HyperShock Competes in 
BattleBots</title><link>https://community.element14.com/technologies/robotics/w/documents/27920/team-hypershock-competes-in-battlebots</link><pubDate>Thu, 16 May 2024 12:41:00 GMT</pubDate><guid isPermaLink="false">93d5dcb4-84c2-446f-b2cb-99731719e767:3c5ae2c8-833b-4ecc-bb73-88101a56c339</guid><dc:creator>cstanton</dc:creator><description>Multicomp Pro – the engineer’s choice – is proud to sponsor Team HyperShock in this season’s BattleBots competition. See the power supply, thermal imager, and more that they used to build their bot. Watch the behind-the-scenes &amp;amp; exclusive interview videos with Team HyperShock and James Lewis. Recorded AMA (Ask Me Anything) with Will Bales, captain of Team HyperShock. Exclusive Interview &amp;amp; Behind the Scenes Videos: Video 5 https://youtu.be/UG95J8-Jv2c Video 4 https://youtu.be/a_Nbf06Zy7k Video 3 https://youtu.be/8jMdjuBpIGc Video 2 https://youtu.be/27zdy0NGPoI Video 1 https://youtu.be/ZBiWTeZmaWE Have YOU wondered how the HyperShock BattleBot works? Find out in our Ask Me Anything webinar with Team HyperShock! Watch the recording of the event, where Will Bales answered Community questions about HyperShock, from its controllers and drive trains to the deadly vertical spinner. The Teaser community.element14.com/.../multicomp-pro-at-BattleBots.mp4 Exclusive Interview with Will Bales of Team HyperShock and James Lewis James Lewis from WorkBench Wednesdays, AKA The Bald Engineer, leaves his own lab to visit the Pits of BattleBots and joins Team HyperShock in their lab for the filming of The BattleBots World Championship VII. Join James and Team Captain Will Bales as they take the armor off HyperShock to get a better understanding of what’s inside. Watch Videos Try the Multicomp Pro tools trusted by Team HyperShock. 
Bill of Materials (Product Name | Manufacturer | Quantity):
Soldering Re-Work Station, 900W | MULTICOMP PRO | 1
Benchtop Soldering Fume Extractor, ESD-Safe | MULTICOMP PRO | 1
Wide Range Bench Power Supply, 60V, 15A | TENMA | 1
Handheld Digital Multimeter, True RMS | MULTICOMP PRO | 1
Handheld Digital Tachometer | MULTICOMP PRO | 1
Thermal Imager, Handheld, 80x60 Resolution | MULTICOMP PRO | 1
Multi-Functional VDE Cable Cutter | DURATOOL | 1
Electronic Cutters, Flush, 150mm | MULTICOMP PRO | 1
Electronic Cutters, Diagonal, 180mm | DURATOOL | 1
Electronic Cutters, Long Needle Nose, 150mm | MULTICOMP PRO | 1
Cable &amp;amp; Wire Spool Rack | PRO POWER | 1
High Temperature Masking Tape, 15mm | MULTICOMP PRO | 1
High Temperature Masking Tape, 6mm | MULTICOMP PRO | 1
Aluminium Foil Tape, Non-Conductive, 45m | PRO POWER | 1
Tinned Copper Wire, 138m | MULTICOMP PRO | 1
Heatshrink Tubing, Adhesive Lined, 6&amp;quot;, 3:1, Black | PRO POWER | 1
Heatshrink Tubing, Adhesive Lined, 6&amp;quot;, 3:1, Black | PRO POWER | 1
Heatshrink Tubing, Adhesive Lined, 6&amp;quot;, 3:1, Black | MULTICOMP PRO | 1
Soldering Tip, Conical, 3.2mm | MULTICOMP PRO | 1
Soldering Tip, Chisel, 4.6mm | MULTICOMP PRO | 1
Soldering Tip, Chisel, 2.4mm | MULTICOMP PRO | 1
Soldering Tip, Pointed, 0.5mm | MULTICOMP PRO | 1
Soldering Tip, Knife, 15mm | MULTICOMP PRO | 1</description><category domain="https://community.element14.com/technologies/robotics/tags/hypershock">hypershock</category><category domain="https://community.element14.com/technologies/robotics/tags/battlebots">battlebots</category></item><item><title>Wiki: Documents</title><link>https://community.element14.com/technologies/robotics/w/documents</link><pubDate>Thu, 16 May 2024 12:41:00 GMT</pubDate><guid isPermaLink="false">93d5dcb4-84c2-446f-b2cb-99731719e767:05d48004-b3e9-42d4-a253-6118813750b9</guid><dc:creator /><description /></item><item><title 
/><link>https://community.element14.com/technologies/robotics/b/blog/posts/ros2_2d00_learning_2d00_series_2d00_blog7?CommentId=aba87a8f-aa2b-46b5-a849-af268dd26b8e</link><pubDate>Sat, 13 Apr 2024 19:41:00 GMT</pubDate><guid isPermaLink="false">93d5dcb4-84c2-446f-b2cb-99731719e767:aba87a8f-aa2b-46b5-a849-af268dd26b8e</guid><dc:creator>crisdeodates</dc:creator><description>Haar Cascade can only be trained to identify the matching shape and size or similar features. It can&amp;#39;t be used for face recognition. But we could definitely use Deep Learning techniques to train a model to detect a specific person or multiple persons. As you noticed, a way is to extract the detected face and feed it into a trained model for recognition.</description></item><item><title /><link>https://community.element14.com/technologies/robotics/b/blog/posts/ros2_2d00_learning_2d00_series_2d00_blog6?CommentId=6dcb783c-021d-4f3d-af3b-d30221aac411</link><pubDate>Sat, 13 Apr 2024 19:33:00 GMT</pubDate><guid isPermaLink="false">93d5dcb4-84c2-446f-b2cb-99731719e767:6dcb783c-021d-4f3d-af3b-d30221aac411</guid><dc:creator>crisdeodates</dc:creator><description>Yes. As long as the camera_stream_publisher node runs in the first terminal, it will continue to capture the camera images.</description></item><item><title /><link>https://community.element14.com/technologies/robotics/b/blog/posts/ros2_2d00_learning_2d00_series_2d00_blog7?CommentId=5c99627a-5d30-458c-91a7-1a61b849dbaf</link><pubDate>Sat, 13 Apr 2024 19:31:00 GMT</pubDate><guid isPermaLink="false">93d5dcb4-84c2-446f-b2cb-99731719e767:5c99627a-5d30-458c-91a7-1a61b849dbaf</guid><dc:creator>DAB</dc:creator><description>So at this point it will just highlight that you have a face or object that you have preset. 
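The extract-and-recognize approach mentioned in the reply above can be sketched in a few lines (NumPy only; the histogram "signature" is a deliberately naive stand-in for a trained embedding model, and the 0.9 threshold is illustrative):

```python
import numpy as np

def crop_faces(frame, rects):
    """Crop each detected (x, y, w, h) face region out of the frame."""
    return [frame[y:y + h, x:x + w] for (x, y, w, h) in rects]

def face_signature(face, bins=32):
    """Stand-in 'embedding': a normalized intensity histogram.
    A real recognizer would use a trained model here."""
    hist, _ = np.histogram(face, bins=bins, range=(0, 255))
    return hist / max(hist.sum(), 1)

def same_person(sig_a, sig_b, threshold=0.9):
    """Compare two signatures by cosine similarity."""
    cos = np.dot(sig_a, sig_b) / (
        np.linalg.norm(sig_a) * np.linalg.norm(sig_b) + 1e-9)
    return cos >= threshold
```

The rects here would come from the Haar detector; the signatures of the cropped regions are then compared against a stored reference.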
The next step could be to take the face and compare it to a specific face for ID or tracking.</description></item><item><title /><link>https://community.element14.com/technologies/robotics/b/blog/posts/ros2_2d00_learning_2d00_series_2d00_blog6?CommentId=4d0465bf-cead-4192-a9c7-ce3d1e19db96</link><pubDate>Sat, 13 Apr 2024 19:26:00 GMT</pubDate><guid isPermaLink="false">93d5dcb4-84c2-446f-b2cb-99731719e767:4d0465bf-cead-4192-a9c7-ce3d1e19db96</guid><dc:creator>DAB</dc:creator><description>I assume you can continue to capture camera frames and store them in the background even when the display window is closed.</description></item><item><title /><link>https://community.element14.com/technologies/robotics/b/blog/posts/ros2_2d00_learning_2d00_series_2d00_blog5?CommentId=f349840a-5818-466c-846c-8a9cf6ffaa55</link><pubDate>Sat, 13 Apr 2024 19:06:00 GMT</pubDate><guid isPermaLink="false">93d5dcb4-84c2-446f-b2cb-99731719e767:f349840a-5818-466c-846c-8a9cf6ffaa55</guid><dc:creator>crisdeodates</dc:creator><description>Usually we wrap these complex data types as custom messages in ROS which are used in those particular cases. I will definitely cover them in the future blogs. Thanks for the suggestion DAB</description></item><item><title /><link>https://community.element14.com/technologies/robotics/b/blog/posts/ros2_2d00_learning_2d00_series_2d00_blog5?CommentId=74ae4d9d-d8a2-4b5e-b9cb-9543e3408fb3</link><pubDate>Sat, 13 Apr 2024 19:03:00 GMT</pubDate><guid isPermaLink="false">93d5dcb4-84c2-446f-b2cb-99731719e767:74ae4d9d-d8a2-4b5e-b9cb-9543e3408fb3</guid><dc:creator>DAB</dc:creator><description>Most projects use complex data structures mixing integer, floating point, boolean and character data. 
When you are running distributed and parallel computing applications, you need to send data of multiple types between computers for information and status transfers.</description></item><item><title>Blog Post: ROS2 Learning Series - Blog 7 - Advanced Programming - Part 2</title><link>https://community.element14.com/technologies/robotics/b/blog/posts/ros2_2d00_learning_2d00_series_2d00_blog7</link><pubDate>Sat, 13 Apr 2024 17:56:00 GMT</pubDate><guid isPermaLink="false">93d5dcb4-84c2-446f-b2cb-99731719e767:fc5e98d6-adcf-47e0-a9b9-a434a5852862</guid><dc:creator>crisdeodates</dc:creator><description>ROS2 Face Detection Project It is now time for us to go a bit further. We will modify and upgrade the camera stream publisher node from our camera stream project to capture a video stream from a camera, detect the faces inside the stream, overlay a rectangle over each detected face, and publish the overlaid image to the video stream topic. We will also reuse the camera stream subscriber node to subscribe to the camera topic and display it in an OpenCV window. Create a new package with dependencies. $ cd ~/ros2_ws/src $ ros2 pkg create --build-type ament_python face_detector --dependencies rclpy image_transport cv_bridge sensor_msgs std_msgs opencv-python Create the face_detector_publisher.py file and populate it with the following code: import rclpy from rclpy.node import Node from sensor_msgs.msg import Image from cv_bridge import CvBridge import cv2 import os class FaceDetectorPub(Node): &amp;quot;&amp;quot;&amp;quot; Create a FaceDetectorPub class, which is a subclass of the Node class. &amp;quot;&amp;quot;&amp;quot; def __init__(self): &amp;quot;&amp;quot;&amp;quot; Class constructor to set up the node. &amp;quot;&amp;quot;&amp;quot; # Initiate the Node class&amp;#39;s constructor and give it a name. super().__init__(&amp;#39;face_detector_pub&amp;#39;) # Create the publisher. This publisher will publish an Image # to the video_frame_data topic. 
The queue size is 10 messages. self.publisher_ = self.create_publisher(Image, &amp;#39;video_frame_data&amp;#39;, 10) # We will publish a message every 0.1 seconds. timer_period = 0.1 # seconds # Create the timer. self.timer = self.create_timer(timer_period, self.timer_callback) # Create a VideoCapture object. # The argument &amp;#39;0&amp;#39; gets the default webcam. self.cap = cv2.VideoCapture(0) # Used to convert between ROS and OpenCV images. self.br = CvBridge() # Load the haar cascade classifier. self.haar_path = os.path.expanduser(&amp;#39;~&amp;#39;) + &amp;#39;/ros2_ws/src/face_detector/resource/haar_classifier.xml&amp;#39; self.face_cascade = cv2.CascadeClassifier(self.haar_path) def timer_callback(self): &amp;quot;&amp;quot;&amp;quot; Callback function. This function gets called every 0.1 seconds. &amp;quot;&amp;quot;&amp;quot; # Capture frame-by-frame. # This method returns True/False as well # as the video frame. ret, frame = self.cap.read() # Convert to gray scale image. frame_gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY) ## Detect faces &amp;amp; returns positions of faces as Rect(x,y,w,h). face_rects = self.face_cascade.detectMultiScale(frame_gray, 1.3, 5) # Draw rectangles representing the detected faces. for (x, y, w, h) in face_rects: cv2.rectangle(frame, (x, y), (x + w, y + h), (255, 0, 0), 2) if ret == True: # Publish the image. # The &amp;#39;cv2_to_imgmsg&amp;#39; method converts an OpenCV # image to a ROS 2 image message. self.publisher_.publish(self.br.cv2_to_imgmsg(frame)) def release_camera(self): &amp;quot;&amp;quot;&amp;quot; Release the camera when done. &amp;quot;&amp;quot;&amp;quot; self.cap.release() def main(args=None): &amp;quot;&amp;quot;&amp;quot; Main function to initialize the node and start the face detector publisher. &amp;quot;&amp;quot;&amp;quot; # Initialize the rclpy library. rclpy.init(args=args) # Create the node. face_detector_publisher_node = FaceDetectorPub() # Spin the node so the callback function is called. 
try: rclpy.spin(face_detector_publisher_node) finally: # Release the camera. face_detector_publisher_node.release_camera() # Destroy the node explicitly. face_detector_publisher_node.destroy_node() # Shutdown the ROS client library for Python. rclpy.shutdown() if __name__ == &amp;#39;__main__&amp;#39;: main() For the face detection, we will use a machine learning based approach called haar cascade. Even though it is particularly popular for detecting faces, it can be trained to detect other objects as well. This approach has the ability to rapidly evaluate features at multiple scales and efficiently reject background regions. OpenCV includes support for haar cascades for object detection, including face detection. This is carried out using pre-trained Haar cascade classifiers for various objects, including faces. Trained face detection cascade values are provided in the form of an xml file. You can download the xml from here . Put this haar cascade xml file in the resource folder inside the package. Add the entry points to setup.py inside console_scripts section: &amp;#39;face_detector_pub = face_detector.face_detector_publisher:main&amp;#39;, Since we already specified our dependencies during the package creation itself, we do not need to edit the package.xml file. Build the package. $ cd ~/ros2_ws $ colcon build --packages-select face_detector Source the new package. $ source install/setup.bash Run the face detector publisher node to start publishing the camera frames. $ ros2 run face_detector face_detector_pub Run the camera stream subscriber node in a separate terminal. $ cd ~/ros2_ws $ source install/setup.bash $ ros2 run camera_stream camera_stream_sub The camera frames with the detected faces will be displayed in an OpenCV window. 
Press ESC on the active OpenCV window to exit.</description><category domain="https://community.element14.com/technologies/robotics/tags/Robot%2boperating%2bSystem">Robot operating System</category><category domain="https://community.element14.com/technologies/robotics/tags/robotics">robotics</category><category domain="https://community.element14.com/technologies/robotics/tags/ROS2">ROS2</category><category domain="https://community.element14.com/technologies/robotics/tags/ROS">ROS</category></item><item><title>Blog Post: ROS2 Learning Series - Blog 6 - Advanced Programming - Part 1</title><link>https://community.element14.com/technologies/robotics/b/blog/posts/ros2_2d00_learning_2d00_series_2d00_blog6</link><pubDate>Sat, 13 Apr 2024 17:35:00 GMT</pubDate><guid isPermaLink="false">93d5dcb4-84c2-446f-b2cb-99731719e767:51b7a202-4065-4f5b-ad6b-542deda5ee69</guid><dc:creator>crisdeodates</dc:creator><description>ROS2 Camera Stream Project Previously we got some basic insight on how to program ROS2 nodes and services. Now let us take a step further in ROS2 programming. We will create a publisher node to capture a video stream from a camera and publish it to a topic. We will also create a subscriber node who will subscribe to the camera stream topic and display it in an OpenCV window. Install the OpenCV dependencies. $ pip install opencv-python Create a new ROS2 package with dependencies. $ cd ~/ros2_ws/src $ ros2 pkg create --build-type ament_python camera_stream --dependencies rclpy image_transport cv_bridge sensor_msgs std_msgs opencv-python Create the camera_stream_publisher.py file and populate it with the following code : import rclpy from rclpy.node import Node from sensor_msgs.msg import Image from cv_bridge import CvBridge import cv2 class CameraStreamPub(Node): &amp;quot;&amp;quot;&amp;quot; Create a CameraStreamPub class, which is a subclass of the Node class. 
&amp;quot;&amp;quot;&amp;quot; def __init__(self): &amp;quot;&amp;quot;&amp;quot; Class constructor to set up the node. &amp;quot;&amp;quot;&amp;quot; # Initiate the Node class&amp;#39;s constructor and give it a name. super().__init__(&amp;#39;camera_stream_pub&amp;#39;) # Create the publisher. This publisher will publish an Image # to the video_frame_data topic. The queue size is 10 messages. self.publisher_ = self.create_publisher(Image, &amp;#39;video_frame_data&amp;#39;, 10) # We will publish a message every 0.1 seconds. timer_period = 0.1 # seconds # Create the timer. self.timer = self.create_timer(timer_period, self.timer_callback) # Create a VideoCapture object. # The argument &amp;#39;0&amp;#39; gets the default webcam. self.cap = cv2.VideoCapture(0) # Used to convert between ROS and OpenCV images. self.br = CvBridge() def timer_callback(self): &amp;quot;&amp;quot;&amp;quot; Callback function. This function gets called every 0.1 seconds. &amp;quot;&amp;quot;&amp;quot; # Capture frame-by-frame. # This method returns True/False as well # as the video frame. ret, frame = self.cap.read() if ret == True: # Publish the image. # The &amp;#39;cv2_to_imgmsg&amp;#39; method converts an OpenCV # image to a ROS 2 image message. self.publisher_.publish(self.br.cv2_to_imgmsg(frame)) def main(args=None): &amp;quot;&amp;quot;&amp;quot; Main function to initialize the node and start the camera stream publisher. &amp;quot;&amp;quot;&amp;quot; # Initialize the rclpy library. rclpy.init(args=args) # Create the node. camera_stream_publisher_node = CameraStreamPub() # Spin the node so the callback function is called. rclpy.spin(camera_stream_publisher_node) # Destroy the node explicitly. camera_stream_publisher_node.destroy_node() # Shutdown the ROS client library for Python. 
rclpy.shutdown() if __name__ == &amp;#39;__main__&amp;#39;: main() Create the camera_stream_subscriber.py file and populate it with the following code: import rclpy from rclpy.node import Node from sensor_msgs.msg import Image from cv_bridge import CvBridge import cv2 class CameraStreamSub(Node): &amp;quot;&amp;quot;&amp;quot; Create a CameraStreamSub class, which is a subclass of the Node class. &amp;quot;&amp;quot;&amp;quot; def __init__(self): &amp;quot;&amp;quot;&amp;quot; Class constructor to set up the node. &amp;quot;&amp;quot;&amp;quot; # Initiate the Node class&amp;#39;s constructor and give it a name. super().__init__(&amp;#39;camera_stream_sub&amp;#39;) # Create the subscriber. This subscriber will receive an Image # from the video_frames topic. The queue size is 10 messages. self.subscription = self.create_subscription( Image, &amp;#39;video_frame_data&amp;#39;, self.listener_callback, 10) self.subscription # prevent unused variable warning # Used to convert between ROS and OpenCV images. self.br = CvBridge() self.running = True def listener_callback(self, data): &amp;quot;&amp;quot;&amp;quot; Callback function to receive and display images. &amp;quot;&amp;quot;&amp;quot; if self.running: # Convert ROS Image message to OpenCV image. current_frame = self.br.imgmsg_to_cv2(data) # Display image. cv2.imshow(&amp;quot;camera_stream&amp;quot;, current_frame) # Raise SystemExit exception to quit if ESC key is pressed. if cv2.waitKey(1) == 27: self.running = False cv2.destroyAllWindows() raise SystemExit def main(args=None): &amp;quot;&amp;quot;&amp;quot; Main function to initialize the node and start the camera stream subscriber. &amp;quot;&amp;quot;&amp;quot; # Initialize the rclpy library. rclpy.init(args=args) # Create the node. camera_stream_subscriber_node = CameraStreamSub() # Spin the node so the callback function is called. try: rclpy.spin(camera_stream_subscriber_node) # Exit if SystemExit exception is raised. 
except SystemExit: print(&amp;quot;Camera stream output stopped&amp;quot;) # Destroy the node explicitly. camera_stream_subscriber_node.destroy_node() # Shutdown the ROS client library for Python. rclpy.shutdown() if __name__ == &amp;#39;__main__&amp;#39;: main() Add the entry points to setup.py inside console_scripts section: &amp;#39;camera_stream_pub = camera_stream.camera_stream_publisher:main&amp;#39; , &amp;#39;camera_stream_sub = camera_stream.camera_stream_subscriber:main&amp;#39; , Since we already specified our dependencies during the package creation itself, we do not need to edit the package.xml file. Build the package. $ cd ~/ros2_ws $ colcon build --packages-select camera_stream Source the new package. $ source install/setup.bash Run the camera stream publisher node to start publishing the camera frames. $ ros2 run camera_stream camera_stream_pub In another terminal, run the camera stream subscriber node. $ cd ~/ros2_ws $ source install/setup.bash $ ros2 run camera_stream camera_stream_sub The camera frames will be displayed in an OpenCV window. 
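As an aside on what CvBridge is doing in both nodes: conceptually it flattens the NumPy frame plus its dimensions into the message, and rebuilds the array on the subscriber side. A simplified, NumPy-only illustration (the real sensor_msgs/Image also carries a header, row step, and encoding handling):

```python
import numpy as np

def to_image_msg(frame):
    """Pack a BGR frame into a dict shaped loosely like sensor_msgs/Image."""
    height, width, _ = frame.shape
    return {"height": height, "width": width,
            "encoding": "bgr8", "data": frame.tobytes()}

def from_image_msg(msg):
    """Rebuild the OpenCV-style array on the receiving side."""
    return np.frombuffer(msg["data"], dtype=np.uint8).reshape(
        msg["height"], msg["width"], 3)
```

A frame should survive the round trip unchanged, which is exactly what lets the subscriber display what the publisher captured.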
You can press ESC on the active OpenCV window to exit.</description><category domain="https://community.element14.com/technologies/robotics/tags/Robot%2boperating%2bSystem">Robot operating System</category><category domain="https://community.element14.com/technologies/robotics/tags/robotics">robotics</category><category domain="https://community.element14.com/technologies/robotics/tags/ROS2">ROS2</category><category domain="https://community.element14.com/technologies/robotics/tags/ROS">ROS</category></item><item><title /><link>https://community.element14.com/technologies/robotics/b/blog/posts/ros2_2d00_learning_2d00_series_2d00_blog5?CommentId=9f5618d8-b74f-4a69-9f40-557de552300b</link><pubDate>Thu, 11 Apr 2024 20:46:00 GMT</pubDate><guid isPermaLink="false">93d5dcb4-84c2-446f-b2cb-99731719e767:9f5618d8-b74f-4a69-9f40-557de552300b</guid><dc:creator>crisdeodates</dc:creator><description>Yes DAB. Could you give some examples? I am also working on publishing some advanced projects as part of this series. Maybe I can try to include your recommendations as well.</description></item><item><title /><link>https://community.element14.com/technologies/robotics/b/blog/posts/ros2_2d00_learning_2d00_series_2d00_blog5?CommentId=46429628-516f-4c06-9da4-60d65fa02597</link><pubDate>Thu, 11 Apr 2024 20:26:00 GMT</pubDate><guid isPermaLink="false">93d5dcb4-84c2-446f-b2cb-99731719e767:46429628-516f-4c06-9da4-60d65fa02597</guid><dc:creator>DAB</dc:creator><description>I assume you can use more complex data structures?</description></item></channel></rss>