Engagement
  • Author: jomoenginer
  • Date Created: 14 May 2019 5:48 AM
  • 3996 views
  • 10 likes
  • 7 comments
  • Tags: nvidia jetson nano, robotics, jetbot, morobotsch, deep learning, computer vision, pytorch

NVIDIA Jetson Nano: Collision Avoidance

jomoenginer
14 May 2019

Month of Robots


 

Overview

A typical technique used with rolling robots to avoid obstacles is to use sensors such as infrared sensors, ultrasonic sensors, light sensors and similar devices.  This is mainly due to the limited processing power of the microprocessor or microcontroller used.  Using some sort of Machine Learning technique, although not impossible, could be either too compute intensive or overkill in these instances.  Also, the platforms that could support the processing needed for applications such as Computer Vision were quite costly and not in reach of the standard Maker or budding student; e.g. NVIDIA Jetson TX2 - $299-$749 US, Jetson AGX Xavier - $1,099 US.  The NVIDIA Jetson Nano provides an AI platform that is both powerful and in a decent price range at $99 US.  Although at present the documentation for the Nano is still a bit lacking, the NVIDIA folks have provided a nice set of Computer Vision examples for the JetBot by way of Jupyter Notebooks.  Jupyter Notebooks offer a means to both teach a subject such as Python and run examples within the Notebook itself, without the need to drop to a command-line prompt.  One example provided for the JetBot is the Collision Avoidance example, which demonstrates the power of the Nano to self-navigate via a camera module.

Related Posts

NVIDIA Jetson Nano

NVIDIA Jetson Nano: JetBot Intro

NVIDIA Jetson Nano: JetBot Assemble

 

 

NVIDIA Jetson Nano JetBot Jupyter Notebooks

https://github.com/NVIDIA-AI-IOT/jetbot/tree/master/notebooks

 

Collision Avoidance Notebook

https://github.com/NVIDIA-AI-IOT/jetbot/tree/master/notebooks

 

The Collision Avoidance Notebooks are broken down into three sections:

  • Data Collection
  • Train Model
  • Live Demo

Due to the compute-intensive nature of the first two steps, it is best to power the Nano via the 5V 4A barrel jack.  The Live Demo can be run on the battery pack, since you will want the bot to wander around.

 

Install JetBot Jupyter Notebooks

 

Before getting started with the Collision Avoidance steps, download and install the JetBot Jupyter Notebooks from the NVIDIA GitHub.

https://github.com/NVIDIA-AI-IOT/jetbot/wiki/software-setup

 

    1. Log in to the JetBot and run the following at the command line:

git clone https://github.com/NVIDIA-AI-IOT/jetbot
cd jetbot
sudo python3 setup.py install

 

    2. If rsync is not already installed, run the following to install it.

sudo apt-get install rsync

    3. If still in the jetbot folder, change directory to the next level up.

cd ../

    4. To update and replace the existing Notebooks on the JetBot, run the following.

rsync -rv   jetbot/notebooks/* /home/jetbot/Notebooks/

 

    5. To access the Jupyter Notebooks on the JetBot with a browser, use the IP address of the JetBot (this should be displayed on the PiOLED) and port 8888, as in the following.

http://<jetbot_ip_address>:8888

 

Collision Avoidance - Data Collection

To start the Data Collection, navigate a browser to the Jupyter Notebooks on the JetBot via port 8888 and select the data_collection.ipynb Notebook from the File Viewer on the left of the page.  The Jupyter Notebooks use traitlets and widgets to display input and output options on the Notebook page.

 

The first page provides some description of the Data Collection Process.

 


 

 

Scroll down the page to get to the "Display live camera feed" code.

 


 

Click the '[ ]' marker next to the listed code and click the run (arrow) button at the top of the page.  This will connect to the JetBot camera and create a display widget on the page.


 

These widgets can be displayed in their own tab by right-clicking the image and selecting "Create New View for Output".


 

The camera output should show in a new Output View tab.


 

The next step will create a 'dataset' folder under the collision_avoidance folder to hold the free and blocked images that will be used to create the model for the Live Demo.  Again, click the '[ ]' marker to the left of the code and then run the code.

NOTE: The 'dataset' folder should appear in the File Viewer on the left side of the screen.  If not, then refresh the file list view.
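The folder-creation cell boils down to a few lines of standard Python.  This is a sketch rather than the notebook's exact cell (which handles the already-exists case with a try/except); the 'dataset/blocked' and 'dataset/free' paths match the layout the later steps expect.

```python
import os

# Two sub-folders under 'dataset', one per class the model will learn.
# exist_ok avoids an error when the cell is re-run.
blocked_dir = 'dataset/blocked'
free_dir = 'dataset/free'

for d in (free_dir, blocked_dir):
    os.makedirs(d, exist_ok=True)
```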


 

To collect the images used to build the model, 'add free' and 'add blocked' buttons are shown on the page, which allow the user to add images for the free and blocked paths.  Run the code to get the buttons to appear.

NOTE: At this step the buttons are not active.


 

As with the camera view, right click the button image and select "Create New View for Output" to get the image to appear in its own tab.


 

The 'uuid' Python package is used to create a unique identifier: 'uuid1' is imported and added to each image name so that every image is uniquely identified.

Scroll down the page and run the code.
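The naming scheme can be sketched as follows (image_filename is a hypothetical helper name for illustration; the notebook builds the path inline in the button callbacks):

```python
from uuid import uuid1

def image_filename(directory: str) -> str:
    """Build a unique image path: uuid1() encodes the machine address and the
    current timestamp, so no two snapshots get the same name."""
    return f"{directory}/{uuid1()}.jpg"
```

Calling it repeatedly against the same class folder yields a distinct .jpg path every time.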


 

At this stage, the buttons should be active and the data collection can begin.  First, place an object in the view of the camera representing a blocked state, then click the 'add blocked' button to add the image to the dataset folder.

NOTE: The counter to the left of the button should increment by one.

 


 

Move the object or the bot so the camera view is not blocked, or 'free', and click the 'add free' button to add the image.


 

Continue to add free states and blocked states using a variety of objects as well as different lighting to create a workable dataset.

NOTE: It is best to collect an equal number of blocked and free states; otherwise errors could be seen in the later processes.

 

Once the dataset collection is complete, scroll down the page to the code where the dataset is zipped and run it.  The dataset.zip file should appear in the file viewer list.
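The zipping cell shells out to a zip command; an equivalent sketch using only Python's standard library is shown below (zip_dataset is a hypothetical helper, not the notebook's own code):

```python
import os
import shutil

def zip_dataset(dataset_dir: str = 'dataset') -> str:
    """Archive the dataset folder into dataset.zip in the current directory,
    mirroring the notebook's zip step.  Returns the path of the archive."""
    parent = os.path.dirname(os.path.abspath(dataset_dir))
    name = os.path.basename(dataset_dir)
    return shutil.make_archive(name, 'zip', root_dir=parent, base_dir=name)
```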


 

Collision Avoidance - Train Model

 

The Train Model step uses PyTorch to process the images collected in the Data Collection step into a model that will be loaded into the Nano's GPU to identify the possible blocked and free states.  PyTorch is an open source deep learning platform built on tensor libraries.

 

PyTorch

https://pytorch.org/

 

To start the Train Model process, open the train_model.ipynb Jupyter Notebook in the file viewer listing.


 

Click the '[ ]' marker next to the listed code to import the torch and torchvision packages from PyTorch.  The torchvision package has popular datasets, model architectures, and common image transformations for computer vision.

https://pytorch.org/docs/stable/torchvision/index.html

 

Scroll down the page to the "Upload and extract dataset" section and run the unzip code to extract the dataset images.

NOTE: The dataset folder was created in the Data Collection step, so if the dataset folder already exists, do not run this code otherwise it will hang.

 

Next, run the code listed under "Create dataset instance".  This will use the ImageFolder dataset class from torchvision.datasets.


 

The next step will split the dataset into training and test sets. Click the "[ ]" image and run the code.
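Conceptually, the split just shuffles the collected images and carves off a test set.  The notebook itself does this with torch.utils.data.random_split; the same idea in plain Python (split_dataset is an illustrative helper, and the 50/50 numbers below are only an example):

```python
import random

def split_dataset(items, test_count):
    """Shuffle, then reserve test_count items for testing and the rest for training."""
    items = list(items)
    random.shuffle(items)
    return items[test_count:], items[:test_count]
```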


 

Run the code under "Create data loaders to load data in batches" to create two DataLoader instances.


 

 

The next step, "Define the neural network", uses the "alexnet" model to process the datasets and loads the model into the GPU for processing.

 


 

Once the model is loaded into the GPU, the "Train the neural network" step can be performed using 30 epochs.

NOTE: This process will take some time and completes by creating a "best_model.pth" file.


 

When the training is complete, each epoch's results will show on the page and the "best_model.pth" file should appear in the file viewer.


 

This completes the Model Training.  It is best to reboot the Nano at this point, or shut it down and switch from the barrel jack power to the battery power so the bot can wander about in the Live Demo.

 

Collision Avoidance - Live Demo

 

The Live Demo takes the images that were collected in the Data Collection step, along with the data model created in the Train Model step, to control the JetBot via computer vision.  The JetBot will avoid the objects that were collected as blocked images and navigate around them in a NASCAR manner, always turning left, based on the images collected as free.  This has been a hit-and-miss step in that sometimes it works and sometimes it does not.  I've collected and processed a data model that tends to work consistently, so that will be used in this example.

 

To start the Live Demo, it is best to power the JetBot via the battery power bank so it can wander freely.  Once the bot is booted, connect to the Jupyter Notebooks on the JetBot via:

http://<jetbot_ip_address>:8888

 

Once connected open the live_demo.ipynb Notebook.

 


 

Run the code under "Load the trained model" to load the alexnet model from PyTorch, load the 'best_model.pth' model created in the Train Model step, and move the processed data into the GPU.

 


 

Next, run the code under "Create the preprocessing function" to load the preprocessing code.
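The preprocessing function converts the camera's 224x224 BGR frame into the normalized tensor the network expects.  A CPU-only sketch is shown below; the notebook's version additionally moves the tensor onto the GPU.

```python
import numpy as np
import torch

# ImageNet channel statistics, used to normalise the input per channel.
mean = torch.tensor([0.485, 0.456, 0.406]).view(3, 1, 1)
std = torch.tensor([0.229, 0.224, 0.225]).view(3, 1, 1)

def preprocess(bgr8_frame: np.ndarray) -> torch.Tensor:
    """224x224x3 BGR uint8 camera frame -> normalised 1x3x224x224 float tensor."""
    x = bgr8_frame[..., ::-1].copy()           # BGR -> RGB
    x = torch.from_numpy(x).float() / 255.0    # uint8 -> float in [0, 1]
    x = x.permute(2, 0, 1)                     # HWC -> CHW
    x = (x - mean) / std                       # per-channel normalisation
    return x.unsqueeze(0)                      # add a batch dimension
```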

 


 

The next step will use the Jupyter Notebook traitlets and widgets to create an image box for the camera and a slider indicating when the bot is blocked or free to move.


 

Then run the code to create an instance of the jetbot Robot to drive the motors.  As shown previously, right click on the camera image widget and select "Create New View for Output" to open it in a separate Output view tab.


 

Next, run the code that imports "torch.nn.functional"; it will pre-process the camera image, execute the neural network, and cause the bot to turn left if blocked or drive straight if free.  Run the "camera.observe" code to start the image capture from the camera.  At this stage, the slider should move up and down indicating whether the bot is blocked or free.
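The per-frame decision reduces to a softmax over the network's two logits and a threshold.  In a rough sketch (decide is a hypothetical name; in the notebook this logic lives inside the camera update callback and calls the robot's left/forward methods directly):

```python
import torch
import torch.nn.functional as F

def decide(logits: torch.Tensor, threshold: float = 0.5) -> str:
    """Map the two-class network output to a motion command.
    Index 0 is taken as the 'blocked' class probability."""
    prob_blocked = float(F.softmax(logits.flatten(), dim=0)[0])
    return 'left' if prob_blocked > threshold else 'forward'
```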

 


 

At this point, the bot can be placed on the ground and will wander about, avoiding the objects that were collected or objects closely resembling them.

NOTE: There was an issue when avoiding 3D-printed objects that were of a neutral color.

 

Video walking through the Data Collection, Train Model and Live Demo steps of the Collision Avoidance example.

NOTE: The Live Demo was a fail and had to be redone.


 

 

A successful Live Demo using a data set that was collected previously.


Conclusion

 

The NVIDIA Jetson Nano JetBot Collision Avoidance example is a fairly good way to get familiar with deep learning and how to use camera images to control the direction of a mobile robot.  The example does take a bit of work, and at times the code would hang with no indication of the issue from the Jupyter Notebook.  The next step is to take this further and create a non-Jupyter-Notebook version so the bot can navigate autonomously without the need for browser access.  The Nano is an impressive platform that has much potential for AI, deep learning and machine learning applications.


Top Comments

  • jomoenginer
    jomoenginer over 6 years ago in reply to shabaz +3
    Thanks. That guy with the Flame Lawn Mower looks like he is a Tim Allen 'Home Improvement' fan. Considering the Robotic Lawn Mowers go for over $1,500 in the US, the Nano might be a relatively cheaper…
  • shabaz
    shabaz over 6 years ago +2
    Hi Jon, That was very informative, and a lot of fun to watch the 'bot doing it's thing : ) Amazing how a simple robot can become extremely sophisticated with a camera, and the trained-up Jetson Nano. If…
  • dubbie
    dubbie over 6 years ago in reply to jomoenginer

    Jon,

     

    All things are possible, but splitting it would mean redesigning using TinkerCAD. I could probably do it but it would be a lot of effort. Whereas if my 3D printer was just 1 cm bigger I could print the existing design - much easier.

     

    Dubbie

  • danielw
    danielw over 6 years ago in reply to dubbie

    could you split the parts and glue them together with an overlapping piece maybe?  I follow the BB8 Builders club and there is a lot of gluing, filling and sanding on that.  I also watch some of the 3D print prop builds which show how people are sticking things together.

I do know that a lot of the time if I use superglue I'm disappointed.  I guess because superglue doesn't fill gaps. Epoxy can take ages to set.  And I don't have a 3D printing pen to try and use that for plastic welding / gluing.

  • jomoenginer
    jomoenginer over 6 years ago in reply to dubbie

    Thanks.  I suppose there could be a way to restructure the 3D image of the chassis to fit your printer.  However, you could use another type of material such as Expanded PVC to create the base and place the parts on there.  I also have seem someone even use a Pololu Romi chassis for their JetBot.

  • dubbie
    dubbie over 6 years ago

    A great project and a worthy winner. If only my 3D printer was just that bit bigger I could make one of these as well! I think I might have 3D printer envy!

     

    Dubbie

  • shabaz
    shabaz over 6 years ago in reply to jomoenginer

    Weed-killing robot is a great idea. I hope another robotic project14 challenge (maybe for outdoor 'bots) comes again quickly : )
