RoadTest: BeagleBone AI
Evaluation Type: Development Boards & Tools
Did you receive all parts the manufacturer stated would be included in the package?: True
What other parts do you consider comparable to this product?: The most obvious competitor would be RPI4.
What were the biggest problems encountered?: The majority of the demo code for TIDL (TI Deep Learning) as of November 2019 is in C++, so there is little Python support. There is no TensorFlow Lite support as of mid-December 2019. Deep CNN and RNN networks written in TensorFlow/Keras are not supported, so you either have to convert them to C++ manually or use the caffe-jacinto framework.
I received a BeagleBone AI for review from E14 3 weeks ago. My initial idea was to build a smart camera application.
The story below shows some of the steps I took to bring this project to life.
The first thing to know is that you need a USB Type-C cable to get started.
The second thing to know is that the board gets hot, so active cooling is a must to avoid processor shutdowns in the middle of work.
The package came with a small fan that mounts with M3 screws; the screws have to be at least 1.3mm long.
After soldering two male pins, snipped from a 0.1-inch header, onto the fan leads, I connected it to the 5V SYS pin and a GND pin.
The board comes with a 16GB eMMC, so even if you don't have a Class 3 SD card you should be fine.
First we have to install some packages and update the OS.
As soon as you plug in the board it enumerates as a USB mass storage device. All you have to do is click on the start file, which opens up a browser.
The BBAI configures itself as an access point server.
Once connected, simply join a Wi-Fi network using the instructions provided in the Cloud9 browser IDE.
Up to this point BBAI gets a 10/10 for ease of use.
The next step was to run the example program under the BBAI folder.
This application uses the TIDL (TI Deep Learning) framework. Make sure to read the README file, since it requires installing and updating the GStreamer package and OpenCV.
I used a generic USB camera. To test this, check whether the camera shows up under /dev by issuing:
ls /dev/video*
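The same check can be done from Python, which comes in handy when scripting the camera application later. A small sketch using only the standard library:

```python
import glob

def list_video_devices():
    """Return the V4L2 device nodes present on the system (e.g. /dev/video0)."""
    return sorted(glob.glob("/dev/video*"))

if __name__ == "__main__":
    devices = list_video_devices()
    if devices:
        print("Found cameras:", ", ".join(devices))
    else:
        print("No /dev/video* nodes found - check the USB connection")
```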
Performance of the demo was almost acceptable. The demo takes frames from the camera and runs them through a heavily trimmed version of an ImageNet classifier.
It is capable of recognizing only a couple of categories.
It recognized me as a ping-pong ball, and recognized the BBAI as a cell phone, which is marginally better. The issue, as mentioned, is that it uses a trimmed-down model with only a couple of categories.
Before I proceeded any further I had to install a number of packages on the eMMC in order to update the design.
Let's first install x11vnc, a VNC server.
sudo apt-get install x11vnc
Then on the cloud9 terminal issue:
su debian
x11vnc -display :0 -forever
You can then use a simple VNC client like RealVNC.
Updating the device tree
Out of the box, the BBAI pins do not expose the SPI bus. To fix this, one has to edit the device tree.
Checking under /boot/dtb, where the compiled device tree files live, shows there are two DTBs for the BBAI.
One is for the plain BBAI, the other for the BBAI with the robotics cape.
To make sure, I checked the .dts file on GitHub, which confirms that the vanilla DTB for the BBAI does not have any configuration for the SPI bus.
After overwriting the stock BBAI .dtb with the BBAI robotics cape .dtb and rebooting, there are now two SPI buses available, as shown in the photo below.
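With the buses enabled, they can be exercised from Python. A minimal sketch, assuming the third-party spidev package (pip install spidev); the bus and chip-select numbers are assumptions, so check ls /dev/spidev* on your board:

```python
def spidev_node(bus, device):
    """Path of the character device the kernel creates for an SPI chip select."""
    return "/dev/spidev{}.{}".format(bus, device)

def spi_transfer(bus=1, device=0, payload=(0x9F,)):
    """Exchange bytes on an SPI bus. Requires the 'spidev' package and the
    robotics-cape DTB so the /dev/spidevX.Y nodes exist."""
    import spidev  # imported here so the sketch loads on machines without the package
    spi = spidev.SpiDev()
    spi.open(bus, device)
    spi.max_speed_hz = 1000000
    try:
        return spi.xfer2(list(payload))  # returns the bytes clocked in from the slave
    finally:
        spi.close()
```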
Install a file manager:
sudo apt-get install pcmanfm
Install guvcview for the USB camera:
sudo apt-get install guvcview
The BBAI has both Python 2 and 3 installed. Python 2 comes preloaded with OpenCV, whereas Python 3 does not.
I wanted to install PIL; however, I encountered a number of issues due to old versions of setuptools and wheel.
sudo apt-get install build-essential python-dev python-setuptools python-pip python-smbus -y
pip install imutils
This proved to be a challenge since the default pip and setuptools are quite old and not updated.
After many trials and errors, the following sequence worked for PIL on Python 2:
pip install wheel
sudo pip install setuptools --upgrade
pip install --upgrade pip
sudo python -m pip install pillow --no-cache-dir
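A quick smoke test confirms the Pillow install works; the 224x224 size below is my assumption of a typical classifier input, not something the demo mandates:

```python
from PIL import Image

def make_test_thumbnail(size=(224, 224)):
    """Create a solid-color test image and resize it to a typical
    classifier input size, just to confirm Pillow is usable."""
    img = Image.new("RGB", (640, 480), color=(0, 128, 255))
    return img.resize(size)

if __name__ == "__main__":
    print(make_test_thumbnail().size)
```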
Having updated the OS and installed the desired packages, the next step was to test the AI capabilities of the board.
The TIDL framework
BBAI uses the concept of EOs (Executor Objects). These can be DSPs or EVEs. EVE stands for Embedded Vision Engine; the BBAI has two C66x DSPs and four EVEs.
The idea is that the main CPU offloads the tensor calculations to these cores on the SoC. All this is done via the TIDL framework.
Note that all examples under /usr/share/ti have to be executed with sudo.
To test the framework examples, change directory to /usr/share/ti/examples/tidl/.
This folder contains a couple of AI networks, such as imagenet and SSD.
To test imagenet TIDL with custom images I copied two images into the /usr/share/ti/examples/tidl/imagenet folder.
Again, you have to use sudo to copy files to this location.
To test the networks you have to select the number of EOs and pass the image to the program:
sudo ./imagenet -d 2 -e 2 -i cat.jpeg
This worked fine. FYI, the -d flag sets the number of DSP units while the -e flag sets the number of EVE units.
The next step I tried was to process live video from the camera:
sudo ./imagenet -i camera0
Running this from the Cloud9 IDE did not work, so I tried to run it from the VNC shell.
This did not work either. The issue seems to be some assertions in the LLVM compiler.
I tested the rest of the examples; they work with the provided test vectors or with input images, but the same problem occurs with the video feed.
Next I tried to download the TIDL sources from GitHub and compile them myself. This did not fare any better, so my next option was to work around the limitation by calling the compiled C programs from a Python script.
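As a sketch of that workaround, the compiled demo can be invoked with subprocess. Note that the "label:" output format parsed below is an assumption, since I have not confirmed the exact text the demo prints:

```python
import subprocess

def build_command(image_path, dsps=2, eves=2):
    """Command line for the TIDL imagenet demo: -d picks the number of DSP
    cores, -e the number of EVE cores (matching the flags used earlier)."""
    return ["sudo", "./imagenet", "-d", str(dsps), "-e", str(eves), "-i", image_path]

def parse_labels(stdout_text):
    """Pull class labels out of the demo's stdout. The 'label:' prefix is an
    assumption about the output format - adjust it to the real output."""
    return [line.split("label:", 1)[1].strip()
            for line in stdout_text.splitlines() if "label:" in line]

def classify(image_path, demo_dir="/usr/share/ti/examples/tidl/imagenet"):
    """Run the compiled demo from Python and return the labels it printed."""
    result = subprocess.run(build_command(image_path), cwd=demo_dir,
                            capture_output=True, text=True)
    return parse_labels(result.stdout)
```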
The project I wanted to build was an AI camera that recognizes objects and sends an email with a description to the user. I used a generic USB camera for image acquisition. The camera shows up under /dev as video0.
The next step was to implement the AI pipeline. After some consideration I decided to simplify the project by
implementing a simple image-recognition pipeline: determine the object's class and locate the object in the camera frame.
To do so, one has to call the inception network and then the SSD (single-shot detection) network.
Since there are no Python bindings mentioned in the documentation, I tried to call the compiled C programs from Python.
The complete application is shown below. Basically, the application works as follows:
1) Acquire an image
2) Pre-process the frame using the PIL library and save it as an image file
3) Pass the saved frame to imagenet to obtain the class
4) Pass the saved frame to SSD to obtain the location
5) Email a description of the object to the user if the class fits a category
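The final step can be sketched with the standard library's smtplib and email modules. The addresses, SMTP host, and the set of classes that trigger an alert below are all placeholders:

```python
import smtplib
from email.message import EmailMessage

WATCH_CLASSES = {"person", "cat", "dog"}  # assumption: categories worth an alert

def build_alert(label, box, to_addr="user@example.com"):
    """Compose the notification email for a detected object.
    'box' is the (x, y, w, h) location reported by the SSD step."""
    msg = EmailMessage()
    msg["Subject"] = "Smart camera: {} detected".format(label)
    msg["From"] = "bbai@example.com"  # placeholder sender address
    msg["To"] = to_addr
    msg.set_content("Detected a {} at x={} y={} w={} h={}".format(label, *box))
    return msg

def notify(label, box, host="localhost"):
    """Send the alert only if the class is one we care about."""
    if label not in WATCH_CLASSES:
        return False
    with smtplib.SMTP(host) as server:  # placeholder SMTP server
        server.send_message(build_alert(label, box))
    return True
```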
Since the BBAI does not have much Python support at the moment, I am getting around this by calling the compiled C programs from Python.
This is a work in progress at the moment.
I managed to install Pillow on Python 3 on the BBAI using the command below:
pip3 install pillow