PREFACE
I was happy to be provided a BeagleBone AI to experiment with for this project. So, my son and I scratched our heads hard on what we could do that would be intriguing to a reader while testing nearly all the functionality of the BBAI. The Debian Linux operating system works great on the BBAI and provides many project opportunities on its own. To take it further, we wanted to make something original that would exploit all its capabilities: GPIO Input/Output, PWM, I2C, Visual AI Classification with TIDL, OpenCV for a GUI, Ad Hoc WiFi Streaming, IoT API communication, and Audio. We also thought about how an educator could use it as a classroom project - dig out those Matchbox cars!
So, we dreamed up the Seein' Around Corners, Talkin', IoT Exploitin' BBAI Backup Car Cam. It's a novelty project intended to provide example code to get one going with the game-changing BBAI - not as another Linux box with a web cam - but as a GPU-accelerated, embedded microcontroller to drive your own inspired Vision AI robotics projects.
Of course, the BBAI can readily showcase existing Raspberry Pi project repositories. Our mission was to showcase the BBAI's hardware advantage. This project leverages TI's hardware-accelerated Vision AI to push the BBAI to its limits at frame-rate speeds. With the BBAI being brand-new hardware on the market, this project took more than 90 hours of research, design, coding, and documentation as we went down a rabbit hole to understand the BBAI architecture. We kept clutter out of this blog by documenting that research in a companion blog that is sure to get you going with the BBAI. We hope you enjoy our virtual backseat buddy and pick up some nuggets of your own for this great platform.
-Sean and Connor Miller
Companion Blog: BeagleBone AI Survival Guide V3.11: PWM, I2C, Analog/Digital Read/Write, Vision AI, Video Text Overlays, Audio, & Hardware
PROJECT INTRODUCTION
Even with today's 170-degree rear cameras and ultrasonic backup sensors, parking lot fender benders still occur. This is because the people behind you don't leave enough room for you to back into a spot. If you inch past it, they are likely to take the spot out from under you or at least ride your bumper so that you can't back up. So, you are forced to pull straight into the spot.
The dilemma comes when you are ready to leave and need to back out into the steady flow of mall dwellers like yourself. No matter how much you rubberneck, you can't see to either side because of the big vehicles around you. Even with the wide-angle lens of modern cars, the field of view doesn't extend all the way down the aisle to show what's coming. If you're lucky, you have someone in the back seat who can say "Clear" or "Woa!" If not, some texting oncomer could clip you in the blink of an eye. Often, we nudge out, wait a second, then nudge again, and finally proceed on hope.
Backing Up on a Wing and a Prayer
In this project, we are going to use the new, hot-off-the-shelf BeagleBone AI's advanced vision and frame-manipulation hardware to eliminate the problem altogether. We'll design a backup camera that mounts under our trailer hitch.
It won't be your average, everyday backup camera, though. It is a Vision AI-driven back seat partner! Look at these features:
- Turret controlled from the driver's seat, allowing you to peek around corners and cars to see what is coming
- Streamed Video to your phone
- Display Backup Reference Lines
- Realtime Turret Rotation Angle Indication on the Display
- Realtime Object Distance on the Display
- Realtime Object Recognition (vehicles, people, pets) on the Display
- Realtime Weather Forecast and Temperature
- Realtime Human Voiced Shout Outs - "Woa!", "Car behind us", "Car on the left", "Car coming on the right", "Wampa!" (just kidding about the Wampa - that's for future releases)
- Remote viewing to keep an eye on your car's surroundings.
Let's get to it!
Beaglebone AI Board Combined with our Custom Turret Camera
BILL OF MATERIALS
Beaglebone AI
TFMini Infrared Time of Flight Sensor
5KOhm Potentiometer
Micro Servo
Mobile Smart Phone
DESIGN
This backup camera is unique in that it is a turret camera controlled by a potentiometer, with smart Vision AI processing by the BeagleBone AI microcontroller. Custom 3D-printed components were designed to fit the trailer hitch on our Jeep.
Here are the key components:
1. Weather Resistant Case
2. 3D Printed Servo Arm
3. TFMini Plus
4. Precision Bearing
5. Micro Servo for Precise Targeting
6. Camera Turret
7. Camera
Exploded View of the Backup Camera
REVERSE ENGINEERING THE BEAGLEBONE AI
After designing this project with the brand-new BeagleBone AI in mind, we hit a snag - although the board is available, its GPIO-controlling libraries aren't! Its popular "bonescript" Node.js library doesn't work for GPIO yet. The Adafruit Python library doesn't work yet. BeagleBone Black Device Tree Source files brick the boot process. No analog reads or writes are possible per the literature, and the utilities that show pins give funky pin IDs.
Project Learning: At first, discovering there were no high-level libraries to get the GPIO pins working, I thought maybe I'd use another microcontroller instead. But I thought I'd first post to the community: Accessing GPIO and PWM on BeagleBone AI. When I did, I quickly got a lot of leads and references to chase down on GitHub, which ultimately led me to a chat room with folks who use the BeagleBone at work. They bestowed their Device Tree Source mojo, enabling the GPIO on the BBAI. After that, I had digital/analog read/write, PWM, and I2C communication! I spent the next two weeks probing and writing C code to sort it all out. Now, the BBAI is my favorite embedded solution in its form factor!
To not let all that research go to waste, I posted my discoveries as a companion blog under the contest Vision Thing. This let me keep this one uncluttered and help the rest of the e14 community and contestants get their ideas off the ground. It has a fully functional Device Tree Source file for the BBAI and a collection of tested, working code examples in C, JavaScript, and Python:
In order to use my code for this project, you'll need to first follow the BBAI Setup Checklist at the beginning of that blog, which I won't bother repeating here. My goal was to eliminate the need for a supplemental microcontroller in our design, and the research paid off!
LOCAL CODE
Below I include most of the code, but some of it is huge, so I just show a snippet or two. To get the full package, type the following in a terminal:
git clone https://github.com/RaisingAwesome/BeagleBoneBackupCam.git
Follow the repository's readme for installation instructions on your BBAI. Post any questions in the comments below or at the repository.
My software approach is to have four services running on the BeagleBone AI (which will be embedded in my vehicle). They are as follows:
- one service handles control of the camera turret
- one logs the current distance from an object behind the vehicle
- one updates IoT data - in particular, the weather forecast and temperature
- one handles the hardware-accelerated Vision AI, including both object classification and text/graphic overlays on the video display. It is also where the intelligence lives for the audio warning shoutouts from the backseat ("Woa!", "Car on your left", etc.)
I took the multiple-service approach so I can develop the code for each in a modular fashion and readily reuse the resulting services in other projects.
Another Project Learning: I remember the days of the Amiga and its RAM disk. Those were the good ole' days. Doing this project, I found that Linux has a virtual filesystem mounted at /sys. I know I'm late to the party on this, but it is really cool. It exposes kernel devices as files, so your code can talk to them the same way it talks to the file system. You can even set GPIO pins high and low as well as turn on PWM. This approach easily exposes the BBAI GPIO pins so their state can be read or set by any number of programs. In turn, the /sys directory structure is basically the light-speed central nervous system of your robot. That led me to further learn about Linux FIFOs, which I'll talk about in a minute.
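To make that concrete, here is a minimal sketch of toggling a pin through /sys from a program. It assumes the standard sysfs GPIO interface and that the Device Tree already has the pin muxed as a GPIO; the number 117 is purely a hypothetical example - check the pin spreadsheet in the References for the ID that matches your header pin.

// Minimal sketch: drive a GPIO pin high/low through the /sys virtual filesystem.
// Assumes the sysfs GPIO interface; "117" is a hypothetical GPIO number.
#include <fstream>
#include <string>
#include <unistd.h>

int main() {
    const std::string gpio = "117";   // hypothetical GPIO number - look yours up

    // Export the pin so it shows up under /sys/class/gpio/gpio117
    std::ofstream("/sys/class/gpio/export") << gpio;

    // Set it as an output
    std::ofstream("/sys/class/gpio/gpio" + gpio + "/direction") << "out";

    // Blink it - any other program can read or write the same file
    for (int i = 0; i < 10; i++) {
        std::ofstream("/sys/class/gpio/gpio" + gpio + "/value") << (i % 2);
        usleep(500000);
    }
    return 0;
}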
Turret Code
The turret for the backup camera is controlled by a potentiometer from the driver's seat. Using pin P9.33 as an analog input, this code translates the user's input to a PWM output to the servo.
////////////////////////////////////////
// servoPot.c
// Reads a pot and translates it to
// a servo's position.
// Author: Sean J. Miller, 11/3/2019
// Wiring: Jumper P9.14 to a servo signal through a 220ohm resistor
//         Hook a potentiometer's variable voltage to P9.33 (analog in)
// See: https://www.element14.com/community/community/designcenter/single-board-computers/next-genbeaglebone/blog/2019/10/27/beagleboard-ai-brick-recovery-procedure
////////////////////////////////////////
#include <stdio.h>
#include <unistd.h>
#include <fcntl.h>
#include <sys/stat.h>
#include <errno.h>

int analogRead() {
    int i;
    FILE *my_in_pin = fopen("/sys/bus/iio/devices/iio:device0/in_voltage7_raw", "r");  // P9.33 on the BBAI
    char the_voltage[8];  // the characters in the file are 0 to 4095
    fgets(the_voltage, sizeof(the_voltage), my_in_pin);
    fclose(my_in_pin);
    // printf("Voltage: %s\n", the_voltage);
    sscanf(the_voltage, "%d", &i);
    return (i);
}

void pwm_duty(int the_duty_multiplier) {
    FILE *duty;
    int duty_calc;
    duty = fopen("/sys/class/pwm/pwm-0:0/duty_cycle", "w");
    fseek(duty, 0, SEEK_SET);
    duty_calc = (600000 + (1700000 * (float)((float)the_duty_multiplier / 4095)));
    // printf("Duty: %d\n", duty_calc);
    fprintf(duty, "%d", duty_calc);
    fflush(duty);
    fclose(duty);
}

void setupPWM() {
    FILE *period, *pwm;
    pwm_duty(2000);
    period = fopen("/sys/class/pwm/pwm-0:0/period", "w");
    usleep(20000);
    fseek(period, 0, SEEK_SET);
    usleep(20000);
    fprintf(period, "%d", 20000000);  // 20ms
    usleep(20000);
    fflush(period);
    fclose(period);
    pwm = fopen("/sys/class/pwm/pwm-0:0/enable", "w");
    usleep(20000);
    fseek(pwm, 0, SEEK_SET);
    usleep(20000);
    fprintf(pwm, "%d", 1);
    usleep(20000);
    fflush(pwm);
    fclose(pwm);
}

void recordRotation(int the_rotation) {
    char buffer[64];
    the_rotation = (int)((((float)the_rotation) / (float)4440) * 180);
    // printf("Rotation: %d\n", the_rotation);
    int ret = snprintf(buffer, sizeof buffer, "%d", the_rotation);
    int fp = open("/home/debian/ramdisk/bbaibackupcam_rotation", O_WRONLY | O_CREAT, 0777);
    if (fp > -1) {
        write(fp, buffer, sizeof(buffer));
        close(fp);
    }
}

int main() {
    int ii = 0;
    setupPWM();
    while (1) {
        ii = analogRead();
        if (ii > 1310) ii = 1310;
        if (ii < 140) ii = 140;
        // printf("ii:%d\n", ii);
        pwm_duty(ii * 2);
        recordRotation(ii * 2);
        usleep(20000);
    }
}
*See my GitHub for the latest, optimized code
Vision AI Code
The Cloud9 examples installed on the BBAI per the Quick Start Guide include a file named classification.cpp. It makes use of the hardware acceleration for Vision AI via the TIDL libraries. I used it as my base code for object recognition and added code for the graphic overlays on the streaming backup camera video. Of course, this took getting a working Device Tree Source file first, as discussed in the "Reverse Engineering the BeagleBone AI" section above.
To start, I needed to narrow the hundreds of trained classes that come with the TIDL example down to just the ones I would expect and care about - vehicles, people, and pets. So, around line 106 of classification.cpp, I made the following substitution to selected_items:
selected_items[0] = 609;  /* jeep */
selected_items[1] = 627;  /* limousine */
selected_items[2] = 654;  /* minibus */
selected_items[3] = 656;  /* minivan */
selected_items[4] = 703;  /* park_bench */
selected_items[5] = 705;  /* passenger_car */
selected_items[6] = 779;  /* school_bus */
selected_items[7] = 829;  /* streetcar */
selected_items[8] = 176;  /* Saluki */
selected_items[9] = 734;  /* police_van */
*See my GitHub for the latest, optimized code
The indexes, such as 609 = jeep, are found by typing the following:
nano -l /usr/share/ti/examples/tidl/classification/imagenet.txt
The index value is equal to the line number minus one.
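If you want to sanity-check an index from code, here's a small sketch (my own illustration, not part of classification.cpp) that reads imagenet.txt and prints the entry for a given index using that line-number-minus-one relationship:

// Sketch: look up a TIDL class entry by index from imagenet.txt.
// Index 609 corresponds to line 610 of the file (the jeep entry per the list above).
#include <fstream>
#include <iostream>
#include <string>
#include <vector>

int main() {
    std::ifstream file("/usr/share/ti/examples/tidl/classification/imagenet.txt");
    std::vector<std::string> labels;
    std::string line;
    while (std::getline(file, line)) labels.push_back(line);

    int index = 609;  // the selected_items value we want to verify
    if (index < (int)labels.size())
        std::cout << index << " = " << labels[index] << std::endl;  // line number minus one
    return 0;
}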
I replaced the DisplayFrame routine around line 310 with the one shown below; I discuss it in detail in the "GUI Code" section further down. It renders text for what the BBAI sees, backup-assistance reference lines, and the distance to an object in the line of sight. I was also able to render a target that indicates the extent of turret rotation. Note that two custom functions, distance_message and rotation, are called; they are provided in the next section, Distance Tracking Code.
void DisplayFrame(const ExecutionObjectPipeline* eop, Mat& dst) {
    static time_t timer = time(NULL);
    static string my_message;
    static std::string * static_message = new string("");
    static float the_temp_distance = 10;

    if (configuration.enableApiTrace) std::cout << "postprocess()" << std::endl;
    int is_object = tf_postprocess((uchar*) eop->GetOutputBufferPtr());

    // Closing-distance warning
    if (the_distance > the_temp_distance + .5) the_temp_distance = the_distance;
    if (the_distance < 8 && the_distance < (the_temp_distance - .5) && time(NULL) > timer) {
        system("sudo -u debian aplay /var/lib/cloud9/BeagleBone/AI/backupcamera/woa.wav");
        the_temp_distance = the_distance;
        timer = time(NULL) + 1;
    }

    // Shoutouts based on what was classified and where the turret is pointed
    if (is_object >= 0) {
        my_message = *labels_classes[is_object];
        if (time(NULL) > (timer)) {
            if (rot < 30) {
                system("sudo -u debian aplay /var/lib/cloud9/BeagleBone/AI/backupcamera/car_on_right.wav");
            } else if (rot > 58) {
                system("sudo -u debian aplay /var/lib/cloud9/BeagleBone/AI/backupcamera/car_on_left.wav");
            } else {
                system("sudo -u debian aplay /var/lib/cloud9/BeagleBone/AI/backupcamera/car_behind.wav");
            }
            timer = time(NULL) + 4;
        }
    } else {
        my_message = *static_message;
        if (time(NULL) > timer && the_distance < 1) {
            if (rot < 30) {
                system("sudo -u debian aplay /var/lib/cloud9/BeagleBone/AI/backupcamera/something_on_right.wav");
                timer = time(NULL) + 4;
            } else if (rot > 58) {
                system("sudo -u debian aplay /var/lib/cloud9/BeagleBone/AI/backupcamera/something_on_left.wav");
                timer = time(NULL) + 4;
            }
        }
    }

    // Classification label
    cv::putText(dst, my_message.c_str(), cv::Point(220, 420),
                cv::FONT_HERSHEY_SIMPLEX, 1.5, cv::Scalar(0, 0, 255), 3 /* thickness */, 8);

    // Header
    cv::rectangle(dst, cv::Point(0, 0), cv::Point(640, 130), cv::Scalar(255, 255, 255), CV_FILLED, 8, 0);
    cv::rectangle(dst, cv::Point(0, 130), cv::Point(640, 170), cv::Scalar(0, 0, 0), CV_FILLED, 8, 0);
    cv::putText(dst, "BACKUP ASSISTANCE", cv::Point(60, 165),
                cv::FONT_HERSHEY_TRIPLEX, 1.5, cv::Scalar(255, 255, 255), 2 /* thickness */, 8);

    // Backup distance meter, left side (red, yellow, green segments with inward ticks)
    cv::line(dst, cv::Point(104, 420), cv::Point(50, 480),  cv::Scalar(0, 0, 255),   5, 8, 0);
    cv::line(dst, cv::Point(104, 420), cv::Point(134, 420), cv::Scalar(0, 0, 255),   5, 8, 0);
    cv::line(dst, cv::Point(158, 360), cv::Point(114, 411), cv::Scalar(0, 255, 255), 4, 8, 0);
    cv::line(dst, cv::Point(158, 360), cv::Point(178, 360), cv::Scalar(0, 255, 255), 4, 8, 0);
    cv::line(dst, cv::Point(212, 300), cv::Point(168, 351), cv::Scalar(0, 255, 0),   2, 8, 0);
    cv::line(dst, cv::Point(212, 300), cv::Point(222, 300), cv::Scalar(0, 255, 0),   2, 8, 0);

    // Backup distance meter, right side (mirrored)
    cv::line(dst, cv::Point(536, 420), cv::Point(590, 480), cv::Scalar(0, 0, 255),   5, 8, 0);
    cv::line(dst, cv::Point(536, 420), cv::Point(506, 420), cv::Scalar(0, 0, 255),   5, 8, 0);
    cv::line(dst, cv::Point(482, 360), cv::Point(526, 411), cv::Scalar(0, 255, 255), 4, 8, 0);
    cv::line(dst, cv::Point(482, 360), cv::Point(462, 360), cv::Scalar(0, 255, 255), 4, 8, 0);
    cv::line(dst, cv::Point(428, 300), cv::Point(472, 350), cv::Scalar(0, 255, 0),   2, 8, 0);
    cv::line(dst, cv::Point(428, 300), cv::Point(418, 300), cv::Scalar(0, 255, 0),   2, 8, 0);

    // Footer: distance readout
    cv::rectangle(dst, cv::Point(220, 480), cv::Point(420, 440), cv::Scalar(0, 0, 0), CV_FILLED, 8, 0);
    cv::putText(dst, (distance_message()), cv::Point(255, 475),
                cv::FONT_HERSHEY_TRIPLEX, 1.5, cv::Scalar(255, 255, 255), 2 /* thickness */, 8);

    // Rotation indicator: a thick black cross with a thinner white cross over it
    int rot = rotation();
    if (rot > 44) {
        cv::drawMarker(dst, cv::Point(320 + ((44 - rot) * 5), 250), cv::Scalar(0, 0, 0),       MARKER_CROSS, 20, 5, 8);
        cv::drawMarker(dst, cv::Point(320 + ((44 - rot) * 5), 250), cv::Scalar(255, 255, 255), MARKER_CROSS, 20, 1, 8);
    } else if (rot < 44) {
        cv::drawMarker(dst, cv::Point(320 - ((rot - 44) * 5), 250), cv::Scalar(0, 0, 0),       MARKER_CROSS, 20, 5, 8);
        cv::drawMarker(dst, cv::Point(320 - ((rot - 44) * 5), 250), cv::Scalar(255, 255, 255), MARKER_CROSS, 20, 1, 8);
    } else {
        cv::drawMarker(dst, cv::Point(320, 250), cv::Scalar(0, 255, 0), MARKER_DIAMOND, 20, 1, 8);  // centered: green diamond
    }

    if (last_rpt_id != is_object) {
        if (is_object >= 0) {
            std::cout << "(" << is_object << ")=" << (*(labels_classes[is_object])).c_str() << std::endl;
        }
        last_rpt_id = is_object;
    }
}
*See my GitHub for the latest, optimized code
Another Project Learning: Until this project, I thought OpenCV was all about motion detection and object recognition. I found it's well beyond that. It has some easy-to-call functions to draw overlay graphics on the screen at frame-rate speed. Amazing! Here is a great resource for learning how to draw with OpenCV as I have done above: https://docs.opencv.org/2.4/modules/core/doc/drawing_functions.html?highlight=line#
Distance Tracking Code
For the BBAI Backup Camera to add some further assistance to backing up, I also mounted a TFMini Plus for distance sensing. In daylight, it can sense up to 6 meters, and it communicates with the BBAI over the I2C serial protocol.
Out of the box, the BBAI's software can't pull this off. You'll have to customize a Device Tree Source file as I mentioned in the "Reverse Engineering the BeagleBone AI" section; that section links to my parallel blog, which has a good working DTS file. With it in place, I hooked the TFMini Plus to P9.19 and P9.20. A file in memory stores the current distance (originally a Linux FIFO, later a RAM disk file - see the Project Learning in the Desktop Demo section). This lets the following code run as a service, constantly updating the sensed distance for the classification.cpp code to read.
/* -----------------------------------------------------------------------
 * Title:       tfmini.c
 * Description: C code for the TFMini Plus
 * Tested on the BeagleBone AI
 * 11/6/2019 Sean J. Miller
 * References:
 *   https://stackoverflow.com/questions/8507810/why-does-my-program-hang-when-opening-a-mkfifo-ed-pipe
 *   https://stackoverflow.com/questions/2988791/converting-float-to-char
 * Prerequisites: apt-get libi2c-dev i2c-tools
 * ----------------------------------------------------------------------- */
#include <stdio.h>
#include <stdlib.h>
#include <sys/stat.h>
#include <unistd.h>
#include <linux/i2c-dev.h>
#include <fcntl.h>
#include <sys/ioctl.h>
#include <string.h>
#include <errno.h>

#define I2C_BUS  "/dev/i2c-3"  // I2C bus device
#define I2C_ADDR 0x10          // I2C slave address for the TFMini module

int debug = 0;
int i2cFile;

void i2c_start() {
    if ((i2cFile = open(I2C_BUS, O_RDWR)) < 0) {
        printf("Error failed to open I2C bus [%s].\n", I2C_BUS);
        exit(-1);
    } else {
        if (debug) printf("Opened I2C Bus\n");
    }
    // set the I2C slave address for all subsequent I2C device transfers
    if (ioctl(i2cFile, I2C_SLAVE, I2C_ADDR) < 0) {
        printf("Error failed to set I2C address [0x%02x].\n", I2C_ADDR);
        exit(-1);
    } else {
        if (debug) printf("Set Slave Address\n");
    }
}

float readDistance() {
    // Read one distance sample from the TFMini Plus
    int distance = 0;   // distance
    int strength = 0;   // signal strength
    int rangeType = 0;  // range scale
    unsigned char incoming[7];                      // bytes returned from the TFMini
    unsigned char cmdBuffer[] = { 0x01, 0x02, 7 };  // the bytes to request a distance reading

    write(i2cFile, cmdBuffer, 3);
    usleep(100000);
    read(i2cFile, incoming, 7);

    for (int x = 0; x < 7; x++) {
        if (x == 0) {
            // Trigger done
            if (incoming[x] == 0x00) { } else if (incoming[x] == 0x01) { }
        }
        else if (x == 2) distance  = incoming[x];       // LSB of the distance value "Dist_L"
        else if (x == 3) distance |= incoming[x] << 8;  // MSB of the distance value "Dist_H"
        else if (x == 4) strength  = incoming[x];       // LSB of signal strength value
        else if (x == 5) strength |= incoming[x] << 8;  // MSB of signal strength value
        else if (x == 6) rangeType = incoming[x];       // range scale
    }

    float the_return = distance / (12 * 2.54);  // convert cm to feet
    return the_return;
}

void recordDistance(float the_distance) {
    char buffer[20];
    int ret = snprintf(buffer, sizeof buffer, "%f", the_distance);
    if (debug) printf("About to open for writing...\n");
    int fp = open("/home/debian/ramdisk/bbaibackupcam_distance", O_WRONLY | O_CREAT, 0666);
    if (debug) printf("About to write...%d\n", fp);
    ret = write(fp, buffer, sizeof(buffer));
    close(fp);
    if (debug) printf("Written %d\n", ret);
}

int main() {
    float my_distance = 0;
    debug = 0;  // change to 1 to see messages
    if (debug) printf("Starting:\n");
    while (1) {
        i2c_start();
        my_distance = readDistance();
        if (debug) printf("the_distance: %f\n", my_distance);
        recordDistance(my_distance);
        close(i2cFile);
        if (debug) printf("Looping.\n");
        usleep(1000000);
    }
}
Last, I needed to add to classification.cpp the custom distance_message and rotation functions mentioned in the Vision AI Code section. distance_message reads the current distance so it can be added to each frame, and rotation provides the camera rotation so a target can be shown on the display to indicate how far the turret has rotated.
char *distance_message() {
    static char buf[20];
    static char suffix[4] = " Ft";
    static time_t timer = time(NULL);
    if (time(NULL) > timer) {
        int fd = open("/home/debian/ramdisk/bbaibackupcam_distance", O_RDONLY);
        if (fd > -1) {
            int result = read(fd, buf, sizeof(buf));
            if (result > -1) {
                close(fd);
                memcpy(buf + 3, suffix, 4);
            }
            sscanf(buf, "%f", &the_distance);
        }
        timer = time(NULL) + .5;
    }
    return (char *)buf;
}

int rotation() {
    static char buf[20];
    static time_t timer = time(NULL) + .02;
    if (time(NULL) > timer) {
        int fd = open("/home/debian/ramdisk/bbaibackupcam_rotation", O_RDONLY);
        if (fd > -1) {
            int result = read(fd, buf, sizeof(buf));
            if (result != -1) {
                close(fd);
                int i;
                sscanf(buf, "%d", &i);
                rot = i;
            }
        }
        timer = time(NULL) + .02;
    }
    return (rot);
}
Speech Code
Since we can have the BBAI sort out what it sees, measure how far away it is, and keep track of which way the camera is looking, we might as well make it talk, right? With a USB hub added to the mix of components, we can have both a web cam and a speaker attached.
To enable sound from the BBAI USB port, I simply typed the following at the terminal:
nano ~/.asoundrc
I pasted the following in the nano editor:
pcm.!default {
    type plug
    slave {
        pcm "hw:1,0"
    }
}
ctl.!default {
    type hw
    card 1
}
Then, from my code, I applied some if/then logic to fire the corresponding audio with this:
system("sudo -u debian aplay /yourpath/yourfile.wav &");
IoT CODE
The BBAI has the power of past Linux microcontrollers such as the Raspberry Pi and BeagleBone Black, with the added benefit of hardware that accelerates Vision AI routines as described in the previous section. However, it's impractical to store data for every trained object class known to man, and it can't predict the future - things like weather and news. Because the BBAI has onboard WiFi and the ability to overlay content on its streamed video with OpenCV, it can have all this too!
To make use of this, you must first create a WiFi hotspot on your mobile phone. You then configure your BeagleBone to connect to the internet through it with these commands:
sudo connmanctl
scan wifi                   # (wait for response)
services                    # (copy the long string representing your wifi)
agent on
connect <paste yourstring>
quit
Microsoft Azure Machine Vision
So, for object recognition beyond what we have stored on the board, we can simply do a "system" call to a Python script that runs an image up through Microsoft Azure's Machine Vision service, like so:
import os
import requests
from picamera import PiCamera
import time
# If you are using a Jupyter notebook, uncomment the following line.
# %matplotlib inline
import matplotlib.pyplot as plt
from PIL import Image
from io import BytesIO

camera = PiCamera()

# Add your Computer Vision subscription key and endpoint to your environment variables.
subscription_key = "YOUR KEY HERE!!!"
endpoint = "https://westcentralus.api.cognitive.microsoft.com/"
analyze_url = endpoint + "vision/v2.0/analyze"

# Set image_path to the local path of an image that you want to analyze.
image_path = "image.jpg"

def spidersense():
    camera.start_preview()
    time.sleep(3)
    camera.capture('/home/debian/ramdisk/backupcam.jpg')
    camera.stop_preview()

    # Read the image into a byte array
    image_data = open(image_path, "rb").read()
    headers = {'Ocp-Apim-Subscription-Key': subscription_key,
               'Content-Type': 'application/octet-stream'}
    params = {'visualFeatures': 'Categories,Description,Color'}
    response = requests.post(
        analyze_url, headers=headers, params=params, data=image_data)
    response.raise_for_status()

    # The 'analysis' object contains various fields that describe the image. The most
    # relevant caption for the image is obtained from the 'description' property.
    analysis = response.json()
    image_caption = analysis["description"]["captions"][0]["text"].capitalize()
    the_statement = ("espeak -s165 -p85 -ven+f3 \"Connor. I see " + image_caption
                     + "\" --stdout | aplay 2>/dev/null")
    os.system(the_statement)
    # print(image_caption)

spidersense()
You'll first need the espeak package to handle the dynamic speech. To get it, type the following in the terminal:
sudo apt install espeak
Dark Sky Weather API and Beyond
Since we are going through a personal hotspot, we have every IoT API in the world at our disposal. For the BBAI Backup camera, we added icons for the weather forecast and the current temperature for our home town through DarkSky.net.
When I think IoT, I think sockets and JSON. So, your code needs to be able to open a socket to an HTTPS site that returns a string of JSON. JSON stands for JavaScript Object Notation; it is basically a curly-bracket text syntax that describes nested records and fields. The RapidJSON package lets you read those fields so you can do what you need with the info - like show a rain or a snowflake icon.
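As an illustration of that parsing step, here is a minimal RapidJSON sketch that pulls the current temperature and icon out of a Dark Sky-style response. The hard-coded JSON string and the "currently"/"temperature"/"icon" field names are assumptions based on the Dark Sky documentation; the real code on my GitHub fetches the live string over the socket first.

// Sketch: parse a Dark Sky-style JSON string with RapidJSON.
// The hard-coded string stands in for the body returned over the HTTPS socket.
#include <iostream>
#include <string>
#include "rapidjson/document.h"

int main() {
    const char *json =
        "{ \"currently\": { \"temperature\": 41.2, \"icon\": \"rain\" } }";

    rapidjson::Document doc;
    doc.Parse(json);
    if (doc.HasParseError()) {
        std::cout << "Bad JSON" << std::endl;
        return 1;
    }

    // Walk the curly-bracket structure like nested records and fields
    double temperature = doc["currently"]["temperature"].GetDouble();
    std::string icon   = doc["currently"]["icon"].GetString();

    std::cout << "Temp: " << temperature << "  Icon: " << icon << std::endl;
    return 0;
}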
The code is quite involved because it's not a Python script wrapped around a C++ library - it is lower-level C++ opening sockets. You can get the complete code and includes from my GitHub.
int main(int argc, char *argv[]) {
    if (DEBUG) cout << "Starting..." << endl;
    BootUp();
    SetWallyWeather();
    time_t timer = time(NULL) + 1;
    while (1) {
        std::cout << forecast << std::endl;
        std::cout << wally.getTemperatureOutside() << std::endl;
        char buffer[20];
        int ret = snprintf(buffer, sizeof buffer, "%f", wally.getTemperatureOutside());
        if (DEBUG) printf("About to open for writing...\n");
        int fp = open("/home/debian/ramdisk/bbaibackupcam_temperature", O_WRONLY | O_CREAT, 0666);
        if (DEBUG) printf("About to write...%d\n", fp);
        ret = write(fp, buffer, sizeof(buffer));
        close(fp);
        if (DEBUG) printf("Written %d\n", ret);
        sleep(3);
    }
    return (0);  // will never get here actually
}
BBAI Road Test:
- Text and Graphics are OpenCV
- Forecast and Temperature are IoT
- Vehicle Recognition is TIDL Vision AI
- Target Marker is PWM & Analog Read
- Distance Tracking is I2C
You will see the name Wally in my code. That is the name of my home automation engine, which I ported to the BBAI. So, if you really wanted to show off, you could have this code take Alexa voice commands as well...but this project is probably big enough. I'm impressed if you're still reading this far as it is (10 points if you comment that you did!). You can learn more on Alexa endpoints in my blog from earlier this year:
GUI CODE
There are a number of approaches that can be taken to show a Graphical User Interface (GUI). One would be to draw it in a paint program, read it in, and overlay it on the video image. For our project, we wanted to be able to control every pixel at every frame. Learning this would allow us to get the maximum from a GUI. Let's look at the picture and discuss how we pulled it off:
BBAI Backup Cam Road Test - A car just zoomed by
In the final demo, you will see that a car has just passed by the camera, which is why the word "vehicle" popped up. When you see this, you are seeing the TIDL libraries in action. The distance display uses I2C to communicate with the TFMini distance sensor installed on the turret. Distance is what triggers the audio, with the code changing which clip is played based on what the TIDL object classification resolves. Using the OpenCV library, we were able to place every desired pixel and color on the screen, exactly when we desired to show it.
Header Text
cv::putText( dst, "BACKUP ASSISTANCE", cv::Point(60, 165), //origin of bottom left horizontal, vertical cv::FONT_HERSHEY_TRIPLEX, //fontface 1.5, //fontscale cv::Scalar(255,255,255), //color white 2, /* thickness */ 8 );
The putText function of OpenCV is pretty straightforward. By changing the literal text to a variable, we can change the text on the fly. This is seen with the distance text.
Distance (Footer) Text
cv::putText(
    dst,
    (distance_message()),
    cv::Point(255, 475),        // origin of bottom left (horizontal, vertical)
    cv::FONT_HERSHEY_TRIPLEX,   // fontface
    1.5,                        // fontscale
    cv::Scalar(255,255,255),    // color white
    2,                          // thickness
    8
);
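The same variable-text trick extends to the IoT data. As a sketch (my own illustration - the file path is the one the weather service writes to, but the screen position is a hypothetical spot inside the white header band), you could read the RAM disk temperature file and hand it to putText each frame:

// Sketch: overlay the IoT temperature on a frame the same way the header text is drawn.
// Reads the value written by the weather service to the RAM disk.
#include <fstream>
#include <string>
#include <opencv2/opencv.hpp>

void drawTemperature(cv::Mat &dst) {
    std::string temperature = "--";
    std::ifstream file("/home/debian/ramdisk/bbaibackupcam_temperature");
    if (file) std::getline(file, temperature);      // e.g. "41.200000"

    cv::putText(
        dst,
        temperature + " F",       // variable text instead of a literal
        cv::Point(10, 60),        // hypothetical spot in the white header
        cv::FONT_HERSHEY_TRIPLEX,
        1.5,
        cv::Scalar(0, 0, 0),      // black text on the white header band
        2,
        8
    );
}

int main() {
    cv::Mat frame(480, 640, CV_8UC3, cv::Scalar(255, 255, 255));  // stand-in frame
    drawTemperature(frame);
    cv::imwrite("overlay_test.jpg", frame);                       // inspect the result
    return 0;
}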
Rotation Indication
In the final demo, you will notice a target that moves around the screen as the camera rotates left to right. You can see it stationary in the picture below:
GUI Display during Desktop Demo: Note the Plus Sign (Rotation Indicator)
As the user rotates the camera from the driver's seat, the target tracks it. This feedback helps the user keep their bearings as to which way they are looking. To pull this off in OpenCV, we have a routine that stores the current servo position, making it available when the frame is rendered.
cv::drawMarker(
    dst,
    cv::Point(320+((44-rot)*5), 250),
    cv::Scalar(0,0,0),        // color black
    MARKER_CROSS,
    20, 5, 8
);
cv::drawMarker(
    dst,
    cv::Point(320+((44-rot)*5), 250),
    cv::Scalar(255,255,255),  // color white
    MARKER_CROSS,
    20, 1, 8
);
The snippet above shows how the rotation indicator was rendered. I used a trick to make it visible regardless of the background: I first render a thicker black cross and then render a thinner white one over it. This gives the cross contrast no matter what color is behind it. You can also see some math to locate the cross in relation to the servo position. This all happens on each frame. Pretty awesome!
Guide Lines
The red/yellow/green guide lines are a great example for my son of why they teach him coordinates and "rise over run" every year in school. Linear relationships are common in everyday life. To draw the lines, I first took a picture of my wife's backup camera as a reference to get the look and feel. I then drew my first line:
cv::line(  // inward line
    dst,
    cv::Point(482, 360),
    cv::Point(462, 360),
    cv::Scalar(0,255,255),  // color yellow
    4,                      // thickness
    8,                      // connected line type
    0                       // fractional bits
);
I then used good ole' rise over run to get the rest of the segments, and simple arithmetic to mirror the coordinates for the opposite side.
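To make the arithmetic concrete: the left yellow segment runs from (158, 360) to (114, 411), so its rise over run is (411 - 360) / (114 - 158) ≈ -1.2, and each right-side point is just the left-side point mirrored about the 640-pixel frame width (640 - 158 = 482, 640 - 114 = 526). Here is a small sketch of that mirroring (my own illustration, using the coordinates already in DisplayFrame):

// Sketch: mirror the left-hand guide line coordinates across a 640-pixel-wide frame.
#include <iostream>

struct Pt { int x, y; };

Pt mirror(Pt p, int frame_width = 640) {
    return { frame_width - p.x, p.y };   // flip horizontally, keep the same height
}

int main() {
    Pt left_start = {158, 360};          // left yellow segment from DisplayFrame
    Pt left_end   = {114, 411};

    double slope = double(left_end.y - left_start.y) /
                   double(left_end.x - left_start.x);   // rise over run

    Pt right_start = mirror(left_start); // (482, 360) - matches the right-side call
    Pt right_end   = mirror(left_end);   // (526, 411)

    std::cout << "slope: " << slope << "\n"
              << "right segment: (" << right_start.x << "," << right_start.y << ") -> ("
              << right_end.x << "," << right_end.y << ")\n";
    return 0;
}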
CAPE CIRCUIT DESIGN
To get reliable wire-to-board connections to the backup camera components, I needed to design a cape. A cape is the BeagleBoard name for a small board that plugs into its headers. Based on the pins selected in the code, here is the mini-cape design:
BBAI Backup Camera Mini Cape
Another Project Learning: You may notice a piezo circuit in the schematic. In the last week of the project, I lay in bed and thought it would be neat to have the backup camera simulate someone helping you back out into traffic with voice shoutouts. I hit Google the next morning and saw how the BBB pulled off audio. So, I ordered a USB-to-audio cord and grabbed an old USB hub. Lo and behold, I could hook up the camera and the audio at once! That let me code voice responses to the realtime data to simulate a back seat partner calling out what he sees. So, bye-bye piezo bleeps and bloops - we have speech!
I made the backup camera mini-cape with a proto-board. It fits nicely into a 3D printed enclosure.
BBAI Mini Cape to Handle our Backup Camera Peripherals
Another Project Learning: As I was building my own wire-to-board cables for the peripherals, I was having mixed results crimping pins. I developed some workarounds I was proud of, but I was still getting less-than-production-quality work. This led me to make a post: My pin crimping tips -- what are yours? After a healthy discussion with the e14 community, I found the two major issues with my crimping - I didn't really understand my crimping tool, and it wasn't made for the small pins I was attempting to crimp. Now I have the correct tools for the job and am crimping like a machine!
DESKTOP DEMO
One last test before we printed the components and headed out to the garage was our desktop demo. Our program on the BBAI streams its video to a web page. That's how we get the display on our iPhone. In the video we test the control of the rotation of our backup camera and view the rotation indication on the screen. Also, we test out the distance sensor and its display on the screen.
Another Project Learning: Earlier I mentioned I had learned about named pipes called FIFOs. I first used them to pass data between the above programs through memory instead of the SD card's file system. However, during the desktop demo, I found they have a drawback that causes a low frame rate for streaming video: a FIFO expects that when one program is ready to write, the other is ready to read. In turn, I got a lot of latency in my streaming video as it waited for the distance to be measured and piped over. So, I switched to a RAM disk instead. This allowed the main program to read the current value in memory regardless of when it was written, which eliminated the hesitation in the frame rate. Here's the post that taught me how to do it: https://www.linuxbabe.com/command-line/create-ramdisk-linux .
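To illustrate why the FIFO stalled the stream, here is a minimal sketch (my own illustration; the FIFO path is hypothetical): opening a FIFO blocks until the other end shows up, while reading the RAM disk file returns immediately with whatever was last written.

// Sketch: a FIFO read blocks waiting on a writer; a RAM disk file read returns at once.
#include <fcntl.h>
#include <sys/stat.h>
#include <unistd.h>
#include <cstdio>

int main() {
    // RAM disk style: open, read whatever the distance service last wrote, move on.
    char buf[20] = {0};
    int fd = open("/home/debian/ramdisk/bbaibackupcam_distance", O_RDONLY);
    if (fd > -1) {
        read(fd, buf, sizeof(buf));
        close(fd);
        printf("latest distance: %s\n", buf);
    }

    // FIFO style: this open() sits and waits until a writer opens the pipe,
    // which is exactly the hesitation that showed up in the streamed frame rate.
    mkfifo("/tmp/bbaibackupcam_fifo", 0666);                  // hypothetical pipe name
    int fifo_fd = open("/tmp/bbaibackupcam_fifo", O_RDONLY);  // blocks here without a writer
    if (fifo_fd > -1) {
        read(fifo_fd, buf, sizeof(buf));
        close(fifo_fd);
    }
    return 0;
}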
3D PRINTED COMPONENTS
You can get these 3D models in their native Autodesk Fusion 360 format at my GitHub repository. If you don't have Autodesk Fusion 360, it's free and it will absolutely change your world.
BBAI Case
The BBAI itself will be installed behind a panel in the back of our '99 Jeep Grand Cherokee. To save material and allow the case to breathe, I designed it with plenty of slots. Since it won't be visible to the vehicle's passengers, only function was of concern, not form. It will be installed with 3M mounting tape.
Autodesk Fusion 360 Design Rendering BBAI Car Backup Camera Brains
BBAI Car Backup Camera 3D Printed Case: the real thing
BBAI Turret Cam Case
For the turret cam, I bought a $9 web cam. It had a tapered cylinder allowing me to model my 3D print for a snug push fit. Above it, the TFMini Plus is firmly pressed into the turret body.
Left: Actuator Case, Right: Turret with Cam and TFMini, Back: BBAI Case
ASSEMBLY AND ROAD TEST
In the final demo video, we take you through assembly, tuning of the audio responses, vehicle installation, and a vehicle backup test.
Pressing the Bearing to Assemble the Turret
Turret Cam Testing of Servo Limits
Tuning the Audio Timing With a Matchbox Test Rig
Installation Location - Back Driverside Quarter Panel
BBAI Cam Installed
BBAI Cam Road Test
Demo Video
PROJECT SUMMARY
As stated in the Preface, we set out to stretch our Maker limits as newcomers to the BeagleBone platform. For the project, we ended up creating two blogs: one to contribute to the community every little thing we learned about the BBAI, and the other - this blog - to provide an all-inclusive, literal road test covering GPIO Input/Output, PWM, I2C, Visual AI Classification with TIDL, OpenCV for a GUI, Ad Hoc Streaming, IoT API communication, and Audio. Although the project taught us a lot and achieved our objectives, it did make us think: what would we do for a permanent solution that would be better than anything on the market today?
This is what we would change for a permanent Backup Assistance BBAI Device:
- Eliminate streaming video: The streaming resulted in motion blur and delays. Often, the visual classification would occur within 0.2 seconds of the object coming into view, but the streamed video would show it with over a second of delay. For this application, we need an immediate, clear response.
- Add multiple cams for hands-free surveillance: We learned that with a USB hub we could have additional backup cameras. Since they are just $8, I'd prefer moving the cams inside the vehicle and pointing them out the back right, left, and hatch. This could then verbalize blind spot detection as well as provide backup assistance, and even help with parking straight between the lines when backing into a spot. In addition, you could use the streaming video to monitor your car within WiFi range on a multi-view page.
- Replace the TFMini Plus with aftermarket backup-assistance sensors: You can get those buttons on the bumper for $15; they have a wide detection angle, install directly in the bumper, and pose no concern of theft. The TFMini Plus is a great component and was fun for showcasing I2C, but at $45 with a narrow beam, it's best applied to our next adventure.
- Add parking lot line sensing: This could assist with backing in straight between the lines.
- Add additional safety: Add a linear actuator to the brake pedal...just kidding.
Well, we hope you enjoyed our blog and learned about the BBAI along with us. We'd love to hear your ideas for an epic backup-assistance solution in the comments, too!
See you next time,
Sean and Connor Miller
REFERENCES
- BeagleBone AI Survival Guide V3.11: PWM, I2C, Analog/Digital Read/Write, Vision AI, Video Text Overlays, Audio, & Hardwa… - my parallel blog that captured everything I learned about the BBAI as I made this project
- https://github.com/beagleboard - Main BB github repository
- BeagleBoard.org - latest-images - Latest Debian Images
- https://github.com/beagleboard/bb.org-overlays - BeagleBone Kernel References
- https://www.hackster.io/175809/tidl-on-beaglebone-ai-1ee263#toc-make-sure-the-tidl-library-and-examples-are-installed-2 - Vision AI example
- https://www.elinux.org/EBC_Exercise_41_Pin_Muxing_for_the_AI - shows how to set up pins on board, but always bricks mine!
- https://groups.google.com/forum/embed/?place=forum/beagleboard&showsearch=true&showpopout=true&showtabs=false&hideforumt… - BeagleBone Forum
- https://training.ti.com/texas-instruments-deep-learning-tidl-overview - How to train the vision AI for the BBAI's Texas Instruments chipset
- http://BeagleBoard.org/chat - live chat with the board creators
- https://github.com/jadonk/bonescript - BoneScript API
- https://docs.google.com/spreadsheets/d/1fE-AsDZvJ-bBwzNBj1_sPDrutvEvsmARqFwvbw_HkrE/edit?usp=sharing - spreadsheet showing pin IDs and other useful info for pinmux configuration
- Accessing GPIO and PWM on BeagleBone AI - e14 Discussion that is trying to sort out GPIO use for the BBAI
- https://github.com/beagleboard/cloud9-examples/issues/18 - issue created on beagleboard github on cloud9 examples not working
- https://github.com/adafruit/adafruit-beaglebone-io-python/issues/317 - Adafruit Github Issue created concerning python examples not working
- https://github.com/mvduin/bbb-pin-utils/tree/bbai-experimental#show-pins - a good utility for viewing pin configuration of your board.
- https://cdn-learn.adafruit.com/downloads/pdf/introduction-to-the-beaglebone-black-device-tree.pdf - all about device tree source files
- https://www.youtube.com/channel/UCXAfHSB_IhaKhntZ4bWfmbw - my youtube channel of projects
- https://github.com/beagleboard/beaglebone-ai/wiki/System-Reference-Manual - the under-development reference manual on GitHub
- BeagleBone AI BB-AI Photos for Documentation Purposes - excellent share by shabaz
- https://blogs.nvidia.com/blog/2016/07/29/whats-difference-artificial-intelligence-machine-learning-deep-learning-ai/ - great article on the history of AI, Machine Learning, and Deep Learning
- A Beginning Journey in TensorFlow #1: Regression
- Introduction — TIDL API User's Guide