Hi, this will be my final update blog for the Pi Chef Design Challenge. It's my first time being part of a competition like this, and I really like the format. I set out to build a smart shelf as my project. The idea was to have a shelf which would measure the weight of and recognize the items on it, and then send that data online where it could be viewed either on a smartphone or a computer. I had past experience working with Arduinos, but I had never really done a project with a Raspberry Pi, so that was a first for me. In this blog I'll pull all of the previous posts together, with explanations of how and why I did things, and also how I would upgrade certain parts.
First of all, let's start with the features the project needed to have:
- Real life model
- Weight measurement
- Item recognition
- Viewing data online
- Connecting all of that
Real life model
Firstly, let's start off with a really basic and necessary step, and that is building the real life model / the test bench. There were really two options for this part: either buying a shelf or making one from scratch. I went with the latter for a few reasons: I could make it into any shape and size I wanted, I could choose my own materials, and I could make it noticeably lighter than a store-bought one. For the material I went with 4 mm thick plywood, the kind used in schools, because I find it very easy to work with, anyone can get it easily, it's light, and with supports it can be made pretty sturdy. Besides plywood I used 10x10 mm slats, which I utilized as reinforcement throughout the build, and of course to connect it all together I used standard wood glue. Here are some pictures of the shelf being made:
In the end I think it turned out pretty well. I was using a table saw, so the edges were a little rough and needed some sanding down. If you check the older blogs you will see that there was a plan to make it a two-part shelf, which would have been good for weighing the lighter stuff, but I simply didn't have time to make it all work, so in the end I used only this part of the shelf. Some of the upgrades I would make to this part of the build would be adding LED strips, adding the previously mentioned second part of the shelf, and making at least the front rail easily detachable so the shelf would be easier to clean if something is spilled on it. The shelf should of course be mounted to a wall, but for the sake of easier testing I rigged up a test bench using clamps and two pieces of wood to hold it to a table, which proved to work pretty well. Here is a picture of the test bench:
Weight measurement
For this part I had a couple of variations based on the same idea. It was apparent that I would be using load cells; they were the easiest and least costly method to use. In the end I used two 10 kg load cells, one on either side of the shelf. To connect them to the Raspberry Pi I used two HX711 amplifiers. The amplifiers I had were dual channel, but I couldn't get the second channel working properly; I had problems ranging from completely inaccurate readings to readings that were all over the place while the load didn't change. So in the end I went with using two amplifiers, which proved to be extremely simple to set up, and I didn't have any issues with this method. First of all the load cells needed to be calibrated. The way that is done is by putting a known mass on the load cell, getting the reading, dividing that reading by the known mass, and putting that value as the reference unit in the code. How to wire up the load cell is explained in my 7th blog (S.H.E.L.F. - The one item shelf - Pi Chef Design Challenge - Blog post #7 ), and a guide for calibrating load cells can be found in the library's example code. I went with tatobari's hx711 library, which can be found here (https://github.com/tatobari/hx711py). It's a great and simple library which can easily be modified, and its example program shows you how to calibrate the load cell itself.
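To make the calibration step concrete, here is a minimal sketch of how I would do it with tatobari's hx711py library. The data/clock pins (5 and 6) and the 500 g test mass are just placeholders for illustration; the idea is simply reading divided by known mass, as described above.

# Minimal calibration sketch using tatobari's hx711py library.
# The GPIO pins (5 = DT, 6 = SCK) and the 500 g test mass are only examples.
import time
from hx711 import HX711

hx = HX711(5, 6)
hx.set_reading_format("LSB", "MSB")
hx.set_reference_unit(1)   # no scaling yet, we want the raw averaged reading
hx.reset()
hx.tare()                  # zero the scale with nothing on it

raw_input("Place the known 500 g mass on the cell and press Enter...")
reading = hx.get_weight(15)          # average of 15 samples
reference_unit = reading / 500.0     # reading divided by the known mass
print("Use set_reference_unit(%f) in the main program" % reference_unit)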
In the end, by comparing the apparatus to a store-bought scale, I found the maximum difference to be within 5 grams, which is awesome for the use I intended. One way I would upgrade this part would be with different sections of the shelf, where one would be aimed at bulkier, heavier stuff while the other would be a much finer section for measuring the mass of spices and things like that, but nevertheless these load cells proved to be very good.
Item recognition
This is the part where I had many ideas. The goal was to detect the item placed on the shelf in the simplest and most user friendly way possible. The first idea was having fixed positions for things on the shelf, with switches which would detect if something was placed on top of them. While pretty simple and basic, this idea was quite limiting for a user and would not be the most elegant thing when finished. The second idea was going with specially designed NFC containers/stickers. While this idea sounds promising, I didn't go with it because of the availability of NFC stickers and how it would all have to be executed. The final idea, the one I went with, was to have stickers which could be recognized by an overhead camera. I liked this idea a lot for a few reasons:
- It didn't require any hardware besides a camera
- The stickers could be easily printed at any copy place, or at home
- Stickers for new items could easily be made
- The stickers could easily be upgraded to contain more data (container weight, how long something can sit on a shelf,...)
- With OpenCV the whole thing turned out to be much easier than I first thought
For this part I used a Pi Camera module, but a USB webcam can also be used, and would probably be easier due to the longer cable. Another big reason why I liked this method is the direction home automation is taking. There is a rapidly increasing number of cameras all around houses, and if the trend continues, there will be cameras monitoring everything, which could also include ingredient levels on shelves. Before submitting this idea and starting on the label part I didn't have a clue about Amazon Go. I was impressed when I saw how essentially the same idea could be implemented professionally on an extremely large scale. Here is a short recap of the label designs and how I used OpenCV to recognize them (I went into great detail on this in my sixth blog, which can be found here: S.H.E.L.F. - Labels - Pi Chef Design Challenge - Blog post #6 ).
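Before getting to the labels themselves, here is a minimal sketch of how a single frame gets from the camera into something OpenCV can work with. It mirrors what the full program at the end of this blog does with the Pi Camera; the commented-out lines show the USB webcam alternative using cv2.VideoCapture.

# Minimal sketch: grabbing one frame as a NumPy array for OpenCV.
import time
import cv2
import picamera
import picamera.array

with picamera.PiCamera() as cam:
    raw = picamera.array.PiRGBArray(cam)
    time.sleep(2)                    # give the sensor a moment to settle
    cam.capture(raw, format="bgr")   # BGR is the channel order OpenCV expects
    frame = raw.array

# A USB webcam would instead be something like:
# cap = cv2.VideoCapture(0)
# ret, frame = cap.read()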
The goal was to design a label which could easily be read, and by that I mean without the need for a lot of coding and without needing a super high resolution camera. Another important thing I wanted to ensure was that the labels were intuitive, and that it was all in a somewhat nice looking package. I could have gone with a QR-code-like label with a picture or a name on the side, but I just didn't like the idea. By reading a book and just reading online about CV, I found that it's rather simple to detect certain color ranges as well as circles. Knowing that, I set off to design the labels. I wanted to use the fewest words possible on the labels, so that language wouldn't be a barrier. Here are some of the designs I made for different items:
Now for the technical part of the stickers. Knowing that there would be more than one item on the shelf at once meant that each label had to be read separately. This is where the bright pink outer circle comes into play. The reason for that color is that I feel it is the least used color in packaging, so while looking for the labels I wouldn't run into a lot of additional noise from other stuff. After finding that circle, the next step was to crop out the square containing it so the image could be processed further. Once that was done, it was time to detect what was on the label. That was done by a unique combination of green and yellow circles on the label. I made a list, stored on the Raspberry Pi, which makes the connection between certain codes and items. As mentioned before, this can easily be upgraded to carry more data, for example by adding new shapes, new colors, new label shapes and so on. All of the code will be shown later in this blog, but the detailed explanation of how it all works can be found in the sixth blog. This part of the project is the one I am most proud of. I had never worked with image processing before and really didn't know what to expect, so I planned and left a lot of time for this part, from reading to watching tutorials on OpenCV, and I am extremely happy with how it turned out in the end. On the downside, though, this meant I had less time for things that I thought were going to be easier, but turned out to be a real hassle.
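To make the decoding step clearer before the full program, here is a condensed sketch of how a cropped label turns into an item name. It assumes `label` is the BGR square already cut out around the pink circle, and that stickervalues.txt and stickernames.txt hold the label codes and the matching item names, one per line (the same files the full program reads). The HSV ranges are the ones from that program; the count_circles helper and the list lookup are a simplified stand-in for the loops in the full listing, and on OpenCV 3+ cv2.cv.CV_HOUGH_GRADIENT would be cv2.HOUGH_GRADIENT instead.

# Condensed sketch: counting the green and yellow circles on a cropped label
# and turning them into an item name. 'label' is assumed to be the square
# cut out around the bright pink circle.
import cv2
import numpy as np

def count_circles(hsv, lower, upper):
    # mask one colour range and count the circles found in that mask
    mask = cv2.inRange(hsv, np.array(lower), np.array(upper))
    blur = cv2.blur(mask, (5, 5))
    circles = cv2.HoughCircles(blur, method=cv2.cv.CV_HOUGH_GRADIENT, dp=1, minDist=20,
                               param1=50, param2=13, minRadius=10, maxRadius=100)
    return 0 if circles is None else len(circles[0])

hsv = cv2.cvtColor(label, cv2.COLOR_BGR2HSV)
green = count_circles(hsv, [40, 50, 50], [80, 255, 255])
yellow = count_circles(hsv, [15, 50, 50], [35, 255, 255])
d = 10 * green + yellow   # the unique label code, e.g. 2 green + 3 yellow -> 23

codes = [int(line) for line in open('stickervalues.txt')]
names = [line.strip() for line in open('stickernames.txt')]
if d in codes:
    print(names[codes.index(d)])
else:
    print('Either an error or a new sticker')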
Viewing the data online
This was a purely software-based problem. There are a lot of different ways to go about it, but in the end I went with the simplest of them all, ThingSpeak. ThingSpeak is a great free service which lets you easily upload data to, and get data from, the cloud. This is where the time limit had an effect. The plan was to develop my own app, but looking online and considering how much time I had left, I decided to go with an existing Android app called ThingView. This app proved to be extremely easy to use; the only thing needed to set it up was the API key from the ThingSpeak channel I created. On the Raspberry Pi side of things only one new library was needed, plus a rather simple piece of code to upload the data online, which could then easily be accessed by any device from anywhere. Being so quick to set up, this method of course had its limitations. Firstly, there is a limit of 8 fields per channel, though this can be worked around by having multiple channels. It also didn't provide all the flexibility I wanted, but it was made as a general-purpose tool, not specifically for this project, and all in all it worked like a charm. An upgrade to this part would of course be a custom Android app, with everything from indicators when something is running low, to a much bigger number of channels, to controls for the shelf itself. For the description and code for ThingSpeak, here is the link to my tenth blog (S.H.E.L.F. - The First Working Prototype - Pi Chef Design Challenge - Blog post #10 )
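For reference, here is a minimal sketch of the upload itself, stripped down from the full program below. The API key is a placeholder (use the write key from your own channel), and field1 carries the weight value just like in my code.

# Minimal sketch: pushing one value to a ThingSpeak channel over the update endpoint.
import urllib2

myAPI = "YOUR_WRITE_API_KEY"   # placeholder, use the write key from your own channel
baseURL = 'https://api.thingspeak.com/update?api_key=%s' % myAPI

weight = 750   # example value in grams
f = urllib2.urlopen(baseURL + "&field1=%s" % weight)
print(f.read())   # ThingSpeak replies with the entry number, or 0 if the update failed
f.close()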
Connecting everything together
This is the part where I'll talk about all of the other stuff, mainly the code that ties everything together. Let's begin with a feature which to me makes a big difference. It is an extremely simple and easy thing to do, but it changes a lot: a buzzer and LED indicators. This is so useful because it creates another connection between the project and the real world besides the Android app. I programmed this part to be useful when placing more than one thing on the shelf. A successful label read is indicated with a double flash of a green LED while the buzzer makes a short two-burst sound, while if something goes wrong during the label read, a single long sound comes from the buzzer, synchronized with a red LED. Some of the things I really wanted to add were a Calibrate button and an Ignore button. The Calibrate button, as the name suggests, would be used to re-zero the shelf whenever we wanted to set the current reading as zero; this would be useful when adding more electronics on board or simply hanging stuff underneath the shelf. The Ignore button would work in a similar way, and would be used when we wanted to ignore the next item placed on the shelf, let's say a cookbook for example. Finally we come to the code that connects it all. I used Python for everything because I found it to be very easy to learn, with a lot of online libraries which greatly helped along the way.
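Before the full listing, here is a rough sketch of how the Calibrate button could look. It is not part of the code below: the button pin (16) is just an assumption, it relies on the hx and hx1 load cell objects from the main program, and pressing the button simply re-tares both load cells so whatever is currently on the shelf becomes the new zero.

# Rough sketch of the proposed Calibrate button (not implemented in the final code).
# The button pin is an assumption; hx and hx1 come from the main program below.
import RPi.GPIO as GPIO

calibratePin = 16
GPIO.setup(calibratePin, GPIO.IN, pull_up_down=GPIO.PUD_UP)   # button wired between the pin and ground

# inside the main while loop, before reading the weight:
if GPIO.input(calibratePin) == GPIO.LOW:   # LOW means the button is pressed
    hx.tare()
    hx1.tare()
    print('Shelf re-calibrated, current load is the new zero')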
#These are the libraries we are using
import RPi.GPIO as GPIO
import time
from hx711 import HX711
import numpy as np
import cv2
import picamera
import picamera.array
import sys
import os
import urllib2

#This part just helps us get out of the program easily
def cleanAndExit():
    print "Cleaning..."
    GPIO.cleanup()
    print "Bye!"
    v.close()
    n.close()
    quit()

#Setup our API and delay
myAPI = "1F4TL5MIORXMCCI9"
myDelay = 5 #how many seconds between posting data
baseURL = 'https://api.thingspeak.com/update?api_key=%s' % myAPI
print baseURL

#This part right here shows how I set up the two load cells
#using the two amplifiers
hx = HX711(5, 6)
hx.set_reading_format("LSB", "MSB")
hx.set_reference_unit(-215)
hx.reset()
hx.tare()
hx1 = HX711(23, 24)
hx1.set_reading_format("LSB", "MSB")
hx1.set_reference_unit(-204)
hx1.reset()
hx1.tare()

#This part here is just the initial setup of the pins
buzzerPin = 21
redledPin = 26
greenledPin = 20
k = 0
q = 0
t = 0
p = 0
val = 0
val1 = 0
GPIO.setup(buzzerPin, GPIO.OUT, initial=GPIO.LOW)
GPIO.setup(redledPin, GPIO.OUT, initial=GPIO.LOW)
GPIO.setup(greenledPin, GPIO.OUT, initial=GPIO.LOW)

#The main body of the program starts here
while True:
    try:
        #The first part here shows how I tracked if the onboard
        #weight changed
        val1 = val
        val = max(0, int(hx.get_weight(5))) + max(0, int(hx1.get_weight(23)))
        hx.power_down()
        hx.power_up()
        hx1.power_down()
        hx1.power_up()
        time.sleep(0.5)
        if val - val1 > 5:
            time.sleep(1)
            val = hx.get_weight(5) + hx1.get_weight(23)
            #From here the OpenCV kicks in with the label detection
            #It would be better if the label detection was placed in a function, but
            #since I am only using it once I put it in this way
            p = 0 #reset the "label found" flag before every new capture
            with picamera.PiCamera() as cam:
                raw = picamera.array.PiRGBArray(cam)
                cam.start_preview()
                time.sleep(3)
                cam.capture(raw, format="rgb")
                cam.stop_preview()
            frame = raw.array
            frame = cv2.cvtColor(frame, cv2.COLOR_RGB2BGR)
            hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
            #Detection of the bright pink color
            image_mask = cv2.inRange(hsv, np.array([130,50,50]), np.array([160,255,255]))
            blur = cv2.blur(image_mask, (5,5))
            #Detection of the bright pink circle
            circles = cv2.HoughCircles(blur, method=cv2.cv.CV_HOUGH_GRADIENT, dp=1, minDist=700,
                                       param1=50, param2=13, minRadius=15, maxRadius=300)
            image_mask = cv2.cvtColor(image_mask, cv2.COLOR_GRAY2BGR)
            if circles is not None:
                for i in circles[0,:]:
                    cv2.circle(image_mask, (i[0],i[1]), i[2], (0,255,0), 2)
                    cv2.circle(image_mask, (i[0],i[1]), 2, (0,0,255), 3)
                    x = int(i[0])
                    y = int(i[1])
                    r = int(i[2])
                    p = 1
            if p == 1:
                #This line cuts out the square around the label if the circle is found
                new = frame[(y-r):(y+r), (x-r):(x+r)]
            else:
                #And here we signal if there is an error finding the circle
                print('No label found')
                GPIO.output(buzzerPin, GPIO.HIGH)
                GPIO.output(redledPin, GPIO.HIGH)
                time.sleep(0.5)
                GPIO.output(buzzerPin, GPIO.LOW)
                GPIO.output(redledPin, GPIO.LOW)
                GPIO.cleanup()
                quit()
            hsv1 = cv2.cvtColor(new, cv2.COLOR_BGR2HSV)
            #Finding the green color
            image_mask1 = cv2.inRange(hsv1, np.array([40,50,50]), np.array([80,255,255]))
            blur1 = cv2.blur(image_mask1, (5,5))
            circles1 = cv2.HoughCircles(blur1, method=cv2.cv.CV_HOUGH_GRADIENT, dp=1, minDist=20,
                                        param1=50, param2=13, minRadius=10, maxRadius=100)
            image_mask1 = cv2.cvtColor(image_mask1, cv2.COLOR_GRAY2BGR)
            if circles1 is not None:
                #This part counts the green circles
                for i in circles1[0,:]:
                    cv2.circle(image_mask1, (i[0],i[1]), i[2], (0,255,0), 2)
                    cv2.circle(image_mask1, (i[0],i[1]), 2, (0,0,255), 3)
                    k = k + 1
            #Finding the yellow color
            image_mask2 = cv2.inRange(hsv1, np.array([15,50,50]), np.array([35,255,255]))
            blur2 = cv2.blur(image_mask2, (5,5))
            circles2 = cv2.HoughCircles(blur2, method=cv2.cv.CV_HOUGH_GRADIENT, dp=1, minDist=15,
                                        param1=50, param2=13, minRadius=10, maxRadius=50)
            image_mask2 = cv2.cvtColor(image_mask2, cv2.COLOR_GRAY2BGR)
            if circles2 is not None:
                #This part counts the yellow circles
                for i in circles2[0,:]:
                    cv2.circle(image_mask2, (i[0],i[1]), i[2], (0,255,0), 2)
                    cv2.circle(image_mask2, (i[0],i[1]), 2, (0,0,255), 3)
                    q = q + 1

            #These are the files used for storing data on the stickers
            v = open('stickervalues.txt', 'r')
            n = open('stickernames.txt', 'r')
            d = 10*k + q #This is how the unique code was made
            print(d)
            i = 0
            k = 0
            q = 0
            m = 101
            j = 0
            a = []
            b = []
            while True:
                c = v.readline()
                if not c:
                    break
                a.insert(i, c)
                i = i + 1
            while True:
                c = n.readline()
                if not c:
                    break
                b.insert(j, c)
                j = j + 1
            i = i - 1
            j = j - 1
            #Search the list of known codes for the one we just read
            while True:
                if q <= i:
                    if int(a[q]) == d:
                        m = q
                        break
                    else:
                        q = q + 1
                else:
                    break
            q = 0
            if m != 101:
                print(b[m] + ' ' + str(val) + ' grams')
                try:
                    #Uploading data online
                    f = urllib2.urlopen(baseURL + "&field1=%s" % val)
                    time.sleep(myDelay)
                    print f.read()
                    f.close()
                except:
                    print 'exiting'
                #This is the sequence for when everything works
                GPIO.output(buzzerPin, GPIO.HIGH)
                GPIO.output(greenledPin, GPIO.HIGH)
                time.sleep(0.2)
                GPIO.output(buzzerPin, GPIO.LOW)
                GPIO.output(greenledPin, GPIO.LOW)
                time.sleep(0.1)
                GPIO.output(buzzerPin, GPIO.HIGH)
                GPIO.output(greenledPin, GPIO.HIGH)
                time.sleep(0.2)
                GPIO.output(buzzerPin, GPIO.LOW)
                GPIO.output(greenledPin, GPIO.LOW)
            else:
                print('Either error or new sticker')
                #This is the sequence when something goes wrong
                GPIO.output(buzzerPin, GPIO.HIGH)
                GPIO.output(redledPin, GPIO.HIGH)
                time.sleep(0.5)
                GPIO.output(buzzerPin, GPIO.LOW)
                GPIO.output(redledPin, GPIO.LOW)
    except (KeyboardInterrupt, SystemExit):
        cleanAndExit()
Here is the list of all the PiChef blogs:
- S.H.E.L.F. - Plans and Ideas - Pi Chef Design Challenge - Blog post #1
- S.H.E.L.F. - Decisions - Pi Chef Design Challenge - Blog post #2
- S.H.E.L.F. - 3D model, OpenCV - Pi Chef Design Challenge - Blog post #3
- S.H.E.L.F. - Line detection, additional features and Android - Pi Chef Design Challenge - Blog post #4
- S.H.E.L.F. - Real Life Model - Pi Chef Design Challenge - Blog post #5
- S.H.E.L.F. - Labels - Pi Chef Design Challenge - Blog post #6
- S.H.E.L.F. - The one item shelf - Pi Chef Design Challenge - Blog post #7
- S.H.E.L.F. - Dual load cell and Board - Pi Chef Design Challenge - Blog post #8
- S.H.E.L.F. - Short update - Pi Chef Design Challenge - Blog post #9
- S.H.E.L.F. - The First Working Prototype - Pi Chef Design Challenge - Blog post #10
Thank you note
In the end I would just like to thank the organizers for this excellent competition, as well as all of the people who commented and left feedback on my blogs throughout it. It was my first time being part of a competition like this, where I have to prototype, build and document everything in a certain time frame, and I really didn't know what to expect in the beginning. Sadly I didn't manage to finish everything I intended to make, mostly due to bad time management; it's my first year at university, so I could usually only work on weekends (though with better time management everything could have been done). Even so, I still think I learned a lot from this competition, from doing something with a Raspberry Pi, to OpenCV, to really needing to manage my time better. As I said earlier in this blog, I planned out a lot of time for the labels expecting them to be extremely time consuming, reading a whole book about OpenCV, going through tutorials and so on, while leaving a lot less time for things that I thought would be easy, which proved not to work out the way I had expected and hoped. But in the end I am really happy I was part of this competition, and I'm really happy with what I've made. I followed other people's blogs along the way and was amazed by what they were doing, and some of their posts helped me along the way. I wish everyone the best of luck! Hope you liked my project, thanks for reading!!!
Milos