This week I was planning on getting into the production of the robot arm. Unfortunately, as I hadn't yet grabbed the 3D printer I was planning to use, I couldn't get started on it.
I decided instead to implement motion detection to start the voice recognition software. The plan I had for the flow of control is illustrated in the following diagram: motion detects movement, runs a command that connects to a local socket, and a Python script listening on that socket kicks off the voice recognition.
After doing a bit of digging I decided that using motion would be the easiest way to implement motion detection, so I installed it via apt. Once installed, I played with motion using the default settings to make sure it was working. I had a bit of trouble here as I was trying to use the Raspberry Pi camera as the source; this doesn't work out of the box, but you can get it running by executing the following command (which needs to be rerun after each reboot, unless you add the module to /etc/modules so it loads at boot):
sudo modprobe bcm2835-v4l2
Once I had that sorted, I got into configuring the settings. These live in /etc/motion/motion.conf, and I ended up changing the following (turning off the saved pictures and movies, disabling the live stream, and hooking my own script onto the motion-detected event):
output_pictures off
ffmpeg_output_movies off
stream_port 0
on_motion_detected /usr/bin/python3 /home/pi/on_picture.py
Initially I tested using a simple script which wrote a file containing the datetime at which it was invoked. Once I had that working, I moved on to implementing the flow above.
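That test script was only a few lines; a minimal sketch along those lines (the exact output path is my own assumption, not the one I used) looks something like this:
from datetime import datetime

# append a line with the current datetime each time motion invokes this script
# (the output path is just for illustration)
with open("/home/pi/motion_test.txt", "a") as f:
    f.write("motion detected at " + str(datetime.now()) + "\n")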
I started off by adding the code to listen on a socket (8888 seemed reasonable), then changed the on_motion_detected setting to "telnet localhost 8888". This didn't work initially, but I quickly realised that was only because telnet isn't installed by default (sad times). I tested this using a simple script which listened on the port and wrote to a file as before. The code for this can be found below:
from datetime import datetime
import socket
import sys

# create a socket to listen on
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)

try:
    s.bind(('', 8888))
except socket.error as msg:
    # in Python 3 socket.error is an alias for OSError, so just print the exception
    print('Bind failed: ' + str(msg))
    sys.exit()

print('Socket bind complete')

s.listen(10)
print('Socket now listening')

while True:
    # wait to accept a connection
    print("waiting for connection")
    conn, addr = s.accept()
    # write a file with the datetime when a connection is made
    with open("/home/pi/test.txt", "w") as file:
        file.write("called from motion detection " + str(datetime.now()) + "\n")
    conn.close()
Combining the motion detection with the voice activation wasn't too hard. The end result can be found on my GitHub as usual.
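As a rough illustration of how the two pieces fit together (this is a sketch rather than the code from the repo, and listen_for_command is a hypothetical stand-in for the voice recognition entry point), the accept loop just hands off to the voice code whenever motion pokes the socket:
import socket

def listen_for_command():
    # hypothetical placeholder for the voice recognition entry point
    print("listening for a voice command...")

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.bind(('', 8888))
s.listen(10)

while True:
    # block until motion's telnet command connects
    conn, addr = s.accept()
    conn.close()
    # motion has been detected, so start listening for a voice command
    listen_for_command()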
*Other items of note from the week are that I actually ordered the servos I'm planning to use in the robot arm (here is a link). I also picked up the 3D printer from my friend today (thanks Ed); it's a Micro 3D and has a very small print bed, but I think I can make it work.