Art is often made to appeal to a specific set of viewers. This is the 8th blog in a series exploring the idea of using AI to recognize a viewer and adapt art to their age or interest. The intent is to encourage early art appreciation while being attractive to all household members.
Image recognition has advanced, and the build has been deemed acceptable for a prototype.
Tidying the Build
For the time being I have simplified the enclosure on the back side of the display. This leaves wiring exposed when viewed from the back, but it allows easier access while getting things sorted and does not detract from the front of the prototype. A new mount for the second Raspberry Pi (which runs the image recognition) was printed and screwed into place. Clips and tabs were printed to secure the faceplate to the monitor and stand. Everything is now glued and screwed together and acceptably sturdy.
With the mount, tabs, clips, and restraints in place, it looks like this from the back side:
Working with a second Raspberry Pi has made things more complicated than originally imagined. There are mechanical complications to strapping a second Pi to the back, with resulting space and access issues. Power is an issue since two USB supplies are required, and software development is more tedious. As will be discussed below, however, the Pi doing the image recognition is fully loaded; trying to do recognition and run the art displays on a single Pi would bog things down.
Image Recognition
The training set currently consists of 8 persons and 24 photos. The tradeoff is frame rate vs. accuracy; with 24 photos the frame rate is a bit more than 4 frames per second. Recognition is limited to head-on views and deteriorates if the lighting changes or for all sorts of other reasons, like my granddaughter wearing her hair straight instead of in a ponytail. It is best at recognizing me, I suspect because I am the only one who wears glasses. When it errs with children, it tends to identify them as my youngest granddaughter. The other adults in the dataset usually come back as "unrecognized" faces, mostly because the training set is sparser on them. I will continue working on this but suspect it will remain a weakness.
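For readers curious about the "closest match or unrecognized" behavior described above, here is a minimal sketch of the logic. It is not the actual project code; the names, the short illustrative vectors (real face encodings are much longer), and the tolerance value are all assumptions for illustration.

```python
import math

# Hypothetical encodings: in a real build these would be long vectors
# produced by a face-encoding model; short vectors here for clarity.
KNOWN_ENCODINGS = {
    "grandpa":       [0.1, 0.9, 0.3],
    "granddaughter": [0.8, 0.2, 0.5],
}

TOLERANCE = 0.6  # distances above this are treated as "unrecognized"

def classify(encoding, known=KNOWN_ENCODINGS, tolerance=TOLERANCE):
    """Return the closest known name, or 'unrecognized' if nothing is close."""
    def distance(a, b):
        # Euclidean distance between two encodings
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

    name, dist = min(
        ((n, distance(encoding, e)) for n, e in known.items()),
        key=lambda pair: pair[1],
    )
    return name if dist <= tolerance else "unrecognized"

print(classify([0.15, 0.85, 0.35]))  # close to grandpa -> "grandpa"
print(classify([0.9, 0.9, 0.9]))     # far from everyone -> "unrecognized"
```

A sparse training set makes a person's encodings sit farther from any new photo of them, which is why the under-represented adults tend to land above the tolerance and come back "unrecognized."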
Software Development
Slogging through the software is the least fun part. Not being a Linux guy, I haven't been able to get the Pi running the image recognition to launch the Python script at startup. I have tried modifying rc.local and creating a unit file in systemd, without success, using the directions in this link. If anyone has a good tutorial on running a Python script continuously from startup, I would appreciate the link. Otherwise I can always start it from the command line. I am pretty sure I am just doing something boneheaded and need to work through it more carefully. The other software chore is to get the Pi on the video side doing the displays with a bit more panache.
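For anyone attempting the same thing, a systemd unit along these lines is the usual approach. The service name, script path, and user here are assumptions, not the project's actual files:

```
# /etc/systemd/system/picasso.service  (name and paths are placeholders)
[Unit]
Description=PiCasso image recognition
After=network.target

[Service]
ExecStart=/usr/bin/python3 /home/pi/picasso/recognize.py
WorkingDirectory=/home/pi/picasso
User=pi
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

After saving the file, `sudo systemctl daemon-reload` followed by `sudo systemctl enable --now picasso.service` should start it at every boot, and `journalctl -u picasso.service` shows the script's output, which is often where the boneheaded mistake reveals itself.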
The Ballerina
This is the fun part. I knew my granddaughter would want a box like the robot box I gave her brother. She said she wanted a ballerina, and here is her drawing.
I like the free flowing lines and simplicity.
For this print, 4 colors will be used. Since my printer has only one print head, each color gets its own band of layers, with subsequent colors laid down on top. The printer can do layer heights of 0.1, 0.2, or 0.3 mm, and a minimum of 2 layers (and maybe more) is needed to get good color. I elected to use a 0.2 mm layer height and a 0.6 mm thickness for each color, so 3 layers per color. The total thickness of the figure is 2.4 mm.
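The layer arithmetic above can be captured in a few lines that also work out the Z heights at which to pause the print for each filament swap (the color order follows the description below; the script itself is just an illustration):

```python
LAYER_HEIGHT = 0.2      # mm per printed layer
COLOR_THICKNESS = 0.6   # mm per color band
COLORS = ["white", "gold", "black", "red"]  # bottom to top

layers_per_color = round(COLOR_THICKNESS / LAYER_HEIGHT)
total = COLOR_THICKNESS * len(COLORS)
print(f"{layers_per_color} layers per color, {total:.1f} mm total")
# -> 3 layers per color, 2.4 mm total

# Pause at the top of each color band to swap filament for the next color.
for i, color in enumerate(COLORS[:-1]):
    z = COLOR_THICKNESS * (i + 1)
    print(f"pause at Z = {z:.1f} mm: swap {color} -> {COLORS[i + 1]}")
```

So the pauses land at 0.6, 1.2, and 1.8 mm, which matches the 2.4 mm total thickness of the figure.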
Her pencil sketch was scanned in, outlined in Photoshop, and then converted to an SVG file at online-convert.com. I have been using Photoshop due to familiarity, but I am comfortable enough with Fusion 360 now that I will try going straight to it next time. The trick is to simplify the drawings and give adequate thickness to the lines so that the 3D printer can handle them.
The SVG file was pulled into Fusion 360 and the color layers extruded as described above. The white layer covers the entire print, then the gold layer is on top of it followed by the black layer and the red layer.
When printing, I manually pause the print at each new color layer and change the filament before restarting. And this is the result.
The boxes are kind of peripheral to the project but have been well received by the kids. It made me think that a music box might be a fun project as well. My granddaughter told me that she had done "stop motion" movies at school. Maybe we can use a Pi to make a stop motion movie with "The Ballerina". Another idea is to print something that can pop up when the baby is in view, either driven by the Pi's face recognition or maybe a servo. Doing art with the kids is so much fun I may not get around to doing anything for adults.
Upcoming Work
- Improve image recognition
- Write Python script to drive the monitor and control the art output
- Continue developing art
Thanks for reading - comments and suggestions are always welcome - especially on how to get a Python script to run when a Pi is booting.
Other Posts in this Series
PiCasso Adapting Art to Viewers: Introduction Blog #1
PiCasso Adapting Art to Viewers: OpenCV Face Detection, Blog #2
PiCasso Adapting Art to Viewers: Cabinet Design, Blog #3
PiCasso Adapting Art to Viewers: Pi talks to Pi, Blog #4
PiCasso Adapting Art to Viewers: Grandpa Shark, Blog #5
PiCasso Adapting Art to Viewers: More on Cabinet Design, Blog #6