In the previous parts of this series we set up a shared network folder and some network nodes. Now we can actually get on with installing and using Blender.
Installation
To install Blender, run the following:
sudo apt-get install blender
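If you want to confirm the install worked, you can print the version number:
blender --version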
Running Blender
As Blender is a graphical program, it made sense to attach a screen to my controller node and launch the application. It took a while to launch but eventually displayed the default scene of a cube and a light. Even on the Pi3 the graphical interface is pretty slow, so I wouldn't want to create scenes on the Pi itself. The menus are unresponsive and even just navigating the file structure is a challenge.
I downloaded some sample files and rendered the first one. A couple of minutes later it appeared.
Command line
It is also possible to run Blender from the command line to render either single frames or animated sequences. You'll still need the UI to design the models and animation first, and that is also where the output parameters are set, but some of the output details can be overridden at the command line.
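For example, here is a sketch of forcing PNG output for the buggy sample used below; -F picks the render format and -x 1 appends the matching file extension:
blender -b /mnt/network/Samples/AtvBuggy/buggy2.1.blend -o /mnt/network/Samples/AtvBuggy/buggyrender -F PNG -x 1 -f 1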
The command line prints a strange warning which I've not worked out yet, although it doesn't seem to affect the render.
AL lib: (WW) alc_initconfig: Failed to initialize backend "pulse"
I repeated the render from the command line with the following:
blender -b /mnt/network/Samples/AtvBuggy/buggy2.1.blend -o /mnt/network/Samples/AtvBuggy/buggyrender -f 1
The parameters are:
-b  the scene file to load, rendering in the background without the UI
-o  the output file path
-f  the number of the frame to render
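One detail worth knowing from the command line documentation in the references: Blender processes its arguments in the order they are given, so settings such as -o (and the -F and -x above) must appear before the -f that actually triggers the render, or they are silently ignored.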
On the Pi3 that generated the file in 01:00.38; the Pi2 took a little longer at 01:15.89.
Animation
I picked a model helicopter animation to test out rendering on the cluster, and created a simple shell script to render a different range of frames on each of the nodes.
#!/bin/bash
ssh cluster1 blender -b /mnt/network/Samples/Demo_274/scene-Helicopter-27.blend -o /mnt/network/Samples/Demo_274/Helicopter##### -s 1 -e 25 -a &
ssh cluster2 blender -b /mnt/network/Samples/Demo_274/scene-Helicopter-27.blend -o /mnt/network/Samples/Demo_274/Helicopter##### -s 26 -e 50 -a &
ssh cluster3 blender -b /mnt/network/Samples/Demo_274/scene-Helicopter-27.blend -o /mnt/network/Samples/Demo_274/Helicopter##### -s 51 -e 75 -a &
# Render rest locally
blender -b /mnt/network/Samples/Demo_274/scene-Helicopter-27.blend -o /mnt/network/Samples/Demo_274/Helicopter##### -s 76 -e 100 -a
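The new parameters here are:
-s  start frame of the range
-e  end frame of the range
-a  render the animation over the range set by -s and -e
The ##### in the output path is replaced by the zero-padded frame number, so all four jobs can safely write distinct files into the same folder.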
Then I ran the script with:
./BatchRender.sh > render.log
This was perhaps a little optimistic as it was hard to tell what was going on and at least one of the nodes failed to find the network drive.
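One way to make the combined output easier to follow (a sketch, not something my script did) is to tag each node's lines before they land in the log, for example:
ssh cluster1 blender -b /mnt/network/Samples/Demo_274/scene-Helicopter-27.blend -o /mnt/network/Samples/Demo_274/Helicopter##### -s 1 -e 25 -a 2>&1 | sed 's/^/[cluster1] /' &
Here sed prefixes every line from that node with its name, so the interleaved render.log at least shows which node said what.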
I had to remount the drives using the following command. It should be possible to schedule this at boot, but I have yet to configure that.
sudo mount -a
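Since sudo mount -a works, the share is already listed in /etc/fstab; the usual reason it's missing after a reboot is that the network isn't up when fstab is processed. A sketch of a fix, assuming the share is an NFS export (the server address and paths here are placeholders, adjust to your own setup), is to add the _netdev and x-systemd.automount options so the mount waits until the network and the share are actually available:
# hypothetical fstab entry - adjust server address, export path and mount point
192.168.1.1:/srv/share  /mnt/network  nfs  defaults,_netdev,x-systemd.automount  0  0
Enabling "Wait for Network at Boot" in raspi-config is another option worth trying.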
I then created an SSH session to each of the nodes and started the renders. The first few frames appeared after about 30 minutes; the helicopter turned out to be a photo-realistic Meccano one!
Three of the nodes were producing one frame every 30 minutes; the last was estimating 10 hours per frame. When I checked, that node turned out to be a B+, so the extra power of the Pi3 really makes a difference here. Best, then, that the other three nodes take some of the workload from it.
After a few frames, I realised that this scene was not actually animated, so all my nodes had produced the same image! My Blender skills are fairly limited, so rather than animating it myself I tracked down some demo examples with animation at https://download.blender.org/demo/old_demos/demos/ .
I decided to use hothothot.blend from the 220 zip file. Results below.
Producing a video
Once you have a series of frames you need to turn them into a video. Blender does have a built-in video editor for this, but an alternative is the command line tool FFmpeg.
This can be installed by following Jeff Thompson's instructions to build FFmpeg; note that the build could take a few hours.
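Depending on your Raspbian release you may be able to skip the build entirely: if a packaged version is available then sudo apt-get install ffmpeg will save those hours, although older releases shipped the libav fork with an avconv binary instead.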
Creating the video took a few seconds with the following command:
ffmpeg -r 60 -f image2 -s 320x240 -i Render%05d.png -vcodec libx264 -crf 25 -pix_fmt yuv420p Render.mp4
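For reference, the parameters are:
-r 60  read the image sequence at 60 frames per second
-f image2  treat the input as a sequence of image files
-s 320x240  output resolution
-i Render%05d.png  input file pattern; %05d matches five-digit zero-padded frame numbers, which is what Blender's ##### padding produces
-vcodec libx264  encode with H.264
-crf 25  constant rate factor, lower values mean better quality
-pix_fmt yuv420p  pixel format that most players can handle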
Summary
So in summary, the blade does a good job of providing a platform and power to the boards. As has been seen, the setup of the network can be challenging; perhaps I should have stuck to DHCP! The sharing of the disk, in comparison, was straightforward. The suggested use case of a Blender render farm is quite achievable, although you'd want to use the Pi3 rather than earlier models. For a big project you'd want to look into how the allocation of frames to nodes could be automated; there are some commercial solutions available, but it should also be possible to code something, as sketched below.
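As a minimal sketch of that idea, using the node names and paths from the helicopter script above (a real solution would also weight the faster nodes and skip the B+):
#!/bin/bash
# Sketch: spread an animation's frames evenly across the render nodes.
BLEND=/mnt/network/Samples/Demo_274/scene-Helicopter-27.blend
OUT='/mnt/network/Samples/Demo_274/Helicopter#####'
NODES=(cluster1 cluster2 cluster3 localhost)   # assumes the controller can ssh to itself
START=1
END=100
TOTAL=$((END - START + 1))
CHUNK=$(( (TOTAL + ${#NODES[@]} - 1) / ${#NODES[@]} ))   # frames per node, rounded up
s=$START
for node in "${NODES[@]}"; do
  [ "$s" -gt "$END" ] && break                 # no frames left for this node
  e=$((s + CHUNK - 1))
  [ "$e" -gt "$END" ] && e=$END
  ssh "$node" blender -b "$BLEND" -o "$OUT" -s "$s" -e "$e" -a &
  s=$((e + 1))
done
wait   # block until every node has finished its range
The final wait also makes it easy to time the whole render from start to finish.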
References
Blender command line rendering: https://docs.blender.org/manual/en/dev/render/workflows/command_line.html
Blender demo files: https://www.blender.org/download/demo-files/
Installing FFMPEG for Raspberry Pi – Jeff Thompson