Introduction
One of the most interesting aspects of the PONF project is that it aims to build an ecosystem around the multi-back camera: a tool supporting both shooting and post-production. The latter implies the availability of several classes of applications:
- Image archiving, easy to manage both locally and remotely
- Backup and cataloguing features
- High-resolution image processing
The scenario depicted in the three points above can be achieved with the Raspberry Pi Compute Module 3, accepting a few compromises. The design of an efficient digital darkroom was refined by progressive approximations, and that process is the subject of this CM3 Desktop Mode Road Test.
This article is part of the Raspberry Pi Compute Module 3 Development Kit Road Test.
Resolution and performance
We need sufficient screen resolution to manage the photo archive while operating directly on the native 24 Mp RAW DNG files; every file is between 20 MB and 30 MB. The CM3, like the traditional Raspberry Pi 3, can drive the HDMI output at Full HD resolution (1920x1080 px). The video below shows the results obtained with different screen sizes and resolutions, tested with the same applications and a set of typical DNG sample files.
The applications used for testing cover most of the needs of image archiving and post-production: digiKam for image archiving (comparable to Lightroom and Aperture, including an SQL relational database and many advanced features) and GIMP, whose most recent versions can replace the well-known Photoshop without problems. Both applications work natively with RAW DNG files.
The first tests were done on a 21" HDMI screen connected to the CM3, set to Full HD resolution (settings can be changed by editing the /boot/config.txt configuration file and rebooting). Most of the tested graphic programs, including the two mentioned above, crashed, apparently due to the limited available memory. The worry was that, because the Raspberry Pi is limited to 1 GB of RAM, the idea of an image post-processing desktop included in the camera would have to be abandoned; I also tried to increase the shared GPU memory above 128 MB via the raspi-config command, without any improvement in performance.
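For reference, this is a minimal sketch of two ways the memory split can be raised; the non-interactive raspi-config call and the gpu_mem value shown here are assumptions based on standard Raspbian tooling, not the exact commands used in the test:

```bash
# Option 1: non-interactive raspi-config (assumed syntax)
sudo raspi-config nonint do_memory_split 256

# Option 2: set the GPU memory split directly in /boot/config.txt
echo "gpu_mem=256" | sudo tee -a /boot/config.txt

# Either way, a reboot is needed to apply the new split
sudo reboot
```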
We should also consider that the desktop environment was kept minimal, trying to optimise the resources as much as possible (a sketch of the installation commands follows the list):
- Installation of Raspbian Jessie Lite on the eMMC internal flash memory of the CM3
- Manual installation of a minimal X11 server based on MATE (the desktop used by Ubuntu MATE), which seems to be one of the most lightweight
- No drivers beyond the bare essentials running on the system
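A minimal sketch of that installation, assuming the Debian/Raspbian jessie package names of the time (the exact package set may have differed):

```bash
# Keep the footprint small: skip recommended packages
sudo apt-get update
sudo apt-get install --no-install-recommends \
    xserver-xorg xinit \
    mate-desktop-environment-core
```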
Anyway, this scenario performed badly enough to be considered unusable.
The strange aspect is that even with only 1 GB of available memory Linux works well in many complex cases, so I tended to exclude the memory limit as the main blocking factor. I then tried two other kinds of screen: an 800x480 HDMI touch screen from Seeed and the original 7" Raspberry Pi touch LCD. In both cases, under the same conditions and after just setting the new resolution, the programs worked perfectly, although with different issues: the small screens and low resolutions made it impossible to use the applications proficiently, but they ran fast and without particular problems.
An HDMI screen reports its native resolution to the board, which adapts the graphical output accordingly (with or without overscan); so I forced different resolutions on the 21" HDMI LCD screen I had tested first, with some interesting results. Depending on the resolution set, the performance of the graphic applications changed significantly, and the best working resolution turned out to be 1024x768. Programs (almost) never hang, and even the most complex filters work fine directly on the large native RAW DNG files.
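As a sketch, forcing that mode looks like the following; hdmi_group=2 selects the DMT table and hdmi_mode=16 corresponds to 1024x768 @ 60 Hz in the standard Raspberry Pi video options:

```bash
# Append a fixed HDMI mode to /boot/config.txt and reboot to apply it
sudo tee -a /boot/config.txt <<'EOF'
hdmi_group=2
hdmi_mode=16
EOF
sudo reboot
```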
A couple of tips
Making further tests in all possible conditions, I found some settings that help the stability of the system (a sketch follows the list):
- Stop the VNC server if one is configured; VNC is very useful when no screen is connected to the Raspberry Pi, but it consumes resources, and the entire graphic memory is replicated to the remote computer through the network. Better still, don't just stop the VNC server service but disable it at boot.
- The reason is not fully understood yet, but setting the overscan to the default 16-pixel values gives better graphic performance.
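A minimal sketch of both tips, assuming the RealVNC service shipped with Raspbian (vncserver-x11-serviced) and the stock overscan options in /boot/config.txt:

```bash
# Stop the VNC server now and keep it from starting at boot
sudo systemctl stop vncserver-x11-serviced
sudo systemctl disable vncserver-x11-serviced

# Restore the default 16-pixel overscan values
sudo tee -a /boot/config.txt <<'EOF'
overscan_left=16
overscan_right=16
overscan_top=16
overscan_bottom=16
EOF
```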
Testing a well-known workflow
To manage my 18 Mp and 24.5 Mp RAW images I follow a workflow that, in my experience, gives the best results with the minimum effort. Typically I use two applications on OS X: Aperture and, only for some images, Photoshop. As a matter of fact, Photoshop is rarely used: only when the final images need to be processed for some kind of special usage such as graphic creation, multi-layer work, etc.
- The various native camera RAW files (the format depends on the camera) are converted and unified to the RAW DNG format; this is just a preparation task that I execute with a batch script (sketched after this list), nothing to do with the post-processing.
- The DNG files are imported into Aperture, which is also used to manage the image archives through the application's proprietary database.
- Excluding the bad images (out-of-focus shots, shooting errors, etc.), all the images are kept in the Aperture database. The application automatically creates large JPEG previews to browse the catalogue easily.
- Post-processing: I select the images of the series that I want to use and apply correction filters where needed. It is very important to use an application that can manipulate the RAW images directly: they contain parameters such as exposure, white balance and lens information that are lost when the original shots are converted to other formats (JPEG, TIFF, PNG, etc.).
- Images are grouped into collections, rated (1 to 5) depending on their usage, and enriched with additional information (tags, location, category, etc.).
- Images are exported at the desired resolution to a final format; I usually use uncompressed JPEG or PNG.
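The preparation batch is essentially a loop like the one below; the converter command is a placeholder for whatever RAW-to-DNG tool is available (Adobe DNG Converter or an equivalent CLI), and the .CR2 extension is just an example of a native camera format:

```bash
# Hypothetical sketch: convert every native RAW file to DNG
for raw in ~/Pictures/incoming/*.CR2; do
    # dng_convert is a placeholder for the actual converter CLI
    dng_convert "$raw" -o "${raw%.CR2}.dng"
done
```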
digiKam does the same on the CM3, so it was possible to make a series of tests based on a real workflow, from the source file up to the image ready for the photo agency.
A note on the image database
One of my bigger worries is that Aperture organises its own database as a single file including all the image information together with the metadata. This makes it difficult to manage the file, which quickly grows to a considerable size (10 GB and more), to keep secure backups, and, if needed, to manipulate its content externally without corrupting it.
digiKam's database is instead based on SQLite, which brings a lot of advantages, including the option to manage the image metadata systematically with simple SQL batches without too much risk.
What interests us in this article is how a photographic application interacts with the hardware performance and limits of the CM3. Using the popular SQLite database brings several advantages from this point of view:
- The database is separate from the image processing engine, and all the applications involved in the workflow are open source.
- Tedious tasks such as updating the metadata of a large number of images can be done off-line with simple SQL batches (see the sketch after this list).
- The database processes are separated from the graphic processes.
- The architecture of the program can be easily decomposed, implementing most of the features with simple scripts (SQL, Python, Bash, etc.).
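As an illustration, an off-line batch can be as simple as the following; the database file name (digikam4.db) and the Images / ImageInformation table names are assumptions based on digiKam's schema of that era, so back the file up first and keep digiKam closed while the batch runs:

```bash
# Sketch of an off-line metadata batch against digiKam's SQLite file
sqlite3 ~/Pictures/digikam4.db <<'SQL'
-- Example: give every image in album 1 a 4-star rating
UPDATE ImageInformation
   SET rating = 4
 WHERE imageid IN (SELECT id FROM Images WHERE album = 1);
SQL
```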
These factors, among others, contribute to the flexibility of a system that can run well even on a relatively small embedded Linux machine like the CM3.
Test examples
digiKam was set up to point to the default Pictures folder, creating one album per subfolder; then a series of test images was loaded. The update of about 100 files, including the time to create the thumbnails, took less than a minute.
The image gallery below shows screenshots of some test images while applying filters.
[Gallery: screenshots of the filter tests]
- DNG loaded in the editor
- Denoise filter loading
- Zooming and image navigation is fluid
- Full-size preview after applying a filter
- Outdoor image color correction
- Applying filters
- Image ready to save and export
- Loading an outdoor image for black & white conversion
- B&W is one of the most complex filters
- Black and white film type selection
- Zoomed preview to check film grain
Test results
I made a screencast of some tests on about 20 sample images. The table below shows the average results (with a resolution of 1/10 s). Loading a DNG image (20-30 MB, 6000x4000 px) for editing takes 12-14 seconds.
Action | Task | Time (sec.) |
---|---|---|
Filter controls | Parameter changes/settings inside the UI | 3 |
Hue/Saturation filter, save settings | Apply filter changes | 10 |
Editor file save | Exit from image editor | 16 |
Black & White filter | Film simulation | 9 |
Editor file open | Load image | 10 |
Brightness / Contrast filter, save settings | Apply filter changes | 8 |
Considering that zooming and image preview navigation are fluid (as shown in the video), these values are comparable to the performance of an average workstation.
Previous article: PONF Project: Prototyping stage 1
Next article: PONF Project: Sony 24Mp Sensor Vs Raspberry PI Compute Module 3