Foreword
I guess my approach to this challenge was a bit different from the other competitors'. I didn't set out to solve specific problems; instead, I wanted to implement my current system the Way I Want To. Along the way, it doesn't hurt to add sensors like the ones provided in the EnOcean sensor kit.
Even if I didn't achieve everything I wanted to, I sure did accomplish a lot. And best of all, with hours to spare before the deadline.
Hardware
From the get-go, my plan was to create the hardware I need for my current system. This meant designing an add-on board for the Pi. The design criteria are as follows:
- Reliability
- MCU works as a watchdog for the Pi and resets it if a "heartbeat" isn't received within a set amount of time (see the sketch after this list)
- All outputs go to a known state (same state as without the system or power) when reset or powered down
- This allows me to set relays by selecting NC/NO behavior
- Interfaces
- 1-Wire
- Has been the heart of my solutions so far
- EnOcean Pi
- Extra-long headers are needed for this; the EnOcean module stacks on top of the new add-on board
- Power supply
- 12-24 V
- No need for multiple supplies; the same voltage powers the Pi, the add-on board and the outputs
- Reduces wiring clutter and interference caused by DC/DC converters
- Real-time clock
- For those (hopefully rare) cases when no Internet connection is present
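To give an idea of how light the Pi's side of the watchdog is, here's a minimal sketch of a heartbeat sender. This is only an illustration; the GPIO pin, the interval and the toggle-based signalling are assumptions, not the actual firmware interface.

import time
import RPi.GPIO as GPIO

HEARTBEAT_PIN = 17       # assumption: any free GPIO wired to the MCU
HEARTBEAT_INTERVAL = 10  # seconds; must be shorter than the MCU timeout

GPIO.setmode(GPIO.BCM)
GPIO.setup(HEARTBEAT_PIN, GPIO.OUT)

state = False
while True:
    # Toggle the pin; if the MCU sees no edge within its timeout, it resets the Pi
    state = not state
    GPIO.output(HEARTBEAT_PIN, state)
    time.sleep(HEARTBEAT_INTERVAL)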
These functions are (mostly) designed to be independent, so the whole board doesn't need to be assembled. For example, the board can be used as a 1-Wire IO board without the RasPi. This allows the same board to serve, without full assembly, as an additional IO board for the whole system, as an IO board for other platforms and so on.
I have designed the next revision of the board, and the total cost for one (fully assembled) unit comes to about 44 euros (VAT 0%). This brings the total cost of a system with a Raspberry Pi B+, EnOcean Pi and the add-on to about 95 euros. In my opinion, that isn't much for a fully capable home automation system.
Schematic
You can download the schematic for RevA2 here. I haven't ordered the PCBs for it yet, nor have I tested it completely. The main thing that has changed is the way outputs are controlled; they now use MOSFETs instead of a Darlington array. This allows more current to be driven on the outputs: it should now handle up to 1.2 A per channel.
What has been tested so far: the RTC, the 1-Wire connections, the power supply (including resetting of the Pi), the I2C level converters and so on.
PCB
As I mentioned, I haven't ordered the new PCB yet. The following images show the system running with the first-revision PCB and some renderings of RevA2. Once I feel confident from the software side that I have everything I need, I'll be ordering the next batch of PCBs. So far, I've found one lucky "mistake" in the RevA2 PCB design: what were supposed to be outputs can quite easily be converted to inputs, with both pull-ups and pull-downs, by a simple solder bridge.
Image 1: RevA1 running, with EnOcean Pi attached
Rendering 1: RevA2 PCB from top
Rendering 2: RevA2 PCB from bottom
Software
As I've always done, I'm developing my own software. This is partly because I'm too stubborn to learn the stuff coded by others, and partly because I want to do things the way I want. Also, when the code is written by myself, I know who to blame. Of course, this isn't to say I do everything by myself; I use libraries, frameworks etc. written by others. But only the parts I like and need.
EnOcean for Python
I started the project by writing a Python interface for the EnOcean serial protocol. This shouldn't be limited to the EnOcean Pi, but so far it's the only module I've tested. The library itself was quite easy to develop, thanks to very good documentation. The only thing I have a problem with is that the XML file containing the EnOcean Equipment Profiles isn't public. Without it, I can't create a library that handles all the device profiles, at least not without a huge effort.
I'm not that happy with the library in its current form though, so changes are coming. I've already done some of them, but haven't released the new version of the library just yet. I'm waiting until I'm happy with it; this way there won't be so many changes in the future...
One of the main changes is to divide the library into subclasses, so one can easily determine by class type what to expect. This would also allow the receiver to know what fields to expect from the message. Another major change is the switch to an event-based solution for handling the messages. At the moment, messages must be fetched from the queue by the application using the library. Instead of this, I'm planning on implementing a "handler" structure; this way there could be multiple handlers listening for the packets, and all handlers would receive the packet instead of just one getting it from the queue. The listeners will be attached with code something like this:
s = enocean.SerialCommunicator()
s.attach(packet.Radio, radio_listener)
s.start()
This would attach the function "radio_listener" to radio packets. Of course, there can be multiple listeners for one packet type, and each of them will receive the packet. After this, it's up to the code in the handler to determine how to handle it.
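To illustrate the idea, here's a rough sketch of what the dispatch side could look like. The class and method names below are assumptions made for illustration, not the final library API.

from collections import defaultdict

class PacketDispatcher(object):
    def __init__(self):
        self.listeners = defaultdict(list)

    def attach(self, packet_type, listener):
        # Multiple listeners can be attached to the same packet type
        self.listeners[packet_type].append(listener)

    def dispatch(self, packet):
        # Every listener attached to a matching packet type receives the packet,
        # instead of just one consumer popping it from a queue
        for packet_type, listeners in self.listeners.items():
            if isinstance(packet, packet_type):
                for listener in listeners:
                    listener(packet)

The communicator would then call dispatch() for every packet it reads from the serial port.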
Backend
I chose the excellent Django as the base of this project. On top of that, I chose Django REST framework to provide the API functions, django-websocket-redis for real-time communication to clients and Celery to act as the task queue.
Following is the current UML of the models, created with the help of the excellent django-extensions.
Image 3: UML of current software
As one can see, the whole software at the moment is Device-based. Every object is a Device. From there, it's divided into two main groups: a Device is either a Sensor or an IO. After that it does get a bit complicated, but basically that's all you need to know when designing a user interface etc. There's of course still a lot to come; I'm not even close to finished. I haven't even implemented the most basic functions of the current system, which include cameras, AD converters, tasks, home/away states etc. Most of the stuff will still be derived from Device, but some additions will come; Devices will have different states depending on the system state, and so on.
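As a rough illustration, the core of that hierarchy could be expressed with Django's multi-table inheritance along these lines (the field names here are assumptions for illustration, not the actual model definitions):

from django.db import models

class Device(models.Model):
    # Every object in the system is a Device
    name = models.CharField(max_length=100)
    address = models.CharField(max_length=32, unique=True)

class Sensor(Device):
    # A Device that reports measurements
    pass

class IO(Device):
    # A Device with a controllable/readable state
    state = models.BooleanField(default=False)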
Per my initial plan, everything had to be "automatic": devices are registered the first time they're seen, and so on. I uploaded a brief video of this in action in an earlier post: Forget Me Not - 8 - It's alive!
As the Pi is, rmmm, "fairly modest" in its processing capabilities, the software is designed to be as fast as possible. This means that basically every action is event-based and nothing is done unless there's a need for it. A basic IO change takes less than 300 ms from pressing a button in the UI to the actual change in hardware state and the report back to the UI. At the moment, it takes about 600 ms to fetch all IO (11), TemperatureSensor (2) and TemperatureSensorLog (~1800) objects through the API. This is all that is needed to create a full UI, with a log from the last 24 hours. If the log is dropped, it takes about 150 ms.
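For reference, fetching that UI data is just a few plain HTTP GETs, something like the following (the host and endpoint paths below are placeholders, not the actual API URLs):

import requests

BASE = 'http://raspberrypi:8000/api'  # placeholder host and path

ios = requests.get(BASE + '/io/').json()
sensors = requests.get(BASE + '/temperaturesensor/').json()
# The log fetch dominates the ~600 ms figure; leaving it out gets to ~150 ms
logs = requests.get(BASE + '/temperaturesensorlog/').json()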
API
At the moment, only three of those models are visible outside the program via the web API: IO, TemperatureSensor and TemperatureSensorLog. This is about to change though, as I noticed a design flaw: there's no need to expose TemperatureSensor etc., just Sensor and SensorLog. After that, it's up to the UI to decide how to group them.
This is because a sensor is always one and exactly one Device. If a sensor includes multiple measurements, they are added as separate Devices.
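With Django REST framework, exposing the generic Sensor model could look roughly like this (the app name, fields and routing below are assumptions, not the actual code):

from rest_framework import routers, serializers, viewsets

from devices.models import Sensor  # the app name is an assumption

class SensorSerializer(serializers.ModelSerializer):
    class Meta:
        model = Sensor
        fields = ('id', 'name')  # assumed fields, for illustration only

class SensorViewSet(viewsets.ModelViewSet):
    queryset = Sensor.objects.all()
    serializer_class = SensorSerializer

router = routers.DefaultRouter()
router.register(r'sensor', SensorViewSet)
# router.urls would then be included in the project's urls.py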
Following are a few screenshots of the development API view, which is provided by Django REST framework.
Image 4: TemperatureSensor API
Image 5: IO API
User interface
The user interface of a home automation system basically consists of two interfaces: a physical one (easy, thanks to EnOcean) and a virtual one. Additional physical user interfaces could be developed with the help of Raspberries, Arduinos, Launchpads etc. So far, I've focused on the virtual UI.
Keeping with the idea of the API, the user interface is basically a separate project, which can be deployed on practically any device. This can be a PC, laptop, tablet, mobile phone or even another Raspberry Pi. Thanks to the UI being based on HTML/CSS/JavaScript, it is very easily ported and looks roughly the same on every platform.
I actually set up the UI today to run on basically every mobile device out there with the help of Intel XDK, and all I needed to do was add 6 lines of code. No removals or replacements, just 6 added lines. Following is the UI running on a desktop browser and in the emulator inside the XDK (Nexus 4 and iPad). All systems are fully functional, and the UI updates simultaneously when I press a button (physical or in the UI), thanks to websockets. I have also tested the application on my phone and it works as planned.
Image 6: UI running on Chrome
Image 7: UI running on an emulated iPad
Image 8: UI running on emulated Nexus 4
Final summary
This has been Awesome.
For once, I haven't had to worry about a thing when developing something. This provides a great opportunity to create something Awesome. For this opportunity I'd like to thank all parties involved; not only the sponsors, but the other competitors as well, for providing ideas and different kinds of use cases to which the system should adapt.
I calculated the cost of the "bare essentials" needed to do a project like this. It added up to about 300 euros. The challenge covered about 260 of those euros by supplying components and devices. Without this challenge, I never would have even thought about testing something like this. Thanks to the challenge, I'm sure I'll use the same setup in the future.
Most of all, I'd like to thank my friends, who have not only listened to my worries, but also provided me with useful information and much-needed help and ideas.
--
I learned a lot during this challenge. The most important thing was that there are reliable wireless sensors out there, and even better, ones with an open protocol. Thanks to the oscilloscope, I finally learned to take measurements that are useful to me, the developer (after maaaany years at a university). Most of the exercises there focus on "how to measure", not "what to measure and why". Also, I finally have the tools to measure the effects of power failures etc. and can design the circuitry to handle them.
At the moment, my plan is to release sources and KiCAD project files "once I feel ready to".
For the time being, the following is released:
Everything is released "as is", without installation instructions etc. I haven't had the time to perfect and optimize the code or the instructions.