Hello!
Busy times here in the household! Wife is very "excited" to finally have a range hood after almost a year without one.
Edit: Updates on scope of project -
Goal: a smart, connected range hood to send images of what's cooking and to alert the user if something doesn't seem right.
First post - Smart Range Hood - Pi Chef Challenge Blog post #1
I am planning on integrating this with my home automation system, but that raises a lot of questions about how exactly it ties in: which components hold the intelligence, and how things behave during different failure modes.
My automation system started with a simple Arduino Uno about 5 years ago, when I stuck a DHT22 sensor outside to see the temperature. Everything ran through a stand-alone C# application with a SQL database. Now it looks something like this:
The server in the middle runs OpenHAB and my own C# application, which are the heart of the system. I've recently added Node-Red and Homebridge on a Raspberry Pi. Now, with the addition of another appliance with greatly expanded functionality, I need to think a lot harder about how things behave during failure modes.
My friend at work complained on Friday about his Google Fiber going down for 16 hours: he couldn't use Alexa, couldn't turn his lights on or off, and couldn't control his TV. He said that his girlfriend was very upset about this... The critical piece here is that these all run as cloud services. Even the Nest thermostat works this way to reach your phone. Each individual device may not do much of its own processing - it just reads its sensors, sends the data off to the cloud, and hopes for the best. To perform an action, it must receive a command back from the cloud.
In Node-Red flow terms, it would look something like this:
The device reads data in from its sensors, and reports them up to the cloud service.
To carry out an action, it may require cloud connectivity -
And if you have a mobile device, it complicates things further, since the phone has to reach out to the cloud just to get back to a device sitting 15 feet away from you on the same Wi-Fi network:
What would be ideal is for the device to function with a high level of autonomy and simply report back to the server with certain data, like on/off status or sensor readings, for use in other systems (don't run the furnace if presence detection shows that no one is home). This means that actions should not be fed back to a server for interpretation before receiving a command to act. A good example is a smart light switch. What is the exact action that happens when the switch is activated? Some would recommend sending the MQTT broker a message that "button #76 has been activated" and then doing basically nothing. Any time the module hears a request from the server to activate the relay, it obliges. In other words, the action of hitting the switch does not directly control the relay that activates the light. Granted, an abstraction layer can be very beneficial, but it can also keep a critical device from functioning autonomously if, say, it can't reach the server. For the end user, this looks like "I hit the switch and nothing happened," or "my phone still says 'connecting' and I can't get it to work," or "there is a 2 second delay from hitting the switch until the lights come on."
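To make that concrete, here's a minimal sketch of that "dumb switch" pattern in Python, assuming paho-mqtt and RPi.GPIO. The broker address, topic names, and pin numbers are all made up for illustration:

```python
# Sketch of the cloud-dependent pattern: the button only reports the press,
# and the relay moves only when the server sends a command back.
import paho.mqtt.client as mqtt
import RPi.GPIO as GPIO

BUTTON_PIN = 17   # hypothetical wiring
RELAY_PIN = 27

GPIO.setmode(GPIO.BCM)
GPIO.setup(BUTTON_PIN, GPIO.IN, pull_up_down=GPIO.PUD_UP)
GPIO.setup(RELAY_PIN, GPIO.OUT, initial=GPIO.LOW)

client = mqtt.Client()

def on_button(channel):
    # Report the event and do nothing else -- no local action at all.
    client.publish("home/switch76/event", "pressed")

def on_message(client, userdata, msg):
    # The relay only moves when the server says so.
    if msg.topic == "home/switch76/relay/set":
        GPIO.output(RELAY_PIN, GPIO.HIGH if msg.payload == b"ON" else GPIO.LOW)

client.on_message = on_message
client.connect("192.168.1.10")        # broker on the LAN (or out in the cloud)
client.subscribe("home/switch76/relay/set")
GPIO.add_event_detect(BUTTON_PIN, GPIO.FALLING, callback=on_button, bouncetime=200)
client.loop_forever()                 # if this connection drops, the light is dead
```

Note that if `connect()` fails or the broker goes away, the button does nothing - exactly the failure mode described above.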
Rather, for something more mission critical, the device needs some level of autonomy, as in the Node-Red flow shown here -
The action of hitting the switch flows through the basic logic in the middle, then directly to the relay output. Along with firing the relay, the controller *also* reports its current status back over MQTT. No waiting for a server connection, no delays in communication - nothing. Just input --> logic --> output. Additionally, if a server request (MQTT in this example) does arrive, it hits the logic block and then goes out to the relay. So we clearly still have the ability to use external sources and report statuses, but we aren't a slave to server connectivity. The flow shown above is the general plan that I intend to use for the lights on the range hood.
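Here's a sketch of that autonomous version, using the same hypothetical pins and topics as above. The relay acts immediately on the button press, and MQTT is treated as just another input into the logic block and a best-effort output from it:

```python
# Sketch of the autonomous pattern: input --> logic --> output, with MQTT
# as an additional input/output that the relay never waits on.
import signal
import paho.mqtt.client as mqtt
import RPi.GPIO as GPIO

BUTTON_PIN = 17   # hypothetical wiring, as before
RELAY_PIN = 27

GPIO.setmode(GPIO.BCM)
GPIO.setup(BUTTON_PIN, GPIO.IN, pull_up_down=GPIO.PUD_UP)
GPIO.setup(RELAY_PIN, GPIO.OUT, initial=GPIO.LOW)

client = mqtt.Client()
light_on = False

def set_light(state):
    """The single logic block: every input path funnels through here."""
    global light_on
    light_on = state
    GPIO.output(RELAY_PIN, GPIO.HIGH if state else GPIO.LOW)  # act first...
    client.publish("home/rangehood/light/state",              # ...report second;
                   "ON" if state else "OFF")                  # best effort only

def on_button(channel):
    set_light(not light_on)            # local toggle, no server round trip

def on_connect(client, userdata, flags, rc):
    client.subscribe("home/rangehood/light/set")

def on_message(client, userdata, msg):
    set_light(msg.payload == b"ON")    # server requests use the same logic

client.on_connect = on_connect
client.on_message = on_message
GPIO.add_event_detect(BUTTON_PIN, GPIO.FALLING, callback=on_button, bouncetime=200)
client.connect_async("192.168.1.10")   # non-blocking: boots fine with no broker
client.loop_start()                    # background thread handles reconnects
signal.pause()                         # the callbacks do all the work
```

The key design choice is `connect_async()` plus `loop_start()`: the device comes up and the button works even if the broker never answers, and the background thread quietly reconnects whenever it can.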
For the fan control, it is logically very similar but physically a little different. The fan has multiple windings, and the switch on the donor unit requires "LOW" to be activated before the "MED" or "HIGH" fan speeds will work. This will be implemented in the logic block in the middle. I still haven't decided on the exact number of physical buttons I will bring out to the end user.
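That interlock could live as a small function in the logic block. The relay pin assignments below are assumptions; the rule itself (LOW must be energized for MED or HIGH to work) comes from the donor-switch behavior described above:

```python
# Sketch of the fan-speed interlock, following the donor unit's switch:
# the LOW winding must be energized for MED or HIGH to do anything.
import RPi.GPIO as GPIO

# Hypothetical assignments on the 4-channel relay board
RELAY_LOW, RELAY_MED, RELAY_HIGH = 5, 6, 13

GPIO.setmode(GPIO.BCM)
for pin in (RELAY_LOW, RELAY_MED, RELAY_HIGH):
    GPIO.setup(pin, GPIO.OUT, initial=GPIO.LOW)

def set_fan(speed):
    """Logic block for the fan: speed is 'off', 'low', 'med', or 'high'."""
    GPIO.output(RELAY_MED, GPIO.LOW)
    GPIO.output(RELAY_HIGH, GPIO.LOW)
    # LOW is on for every running speed -- this is the interlock.
    GPIO.output(RELAY_LOW, GPIO.HIGH if speed != "off" else GPIO.LOW)
    if speed == "med":
        GPIO.output(RELAY_MED, GPIO.HIGH)
    elif speed == "high":
        GPIO.output(RELAY_HIGH, GPIO.HIGH)
```

Because every caller (button or MQTT) goes through `set_fan()`, there is no way to request MED or HIGH without LOW already being on.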
All of that being said, here is the proposed method for how the system will work:
The range hood will hold the Raspberry Pi and all the sensors it needs to function by itself. When the user presses a button on the front of the unit, it will immediately respond as commanded. In the background, it will tell the server (MQTT broker) that a status update has occurred. If the user wants to activate the system via the cloud or a mobile device, the flow is reversed: OpenHAB gets the request, publishes to MQTT, then the range hood's logic block receives the request and obliges.
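One way to pin that two-way flow down is a small topic convention. Everything below is a hypothetical naming scheme for illustration, not settled design:

```python
# Hypothetical MQTT topic scheme for the hood (names are placeholders).
# "state" topics are published by the hood after it has already acted;
# "set" topics are requests from OpenHAB that feed into the same logic block.
TOPICS = {
    "light_state": "home/rangehood/light/state",  # hood -> broker, after local action
    "light_set":   "home/rangehood/light/set",    # OpenHAB -> hood, remote request
    "fan_state":   "home/rangehood/fan/state",    # off/low/med/high, current speed
    "fan_set":     "home/rangehood/fan/set",      # requested speed
}
```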
There was a good Q&A session with Jon Oxer from SuperHouse recently where he talked about how this all works and what the end-user experience ends up being in complex systems where many things can go wrong. He leaned toward having one central server for everything mission critical, so there is only one component that can fail, versus having distributed intelligence and many things that can fail. I think that for non-mission-critical items, and wherever possible, devices should be able to function on their own as well as within a complex ecosystem.
https://www.youtube.com/user/SuperHouseTV/featured
Physical connections of components:
This may be the most controls-intensive work of the operation. I plan to prototype on a breadboard first, then move to perfboard if needed. If all goes well, I would like to have a custom board fabricated through an online service.
As far as the suite of sensors and how they physically connect to the Pi, here is what I have to date. I have specifically not researched this before the start of the contest.
| Sensor / module | Data | Communication protocol | Input/output location | Receiving method |
|---|---|---|---|---|
| DHT22 | Temperature/humidity | One-wire | GPIO (pin TBD) | Node-Red plugin |
| MQ air quality sensors | Flammable gas, methane, hydrogen | Analog | Requires ADC over I2C bus | TBD |
| Grid-EYE | 2D heat map | Digital | SPI bus (1 GPIO required) | TBD |
| Pi Cam | JPG image | Digital | I2C (proprietary) | Node-Red plugin |
| 4-channel relay board | Output | Digital | GPIO (4 required) | n/a |
| Pi HAT touchscreen | Video output, five button inputs | Digital | SPI (1 GPIO required) | TBD |
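Since the Pi has no analog inputs, the MQ sensors will need an external ADC. Here's a minimal polling sketch, assuming an ADS1115 on the I2C bus and Adafruit's Adafruit_ADS1x15 library; the channel wiring and gain are assumptions, and this is one option among several:

```python
# Sketch: polling an MQ gas sensor through an ADS1115 I2C ADC.
import time
import Adafruit_ADS1x15

adc = Adafruit_ADS1x15.ADS1115()   # default I2C address 0x48
GAIN = 1                           # +/-4.096 V full-scale range

while True:
    raw = adc.read_adc(0, gain=GAIN)   # MQ sensor wired to channel A0
    volts = raw * 4.096 / 32767        # convert signed 16-bit reading to volts
    print("MQ sensor: raw=%d  %.3f V" % (raw, volts))
    time.sleep(1)
```

A reading like this could be published to MQTT on an interval, or pulled straight into Node-Red, with alert thresholds living in the hood's own logic block so gas alarms don't depend on the server.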
Design Phases:
I am planning this general outline to ensure completion by the due date. The longest-lead item will be the sheet metal fabrication, so that must happen up front. I have the general design close to completion, and an upcoming post will talk about that.

- High-level design: mostly complete. I think I have enough GPIO, and now that I know how each module talks, I can validate that they don't use up the same resources. I think I will be able to get most of the information directly into Node-Red, meaning less work to do on the back end.
- Mock-up: once the sheet metal is ordered (or perhaps as part of that planning), I will start the fun part of mocking up all the components and making sure they can all talk. I'll get NOOBS running, install Node-Red and all required plug-ins, then start trying to pull data from the various sensors.
- Internal layout: after that, I'll lay out the inside of the hood so there are places to mount all the stuff. This will result in a 3D-printed mounting plate or series of mounts.
- Circuit board: hopefully at this point I will have all the required circuitry decided and can work on getting one fabricated.
- Installation: once all the final stuff is here, I can work on full implementation and begin installing everything. I may get into this early if I put the donor hood up into location while waiting on the new sheet metal.