Today's Cooker Connector post is all about the Android App. Building a solid smartphone app is crucial to my project, since it is what allows me to monitor and control the whole show remotely. This post outlines my process and discusses the decisions I made while making the app. It does not go into technical detail about how to write an Android app -- I feel that level of detail is not relevant in this format, but I will post my code to GitHub before the end of the contest so you can take a look if you are interested.
Software Development Process
The first incarnation of the app was a simple graph that would display the temperature information that the Sensor Hub was collecting. As I designed new capabilities for the system, I gradually adjusted, refined, and added features to the app, and it has evolved considerably over the course of this challenge. This approach has been beneficial because it allows me to make a short-term plan for each component in the system, and I can test every new thing as soon as I build it. Doing multiple, incremental iterations this way means that if I make an incorrect assumption, I will find out sooner rather than later, saving me valuable time that I might otherwise spend undoing wasted work.
I have learned from past experience that having a plan before writing code saves you time. When you are working with technologies like Raspberry Pi and Firebase that are intended to get you coding quickly, it is tempting to take off running before you know where you are going. So, whenever I add a new screen to the app, I first do a sketch on paper of what I want it to look like. This first step is essential, since it forces me to consider where everything goes, what the data model needs to include, and how all of the pieces of UI will interact before I actually start writing code. This road map also helps keep me on track and avoid adding unnecessary scope as I go along.
Figure 1. A few rough UI sketches.
Temperature Graph
The first feature I implemented was a temperature graph. This is a view that is central to the app, and I wanted it to work well and be flexible. In particular, I wanted these key capabilities:
- A line graph showing temperature data
- Real-time updates to the graph as new data is collected
- Zoom and pan behavior
- Ability to show multiple graphs on screen at once
Writing a component with all of these capabilities would be a large project in itself, so I decided to use an open-source library. I did some research and decided to use Android GraphView because it met all of my feature requirements and seemed to be a popular choice among Android developers.
Figure 2. Screenshot of an early prototype using GraphView.
The GraphView component showed promise, but I wasn’t happy with how it treated realtime data. Its documentation claims the ability to append data to the graph one point at a time, but it wasn’t very clear about how to make the rendering respond to this event. I couldn’t figure out the proper way to scroll and scale the graph when new data was added. I may have just been using the component incorrectly, but again, the documentation and examples weren’t clear about how to make it work for my use case.
I decided that it wasn’t worth my time to figure out the right way to use GraphView for this project. I may give GraphView another try some time in the future, but I ultimately decided to go with another graphing library called MPAndroidChart. I found MPAndroidChart to be more intuitive and capable than GraphView, and I was especially pleased with its options for scrolling, zooming, and automatically scaling my graph to fit my data. As you zoom horizontally and scroll left and right, the graph automatically optimizes the vertical zoom for the data that is currently visible. It's pretty slick.
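If you're curious what that configuration looks like, here is a minimal sketch, assuming a 3.x version of MPAndroidChart; the layout, view ID, and sample values are placeholders rather than my actual code.

import android.app.Activity;
import android.os.Bundle;

import com.github.mikephil.charting.charts.LineChart;
import com.github.mikephil.charting.data.Entry;
import com.github.mikephil.charting.data.LineData;
import com.github.mikephil.charting.data.LineDataSet;

import java.util.ArrayList;
import java.util.List;

public class GraphSketchActivity extends Activity {
    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_graph); // placeholder layout containing a LineChart

        LineChart chart = (LineChart) findViewById(R.id.temperature_chart);
        chart.setTouchEnabled(true);            // enable gestures
        chart.setDragEnabled(true);             // pan left and right
        chart.setScaleXEnabled(true);           // pinch-zoom horizontally
        chart.setAutoScaleMinMaxEnabled(true);  // re-fit the Y axis to whatever is visible

        // Placeholder data: x = elapsed seconds, y = temperature in °F
        List<Entry> points = new ArrayList<>();
        points.add(new Entry(0f, 225.4f));
        points.add(new Entry(15f, 226.1f));

        LineDataSet series = new LineDataSet(points, "Air");
        chart.setData(new LineData(series));
        chart.invalidate(); // redraw with the new data
    }
}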
Figure 3. Screenshot of some data using MPAndroidChart.
Integrating with Firebase
If you have read my post on the Firebase Realtime Database, then you may remember that it doesn’t operate like a typical database. When you query a SQL database, you usually receive one fixed-size set of results. When you query Firebase, you pass it a callback, and that callback is invoked once for every object in the result. Then, whenever new data is appended, the callback fires again for each new object as soon as it is inserted. This callback mechanism is what gives Firebase its “realtime” capability.
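If you haven't used the Realtime Database on Android before, the pattern looks roughly like this sketch. The database path and field name are placeholders, but the ChildEventListener interface is the standard one from the Firebase SDK.

import com.google.firebase.database.ChildEventListener;
import com.google.firebase.database.DataSnapshot;
import com.google.firebase.database.DatabaseError;
import com.google.firebase.database.DatabaseReference;
import com.google.firebase.database.FirebaseDatabase;

public class TemperatureListenerSketch {
    public void listen() {
        // Placeholder path to one data stream's readings
        DatabaseReference readings = FirebaseDatabase.getInstance()
                .getReference("sessions/demoSession/dataStreams/air/data");

        readings.addChildEventListener(new ChildEventListener() {
            @Override
            public void onChildAdded(DataSnapshot snapshot, String previousChildName) {
                // Called once for every existing child, then again for each new one.
                Double temperature = snapshot.child("value").getValue(Double.class);
                // ...append the value to the graph here...
            }

            @Override public void onChildChanged(DataSnapshot snapshot, String previousChildName) { }
            @Override public void onChildRemoved(DataSnapshot snapshot) { }
            @Override public void onChildMoved(DataSnapshot snapshot, String previousChildName) { }
            @Override public void onCancelled(DatabaseError error) { }
        });
    }
}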
This convenient callback mechanism is powerful, but it sacrifices efficiency. New realtime data objects will come in every few seconds, so this efficiency hit isn’t noticeable for those sparse occurrences. However, Firebase makes no distinction between new “realtime” events and existing “historical” events, so if you have a large data set already in the database, and you query it all at once, you are going to get lots and lots of callbacks in quick succession.
This problem manifested itself in my app once I started viewing hours of temperature data at a time. Since my Sensor Hub uploads a new data point every 15 seconds, these quickly stack up. A 10-hour cook generates 2,400 data points per data channel, and each one of these has to be processed by my app. When I loaded a graph of all of these objects, the Firebase callbacks were freezing my app for several seconds.
I did some code profiling and discovered that each data point I added to the graph caused the graph to measure and compute the position of every other previously-added data point. In computer science terms, this algorithm has a complexity of O(N²): as the data set grows, the processing time grows with the square of the number of points. This was obviously a problem that I needed to solve, since it made the app unstable and unusable while data was loading.
Figure 4. Result of code profiling showing severe blocking of the main thread. The short, periodic vertical stacks at the beginning and end of the timeline are frames being rendered. The wide blue and green stacks in the middle each represent one data point being processed. Since no frames can be drawn during this period, the app is frozen until all of the data has been processed.
I had to continue to use the Firebase callbacks, but I didn’t have to process them the instant I got them. Instead of updating the graph for every new data point, I created a short window of time and combined the data from any callbacks that arrived within the window into a single batch. Then, I gave the entire batch to the graph at once, so it could perform its calculations once per batch instead of once per data point. The processing time still grows quadratically, but now it is based on the number of batches instead of the number of data points. I experimented with the size of the batch, and 100 events per batch seemed to work well over a 12-hour time period.
I also offloaded the deserialization work to a background thread. When the Firebase callback comes in, the actual data is still in a raw JSON string format somewhere internal to the Firebase SDK, and it needs to be converted into Java objects in order to be sent to the graph. I added this work to the batch processor and saw great performance improvements right away.
Breaking the work up into chunks like this allows the app to draw frames in between, so the app doesn't freeze. In order to provide a smooth framerate, I had to ensure that the time the app took to process a batch was less than the duration of a single frame. Android’s target frame rate is 60fps, which means each batch could take no longer than 16 milliseconds. Once I implemented these optimizations, the graph loaded much more quickly, and it no longer froze. I also got a kind of animated loading effect, which looks good.
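For illustration, here is a rough sketch of the batching idea. My real implementation differs in the details (I batch roughly 100 events at a time, while this sketch just flushes on a simple time window), and the field names, window size, and listener interface are placeholders.

import android.os.Handler;
import android.os.HandlerThread;
import android.os.Looper;

import com.github.mikephil.charting.data.Entry;
import com.google.firebase.database.DataSnapshot;

import java.util.ArrayList;
import java.util.List;

public class SnapshotBatcher {

    /** Receives one whole batch on the main thread. */
    public interface BatchListener {
        void onBatch(List<Entry> batch);
    }

    private static final long BATCH_WINDOW_MS = 250; // placeholder window size

    private final HandlerThread worker = new HandlerThread("snapshot-batcher");
    private final Handler workerHandler;
    private final Handler mainHandler = new Handler(Looper.getMainLooper());
    private final List<Entry> pending = new ArrayList<>();
    private final BatchListener listener;
    private boolean flushScheduled = false;

    public SnapshotBatcher(BatchListener listener) {
        this.listener = listener;
        worker.start();
        workerHandler = new Handler(worker.getLooper());
    }

    /** Call this from the Firebase onChildAdded() callback. */
    public void add(final DataSnapshot snapshot) {
        workerHandler.post(new Runnable() {
            @Override public void run() {
                // Deserialization happens here, off the main thread.
                // "time" and "value" are placeholder field names.
                float seconds = snapshot.child("time").getValue(Long.class) / 1000f;
                float temperature = snapshot.child("value").getValue(Double.class).floatValue();
                pending.add(new Entry(seconds, temperature));

                if (!flushScheduled) {
                    flushScheduled = true;
                    workerHandler.postDelayed(flush, BATCH_WINDOW_MS);
                }
            }
        });
    }

    private final Runnable flush = new Runnable() {
        @Override public void run() {
            final List<Entry> batch = new ArrayList<>(pending);
            pending.clear();
            flushScheduled = false;
            // One chart update per batch instead of one per data point.
            mainHandler.post(new Runnable() {
                @Override public void run() { listener.onBatch(batch); }
            });
        }
    };
}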
Figure 5. Result of profiling after chunking the work. Notice that the frame rate is now smooth because the work is done between frames.
Cooking Session Management UI
Once I got the graphing portion working to my satisfaction, I needed the ability to separate one cooking session from another. Up until this point, I had been manually manipulating my database through the Firebase console in order to create and modify sessions. Creating sessions through the app required two new screens: one to see a list of all existing sessions, and one to create a new session.
To create the session list, I used RecyclerView, which is a standard Android component that efficiently renders a collection of items and provides scrolling and other animation behaviors. To hook the RecyclerView up to my database, I used FirebaseRecyclerAdapter, a component that comes with the FirebaseUI library. You simply give it a query specification and attach it to the RecyclerView, and it pulls the data from the database as needed.
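The hookup looks roughly like the sketch below. This is written against the older FirebaseUI 2.x-style adapter constructor (newer versions use a FirebaseRecyclerOptions builder instead), and the model class, layout, and view IDs are placeholders rather than my actual code.

import android.support.v7.widget.RecyclerView;
import android.view.View;
import android.widget.TextView;

import com.firebase.ui.database.FirebaseRecyclerAdapter;
import com.google.firebase.database.FirebaseDatabase;
import com.google.firebase.database.Query;

public class SessionListSketch {

    // Placeholder model; Firebase needs an empty constructor and public fields or getters.
    public static class Session {
        public String title;
        public Session() { }
    }

    public static class SessionHolder extends RecyclerView.ViewHolder {
        final TextView titleView;
        public SessionHolder(View itemView) {
            super(itemView);
            titleView = (TextView) itemView.findViewById(R.id.session_title); // placeholder ID
        }
    }

    public static void attach(RecyclerView recyclerView) {
        Query query = FirebaseDatabase.getInstance().getReference("sessions");

        FirebaseRecyclerAdapter<Session, SessionHolder> adapter =
                new FirebaseRecyclerAdapter<Session, SessionHolder>(
                        Session.class, R.layout.item_session, SessionHolder.class, query) {
                    @Override
                    protected void populateViewHolder(SessionHolder holder, Session session, int position) {
                        holder.titleView.setText(session.title);
                    }
                };

        recyclerView.setAdapter(adapter);
    }
}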
Figure 6. Screenshot of the session list.
The screen that creates a new session is a little more complicated. I wanted any session to be able to collect data from any number of sensors connected to any number of Sensor Hubs, so before showing the UI, I had to query the database for all known Sensor Hubs and their capabilities. (My next blog post will discuss how the Sensor Hubs advertise themselves.) Then, I display a checkbox for each available channel and let the user give a custom name to each channel, such as “Air” or “Meat”, depending on what probe is plugged into which channel.
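The one-shot query for the available hubs looks something like this sketch; the "hubs" path and "channels" structure are placeholders for however the Sensor Hubs actually register themselves.

import com.google.firebase.database.DataSnapshot;
import com.google.firebase.database.DatabaseError;
import com.google.firebase.database.FirebaseDatabase;
import com.google.firebase.database.ValueEventListener;

public class HubQuerySketch {
    public void loadHubs() {
        // One-shot read of every registered Sensor Hub and its channels.
        FirebaseDatabase.getInstance().getReference("hubs")
                .addListenerForSingleValueEvent(new ValueEventListener() {
                    @Override
                    public void onDataChange(DataSnapshot snapshot) {
                        for (DataSnapshot hub : snapshot.getChildren()) {
                            for (DataSnapshot channel : hub.child("channels").getChildren()) {
                                // Build a checkbox row and a name field for this channel here.
                            }
                        }
                    }

                    @Override public void onCancelled(DatabaseError error) { }
                });
    }
}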
Figure 7. Screenshot of the Create Session screen.
Alarms
Next was alarms. A key feature of my Cooker Connector is automatically monitoring a temperature and alerting me when it goes outside of the desired range. I designed my system so that any input channel can have multiple alarms associated with it, so that I can set an upper and lower threshold.
I needed the ability to create and display the alarms for every channel, so I set up another RecyclerView to pull from the alarm objects in the database. I also created a screen to modify an alarm. There's a temperature value input with a slider and two simple alarm trigger options: Exceeds and Falls Below.
Figure 8. Screenshot of the alarms list and edit dialog.
When the Save button is clicked, the alarm specification gets written to the database, where other components like the Sensor Hub and Cloud Function can read it. The app also saves the Firebase Instance ID in the alarm. This ID is the unique address for the specific installation of my app on my mobile phone, and the Cloud Function will send push notifications to this address.
{
"active": false,
"calibratedThreshold": 250,
"pushToken": "eUERpJqJOf4:APA91bF1g...pPD8YCq",
"type": "GREATER_OR_EQUAL"
}
Figure 9. An example of an alarm configuration object.
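Saving an alarm like the one above boils down to building that object and writing it under the alarm's path, roughly like this sketch. The helper class is hypothetical, but FirebaseInstanceId.getInstance().getToken() is the standard call for obtaining the push token.

import com.google.firebase.database.DatabaseReference;
import com.google.firebase.database.FirebaseDatabase;
import com.google.firebase.iid.FirebaseInstanceId;

import java.util.HashMap;
import java.util.Map;

public class AlarmWriterSketch {
    /** Writes an alarm object like the one in Figure 9. */
    public void saveAlarm(String sessionId, String dataStreamId, double threshold) {
        Map<String, Object> alarm = new HashMap<>();
        alarm.put("active", false);
        alarm.put("calibratedThreshold", threshold);
        alarm.put("type", "GREATER_OR_EQUAL");
        // The Instance ID token is the push address for this installation of the app.
        alarm.put("pushToken", FirebaseInstanceId.getInstance().getToken());

        DatabaseReference alarms = FirebaseDatabase.getInstance().getReference(
                "sessions/" + sessionId + "/dataStreams/" + dataStreamId + "/alarms");
        alarms.push().setValue(alarm); // push() generates a unique key for the new alarm
    }
}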
Cloud Function
When the Sensor Hub detects that an alarm condition has been triggered, it will change the "active" status of the alarm in the database. Because the phone can't be constantly listening for changes to alarm states, I use a Cloud Function to watch for those changes instead and push alerts to the phone only when necessary.
My Cloud Function needs to listen to the alarms and run whenever an alarm changes from inactive to active. Cloud Functions are relatively new, and developing one is different from developing other software. Since the whole point of a Cloud Function is that it runs in the cloud, it feels strange to not write the code in the cloud as well. In practice, you write your code on your own computer in a file and then deploy it to Firebase. This introduces a delay of about one minute into the typical write/run/evaluate loop that software developers are used to, since the code has to be completely deployed to Firebase’s production servers before you can test it. Firebase has some instructions for testing your code on your own computer before pushing it up to the cloud, but if you’re using a Realtime Database trigger like I am, you have to trigger your function manually. You can’t test the full code path. However, my function for this project is pretty lightweight, so I didn’t need to do very many deployments in order to get it working. I’d like to see Firebase provide better support for testing in general.
I’m using JavaScript for my Cloud Function. It’s basically a tiny Node.js app with dependencies on the Firebase APIs. In my main "index.js" file, I created a function called “alarmMonitor” and set it up with a Firebase Realtime Database trigger. When you set up a database trigger like this, you give it the path of the most specific object for which you want to receive updates; in my case, that is an individual alarm object. However, I want updates for every alarm on every data stream in every session, and that would be a huge number of concrete paths to watch and maintain. Fortunately, this is the perfect case for path wildcards. Instead of using a concrete key in the path, you can name one or more wildcard variables. When your function is triggered, the wildcards are filled in with the actual keys, and the values are passed to you in the params object so that you can use them. The path I'm using is 'sessions/{sessionId}/dataStreams/{dataStreamId}/alarms/{alarmId}/active', so the variables that get filled in are sessionId, dataStreamId, and alarmId, depending on which alarm was triggered.
Because Firebase Cloud Functions does so much out of the box, my actual function doesn't have much work to do. When it is called, it looks at the new value of active and compares it to the value it had previously. If the state is changing from false to true, then it creates a message and uses the Firebase Cloud Messaging API to push the message to the mobile device using the push token. I decided to include the session title and data stream title in the message, so I added another step to look up this data before building the message.
When the device receives the message, it is displayed in the notification bar and on the lock screen. You can specify the text that appears on the notification, the colors, the icon, and many other things about the notification just by setting these parameters in the message object. In the future, I may add some fancier notification handling, but for now, this notification works well.
Figure 10. Screenshot of a push notification triggered by a temperature alarm.
Servo Control
The last piece of the app was the Sensor Hub controller. I created a screen that has a slider that lets the user pick a value for the control variable. When you click the Update button, this value is written to the database in the Sensor Hub’s “control” object. The Sensor Hub observes this value and responds by telling the servo to move to the corresponding position. For now, the vent will be controlled manually through this screen. In the future, I may decide to add some kind of PID controller into the loop to make the vents open and close automatically.
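The write itself is tiny; something along these lines, where the "hubs/<id>/control/ventPosition" path is illustrative rather than my exact schema.

import com.google.firebase.database.FirebaseDatabase;

public class VentControlSketch {
    /** Writes the slider value (0-100% open) to the hub's control object. */
    public void setVentPosition(String hubId, int percentOpen) {
        FirebaseDatabase.getInstance()
                .getReference("hubs/" + hubId + "/control/ventPosition")
                .setValue(percentOpen);
    }
}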
Figure 11. Screenshot of the control screen.
I did a test cook the other night to see how my servo setup would perform when the smoker was hot. I cooked some small pieces of chicken at a high temperature, and the vent opened and closed nearly flawlessly in response to my app! The only slight issue was that when the air intake vent is in the fully-closed position, it seems to have more friction, which the servo had trouble with. When I tried to instruct the servo to move from 0% open to any position below about 50%, the servo struggled to overcome this friction, and the vent didn’t fully move to the desired position. However, if I instructed it to move to 100% open, the servo seemed to move with more speed and was able to overcome the friction. After moving it to the fully-open position, I could then tell it to return to the near-zero position, which seemed to work. I think I'm going to add some logic to the Sensor Hub code that will do this automatically if the position is less than 50% open.
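The logic I have in mind would look something like this sketch, where moveServo() stands in for the code that actually drives the servo on the Sensor Hub, and the delay is a guess.

public class VentControlLogic {
    private static final int STICKY_RANGE_PERCENT = 50; // below this, the closed vent tends to stick

    /** Planned workaround: swing fully open first to break the friction,
     *  then settle on the requested position. */
    public void moveTo(int targetPercentOpen) throws InterruptedException {
        if (targetPercentOpen < STICKY_RANGE_PERCENT) {
            moveServo(100);    // the full swing overcomes the extra friction near closed
            Thread.sleep(500); // rough guess at how long the swing takes
        }
        moveServo(targetPercentOpen);
    }

    // Stand-in for the real servo-driving code.
    private void moveServo(int percentOpen) {
        // ...PWM output happens here...
    }
}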
Figure 12. Photos of my smoker with the attached Sensor Hub.
Below is a video recording of the app in action! Watch for a demo of:
- Loading the graph of the internal air temperature of the smoker during my test cook
- Creating a new cooking session
- Adding alarms to the data channels
- Starting the cook and seeing data come in from the Sensor Hub
I’m sure that I will make a few more modifications to my app over the next few weeks, but for now, it’s looking feature-complete! My next post will be out shortly. It’s about the Android app that runs on the Raspberry Pi that powers the Sensor Hub. Almost done!