Raspberry Pi 2: We're Giving Away 50 Units!!! - Review


RoadTest: Raspberry Pi 2: We're Giving Away 50 Units!!!

Author: ejohnfel

Creation date:

Evaluation Type: Independent Products

Did you receive all parts the manufacturer stated would be included in the package?: True

What other parts do you consider comparable to this product?: Wandboard Quad, Udoo Quad, Raspberry Pi B+, Banana Pi Pro

What were the biggest problems encountered?: Mostly the software stack, in particular Logstash.

Detailed Review:

When I heard about the Raspberry Pi 2, at first I was excited... then a bit annoyed. I had been searching for a quad-core device for about a month and had already settled on the Wandboard Quad; the Pi 2 was announced literally about a week after I purchased the Wandboard. Of course, the project I was working on ran into a minor glitch on the Wandboard: one piece of software in the stack I was using did not have an ARM v7.1 version of a library it depended on. I originally had the project working on a Raspberry Pi Model B+, but the software stack I was using was heavily overloading the B+.

 

What to do?

 

Well, a Quad-Core Pi might certainly help!

 

Thanks to Element14's RoadTest program, I got a chance to find out if the Pi 2 could fit the bill.

 

When I got the board, I started out the way most hobbyists would: I simply ran the Pi 2 through the paces of the important things I had my B+s doing.

 

Let's face it: when in Rome, you have to visit the Coliseum, and when trying out a more powerful Pi, you have to run OpenElec at least once (or whatever your favorite XBMC derivative happens to be).

 

While I was not expecting too much of a difference, I was pleasantly surprised. The B+ I was using had hesitations and freezes, while the Pi 2 was clear sailing on video. I was quite happy about that.

 

But, onto the real project.

 

Basically, my project involves taking in data of all kinds, but mostly temperature data, system load factor and some syslogs. The syslogs, of course, aren't just run-of-the-mill syslogs. I have a series of programs that monitor an array of ASICs, 24 to be exact, that are crunching between 384M and 1.6B vector calculations a second (depending on the ASIC). They are controlled by another Raspberry Pi in a custom-built chassis with more fans than I care to admit for keeping the whole rig cool. The ASICs basically process large sparse matrices (collections of vectors). They do vector addition, subtraction, multiplication, division, row-echelon reductions, some hashing and occasionally some cryptography work. The purpose of all this is (mostly) running simulations on a set of engineering problems. The ASIC rig isn't mine; I just manage it.

 

Needless to say, the ASICs get hot, they aren't cheap and temperature matters. In addition, sometimes an ASIC will go brain dead, requiring a complete power down to get it back up and running.

 

Performance information regarding the ASICs' power usage, temperature and calculations per second is periodically sampled and entered into the syslogs at various facility codes (local5 through local7), and those logs are streamed via UDP across the network to a multitude of servers, and also, originally, a Raspberry Pi B+.

 

The B+, of course, doesn't just store the logs... it pulls the data from the logs and transfers it to an indexing database, which provides the backing store for a visualization front end. The front end allows me to get alerts and watch the performance over time.

 

So half the project runs the ASICs, manages rebooting when needed and powers down the array if it gets too hot, while the second half essentially gives me and a few others visuals to look at to judge the system's performance.

 

In the end, the Pi B+ couldn't handle getting the data, consuming the data, indexing the data and then visualizing it.

 

Perhaps the best way to describe the monitoring part of the project is as a web-based performance dashboard.

 

The software used in creating the dashboard(s) comes as a stack, in particular the E.L.K. stack, also known as the Elasticsearch, Logstash and Kibana projects.

 

ELK is a wonderful stack; I use it on a number of other projects. However, if you are not distributing the individual pieces of the stack across several commodity x86 or x64 servers with a good amount of memory, processor and storage... it can be a bit tricky to rig up, particularly if you plan on placing all of it on one single-board computer, like I am.
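For readers who haven't rigged this up before, a minimal single-board Logstash pipeline along these lines wires a syslog input through a parse filter to an Elasticsearch output. This is a sketch only, not the configuration used in this review; option names vary between Logstash versions, and the port and host here are illustrative.

```
input {
  udp {
    port => 5514
    type => "syslog"
  }
}

filter {
  grok {
    match => [ "message", "%{SYSLOGLINE}" ]
  }
}

output {
  elasticsearch {
    host => "localhost"
  }
}
```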

 

For the RoadTest, I took the whole process in steps.

 

  1. Do some basic performance testing on the Pi 2 (a la OpenElec and Octane).
  2. Rig up the existing log streams to point to each Pi in the test set, and add a new indicator (system load for each test Pi).
  3. Get Elasticsearch nodes running on the two test Pis, the Model B+ and the Pi 2.
  4. Get Logstash running on the same Pis.
  5. Get Kibana up and running, with a dashboard including the appropriate metrics, on the same Pis.

 

So, here are the results...

 

Step 1 : Performance

 

As I mentioned, the OpenElec test went well; it was awesome. I have seen a few other Raspberry Pi 2 performance tests. For some reason everyone overclocks their Pis; I chose not to do this. In addition, I did not use sterile testing conditions. Each Pi had the exact same software setup, but what I mean by "sterile" is that I wasn't too concerned about using up extra machine cycles during testing (i.e., I left X running, I had TightVNCServer running in the background, etc.). My only goal here was to eyeball the performance differences, and even with my less than sterile conditions, the difference was noticeable. But perhaps Google's Octane test running in Chromium can put some numbers to it... first, the Model B+...

 

Pitiful, really; my PC rated at over 7.5K.

 

Next, the Pi 2...

 

Slightly more than double the performance. I have seen overclocked Pi 2s under sterile testing conditions hit slightly over 750.

 

That kind of makes the Pi 2 the overall winner this round.

 

Step 2 : Rigging up the Logs

 

This was easy; I merely pointed the ASIC grid system's syslogs to the two Pis and checked the logs to see if they were being received. No problem there...

Temp checked out...

Performance monitor checked out too.
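The review doesn't show the forwarding rules themselves, but on an rsyslog-based sender, pointing the three facilities at both test Pis comes down to a pair of selector lines like these (a sketch; the hostnames are the ones named later in this review, the port is the syslog default, and a single "@" means UDP):

```
# /etc/rsyslog.d/asic-forward.conf (sketch)
local5.*;local6.*;local7.*    @alphapi:514
local5.*;local6.*;local7.*    @xenapi:514
```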

 

Step 3 : Elasticsearch

 

One thing is for certain: I have never had much trouble with Elasticsearch. About the only gotcha with it is making sure that before you execute it, you raise the resource limits: a "ulimit -l unlimited" lets it lock its memory, and a generous open-file limit ("ulimit -n") matters because Elasticsearch uses a lot of file handles. Anyhow, as the output shows, done and done...

Venus is the primary Elasticsearch server, alphapi is the Model B+ and xenapi is the Pi 2.
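The limits can also be sanity-checked from Python before launching the JVM. This is an illustrative pre-flight check of my own, not something from the review; the 65536 threshold is a common recommendation, not a hard requirement.

```python
import resource

# Check the two limits Elasticsearch cares about: open file handles
# (what "ulimit -n" adjusts) and lockable memory ("ulimit -l unlimited").
def check_es_limits(min_open_files=65536):
    soft_files, _ = resource.getrlimit(resource.RLIMIT_NOFILE)
    soft_lock, _ = resource.getrlimit(resource.RLIMIT_MEMLOCK)
    return {
        "open_files_ok": soft_files >= min_open_files,
        "memlock_unlimited": soft_lock == resource.RLIM_INFINITY,
    }

print(check_es_limits())
```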

 

Step 4: Logstash

 

Oh boy... this one took a while and is not so stable. While I am not generally a fan of Java (sorry, Java fans; I'm old-school C/C++, nothing personal), both Elasticsearch and Logstash are Java based; in fact, so is Apache Lucene, which Elasticsearch is built on. While I do not wish to debate the efficiency of Java code versus natively compiled code, I must say both Elasticsearch and Logstash do nicely as they are. No complaints here from the old-school corner of the room.

 

However, I have a bone to pick with Logstash. While I have found Logstash to be, well, better than its other open source counterparts, one fact about it has stymied me: if you use the latest version, it doesn't work so well in the OpenJDK 7 environment on ARM architectures. The solution is to move to the Oracle Java 8 JDK; ok, no problem there.

 

Logstash is... basically Java and JRuby. I've never understood the need to run Ruby in a Java environment... why not just run Ruby? BUT, and this is a big but, there is no JFFI under Oracle Java 8 or with the ARM versions of JRuby (as of this writing).

 

After a fashion, many nights of banging my head against the table, and no prior JRuby or Ruby experience, I managed to get an alpha version of JFFI working by heavily modifying the JFFI code for ARM. From a practical point of view, I am not a Ruby or JRuby expert, nor am I what you would call a Java programmer, so I took many shortcuts and liberties with the JFFI code. It works... mostly, but it is prone to crashing.

 

I am now seeking the help of a JRuby expert to iron out the JFFI code. But for the purposes of the RoadTest, the proof of concept is sufficient for now.

 

Step 5: Kibana

 

Really... it just isn't my month.

 

The latest version of Kibana no longer requires Apache and is an application of its own. Which is great, but at the moment it's an x86/x64 application, with no ARM counterpart as yet.

 

Again, long term, it seems like I will have to get the source from GitHub and hammer away at making an ARM version. However, for the purposes of the RoadTest, I can skip Kibana and just use the install on Venus.

 

In the end, the two items dragging down the Model B+'s performance were Elasticsearch and Logstash... mostly Logstash. Kibana/Apache didn't add much load to the Model B+, so I figured the differences between the Model B+ and the Pi 2, with regards to Kibana, would be minimal.

Here you can see that the Kibana instance now sees all the data from the two test Pis. Yay! This is the raw graph; it needs some adjusting. Here we are looking at temperature spikes over time. The final tweaked Kibana dashboard will have 26 temperature graphs: 24 ASIC scales, one CPU temperature scale and one ambient air temperature scale from a sensor inside the chassis.

 

 

The Marvel console, a monitoring tool for Elasticsearch in its own right, shows that the three nodes of the RoadTest cluster are operating... we have lift-off!

 

 

And finally, we have the usage graph on today's general event index (as of this writing anyway).

 

Of course, the last big item is: how are the two Pis faring under load? The Model B+, as I expected, hit a load average of 1, meaning it was busy all the time. The Pi 2 surprisingly peaked at 1.43; I was expecting it to be closer to 2.0. At any rate, the Pi 2 could max out at 4, since it has 4 cores. At 1.43, one core was busy and another was busy a bit less than half the time. Not bad.
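The load-average arithmetic generalizes: dividing the 1-minute load by the core count gives a rough fraction of total CPU capacity in use. The helper below is my own illustration of the figures quoted above, not code from the review.

```python
# Illustrative load-average arithmetic; values above 1.0 mean runnable
# tasks are queuing faster than the cores can drain them.
def per_core_load(load_avg, n_cores):
    """Rough fraction of total CPU capacity in use."""
    return load_avg / n_cores

print(per_core_load(1.0, 1))   # the B+ was saturated on its single core
print(per_core_load(1.43, 4))  # the Pi 2 had well over half its capacity free
```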

 

Conclusions:

 

This project is really just the beginning of a larger project to collect, analyze and then visualize data from many sources. For example, I have a weather station I'd love to feed into this. I have microcontroller projects that record GPS location, acceleration/deceleration, changes in magnetic fields and even a lightning sensor.

 

Yes, I am a bit of a nerd; but honestly, I work at a university, and while I am not a researcher and my real job is in IT, I have the great fortune to work with some of the researchers on campus and help them out with their research.

 

The whole point of the project was to have one small, low-cost, relatively low-power computer (or two for redundancy) to collect the data, store it, analyze it and then visualize it, but also to provide, essentially, a small cloud server, complete with file and web services.

 

The analytics are merely a way for me to keep tabs on all my running projects and equipment. The ELK stack, mainly Kibana, provides a great "eyeball" experience. That's not to say it is just eye candy. For example, I have worked with many system admins who never check their logs. Let's face it, there is a lot of useless data in them.

 

Kibana has a wonderful way of showing an observer the "gist" or pattern of normal operation; as soon as the pattern gets broken, it shows up as a change in the visualization that an admin can easily spot when "eyeballing" the graph. I've used this realization to help me with my own work as well (actually, I set up the visualizations for the admins at one point out of sheer frustration, to make sure I didn't end up doing their jobs and tracking down every single problem after something had broken).

 

The Pis were my first choice, but as mentioned, the Model B and B+ were just a little underpowered.

 

But the Pi 2 definitely has the main memory and CPU power to make the project work.

 

And... incidentally, it makes a decent OpenElec media center... ugh... guess I have to get another one. :)
