The Experimenting with Sensor Fusion Design Challenge, sponsored by AMD Xilinx and featuring the Spartan-7 SP701 FPGA Evaluation Kit, has officially concluded. We had four participants, including the Grand Prize Winner and Runner-Up. Our judges have read each blog and tallied up the final scores, and element14 is ready to announce the winners. In this blog, I'll review the program (for newcomers) and announce the winners with a summary and links to their work.
A Quick Review: What is Sensor Fusion?
Sensors are an extension of the five human senses. They allow us to perceive the world and often observe details to a degree that our human senses cannot. However, in some situations, they still fall short of the user requirements, regardless of how well they perform. For example, in an automobile, a LIDAR sensor can determine whether there is an obstacle ahead. But if you want to know the exact nature of the obstacle, you also need an on-board camera. Moreover, if you want to sense the motion state of this object, you'll also need a millimeter-wave (mmWave) radar.
When multiple pieces of information on the features of the object are integrated, a more complete and accurate picture can be derived for system operation. This method of integrating multiple sensors is called “sensor fusion.”
By definition, sensor fusion is the use of computer technology to automatically analyze and synthesize information and data from multiple sensors or sources under certain criteria to conduct the information processing required for making decisions and estimations.
Two common types of sensor fusion are image and motion sensor fusion, used in automotive surround-view and navigation applications, respectively. Other uses could include (a) determining the orientation of a system in three-dimensional space, or (b) fitness trackers whose data is fused with data from wearable heart rate monitors, temperature sensors, and other devices as part of telehealth services or remote monitoring of patient conditions.
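To make case (a) concrete, here is a minimal sketch of one of the simplest fusion algorithms: a complementary filter that blends a gyroscope's short-term accuracy with an accelerometer's drift-free gravity reference to estimate pitch. The function name, the 0.98/0.02 weighting, and the sample values are illustrative assumptions, not code from any of the challenge projects.

```cpp
#include <cmath>
#include <cstdio>

// One fusion step: the gyro rate (deg/s) is integrated for short-term accuracy,
// while the accelerometer's gravity vector corrects long-term drift.
double fuse_pitch(double pitch_deg, double gyro_rate_dps,
                  double accel_x_g, double accel_z_g, double dt_s)
{
    const double kRadToDeg = 180.0 / 3.14159265358979323846;
    double gyro_pitch  = pitch_deg + gyro_rate_dps * dt_s;            // integrate gyro
    double accel_pitch = std::atan2(accel_x_g, accel_z_g) * kRadToDeg; // gravity reference
    return 0.98 * gyro_pitch + 0.02 * accel_pitch;                     // weighted blend
}

int main()
{
    // Example: previous estimate 10 deg, gyro reads 4 deg/s, accelerometer sees
    // gravity consistent with roughly 12 deg of tilt, 10 ms since the last sample.
    double pitch = fuse_pitch(10.0, 4.0, 0.21, 0.98, 0.01);
    std::printf("Fused pitch estimate: %.2f deg\n", pitch);
    return 0;
}
```

The same idea scales up to Kalman-filter-based fusion once more sensors and explicit uncertainty estimates are involved.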
What is the Experimenting with Sensor Fusion Design Challenge?
element14's Experimenting with Sensor Fusion is a hands-on competition for electronic engineers. The participants had the opportunity to receive a sensor fusion dev kit from our sponsor FREE of charge. They were challenged to conduct experiments and blog about what they learned. Their blogs would be judged for technical merit and creativity. The top two participants would receive some great prizes.
Who are the Winners of the Experimenting with Sensor Fusion Design Challenge?
Our four participants conducted experiments and produced 13 technical blogs. Our judges have made their decisions, so let's meet the winners!
- Grand Prize Winner of the Experimenting with Sensor Fusion Challenge: javagoza
javagoza is currently a back-end software developer for payment solutions in the payment card industry, specializing in PCI and EMV compliance. He said on his application that the main reason he was interested in the Experimenting with Sensor Fusion program was to "learn more about FPGA design and image processing applications."
He experimented with building a prototype for a portable alert monitor that provides heads-up display information for firefighters. He built the system in stages, experimenting with new sensors and integrating them into the system as he went. The sensors and peripherals included:
- the Pcam 5C, a 5MP fixed-focus color camera module
- various HDMI displays
- the Pmod NAV, a 9-axis IMU plus barometer
- the Pmod HYGRO, a digital humidity and temperature sensor
- the Pmod CMPS2, a 3-axis compass
- the SparkFun Environmental Combo Breakout (CCS811 equivalent CO2 and total volatile organic compounds sensor, plus BME280 humidity, temperature, and barometric pressure sensor)
- the SparkFun IR Array Breakout: the MLX90640, a 32x24 thermopile sensor array with a 110-degree FOV
He also experimented with the different functionalities of the AMD Xilinx Vivado development environment, taking advantage of hardware acceleration on the Spartan-7 FPGA. The system combines a thermal array sensor, the live video image, a magnetometer compass, and the IMU to detect movement and orientation of the head, plus a time-of-flight (ToF) sensor to estimate the distance to nearby objects. With respect to the SP701 development board, javagoza said, "The SP701 board makes prototyping solutions based on the Spartan-7 FPGA really easy." He designed the hardware, a heads-up display IP block, using high-level synthesis tools (Vitis HLS). He used different IP blocks provided by Xilinx within the Vivado Block Designer and built other software drivers with the Vitis IDE. One of our judges called javagoza's blogs "an excellent set of posts."
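For readers new to high-level synthesis, the sketch below shows the general shape of a video overlay kernel of the kind that can be written in Vitis HLS: plain C++ that blends a colorized thermal frame onto the camera frame, with a pragma hinting to the tool that the loop should be pipelined. The frame dimensions, function name, and interfaces are illustrative assumptions, not javagoza's actual heads-up display IP.

```cpp
#include <cstdint>
#include <vector>

constexpr int WIDTH  = 1280;   // assumed camera frame width
constexpr int HEIGHT = 720;    // assumed camera frame height

// Blend one camera frame with a colorized thermal frame (both 24-bit packed RGB).
// The pragma below is only meaningful when the function is synthesized in Vitis HLS;
// a regular C++ compiler simply ignores it.
void hud_overlay(const uint32_t *camera, const uint32_t *thermal,
                 uint32_t *out, uint8_t alpha)   // alpha: 0 = camera only, 255 = thermal only
{
    for (int i = 0; i < WIDTH * HEIGHT; ++i) {
#pragma HLS PIPELINE II=1
        uint32_t blended = 0;
        for (int ch = 0; ch < 3; ++ch) {                       // blend R, G, B channels
            uint32_t c = (camera[i]  >> (8 * ch)) & 0xFF;
            uint32_t t = (thermal[i] >> (8 * ch)) & 0xFF;
            blended |= ((c * (255 - alpha) + t * alpha) / 255) << (8 * ch);
        }
        out[i] = blended;
    }
}

int main()
{
    // Simple software test bench: a gray camera frame and a uniformly "hot" thermal frame.
    std::vector<uint32_t> cam(WIDTH * HEIGHT, 0x808080), thermal(WIDTH * HEIGHT, 0x0000FF),
                          out(WIDTH * HEIGHT);
    hud_overlay(cam.data(), thermal.data(), out.data(), 64);  // roughly 25% thermal overlay
    return 0;
}
```

In a real video pipeline the plain arrays would typically be replaced by AXI-Stream interfaces so the synthesized block can drop straight into a Vivado block design.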
You can read all of his blogs here.
- Runner Up Prize Winner of the Experimenting with Sensor Fusion Challenge: _david_
_david_ is a 2021 graduate with a background in computer engineering and a concentration in robotics. He has worked with Xilinx FPGAs for about two years. He started out by interfacing MIPI CSI cameras in bare metal, then progressively switched over to embedded Linux, where he learned how to use XRT to deploy accelerated computer vision applications for drones.
His goal for this challenge was to implement some kind of sensor fusion application. Initially, he wanted to experiment with Visual-Inertial Odometry (VIO), a type of sensor fusion that uses image sensors and IMU data to compute the position and orientation of an object. But due to time constraints, he chose to take on only part of this problem, namely the inertial odometry part.
He planned to collect data from an IMU and accelerate the computation of an object's pose. Once computed, he would generate a visualization of a set of unit vectors to be fused with a live camera feed. In essence, this is an augmented reality (AR) application that attempts to project the pose of an object in real time. He calls it a "drone pose" because drones are probably the best example of a rigid-body robot that experiences linear and angular accelerations. At a high level, he wanted to be able to collect data from an IMU in order to calculate the pose of a rigid body and then represent it as a coordinate frame on a live camera feed.
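To give a flavor of the inertial half of that idea, here is a minimal sketch of propagating a rigid body's orientation from gyroscope readings by quaternion integration; the resulting quaternion can then rotate the unit vectors of a body coordinate frame before they are drawn over the camera feed. The structure, sample rates, and test loop are illustrative assumptions, not _david_'s actual code.

```cpp
#include <cmath>
#include <cstdio>

struct Quat { double w, x, y, z; };

// Integrate body-frame angular rates (rad/s) over dt seconds: q <- q * dq,
// where dq is the small rotation accumulated during one sample period.
Quat integrate_gyro(Quat q, double wx, double wy, double wz, double dt)
{
    double rate  = std::sqrt(wx * wx + wy * wy + wz * wz);   // |omega| in rad/s
    double angle = rate * dt;                                 // rotation this step
    if (angle < 1e-12) return q;                              // no measurable rotation
    double s = std::sin(angle / 2.0) / rate;                  // scales the rotation axis
    Quat dq{std::cos(angle / 2.0), wx * s, wy * s, wz * s};
    return Quat{                                              // Hamilton product q * dq
        q.w * dq.w - q.x * dq.x - q.y * dq.y - q.z * dq.z,
        q.w * dq.x + q.x * dq.w + q.y * dq.z - q.z * dq.y,
        q.w * dq.y - q.x * dq.z + q.y * dq.w + q.z * dq.x,
        q.w * dq.z + q.x * dq.y - q.y * dq.x + q.z * dq.w};
}

int main()
{
    const double kPi = 3.14159265358979323846;
    Quat q{1, 0, 0, 0};                                       // identity orientation
    // Rotate at 90 deg/s about the body z-axis for one second, sampled at 100 Hz.
    for (int i = 0; i < 100; ++i)
        q = integrate_gyro(q, 0.0, 0.0, kPi / 2.0, 0.01);
    // Expect roughly (0.707, 0, 0, 0.707): a 90-degree rotation about z.
    std::printf("q = (%.3f, %.3f, %.3f, %.3f)\n", q.w, q.x, q.y, q.z);
    return 0;
}
```

Full VIO would go further, fusing this gyro-propagated orientation with accelerometer and camera measurements to correct the drift that pure integration accumulates.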
One of our judges said his blogs were "well-explained and he didn't try to do an excessive amount, and his two blogs were clear. The bulk of the content was in the second blog, which actually explains a lot to people new to the technology. I'd rate his content very high."
You can read all of his blogs here.
I'd like to thank all the element14 members who participated in this challenge:
jwr50 - FPGA-based VSLAM for Indoor Navigation
He explored Visual Simultaneous Localization and Mapping (VSLAM) for indoor spaces. VSLAM is a class of algorithms that combines image sequences with pose information to construct a map of a device's surroundings while simultaneously estimating the device's location within that map. This technique is well suited to indoor environments where GPS is unavailable and other positioning aids, such as markers or beacons, are absent. You can read his blogs here.
guillengap - Sensor Fusion Bird Detector
He attempted to develop a bird detector prototype with the Spartan-7 SP701 FPGA Evaluation Kit. He was motivated to take on this challenge because there is a great diversity of migratory birds in his area, so he wanted to experiment with pigeons, ravens, swallows and hummingbirds, since they have different types of behaviors. You can read his blogs here.
Last Word: A Big Thank You to Our Judges!
We'd like to thank Top Members Don Bertke and Shabaz for judging the Experimenting with Sensor Fusion Challenge! Their input on the projects was invaluable to our final decisions.