Sensor technology has evolved to the point where many single physical sensors are now available to measure temperature, pressure, position, location, humidity, moisture, chemical composition, smoke, gas, and many other environmental parameters, each able to provide data in analog or digital form. However, these single sensors have limitations, such as:
- Sensor deprivation or fault tolerance: A sudden failure of a sensor causes a complete loss of perception of the targeted object.
- Limited spatial coverage: A single sensor usually covers only a limited region. For example, a single thermometer in a large container reads the temperature only near its own position and fails to measure the average temperature of the whole container.
- Limited temporal coverage: Some sensors need time to perform a measurement and transmit the value, and this delay can be detrimental to the response of a real-time system.
- Imprecision: Measurements from an individual sensor are limited to the precision of its sensing element.
- Uncertainty: Uncertainty depends more on the object being observed than on the observing device. A single sensor cannot measure all relevant attributes of the observed object, so the observation may be ambiguous and uncertainty arises.
Sensor Fusion Technology
Sensor fusion technology is a solution that overcomes the limitations of a single physical sensor. The system acquires data from many different sensors, and the combined data is analyzed with specific algorithms to provide the desired accurate and precise result.
We can define sensor fusion as a process (typically software) that combines data from multiple sensors to provide a more accurate picture of the object and its environment than any single sensor can give on its own. The smartphone is the most popular example of sensor fusion: to determine its orientation and track its motion, the smartphone processor acquires data from three different sensors, the accelerometer, gyroscope, and magnetometer, and combines them.
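As a rough illustration of how such a blend can work, the sketch below shows a complementary filter, one of the simplest techniques a phone-style system might use to merge gyroscope and accelerometer readings into a single pitch estimate. The function name, axis conventions, and blend factor are illustrative assumptions, not a specific vendor's implementation.

```python
import math

def fuse_pitch(pitch_prev, gyro_rate, accel_x, accel_z, dt, alpha=0.98):
    """Complementary filter: blend gyroscope and accelerometer into one pitch angle.

    The gyroscope integrates smoothly but drifts over time; the accelerometer
    is noisy but gives an absolute, drift-free reference from gravity.
    """
    # Short-term estimate: integrate the gyroscope's angular rate (rad/s).
    pitch_gyro = pitch_prev + gyro_rate * dt
    # Long-term reference: pitch angle implied by the measured gravity vector.
    pitch_accel = math.atan2(accel_x, accel_z)
    # Weighted blend: trust the gyro for fast motion, the accelerometer for drift correction.
    return alpha * pitch_gyro + (1.0 - alpha) * pitch_accel
```

Neither sensor alone would suffice here: the gyroscope estimate drifts within seconds and the raw accelerometer angle jitters with every vibration, but the fused estimate stays both smooth and drift-free.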
Figure 1: Sensor Fusion Block Diagram
Sensor Fusion Example: The Human Body
The human body's method of analyzing the environment is analogous to sensor fusion technology. The body detects the external environment through various senses, namely vision, hearing, touch, smell, and taste. These biological sensors collect data about our surroundings and pass it to the brain through the nervous system. The brain acts as a processor, interprets the data in real time, and then decides how to respond to the change in the environment.
Generally, the brain makes a decision on the basis of several sensory inputs, validating an event and compensating for a lack of information from any one sense. For example, if there is a fire in one corner of a building, we may or may not be able to see it from our location, but we can smell the smoke and sense the heat, and the brain decides to leave the area.
Categorizations of Sensor Fusion
Sensors used as data sources in a fusion process are generally not identical. We can differentiate sensor fusion into direct fusion, indirect fusion, and a combination of the outputs of the two. Direct fusion is the fusion of sensor data from a set of similar or different types of sensors. Indirect fusion is based on prior knowledge about the environment that is already available to us, and it does not happen in real time.
Sensor fusion can also be categorized by the methodology of implementation, such as the level and extent of fusion, the types of inputs and outputs, and the sensor configuration.
Fusion processes are normally classified into three levels: low-, intermediate-, and high-level fusion:
- Low-level or data-level fusion (also called the direct approach) combines raw data from several sources into new data that describes the physical environment more informatively than any individual input, before any features are extracted.
- Intermediate or feature-level fusion is a feature-based approach that compresses raw data into predefined features such as edges, lines, and textures, which represent the objects in the physical environment.
- High-level or decision-level fusion combines several decision-level inputs into a single decision, using methods such as fuzzy logic or statistical modelling.
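As a minimal sketch of decision-level fusion, the snippet below uses simple majority voting, one of the simplest decision-combining rules, standing in here for the fuzzy-logic or statistical methods mentioned above; the sensor labels are hypothetical.

```python
from collections import Counter

def fuse_decisions(decisions):
    """Decision-level fusion: combine independent sensor decisions by majority vote."""
    label, _ = Counter(decisions).most_common(1)[0]
    return label

# Three independent smoke detectors each report their own alarm decision.
print(fuse_decisions(["fire", "no_fire", "fire"]))  # -> "fire"
```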
Another categorization based on the three-level model is derived from the abstraction level of the input and output data and from fusion patterns whose input and output belong to different levels. For example, pattern recognition and pattern processing run between the feature and decision levels. These cross-level fusion patterns are sometimes classified by the level of their input data and sometimes by the level of their output data. This amounts to five fusion categories:
- Raw data output from raw data input
- Feature output from raw data input
- Feature output from feature input
- Decision output from feature input
- Decision output from decision input
On the basis of sensor configuration, sensor fusion falls into three categories, namely complementary, competitive, and cooperative.
In the complementary configuration, sensors do not directly depend on each other but are combined to give a more complete picture of the phenomenon under observation. This compensates for the incompleteness of each individual sensor's data.
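A minimal sketch of the complementary case, reusing the large-container example from earlier: several thermometers each cover a different zone, and combining them describes the whole container better than any single reading can. The zone names and values here are hypothetical.

```python
def fuse_complementary(zone_readings):
    """Complementary fusion: each sensor covers a different zone of the container;
    together they give a more complete picture than any single zone sensor."""
    average = sum(zone_readings.values()) / len(zone_readings)
    return {"zones": zone_readings, "average": average}

# Thermometers placed at three different heights in a large container.
print(fuse_complementary({"top": 23.5, "middle": 21.0, "bottom": 19.2}))
```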
In the competitive configuration, each sensor delivers an independent measurement of the same parameter. There are two possible competitive configurations: fusion of data from different sensors, and fusion of measurements taken by a single sensor at different instants.
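One common way to fuse such redundant measurements is inverse-variance weighting, where each reading of the same parameter is weighted by how precise its sensor is; the sketch below assumes the sensor variances are known in advance.

```python
def fuse_competitive(measurements, variances):
    """Competitive fusion: weight redundant measurements of the same quantity by 1/variance.

    The fused variance is smaller than any individual sensor's variance,
    so the combined estimate is more precise than any single reading.
    """
    weights = [1.0 / v for v in variances]
    fused = sum(w * m for w, m in zip(weights, measurements)) / sum(weights)
    return fused, 1.0 / sum(weights)

# Two thermometers reading the same spot: 21.2 degrees (variance 0.4) and 20.8 degrees (variance 0.1).
print(fuse_competitive([21.2, 20.8], [0.4, 0.1]))  # fused value lies closer to the more precise sensor
```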
In the cooperative configuration, information from two independent sensors is combined to derive information that cannot be obtained from either sensor alone. A simple example is stereoscopic vision: two cameras at different viewpoints each capture a two-dimensional image, and combining their data yields a three-dimensional view of the observed scene.
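The stereoscopic case can be sketched with the standard pinhole relation depth = focal length x baseline / disparity: neither image contains depth on its own, but the horizontal shift of a matched feature between the two views does. The focal length, baseline, and pixel coordinates below are made-up values for illustration.

```python
def depth_from_disparity(focal_length_px, baseline_m, x_left_px, x_right_px):
    """Cooperative fusion of two cameras: depth emerges only by combining both views."""
    disparity = x_left_px - x_right_px  # horizontal shift of the same feature between images
    if disparity <= 0:
        raise ValueError("Disparity must be positive for a valid match")
    return focal_length_px * baseline_m / disparity

# 700 px focal length, 12 cm baseline, feature matched at x=430 (left image) and x=410 (right image).
print(depth_from_disparity(700, 0.12, 430, 410))  # about 4.2 m
```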
Sensor Fusion Models and Algorithms
Because the fusion sensor configuration depends heavily on the application, no broadly accepted model of sensor fusion has emerged so far, and it is questionable whether any single technique could be a uniformly superior solution. However, there are standard architectures such as the JDL fusion model, the waterfall fusion process, the Boyd model, and the LAAS architecture, which can be adopted as the application demands.
Sensors generally describe the environment by taking measurements. Since these measurements can be noisy, they must be corrected so that the observed parameters can be reconstructed. Sensor fusion uses specific algorithms for smoothing, filtering, and prediction, such as the central limit theorem, the Kalman filter, Bayesian networks, Dempster-Shafer theory, and convolutional neural networks, to obtain an optimal result. Such algorithms are used in aircraft altitude detection, traffic situation analysis, and the orientation of systems in three-dimensional space.
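As a hedged illustration of the filtering step, here is a minimal one-dimensional Kalman filter for a roughly constant signal observed through noise. The process and sensor variances are assumed values, and real fusion systems typically use multi-dimensional state models, but the predict-then-correct loop is the same idea.

```python
def kalman_1d(measurements, process_var=1e-3, sensor_var=0.25, x0=0.0, p0=1.0):
    """Minimal 1-D Kalman filter: predict, then correct with each new measurement.

    The Kalman gain k balances trust between the prediction and the measurement
    according to their respective uncertainties.
    """
    x, p = x0, p0                      # state estimate and its variance
    estimates = []
    for z in measurements:
        p = p + process_var            # predict: uncertainty grows between measurements
        k = p / (p + sensor_var)       # Kalman gain
        x = x + k * (z - x)            # update: move the estimate toward the measurement
        p = (1.0 - k) * p              # update: uncertainty shrinks after the correction
        estimates.append(x)
    return estimates

# Smooth a noisy stream of temperature readings.
print(kalman_1d([22.3, 21.8, 22.6, 22.1, 21.9], x0=22.0))
```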
Driverless cars, which require accurate information about their surroundings to make driving decisions, are one of the most discussed applications of sensor fusion today. Various consumer and industrial applications, such as industrial robots, automotive traction control, smartphones, tablets, IoT devices, and fitness bands, also require sensor fusion capability.
Modern Trends
Silicon technology has evolved to the point where sensor fusion can be achieved by integrating and fabricating multiple sensors into a single MEMS device. This may be accompanied by a sensor hub, an on-board microcontroller that integrates and processes the data from the different sensors, reducing the load and power consumption of the central processor.
Sensor fusion chipsets are available, such as the SSC7102-GQ-AA0 controller from Microchip. Its sensor fusion firmware, hosted on a 32-bit embedded controller, provides self-contained 9-axis sensor fusion, sensor data pass-through, fast in-use background calibration of all sensors with a calibration monitor, magnetic immunity (enhanced magnetic distortion detection and suppression), gyroscope drift cancellation, and ambient light sensor support.
SSC7102 Sensor Fusion Hub: More Information
The FEBFIS1100MEMSIMU6D3X development board from ON Semiconductor is another complete solution for 3D motion tracking with an optimized 9D sensor fusion library. It incorporates the FIS1100 Inertial Measurement Unit (IMU) with an AttitudeEngine motion co-processor and sensor fusion library, and it has a 3-axis gyroscope, a 3-axis accelerometer, and a 3-axis magnetometer.
FIS1100 6D Inertial Measurement Unit with Motion Co-Processor and Sensor Fusion Library: More Information