FPGAs and SoCs are enabling IoT devices to analyze image and video without the help of a central server.
What is Edge Computing?
With their accelerating growth and increasing computing power, IoT devices are producing massive, sometimes unmanageable volumes of data, outpacing infrastructure and network capabilities. Sending all of this data to a cloud or centralized data center can introduce latency and bandwidth bottlenecks. Edge computing offers an efficient alternative, allowing IoT devices in remote locations to rapidly process and act on data at the edge of the network. The device or a local server processes the data and forwards only the most critical results to the central data center, minimizing latency. Edge computing has also become essential for artificial intelligence (AI) and machine learning (ML) applications. Many edge devices now support embedded vision applications such as surveillance and medical and industrial imaging. However, edge-supported embedded vision applications demand low-power devices with high performance and high reliability, all in a small form factor. This Spotlight discusses edge computing in machine vision applications and low-power FPGAs for smart embedded vision solutions.
How does Edge Computing benefit a network?
With edge computing, data processing is done close to the point of data generation to reduce response times. This offers lower latency and faster processing, and it can improve security by keeping sensitive data local rather than sending it all across a cloud network. The goal of edge computing is to accelerate innovation by simplifying daily operations. It shares data storage and processing similarities with cloud computing; however, edge computing offers several advantages over the cloud:
Improved performance: By positioning computing nodes closer to the data source, edge computing achieves lower latency and higher effective bandwidth. This results in faster data processing and response times, even when dealing with large data volumes. This distribution of compute across many nodes is why edge computing is also described as a distributed IT network architecture.
Scalability: Edge computing allows the addition of extra computing power when needed, permitting increased flexibility in resource allocation. Additionally, edge computing facilitates the integration of legacy and resource-constrained devices into advanced OT-IT system architectures when connected to edge devices.
Sustainability: Edge computing contributes to a low-carbon society by reducing traffic to the cloud, minimizing cloud storage, and lowering the number of cloud operations, which in turn reduces infrastructure and power consumption.
Minimum downtime: Edge computing ensures that applications run smoothly even during limited connectivity or network disruptions.
Security: Unlike cloud computing, edge computing minimizes bulk data transfer between devices and data centers, providing an added layer of protection. Sensitive data can be filtered, with only essential information being transferred, enhancing overall security.
Cost savings: Edge computing performs data analytics at the device location, reducing the need for bandwidth, storage, and computational power at centralized cloud facilities.
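The filtering idea behind several of these benefits can be sketched in a few lines. This is a minimal, hypothetical example (the sensor names, threshold, and payload format are assumptions, not part of any real system): the device analyzes a batch of readings locally and forwards only the critical ones upstream, which is how edge computing trims bandwidth, latency, and cloud costs.

```python
# Hypothetical alert level for this sensor type (assumed for illustration).
CRITICAL_THRESHOLD = 75.0

def process_at_edge(readings):
    """Analyze readings locally; return only those worth sending upstream."""
    critical = []
    for r in readings:
        if r["value"] >= CRITICAL_THRESHOLD:
            critical.append({"sensor": r["sensor"], "value": r["value"]})
    return critical

# A batch of four local readings yields a single upstream message.
batch = [
    {"sensor": "temp-1", "value": 21.5},
    {"sensor": "temp-1", "value": 22.0},
    {"sensor": "temp-2", "value": 80.3},  # exceeds the alert level
    {"sensor": "temp-1", "value": 21.8},
]
to_send = process_at_edge(batch)
```

In this sketch, three of the four readings never leave the device; only the anomaly travels to the data center.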
Figure 1: Edge computing and cloud computing (Source: AIMultiple)
What is Intelligent Embedded Vision?
Embedded vision refers to the use of computer vision in machines that visually comprehend their environment. A computer (or machine) captures and processes visual information, performing tasks such as object detection and facial recognition on images and video. Embedded vision systems are compact and typically pair a camera or other imaging sensor with an image processor. Figure 2 shows the critical components of an embedded vision system: a processor platform optimized for image and video processing, camera components encompassing imaging sensors such as digital or thermal cameras, hardware interfaces, the operating system running on the processor, application software for tasks like image processing, and integration interfaces that tie the components and communication modules together.
Figure 2: Key components of an Embedded vision system (Source: Avnet)
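To make the processing stage of such a system concrete, here is a minimal, self-contained sketch of one classic embedded vision task: frame differencing for motion detection. Everything here is illustrative, not a real camera pipeline: frames are modeled as 2-D lists of grayscale pixel values, and the function names and thresholds are assumptions. A production system would use camera drivers and an optimized image-processing library on the processor platform described above.

```python
def frame_diff(prev, curr, pixel_threshold=30):
    """Count pixels whose grayscale value changed by more than pixel_threshold."""
    changed = 0
    for row_p, row_c in zip(prev, curr):
        for p, c in zip(row_p, row_c):
            if abs(p - c) > pixel_threshold:
                changed += 1
    return changed

def motion_detected(prev, curr, ratio=0.01):
    """Flag motion when more than `ratio` of the pixels changed between frames."""
    total = len(curr) * len(curr[0])
    return frame_diff(prev, curr) > ratio * total

# Two tiny 4x4 "frames": the second contains a bright object entering the scene.
frame_a = [[10] * 4 for _ in range(4)]
frame_b = [[10] * 4 for _ in range(4)]
frame_b[1][1] = frame_b[1][2] = 200  # object pixels
```

Running `motion_detected(frame_a, frame_b)` flags motion, while comparing a frame against itself does not; on an edge device, only the flagged event (not the raw video) would be sent upstream.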
Embedded vision supports various applications across industries, encompassing real-time object detection in surveillance, autonomous vehicles, and robotics. One widespread application is facial recognition, which provides secure authentication through unique facial features. Gesture recognition enables intuitive interactions in gaming consoles, smartphones, and interactive devices. Optical Character Recognition (OCR) supports tasks like document scanning and reading license plates, enhancing accessibility through text extraction. Object detection also supports Augmented Reality (AR), which overlays digital content onto the real world, enhancing user experiences in gaming, education, and marketing.
In healthcare, embedded vision's AI and ML algorithms increase the accuracy of various imaging methods, such as X-rays, MRIs, and CT scans, allowing the identification of problem areas that the human eye could miss. Autonomous drones and robots can independently navigate surroundings and execute tasks without human supervision. In agricultural automation, embedded vision enables precision farming, process optimization, and minimizes resource waste. Embedded vision also helps in retail analytics, layout optimization, and inventory management. Its versatility continues to revolutionize industries, increasing efficiency and productivity.
Applications of Embedded Vision on the Edge
The integration of embedded vision into edge devices has opened a wide range of possibilities across various industries:
Healthcare: Edge devices with embedded vision can improve patient outcomes and reduce costs. Wearable devices with embedded vision can monitor patients for signs of disease or injury, enabling timely interventions and reducing hospital readmissions.
Manufacturing: Embedded vision algorithms can quickly analyze images of manufactured parts to detect defects or flaws. This allows manufacturers to address issues before products are shipped, enhancing quality control and reducing waste.
Smart Homes: Intelligent cameras with computer vision can detect human presence in rooms, adjusting lighting and temperature settings accordingly. They can also alert homeowners to potential intruders.
Agriculture: Embedded vision on edge devices can improve crop yields and reduce waste. Drones equipped with embedded vision can quickly analyze crop images to detect diseases, pests, or other issues, enabling timely action by farmers.
Public Safety: Cameras with embedded vision can swiftly analyze crime scene images and alert law enforcement of potential suspects, leading to quicker emergency response times and reduced crime rates. These devices can also provide real-time situational awareness during natural disasters or emergencies.
Challenges of transitioning to Edge Computing
For the hardware developer, supporting edge AI/ML can be challenging. Devices that support embedded vision applications at the edge must consume minimal power, deliver high performance and reliability, and fit in a small form factor. Embedded vision developers can add more sensors and/or cameras with higher resolution and faster frame rates to boost accuracy and enable new applications. Designers must also comply with the MIPI (Mobile Industry Processor Interface) standards, which define interface specifications for mobile, connected-car, and IoT devices.
Intelligent Embedded Vision with Microchip SoCs and FPGAs
FPGAs offer a well-suited platform for embedded vision systems in edge computing and edge applications. They are reconfigurable, can execute processes at blazing-fast speeds, and are cost-effective. FPGAs are also preferable for edge AI applications, such as inferencing in power-constrained compute environments, because they can perform more operations per second, with greater power efficiency, than a central processing unit (CPU) or graphics processing unit (GPU).
MPFS025T Series PolarFire FPGA SoC
PolarFire FPGA and SoC solutions offer customizable image sensor support for cameras and sensors with the highest dynamic range and resolution. This non-volatile FPGA/SoC portfolio delivers high power efficiency for thermal and high-resolution applications in a tiny form factor. PolarFire FPGAs consume about 50% less power than comparable devices for inferencing at the edge. They also offer 25% higher-capacity math blocks, delivering up to 1.5 tera operations per second (TOPS). Developers can use FPGAs for greater customization and differentiation thanks to their inherent upgradability and their ability to integrate a variety of functions on a single chip. PolarFire FPGAs are available in various sizes to match an application's performance, power, and package-size tradeoffs, enabling customers to implement solutions as small as 11 × 11 mm.
MPF050T Series PolarFire FPGA
Summing up: Embedded Vision at the Edge
Integrating embedded vision into edge devices can transform various industries and pave the way for future advancements. An efficient edge computing system for smart vision requires flexible hardware with advanced image processing capabilities and low power consumption. PolarFire and PolarFire SoC FPGAs from Microchip deliver greater power efficiency for thermal and high-resolution applications, and the PolarFire family comes in the industry's smallest form factors, making it ideal for a new range of compute-intensive edge devices.
What kind of applications will Embedded Vision in edge devices enable?
Please tell us in the Comments section below.