Introduction
The futurists have spoken: roughly eight million autonomous or semi-autonomous vehicles are expected on the road by 2025. Driver assistance technology not only protects a car's occupants, it also saves the lives of pedestrians and other drivers. Future commuters can expect lighter traffic congestion, reduced vehicular pollution, fewer parking problems, and lower transportation costs. Governments, enticed by reduced spending on infrastructure such as roads and street furniture, are on board with the introduction of autonomous vehicles.
Autonomous driving is standardized across six levels of automation, graded by the vehicle's capacity to operate on its own. Even at the lowest level, implementation requires the ability to run fully parallel algorithms with high data-rate functionality. FPGAs are well suited to autonomous driving applications: they form heterogeneous systems with high parallel processing power and offer customizable, high-performance solutions up to the highest level of automation.
Overview of Driving Levels
The SAE (Society of Automotive Engineers) International classification system is the standard reference point for discussing vehicle automation. The SAE taxonomy defines six driving automation levels, from Level 0 (no automation, with a fully engaged driver) to Level 5 (full autonomy with no human driver). Lower-level systems warn the driver of an impending accident, while the two highest take proactive action to avoid such incidents. The features engaged at any instant determine the driving automation level in effect.
Every drive involves three primary actors: the (human) user, the driving automation system, and the other vehicle components and systems. Driving automation levels are defined by the role each of these actors plays in executing the dynamic driving task (DDT). The automated vehicle system is a combination of software (onboard and remote) and hardware that conducts the driving task, with or without a human actively monitoring the driving environment. The SAE definitions separate vehicles into levels based on "who does what, when." The following table shows SAE Levels 0 through 2 (monitored by a human driver) and SAE Levels 3 through 5 (monitored by the automated driving system).
| SAE Level | Name | Narrative Definition | Execution of Steering and Acceleration/Deceleration | Monitoring of Driving Environment | Fallback Performance of Dynamic Driving Task | System Capability (Driving Modes) | Examples |
|---|---|---|---|---|---|---|---|
| Human driver monitors the driving environment | | | | | | | |
| 0 | No Automation | The full-time performance by the human driver of all aspects of the dynamic driving task, even when supported by warning or intervention systems | Human driver | Human driver | Human driver | N/A | Blind Spot Detection / Surround View |
| 1 | Driver Assistance | The driver assistance system executes either steering or acceleration/deceleration for a specific driving mode using driving environment information, with the expectation that the human driver performs all remaining aspects of the dynamic driving task | Human driver and system | Human driver | Human driver | Some driving modes | Adaptive Cruise Control / Lane Keep Assist / Parking Assist |
| 2 | Partial Automation | The driver assistance system executes both steering and acceleration/deceleration for a specific driving mode using driving environment information, with the expectation that the human driver performs all remaining aspects of the dynamic driving task | System | Human driver | Human driver | Some driving modes | Traffic Jam Assist |
| Automated driving system ("system") monitors the driving environment | | | | | | | |
| 3 | Conditional Automation | The automated driving system conducts all aspects of the dynamic driving task, with the expectation that the human driver will respond appropriately to a request to intervene | System | System | Human driver | Some driving modes | Full-Speed Range Stop & Go (Highway) / Self-Parking |
| 4 | High Automation | The automated driving system conducts all aspects of the dynamic driving task, with no expectation that the human driver will respond to a request to intervene | System | System | System | Some driving modes | Automated Driving / Valet Parking |
| 5 | Full Automation | Complete performance by the automated driving system of all dynamic driving tasks, under all roadway and environmental conditions that can be managed by a human driver | System | System | System | All driving modes | Fully Autonomous Driving / Driverless Vehicle Operation |

Table 1: SAE driving levels (Source: Xilinx)
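Where it helps to reason about these levels in software, the "who does what, when" columns can be captured directly as data. Below is a minimal C++ sketch (our own illustration, not part of the SAE standard or any Xilinx library) encoding Table 1:

```cpp
// Encode Table 1's "who does what, when" columns as data so software
// can reason about the active automation level.
#include <array>
#include <cstdio>

enum class Actor { HumanDriver, System, HumanAndSystem, NotApplicable };

struct SaeLevel {
    int         level;
    const char* name;
    Actor       execution;   // steering and acceleration/deceleration
    Actor       monitoring;  // monitoring of the driving environment
    Actor       fallback;    // fallback performance of the DDT
    bool        allModes;    // system capability: all vs. some driving modes
};

constexpr std::array<SaeLevel, 6> kSaeLevels{{
    {0, "No Automation",          Actor::HumanDriver,    Actor::HumanDriver, Actor::HumanDriver, false},
    {1, "Driver Assistance",      Actor::HumanAndSystem, Actor::HumanDriver, Actor::HumanDriver, false},
    {2, "Partial Automation",     Actor::System,         Actor::HumanDriver, Actor::HumanDriver, false},
    {3, "Conditional Automation", Actor::System,         Actor::System,      Actor::HumanDriver, false},
    {4, "High Automation",        Actor::System,         Actor::System,      Actor::System,      false},
    {5, "Full Automation",        Actor::System,         Actor::System,      Actor::System,      true},
}};

int main() {
    for (const auto& l : kSaeLevels) {
        // The fallback column is what separates Level 3 from Level 4.
        std::printf("Level %d (%s): driver is fallback? %s\n", l.level, l.name,
                    l.fallback == Actor::HumanDriver ? "yes" : "no");
    }
}
```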
Autonomous Driving Needs: Scalability, Modularity, Portability, and Adaptability
Xilinx FPGAs' integrated connectivity and programmable architecture make them well suited for automotive applications. Because the hardware is standard across multiple car models, connectivity can be added or updated as network standards evolve. Xilinx products satisfy the dynamic application needs of various vehicle platforms in a scalable, cost-effective, and timely manner. The increased system performance of Xilinx All Programmable SoCs, combined with their highly integrated architecture, reduces overall power and bill of materials (BOM) cost, and brings flexibility through programmable interface choices for different standards.
The solution must scale its BOM cost across a wide range of systems, from simple to complex. This is vital: scalability lets automakers re-program devices and add processing power as system complexity, capability, or speed requirements rise. The architecture allows programmable fabric to be added when the application demands it.
Portability is a must for design migration between generations of a device family over time. Algorithms and interfaces should carry over efficiently across several product life cycles, adapting to new sensing technologies. The modern automotive industry imposes new constraints on automakers, champions greater efficiency, and highlights the need for adaptable processing engines. This is why modularity (the ability to adjust discrete processing performance elements) brings partitioning and functional safety advantages.
ASICs and GPUs are essentially one-size-fits-all solutions. Automakers can instead take advantage of the FPGA's programmable nature and customize chips to run proprietary algorithms.
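To make the modularity argument concrete, here is a hypothetical C++ sketch in which one pipeline skeleton is reused across levels, and capability is scaled by registering extra processing stages. The `Pipeline` class and stage names are our own illustration, not a Xilinx API:

```cpp
// Hypothetical sketch: the same pipeline skeleton is reused across
// automation levels; capability scales by adding stages, not by
// redesigning the system.
#include <cstdio>
#include <functional>
#include <string>
#include <vector>

struct Frame { /* sensor payload omitted for brevity */ };

class Pipeline {
public:
    void addStage(std::string name, std::function<void(Frame&)> fn) {
        stages_.push_back({std::move(name), std::move(fn)});
    }
    void run(Frame f) const {
        for (const auto& s : stages_) {
            std::printf("stage: %s\n", s.name.c_str());
            s.fn(f);
        }
    }
private:
    struct Stage { std::string name; std::function<void(Frame&)> fn; };
    std::vector<Stage> stages_;
};

int main() {
    Pipeline p;
    // Level-0 build: surround view only.
    p.addStage("capture", [](Frame&) {});
    p.addStage("dewarp",  [](Frame&) {});
    p.addStage("stitch",  [](Frame&) {});
    // Scaling to Level-1 means registering more stages, not a new design.
    p.addStage("radar_fusion", [](Frame&) {});
    p.addStage("aeb_decision", [](Frame&) {});
    p.run(Frame{});
}
```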
Figure 1: Automated driving systems functional diagram (Image credit: Xilinx)
The primary aim of ADAS technology is to make drivers more aware of their environment for a safer, less stressful drive. Modern high-end automobiles incorporate ADAS products fitted with advanced rear-camera and sensor-fusion systems. The sensors simultaneously perform multiple tasks, such as lane departure warning, blind-spot monitoring and warning, automated cruise control, pedestrian and sign detection, drowsy-driver detection, and forward-collision warning.
Many vehicles have adaptive cruise control, collision avoidance, intelligent speed control, automated parking, and lane-keep assist. These revolutionary technologies represent baby steps in the automotive industry’s efforts to offer consumers fully autonomous and self-driving vehicles where the driver is, for all intents and purposes, a copilot.
The following sections describe how the SAE driving levels are implemented using Xilinx FPGA devices.
Level-0 Surround-View example – The driver performs all operations, such as steering, braking, and acceleration. The vehicle has no autonomous or self-driving control.
Figure 2: Level 0 system
This system does not drive the vehicle, and the architecture is limited to a single device; a Zynq UltraScale+ device is used to implement such systems. The block diagram shows a surround-view example in which four low-resolution camera sensors produce individual views for the driver. They occasionally help the driver, for example when parking the vehicle.
The internal architecture of the Zynq UltraScale+ device (ZU2) contains a processing system comparable to that of a standard automotive ECU microcontroller: dual Cortex-R5 cores alongside quad Cortex-A53 application processor cores, with the FPGA fabric (the programmable logic, shown in yellow) receiving the camera signals via quad SERDES connected to a four-lane MIPI CSI-2 interface. This arrangement can manage 2 MP inputs at 30 fps through a frame-capture core and an image dewarper. Distortion correction occurs as the system stitches the frames into a comprehensive picture, and the video controller uses internal cores to coordinate and present the surround view. The R5 cores act as a real-time interface gateway for the low-power domain, vehicle status, the CAN interface, and driver-provided HMI inputs. The GPU core renders the vehicle animation and graphics.
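As an illustration of the dewarping step, the following C++ sketch applies a simple radial (Brown-Conrady) lens-distortion model of the kind a dewarp core evaluates per pixel. The coefficients and the `distort` helper are assumptions for illustration, not taken from any Xilinx core:

```cpp
// Host-side sketch of radial lens-distortion correction (Brown-Conrady,
// k1/k2 terms only). FPGA fabric would evaluate this per pixel at line
// rate; the coefficients below are illustrative.
#include <cstdio>

struct Point { double x, y; };

// Map an ideal (undistorted) normalized coordinate to the distorted
// coordinate observed by the fisheye camera.
Point distort(Point p, double k1, double k2) {
    double r2 = p.x * p.x + p.y * p.y;
    double scale = 1.0 + k1 * r2 + k2 * r2 * r2;
    return {p.x * scale, p.y * scale};
}

int main() {
    // To build a dewarp remap table, iterate over output pixels, apply
    // distort(), and record where to sample the input image.
    Point src = distort({0.5, 0.25}, -0.30, 0.08);
    std::printf("sample input at (%.4f, %.4f)\n", src.x, src.y);
}
```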
Level-1 Multi-Feature example – Level 1 covers driver assistance, where the driver controls most vehicle functions but enjoys occasional autonomous help, such as parking assistance. In such cases, a beeping sound alerts the driver to an impending obstacle. Level 1 autonomy is present in most modern cars, packaged as an intelligent cruise control feature.
These vehicles add a few forward-looking cameras and some radar sensors to the Level-0 system, enabling automated emergency braking, lane departure warning, and other features. A larger Zynq UltraScale+ family device (ZU4 or ZU5) is used to implement the Level-1 architecture. An additional interface for the forward-looking camera must be appended, along with an auxiliary sensor-fusion accelerator section needed for lane-marking and headlamp detection. The vehicle detection accelerator, together with various other optimized accelerators, works in tandem with the application software to make emergency braking decisions, with an AI engine crunching the numbers.
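A toy C++ sketch of such an emergency-braking decision is shown below: a radar track supplies range and closing speed, a camera track confirms the object, and braking triggers below a time-to-collision threshold. The structures, threshold, and fusion rule are illustrative assumptions, not a production policy:

```cpp
// Toy sensor-fusion AEB decision: vision confirms the object, radar
// supplies range/closing speed, and we brake when time-to-collision
// (TTC) drops below a threshold. All values are illustrative.
#include <cstdio>

struct RadarTrack  { double range_m; double closing_speed_mps; };
struct CameraTrack { bool vehicle_confirmed; };

bool shouldEmergencyBrake(const RadarTrack& r, const CameraTrack& c,
                          double ttc_threshold_s = 1.5) {
    if (!c.vehicle_confirmed || r.closing_speed_mps <= 0.0) return false;
    double ttc = r.range_m / r.closing_speed_mps;  // seconds to impact
    return ttc < ttc_threshold_s;
}

int main() {
    RadarTrack  radar{18.0, 15.0};  // 18 m ahead, closing at 15 m/s
    CameraTrack cam{true};          // vision confirms a vehicle
    std::printf("AEB: %s\n", shouldEmergencyBrake(radar, cam) ? "brake" : "hold");
}
```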
Level-2 example – Vehicles at Level 2 have partial driving automation, fitted with advanced driver assistance systems (ADAS). Level 2 vehicles can assist with steering, maintaining speed, acceleration, braking, and other functions. Drivers, however, must keep both hands on the wheel and assume control if required. The Level-2 implementation is similar to Level-1, exploiting the scalability feature.
Level-3 System example – A Level-3 automated system performs some components of the driving task and monitors the driving environment in limited scenarios. The human driver must take back control when requested by the computerized system. The system has environmental detection capabilities and can independently make informed decisions, such as accelerating past a slow-moving vehicle.
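The takeover behavior can be sketched as a small state machine, shown below in C++; the states, the 10-second response budget, and the minimal-risk stop fallback are illustrative assumptions rather than a standardized design:

```cpp
// Minimal Level-3 handover state machine: the system drives until it
// requests intervention, then falls back to a safe stop if the driver
// does not take over within a time budget (10 s here, illustrative).
#include <cstdio>

enum class Mode { SystemDriving, TakeoverRequested, DriverDriving, MinimalRiskStop };

Mode step(Mode m, bool scenario_exited, bool driver_took_over, double waited_s) {
    switch (m) {
        case Mode::SystemDriving:
            return scenario_exited ? Mode::TakeoverRequested : Mode::SystemDriving;
        case Mode::TakeoverRequested:
            if (driver_took_over) return Mode::DriverDriving;
            // Level 3 expects the driver to respond; stopping is a fallback.
            return waited_s > 10.0 ? Mode::MinimalRiskStop : Mode::TakeoverRequested;
        default:
            return m;
    }
}

int main() {
    Mode m = Mode::SystemDriving;
    m = step(m, /*scenario_exited=*/true, false, 0.0);  // leaving the ODD
    m = step(m, true, /*driver_took_over=*/true, 3.0);  // driver responds
    std::printf("mode=%d\n", static_cast<int>(m));      // DriverDriving
}
```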
The presence of multiple sensors, such as ultrasonic sensors, radar sensors, surround-view cameras, driver monitoring, Lidar, an in-cabin camera, and forward/rear-view cameras, implies increased processing power to perform the additional tasks. Everything is integrated using PCIe and Gigabit Ethernet interfaces, which are supported natively inside the device. The safety control device commands the vehicle, with the Zynq UltraScale+ or Versal processor dealing exclusively with acceleration and compute functions. The application software serves the data aggregation, pre-processing, and distribution (DAPD) role: it collates and pre-processes the sensor data, marshals it into packets, and dispatches it either to the compute accelerators (including the AI engines) or to the host processor, where it is initially handled.
Figure 3: Level 3+ system
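The DAPD flow described above can be sketched as follows in C++: each pre-processed sensor payload is tagged with a header and routed either to an accelerator queue or to the host-processor queue. The packet layout and routing rule are illustrative assumptions:

```cpp
// Simplified DAPD sketch: tag each sensor payload with a header, then
// route it to an AI/compute accelerator queue or to the host-processor
// queue. Layout and routing rule are illustrative.
#include <cstdint>
#include <cstdio>
#include <queue>
#include <vector>

enum class Sensor : uint8_t { Camera, Radar, Lidar, Ultrasonic };

struct Packet {
    Sensor   source;
    uint64_t timestamp_ns;         // for cross-sensor alignment
    std::vector<uint8_t> payload;  // pre-processed sensor data
};

int main() {
    std::queue<Packet> to_accelerator;  // e.g., AI engines / PL kernels
    std::queue<Packet> to_processor;    // e.g., application software stack

    std::vector<Packet> incoming = {
        {Sensor::Camera,     1000, std::vector<uint8_t>(64)},
        {Sensor::Ultrasonic, 1001, std::vector<uint8_t>(8)},
    };
    for (auto& p : incoming) {
        // Dense perception data goes to the accelerators; low-rate data
        // is handled directly by the processor.
        if (p.source == Sensor::Camera || p.source == Sensor::Lidar)
            to_accelerator.push(std::move(p));
        else
            to_processor.push(std::move(p));
    }
    std::printf("accel=%zu cpu=%zu\n", to_accelerator.size(), to_processor.size());
}
```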
Level-4 ECU Architecture – All Level 4 vehicles can drive themselves, but infrastructure and legislation restrict them to a limited area. The crucial difference between Level 3 and Level 4 automation is that the latter can intervene if things go haywire or if there is a system failure. These cars therefore do not need human interaction most of the time, although a human still has the power to override the system manually.
This level can be implemented efficiently by building on the Level-2 design described earlier: the architecture is modular and scalable, and the IP created for surround-view processing ports directly to the Level-4 platform. The Xilinx tool suites facilitate customer IP reuse across device families and provide unmatched flexibility in functional performance.
Level-5 example – No human interaction is needed for Level 5 autonomous driving. The vehicle can steer, monitor road conditions, accelerate, and brake, even in a traffic jam. Level 5 automation, in essence, allows the driver to relax and ignore car functions. Artificial intelligence (AI) drives these vehicles, responding to real-world data points generated by the sensors. The driver of a Level 5 autonomous vehicle simply taps in the destination, and the car drives itself there. Current regulations forbid Level 5 vehicles on roads.
Autonomous vehicle development is now in a higher gear, pushing many nations to write regulations and laws for this technology. These rules spotlight safety, security, privacy, and liability for the coming wave of automobiles. Many autonomous vehicles are already under test in sandbox environments, with test scenarios soon to become reality.
Examples of Automotive Grade FPGAs and FPGA-based SoC Devices
- Automotive-grade XA Artix-7 FPGA
- Automotive-grade XA Spartan-6 FPGA
- Automotive-grade XA Zynq-7000 SoC
- Automotive-grade XA Zynq UltraScale+ MPSoC