In nearly every accident involving a self-driving vehicle, the cause was an unexpected circumstance that human programming had not catered for: the Google shuttle in Las Vegas, some Ubers in Arizona, the Tesla in Florida, and several others in California. If driverless vehicles are supposed to make roads safer, the bigger question is: what should these cars and trucks do to reduce accidents?
Engineers are struggling with the underlying safety challenge: even when autonomous vehicles do exactly what they are supposed to, the human drivers of nearby cars and trucks are flawed and capable of making errors.
There are two principal causes of crashes involving self-driving vehicles:
- The sensors fail to detect what is happening around the vehicle because of their quirks: cameras only work correctly with enough light, LiDAR struggles in fog, radar is not especially accurate, and GPS performs better with a clear view of the sky. Engineers are still working out the right mix of sensors to deploy; the solution is not simply adding more of them to self-driving vehicles, because both cost and computing power are limiting factors.
- The software mishandles unexpected situations when the vehicle faces conditions it is not programmed for. Every self-driving vehicle has to make hundreds of decisions each second, adjusting its path using incoming data from the environment, just as human drivers do (see the sketch after this list).
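To make the sensor-quirk point concrete, here is a minimal sketch (not any vendor's actual stack) of fusing distance estimates from several sensors while discounting each one under the conditions that degrade it. The sensor names, weights, and thresholds are illustrative assumptions.

```python
# Minimal sketch: fuse distance estimates from several sensors, discounting
# each one under the conditions that degrade it. All weights are assumptions.
from dataclasses import dataclass


@dataclass
class Reading:
    sensor: str          # "camera", "lidar", or "radar"
    distance_m: float    # estimated distance to the obstacle ahead
    confidence: float    # 0.0 to 1.0, as reported by the sensor driver


def degrade(reading: Reading, fog: bool, low_light: bool) -> float:
    """Adjust confidence to reflect each sensor's known quirks."""
    c = reading.confidence
    if reading.sensor == "camera" and low_light:
        c *= 0.3   # cameras need enough light
    if reading.sensor == "lidar" and fog:
        c *= 0.4   # LiDAR struggles in fog
    if reading.sensor == "radar":
        c *= 0.8   # radar copes with weather but is less precise
    return c


def fused_distance(readings: list[Reading], fog: bool, low_light: bool) -> float:
    """Confidence-weighted average of the available distance estimates."""
    weights = [degrade(r, fog, low_light) for r in readings]
    total = sum(weights)
    if total == 0:
        raise RuntimeError("no trustworthy sensor data; fall back to a safe stop")
    return sum(w * r.distance_m for w, r in zip(weights, readings)) / total


readings = [
    Reading("camera", 42.0, 0.90),
    Reading("lidar", 40.5, 0.95),
    Reading("radar", 44.0, 0.70),
]
print(f"fused distance: {fused_distance(readings, fog=True, low_light=False):.1f} m")
```

In foggy conditions the LiDAR estimate is trusted less and the radar estimate relatively more, which is the trade-off the bullet above describes: more sensors help only if the software knows when to believe each one.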
Engineers must combine the data from all the sensor inputs into an accurate computerized model of the vehicle's surroundings; the code can then interpret that representation to tell the car or truck how to navigate and interact with whatever is happening nearby. Put simply, the vehicle cannot make the right decisions if the system's perception is not accurate. And driving safely themselves will not be enough for autonomous vehicles to meet expectations of reducing crashes; they must become the ultimate defensive drivers, ready to react when other vehicles nearby behave unsafely.
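The perceive-model-plan loop described above can be sketched as follows. The classes, threshold, and headway rule are assumptions for exposition, not a real planner; the point is that the plan is only as good as the world model fed into it.

```python
# Illustrative perceive -> model -> plan loop; garbage in, garbage out.
from dataclasses import dataclass, field


@dataclass
class TrackedObject:
    kind: str            # e.g. "car", "pedestrian"
    distance_m: float    # distance ahead along our path
    in_our_lane: bool


@dataclass
class WorldModel:
    objects: list = field(default_factory=list)   # what perception believes is nearby


def plan(model: WorldModel, speed_mps: float) -> str:
    """Pick a maneuver from the world model using a crude 2.5 s headway rule."""
    stopping_distance = speed_mps * 2.5
    for obj in model.objects:
        if obj.in_our_lane and obj.distance_m < stopping_distance:
            return "brake"
    return "keep_lane"


model = WorldModel(objects=[TrackedObject("car", 30.0, in_our_lane=True)])
print(plan(model, speed_mps=20.0))   # "brake": 30 m is inside the 50 m headway
```

If perception mislabels that car as being in the other lane, the same planner happily keeps the lane, which is exactly why accurate perception comes before any clever decision logic.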
Some incidents with driverless vehicles show that the machines did not understand the situation well enough to choose the correct action; those vehicles executed the rules they had but were not ensuring their decisions were the safest ones, and that is a consequence of how most self-driving vehicles are being programmed and tested. Engineers need to code autonomous vehicles with instructions for how to behave when another vehicle does something out of the ordinary, and testers should treat other vehicles as adversaries when developing plans for extreme situations. The baseline for both is making driverless vehicles follow the rules of the road: obeying traffic lights and signs, knowing local traffic regulations, and behaving like a law-abiding human driver.
However, what should an autonomous vehicle do if a car is driving toward it in the wrong direction? Currently, self-driving vehicles stop completely and wait for the situation to change; no human driver would do this. A person would take evasive action: switching lanes without signaling, driving onto the shoulder, or speeding up to avoid a crash, even if that meant breaking a traffic rule. Engineers need to teach autonomous vehicles to understand not only the surroundings but the context. A truck approaching head-on means no harm if it is in the other lane, but the situation is entirely different if it is in the same lane.
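A hedged sketch of that "context, not just surroundings" point: the same oncoming truck demands different responses depending on which lane it occupies, and a passive full stop is only the last resort. The maneuver names and distance threshold are illustrative assumptions, not anyone's published policy.

```python
# Contextual response to an oncoming vehicle; values are illustrative only.
def respond_to_oncoming(same_lane: bool, distance_m: float,
                        shoulder_clear: bool, adjacent_lane_clear: bool) -> str:
    if not same_lane:
        return "keep_lane"             # normal traffic in the other lane
    if distance_m > 150:
        return "slow_and_monitor"      # time to see whether the driver corrects
    if adjacent_lane_clear:
        return "change_lane"           # evasive move, even without signaling
    if shoulder_clear:
        return "pull_onto_shoulder"    # technically a violation, but safer
    return "emergency_brake"           # last resort, not a passive wait


print(respond_to_oncoming(same_lane=True, distance_m=80,
                          shoulder_clear=True, adjacent_lane_clear=False))
```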
As carmakers get better at implementing self-driving technologies, they must rethink safety for autonomous vehicles and test them on complex tasks (like parking in a crowded lot or changing lanes in a work zone) to analyze and improve how they perform, not only on empty one-way or multi-lane highways in good weather. Those driverless tests might look a lot like human driving tests, but that is exactly what they should be if autonomous vehicles and human drivers are to coexist safely on the roads.
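One rough way to picture such a test, in the spirit of a human driving exam, is a scenario rubric: each entry names a complex task and the behavior that counts as a pass. The scenario names, expected behaviors, and controller interface below are hypothetical, just to show the shape of scenario-based grading.

```python
# Hypothetical scenario-based driving test for an autonomous controller.
SCENARIOS = [
    {"name": "crowded_parking_lot",    "expected": "park_without_contact"},
    {"name": "work_zone_lane_change",  "expected": "merge_with_safe_gap"},
    {"name": "wrong_way_oncoming_car", "expected": "evasive_maneuver"},
    {"name": "heavy_fog_highway",      "expected": "reduce_speed_and_track_lane"},
]


def run_scenario(vehicle_controller, scenario: dict) -> bool:
    """Run one scenario and compare the observed behavior against the rubric."""
    observed = vehicle_controller(scenario["name"])   # hypothetical interface
    return observed == scenario["expected"]


def grade(vehicle_controller) -> None:
    passed = sum(run_scenario(vehicle_controller, s) for s in SCENARIOS)
    print(f"passed {passed}/{len(SCENARIOS)} scenarios")


# Stand-in controller that only handles the parking case:
grade(lambda name: "park_without_contact" if name == "crowded_parking_lot" else "stop")
```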