Artificial Intelligence (AI) experts are revising their expectations about the availability of autonomous vehicles, warning that it might take several more years before self-driving systems can reliably avoid accidents. Full autonomy seems closer than ever as self-driving vehicles keep getting better, but the reality is that autonomous technologies are still struggling with the real world. The delay could push fully self-driving vehicles further out than we realize, perhaps out of reach for an entire generation. Despite this, car makers remain quite optimistic about autonomy: Tesla and Google have forecast self-driving vehicles available by the end of this year, Nutonomy plans driverless taxis in Singapore next year, GM intends to put fully autonomous cars into production, and Mobileye expects to deploy its Level 4 cars on the streets.
All of this is possible thanks to the tremendous progress AI and the tech industry have made through Deep Learning: extracting structured information from Big Data using layered, complex Machine Learning algorithms. However, Deep Learning requires massive amounts of training data to work properly, and that data needs to incorporate almost every scenario the system will face. AI engineers get creative about where the data comes from and how it is structured while figuring out the reach of the algorithm. A system can review all the images labeled "coyote" and decide whether a new picture belongs to the group; this task is called interpolation. But the same Deep Learning system might not be able to identify a coyote unless it has seen lots of coyote images, even if it has seen pictures of dogs and wolves and "knows" coyotes are somewhere in between; this process is called generalization, and it demands a different skill set.
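To make that distinction concrete, here is a minimal Python sketch. The two-dimensional feature vectors are made up (they stand in for real image embeddings), but the point holds: a classifier trained only on "dog" and "wolf" examples can interpolate within those labels, yet it has no way to output "coyote", because that label was never in its training set.

```python
# Minimal sketch (hypothetical feature vectors, not real images): a classifier
# trained only on "dog" and "wolf" examples can interpolate within those two
# classes, but it cannot emit "coyote" -- that label was never seen.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)

# Pretend embeddings: dogs cluster near 0.0, wolves near 1.0 on each axis.
dogs = rng.normal(loc=0.0, scale=0.15, size=(50, 2))
wolves = rng.normal(loc=1.0, scale=0.15, size=(50, 2))
X = np.vstack([dogs, wolves])
y = ["dog"] * 50 + ["wolf"] * 50

clf = KNeighborsClassifier(n_neighbors=5).fit(X, y)

# A coyote-like point sits "somewhere in between" dogs and wolves...
coyote_like = np.array([[0.5, 0.5]])

# ...but the model can only choose among the labels it was trained on.
print(clf.predict(coyote_like))        # prints 'dog' or 'wolf', never 'coyote'
print(clf.predict_proba(coyote_like))  # probabilities over {dog, wolf} only
```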
Scientists thought generalization skills could be improved with the right implementation of the algorithms, but they have noticed that Deep Learning systems do not fully achieve it. The recent study Why do deep convolutional networks generalize so poorly to small image transformations?, from Cornell University, found that conventional Deep Learning algorithms struggle to generalize and classify objects across video frames: small changes between pictures (hundreds of minor shifts, taken together) can completely change the judgment, something some researchers have exploited in adversarial examples. The result is that a polar bear can be labeled as a baboon, weasel, or mongoose across different frames of the same video due to minor shifts in the background.
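This failure mode is easy to probe yourself. The sketch below (assuming torch and torchvision are installed; the ResNet-50 choice and the "polar_bear.jpg" filename are illustrative, not taken from the study) translates an image a few pixels and compares a pretrained CNN's top-1 prediction before and after the shift:

```python
# Hedged sketch of the failure mode above: shift an image by a few pixels and
# compare a pretrained CNN's top-1 prediction before and after the shift.
# Assumes torch/torchvision are installed and "polar_bear.jpg" is a local
# file (both the filename and the model choice are illustrative).
import torch
import torchvision.transforms.functional as TF
from torchvision import models, transforms
from PIL import Image

model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT).eval()

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

img = Image.open("polar_bear.jpg")
original = preprocess(img).unsqueeze(0)
# Translate the image 3 pixels horizontally before preprocessing -- the kind
# of "small transformation" the paper shows can flip the predicted label.
shifted_img = TF.affine(img, angle=0, translate=(3, 0), scale=1.0, shear=0)
shifted = preprocess(shifted_img).unsqueeze(0)

with torch.no_grad():
    for name, batch in [("original", original), ("shifted", shifted)]:
        top1 = model(batch).softmax(dim=1).argmax(dim=1).item()
        print(name, "-> class index", top1)  # indices may differ across shifts
```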
AI engineers are leaning on image search, voice recognition, and other AI technologies to work around interpolation and generalization issues. Since no one has ever been able to automate cars and trucks at this level before, self-driving vehicles are a scientific experiment in which some questions have no answer yet, some have not even been asked, and most of the challenges are unknown. Right now, it is all about identifying objects and following rules while expecting new things to happen and still producing a safe and secure outcome. Autonomous vehicles confront unexpected scenarios as if for the first time in edge cases like the Model S fatal crash (the system was confused by the high ride height of a trailer and the bright reflection of the Sun) or the self-driving Uber that killed a woman (she was pushing her bicycle after emerging from an unauthorized crosswalk), and each incident improves their ability to generalize. All the data collected from deployments and accidents offers insights that scientists and researchers use to improve Deep Learning systems and keep self-driving vehicles getting better. Semi-autonomous products (like Tesla's Autopilot) are smart enough to self-drive and handle most situations, requesting human intervention if anything too unpredictable happens, a handoff pattern sketched below. The dilemma is that when something does go wrong, it will be hard to know whom to blame: the car or the driver.
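Here is a minimal sketch of that handoff pattern. All names are hypothetical and the confidence threshold is an illustrative value; no vendor's actual logic is implied.

```python
# Minimal sketch (hypothetical names throughout) of the handoff pattern
# semi-autonomous systems rely on: act on the model's decision while
# confidence is high, and request human takeover when the scene looks
# too unpredictable.
from dataclasses import dataclass

CONFIDENCE_THRESHOLD = 0.90  # illustrative value, not a real product setting

@dataclass
class Perception:
    label: str         # e.g. "pedestrian", "clear_road"
    confidence: float  # model's probability for that label, 0.0-1.0

def plan_action(p: Perception) -> str:
    """Follow the model when it is sure; otherwise hand control back."""
    if p.confidence < CONFIDENCE_THRESHOLD:
        return "ALERT_DRIVER_TAKE_OVER"
    if p.label == "pedestrian":
        return "BRAKE"
    return "CONTINUE"

print(plan_action(Perception("pedestrian", 0.97)))      # BRAKE
print(plan_action(Perception("clear_road", 0.99)))      # CONTINUE
print(plan_action(Perception("unknown_object", 0.41)))  # ALERT_DRIVER_TAKE_OVER
```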
Across the industry, companies are racing to collect more data to solve these problems, assuming the vehicles with the most miles will have the most reliable systems. The report Driving to Safety: How Many Miles of Driving Would It Take to Demonstrate Autonomous Vehicle Reliability? estimated that self-driving vehicles would have to drive 275 million miles without a fatality to prove they are as safe as human drivers, and humans are not even the best drivers: every year 1.3 million people die worldwide in road accidents. Since Deep Learning is now the primary way vehicles perceive objects and decide how to respond, improving the accident rate is harder than it looks.
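The 275-million-mile figure can be sanity-checked with the statistical "rule of three": if zero fatalities are observed over N miles, the 95% upper confidence bound on the fatality rate is roughly 3/N. The short computation below assumes the US human benchmark the RAND authors worked from, about 1.09 fatalities per 100 million vehicle miles:

```python
# Back-of-the-envelope check of the 275-million-mile figure cited above,
# using the "rule of three": with zero fatalities observed over N miles,
# the 95% upper confidence bound on the fatality rate is ~ -ln(0.05)/N.
import math

# Assumed human benchmark: ~1.09 fatalities per 100 million vehicle miles.
human_rate = 1.09 / 100_000_000  # fatalities per mile
confidence = 0.95

# Solve -ln(1 - confidence) / N = human_rate for N:
miles_needed = -math.log(1 - confidence) / human_rate
print(f"{miles_needed / 1e6:.0f} million miles")  # ~275 million miles
```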