The drone was designed by the organizers and used in the race by AI teams and human pilots. (Image Credit: TU Delft)
An AI-powered drone developed by TU Delft recently won a drone race at the A2RL Drone Championship in Abu Dhabi, beating 13 rival AI teams as well as human competitors. In the final round, flown on a challenging, winding course, the drone outpaced three former Drone Champions League (DCL) world champions. TU Delft's autonomous AI drone reached a top speed of 59 mph (about 95 km/h).
The winning drone featured a forward-facing camera for visual input, matching the setup of human FPV (First Person View) pilots. Students and researchers from the MAVLab, part of the Faculty of Aerospace Engineering, developed the racing AI. “I always wondered when AI would be able to compete with human drone racing pilots in real competitions,” said team leader Christophe De Wagter in a press release. “I’m extremely proud that we were able to make it happen already this year.”
TU Delft’s AI drone flying through a gate at the race track. (Image Credit: TU Delft)
The team leveraged technology developed by ESA's Advanced Concepts Team, known as Guidance and Control networks. Instead of relying on a conventional controller, a deep neural network sends control commands directly to the drone's motors.
Autonomous drones typically rely on advanced control algorithms that consume significant computing power. Those algorithms can't be implemented directly on a drone, which has limited computational resources and tight energy constraints. ESA addressed this by replacing them with compact neural networks that approximate the same control behavior while consuming far less power. Because this technology was intended for satellites, ESA could not flight-test it itself, so it partnered with the MAVLab, which applied it to its autonomous drones.
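To make the idea concrete, here is a minimal sketch of what such a neural-network controller looks like in principle: a small network maps the drone's sensed state straight to motor commands, cheaply enough to run on flight hardware every control cycle. All dimensions, names, and weights here are hypothetical placeholders, not the actual network the team flew.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: state = position, velocity, attitude, body
# rates (12 values); output = 4 motor commands. In practice the weights
# would come from training, not random initialization.
STATE_DIM, HIDDEN, MOTORS = 12, 32, 4
W1 = rng.standard_normal((HIDDEN, STATE_DIM)) * 0.1
b1 = np.zeros(HIDDEN)
W2 = rng.standard_normal((MOTORS, HIDDEN)) * 0.1
b2 = np.zeros(MOTORS)

def control_net(state):
    """Map the sensed state directly to motor commands.

    A tiny two-layer network like this costs only a few matrix
    multiplies per control step, unlike a heavyweight
    optimization-based controller.
    """
    h = np.tanh(W1 @ state + b1)                   # hidden features
    u = 1.0 / (1.0 + np.exp(-(W2 @ h + b2)))       # motor commands in [0, 1]
    return u

state = np.zeros(STATE_DIM)   # placeholder sensor reading
motors = control_net(state)
print(motors.shape)           # one command per motor: (4,)
```

The key design point is that the network replaces the entire control pipeline, so its inference cost, a handful of small matrix products, is the whole onboard compute budget for control.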
These deep neural networks are trained through reinforcement learning, a process of trial and error: strategies that work are rewarded, while unsuccessful ones are penalized. This lets the AI push closer to the drone's hardware limits. "We now train the deep neural networks with reinforcement learning, a form of learning by trial and error," says Christophe De Wagter. "This allows the drone to more closely approach the physical limits of the system. To get there, though, we had to redesign not only the training procedure for the control, but also how we can learn about the drone's dynamics from its own onboard sensory data."
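The reward-and-penalty loop described above can be sketched with a toy trial-and-error search. This is not the team's actual training setup (they use full reinforcement learning against a model of the drone's dynamics); the reward function and parameters below are invented stand-ins, just to show the keep-what-works, discard-what-fails pattern.

```python
import numpy as np

rng = np.random.default_rng(1)

def reward(params):
    # Toy stand-in for scoring a simulated flight: higher is better.
    # A real objective would reward lap time and passing gates.
    target = np.array([0.5, -0.3, 0.8])
    return -np.sum((params - target) ** 2)

# Trial-and-error loop: perturb the policy parameters, keep changes
# that score better (rewarded), discard ones that score worse (penalized).
params = np.zeros(3)
best = reward(params)
for _ in range(2000):
    candidate = params + rng.normal(scale=0.05, size=3)
    r = reward(candidate)
    if r > best:               # rewarded: keep the improvement
        params, best = candidate, r
    # penalized strategies are simply discarded

print(params)  # drifts toward the target behavior over many trials
```

Real reinforcement-learning algorithms are far more sample-efficient than this random search, but the core idea is the same: behavior is shaped by the reward signal alone, with no hand-written control law.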
Have a story tip? Message me at: http://twitter.com/Cabe_Atwell