Google's artificial intelligence company, DeepMind, has developed an AI that has managed to learn how to walk, run, jump, and climb without any prior guidance. The result is as impressive as it is goofy.
Just found this and thought I would share it with the community, as well as start a discussion on how we might get the AI to walk and move more like a human, rather than looking like a piece of jello!
Some ideas I've heard and thought of include: adding energy constraints so there's a penalty for all the flailing the AI currently does, and attaching the sensors/data feed the AI receives to the head of the avatar, so that stabilizing the head becomes necessary to get useful observations.
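For anyone curious what those two ideas might look like in practice, here's a rough sketch of them as reward-shaping terms. To be clear, this isn't DeepMind's actual reward function, and all the names and coefficients here are made up and would need tuning:

```python
import numpy as np

def shaped_reward(forward_velocity, joint_torques, head_angular_velocity,
                  energy_coeff=0.005, head_coeff=0.1):
    """Hypothetical shaped reward for a locomotion agent.

    forward_velocity: progress along the track (the usual base reward).
    joint_torques: per-joint actuator torques applied this timestep.
    head_angular_velocity: angular velocity of the head/sensor body.
    The coefficients are placeholder values, not anything from the paper.
    """
    # Base reward: move forward as fast as possible.
    reward = forward_velocity

    # Energy penalty: flailing costs torque, so large torques get discouraged.
    reward -= energy_coeff * np.sum(np.square(joint_torques))

    # Head-stability penalty: if the agent's "eyes" are mounted on the head,
    # shaking the head degrades its own observations, so penalize it directly.
    reward -= head_coeff * np.sum(np.square(head_angular_velocity))

    return reward
```

The intuition is that the agent still gets paid mainly for moving forward, but every wild arm swing now has a cost, and keeping the head steady is rewarded implicitly because that's where its sensors live.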
What are your ideas?!