During a talk at VentureBeat’s Transform 2019 conference last week, Unity Technologies VP of AI and machine learning Danny Lange argued that game engines are perfect for creating what he called “real” computer intelligence: self-learning systems capable of producing complex behaviors in a short amount of time. With a game engine (like the company’s own Unity engine), you can simulate the rules of the real world and test intelligent agents against that simulation.
“If you think about [it], the game engine has three dimensions, time, physics … it has everything you need to play around with the core elements that led to [human] intelligence,” said Lange.
The company has been training agents in various scenarios through its Unity ML-Agents Toolkit plugin. The agents acquire new skills and behaviors via reinforcement learning: in any given virtual environment, the only things they know are what’s right (they’re rewarded for accomplishing the task) and what’s wrong (they’re penalized). Other than that, they’re a blank slate.
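The trial-and-error loop Lange describes can be illustrated with a minimal, self-contained sketch: a tabular Q-learning agent in a tiny corridor world that starts as a blank slate (a table of zeros) and learns only from a reward at the goal and a penalty for stepping off the edge. The world, reward values, and hyperparameters here are illustrative assumptions, not anything from Unity’s toolkit:

```python
import random

N_STATES = 5          # positions 0..4; the goal sits at position 4
ACTIONS = [-1, +1]    # index 0 = step left, index 1 = step right
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1

Q = [[0.0, 0.0] for _ in range(N_STATES)]  # blank slate: no prior knowledge

def step(state, action):
    """Apply an action; return (next_state, reward, done)."""
    nxt = state + ACTIONS[action]
    if nxt < 0:
        return 0, -1.0, False            # penalty: stepped off the edge
    if nxt == N_STATES - 1:
        return nxt, 1.0, True            # reward: reached the goal
    return nxt, 0.0, False

def greedy(state):
    """Pick the best-known action, breaking ties at random."""
    best = max(Q[state])
    return random.choice([a for a in (0, 1) if Q[state][a] == best])

random.seed(0)
for _ in range(200):                     # episodes of trial and error
    state, done = 0, False
    while not done:
        action = random.randrange(2) if random.random() < EPSILON else greedy(state)
        nxt, reward, done = step(state, action)
        target = reward + (0.0 if done else GAMMA * max(Q[nxt]))
        Q[state][action] += ALPHA * (target - Q[state][action])
        state = nxt

# The learned greedy policy: "step right" in every non-terminal state.
policy = [max((0, 1), key=lambda a: Q[s][a]) for s in range(N_STATES - 1)]
print(policy)
```

The agent is never told that “right” leads to the goal; the preference emerges purely from which actions ended up rewarded or penalized, which is the whole point of Lange’s argument.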
One example Lange showed involved a chicken trying to cross a busy road. The goal for the agent was to grab the presents (the reward) scattered around the level without getting hit by the cars (the punishment). The AI struggled at first as it learned the rules of the game, but after six hours of repeated training, Lange said it became “superhuman,” deftly dodging cars while collecting over 100 gifts in a row.
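The reward scheme in that demo can be sketched as a per-event scoring function. The event names, magnitudes, and the small per-step cost below are illustrative assumptions, not values from Unity’s actual demo:

```python
def road_reward(event):
    """Hypothetical per-event rewards for the road-crossing agent."""
    rewards = {
        "collected_present": 1.0,   # the reward Lange describes
        "hit_by_car": -1.0,         # the penalty
        "step": -0.01,              # small time cost, commonly used to discourage dawdling
    }
    return rewards.get(event, 0.0)

# Scoring one short hypothetical episode: three presents, then a collision.
events = ["step", "collected_present", "step", "collected_present",
          "collected_present", "hit_by_car"]
episode_return = sum(road_reward(e) for e in events)
print(episode_return)
```

A reinforcement learner maximizes the sum of these signals over an episode, which is how six hours of repeated play can shape car-dodging, present-collecting behavior without any hand-written rules.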
In another scenario, the agent had a spider-like avatar made up of eight joints and four legs. The AI had to figure out how to use and control those body parts so that it could move forward. The result is a bit janky (the spiders hop around more than they walk), but in the future, this kind of accelerated learning could save game developers time when creating non-player characters (NPCs).
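One way to picture the control problem: at every simulation step, the trained policy emits one continuous value per joint, and the engine applies it as a small rotation or torque. The oscillating stand-in policy and the function names below are illustrative assumptions, not the ML-Agents API (which is C# on the Unity side):

```python
import math

N_JOINTS = 8  # the spider avatar's eight controllable joints

def policy(t):
    """Stand-in for a trained network: a phase-shifted oscillation per joint,
    the kind of rhythmic gait pattern that tends to emerge from trial and error."""
    return [math.sin(0.2 * t + j * math.pi / 4) for j in range(N_JOINTS)]

def apply_actions(joint_angles, actions, max_delta=0.1):
    """Clamp each action to [-1, 1] and nudge the matching joint toward it."""
    return [angle + max_delta * max(-1.0, min(1.0, a))
            for angle, a in zip(joint_angles, actions)]

angles = [0.0] * N_JOINTS
for t in range(50):                 # 50 simulation steps
    angles = apply_actions(angles, policy(t))
```

Hand-coding the timing and magnitude of those eight outputs is exactly the programming chore Lange says trial-and-error learning replaces.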
“Imagine the programming here that I’d need to write — some Java, C#, C++ programming, Python, you name it — that tells which joint to move, when, and how much,” said Lange. “Or I can just let the spider wiggle around for an hour, and through trial and error, it figures out how to move four legs and eight joints in some pattern from left to right.”
Lange and his team took that idea a step further with Puppo, an agent in the shape of an adorable corgi. Using reinforcement learning and physics-based movement, Puppo learned how to walk, run, jump, and fetch a stick. The researchers even built a simple game (where you flick the stick with your mouse) to show how efficient the dog is at retrieving the stick.
In a different demo, Lange showed what happens when you put dozens of individually trained Puppos together. Their goal was to chase after a bowl full of bones on a track field. As they ran toward the bowl (which was constantly moving along the track), the dogs became competitive, pushing each other over and making their own shortcuts by running on the grass.
Earlier this year, Unity partnered with Google to create a machine learning test with Obstacle Tower, a video game that only AI agents can play. It’s made up of 100 levels that challenge an agent’s ability to navigate obstacles, including puzzles, complicated layouts, and dangerous enemies. Unity is currently running a contest to see which AI can make it the farthest (Lange said the leading contestant could only reach level 19).
With Obstacle Tower and other projects, the company is trying to prove that when combined with game engines, reinforcement learning can be a powerful method for making sophisticated AI. After all, Lange said, it’s the same process all intelligent life on our planet uses to survive.
“That’s how kids operate. That’s how we operate. That’s how animals operate. … Through the learning process, you move from not having a clue [about something] to actually starting to understand [it],” he said.