When most people talk about artificial intelligence in video games, they're referring to computer-controlled enemies and teammates. In a first-person shooter (FPS), it would be incredibly boring if every enemy just ran in a straight line right toward you. To make the game challenging, programmers set up rules that make the non-player characters (NPCs) act more like a human would. But in virtually all games, that isn't anything resembling actual artificial intelligence. Those same games can, however, be used to train machine learning systems to perform in the real world.
Artificial intelligence is a very broad term that’s difficult to apply an objective definition to. From a science fiction point of view, an artificial intelligence would be any entity that could think in the same way a human does, but which didn’t evolve naturally. That would be what is generally referred to as a “strong artificial intelligence.” Nothing like that exists now, and nobody can say for sure if it ever will. Even from a non-technical standpoint, there is a lot of philosophical debate about whether such a thing is even possible given our understanding—and lack of understanding—about the nature of consciousness and sentience.
What we do have today is “weak artificial intelligence.” These neural network-based machine learning models learn to perform a specific task from human-selected training sets. Image recognition, for example, relies on training sets containing thousands of labeled images. If you want a weak AI to learn to identify a dog, you show it tens of thousands of images of dogs, and then tens of thousands of images of things that aren’t dogs. Weak AI is hardly what most people picture when they hear the term “artificial intelligence,” but these systems are capable of impressive feats within their specific area of training.
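That train-on-labeled-examples loop can be sketched in miniature. The toy below trains a single perceptron, about the simplest possible neural unit, to separate two made-up clusters of labeled points; the cluster positions, learning rate, and “dog” labels are all illustrative, not taken from any real image-recognition system.

```python
import random

# Toy sketch of supervised learning: a single perceptron learns to
# separate two labeled classes of 2-D points. Real image recognition
# uses deep networks and thousands of labeled photos, but the core idea
# is the same: adjust weights to reduce errors on a labeled training set.
random.seed(0)

# Hypothetical training set: label 1 ("dog") clusters near (2, 2),
# label 0 ("not a dog") clusters near (-2, -2).
data = [((random.gauss(2, 0.5), random.gauss(2, 0.5)), 1) for _ in range(50)]
data += [((random.gauss(-2, 0.5), random.gauss(-2, 0.5)), 0) for _ in range(50)]

w = [0.0, 0.0]   # weights, one per input feature
b = 0.0          # bias
lr = 0.1         # learning rate (illustrative value)

def predict(x):
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

for _ in range(10):                      # a few passes over the training set
    for x, label in data:
        error = label - predict(x)       # 0 when correct, +/-1 when wrong
        w[0] += lr * error * x[0]
        w[1] += lr * error * x[1]
        b += lr * error

accuracy = sum(predict(x) == label for x, label in data) / len(data)
print(f"training accuracy: {accuracy:.2f}")
```

After training, the model classifies points it has never seen, as long as they resemble the training data; show it something far outside that distribution and, like any weak AI, it fails.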
When you play a game like StarCraft II against the built-in computer-controlled opponent, you’re not actually competing against an artificial intelligence. You’re only battling a series of if-then statements that the game’s programmers devised to sustain the illusion of intelligence. In most video games today, those computer-controlled opponents aren’t even bound by the same rules you are. For instance, if you hide in an FPS game, the “AI” opponent might find you immediately, even though a human player would have no way to know where you are.
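As a rough illustration of that hand-written if-then logic, here is a hypothetical NPC decision routine. The rules, names, and thresholds are invented for the example, not taken from any actual game; the point is that nothing is learned, and the behavior is a fixed decision list.

```python
# Hypothetical sketch of the if-then logic behind a typical game "AI".
# Each rule is hand-authored by a programmer; the NPC never learns.
def npc_decide(npc_health, player_visible, player_distance):
    if npc_health < 25:
        return "retreat to cover"
    if not player_visible:
        return "patrol"
    if player_distance < 5:
        return "melee attack"
    if player_distance < 30:
        return "shoot"
    return "advance toward player"

# A healthy NPC that can see the player at medium range opens fire.
print(npc_decide(npc_health=80, player_visible=True, player_distance=12))
```

Stack enough of these rules together and the NPC looks lifelike from the player's seat, but it is still just a lookup through conditions the developers anticipated.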
That’s because no artificial intelligence is actually involved; the computer-controlled opponent knows where you are because it’s part of the game, too. That enemy is no more a distinct entity than the desk you’re crouching behind, the wall behind you, or the bullets you shoot. If the developers wanted to, they could make enemies always know exactly where you are and land a perfect headshot every time. But that’s not fun, so most of the work of programming a computer-controlled opponent goes into deliberately limiting it so that it acts more human-like.
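One common way to impose those limits is to start from the engine's perfect knowledge of the player's position and deliberately degrade it. The sketch below adds random aim spread scaled by a hypothetical skill parameter; the spread values are illustrative, not from any real game.

```python
import random

# Sketch: the engine always knows the player's exact position, so a
# "fair"-feeling enemy is built by degrading that perfect knowledge.
# The skill parameter and 3-metre maximum spread are made-up values.
random.seed(42)

def enemy_shot(player_pos, skill=0.5):
    """Return the point the enemy actually aims at.

    skill=1.0 is a perfect (and unfun) aimbot; lower skill adds
    random spread so the enemy misses the way a human sometimes would.
    """
    max_spread = 3.0 * (1.0 - skill)   # metres of aim error at skill 0
    return (
        player_pos[0] + random.uniform(-max_spread, max_spread),
        player_pos[1] + random.uniform(-max_spread, max_spread),
    )

perfect = enemy_shot((10.0, 20.0), skill=1.0)   # lands exactly on target
sloppy = enemy_shot((10.0, 20.0), skill=0.2)    # lands near, usually off target
print(perfect)
print(sloppy)
```

Real games layer on reaction delays, limited turn speeds, and scripted vision cones in the same spirit, but the principle is identical: the challenge is manufactured by subtracting from omniscience, not by adding intelligence.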
However, that doesn’t mean an artificial intelligence can’t play a game as a distinct entity, the way a human does. The AI has to run separately from the game so that it has no unfair advantage, but it has been done multiple times. The most famous example came in 1997, when IBM’s Deep Blue beat chess grandmaster and world champion Garry Kasparov. Then, in 2016, DeepMind’s AlphaGo bested Go champion Lee Sedol.
One particularly impressive example came in December 2018, when DeepMind’s AlphaStar AI defeated professional StarCraft II player Dario Wünsch 5-0. That result stands out because, unlike chess and Go, StarCraft II is a complex real-time strategy game in which you can’t always see what your opponent is doing. AlphaStar had to rely on its training, plus new information gained during the matches, to overcome Wünsch’s defenses.
This isn’t just yet another example of how artificial intelligence is better than humans at specific tasks. Playing games can actually make AI better at dealing with the real world. When a baby plays with toys, they’re not just being entertained. They’re learning how their own motor functions work, and how they can use those to interact with the world. In the same way, an artificial intelligence can learn about the world through gaming.
For example, as self-driving cars start to become practical, they’ll need a way to navigate our cities. Not just which streets to take, but also how to maneuver around obstacles and deal with unforeseen circumstances. A stalled car on a single-lane, one-way street could cripple a self-driving car. But open world driving video games could help an artificial intelligence learn how to navigate through complex, changing environments. Games like Minecraft could teach an AI about motivation, prioritizing tasks, and setting goals.
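A tiny sketch of that re-routing problem: breadth-first search on a grid finds the shortest way around a blocked cell, or reports that no route exists. Real self-driving navigation stacks are far more sophisticated; the grid “city” and stalled-car obstacle here are invented for illustration.

```python
from collections import deque

# Toy navigation sketch: breadth-first search routes around an obstacle
# on a grid "street map". 0 = open road, 1 = blocked (e.g. a stalled car).
def shortest_path(grid, start, goal):
    """Return the length of the shortest path, or None if no route exists."""
    rows, cols = len(grid), len(grid[0])
    queue = deque([(start, 0)])
    seen = {start}
    while queue:
        (r, c), dist = queue.popleft()
        if (r, c) == goal:
            return dist
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] == 0 and (nr, nc) not in seen):
                seen.add((nr, nc))
                queue.append(((nr, nc), dist + 1))
    return None  # the blockage cuts off every route

city = [
    [0, 0, 0],
    [0, 1, 0],   # a stalled car in the middle of town
    [0, 0, 0],
]
print(shortest_path(city, (0, 0), (2, 2)))  # detours around the blockage
```

The hard part for a real vehicle isn't the search itself but building and updating the map from noisy sensors in real time, which is exactly the kind of messy, changing environment open-world games can simulate cheaply.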
Researchers spend a great deal of time developing training sets for their machine learning systems, and simulations in which to test them. But we already have hundreds of simulation sandboxes in the form of video games. Researchers can take advantage of those to help train and test artificial intelligence. If those AIs can play alongside humans, the training becomes even more effective. The next time you accuse an online opponent of being a bot, it may actually be true.