
AI uses less than two minutes of videogame footage to recreate game engine

September 11, 2017

Game studios and enthusiasts may soon have a new tool at their disposal to speed up game development and experiment with different styles of play. Georgia Institute of Technology researchers have developed a new approach that uses artificial intelligence to learn a complete game engine, the basic software of a game that governs everything from character movement to rendering graphics.

Their AI system watches less than two minutes of gameplay video and then builds its own model of how the game operates by studying the frames and making predictions of future events, such as what path a character will choose or how enemies might react.

To get their AI agent to create an accurate predictive model that could account for all the physics of a 2D platform-style game, the team trained the AI on a single "speedrunner" video, where a player heads straight for the goal. This made "the training problem for the AI as difficult as possible."

Their current work uses Super Mario Bros., and they have begun replicating the experiments with Mega Man and Sonic the Hedgehog. The same team previously used AI and Super Mario Bros. gameplay video to create unique game level designs.

The researchers found that their learned engine predicted video frames significantly more similar to those in the original game than a convolutional neural network did on the same test. This gave them an accurate, general model of a game built from video footage alone.

"Our AI creates the predictive model without ever accessing the game's code, and makes significantly more accurate future event predictions than those of convolutional neural networks," says Matthew Guzdial, lead researcher and Ph.D. student in computer science. "A single video won't produce a perfect clone of the game engine, but by training the AI on just a few additional videos you get something that's pretty close."

They next tested how well the cloned engine would perform in actual gameplay. They employed a second AI to play the game level and verify that the game's protagonist wouldn't fall through solid floors and would take damage when hit by an enemy.

The results: the AI playing with the cloned engine proved indistinguishable from an AI playing with the original game engine.

"The technique relies on a relatively simple search algorithm that searches through possible sets of rules that can best predict a set of frame transitions," says Mark Riedl, associate professor of Interactive Computing and co-investigator on the project. "To our knowledge this represents the first AI technique to learn a game engine and simulate a game world with gameplay footage."
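The search idea Riedl describes can be illustrated with a toy sketch: given observed frame transitions (a frame and the frame that followed it), search over candidate rule sets and keep the set that best predicts those transitions. Everything below is an illustrative assumption, not the authors' actual implementation: frames are reduced to (x, y) character positions, the candidate "rules" are invented, and the search is a brute-force enumeration rather than the paper's algorithm.

```python
# Hypothetical sketch of rule-set search: frames are toy (x, y) positions,
# rules map a frame to a predicted next frame, and we pick the rule set
# that makes the fewest prediction errors on observed transitions.
from itertools import combinations

RULES = {
    "move_right": lambda f: (f[0] + 1, f[1]),
    "move_left":  lambda f: (f[0] - 1, f[1]),
    "fall":       lambda f: (f[0], f[1] - 1),
    "jump":       lambda f: (f[0], f[1] + 1),
}

def prediction_error(rule_names, transitions):
    """Count transitions that no rule in the set predicts correctly."""
    errors = 0
    for before, after in transitions:
        if not any(RULES[r](before) == after for r in rule_names):
            errors += 1
    return errors

def search_rule_set(transitions, max_rules=2):
    """Exhaustively search small rule sets for the best predictor."""
    best, best_err = (), float("inf")
    for k in range(1, max_rules + 1):
        for combo in combinations(RULES, k):
            err = prediction_error(combo, transitions)
            if err < best_err:
                best, best_err = combo, err
    return best, best_err

# Transitions from a toy "speedrun": two steps right, then a fall.
observed = [((0, 5), (1, 5)), ((1, 5), (2, 5)), ((2, 5), (2, 4))]
rules, err = search_rule_set(observed)
```

Here the search recovers the rule set {move_right, fall}, which explains every observed transition. The real system searches a far richer rule space grounded in sprites parsed from raw video frames, but the fitness criterion is the same in spirit: how well a candidate rule set predicts what the next frame will look like.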

The current cloning technique works well with games where much of the action happens on-screen. Guzdial says Clash of Clans or other games with action taking place off-screen might prove difficult for their system.

"Intelligent agents need to be able to make predictions about their environment if they are to deliver on the promise of advancing different technology applications," he says. "Our model can be used for a variety of tasks in training or education scenarios, and we think it will scale to many types of games as we move forward."
The research was presented at the International Joint Conference on Artificial Intelligence, Aug. 19-25, in Melbourne, Australia. The paper, "Game Engine Learning from Video," was authored by Matthew Guzdial, Boyang Li, and Mark Riedl.

Georgia Institute of Technology

