But what’s second nature for us can prove complicated for computers. Will the glass break or bounce? Will the baby laugh or cry?
A team of researchers from the Massachusetts Institute of Technology (MIT) Computer Science and Artificial Intelligence Laboratory (CSAIL) has developed a system that can predict what happens next in a still image and generate a short video depicting it. The system needs work, as its current productions are simple, short, and unassuming, but it stands out for its unique approach and accuracy.
“Instead of building up scenes frame by frame, we focus on processing the entire scene at once,” Carl Vondrick, a PhD student at MIT CSAIL and lead author of the paper, told Digital Trends.
Alternative computer vision models that attempt the same task use recurrent networks to generate predictive videos on a frame-by-frame basis. The system developed by Vondrick and his team uses “convolutional networks” to generate all 32 frames simultaneously.
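To make the one-shot idea concrete, here is a minimal sketch in PyTorch of a generator that emits every frame of a clip in a single forward pass. This is not the team’s published code; the class name, layer widths, and latent input are all illustrative assumptions.

```python
# Illustrative sketch only -- not the CSAIL team's code. A stack of 3D
# transposed convolutions upsamples one latent vector across time, height,
# and width at once, so all 32 frames come out of a single forward pass
# instead of a frame-by-frame recurrent loop.
import torch
import torch.nn as nn

class OneShotVideoGenerator(nn.Module):
    def __init__(self, latent_dim=100):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose3d(latent_dim, 512, kernel_size=(2, 4, 4)),   # -> (512,  2,  4,  4)
            nn.ReLU(),
            nn.ConvTranspose3d(512, 256, 4, stride=2, padding=1),         # -> (256,  4,  8,  8)
            nn.ReLU(),
            nn.ConvTranspose3d(256, 128, 4, stride=2, padding=1),         # -> (128,  8, 16, 16)
            nn.ReLU(),
            nn.ConvTranspose3d(128, 64, 4, stride=2, padding=1),          # -> (64,  16, 32, 32)
            nn.ReLU(),
            nn.ConvTranspose3d(64, 3, 4, stride=2, padding=1),            # -> (3,   32, 64, 64)
            nn.Tanh(),
        )

    def forward(self, z):
        # z: (batch, latent_dim) -> video: (batch, channels, frames, H, W)
        return self.net(z.view(z.size(0), -1, 1, 1, 1))

video = OneShotVideoGenerator()(torch.randn(1, 100))
print(video.shape)  # torch.Size([1, 3, 32, 64, 64]) -- 32 frames, generated together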
“The existing approach of going frame by frame has a certain logic,” Vondrick said, “but it also creates a massive margin for error. It’s sort of like a big game of ‘Telephone,’ which means that the message most likely will fall apart by the time you go around the whole room.”
“In contrast, our approach is the ‘Telephone’ equivalent of speaking to everyone in the room at once,” he added.
The researchers trained the system on two million videos, about a year of footage in total, and, in order to generate all frames at once, taught it to distinguish foregrounds from backgrounds and moving objects from stationary ones. They then showed the system still images and had it generate short clips of subsequent events.
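One way to picture the foreground/background split is as a per-pixel blend: a moving foreground is painted over a single static background wherever a mask says so. The helper below, `compose_video`, and its tensor shapes are assumptions for illustration, not the paper’s actual architecture.

```python
# Hedged sketch of the foreground/background decomposition the article
# describes; shapes and the function itself are illustrative assumptions.
import torch

def compose_video(foreground, background, mask):
    """Blend a moving foreground over a static background.

    foreground: (batch, 3, frames, H, W)  -- moving objects
    background: (batch, 3, 1, H, W)       -- one static image
    mask:       (batch, 1, frames, H, W)  -- per-pixel blend weights in [0, 1]
    """
    # Broadcasting repeats the single background image across every frame,
    # so only the masked foreground pixels actually change over time.
    return mask * foreground + (1 - mask) * background

frames, h, w = 32, 64, 64
fg = torch.rand(1, 3, frames, h, w)
bg = torch.rand(1, 3, 1, h, w)
m = torch.rand(1, 1, frames, h, w)
clip = compose_video(fg, bg, m)
print(clip.shape)  # torch.Size([1, 3, 32, 64, 64])
```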
Once the system could generate video clips, Vondrick and his team set out to refine it through a method called adversarial learning.
“The idea behind adversarial learning is to have two neural networks compete against each other,” Vondrick said. “One network tries to decide what is real versus fake, and another tries to generate something that fools the first network.”
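Vondrick’s description maps onto a standard generator-versus-discriminator training loop. The toy sketch below uses stand-in fully connected networks and random tensors in place of real video clips; it illustrates the competition itself, not the team’s implementation.

```python
# Minimal adversarial-learning sketch: the discriminator learns to label
# real samples 1 and fakes 0, while the generator learns to fool it.
# Networks, sizes, and "data" here are placeholders.
import torch
import torch.nn as nn

latent_dim = 100
generator = nn.Sequential(
    nn.Linear(latent_dim, 256), nn.ReLU(), nn.Linear(256, 784), nn.Tanh())
discriminator = nn.Sequential(
    nn.Linear(784, 256), nn.LeakyReLU(0.2), nn.Linear(256, 1))  # outputs logits
g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
loss_fn = nn.BCEWithLogitsLoss()

real_clips = torch.rand(16, 784)  # stand-in for a batch of real videos
ones, zeros = torch.ones(16, 1), torch.zeros(16, 1)

for step in range(100):
    # 1) Discriminator step: push real toward 1, generated toward 0.
    fakes = generator(torch.randn(16, latent_dim))
    d_loss = (loss_fn(discriminator(real_clips), ones)
              + loss_fn(discriminator(fakes.detach()), zeros))
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # 2) Generator step: produce fakes the discriminator labels as real.
    fakes = generator(torch.randn(16, latent_dim))
    g_loss = loss_fn(discriminator(fakes), ones)
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()
```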
Through this computer competition, the generative algorithm improved the accuracy of its video clips until it could fool human subjects 20 percent more often than a baseline model, according to a paper that will be presented next week at the Neural Information Processing Systems conference in Barcelona.
But with accuracy comes complexity, and with complexity come obstacles.
The current system’s videos are short, a mere one and a half seconds long. Much longer than that, and the clips would risk losing coherence. “The key challenge is being able to reliably track the relationships between all of the objects in a scene … to make sure that the video that’s being generated still makes sense five or ten seconds later,” Vondrick said. To produce videos that are both accurate and long, the system may need human input to help it grasp the context and connections between seemingly unrelated actions, such as jogging and showering.
Vondrick’s ambitious end goal is to develop an algorithm that can create believable feature-length films, though he admits that is still some years off. In the near term, though, he thinks this system could improve other AI systems by helping them adapt to unpredictable environments.