
A.I. creates some of the most realistic computer-generated images of people yet

Progressive Growing of GANs for Improved Quality, Stability, and Variation
Sure, artificial intelligence apps can turn your photos into paintings, but now computers can generate their own photographs — of people (and even things) that don’t actually exist. Nvidia recently created a generative adversarial network (GAN) that can generate high-resolution images from nothing but a database of training images. The company shared a research paper detailing the technique on Friday, October 27.

Nvidia’s proposed method relies on a generative adversarial network, or GAN. It consists of two neural networks built on techniques from unsupervised machine learning, in which a system “learns” through trial and error without human-labeled answers, such as sorting images of cats and dogs into two groups on its own.
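To illustrate what unsupervised grouping means, here is a minimal sketch, assuming nothing from Nvidia's work: a toy k-means routine (invented for illustration) sorts unlabeled 2-D points into two groups on its own, the way an unsupervised system might separate cat and dog image features without being told which is which.

```python
import numpy as np

def kmeans(points, k, iters=20, seed=0):
    """Toy k-means: group unlabeled points into k clusters
    without any human-provided labels (unsupervised learning)."""
    rng = np.random.default_rng(seed)
    centers = points[rng.choice(len(points), k, replace=False)]
    for _ in range(iters):
        # Assign each point to its nearest center ...
        labels = np.argmin(
            np.linalg.norm(points[:, None] - centers[None, :], axis=2), axis=1)
        # ... then move each center to the mean of its assigned points
        # (keeping the old center if a cluster ends up empty).
        centers = np.array([points[labels == i].mean(axis=0)
                            if np.any(labels == i) else centers[i]
                            for i in range(k)])
    return labels, centers

# Two well-separated blobs stand in for "cat" and "dog" image features.
rng = np.random.default_rng(1)
blob_a = rng.normal(loc=0.0, scale=0.5, size=(50, 2))
blob_b = rng.normal(loc=5.0, scale=0.5, size=(50, 2))
data = np.vstack([blob_a, blob_b])

labels, _ = kmeans(data, k=2)
# Each blob should end up entirely in one cluster.
print(len(set(labels[:50])), len(set(labels[50:])))
```

No labels are ever provided; the grouping emerges purely from the structure of the data, which is the sense of "unsupervised" used above.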


In this case, one neural network is called the “generator” while the second is the “discriminator.” The generator network creates an image meant to be indistinguishable from the training samples. The discriminator network then compares that generated image to real samples and provides feedback. Over time, the generator gets better at rendering and the discriminator gets better at scrutinizing; the end goal is for the generator to produce images that “fool” the discriminator.
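The alternating feedback loop described above can be sketched in a few lines. This is a hypothetical one-dimensional toy, not Nvidia's network: the "generator" is a single linear map from noise to a sample, the "discriminator" is a logistic scorer, and the target "real data" is a Gaussian. All parameters and learning rates are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda u: 1.0 / (1.0 + np.exp(-u))

# Toy stand-ins for the two networks: the "generator" maps noise z to a
# sample a*z + b; the "discriminator" scores a sample x as sigmoid(w*x + c).
a, b = 1.0, 0.0        # generator parameters
w, c = 0.1, 0.0        # discriminator parameters
lr = 0.05
real_mean, real_std = 3.0, 0.5   # the distribution the generator must mimic

for step in range(2000):
    z = rng.standard_normal(64)
    fake = a * z + b
    real = rng.normal(real_mean, real_std, 64)

    # Discriminator step: push D(real) toward 1 and D(fake) toward 0.
    d_real, d_fake = sigmoid(w * real + c), sigmoid(w * fake + c)
    w += lr * np.mean((1 - d_real) * real - d_fake * fake)
    c += lr * np.mean((1 - d_real) - d_fake)

    # Generator step: adjust (a, b) so the discriminator scores fakes higher,
    # i.e. try to "fool" the discriminator.
    d_fake = sigmoid(w * fake + c)
    a += lr * np.mean((1 - d_fake) * w * z)
    b += lr * np.mean((1 - d_fake) * w)

# The mean of the fake samples (= b, since z has mean 0) drifts toward
# the real mean as the two networks push against each other.
print(round(b, 2))
```

Neither network is ever told what a "good" sample looks like directly; each only sees the other's output, which is the adversarial dynamic the article describes.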

Nvidia wanted to expand on earlier image-generation attempts, including efforts by Google, by both creating higher-quality images and generating a wider variety of them in less time. To do that, the researchers built a progressive system: because the network keeps learning as more data is fed into it, the group added more difficult rendering tasks as the system improved.

The program started by generating low-resolution images of people who don’t actually exist, drawing on a database made up entirely of celebrity photos. As the system improved, the researchers added more layers to the network, introducing finer and finer detail until the low-resolution images became 1080p HD photographs. The result is high-resolution, detailed images of “celebrities” that don’t actually exist in real life.
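The layer-by-layer growth can be sketched as a resolution schedule: each training phase adds a layer that doubles the output resolution and contributes finer detail. The nearest-neighbour upsampling and added noise below are illustrative stand-ins, not the paper's actual architecture.

```python
import numpy as np

def upsample2x(img):
    """Nearest-neighbour 2x upsampling, standing in for a new
    generator layer that doubles the output resolution."""
    return img.repeat(2, axis=0).repeat(2, axis=1)

def progressive_schedule(start=4, final=1024):
    """Each training phase doubles resolution until the final size."""
    res, phases = start, [start]
    while res < final:
        res *= 2
        phases.append(res)
    return phases

rng = np.random.default_rng(0)
img = rng.random((4, 4))          # coarse, low-resolution starting output
for res in progressive_schedule()[1:]:
    img = upsample2x(img)         # a new layer doubles the resolution
    img += 0.01 * rng.standard_normal(img.shape)  # and adds fine detail

print(progressive_schedule())     # [4, 8, 16, 32, 64, 128, 256, 512, 1024]
print(img.shape)                  # (1024, 1024)
```

Starting coarse and growing the networks phase by phase is what lets training stay stable while the images sharpen, rather than asking the system to produce high-resolution output from the start.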

Along with creating computer-generated images with higher resolution — and more impressive detail — the group worked to increase the variation of the generated graphics, surpassing records set by earlier unsupervised approaches. The research also included new ways of making sure the generator and discriminator networks don’t slip into any “unhealthy competition.” The group also improved the original dataset of celebrity images that it started out with.

Along with generating images of celebrities, the group also ran the algorithms on datasets of images of objects, such as a couch, a horse, and a bus.

“While the quality of our results is generally high compared to earlier work on GANs, and the training is stable in large resolutions, there is a long way to true photorealism,” the paper concludes. “Semantic sensibility and understanding dataset-dependent constraints, such as certain objects being straight rather than curved, leaves a lot to be desired.”

While there are still some shortcomings, the group said that photorealism with computer-generated images “may be within reach,” particularly in generating images of fake celebrities.

Hillary K. Grigonis
Hillary never planned on becoming a photographer—and then she was handed a camera at her first writing job and she's been…
A.I. teaching assistants could help fill the gaps created by virtual classrooms

There didn’t seem to be anything strange about the new teaching assistant, Jill Watson, who messaged students about assignments and due dates in professor Ashok Goel’s artificial intelligence class at the Georgia Institute of Technology. Her responses were brief but informative, and it wasn’t until the semester ended that the students learned Jill wasn’t actually a “she” at all, let alone a human being. Jill was a chatbot, built by Goel to help lighten the load on his eight other human TAs.

"We thought that if an A.I. TA would automatically answer routine questions that typically have crisp answers, then the (human) teaching staff could engage the students on the more open-ended questions," Goel told Digital Trends. "It is only later that we became motivated by the goal of building human-like A.I. TAs so that the students cannot easily tell the difference between human and A.I. TAs. Now we are interested in building A.I. TAs that enhance student engagement, retention, performance, and learning."

Groundbreaking A.I. brain implant translates thoughts into spoken words

Researchers from the University of California, San Francisco, have developed a brain implant which uses deep-learning artificial intelligence to transform thoughts into complete sentences. The technology could one day be used to help restore speech in patients who are unable to speak due to paralysis.

“The algorithm is a special kind of artificial neural network, inspired by work in machine translation,” Joseph Makin, one of the researchers involved in the project, told Digital Trends. “Their problem, like ours, is to transform a sequence of arbitrary length into a sequence of arbitrary length.”

Smart A.I. bodysuits could reveal when babies are developing mobility problems

In sci-fi shows like Star Trek, people wear jumpsuits because, well, it’s the future. In real life, babies could soon wear special high-tech jumpsuits designed to help doctors monitor their movements and look for any possible mobility issues that are developing.

The smart jumpsuit in question has been developed by medical and A.I. researchers at Finland’s Helsinki Children’s Hospital. In a recent demonstration, they fitted 22 babies, some as young as four months, with jumpsuits equipped with motion sensors. These enabled the suits to register the acceleration and positional data of wearers and relay it to a nearby smartphone. A neural network was then trained to recognize posture and movement by comparing data from the suits with video shot by the researchers.
