Programmer Animesh Karnewar wanted to know how characters described in books would appear in reality, so he turned to artificial intelligence to see if it could properly render these fictional people. Called T2F, the research project uses a generative adversarial network (GAN) to encode text and synthesize facial images.
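The core idea, reduced to a minimal sketch: a text encoder compresses a description into a latent vector, and a generator upsamples that vector into a face image. The PyTorch code below illustrates only that flow; the layer sizes, vocabulary size, and module names are assumptions for the example, not T2F’s actual architecture.

```python
import torch
import torch.nn as nn

class TextEncoder(nn.Module):
    """Squeezes a tokenized description into one fixed-size latent vector."""
    def __init__(self, vocab_size=5000, embed_dim=128, latent_dim=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, latent_dim, batch_first=True)

    def forward(self, tokens):
        _, (hidden, _) = self.lstm(self.embed(tokens))
        return hidden[-1]                          # (batch, latent_dim)

class FaceGenerator(nn.Module):
    """Upsamples a latent vector into a small RGB image."""
    def __init__(self, latent_dim=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(latent_dim, 128, 4),               # 1x1 -> 4x4
            nn.ReLU(),
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1),  # 4x4 -> 8x8
            nn.ReLU(),
            nn.ConvTranspose2d(64, 3, 4, stride=2, padding=1),    # 8x8 -> 16x16
            nn.Tanh(),
        )

    def forward(self, z):
        return self.net(z.unsqueeze(-1).unsqueeze(-1))

tokens = torch.randint(0, 5000, (1, 20))           # a fake tokenized description
image = FaceGenerator()(TextEncoder()(tokens))     # (1, 3, 16, 16) face tensor
```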
Simply put, a GAN consists of two neural networks pitted against each other to produce the best results. The first network, the generator, tries to fool the second, the discriminator, into believing a rendered image is a real photograph, while the discriminator sets out to prove the alleged photo is just a render. This back-and-forth fine-tunes the generator until the discriminator is eventually fooled.
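In rough PyTorch terms (used here because T2F is built on PyTorch), one training step of that contest looks something like the sketch below. The models, sizes, and data are illustrative stand-ins, not T2F’s actual code.

```python
import torch
import torch.nn as nn

# Illustrative stand-ins: G ("network No. 1") maps noise to a flat "image";
# D ("network No. 2") scores how likely an image is to be a real photo.
G = nn.Sequential(nn.Linear(64, 784), nn.Tanh())
D = nn.Sequential(nn.Linear(784, 1), nn.Sigmoid())
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCELoss()

real = torch.rand(16, 784)     # stand-in for a batch of real photos
noise = torch.randn(16, 64)

# Discriminator step: learn to label real photos 1 and renders 0.
fake = G(noise).detach()       # detach so this step trains only D
loss_d = bce(D(real), torch.ones(16, 1)) + bce(D(fake), torch.zeros(16, 1))
opt_d.zero_grad()
loss_d.backward()
opt_d.step()

# Generator step: nudge G so D's verdict on its renders moves toward "real".
loss_g = bce(D(G(noise)), torch.ones(16, 1))
opt_g.zero_grad()
loss_g.backward()
opt_g.step()
```

Repeating these two steps is the back-and-forth in question: each network’s improvement raises the bar for the other.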
Karnewar started the project using a dataset called Face2Text provided by researchers at the University of Copenhagen, which contains natural language descriptions for 400 random images.
“The descriptions are cleaned to remove reluctant and irrelevant captions provided for the people in the images,” he writes. “Some of the descriptions not only describe the facial features, but also provide some implied information from the pictures.”
While the results stemming from Karnewar’s T2F project aren’t exactly photorealistic, they’re a start. The video embedded above shows a time-lapse view of how the GAN was trained to render illustrations from text, starting with solid blocks of color and ending with rough but identifiable pixelated renderings.
“I found that the generated samples at higher resolutions (32 x 32 and 64 x 64) have more background noise compared to the samples generated at lower resolutions,” Karnewar explains. “I perceive it to be due to the insufficient amount of data (only 400 images).”
The technique used to train the adversarial networks is called “Progressive Growing of GANs,” which improves quality and stability over time. As the video shows, the image generator starts at an extremely low resolution; new layers are gradually introduced into the model, adding detail as training progresses.
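The key trick is how a new, higher-resolution layer is attached: instead of being switched on abruptly, its output is blended with a plain upsampling of the old output, and the blend weight ramps from 0 to 1. Here is a hedged sketch of that fade-in, with made-up layer sizes rather than T2F’s actual configuration:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GrowableGenerator(nn.Module):
    """Toy generator mid-growth: a 4x4 base with an 8x8 block fading in."""
    def __init__(self):
        super().__init__()
        self.base = nn.ConvTranspose2d(256, 128, 4)   # latent -> 4x4 features
        self.new_block = nn.Sequential(               # 4x4 -> 8x8 features
            nn.Upsample(scale_factor=2),
            nn.Conv2d(128, 64, 3, padding=1),
            nn.ReLU(),
        )
        self.to_rgb_old = nn.Conv2d(128, 3, 1)        # image head at 4x4
        self.to_rgb_new = nn.Conv2d(64, 3, 1)         # image head at 8x8

    def forward(self, z, alpha):
        x = self.base(z.view(-1, 256, 1, 1))
        old = F.interpolate(self.to_rgb_old(x), scale_factor=2)  # cheap upsample
        new = self.to_rgb_new(self.new_block(x))                 # learned detail
        return alpha * new + (1 - alpha) * old  # alpha ramps 0 -> 1 in training

out = GrowableGenerator()(torch.randn(1, 256), alpha=0.3)  # (1, 3, 8, 8) image
```

Once alpha reaches 1, the old image head is dropped and the next, higher-resolution block can be grown the same way.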
“The Progressive Growing of GANs is a phenomenal technique for training GANs faster and in a more stable manner,” he adds. “This can be coupled with various novel contributions from other papers.”
In one provided example, the text describes a woman in her late 20s with long brown hair swept over to one side, gentle facial features and no make-up. She’s “casual” and “relaxed.” Another describes a man in his 40s with an elongated face, a prominent nose, brown eyes, a receding hairline and a short mustache. Although the end results are extremely pixelated, the final renders show great progress in how A.I. can generate faces from scratch.
Karnewar says he plans to scale the project out to incorporate additional datasets such as Flickr8k and COCO Captions. Eventually, T2F could be used in law enforcement to identify victims and/or criminals from text descriptions, among other applications. He’s open to suggestions and contributions to the project.
To access the code and contribute, head to Karnewar’s T2F repository on GitHub.