
This AI generates fake Street View images in impressive high definition

Photographic Image Synthesis with Cascaded Refinement Networks
Remember when Apple had its disastrous launch of Apple Maps in 2012, and real-world geography suddenly received a dose of accidental "creativity" that replaced hospitals with supermarkets and turned bridges into death slides? Well, researchers at Stanford University and Intel have just debuted a new project that creates imaginary street scenes — except these folks have done it on purpose.

What the researchers have developed is an imaginative artificial intelligence that can create photorealistic Google Street View-style images of fake street scenes. These scenes are rendered in highly detailed 1,024 x 2,048 HD resolution.

A bit like a comic artist who draws a city backdrop by taking photo references from different places and weaving them together, the street scenes Stanford and Intel's AI imagines are based on individual elements it saw during its training, which it then combines to create novel images.

The technology that makes this possible is a cascaded refinement network, a type of neural network designed to synthesize HD images with a consistent structure. Like a regular neural network, a cascaded refinement network features multiple layers, which it uses to generate features one layer at a time. Each layer works at a higher resolution than the one before it: a layer receives a coarse feature map from the previous layer and then computes the finer details locally, allowing synthesized images to be built up in a consistent, coarse-to-fine way.
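To make the coarse-to-fine idea concrete, here is a deliberately simplified sketch in Python. It is not the authors' actual network (which uses learned convolutions trained against real photos); it just illustrates the structural trick the article describes: each stage doubles the resolution of the feature map from the stage before it and then refines details using only a small local neighborhood around each pixel.

```python
import numpy as np

def upsample2x(feat):
    # Nearest-neighbor upsampling: double the height and width.
    return feat.repeat(2, axis=0).repeat(2, axis=1)

def local_refine(feat, kernel):
    # A 3x3 convolution with "same" padding: each output pixel depends
    # only on a small neighborhood, so detail is computed locally.
    h, w = feat.shape
    padded = np.pad(feat, 1)
    out = np.zeros_like(feat)
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(padded[i:i + 3, j:j + 3] * kernel)
    return out

def cascaded_refinement(coarse, num_stages, rng):
    # Start from a low-resolution feature map and repeatedly
    # (1) upsample to the next resolution, (2) refine locally.
    feat = coarse
    for _ in range(num_stages):
        feat = upsample2x(feat)
        kernel = rng.standard_normal((3, 3)) * 0.1  # stand-in for learned weights
        feat = local_refine(feat, kernel)
    return feat

rng = np.random.default_rng(0)
start = rng.standard_normal((4, 8))          # coarse 4x8 starting "layout"
result = cascaded_refinement(start, 3, rng)  # 3 stages: 4x8 -> 32x64
print(result.shape)  # (32, 64)
```

In the real network, each stage also takes a downsampled copy of the input semantic layout and uses trained weights rather than random kernels, but the resolution-doubling cascade is the same.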

The result? Street images equivalent to a photo taken with a two-megapixel camera.
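For the curious, the arithmetic behind that two-megapixel figure checks out:

```python
# The output resolution quoted above, expressed in megapixels.
width, height = 2048, 1024
pixels = width * height
print(pixels)                  # 2097152
print(round(pixels / 1e6, 1)) # 2.1 megapixels
```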

While the work is an interesting example of computational creativity in its own right (think Google’s DeepDream or the Massachusetts Institute of Technology’s Nightmare Machine for other examples), this project’s creators think it has multiple real-world applications.

“One application is a new rendering pipeline for video games and animations,” Qifeng Chen, a Stanford Ph.D. researcher on the project, told Digital Trends. “We do not need artists to create the virtual scenes manually. An AI painter can automatically learn from real images and translate the real world content to the virtual world in video games and movies. This approach can save a lot of human labor and potentially synthesize photo-realistic images. The second motivation is that mental imagery is believed to play an important role in decision making and the ability to synthesize photo-realistic images may support the development of artificially intelligent systems.”

Right now, the project is only able to create variations on German streets, because these are the images it was trained on. Going forward, however, it could be possible for the system to expand its knowledge to generate streets styled after any city in the world.

You can read the paper describing the work, "Photographic Image Synthesis with Cascaded Refinement Networks," online.

Luke Dormehl
I'm a UK-based tech writer covering Cool Tech at Digital Trends. I've also written for Fast Company, Wired, the Guardian…