
A.I. is getting scary good at generating fake humans. Watch this demo

[DataGrid] Full-body model auto-generation AI (video)

A.I. is getting scarily good at lying to us. No, we’re not talking about wilfully misleading people for nefarious ends, but rather about creating sounds and images that appear real yet don’t exist in the real world.


In the past, we’ve covered artificial intelligence that’s able to create terrifyingly real-looking “deep fakes” in the form of faces, synthetic voices and even, err, Airbnb listings. Now, researchers from Japan are going one step further by creating photorealistic, high-res videos of people — complete with clothing — who have only ever existed in the fevered imagination of a neural network. The company responsible for this jaw-dropping tech demo is DataGrid, a startup based on the campus of Japan’s Kyoto University. As the video up top shows, the A.I. algorithm can dream up an endless parade of realistic-looking humans who constantly shapeshift from one form to another, courtesy of some dazzling morphing effects.

Like many generative artificial intelligence tools (including the A.I. artwork which sold for big bucks at a Christie’s auction last year), this latest demonstration was created using something called a Generative Adversarial Network (GAN). A GAN pits two artificial neural networks against one another. In this case, one network generates new images, while the other attempts to work out which images are computer-generated and which are not. Over time, this adversarial process lets the “generator” network become so good at creating images that the “discriminator” can no longer reliably tell them apart.
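The adversarial loop described above can be sketched in a few dozen lines. The toy example below is our own illustration, not DataGrid’s system: instead of images, a one-layer “generator” learns to mimic samples from a 1D Gaussian, and a logistic-regression “discriminator” tries to tell real samples from fakes. The alternating generator/discriminator updates follow the same pattern as a full image GAN.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(s):
    return 1.0 / (1.0 + np.exp(-s))

def sample_real(n):
    # "Real data": samples from N(4, 1), standing in for real photos
    return rng.normal(4.0, 1.0, n)

# Generator: x = w*z + b with noise z ~ N(0,1)
# Discriminator: D(x) = sigmoid(u*x + v), probability that x is real
w, b = 1.0, 0.0
u, v = 0.0, 0.0
lr = 0.05

for step in range(3000):
    n = 64
    z = rng.normal(0.0, 1.0, n)
    x_fake = w * z + b
    x_real = sample_real(n)

    # Discriminator step: minimize -log D(real) - log(1 - D(fake))
    d_real = sigmoid(u * x_real + v)
    d_fake = sigmoid(u * x_fake + v)
    ds_real = -(1.0 - d_real)          # dLoss/ds on real examples
    ds_fake = d_fake                   # dLoss/ds on fake examples
    u -= lr * (np.mean(ds_real * x_real) + np.mean(ds_fake * x_fake))
    v -= lr * (np.mean(ds_real) + np.mean(ds_fake))

    # Generator step: minimize -log D(fake), i.e. try to fool D
    d_fake = sigmoid(u * x_fake + v)
    dx = -(1.0 - d_fake) * u           # dLoss/dx_fake
    w -= lr * np.mean(dx * z)
    b -= lr * np.mean(dx)

# After training, generated samples should cluster near the real mean of 4
gen = w * rng.normal(0.0, 1.0, 1000) + b
print(float(np.mean(gen)))
```

The same tug-of-war plays out in DataGrid’s demo, just with deep convolutional networks and pixels instead of two scalar parameters.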

As can be seen from the video, the results are impressive. They don’t appear to suffer from the image artifacts or strange glitches that have marred many previous attempts at generating images. However, it’s also likely no coincidence that the video shows humans posed against plain white backdrops, minimizing the potential for cluttered backgrounds to degrade the generated images.

Provided all is as it seems, this is a fascinating (albeit more than a little disconcerting) advance. If we were employed as movie extras or catalog models for clothing brands, we’d probably be feeling a little nervous right now. At the very least, the tools for next-level fake news just got a whole lot more powerful.

Luke Dormehl
Former Digital Trends Contributor
I'm a UK-based tech writer covering Cool Tech at Digital Trends. I've also written for Fast Company, Wired, the Guardian…
Bot or not? A.I. looks at Twitter behavior to sort real accounts from fake

For the past several years, there’s been heightened concern about the impact of so-called bots on platforms like Twitter. A bot in this context is an automated fake account, often used to help spread fake news or misinformation online. But how exactly do you tell the difference between an actual human user and a bot? Clues such as the basic default “egg” avatar, a username containing long strings of numbers, and a penchant for tweeting about only certain topics might provide a few pointers, but that’s hardly conclusive evidence.

That’s the challenge a recent project from a pair of researchers at the University of Southern California and University of London set out to solve. They have created an A.I. that’s designed to sort fake Twitter accounts from the real deal, based on their patterns of online behavior.
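To see why the simple clues mentioned above fall short of the researchers’ behavioral approach, here is a toy scorer built only on those surface signals. This is purely our illustration (the profile fields and weights are invented, and it is not the USC/University of London model, which learns from behavior patterns instead):

```python
import re

def bot_score(profile: dict) -> float:
    """Crude 0..1 score from surface clues; higher means more bot-like."""
    score = 0.0
    # Clue 1: still using the default avatar
    if profile.get("default_avatar", False):
        score += 0.4
    # Clue 2: username with a long run of digits, e.g. "user84629174"
    if re.search(r"\d{6,}", profile.get("username", "")):
        score += 0.3
    # Clue 3: very low topic diversity across recent tweets
    topics = profile.get("tweet_topics", [])
    if topics and len(set(topics)) / len(topics) < 0.2:
        score += 0.3
    return score

suspect = {"username": "freedom19475638", "default_avatar": True,
           "tweet_topics": ["politics"] * 20}
print(round(bot_score(suspect), 2))  # prints 1.0
```

A sophisticated bot can trivially dodge all three checks by uploading an avatar and varying its tweets, which is exactly why the researchers turned to harder-to-fake behavioral patterns instead.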

Future JPEGs could use blockchain to flag fakes, and A.I. for smaller file sizes

Imagine if every JPEG image file carried blockchain-protected data that could verify -- or refute -- a photograph’s origins. The concept may soon be more than hypothetical: the same organization that created the JPEG format wants to use blockchain to flag fake news and fight image theft.

The Joint Photographic Experts Group (JPEG) is organizing workshops to gather feedback on the possibility of creating a standard blockchain that could both help viewers quickly identify a faked photo and help photographers fight image theft. At the same meeting in Sydney, Australia, the committee began exploring the possibility of a new JPEG compression scheme that uses artificial intelligence.
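The core mechanism is straightforward even before any standard exists. Here is a minimal sketch of the idea -- entirely our own illustration, not the JPEG committee’s design, which has not been published: hash an image’s bytes into a chained record, so that any later edit to the file fails verification.

```python
import hashlib
import json

def make_record(image_bytes: bytes, prev_hash: str) -> dict:
    """Create a chained provenance record for an image."""
    record = {
        "image_sha256": hashlib.sha256(image_bytes).hexdigest(),
        "prev_hash": prev_hash,   # links this record to the previous one
        "timestamp": 0,           # fixed here for reproducibility
    }
    # Hash the record itself so tampering with the chain is detectable too
    record["record_hash"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()).hexdigest()
    return record

def verify(image_bytes: bytes, record: dict) -> bool:
    """Check whether these bytes match the registered original."""
    return hashlib.sha256(image_bytes).hexdigest() == record["image_sha256"]

original = b"\xff\xd8\xff\xe0...jpeg bytes..."
rec = make_record(original, prev_hash="0" * 64)
print(verify(original, rec))            # prints True
print(verify(original + b"edit", rec))  # prints False
```

A real standard would have to solve much harder problems -- who runs the chain, and how legitimate edits (crops, re-compressions) are distinguished from fakery -- which is presumably what the workshops are for.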

Fake news? A.I. algorithm reveals political bias in the stories you read

Here in 2020, internet users have ready access to more news media than at any other point in history. But things aren’t perfect. Click-driven ad models, online filter bubbles, and the competition for readers’ attention mean that political bias has become more entrenched than ever. In worst-case scenarios, this can tip over into fake news. Other times, it simply means readers receive a slanted version of events without necessarily realizing it.

What if artificial intelligence could be used to accurately analyze political bias, helping readers better understand the skew of whatever source they are reading? Such a tool could conceivably work like a spellcheck or grammar-check function, only instead of letting you know when a word or sentence isn’t right, it would do the same for the neutrality of news media -- whether that be reporting or opinion pieces.
