
Twitter is using A.I. to ditch those awful auto-cropped photos

The Twitter auto crop feature functions a bit like a tweet's character limit: it keeps image previews on the microblogging platform consistent with the rest of the feed. Now those crops are getting better, thanks to artificial intelligence. Twitter is rolling out a smarter auto crop based on neural networks, the company announced in a blog post on January 24.

The previous auto crop feature used face detection to keep faces in the frame. When no faces were detected, the software simply cropped the preview at the center, though a click on the image still let users see the entire shot. Twitter says the center crop often produced awkward previews, and the face detection itself sometimes failed to identify faces correctly.
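As a rough illustration of that older logic (the function and bounding-box format below are hypothetical, not Twitter's actual code), the behavior amounts to centering the crop on a face when one is found and on the image otherwise:

```python
def legacy_preview_crop(img_h, img_w, crop_h, crop_w, face_box=None):
    """Center the preview crop on a detected face if there is one,
    otherwise fall back to the center of the image.

    face_box is (top, left, height, width); illustrative only.
    """
    if face_box is not None:
        top, left, fh, fw = face_box
        cy, cx = top + fh // 2, left + fw // 2   # center of the face
    else:
        cy, cx = img_h // 2, img_w // 2          # image-center fallback
    # Clamp the crop window so it stays inside the image bounds.
    crop_top = min(max(cy - crop_h // 2, 0), img_h - crop_h)
    crop_left = min(max(cx - crop_w // 2, 0), img_w - crop_w)
    return crop_top, crop_left
```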

To fix those awkwardly cropped previews, Twitter engineers trained a neural network on what are called saliency maps. Saliency maps, built from eye-tracking data, chart the areas of an image that most catch a viewer's eye. Earlier research in the area showed that viewers tend to focus on faces, text, animals, other prominent objects, and areas of high contrast.

Twitter used that earlier data to teach the program which areas of an image matter most. The program can then recognize those features and place the auto crop so that the most salient areas stay inside the frame.
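Here is a minimal Python sketch of how a crop might be picked once a saliency map exists; the approach (slide a fixed-size window and keep the highest-scoring position) and all names are illustrative assumptions, not Twitter's published code:

```python
import numpy as np

def best_crop(saliency: np.ndarray, crop_h: int, crop_w: int):
    """Slide a crop_h x crop_w window over a 2D saliency map and return
    the (top, left) corner whose window contains the most total saliency.

    A summed-area table (integral image) lets each candidate window
    be scored with four lookups instead of a full sum.
    """
    H, W = saliency.shape
    # Integral image with a zero row and column prepended.
    integral = np.zeros((H + 1, W + 1))
    integral[1:, 1:] = saliency.cumsum(axis=0).cumsum(axis=1)

    best_score, best_pos = -1.0, (0, 0)
    for top in range(H - crop_h + 1):
        for left in range(W - crop_w + 1):
            # Total saliency inside the window, in O(1) per window.
            score = (integral[top + crop_h, left + crop_w]
                     - integral[top, left + crop_w]
                     - integral[top + crop_h, left]
                     + integral[top, left])
            if score > best_score:
                best_score, best_pos = score, (top, left)
    return best_pos
```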

But Twitter wasn’t done — while saliency software works well, it’s also slow, which would have prevented tweets from being posted in real time. To solve the awkward crops problem without a slowdown, Twitter refined the program again using two different techniques that improved the speed tenfold. The first trained a smaller network using that first good but slow program in order to speed up those crops. Next, the software engineers determined a number of visual points to map on each image, effectively removing the smaller, less important visual cues while keeping the largest areas intact.

[Images: Twitter auto crop examples, before and after]

The resulting software lets images post in real time, but with better crops. In a set of before-and-after pictures, Twitter shows photos with faces the earlier system failed to detect now cropped to the face rather than the feet. Other examples show objects that the first program cut out because they didn't sit at the center of the image, but that the updated algorithm crops appropriately. Another example shows the program recognizing text and adjusting the crop to include a sign.

The updated cropping algorithm is already rolling out globally on both the iOS and Android apps, as well as on Twitter.com.
