
Google will begin labeling AI-generated images in Search

AI-generated images have become increasingly prevalent in Google Search results in recent months, crowding out legitimate results and making it harder for users to find what they’re actually looking for. In response, Google announced on Tuesday that it will begin labeling AI-generated and AI-edited image search results in the coming months.

The company will flag such content through the “About this image” window, and the labels will apply to Search, Google Lens, and Android’s Circle to Search. Google is also applying the technology to its ad services and is considering adding a similar flag to YouTube videos, though it will “have more updates on that later in the year,” per the announcement post.

AI-generated images showing up in Google search. Digital Trends

Google will rely on Coalition for Content Provenance and Authenticity (C2PA) metadata to identify AI-generated images. Google joined the industry group as a steering committee member earlier this year. The C2PA metadata will be used to track an image’s provenance, identifying when and where it was created, as well as the equipment and software used in its generation.
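To give a rough sense of what that looks like at the file level, here is a minimal Python sketch that checks whether a JPEG appears to carry a C2PA manifest, which is embedded as a JUMBF box inside APP11 marker segments. It is only a presence check under those assumptions, not a validator; the file name is hypothetical, and real verification should go through a C2PA SDK that also checks the manifest’s cryptographic signatures.

import struct

def has_c2pa_manifest(path):
    """Heuristic check: does this JPEG carry a C2PA (JUMBF) manifest?

    Walks the JPEG marker segments and looks for APP11 (0xFFEB)
    segments whose payload contains JUMBF / C2PA signatures.
    Detects presence only; it does not validate signatures.
    """
    with open(path, "rb") as f:
        data = f.read()

    if not data.startswith(b"\xff\xd8"):  # SOI marker: not a JPEG
        return False

    offset = 2
    while offset + 4 <= len(data):
        if data[offset] != 0xFF:  # lost sync with the marker stream
            break
        marker = data[offset + 1]
        if marker == 0xDA:  # SOS: compressed image data starts here
            break
        (length,) = struct.unpack(">H", data[offset + 2:offset + 4])
        payload = data[offset + 4:offset + 2 + length]
        if marker == 0xEB and (b"jumb" in payload or b"c2pa" in payload):
            return True  # APP11 segment carrying a JUMBF/C2PA box
        offset += 2 + length
    return False

print(has_c2pa_manifest("example.jpg"))  # hypothetical file path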


So far, a number of industry heavyweights have joined the C2PA, including Amazon, Microsoft, OpenAI, and Adobe. However, the standard itself has received little attention from hardware manufacturers and can currently only be found on a handful of Sony and Leica camera models. A few prominent AI-generation tool developers have also declined to adopt the standard, such as Black Forest Labs, which makes the Flux model that Grok leverages for its image generation.

The number of online scams utilizing AI-generated deepfakes has exploded in the past two years. In February, for example, a Hong Kong-based financier was duped into transferring $25 million to scammers who posed as the company’s CFO during a video conference call. In May, a study by verification provider Sumsub found that scams using deepfakes increased 245% globally between 2023 and 2024, with a 303% increase in the U.S. specifically.

“The public accessibility of these services has lowered the barrier of entry for cyber criminals,” David Fairman, chief information officer and chief security officer of APAC at Netskope, told CNBC in May. “They no longer need to have special technological skill sets.”

Andrew Tarantola
Midjourney’s AI image editing reimagines your uploaded photos
The new web UI for Midjourney.

Midjourney released its External Editor on Thursday, calling it “a powerful new tool for unleashing your imagination.” Available to select users, the AI tool lets them upload their own images, then adjust, modify, and retexture them in a wide variety of artistic styles.

Previously, users could upload a reference image to Midjourney, either through the alpha web app or its Discord server, and have the generation model use it as a reference to create a new image. You could not, however, make any edits to the source image itself. That’s changing with the new External Editor. With it, you’ll be able to add, modify, move, resize, remove, and restore specific assets within the image, as well as reskin it as a whole in an entirely new style — shifting it from, say, a photograph to pointillism to impressionism to anime. The system reportedly works on doodles and line drawings as well.

Read more
Google’s AI detection tool is now available for anyone to try
Gemini running on the Google Pixel 9 Pro Fold.

Google announced via a post on X (formerly Twitter) on Wednesday that SynthID is now available to anybody who wants to try it. The authentication system for AI-generated content embeds imperceptible watermarks into generated images, video, and text, enabling users to verify whether a piece of content was made by humans or machines.

“We’re open-sourcing our SynthID Text watermarking tool,” the company wrote. “Available freely to developers and businesses, it will help them identify their AI-generated content.”
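The statistical idea behind this kind of text watermarking can be illustrated with a toy example. The Python sketch below is not SynthID’s actual algorithm; it shows the simpler “green list” approach to generation watermarking, in which a generator seeded with a secret key prefers a pseudorandom subset of tokens and a detector holding the same key measures how often those preferred tokens appear. All names and parameters here are illustrative assumptions.

import hashlib
import random

SECRET_KEY = "demo-key"  # shared by generator and detector (illustrative only)
VOCAB = [chr(c) for c in range(ord("a"), ord("z") + 1)]  # toy 26-token vocabulary

def green_list(prev_token):
    # Pseudorandomly mark half the vocabulary as "green," keyed on
    # the secret and the previous token.
    seed = int(hashlib.sha256(f"{SECRET_KEY}:{prev_token}".encode()).hexdigest(), 16)
    return set(random.Random(seed).sample(VOCAB, len(VOCAB) // 2))

def generate_watermarked(length):
    # Toy "model" that always samples its next token from the green list.
    rng = random.Random(42)
    out = [rng.choice(VOCAB)]
    for _ in range(length - 1):
        out.append(rng.choice(sorted(green_list(out[-1]))))
    return "".join(out)

def green_fraction(text):
    # Detector: fraction of tokens that fall in their predecessor's green
    # list. Roughly 0.5 for ordinary text, near 1.0 for watermarked text.
    hits = sum(1 for prev, cur in zip(text, text[1:]) if cur in green_list(prev))
    return hits / max(len(text) - 1, 1)

watermarked = generate_watermarked(200)
ordinary = "".join(random.Random(7).choices(VOCAB, k=200))
print(f"watermarked: {green_fraction(watermarked):.2f}")  # close to 1.00
print(f"ordinary:    {green_fraction(ordinary):.2f}")     # close to 0.50

Production systems apply the bias inside the model’s sampling step and run a statistical test over many tokens, which is what keeps the watermark imperceptible to readers while remaining detectable to whoever holds the key.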

Read more
Radiohead’s Thom Yorke among thousands of artists who issue AI protest
Thom Yorke on stage.

Leading actors, authors, musicians, and novelists are among 11,500 artists to have put their name to a statement calling for a halt to the unlicensed use of creative works to train generative AI tools like OpenAI’s ChatGPT, describing it as a “threat” to the livelihoods of creators.

The open letter, comprising just 29 words, says: “The unlicensed use of creative works for training generative AI is a major, unjust threat to the livelihoods of the people behind those works, and must not be permitted.”

Read more