
Adobe gets called out for violating its own AI ethics

Ansel Adams' panorama of Grand Teton National Park with the peak in the background and a meandering river in the forest.
Ansel Adams / National Archives

Last Friday, the estate of famed 20th-century American photographer Ansel Adams took to Threads to publicly shame Adobe for allegedly offering AI-generated art “inspired by” Adams’ catalog of work, stating that the company is “officially on our last nerve with this behavior.”

While the Adobe Stock platform, where the images were made available, does allow AI-generated images, The Verge notes that the site’s contributor terms prohibit images “created using prompts containing other artist names, or created using prompts otherwise intended to copy another artist.”


Adobe has since removed the offending images, conceding in the Threads conversation that “this goes against our Generative AI content policy.”

A screenshot of Ansel Adams images put in Adobe Stock.
Adobe

However, the Adams estate seemed unsatisfied with that response, claiming that it had been “in touch directly” with the company “multiple times” since last August. “Assuming you want to be taken seriously re: your purported commitment to ethical, responsible AI, while demonstrating respect for the creative community,” the estate continued, “we invite you to become proactive about complaints like ours, & to stop putting the onus on individual artists/artists’ estates to continuously police our IP on your platform, on your terms.”

The ability to create high-resolution images of virtually any subject and in any visual style by simply describing the idea with a written prompt has helped launch generative AI into the mainstream. Image generators like Midjourney, Stable Diffusion and Dall-E have all proven immensely popular with users, though decidedly less so with the copyright holders and artists whose styles those programs imitate and whose existing works those AI engines are trained on.

Adobe’s own Firefly generative AI platform was, the company claimed, trained on its extensive, licensed Stock image library. As such, Firefly was initially marketed as a “commercially safe” alternative to other image generators like Midjourney or Dall-E, which were trained on datasets scraped from the public internet.

However, an April report from Bloomberg found that some 57 million images in the Stock database, roughly 14% of the total, were AI-generated, some of them created with the very data-scraping image generators Adobe competes against.

“Every image submitted to Adobe Stock, including a very small subset of images generated with AI, goes through a rigorous moderation process to ensure it does not include IP, trademarks, recognizable characters or logos, or reference artists’ names,” a company spokesperson told Bloomberg at the time.

Andrew Tarantola
Andrew Tarantola is a journalist with more than a decade reporting on emerging technologies ranging from robotics and machine…
Google expands its AI search function, incorporates ads into Overviews on mobile
A woman paints while talking on her Google Pixel 7 Pro.

Google announced on Thursday that it is "taking another big leap forward" with an expansive round of AI-empowered updates for Google Search and AI Overview.
Earlier in the year, Google incorporated generative AI technology into its existing Lens app, which allows users to identify objects within a photograph and search the web for more information about them. With that change, the app returns an AI Overview based on what it sees rather than a list of potentially relevant websites. At the I/O conference in May, Google promised to expand that capability to video clips.
With Thursday's update, "you can use Lens to search by taking a video, and asking questions about the moving objects that you see," Google's announcement reads. The company suggests that the app could be used to, for example, provide personalized information about specific fish at an aquarium simply by taking a video and asking your question.
Whether this works on more complex subjects, like analyzing your favorite NFL team's previous play, or on fast-moving objects, like identifying makes and models of cars in traffic, remains to be seen. If you want to try the feature for yourself, it's available globally (though only in English) through the iOS and Android Google app. Navigate to the Search Lab and enroll in the “AI Overviews and more” experiment to get access.

You won't necessarily have to type out your question either. Lens now supports voice questions, which allows you to simply speak your query as you take a picture (or capture a video clip) rather than fumbling across your touchscreen in a dimly lit room. 
Your Lens-based shopping experience is also being updated. In addition to the links to visually similar products from retailers that Lens already provides, it will begin displaying "dramatically more helpful results," per the announcement. Those include reviews of the specific product you're looking at, price comparisons from across the web, and information on where to buy the item. 

Read more
Meta rolls out its own version of Advanced Voice Mode at Connect 2024
Zuckerberg debuting natural voice interactions

At Meta Connect 2024 on Wednesday, CEO Mark Zuckerberg took to the stage to discuss his company's latest advancements in artificial intelligence. In what he described as "probably the biggest AI news that we have," Zuckerberg unveiled Natural Voice Interactions, a direct competitor to Google's Gemini Live and OpenAI's Advanced Voice Mode.

"I think that voice is going to be a way more natural way of interacting with AI than text," Zuckerberg commented. "I think it has the potential to be one of [the], if not the most frequent, ways that we all interact with AI." Zuckerberg also announced that the new feature will begin rolling out to users today across all of Meta's major apps including Instagram, WhatsApp, Messenger, and Facebook.

Read more
ChatGPT’s resource demands are getting out of control
a server

It's no secret that the growth of generative AI has demanded ever-increasing amounts of water and electricity, but a new study from The Washington Post and researchers at the University of California, Riverside shows just how many resources OpenAI's chatbot needs in order to perform even its most basic functions.

In terms of water usage, the amount needed for ChatGPT to write a 100-word email depends on the state and the user's proximity to OpenAI's nearest data center. The scarcer water is in a given region, and the cheaper its electricity, the more likely the data center is to rely on electrically powered air-conditioning units instead. In Texas, for example, the chatbot consumes an estimated 235 milliliters of water to generate a single 100-word email. That same email drafted in Washington, on the other hand, would require 1,408 milliliters (nearly a liter and a half).

Read more