Adobe's AI Controversy: Creators Demand Transparency

Adobe's updated terms of service sparked backlash over fears of using artists' work for AI training. Despite clarifications, skepticism persists. Efforts to protect artists' rights continue amid growing concerns.

Backlash Over Terms of Service Update

Adobe recently faced significant backlash from the creative community after quietly updating its terms of service in February. The new terms allowed Adobe to access user content “through both automated and manual methods” and utilize “techniques such as machine learning to improve [Adobe’s] Services and Software.”

Many artists interpreted this as Adobe granting itself unlimited access to their work to train its generative AI, Firefly. The uproar led to Adobe issuing a clarification on Tuesday, pledging not to use user content stored locally or in the cloud for training AI and providing an option to opt out of content analytics. Despite this, scepticism remains high among artists, who fear their work might still be exploited.

Scepticism Among Artists

Artists like Jon Lam, a senior storyboard artist at Riot Games, remain unconvinced by Adobe's assurances. Distrust deepened after award-winning artist Brian Kesinger discovered AI-generated images mimicking his style being sold on Adobe's stock image site without his consent.

Additionally, the estate of renowned photographer Ansel Adams accused Adobe of selling generative AI imitations of his work. These incidents highlight a broader concern over the nonconsensual use and monetization of copyrighted work by generative AI models.

Scott Belsky, Adobe’s chief strategy officer, attempted to alleviate concerns by explaining that Adobe’s machine learning refers to non-generative AI tools, such as Photoshop’s “Content Aware Fill.” However, the misunderstanding sparked a larger debate about Adobe’s market dominance and its potential impact on artists' livelihoods.

Efforts to Protect Artists’ Rights

The controversy surrounding Adobe is part of a larger narrative of artists struggling against AI's encroachment on their intellectual property. Early last year, artist Karla Ortiz initiated a class action lawsuit against Midjourney, DeviantArt, and Stability AI over similar issues. Polish fantasy artist Greg Rutkowski also found his name commonly used as a prompt in Stable Diffusion, raising alarms within the art community.

In response to these challenges, Adobe has taken steps to support creators. In September 2023, the company proposed the Federal Anti-Impersonation Right (FAIR) Act, aimed at protecting artists from the unauthorized use of their work for commercial purposes. However, this initiative has faced criticism for its limited scope and potential privacy issues.

Outside of Adobe, researchers at the University of Chicago have developed tools like Nightshade, which "poisons" training data to damage AI models, and Glaze, which helps artists mask their signature styles. The Concept Art Association, whose members include Jon Lam, is also advocating for artists' rights through crowd-funded lobbying efforts.

Despite these efforts, the debate over AI’s impact on the creative industry continues, with many artists calling for clearer regulations and greater transparency from companies like Adobe.