The risk of copyright infringement remains a key concern with generative AI, but Shutterstock hopes that offering more indemnity might increase enterprise customers’ appetites for generative content while also lowering the stakes.
With a new way to have humans check AI-generated images for potential copyright issues, Shutterstock is offering full indemnification to enterprise clients that face copyright claims over images generated via Shutterstock’s AI platform. The updates, announced today, are meant to put compliance teams a little more at ease with the legal gray areas created by the influx of AI-generated content.
The move also puts Shutterstock on par with competitors like Adobe, which announced a similar promise for enterprise customers in June while revealing its own generative AI updates.
Offering indemnity fulfills what Shutterstock VP of Product Jeff Cunning described as a “top need” for enterprise clients worried about the risks that might come from using AI-made images. While most human reviews take a day or two on average, Cunning said Shutterstock also offers an “express review” option.
“One of the biggest needs that our customers have had is we need that commercial licensing confidence to be able to use this content in all the ways we use content from Shutterstock today,” Cunning told Digiday.
Since Shutterstock first announced its generative AI tools last fall, it’s made several updates to the platform including a new AI image generator that debuted in January. In May, the company mentioned it offered indemnity to a few unnamed enterprise clients — including a major tech company and a TV company — along with new ways to compensate Shutterstock contributors when their content is used by AI models. Shutterstock also has AI-related partnerships with major tech companies including Nvidia, Meta and LG as well as OpenAI — whose DALL-E image generator was trained on Shutterstock images.
As more enterprise-level brands look to experiment with AI, copyright concerns still loom large. Just last week, OpenAI was hit with two separate copyright-related lawsuits, which followed another earlier this year filed by Getty Images against the AI platform Stability AI.
“I think our lawsuit is groundbreaking here in one important respect,” Ryan Clarkson, managing partner of Clarkson Law Firm, told Digiday last week after filing one of the OpenAI lawsuits. “The way in which peoples’ personal information posted online, the property rights in personal information hasn’t really been tested in the way we lay out our legal theory.”
Despite the growing number of complaints, some major advertising companies are moving forward with new AI deals. Last month during the Cannes Lions Festival, Omnicom announced a partnership with Google that offers a tool for generating AI text and images trained on copyright-cleared content.
Some legal experts say companies are more hesitant to use generative content in public-facing materials — such as advertising — if there’s a chance it could expose them to lawsuits or other criticism. The indemnity offerings from Adobe and Shutterstock target the biggest areas of that hesitation.
Because Shutterstock’s AI models are trained on images it says it already has permission to use, customers using its products were unlikely to be on the hook for infringement in the first place. Even if the offer is more of a marketing ploy than Shutterstock sticking its neck out to protect potential clients, the move further highlights a key issue for the industry — and an obstacle that’s prevented wider adoption.
Adding more reassurance that content is actually licensed will “reduce a lot of friction when making these deals” and make it easier to get through the legal process, said Katherine Gardner, a partner at the law firm Gunderson Dettmer. However, she also noted that there’s still tension with another key question: How do companies ethically collect enough data to help decrease bias and increase accuracy?
“You’re kind of balancing these infringement and licensing issues with the reality that these models need to have access to more and more data,” she said. “[It] is really a good thing for having more accurate models that are not as biased and having access to many different types of inputs from across all cross-sections of various demographics.”