

AI image generators are transforming the stock photo industry

By Matic Broz
A hero image comparing real and AI-generated portraits side by side.

Though AI art has been quietly evolving since the 1960s, it wasn’t until OpenAI released DALL·E in 2021 that the world took notice. (Technically, Nvidia released the first text-to-image GAN model back in 2018.) Suddenly, anyone could make images from text prompts, and the internet exploded with a mix of curiosity and amusement.

This breakthrough sparked a wave of increasingly sophisticated AI image generators. OpenAI raised the bar with DALL·E 2 and DALL·E 3, while competitors like Stable Diffusion and Midjourney joined the fight for the best AI image generator. Tech giants weren’t far behind, with Google launching Imagen and Adobe entering the scene with Firefly.

In just a few years, AI art has evolved from a source of online amusement into a tool that’s not only fooling the masses but also winning photography competitions.

How are stock image sites adapting?

The $4 billion stock media industry is facing an existential crisis. The rise of AI image generation threatens traditional (micro)stock image models, forcing industry giants like Shutterstock, Adobe Stock, and Getty Images to adapt or face obsolescence. These companies, which collectively control over half the market, are each exploring new ways to adapt.

Shutterstock, it seems, is diving headfirst into the AI revolution. “We don’t think working against AI is the answer,” stated Alexandros Maragoudakis, Shutterstock’s Sr. Director of Contributor & Content Operations, in a recent interview. “We rather see it as a new marketplace, new commercial opportunities where content again will be critical.”

This approach is evident in Shutterstock’s recent moves. The company has aggressively expanded its portfolio beyond photography over the years, acquiring platforms like Bigstock, Premiumbeat, Pond5, and Envato Elements, to name a few. This strategy provides a large and diverse dataset of digital assets—including video, audio, and 3D content—that can be licensed to AI companies. For example, the company has partnered with Nvidia for 3D content creation and ElevenLabs for text-to-SFX models. They’ve even licensed content to Reka.AI and collaborated with Databricks to create ImageAI, a platform designed to address copyright concerns in AI-generated content. And I wouldn’t be surprised if I left out a partnership or two.

Further solidifying its AI embrace, Shutterstock extended its contract with OpenAI for an additional six years, granting OpenAI access to its massive library for AI training. In return, Shutterstock integrated DALL·E 3 into its platform, offering generative AI credits with subscriptions.

Screenshot of Shutterstock’s AI image generator.

In contrast to Shutterstock’s full-on embrace of AI, Getty Images—Shutterstock’s primary competitor—is treading more cautiously. While Getty has also expanded its reach through acquisitions, notably iStock and Unsplash, its AI strategy has been more measured. Getty, for example, recently released its own AI image generator in partnership with Nvidia.

Adobe, a creative software giant, has opted for a path of self-reliance. Leveraging its library of creative assets, Adobe has developed its own suite of proprietary AI tools. Firefly, Adobe’s text-to-image generator, is now in its third iteration and integrates seamlessly with Adobe’s Creative Cloud applications. This strategy allows Adobe to offer a comprehensive and tightly integrated AI-powered creative ecosystem.

Screenshot of Firefly, Adobe’s AI image generator.

These contrasting approaches from industry leaders raise several questions: Will these adaptations be enough to secure the future of stock media companies in an AI-dominated world? Or are these companies merely delaying the inevitable? The implications extend far beyond the major players, impacting the entire stock media ecosystem. While Shutterstock and Getty Images have the resources to invest heavily in AI, smaller stock media providers face a much steeper climb.

Some mid-sized platforms are scrambling to keep pace. Artlist, for example, has released an AI voiceover generator, expanding its offerings beyond traditional stock media. Similarly, platforms like Envato Elements, 123RF, and Freepik have integrated several AI tools on their own. However, these efforts often seem more reactive than strategic, lacking the data advantage or market positioning of their larger counterparts.

How ethical is all this anyway?

The rapid rise of AI image generation has opened a Pandora’s box of ethical dilemmas, many of which remain unresolved. The technology’s legal landscape is still in its infancy, yet already grappling with several controversial cases.

Take, for instance, Getty Images’ lawsuit against Stability AI. Getty alleges that Stability’s text-to-image generator was trained on copyrighted images, many bearing Getty’s watermark, scraped without permission from its website. My own research supports this claim and suggests that copyrighted images from other stock agencies, including Depositphotos, Alamy, Dreamstime, and iStock, were also likely used in Stability’s training data.

A Getty Images watermark I found using Stable Diffusion.

Similar allegations have been levied against Midjourney, DeviantArt, and Stable Diffusion. Further fueling the controversy, EyeEm, after its acquisition by Freepik, opted to license user photos for AI training purposes without explicit consent. While a 30-day opt-out period was offered, the process for removing images was needlessly complex.

These cases raise a question: Is it ethical to train AI models on the creative work of contributors without their explicit consent? Currently, the law, or the lack thereof, favors AI companies.

“The lack of law favors AI companies, for now.”

Shutterstock’s approach, while imperfect, is a good start. In addition to offering its own AI image generator, the stock image site allows users to license AI-generated images created by others using the platform’s tool. However, Shutterstock explicitly prohibits the uploading and sale of AI images created using external generators like Stable Diffusion. “We don’t accept generative AI content created with tools other than our own due to concerns about the source and ethical use,” explained Alexandros Maragoudakis.

This policy allows Shutterstock to sidestep some copyright concerns, as it can exert more control over the training data and licensing agreements for images generated using its own platform. Getty Images has taken a similar stance, banning user-uploaded AI content entirely. On the other hand, Freepik now hosts so many AI-generated images (with no way to filter them out) that I canceled my subscription.

One of the fears is that AI image generators would be misused or would accidentally produce copyrighted content. In an interview with The Verge, Getty Images CEO Craig Peter described this issue beautifully: “If you have an image and it produces an image of a third-party brand or somebody of name and likeness like [Travis] Kelce or [Taylor] Swift, that’s a problem. But there are much more nuanced problems in intellectual property, like showing an image of the Empire State Building. You could actually get sued for that. Tattoos are copyrighted. So, fireworks can actually be copyrighted. That smiley firework that shows up? Grucci Brothers actually own that copyright.”

AI images of Taylor Swift, created by Stable Diffusion 2.1 (left) and Stable Diffusion XL (right). The older version drew a very realistic photograph of Taylor, while the newer version internally changed my prompt to draw a girl similar to Taylor.

To prevent this from happening, text-to-image generators have been coded to refuse to create images from prompts containing certain words. Of course, as we know from chatbots, it’s possible to bypass these restrictions with what’s called a “jailbreak.” For example, I was able to create (as an experiment only) pictures of Trump and Biden using Meta AI by swapping their names with the word “president.” Such jailbreaks are typically fixed quickly, but new ones are found regularly—it’s a never-ending game of cat and mouse.
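To see why simple word substitution works as a jailbreak, consider a minimal sketch of a naive keyword blocklist. This is a hypothetical illustration, not how Meta AI or any real generator actually filters prompts; the blocklist and function names are invented for the example.

```python
# Hypothetical sketch of a naive prompt blocklist, illustrating why
# keyword filters are trivially bypassed by swapping in synonyms.
BLOCKED_TERMS = {"trump", "biden"}  # invented blocklist for illustration

def is_allowed(prompt: str) -> bool:
    """Reject the prompt if any blocked term appears as a word."""
    words = prompt.lower().split()
    return not any(term in words for term in BLOCKED_TERMS)

print(is_allowed("portrait of Trump"))          # False: blocked by keyword match
print(is_allowed("portrait of the president"))  # True: same intent, filter passed
```

Because the filter matches literal words rather than meaning, any paraphrase that preserves the intent slips through, which is why providers keep patching new jailbreaks as they surface.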

The pursuit of ethical AI image generation also raises questions about the balance between historical accuracy and representation. Earlier this year, Google’s Gemini AI image generator faced scrutiny when it struggled to generate images of white historical figures. A viral post on X showcased this issue, with a user requesting images of “Founding Fathers of America” and receiving results that included Black, Native American, and Asian individuals.

Gemini AI image of Founding Fathers of America

While these results sparked controversy, there’s nothing inherently wrong with depicting historical figures with diverse ethnicities, especially if such representations were historically accurate. However, in this case, the AI model prioritized diversity over historical fidelity. Google subsequently paused the AI tool to address these concerns.

The resilience of editorial content

While AI-generated imagery is scaring many a stock photographer, certain domains will be more resistant to it. Editorial photography, in particular—especially images capturing celebrities, newsworthy events, and unique moments in time—seems safe for now.

“High-quality, human-shot images may become more valuable as AI images become more common.”

The essence of editorial photography lies in its authenticity. These images, often referred to as “archival,” are irreplaceable visual records of real-life occurrences and individuals. This intrinsic value, rooted in capturing the unrepeatable, is something AI, despite its advancements, cannot (yet) replicate. Intriguingly, Shutterstock’s Maragoudakis suggests that the proliferation of AI-generated images might inadvertently enhance the value of high-quality, human-captured photographs. “High-quality, human-shot images may become more valuable as AI images become more common, and the scheme might shift from high-volume sales with low price points to lesser volume but higher price points,” he explains.

Stocksy, on the other hand, has completely shunned AI-generated imagery, citing the position of US copyright authorities: “The main issue from a legal standpoint is that the US copyright authorities do not consider art produced using generative AI to be protectable by copyright. Unless and until that position changes, uploading AI would breach your member agreement with Stocksy since we require all our artists to own the copyright in all content they upload. It would also breach the promises that we make to clients in our license agreement about the ownership of rights.”

What does the future hold for stock media?

Based on everything we’ve learned so far, I am making the following two predictions about the long-term impacts of this AI revolution on the stock media industry:

  1. AI-generated imagery will reduce the demand for generic stock photos. Consequently, stock image sites will continue to license stock imagery to AI companies, and lean more heavily towards accepting editorial and authentic contributions from real photographers.
  2. Smaller platforms without the resources to invest in AI technology will struggle to compete and will get either acquired by one of the big guys or go out of business.

I believe the same lies ahead for stock video, audio, and 3D content.

However, it’s not all bad. While AI will almost certainly displace the average stock photographer, who already earns only about $0.02 per photo per month, it will also create new roles, such as prompt engineer.


Meet your guide

Matic Broz

Matic Broz is a stock media licensing expert and photographer. He promotes the proper and responsible licensing of stock photography, footage, and audio, and his writing has reached millions of creatives.