Is AI killing stock photography or saving it?

Overview

Generative AI has turned the stock photo world upside down. Is it the end of creative work or a powerful new beginning? In this episode of Today in Tech, Keith Shaw speaks with Alessandra Sala, Head of Artificial Intelligence and Data Science at Shutterstock, about how AI is reshaping content creation, licensing, ethics, and the future of creative jobs. Get an inside look at what’s really happening behind the scenes as Shutterstock balances human creativity with the rise of AI-generated imagery.

Transcript

Keith Shaw: AI didn’t just enter the creative world — it detonated it. Today we ask: can human creativity survive when anyone can generate “perfect” images in seconds? Hi everybody, welcome to Today in Tech. I’m Keith Shaw.

Joining me today is Alessandra Sala, Head of Artificial Intelligence and Data Science at Shutterstock. Welcome to the show, Alessandra.

Alessandra: Hello, Keith. Thanks for having me.

Keith: Thanks for joining us. Let’s start big picture. Generative AI has changed how people think about creating images. Shutterstock has long been the place people go to license visuals for marketing and media.

When AI image generation emerged — and the results became surprisingly good — how did Shutterstock decide to adapt rather than resist?

Alessandra: You’re bringing me back to when I joined Shutterstock five years ago. Even before generative AI, we were already using AI to power marketplace performance, search, recommendations, and customer engagement. We had strong technical capabilities and close awareness of where the industry was headed.

Computer vision and generative technology weren’t new — we were already deeply involved — but when DALL·E 1 arrived, it showed a huge leap in capability. At that point, we realized this wasn’t just another tool. It was a turning point. Resisting wasn’t the right strategy.

Our focus became figuring out how to integrate this technology into our marketplace in a healthy way — one that supports creativity, artists, and our business model at the same time.

Keith: Was there debate internally? The choice between adapting or resisting must have been complicated.

Alessandra: The debate lasted years — and honestly, it still continues. It’s not a simple binary choice. It’s a spectrum: how much do you adapt, how deeply do you integrate, and how do you do it safely? We constantly evaluate our position, test new functionality, reassess, and keep evolving.

That debate is ongoing as the technology itself keeps changing.

Keith: Shutterstock now has its own AI image generator. Is that correct?

Alessandra: Yes. We offer AI generation directly on our platform, but with a marketplace mentality. We provide access to a variety of best-in-class technologies through one subscription.

For example, when I needed to create a generated video of myself for a presentation, I had to subscribe to four different tools to get the specific capabilities I wanted — movement, setting, accessories. Who can afford that complexity?

With our platform, users get access to the best technologies in one place and can choose what fits their needs with a single subscription.

Keith: We’ve seen public issues around early image generators — bias problems, copyright risks, deepfakes, election misinformation. What kind of guardrails exist inside Shutterstock to make AI generation safer?

Alessandra: Safety is harder than generation. These models naturally amplify bias and are capable of producing harmful or unsafe content. So our system includes a middle layer between the user and the generator. Prompts are monitored, cleaned, and rewritten to reduce risk, and then outputs are audited before delivery.

It’s not a simple “prompt in, image out” process. It’s a complex safety pipeline — and it must continuously evolve because safeguards are always tested by misuse.
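
To make this middle layer concrete, here is a minimal sketch of a safety pipeline that screens and rewrites a prompt, generates, and then audits the output before anything is returned to the user. It is an illustrative outline under assumed policies, not Shutterstock’s implementation; every name in it (safe_generate, the blocklist, the rewrite rules, the stub generator and audit checks) is a hypothetical placeholder.

    # Illustrative sketch of a "middle layer" safety pipeline: screen and rewrite the
    # prompt, generate, then audit the output before returning it. All names, rules,
    # and checks here are hypothetical placeholders, not Shutterstock's implementation.
    from dataclasses import dataclass

    BLOCKED_TERMS = {"deepfake", "gore"}                                   # placeholder policy list
    REWRITE_RULES = {"photorealistic celebrity": "photorealistic person"}  # placeholder rewrites

    @dataclass
    class GenerationResult:
        image_id: str
        approved: bool
        reason: str = ""

    def screen_prompt(prompt: str) -> bool:
        """Reject prompts containing blocked terms (a real system would use trained classifiers)."""
        lowered = prompt.lower()
        return not any(term in lowered for term in BLOCKED_TERMS)

    def rewrite_prompt(prompt: str) -> str:
        """Rewrite risky phrasings into safer equivalents before generation."""
        for risky, safe in REWRITE_RULES.items():
            prompt = prompt.replace(risky, safe)
        return prompt

    def generate_image(prompt: str) -> str:
        """Stand-in for the underlying image generator; returns a fake asset ID."""
        return f"img_{abs(hash(prompt)) % 10_000}"

    def audit_output(image_id: str) -> bool:
        """Stand-in for post-generation review (likeness, unsafe content, bias checks)."""
        return True

    def safe_generate(user_prompt: str) -> GenerationResult:
        if not screen_prompt(user_prompt):
            return GenerationResult(image_id="", approved=False, reason="prompt blocked")
        cleaned = rewrite_prompt(user_prompt)
        image_id = generate_image(cleaned)
        if not audit_output(image_id):
            return GenerationResult(image_id="", approved=False, reason="output failed audit")
        return GenerationResult(image_id=image_id, approved=True)

    if __name__ == "__main__":
        print(safe_generate("photorealistic person at a press conference"))

The point of the sketch is the shape of the flow (screen, rewrite, generate, audit) rather than any particular rule; in a production system each stub would be replaced by dedicated classifiers and review processes.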

Keith: Most of your customers are legitimate creative professionals, not people trying to break safeguards. Are people generating content more than they’re searching now, or are both approaches coexisting?

Alessandra: It’s shifting and evolving. Early on, when the generator was free, misuse was higher. But business customers — designers, marketers — use the tools responsibly. We’re attracting new customers through AI generation who then discover traditional human-created stock content.

At the same time, traditional stock users are experimenting with generation — sometimes just for ideation, sometimes for campaigns. We see usage across both sides of the platform, and they reinforce each other.

Keith: There’s confusion about ownership of AI content. What do “legal” and “ethical” AI content actually mean right now?

Alessandra: Globally, laws are still emerging. Italy recently implemented guidance aligned with the EU AI Act stating that generative AI-assisted works can be attributed to human authors, assigning responsibility and allowing copyright. That’s a meaningful shift. In the US, however, purely AI-generated works are generally not eligible for copyright.

At the same time, governments assert IP control over the data used to train models — which creates legal contradictions. We’re clearly in a transition phase without consistent global standards.

Alessandra: One thing I’d like to emphasize is the immense value of artists. When our studio artists work with generative tools, they can spend hundreds of prompt iterations perfecting a single piece. The results are visually superior to casual two-prompt outputs.

Copyright protections for artists are critical so they can continue to earn a living from their creative skill.

Keith: AI images now sometimes look almost “too perfect.” Early glitches were obvious — bad hands, distorted faces — but now the images feel generic. Is that a prompting issue?

Alessandra: Prompting skill is underestimated. Model training data also tends to reinforce repeated patterns, which leads to similarity and homogeneity. But as we discover weaknesses, models improve. Research focused on diversity and bias reduction is driving future versions toward greater authenticity and imperfection — which better reflects real life.

Keith: Do you think we’ll see a backlash against generative content?

Alessandra: I don’t think it’s generational — it’s preference-based. Think of fashion: mass-market brands coexist with luxury brands. Visual creation is the same. AI opens new possibilities without replacing human creativity. Different audiences will use different tools depending on their goals.

Keith: Is AI killing creativity — or unlocking it?

Alessandra: I believe it unlocks creativity. I work with filmmakers and artists worldwide — some use AI to produce powerful storytelling never before possible. But many artists are understandably scared. Education, access, and ethical platforms are essential to ensuring human creativity and AI coexist in a healthy way.

Keith: Transparency is another concern. Labels indicating AI generation or “no AI used” seem important. What’s your view?

Alessandra: Transparency is critical for trust. Standards like C2PA and JPEG Trust enable metadata labeling so consumers understand how content was created or modified. California’s AI Transparency Act, effective January 1, 2026, will require labeling of AI-generated content. These systems bring accountability into digital media.
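
For readers unfamiliar with content credentials, the sketch below shows the kind of information a provenance label can carry for a generated asset. It is illustrative only: the actual C2PA and JPEG Trust specifications define their own signed, embedded manifest formats, and every field name here is invented for the example.

    # Illustrative only: a simplified, made-up provenance record in the spirit of
    # content credentials. The real C2PA and JPEG Trust standards define their own
    # manifest structures, signing, and embedding rules; these fields are not them.
    import json
    from datetime import datetime, timezone

    def build_provenance_label(asset_id: str, generator: str, ai_generated: bool) -> str:
        """Return a JSON label describing how an asset was created."""
        record = {
            "asset_id": asset_id,
            "ai_generated": ai_generated,
            "generator": generator,                 # the model or tool used, if any
            "created_at": datetime.now(timezone.utc).isoformat(),
            "edits": [],                            # downstream modifications would be appended here
        }
        return json.dumps(record, indent=2)

    if __name__ == "__main__":
        print(build_provenance_label("img_1234", "example-image-model-v1", ai_generated=True))

A real credential is also cryptographically signed and bound to the file itself, so the label cannot be silently stripped or altered without detection.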

Keith: Looking ahead, what excites you most beyond the hype?

Alessandra: I’m excited by our work on trust, bias mitigation, and safety. Creativity-wise, we’ve only scratched the surface. Emerging areas like immersive 3D animation and spatial computing will enable new storytelling formats that resonate deeply with people.

Keith: Are the commercial models improving on bias and diversity?

Alessandra: Yes, they’re improving, but issues remain. Transparency audits show there’s still work to do in bringing research breakthroughs fully into large-scale commercial platforms.

Alessandra: Thank you, Keith. It’s been a pleasure.

Keith: Thanks, Alessandra, for joining us. And thanks to everyone watching. Be sure to like the video, subscribe, and leave your thoughts in the comments. Join us next week for another episode of Today in Tech. I’m Keith Shaw — thanks for watching.