Adobe To Introduce Tools To Protect Customers Against AI-Made Deepfakes

As a leader in digital creative tools, Adobe holds a significant responsibility to address the challenges posed by AI-driven deepfakes, misinformation, and content theft.

In the first quarter of 2025, Adobe plans to introduce its Content Authenticity web app in beta, allowing creators to apply content credentials to their work, certifying ownership.

This process goes beyond merely changing an image’s metadata, as that level of protection can easily be bypassed through screenshots.

Adobe’s content credentials offer enhanced protection by employing digital fingerprinting, invisible watermarking, and cryptographically signed metadata to safeguard artworks, including images, videos, and audio files.
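To make the idea of cryptographically signed metadata concrete, here is a minimal Python sketch of signing and verifying a provenance record. It is illustrative only: it uses the generic cryptography library and an Ed25519 key pair, whereas Adobe's Content Credentials are built on the C2PA specification and are considerably more elaborate.

```python
# Minimal sketch: tamper-evident metadata via a digital signature.
# Illustrative only -- not Adobe's implementation, which follows the C2PA spec.
import json
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# The creator (or their editing tool) holds a signing key.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

# Provenance claims attached to the asset (hypothetical fields).
metadata = {
    "creator": "Jane Doe",
    "tool": "Example Editor 1.0",
    "ai_generated": False,
}
payload = json.dumps(metadata, sort_keys=True).encode("utf-8")

# Sign the claims so any later edit to them invalidates the signature.
signature = private_key.sign(payload)

def verify(payload: bytes, signature: bytes) -> bool:
    """A verifier (e.g. a browser extension) checks the claims were not altered."""
    try:
        public_key.verify(signature, payload)
        return True
    except InvalidSignature:
        return False

print(verify(payload, signature))                             # True
print(verify(payload.replace(b"Jane", b"Evil"), signature))   # False: tampering detected
```

The point of the signature is that stripping or rewriting the metadata is detectable, which plain EXIF-style fields cannot guarantee on their own.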

Invisible watermarking alters pixels in a way that is imperceptible to the human eye, while the digital fingerprint encodes an ID into the file.

This ensures that even if content credentials are removed, the file can still be identified as belonging to its original creator.
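As a rough illustration of how an identifier can be hidden in pixel data without visibly changing an image, here is a toy least-significant-bit sketch in Python. Adobe's actual watermarking and fingerprinting are proprietary and designed to survive screenshots, compression, and cropping; this toy version is not robust to any of that.

```python
# Toy least-significant-bit watermark (illustrative only, not Adobe's technique).
import numpy as np

def embed_id(pixels: np.ndarray, asset_id: int, bits: int = 32) -> np.ndarray:
    """Hide `asset_id` in the least significant bit of the first `bits` pixels."""
    flat = pixels.flatten().copy()
    for i in range(bits):
        bit = (asset_id >> i) & 1
        flat[i] = (flat[i] & 0xFE) | bit   # overwrite only the lowest bit
    return flat.reshape(pixels.shape)

def extract_id(pixels: np.ndarray, bits: int = 32) -> int:
    """Recover the hidden identifier from the least significant bits."""
    flat = pixels.flatten()
    return sum((int(flat[i]) & 1) << i for i in range(bits))

# A 64x64 grayscale "image": each pixel value changes by at most 1,
# which is imperceptible, yet the ID can still be read back out.
image = np.random.randint(0, 256, size=(64, 64), dtype=np.uint8)
watermarked = embed_id(image, asset_id=0xC0FFEE)
assert extract_id(watermarked) == 0xC0FFEE
print(np.abs(watermarked.astype(int) - image.astype(int)).max())  # <= 1
```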

Adobe’s senior director of Content Authenticity, Andy Parsons, explained to TechCrunch that with this technology, Adobe can “truly say that wherever an image, or a video, or an audio file goes, on anywhere on the web or on a mobile device, the content credential will always be attached to it.”

The effectiveness of opt-in initiatives like this depends on widespread adoption.

Adobe, with its 33 million paying subscribers, is well-positioned to achieve this. Even non-Adobe users can apply content credentials using the web app.

A key challenge is making content credentials universally accessible across the internet.

Adobe has co-founded two industry groups focused on preserving content authenticity and promoting transparency and trust online.

These groups include camera manufacturers representing 90% of the market, as well as companies like Microsoft, OpenAI, TikTok, LinkedIn, Google, Instagram, and Facebook.

While their involvement doesn’t guarantee that Adobe’s content credentials will be integrated into their products, it does ensure Adobe has their attention.

However, not all social media platforms and websites currently display provenance information prominently.


“In the meantime, to bridge that gap, we’re going to release the Content Authenticity browser extension for Chrome as part of this software package, and also something we call the Inspect tool within the Adobe Content Authenticity website,” Parsons said.

“These will help you discover and display content credentials wherever they are associated with content anywhere on the web, and that can show you again who made the content, who gets credit for it.”

Interestingly, AI itself is not particularly adept at distinguishing AI-generated content from real content.

As it becomes harder to differentiate between real and synthetic images, these tools can provide a more reliable method of determining an image’s origin, provided it has credentials.

Adobe is not opposed to AI. Instead, it aims to provide transparency about when AI is used in an artwork and prevent artists’ work from being utilized in AI training datasets without consent.

Adobe has even developed its own generative AI tool called Firefly, which is trained using Adobe Stock images.

“Firefly is commercially safe, and we only train it on content that Adobe explicitly has permission to use, and of course, never on customer content,” Parsons said.

While many artists have expressed concerns about AI tools, Adobe’s Firefly integrations in apps like Photoshop and Lightroom have been well received.

Parsons noted that Photoshop’s generative fill feature, which can extend images using prompts, saw a 10x adoption rate compared to typical Photoshop features.

Adobe has also collaborated with Spawning, a tool designed to help artists maintain control over how their work is used online.

Spawning’s website, “Have I Been Trained?,” enables artists to search for their works in popular AI training datasets.

Artists can also add their works to a Do Not Train registry, signaling to AI companies that their content should not be included in training datasets.

While this only works if AI companies respect the list, companies like Hugging Face and Stability have already shown support.
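As a hedged sketch of how a training-data pipeline might honor such a registry, the snippet below filters out opted-out URLs before collection. The registry endpoint and response format here are hypothetical placeholders, not Spawning's documented API.

```python
# Hypothetical sketch of a dataset-building step that honors a do-not-train
# registry. REGISTRY_URL and the JSON response shape are invented for
# illustration; consult Spawning's actual documentation for the real interface.
import requests

REGISTRY_URL = "https://example-registry.invalid/check"  # hypothetical endpoint

def is_opted_out(image_url: str) -> bool:
    """Ask the (hypothetical) registry whether this URL has opted out of training."""
    resp = requests.get(REGISTRY_URL, params={"url": image_url}, timeout=10)
    resp.raise_for_status()
    return bool(resp.json().get("opted_out", False))

def filter_training_urls(candidate_urls: list[str]) -> list[str]:
    """Drop any URL whose creator has asked not to be included in training data."""
    return [url for url in candidate_urls if not is_opted_out(url)]

if __name__ == "__main__":
    urls = ["https://example.com/a.jpg", "https://example.com/b.png"]
    print(filter_training_urls(urls))
```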

On Tuesday, Adobe will launch the beta version of its Content Authenticity Chrome extension. Creators can also sign up to be notified when the full web app’s beta becomes available next year.

John Edward
John Edward is a distinguished market trends analyst and author renowned for his insightful analyses of global financial markets. Born and raised in New York City, Edward's early fascination with economics led him to pursue a degree in Finance from the Wharton School at the University of Pennsylvania. His work is characterized by a meticulous approach to data interpretation, coupled with a deep understanding of macroeconomic factors that influence market behavior.