There was a time—not long ago—when questioning whether a photograph was real never crossed your mind. You might wonder if someone edited the background, adjusted the colors, maybe removed a blemish or two. But the fundamental scene? The people in it? The event itself? Those were real. That assumption, built over more than a century of photography, is gone.
Now, entire scenes are fabricated. People who were never present appear in photos. Events that never happened have documentation. The White House publishes AI-manipulated images of people getting arrested. Deepfakes of government officials flood social media during crises. Convincing fake videos of protests spread faster than journalists can verify them. We’ve crossed from “is this edited?” to “did this even happen?”
Instagram’s Adam Mosseri made it official on New Year’s Eve. “For most of my life, I could safely assume photographs or videos were largely accurate captures of moments that happened,” he wrote. “This is clearly no longer the case.” His recommended response: “We’re going to move from assuming what we see as real by default to starting with skepticism.” The head of one of the world’s largest photo-sharing platforms is telling users they can no longer trust what they see.
The tech industry saw this coming. Years ago, major players including Adobe, Meta, Microsoft, Google, and OpenAI backed C2PA, a metadata standard from the Coalition for Content Provenance and Authenticity designed to label AI-generated content and preserve some shared sense of reality. The idea was simple: embed tamper-evident metadata at creation, have platforms read it, and show users clear labels distinguishing real from fake.
It’s not working. Despite corporate backing and years of development, C2PA is failing. The metadata strips easily—sometimes accidentally by the very platforms claiming to support the standard. Apple refuses to participate. Creators hate the labels. Platforms can’t agree on what counts as “AI-generated.” The system built to save reality is becoming just another “meritless badge” companies display while doing nothing substantive.
WireUnwired • Fast Take
- C2PA, the metadata standard backed by Adobe, Meta, Google, and OpenAI to label AI content, is failing
- Metadata easily stripped by platforms during upload—even accidentally by companies supporting the standard
- Apple refuses to participate; platforms don’t implement detection; creators hate the labels
- Instagram’s Mosseri: “Start with skepticism”—we can no longer assume photos/videos are real

The Technical Reasons Behind C2PA's Failure
C2PA embeds metadata at creation—when you snap a photo, edit a file in Photoshop, or generate an image with AI. This metadata travels with the file, recording every edit, tool used, and transformation. Platforms scan for this metadata and display labels: “AI-generated,” “Edited,” or “Authentic.”
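For a concrete sense of what “travels with the file” means: in a JPEG, the C2PA manifest rides along as JUMBF data inside APP11 marker segments. The sketch below (Python, written against that packaging assumption; the function name is ours) walks the JPEG’s marker segments and reports whether a manifest appears to be present. It is a rough presence check only, not a validator—actually verifying the signed provenance chain requires a real C2PA SDK.

```python
# Rough heuristic: does a JPEG appear to carry a C2PA manifest?
# C2PA's JPEG packaging puts the manifest store in APP11 (0xFFEB) segments
# as JUMBF boxes labeled "c2pa", so we walk the marker segments and look
# for that label. Presence check only -- no signature verification.
import struct
import sys

def has_c2pa_manifest(path: str) -> bool:
    with open(path, "rb") as f:
        data = f.read()
    if data[:2] != b"\xff\xd8":            # not a JPEG (missing SOI marker)
        return False
    i = 2
    while i + 4 <= len(data):
        if data[i] != 0xFF:                # lost sync with the marker structure
            break
        marker = data[i + 1]
        if marker == 0xDA:                 # start of scan: pixel data follows
            break
        seg_len = struct.unpack(">H", data[i + 2:i + 4])[0]
        payload = data[i + 4:i + 2 + seg_len]
        if marker == 0xEB and b"c2pa" in payload:   # APP11 segment with JUMBF
            return True
        i += 2 + seg_len
    return False

if __name__ == "__main__":
    for p in sys.argv[1:]:
        status = "C2PA metadata present" if has_c2pa_manifest(p) else "no C2PA metadata found"
        print(f"{p}: {status}")
```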
The problem: metadata strips easily. OpenAI, a steering committee member, openly admits C2PA data is “incredibly easy to remove” and platforms might strip it accidentally during uploads. Instagram, Facebook, LinkedIn, and Threads all claim to support C2PA, yet metadata vanishes when images upload. Whether intentional or accidental doesn’t matter—the effect is the same.
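How easy is “incredibly easy to remove”? Any pipeline that decodes pixels and writes out a fresh file drops the provenance segments unless someone deliberately copies them across. The sketch below uses Pillow as a stand-in for a platform’s resize-and-transcode step; the library choice, sizes, and file names are assumptions, but the outcome (metadata gone after an ordinary re-save) is the general point.

```python
# Minimal illustration of how fragile provenance metadata is: decoding a
# JPEG and re-saving it -- as most upload pipelines do -- silently discards
# APP11/JUMBF segments unless they are explicitly carried over. Pillow is
# an assumed stand-in for whatever transcode step a platform actually runs.
from PIL import Image

def simulate_upload(src: str, dst: str, max_side: int = 1080) -> None:
    """Decode, downscale, re-encode: the pixels survive, the provenance doesn't."""
    with Image.open(src) as im:           # assumes a plain RGB JPEG source
        im.thumbnail((max_side, max_side))  # typical feed-size downscale
        im.save(dst, "JPEG", quality=85)    # fresh JPEG: C2PA segments are gone

# Pairing this with has_c2pa_manifest() from the previous sketch makes the
# loss visible (hypothetical file names):
#   has_c2pa_manifest("camera_original.jpg")   -> True
#   simulate_upload("camera_original.jpg", "reuploaded.jpg")
#   has_c2pa_manifest("reuploaded.jpg")        -> False
```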
Even when metadata survives, platforms don’t know what to do with it. How much AI editing makes something “AI-generated”? Photoshop’s AI-powered tools include basic features like noise reduction and sharpening. Should those trigger “made with AI” labels? Nobody’s defined the threshold, so implementation varies wildly or doesn’t happen at all.
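To make the ambiguity concrete, here is the missing threshold written as a tiny, purely illustrative policy function. The action names are simplified stand-ins for C2PA’s edit-action assertions, not the spec’s actual identifiers, and the split into “generative” versus “assistive” is exactly the judgment call no two platforms make the same way.

```python
# Illustrative only: the policy question platforms can't agree on, as code.
# The action names are invented stand-ins for C2PA action assertions; the
# grouping below is one possible policy, not a standard.
GENERATIVE = {"ai_generated", "ai_generative_fill"}       # whole-cloth synthesis
ASSISTIVE  = {"ai_denoise", "ai_sharpen", "ai_upscale"}   # AI-assisted cleanup

def label_for(actions: list[str]) -> str:
    if any(a in GENERATIVE for a in actions):
        return "Made with AI"
    if any(a in ASSISTIVE for a in actions):
        # This is the contested middle ground: is noise reduction "AI"?
        return "AI-assisted edit"
    return "No AI label"

print(label_for(["ai_denoise"]))                  # -> AI-assisted edit
print(label_for(["crop", "ai_generative_fill"]))  # -> Made with AI
```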
The Adoption Failure
C2PA only works if everyone participates. Google embeds metadata in Pixel phones. A handful of camera makers (Leica, Sony, Nikon) add it to new models. Adobe includes it in Creative Cloud. That’s the creation side—and it barely scratches the surface, since older cameras can’t be retrofitted and Apple refuses to participate at all.
Apple’s absence is devastating. Apple is the world’s largest camera manufacturer by volume (iPhones), so its non-participation means most photos lack C2PA data from the start. Apple hasn’t publicly explained why, but sources suggest it is “carefully skirting on the sidelines,” perhaps recognizing that current solutions are fundamentally flawed.
Distribution platforms are worse. Twitter—a founding C2PA member—abandoned the standard after Elon Musk’s acquisition. TikTok claims involvement, but labels appear inconsistently. YouTube runs SynthID and supports C2PA, yet AI videos flood the platform unlabeled. Pinterest is “unusable” for anyone trying to avoid synthetic images: despite the company’s C2PA membership, there is no way to filter out AI-generated content.
The Incentive Problem
Companies profiting from AI-generated content have no incentive to label it effectively. Google invests billions in AI while running YouTube, where AI slop proves profitable. Meta pours resources into AI research while operating Instagram and Facebook. These platforms make money from engagement—AI content generates engagement. Labeling it as “fake” or “AI-generated” would devalue that content and reduce time-on-platform.
Creators hate the labels too. When Instagram first implemented C2PA detection, it slapped “made with AI” tags on photos using basic AI-powered editing features. Creators revolted—the label made their work seem less valuable, less authentic, less worthy of attention. Instagram quickly backed off, and the industry learned: labels make everyone angry.
This creates impossible dynamics. Users want to know what’s real. Creators don’t want labels devaluing their work. Platforms profit from AI content regardless of authenticity. Nobody has strong incentives to make labeling work, despite everyone claiming to support it.
What Actually Happens Now
Mosseri’s “start with skepticism” represents the industry’s surrender. Instead of labeling AI content, Instagram might shift to verifying real content—authenticating trusted photographers and news organizations while treating everything else as suspect. This inverts the problem without solving it, since AI can still flood platforms from unverified sources.
The original vision—C2PA metadata traveling with every image, platforms reading it reliably, users seeing clear labels—requires universal adoption and consistent implementation. That’s not happening. Apple won’t join. X abandoned it. Platforms strip metadata accidentally. Even when they want to implement it, they can’t agree on what counts as “AI-generated.”
Meanwhile, the need intensifies. Government agencies publish AI-manipulated photos. Election misinformation uses convincing deepfakes. Social movements organize around videos that might be fake. The foundation of “I saw it, so it’s real” has crumbled, and the system built to replace it with verified authenticity doesn’t work.
Regulatory intervention seems inevitable. The EU will likely mandate labeling standards. The U.S. might follow. But regulation faces the same technical challenges: metadata strips easily, platforms implement inconsistently, and defining “AI-generated” remains contentious. Laws can’t fix incentive misalignments or technical limitations.
The endpoint Mosseri described—skepticism as default, trust only verified sources, assume everything’s manipulated until proven otherwise—represents a fundamental shift in how society processes visual information. For most of human history, seeing was believing. That era ended, and the industry’s attempt to restore it through metadata standards failed before it truly began.
C2PA will continue existing. Companies will keep claiming they support it. Press releases will tout progress. But the gap between “supports C2PA” and “successfully labels AI content” has become unbridgeable given current incentives and implementation realities. We’re past the point where standards and good intentions could solve this. What comes next is anyone’s guess, but it won’t be metadata labels saving reality.
For discussions on AI-generated content, platform accountability, and the future of digital media trust, join our WhatsApp community where journalists and technologists discuss media authenticity challenges.