C2PA Specifications: a Devious Way to Automate Suppression of Viewpoints?
I’m struggling to understand how this is useful for a regular photographer.
And how this double-edged sword will do more good than harm. For example, if images can be traced back to the camera that took them, and the camera tied to its owner, the same mechanism could be used to track down political dissenters (bad) or child pornographers (good). But it seems like it might be a lot worse than that; more on this below.
But maybe there is some value in you and me being able to certify our own images. Though why I would bother, I’m not sure.
C2PA Explainer
The Coalition for Content Provenance and Authenticity (C2PA) addresses the prevalence of misleading information online through the development of technical standards for certifying the source and history (or provenance) of media content. C2PA is a Joint Development Foundation project, formed through an alliance between Adobe, Arm, Intel, Microsoft and Truepic.
This site contains the various specifications and documents produced by the C2PA.
Part of the stated goal is to help consumers check the provenance of the media they are consuming.
Alice sends a video to a friend, Bob. The video includes text with alarming and controversial allegations. Bob immediately seeks confirmation of its validity, starting with its provenance. The video that Alice sent contains C2PA provenance. With a C2PA-enabled application, Bob is able to establish that this video has been validated as being published by an organisation he can trust and is held in public high regard.
Ummm.... for starters, Bob doesn’t give a shit. Who is going to do this?
But assuming Bob is a nerd, is Bob going to trust propaganda outlets like the NYT to tell the truth? Or the US military to give us “real” images? All this seems to do is verify that “bullshit discredited organization <any acronym> claims this is real”. Given that virtually nothing in the news can be taken as real, let alone at face value, how does this help? It seems like a way to let already untrustworthy organizations make claims they can be held to. Wait, maybe that’s a good thing.
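Mechanically, the check Bob’s application performs boils down to: hash the content, verify the signature over that hash, and look up the signer in a trust list. Here is a toy sketch of that flow in Python. Everything in it is hypothetical: real C2PA manifests use COSE signatures and X.509 certificate chains, not the HMAC shared-secret stand-in used here, and the manifest layout and names are invented for illustration.

```python
import hashlib
import hmac

# Hypothetical trust list: in real C2PA this is a set of trusted
# X.509 certificate authorities, not shared secrets.
TRUSTED_KEYS = {"Example News Org": b"example-org-signing-key"}

def sign(content: bytes, org: str) -> dict:
    """Produce a toy 'manifest' binding the content hash to a named publisher."""
    key = TRUSTED_KEYS[org]
    digest = hmac.new(key, hashlib.sha256(content).digest(), "sha256").hexdigest()
    return {"signer": org, "signature": digest}

def verify(content: bytes, manifest: dict) -> bool:
    """Recompute the signature and check it against the trust list."""
    key = TRUSTED_KEYS.get(manifest["signer"])
    if key is None:
        return False  # signer is not in the trust list at all
    expected = hmac.new(key, hashlib.sha256(content).digest(), "sha256").hexdigest()
    return hmac.compare_digest(expected, manifest["signature"])

video = b"alarming and controversial allegations..."
manifest = sign(video, "Example News Org")
print(verify(video, manifest))            # True: provenance checks out
print(verify(video + b"edit", manifest))  # False: content was altered after signing
```

Note what the check actually proves: that the bytes haven’t changed since a party on the trust list signed them. It says nothing about whether the content is true.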
Nefarious side
This is hardly speculation, given what the past few years have shown us.
The rationale should scare the crap out of you: “organization he can trust and is held in public high regard”. As in every lying acronym organization that exists and cannot be trusted with anything, and whose data is total garbageware? Who determines who is trustworthy? But maybe that doesn’t change much, since it boils down to trusting the party in the first place. What this does is hold that party accountable for the content it publishes.
Consider that signed content can provide a 100% automated way to delete/suppress content that any organization or government or computer routing network doesn’t like. Or only allow content emanating from an approved organization. Maybe I’m missing something, but this looks like a powerful hammer for control of visual media.
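To make the suppression point concrete, here is a sketch of what such an automated filter could look like. Every name and the manifest shape are invented for illustration; the point is only that once provenance is machine-readable, “block everything not signed by an approved organization” is a three-line policy.

```python
# Hypothetical allowlist of signers a platform or network chooses to approve.
APPROVED_SIGNERS = {"Trusted News Corp", "Official Gov Press Office"}

def moderation_decision(manifest):
    """Return 'allow' or 'suppress' based solely on who signed the content."""
    if manifest is None:
        # Unsigned content has no provenance, so it is trivial to suppress by default.
        return "suppress"
    return "allow" if manifest.get("signer") in APPROVED_SIGNERS else "suppress"

# An unsigned photo and one signed by an unapproved outlet both vanish:
print(moderation_decision(None))                            # suppress
print(moderation_decision({"signer": "Independent Blog"}))  # suppress
print(moderation_decision({"signer": "Trusted News Corp"})) # allow
```

Note the filter never inspects the content itself, only the identity of the signer, which is exactly what makes it both 100% automatable and a blunt instrument for viewpoint control.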