Attempts at a Technological Solution to Disinformation Will Do More Harm Than Good
There is widespread concern today about the use of generative AI and deepfakes to create fake videos that can manipulate and deceive people. Many are asking whether there is any way technology can help solve this problem by allowing us to confidently establish whether an image or video has been altered. It is not an easy task, but a number of techniques for doing so have been proposed. They include, most prominently, a system of “content authentication” supported by a number of big tech firms, which was discussed in the House AI Task Force report released this month. The ACLU has doubts about whether these techniques will be effective and serious concerns about their potential harmful effects.
There are a variety of interesting techniques for detecting altered images, including frames from videos, such as statistical analyses of discontinuities in the brightness, tone, and other elements of pixels. The problem is that any tool that is smart enough to identify features of a video that are characteristic of fakes can probably also be used to erase those features and make a better fake. The result is an arms race between fakers and fake detectors that makes it hard to know if an image has been maliciously tampered with. Some have predicted that efforts to identify AI-generated material by analyzing the content of that material are doomed. This has led a number of technologists to propose another approach to proving the authenticity of digital media: cryptography. In particular, many of these proposals are built around a technique called “digital signatures.”
Using Cryptography to Prove Authenticity
If you take a digital file — a photograph, video, book, or other piece of data — and digitally process or “sign” it with a secret cryptographic “key,” the output is a very large number that represents a digital signature. If you change a single bit in the file, the digital signature is invalidated. That is a powerful technique, because it lets you prove that two documents are identical — or not — down to every last one or zero, even in a file that has billions of bits, like a video.
Under what is known as public key cryptography, the secret “signing key” used to sign the file has a mathematically linked “verification key” that the manufacturer publishes. That verification key only matches with signatures that have been made with the corresponding signing key, so if the signature is valid, the verifier knows with ironclad mathematical certainty that the file was signed with the camera manufacturer’s signing key, and that not a single bit has been changed.
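To make the mechanics concrete, here is a minimal sketch of signing and verifying a file in Python, assuming the Ed25519 signature scheme as implemented by the widely used “cryptography” package. The stand-in bytes and freshly generated keys are hypothetical and are not drawn from any particular camera system.

```python
# Minimal sketch of digital signing and verification using Ed25519.
# Assumes the Python "cryptography" package; the data and keys are stand-ins.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# The "signing key" stays secret (for a camera, embedded in its hardware).
signing_key = Ed25519PrivateKey.generate()
# The matching "verification key" is what the manufacturer would publish.
verification_key = signing_key.public_key()

video_bytes = b"...stand-in for the raw bytes of a video file..."
signature = signing_key.sign(video_bytes)  # a very large number, as bytes

# Anyone holding the published verification key can check the signature.
try:
    verification_key.verify(signature, video_bytes)
    print("File is bit-for-bit identical to what was signed.")
except InvalidSignature:
    print("File was altered, or signed with a different key.")

# Changing even a single bit invalidates the signature.
tampered = bytearray(video_bytes)
tampered[0] ^= 0x01
try:
    verification_key.verify(signature, bytes(tampered))
except InvalidSignature:
    print("The tampered copy fails verification.")
```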
Given these techniques, many people have thought that if you can just digitally sign a photo or video when it’s taken (ideally in the camera itself) and store that digital signature somewhere where it can’t be lost or erased, like a blockchain, then later on you can prove that the imagery hasn’t been tampered with since it was created. Proponents want to extend these systems to cover editing as well as cameras, so that if someone adjusts an image using a photo or video editor the file’s provenance is retained along with a record of whatever changes were made to the original, provided “secure” software was used to make those changes.
For example, suppose you are standing on a corner and you see a police officer using force against someone. You take out your camera and begin recording. When the video is complete, the file is digitally signed using the secret signing key embedded deep within your camera’s chips by its manufacturer. You then go home and, before posting it online, use software to edit out a part of the video that identifies you. The manufacturer of the video editing software likewise has an embedded secret key that it uses to record the editing steps that you made, embed them in the file, and digitally sign the new file. Later, according to the concept, someone who sees your video online can use the manufacturers’ public verification keys to prove that your video came straight from the camera, and wasn’t altered in any way except for the editing steps you made. If the digital signatures were posted in a non-modifiable place like a blockchain, you might also be able to prove that the file was created at least as long ago as the signatures were placed in the public record.
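A highly simplified, hypothetical sketch of that chain of signatures appears below. Real-world content authentication proposals embed far more structure than this, but the basic shape is that each party signs a hash of its output together with a record of what it did and the previous link’s signature.

```python
# Toy model of a camera-to-editor provenance chain. The keys, file contents,
# and manifest format are all hypothetical illustrations, not a real standard.
import hashlib
import json

from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

camera_key = Ed25519PrivateKey.generate()  # stand-in for a key embedded by the camera maker
editor_key = Ed25519PrivateKey.generate()  # stand-in for a key embedded by the editor vendor

# 1. The camera signs a hash of the original recording.
original = b"raw video bytes straight from the sensor"
original_hash = hashlib.sha256(original).hexdigest()
camera_sig = camera_key.sign(original_hash.encode())

# 2. The editing tool records the edits it made and signs a manifest covering
#    the edited file's hash, the edit record, and the camera's signature.
edited = b"video bytes with the identifying segment removed"
manifest = json.dumps(
    {
        "original_hash": original_hash,
        "camera_signature": camera_sig.hex(),
        "edits": ["trim 00:41-00:47"],
        "edited_hash": hashlib.sha256(edited).hexdigest(),
    },
    sort_keys=True,
).encode()
editor_sig = editor_key.sign(manifest)

# 3. A viewer with the published verification keys checks that the file they
#    received matches the manifest and that both links in the chain were signed.
claims = json.loads(manifest)
assert hashlib.sha256(edited).hexdigest() == claims["edited_hash"]
editor_key.public_key().verify(editor_sig, manifest)
camera_key.public_key().verify(camera_sig, claims["original_hash"].encode())
print("Provenance chain verifies (in this toy model).")
```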
Content Authentication Schemes Are Flawed
The ACLU is not convinced by these “content authentication” ideas. In fact, we’re worried that such a system could have pernicious effects on freedom.
The different varieties of these schemes for content authentication share similar flaws. One is that such schemes may amount to a technically enforced oligopoly on journalistic media. In a world where these technologies are standard and expected, any media lacking such a credential would be flagged as “untrusted.” These schemes establish a set of cryptographic authorities that get to decide what is “trustworthy” or “authentic.” Imagine that you are a media consumer or newspaper editor in such a world. You receive a piece of media that has been digitally signed by an upstart image editing program that a creative kid wrote at home. How do you know whether you can trust that kid’s signature — that they’ll only use it to sign authentic media, and that they’ll keep their private signing key secret so that others can’t digitally sign fake media with it?
The result is that you end up trusting only tightly controlled legacy platforms operated by big vendors like Adobe, Microsoft, and Apple. If this scheme works, you’ll only get the badge of journalistic authenticity if you use Microsoft or Adobe.
Furthermore, if “trusted” editing is only doable on cloud apps, or on devices under the full control of a group like Adobe, what happens to the privacy of the photographer or editor? If you have a recording of police brutality, for example, you may want to ask the police for their story about what happened before you reveal your media, to determine whether the police will lie. But if you edit your media on a platform controlled by a company that regularly gives in to law enforcement requests, they might well get access to your media before you are willing to release it.
Locking down hardware and software chains may help authenticate some media, but it would not be good for freedom. It would pose severe threats to equity in who gets to easily share their stories and lived experiences. If you live in a developing country or a low-income neighborhood in the U.S., for example, and don’t have or can’t afford access to the latest authentication-enabled devices and editing tools, will you find that your video of the authorities carrying out abuses will be dismissed as untrusted?
It’s not even certain that these schemes would work to prevent an untrustworthy piece of media from being marked as “trusted.” Even a locked-down technology chain can fail against a dedicated adversary. For example:
- Sensors in the camera could be tricked, for example by spoofing GPS signals to make the “secure” hardware attest that the location where photography took place was somewhere other than where it really was.
- Secret signing keys could be extracted from the camera’s hardware. Once the keys are extracted, they can be used to create signatures over data that did not actually originate with that camera, but can still be verified with the corresponding verification key.
- Editing tools or cloud-based editing platforms could potentially be tricked into signing material that they didn't intend to sign, either by compromising the software or infrastructure that supports those tools, or by exploiting flaws in the tools themselves.
- Synthetic data could be laundered through the “analog hole.” For example, a malicious actor could generate a fake video, which would not have any provenance information, and play it back on a high-resolution monitor. They could then set up an authentication-capable camera so that the monitor fills the camera’s field of view and hit “record.” The video produced by the camera would then have “authentic” provenance information, even though the scene itself did not exist outside of the screen.
- Cryptographic signature schemes have often proven to be weaker in practice than people think, whether because of implementation problems or because of how humans use and interpret the signatures.
Another commonly proposed approach to helping people identify the provenance of digital files is the opposite of the scheme described above. Instead of trying to establish proof that content is unmodified, the idea is to establish proof that modified content has been modified. To do this, these schemes require every AI image-generation tool to register all “non-authentic” photos and videos using a signature or a watermark. Then people can check whether a photo was created by AI rather than a camera.
There are numerous problems with this concept. People can strip digital signatures, evade media comparison, or elide watermarks by changing parts of the media. They can create a fake photo manually in image editing software like Photoshop, or with their own AI, which is likely to become increasingly possible as AI technology is democratized. It’s also unclear how you could force every large corporate AI image generator to participate in such a scheme.
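To see how brittle the simplest version of such a registry would be, consider the following toy sketch. It uses exact SHA-256 hashes as registry entries, a deliberate simplification of the watermarks and fingerprints real proposals contemplate, and the registry and image bytes are hypothetical.

```python
# Toy sketch of a "register every AI-generated image" scheme using exact
# content hashes, and a demonstration of how a one-byte change evades it.
import hashlib

registry = set()  # stand-in for a shared database of AI-generated media


def register_ai_media(data: bytes) -> None:
    """What a participating AI image generator would do at creation time."""
    registry.add(hashlib.sha256(data).hexdigest())


def looks_ai_generated(data: bytes) -> bool:
    """What a platform or viewer would do to check a file's origin."""
    return hashlib.sha256(data).hexdigest() in registry


fake_image = b"pixels of a synthetic image"
register_ai_media(fake_image)

print(looks_ai_generated(fake_image))      # True: an exact copy is flagged
altered = bytearray(fake_image)
altered[0] ^= 0x01                         # change a single bit
print(looks_ai_generated(bytes(altered)))  # False: trivially evades the check
```

Watermarks are more robust to small changes than an exact hash, but as noted above they too can be removed or degraded by editing the media.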
A Human Problem, Not a Technology Problem
Ultimately, no digital provenance mechanism will solve the problem of false and misleading content, disinformation, or the fact that a certain proportion of the population is deceived by it. Even content that has been formally authenticated under such a scheme can be used to warp perceptions of reality. No such scheme will control how people decide what is filmed or photographed, what media is released, and how it is edited and framed. Choosing focus and framing to highlight the most important parts is the ancient essence of storytelling.
The believability of digital media will most likely continue to rely on the same factors that storytelling always has: social context. What we have always done with digital media, as with so many other things, is judge the authenticity of images based on the totality of the human circumstances surrounding them. Where did the media come from? Who posted it, or is otherwise presenting it, and when? What is their credibility, and do they have any incentive to falsify it? How fundamentally believable is the content of the photo or video? Is anybody disputing its authenticity? The fact that many people are bad at making such judgments is not a problem that technology can solve.
Photo-editing software, such as Photoshop, has been with us for decades, yet newspapers still print photographs on their front page, and prosecutors and defense counsel still use them in trials, largely because people rely on social factors such as these. It is far from clear that the expansion of democratized software for making fakes from still photos to videos will fundamentally change this dynamic — or that technology can replace the complex networks of social trust and judgment by which we judge the authenticity of most media.
Voters hit with deepfakes for the first time (such as a fake President Joe Biden telling people in a Republican primary not to vote) may well fall for such a trick. But they will only encounter such a trick for the first time once. After that they will begin to adjust. Much of the hand-wringing about deepfakes fails to account for the fact that people can and will take the new reality into account in judging what they see and hear.
If many people continue to be deceived by such tricks in the future, as a certain number are now, then a better solution to such a problem would be increased investments in public education and media literacy. No technological scheme will fix the age-old problem of some people falling for propaganda and disinformation, or replace the human factors that are the real cure.