AI-Generated Police Reports Raise Concerns Around Transparency, Bias
A small but growing number of police departments are adopting software products that use artificial intelligence (AI) to draft police reports for officers. Police reports play an important role in criminal investigations and prosecutions, and introducing novel AI language-generating technology into the criminal justice system raises significant civil liberties and civil rights concerns. Today, the ACLU published a six-page white paper explaining in depth why we don’t think police departments should use this technology.
Police reports are central to the criminal proceedings that determine people's innocence, guilt, and punishment, and they are often the only official account of what took place during a particular incident. The concept behind the AI products (the most prominent of which is sold by the police technology company Axon) is that an officer selects a body camera video file and has the audio of that file transcribed. A large language model (LLM) of the kind behind ChatGPT is then used to turn that transcript into a first-person narrative for the officer in the typical format of a police report. The officer can then edit that draft before swearing to its veracity and submitting it.
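To make that workflow concrete, the sketch below traces the same three steps in Python. It is purely illustrative: the function names, stub outputs, and structure are our assumptions about how such a pipeline might be organized, not Axon's actual implementation.

    # Illustrative sketch of an AI report-drafting pipeline.
    # Every name and stub here is hypothetical, not vendor code.

    def transcribe_audio(video_path: str) -> str:
        # Stand-in for a speech-to-text service that would extract and
        # transcribe the body camera audio.
        return "(transcript of the body camera audio)"

    def draft_narrative(transcript: str) -> str:
        # Stand-in for an LLM call that would rewrite the transcript as a
        # first-person narrative in the typical format of a police report.
        return "(first-person draft report generated from the transcript)"

    def generate_draft_report(video_path: str) -> str:
        # 1. The officer selects a body camera video file; its audio is transcribed.
        transcript = transcribe_audio(video_path)
        # 2. An LLM turns the transcript into a draft report.
        draft = draft_narrative(transcript)
        # 3. The draft is returned to the officer, who edits it before
        #    swearing to its veracity and submitting it.
        return draft

The point of the sketch is that the officer's own recollection enters the process only at the final editing step; everything before it is machine-generated from the camera's audio alone, which is the design choice the concerns below turn on.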
The problems we see with this concept fall into four main buckets:
1. Problems with AI itself.
The technology, as anyone who has experimented with LLMs like ChatGPT knows, is quirky, unreliable, and prone to making up facts. AI is also biased. Because LLMs are trained on something close to the entire Internet, they inevitably absorb the racism, sexism, and other biases that permeate our culture. And even if an AI program doesn't make explicit errors or exhibit obvious biases, it can still spin things in subtle ways that an officer doesn't even notice.
2. Using body camera transcripts to create AI police reports raises significant issues around evidence and memory.
Human memory, unlike video recordings, is extremely malleable; subsequent information about an event can literally change a person's memory of what took place. That's why it's important that an officer's subjective experiences and memories of an incident be memorialized before they are contaminated by an AI's body camera-based storytelling. But if the police report is just an AI rehash of the body camera video, that rehash may write over facts or details the officer would otherwise have recorded, or, even worse, allow officers to lie: if an officer sees that something illegal they did was not captured by the camera, for example, nothing in the AI draft will contradict a false account of that moment.
3. AI raises serious questions about transparency.
Given the novel, experimental nature of AI-generated police reports, it's important for the public to understand how these systems work, so that independent experts can evaluate the technology and communities and their elected representatives can decide whether they want the officers who serve them to use it. It's also vital that defendants in criminal cases be able to interrogate the evidence against them. Yet much of the operation of these systems remains opaque.
4. Requiring police to write down their reasons for exercising discretionary power reminds them of the legal limits of their authority.
Officers' written justifications for actions like stops, frisks, searches, and consent searches are also reviewed by their supervisors, who use what's written to identify when an officer may not know or observe those limits. A shift to AI-drafted police reports would sweep away the important accountability role that report writing plays both within police departments and in the minds of officers.
For these reasons, the ACLU does not believe police departments should allow officers to use AI to generate draft police reports. As we describe in more detail in our white paper, AI report-writing technology removes important human elements from police procedures and is too new, too untested, too unreliable, too opaque, and too biased to be inserted into our criminal justice system.