
New Orleans Program Offers Lessons In Pitfalls Of Predictive Policing

Jay Stanley,
Senior Policy Analyst,
ACLU Speech, Privacy, and Technology Project
March 15, 2018

Should police gather statistical information from a variety of sources and then use computer algorithms to try to predict who is likely to be involved in violent crime in the future? Just such an attempt has been underway in New Orleans, as the Verge reported on Feb. 27 and the New Orleans Times-Picayune described in a report on March 1.

In the wake of these reports, the Times-Picayune reported that New Orleans has decided to end its partnership with the data mining company Palantir, whose software the program used. Nevertheless, there are several important lessons that we can draw from the city’s experience.

The New Orleans Police Department (NOPD) program, according to the Verge, centered around

an intelligence technique called social network analysis (or SNA) to draw connections between people, places, cars, weapons, addresses, social media posts, and other indicia in previously siloed databases…. After entering a query term — like a partial license plate, nickname, address, phone number, or social media handle or post — NOPD’s analyst would review the information scraped by Palantir’s software and determine which individuals are at the greatest risk of either committing violence or becoming a victim, based on their connection to known victims or assailants.
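To make the mechanics of that kind of analysis concrete, here is a minimal, purely hypothetical sketch of a social-network-analysis ranking. It uses the open-source networkx library, invented entities, and a crude proximity-based scoring heuristic of my own; none of it is drawn from Palantir’s actual software or NOPD’s data.

```python
# Hypothetical illustration only: a toy social-network-analysis ranking.
# Entities, edges, and the scoring rule are invented for this sketch.
import networkx as nx

G = nx.Graph()

# Nodes can be people, phone numbers, addresses, vehicles, etc.
G.add_edges_from([
    ("person_A", "phone_555"),       # A used this phone number
    ("person_B", "phone_555"),       # B used the same phone
    ("person_B", "address_12_Oak"),  # B listed at this address
    ("person_C", "address_12_Oak"),  # C listed at the same address
    ("person_D", "person_A"),        # a direct link, e.g. a field interview
])

# Individuals already flagged as known victims or assailants.
flagged = {"person_A"}

def risk_score(graph, node, flagged_nodes):
    """Score a person by graph distance to flagged individuals:
    closer network ties yield a higher score (a crude stand-in for SNA)."""
    scores = []
    for f in flagged_nodes:
        try:
            d = nx.shortest_path_length(graph, node, f)
            scores.append(1.0 / d)
        except nx.NetworkXNoPath:
            continue
    return max(scores, default=0.0)

people = [n for n in G.nodes if n.startswith("person_") and n not in flagged]
ranking = sorted(people, key=lambda p: risk_score(G, p, flagged), reverse=True)
for p in ranking:
    print(p, round(risk_score(G, p, flagged), 2))
```

Even this toy version shows why the inputs matter so much: whoever happens to share a phone number or an address with a flagged person inherits a score, whether or not they have done anything.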

The data on individuals came from information scraped from social media as well as NOPD databases covering ballistics, gangs, probation and parole information, jailhouse phone calls, calls for service, the central case management system (i.e., every case NOPD had on record), and the department’s repository of field interview cards.

Police officials and Palantir executives disputed the characterizations in this report — but the NOPD did acknowledge to the Times-Picayune that they had created a “risk assessment database” — a “database of about 1 percent of the city’s population who are more likely to be perpetrators or victims of gun violence,” as the paper put it.

Predictive policing and police “threat scores” are deeply problematic applications of “big data” analytics. We identified eight big problems with one such program in 2016. The reports on the New Orleans program, however, add important new information to our picture of how such a program has actually played out in the messy reality of a big American city.

Transparency, Oversight, and Community Support

First, as the Verge exhaustively demonstrates, top New Orleans political and community leaders — including city council members — were not told about this program, let alone asked whether they thought it was a good idea. Because the program was presented as a philanthropic gift to the city from Palantir, the Verge notes, and because of the strong unilateral powers possessed by the mayor of New Orleans, the agreement between Palantir and the city never had to pass through a public procurement process, which would have required the sign-off of the city council and provided an opportunity for public debate.

In another indicator of the lack of transparency around this program, the Times-Picayune obtained information about police use of data, including a heat list called a “gang member scorecard,” but the police department refused the paper’s requests for interviews about the programs. The department relented after the Verge piece was published, mostly in order to dispute some of what the Verge reported — but it still refused to divulge the factors used to identify and rank those listed on the scorecard.

The solution for this kind of problem is for cities like New Orleans to enact legislation that we at the ACLU are advocating through our effort called Community Control Over Police Surveillance, or CCOPS. CCOPS is a push to enact legislation at the state and local level that would prohibit police departments or other agencies from acquiring or using surveillance technologies without public input and the approval of elected representatives. If New Orleans had had such a statute in place, this technology could not have been deployed before the community knew about it.

It is true that in this case the program was not exactly a secret. Palantir and city officials talked about it from time to time, and I was aware of it myself, having heard a presentation on it at a “big data” conference. Does that mean that CCOPS is not the remedy here? To the contrary. The fact that, as the Verge exhaustively shows, city officials and communities didn’t know about it highlights the need for regular processes by which to put programs like this before elected officials and the communities they serve. New Orleans city council members and other community leaders have a lot on their plates and generally do not focus on big data issues. They can’t be expected to have known about the program from discussions within that specialized community.

CCOPS also requires law enforcement to secure city council approval of operational policies for surveillance technologies. That helps communities verify that they don’t run afoul of civil rights and civil liberties principles — a significant risk with predictive policing software.

We couldn’t summarize it any better than former NOPD crime analyst Jeff Asher, who told the Times-Picayune that this kind of technology “needs oversight, it needs transparency, it needs community support.” None of those mutually reinforcing and necessary (but not sufficient) conditions can be met without the right institutional structure in place — the kind of structure that CCOPS requires.

Fortunately, there are strong indications that this wisdom is beginning to sink in. The Verge quotes one of Palantir’s own employees, Courtney Bowman, as saying, “These sorts of programs only work if the community is comfortable with the degree to which this type of information is being applied and if they’re aware of how the information is being used.” And while police departments around America continue to acquire and deploy sensitive new technologies in secret, a number of the savvier police chiefs I’ve spoken with appreciate the need to get community buy-in before they deploy controversial new technologies.

Stop and frisk

One of the key data inputs for this predictive policing program, as mentioned above, was the NOPD database of field interview cards (FICs). Under the FIC program, which is possibly unconstitutional, NOPD officers were instructed to fill out information on every encounter with citizens, even where there was no arrest. Police officials tout the intelligence benefits of this data.

Although the FIC program has been improved in recent years, there is every reason to believe there’s a heavy racial bias in whose information is entered into this database. Certainly there has been a strong racial bias in stops made under the infamous “stop and frisk” program, as well as under similar programs in Milwaukee and other cities. The police may stop and hassle people for no reason and collect information on them, but they’re a lot less likely to do that in affluent White neighborhoods. Police know that affluent Whites are just not part of the “mistreatable class.”

And of course it’s not just FIC data that has a racial bias; so does arrest, conviction, and much other data tied to the criminal justice system. As everyone knows, when it comes to computer algorithms, bad data produces bad results — “garbage in, garbage out.” Or in this case, “racism in, racism out.” Tying the FIC database to the city’s risk assessment program also increases the NOPD’s incentives to perpetuate the collection of more and more data.
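As a purely hypothetical illustration of that “racism in, racism out” dynamic, consider a toy simulation (the neighborhoods, stop rates, and population figures are all invented): if stops, and therefore field interview cards, are generated far more often in one neighborhood than another, a database built from those cards will sweep in residents of the heavily policed neighborhood at a much higher rate even when the underlying behavior is identical.

```python
# Hypothetical illustration of "garbage in, garbage out": two neighborhoods with
# the same underlying rate of the tracked behavior, but very different stop rates.
# Any "risk" measure built from stop records ends up reflecting policing intensity.
import random

random.seed(0)

TRUE_RATE = 0.05                    # identical underlying behavior everywhere
STOP_RATE = {"A": 0.60, "B": 0.10}  # how often residents get stopped and carded
N = 10_000                          # residents per neighborhood (invented)

for hood, stop_rate in STOP_RATE.items():
    # A field interview card is generated whenever someone is stopped,
    # whether or not they were doing anything at all.
    carded = sum(random.random() < stop_rate for _ in range(N))
    print(f"Neighborhood {hood}: true rate {TRUE_RATE:.0%}, "
          f"share of residents in the database {carded / N:.0%}")
```

Any “risk” score computed downstream from such records measures where the police chose to look, not what people actually did.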

Carrots and sticks

Another lesson from the New Orleans experience has to do with the outputs of predictive risk assessments. Four years ago, when we first heard about this kind of law enforcement application of data analytics in the form of Chicago’s “heat list,” I wrote:

Overall, the key question is this: will being flagged by these systems lead to good things in a person’s life, like increased support, opportunities, and chances to escape crime—or bad things, such as surveillance and prejudicial encounters with the police?

The Verge reports that New Orleans settled on both — a “carrot and stick” approach called the “CeaseFire” program. The New Orleans police

used the list of potential victims and perpetrators of violence generated by Palantir to target individuals for the city’s CeaseFire program…. In the program, law enforcement informs potential offenders with criminal records that they know of their past actions and will prosecute them to the fullest extent if they re-offend. If the subjects choose to cooperate, they are “called in” to a required meeting as part of their conditions of probation and parole and are offered job training, education, potential job placement, and health services.

That’s not quite how things worked out, however. The Verge quotes a community activist named Robert Goodman who worked with people identified as at risk in the CeaseFire program.

Over time, Goodman noticed more of an emphasis on the “stick” component of the program and growing control by city hall over its non-punitive aspects, which he believes undermined the intervention work.

The numbers tell the story. According to the Times-Picayune, there was a sharp drop in “call-ins,” in which those identified as high risk are offered social services and support.

Records show the city hosted 10 call-ins from October 2012 through November 2015, bringing in 285 participants. Since November 2015, only one call-in has been held — in March 2017, records show.

On the other hand, the Verge reports:

By contrast, law enforcement vigorously pursued its end of the program. From November 2012, when the new Multi-Agency Gang Unit was founded, through March 2014, racketeering indictments escalated: 83 alleged gang members in eight gangs were indicted in the 16-month period.

The fact that New Orleans emphasized the stick over the carrot suggests further reasons for skepticism about such programs. As I have discussed, how we evaluate the potential pitfalls and benefits of such programs is much different when data analytics are used to offer benefits to people than when they are used to impose adverse consequences. The consequences of inaccurate identifications are far greater when people are hurt, for example. Unfortunately, when analytics programs such as this are introduced in the national context and culture of a justice system that functions as a racist “New Jim Crow,” this kind of outcome — a tiny, shriveled carrot and a big, brutal stick — is all too predictable.

Social media and “risk assessments”

In the wake of the publication of the Verge’s report, New Orleans officials and Palantir disputed the portrayal of the program in that piece. Police officials, including the NOPD’s top data analyst, spoke with the Times-Picayune, and Palantir’s Courtney Bowman reached out to us at the ACLU.

For example, the Verge report repeatedly asserts that the New Orleans program made use of “social media posts” and “information scraped from social media.” The use of social media as part of any law enforcement risk assessment program is a big concern because it might chill speech, organization, and dissent. Bowman told us that “there was no bulk social media data collection or scraping” as part of this program, and that social media was only “used on an ad hoc basis,” for example when collected in the course of specific criminal investigations (a use we do not object to). Police officials told the Times-Picayune, meanwhile, that their “gang scorecard” was simply a (non-Palantir) spreadsheet with names sorted by “the number of gun related events,” and said it hadn’t been used. Bowman told us that “opaque algorithms, statistical models, machine learning, or AI were never elements of this effort,” and emphasized that (as with its other clients) Palantir did not actually collect or store any data itself.

I’m glad to hear about these limitations on what the program involved. None of that makes any difference, however, for the points I make above. If officials believe that the details of what they are doing have been exaggerated or misunderstood, they have only themselves to blame for failing to build the transparency and trust that would make the program’s contours clear to all. And what police officials did acknowledge building — that “risk assessment database” — is troubling enough.

Finally, what are we to make of today’s news that the city has terminated its agreement with Palantir? First, it’s a reminder that the benefits of this kind of approach to policing are unproven. Transparency about the uses of analytics in law enforcement is important because it’s so new — but that very novelty also means that we don’t know how experiments in this area will turn out. Sometimes new technologies are the subject of a lot of hype, excitement, and sales pitches and are eagerly adopted by police departments, but then die on the vine because they don’t actually prove to be effective or practical. That happened with face recognition right after 9/11, for example. Here the mayor’s office told the Times-Picayune that the contract is not being renewed because “This technology is no longer being utilized in day-to-day operations.” That fact may speak louder than any statement the New Orleans police may issue. The incoming mayor told the paper that “this particular model of policing will no longer come under review.”

At the same time, as with face recognition, we have to assume that police departments — perhaps including New Orleans — will continue to experiment with data analytics in ways that will raise civil liberties issues. If they do, we can only hope they absorb the lessons above.
