My colleague Jay Stanley just wrote about an invasive new police tactic employed by the Chicago Police Department. Using software created by an engineer at the Illinois Institute of Technology, the city developed a “‘heat list’ — an index of the roughly 400 people in the city of Chicago supposedly most likely to be involved in violent crime.” The criteria for placement on the list are secret but reportedly go beyond indicators like criminal convictions, raising real questions about racial bias in the selection process.
The consequences of placement on the list can be highly invasive. At least one person reported that a Chicago police commander showed up at his door to let him know the police would be watching him. He had not committed a crime or even recently interacted with the police.
Is this type of automated profiling a privacy problem or a civil rights problem? It’s both. When personal information is used to make secret determinations, that’s a violation of privacy. When there is significant potential for racial discrimination and police abuse, that’s a civil rights problem.
The Chicago list is just the tip of an iceberg of dangerous ways that “big data” is being used. A recent report described marketers’ use of lists based on racial and other characteristics to identify “the most and least desirable consumers.” The government E-Verify database, which many employers check to confirm that new hires are eligible to work, has a persistent bias that causes legal immigrants to be wrongly flagged as ineligible. And police too frequently spy on innocent people simply because they pray at mosques.
All of this points to a growing need to consider how privacy and civil rights intersect. As others have observed, often the best way to predict the future of surveillance is to ask poor communities what they are enduring right now. That’s why the ACLU has joined with leading civil rights and media justice groups to endorse “Civil Rights Principles for the Era of Big Data.” These principles aim to shape the intellectual debate around privacy, big data, and civil rights, and to guide our own work.
Some of the key principles include:
- Stopping high-tech profiling. For example, when the FBI engages in detailed mapping of racial and ethnic communities across the country, we need to address the profound privacy and civil rights implications of that kind of surveillance.
- Preserving constitutional principles in new technology. Legal protections like search warrants are often most important when there is a danger of racial bias, or when communities lack the money to hire lawyers to protect their rights. On a practical level, that means making sure those protections carry over to new technologies as well.
- Enhancing individual control of personal information. Traditional lenders are forbidden from using their knowledge of consumers’ finances (like the fact that they are behind on their mortgage payments) for marketing. New data collection techniques allow marketers to sidestep those restrictions.
Recognizing the overlap between privacy and civil rights enriches the work in both areas. For privacy advocates it resoundingly rebuts the canard that privacy violations don’t harm anyone, or that if you have nothing to hide you have nothing to worry about. For civil rights leaders, it helps identify the pernicious and subtle discrimination that can pervade new systems, and helps arm them with the knowledge to battle it.
New technologies give rise to new conveniences and opportunities, but also to new harms. And we, too, must evolve, developing new ways to tackle them.
Please read the full statement of principles.
For more examples of the types of harms the principles aim to address, see .