
Artificial Intelligence at Any Cost Is a Recipe for Tyranny

CCTV camera looking onto city
Ben Wizner,
Director,
ACLU Speech, Privacy, and Technology Project
August 23, 2017

This post was adapted from a talk at an AI Now symposium held on July 10 at the MIT Media Lab. AI Now is a new initiative working, in partnership with the ACLU, to explore the social and economic implications of artificial intelligence.

It seems to me that this is an auspicious moment for a conversation about rights and liberties in an automated world, for at least two reasons.

The first is that there’s still time to get this right. We can still have a substantial impact on the legal and policy debates that will shape the development and deployment of automated technologies in our everyday lives.

The second reason is Donald Trump. The democratic stress test of the Trump presidency has gotten everyone’s attention. It’s now much harder to believe, as Eric Schmidt once assured us, that “technology will solve all the world’s problems.” Technologists who have grown used to saying that they have no interest in politics have realized, I believe, that politics is very interested in them.

By contrast, consider how, over the last two decades, the internet became the engine of a surveillance economy.

Silicon Valley’s apostles of innovation managed to exempt the internet economy from the standard consumer protections provided by other industrialized democracies by arguing successfully that it was “too early” for government regulation: It would stifle innovation. In almost the same breath, they told us that it was also “too late” for regulation: It would break the internet.

And by the time significant numbers of people came to understand that maybe they hadn’t gotten such a good deal, the dominant business model had become so entrenched that meaningful reforms will now require Herculean political efforts.


When we place “innovation” within — or atop — a normative hierarchy, we end up with a world that reflects private interests rather than public values.

So if we shouldn’t just trust the technologists — and the corporations and governments that employ the vast majority of them — then what should be our north star?

Liberty, equality, and fairness are the defining values of a constitutional democracy. Each is threatened by increased automation unconstrained by strong legal protections.

Liberty is threatened when the architecture of surveillance that we’ve already constructed is trained, or trains itself, to track us comprehensively and to draw conclusions based on our public behavior patterns.

Equality is threatened when automated decision-making mirrors the unequal world that we already live in, replicating biased outcomes under a cloak of technological impartiality.

And basic fairness, what lawyers call “due process,” is threatened when enormously consequential decisions affecting our lives — whether we’ll be released from prison, or approved for a home loan, or offered a job — are generated by proprietary systems that don’t allow us to scrutinize their methodologies and meaningfully push back against unjust outcomes.

Since my own work is on surveillance, I’m going to devote my limited time to that issue.

When we think about the interplay between automated technologies and our surveillance society, what kinds of harms to core values should we be principally concerned about?

Let me mention just a few.

When we program our surveillance systems to identify suspicious behaviors, what will be our metrics for defining “suspicious”?

Know the eight signs of terrorism

This is a brochure about the “8 signs of terrorism” that I picked up in an upstate New York rest area. (My personal favorite is number 7: “Putting people into position and moving them around without actually committing a terrorist act.”)

How smart can our “smart cameras” be if the humans programming them are this dumb?

And of course, this means that many people are going to be entered into systems that will, in turn, subject them to coercive state interventions.

But we shouldn’t just be concerned about “false positives.” If we worry only about how error-prone these systems are, then more accurate surveillance systems will be seen as the solution to the problem.

I’m at least as worried about a world in which all of my public movements are tracked, logged, and analyzed accurately.

Bruce Schneier likes to say: Think about how you feel when a police car is driving alongside you. Now imagine feeling that way all the time.

There’s a very real risk, as my colleague Jay Stanley has warned, that pervasive automated surveillance will:

“turn[] us into quivering, neurotic beings living in a psychologically oppressive world in which we’re constantly aware that our every smallest move is being charted, measured, and evaluated against the like actions of millions of other people—and then used to judge us in unpredictable ways.”

I also worry that in our eagerness to make the world quantifiable, we may find ourselves offering the wrong answers to the wrong questions.

Terror alert levels

The wrong answers because extremely rare events like terrorism don’t map accurately onto hard predictive categories.

And the wrong question because it doesn’t even matter what the color is: Once we adopt this threat-level framework, we have already conceded that terrorism is an issue of paramount national importance, even though that is a highly questionable proposition.
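To make the “wrong answers” point concrete, here is a rough back-of-the-envelope sketch in Python. Every number in it is an assumption chosen purely for illustration, not a figure from this piece: when the event being predicted is vanishingly rare, even an implausibly accurate detection system will flag almost exclusively innocent people.

```python
# Illustrative base-rate calculation (all numbers are assumptions):
# why prediction of extremely rare events yields mostly false positives.

population = 330_000_000      # assumed: roughly the U.S. population
true_threats = 1_000          # assumed: a generous count of actual plotters
sensitivity = 0.99            # assumed: the system flags 99% of real threats
false_positive_rate = 0.01    # assumed: it wrongly flags 1% of everyone else

flagged_true = true_threats * sensitivity
flagged_false = (population - true_threats) * false_positive_rate

# Of everyone flagged, what fraction is an actual threat?
precision = flagged_true / (flagged_true + flagged_false)

print(f"People wrongly flagged: {flagged_false:,.0f}")
print(f"Chance a flagged person is an actual threat: {precision:.3%}")
# Even with these optimistic accuracy figures, millions of innocent people
# are flagged, and a flagged person is a real threat only ~0.03% of the time.
```

This is the familiar base-rate problem, and it is why greater “accuracy” alone cannot redeem prediction of extremely rare events.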


The question becomes “how alarmed should we be?” rather than “should we be alarmed at all?”

And once we’re trapped in this framework, the only remaining question will be how accurate and effective our surveillance machinery is — not whether we should be constructing and deploying it in the first place.

If we’re serious about protecting liberty, equality, and fairness in a world of rapid technological change, we have to recognize that in some contexts, inefficiencies can be a feature, not a bug.

Bill of Rights first ten articles

Consider these words written over 200 years ago. The Bill of Rights is an anti-efficiency manifesto. It was created to add friction to the exercise of state power.

The Fourth Amendment: Government can’t effect a search or seizure without a warrant supported by probable cause of wrongdoing.

The Fifth Amendment: Government can’t force people to be witnesses against themselves; it can’t take their freedom or their property without fair process; it doesn’t get two bites at the apple.

The Sixth Amendment: Everyone gets a lawyer, and a public trial by jury, and can confront any evidence against them.

The Eighth Amendment: Punishments can’t be cruel, and bail can’t be excessive.

This document reflects a very deep mistrust of aggregated power.

If we want to preserve our fundamental human rights in the world that aggregated computing power is going to create, I would suggest that mistrust should remain one of our touchstones.
