Legal Responsibility As Computers Get More Unpredictable

Jay Stanley,
Senior Policy Analyst,
ACLU Speech, Privacy, and Technology Project
June 28, 2012

There has been some discussion lately of whether the output of computer algorithms should be considered protected free speech, a question Tim Wu raised in a recent New York Times op-ed and my colleague Gabe Rottman addressed in a blog post in response.

As Gabe mentioned, the ACLU has no formal policy on this question. But it seems to me that the output of a computer algorithm should be treated under the law as the behavior of the entity that controls the computer. If that behavior is constitutionally protected, then so be it; if it is not, then neither is the computer’s output. In other words, the fact that a computer is used as the instrument by which a corporation or individual acts should not enter into the calculation.

Several commentators have made essentially this point, including writers at the Cato Institute and Eugene Volokh, whose white paper started the whole conversation. Paul Alan Levy of Public Citizen agrees, but he also reminds us that some speech, including price-fixing agreements, is not constitutionally protected in the first place, and argues that Google’s search results may, in fact, be subject to antitrust scrutiny (but not because they are computerized).

Along those lines, I am not convinced by Wu’s suggestion that unless we divorce computer output from free speech rights, we will shut down important avenues of government oversight. He gives no examples other than anti-trust regulation of Google, but as Levy points out, the “speech” quality of Google’s computerized output does not automatically insulate it from oversight.

But one interesting and potentially countervailing aspect of the story here is that as computer programs get more complex, their output becomes increasingly unpredictable. Wu’s critics understate that reality when they stress that computers are merely expressions of the intent of their human creators. Volokh writes, “The computer algorithms that produce search engine output are written by humans. Humans are the ones who decide how the algorithm should predict the likely usefulness of a Web page to the user.”

But that link between human “deciding” and computer outcomes is sometimes tenuous, and it will only become more so. Modern machine-learning techniques work by abandoning attempts to consciously direct a computer’s logic, turning instead to nonlinear, statistical black-box approaches, such as neural networks, that can work successfully even when their programmers do not understand why. And over time, as computers are increasingly used to program other computers, a machine’s ultimate output may rest on a tower of computer code quite far removed from any conscious human guidance.
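To make that concrete, consider a minimal sketch in Python (my own illustration, not anyone’s actual system; it assumes the open-source scikit-learn library, and the data and model are invented for the example). The programmer here “decides” only the training procedure; the decision rule itself emerges as hundreds of learned numeric weights that no human ever wrote down:

```python
# A toy illustration of machine learning as a statistical black box.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

# Synthetic records: two numeric features, and a yes/no label related to
# them in a nonlinear way the programmer never states as an explicit rule.
X = rng.uniform(-1, 1, size=(1000, 2))
y = (np.sin(3 * X[:, 0]) * np.cos(3 * X[:, 1]) > 0).astype(int)

# The human chooses only the architecture and the training procedure...
model = MLPClassifier(hidden_layer_sizes=(16, 16), max_iter=3000,
                      random_state=0).fit(X, y)

# ...while the decision logic itself lives in hundreds of learned weights
# that no one directly authored and that resist simple explanation.
print("decision for one new record:", model.predict([[0.2, -0.7]])[0])
print("weights encoding that decision:", sum(w.size for w in model.coefs_))
```

Nothing in that program says when to answer “yes”; the answer is whatever the training run happens to produce.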

In fact, once a computer algorithm becomes sufficiently complex, its behavior may not be predictable even in principle. Here we rapidly enter the realm of complex mathematics and computer science, but as a taste of this world, suffice it to say that Alan Turing formally proved in 1936 that the behavior of some computer programs cannot be predicted. And Stephen Wolfram has applied the label “computational irreducibility” to the concept that it can be impossible to predict what a computer program will do without actually running the program.
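Wolfram’s signature example is an “elementary cellular automaton” called Rule 30: its update rule fits on one line, yet so far as anyone knows there is no shortcut to learning what it will print other than running it, row by row. Here is a small sketch (again my own illustration, in Python):

```python
# Rule 30: a one-line update rule whose long-run output has no known
# shortcut; to learn what row N looks like, you must compute rows 1..N.
WIDTH, STEPS = 63, 16
cells = [0] * WIDTH
cells[WIDTH // 2] = 1  # start with a single "on" cell in the middle

for _ in range(STEPS):
    print("".join("#" if c else "." for c in cells))
    # A cell's next state is: left XOR (center OR right).
    cells = [cells[i - 1] ^ (cells[i] | cells[(i + 1) % WIDTH])
             for i in range(WIDTH)]
```

The program is trivial to read, yet the quasi-random triangle of cells it prints cannot, as far as we know, be predicted any faster than by computing it.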

Rigid computer “minds” have sometimes been touted as having certain advantages over our messy human minds: for example, their blindness to race, gender, and other characteristics that all too often bias and distract human monitors. Defenders of computerized airport body scanners, for example, have said to me, “At least they don’t discriminate against Muslims!” That is true, and despite the privacy problems we have with body scanners, it is one of their advantages. And those same advantages might hold in other contexts, such as the use of data mining to predict individuals’ behavior.

Ironically, however, as computers get smarter, they will also lose that very predictability that has been one of their advantages. They will come to exhibit quirks, lapses, and perhaps biases just like humans.

Over time, this could change our intuitions about how we should treat computer code. Perhaps new doctrines of law will evolve that we cannot anticipate at this time. But from where we stand now, my intuition is that this breach between conscious human intentionality and the behavior of our computer familiars will only increase the importance of the principle I stated up front: that those who deploy computers for real-world decisions and actions will still be responsible for their outcomes.

After all, the growing “intent-output divide” will have important implications as humans are increasingly judged by computer algorithms. Think of a bank or insurance company making life-altering decisions about consumers, a government deciding who is suspicious enough to get special treatment, and who-knows-what-else as increasingly smart computers are assigned ever more decisionmaking roles. Because of this growing divide, the decisions that computers make may become even more baffling and inscrutable to the subjects of these decisions than they often are today, and that has important implications for fairness and due process. The masters of these decisionmaking computers will need to ensure that people are not treated with bias, or placed on watch lists or otherwise disadvantaged unfairly. They will have to be responsible for the choices their institutions make—however those choices are made.
