I recently wrote about how difficult it is to know which technologies on the horizon will turn into genuine privacy nightmares and which will remain menacing but distant threats. One group of technologies we've had our eyes on for a while is those that purport to read minds. On Sunday the Washington Post ran an article on a Maryland case in which a murder defendant is trying to introduce fMRI "lie detector" evidence in his defense. Functional magnetic resonance imaging (fMRI) allows researchers to observe neural activity in real time by using powerful magnets to trace blood-flow changes in the brain.
Meanwhile, as CBS Seattle first reported, some scientists are claiming they can "hack" information out of a subject's brain, and engage in lie detection, using a simple "brain-computer interface": the biofeedback brain-wave readers that are increasingly used to control computers and are available off the shelf for only $300. According to the scientists, a "guilty knowledge test" based on a particular brain wave, the P300, "has a promising use within interrogation protocols that enable detection of potential criminal details held by the suspect."
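For readers curious what a P300 "guilty knowledge test" actually involves, here is a minimal sketch of the signal averaging behind it, using synthetic data rather than a real headset. The sampling rate, amplitudes, and analysis window are illustrative assumptions, not parameters from the study; real EEG work requires calibrated hardware, artifact rejection, and proper statistics.

```python
import numpy as np

# Minimal sketch of the averaging logic behind a P300 "guilty knowledge" test.
# All numbers are illustrative assumptions, not values from the study.

FS = 250            # assumed sampling rate in Hz
EPOCH_S = 0.8       # analyze 800 ms after each stimulus onset
N_TRIALS = 40       # stimulus repetitions per category

rng = np.random.default_rng(0)
t = np.arange(int(FS * EPOCH_S)) / FS  # time axis for one epoch

def simulate_epochs(p300_amplitude_uv):
    """Synthetic single-trial epochs: background noise plus, for recognized
    stimuli, a positive deflection peaking near 300 ms (the P300)."""
    noise = rng.normal(0.0, 5.0, size=(N_TRIALS, t.size))
    p300 = p300_amplitude_uv * np.exp(-((t - 0.3) ** 2) / (2 * 0.05 ** 2))
    return noise + p300

# "Probe" items (e.g., a crime detail) should evoke a P300 only if the
# subject recognizes them; "irrelevant" items should not.
probe_epochs = simulate_epochs(p300_amplitude_uv=8.0)
irrelevant_epochs = simulate_epochs(p300_amplitude_uv=0.0)

# Averaging across trials suppresses noise and reveals the event-related
# potential; the test then compares mean amplitude in the P300 window.
window = (t >= 0.25) & (t <= 0.5)
probe_score = probe_epochs.mean(axis=0)[window].mean()
irrelevant_score = irrelevant_epochs.mean(axis=0)[window].mean()

print(f"probe window mean:      {probe_score:.2f} uV")
print(f"irrelevant window mean: {irrelevant_score:.2f} uV")
# A markedly larger probe response is read as "recognition" -- which is
# exactly why false positives (familiarity, media exposure) are a danger.
```

The point of the sketch is that the technique infers "recognition" from a statistical contrast, not from any direct readout of guilt, which is one reason its real-world reliability is so contested.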
Five or so years ago, we looked at this issue in great depth, consulting with experts, hosting a forum, and filing a Freedom of Information Act request. Around that time several companies were aggressively marketing the technology as a lie detector and for other purposes. One of our concerns was that the technology might be applied by our national security agencies against suspects in the "war on terror," who would then be mistreated based on inaccurate results. Our FOIA request attempting to find out whether that was true did not yield any answers.
Unlike the polygraph, which measures physiological signals such as heart rate, blood pressure, and perspiration in an attempt to detect a subject's response to lying, fMRI lie detection attempts to detect a subject's decision to lie. And where a polygraph requires the subject's active participation in answering questions, fMRI could be used to extract information from a person whether or not they provide an answer at all. For example, scientists might flash a picture of a murder weapon and measure whether a subject has seen it before, in effect identifying "guilty knowledge." In this sense, fMRI aims to go well beyond lie detection, and presents us with something far closer to the concept of "mind reading."
Another concern is how private employers might seek to use these kinds of technologies. Attempts to read minds using brain-computer interface devices might be especially tempting in some work situations, since the technology is so cheap and readily available. However, "lie detector" use by private employers is illegal (with a few very narrow exceptions), having been banned by the Employee Polygraph Protection Act of 1988. That statute defines the term "lie detector" as including
a polygraph, deceptograph, voice stress analyzer, psychological stress evaluator, or any other similar device (whether mechanical or electrical) that is used, or the results of which are used, for the purpose of rendering a diagnostic opinion regarding the honesty or dishonesty of an individual.
That would seem to pretty clearly include fMRI and brain-computer interface devices, which therefore remain illegal for use by private employers.
Although we have heard less about fMRI in the past few years, stories such as the above are a reminder that the technology remains on the horizon.
However, it is telling that the judge in the Maryland case ruled that the fMRI results were inadmissible. Polygraphs have long been barred from the courtroom on grounds of unreliability; that unreliability is a well-established scientific fact (and their continued use by the national security establishment a scandal that routinely damages innocent people). fMRI lie detection is likely to meet the same fate for the foreseeable future. Like all deception detection, it may well prove inherently unreliable, because in real-world situations the act of deception is complicated by factors such as ambiguity, faulty memory, external stresses, and delusion. We should never forget what a swamp of ever-shifting ambiguity the human mind is. Some liars, for example, come to half-believe what they are saying. Some storytellers imagine their tales with such vividness that the question of "truth" becomes mentally ambiguous.
Ultimately, of course, we can’t predict whether fMRI or other brain-reading techniques will take us to incredible new places, or whether deceptively difficult complexities will stall progress for decades (as with artificial intelligence) or indefinitely.
In either case, however, nonconsensual mind reading is not something we should ever engage in. At the ACLU, our longtime opposition to lie detectors (which dates to the 1950s) has never been exclusively about the effectiveness of the polygraph, or about that particular technology. We have said since the 1970s that even if the polygraph were to pass an acceptable threshold of reliability, or a more accurate lie-detection technology were to come along, we would still oppose it because of the unacceptable violation of civil liberties it represents.
We view techniques for peering inside the human mind as a violation of the 4th and 5th Amendments, as well as a fundamental affront to human dignity. Until relatively recently, even a person's private written papers were seen as reflecting their innermost thoughts, and were regarded as immune from seizure by the government, even with a warrant. As Jeffrey Rosen has recounted (see pp. 27-31), English law for centuries did not permit the government to access private papers in civil or criminal cases, warrant or not. Behind this rule was a belief that using a person's papers as evidence against him was akin to forcing him to testify against himself. That position was still widely held when our Founders wrote the Constitution, and was reaffirmed by the Supreme Court as late as 1886.
Although we have fallen far from that position, we must not let our civilization’s privacy principles degrade so far that attempting to peer inside a person’s own head against their will ever becomes regarded as acceptable.