New ACLU Report Reveals Alarming Growth of AI Video Surveillance Technologies
NEW YORK — America’s nearly 50 million surveillance cameras could soon police people in real time with the development of increasingly smart artificial intelligence technologies, according to a new report released today by the American Civil Liberties Union. These cameras won’t just record us, but will also make judgments about us based on their understanding of our actions, emotions, skin color, clothing, voice, and more. These automated “video analytics” technologies threaten to fundamentally change the nature of surveillance from the current fragmented, collection-and-storage-only model to a mass automated real-time monitoring system that raises significant civil liberties and privacy concerns.
“Cameras that collect and store video just in case it is needed are being transformed into robot guards that actively and constantly watch people,” said Jay Stanley, senior policy analyst with the ACLU. “It is as if a great surveillance machine has been growing up around us, but largely dumb and inert — and is now, in a meaningful sense, waking up. The end result, if left unchecked, will be a society where everyone’s public movements and behavior are subject to constant and comprehensive evaluation and judgment by what are essentially AI security guards.”
Automated video analytics technologies are already being deployed by schools, retail stores, and police departments in the United States. For instance, the NYPD is partnering with Microsoft to equip the city’s more than 6,000 surveillance cameras with the technology, and companies claim to use it to identify shoplifters before they commit a crime based on “fidgeting, restlessness and other potentially suspicious body language.” In China, a financial lender is using emotion recognition to purportedly evaluate customers’ creditworthiness.
The ACLU is calling on U.S. policymakers to contend with the technology’s enormous power, prohibit its use for mass surveillance, narrow its deployments, and create rules to minimize abuse.
“Video surveillance powered by artificial intelligence raises some of the same issues that AI and algorithms raise in many other contexts, such as a lack of transparency and due process and the potential to worsen existing racial disparities and biases. AI surveillance also introduces new concerns, including the possibility of widespread chilling effects and the reshaping of our behavior. We will quickly become aware that our actions are being scrutinized and evaluated on a second-by-second basis by AI watchers with consequences that can include being flagged as suspicious, questioned by the police, or worse,” said Stanley.
The ACLU’s new report, The Dawn of Robot Surveillance, reviews established and ongoing research in the field of computer vision to reveal the many video analytics capabilities scientists are currently envisioning or developing. It also looks at where the technology has already been deployed, and what capabilities companies are claiming they can offer.
These capabilities include human action recognition, “anomaly detection,” physiological measurements (including heart and breathing rates and eye movements), wide-area tracking of the patterns of our movements throughout a town or city, and emotion recognition.
The ACLU warns, “When it comes to AI video analytics, we should be scared that it won’t work, and we should be scared that it will.” Smart cameras will fundamentally shift the balance of power between the government and the people, arming governments — and corporations — with unprecedented amounts of detailed information on people that could be used against them in numerous ways. Coupled with other rapidly advancing technologies, such as face recognition, AI-powered cameras won’t just be able to recognize what individuals are doing, but also who they are. This has the potential to turn every surveillance camera into a digital checkpoint as part of a comprehensive distributed tracking network capturing people’s identities, associations, and locations on a mass scale.
“It doesn’t take a big stretch of the imagination to think of this technology’s darker possibilities,” said Stanley. “Life insurance companies could monitor jogging speeds to determine which plans to offer particular individuals. Political campaigns could track and monitor attendees at rallies, assessing their facial expressions and emotion levels to tailor messages and identify supporters. A corrupt politician could instruct their staff to find all instances of political enemies jaywalking.”
Video analytics could also exacerbate existing racial disparities, lead to over-policing, and be subject to abuse. False or inaccurate analytics can lead to people being falsely identified as a threat and hassled, blacklisted, or worse. False facts could be associated with them, and discriminatory effects are likely, such as flagging a Black man entering a predominantly white neighborhood as an “anomaly.”
A blog post by Stanley on the new report can be found here: /blog/privacy-technology/surveillance-technologies/32-billion-industry-could-turn-americas-50-million.
An explainer video on the report is here: https://www.youtube.com/watch?v=1dDhqX3txf4&feature=youtu.be.
The report is available here: /report/dawn-robot-surveillance.