
DHS Focus on "Soft Targets" Risks Out-of-Control Surveillance

AI surveillance of public places risks subjecting everyday spaces to airport-level security that heightens the risk of discrimination.
Jay Stanley,
Senior Policy Analyst,
ACLU Speech, Privacy, and Technology Project
October 24, 2024

The U.S. Department of Homeland Security (DHS) is investing resources in what it calls protecting “soft targets,” which include crowded places that aren’t subject to “hardened” security measures. Examples include shopping areas, transit facilities, and open-air tourist attractions. We are all used to going through security checks at airports and some other venues, but the vast majority of events and spaces in the United States don’t have that kind of security. The question is: Do we want DHS to have a role in all of those spaces?

DHS lists protecting soft targets as one of four goals in its mission to counter terrorism and homeland security threats. The agency says that it’s working to “assess soft targets and address security gaps” and investing in research and development for technological solutions. Many measures to increase the security of soft targets are uncontroversial and probably beneficial. There’s nothing wrong with assessing risk, for example, and we all want our government to be well-prepared to respond to emergencies of all kinds. But other parts of the agency’s efforts in this area raise serious questions about mass surveillance and the maintenance of an open society, including those that use AI or other technological solutions to generate or share domestic intelligence, attempt to measure “suspiciousness,” or monitor and track people in public places.

One way that DHS has begun work in this area is through the funding of an industry and academic research and development center called the Soft Target Engineering to Neutralize the Threat Reality, or SENTRY, which aims to create “resources and tools for anticipating and mitigating threats to soft targets and crowded places.” Among the center’s research areas is “advanced sensing technologies,” an effort that aims to develop “new sensing capabilities to detect threats,” and in particular “to establish new stand-off sensor concepts for detecting concealed threats in crowds.”

Part of SENTRY’s research includes developing AI tools “for data mining of social media, geospatial data platforms, and other sources of information to extract insights on potential threats.” Another part is looking at the application of AI “to risk assessment, quantitative threat deterrence, development of layered security architectures; and providing methods for fusing data and other information.”

DHS’s Silicon Valley Innovation Program, which funds private companies to research and develop products that DHS would like to see, also has projects in this area. It funds companies that are aiming to “build AI algorithms that link objects (e.g., unattended baggage) to people and track them,” to “identify motion of interest from security video feeds,” and to create an “anomaly detection system that leverages activity recognition and tracking to capture multiple data points per subject.”

DHS has also been testing various sensors and detectors intended to be used on people in non-secure public spaces. For several years the agency has been carrying out public tests of thermal cameras designed to spot weapons underneath people’s clothing and identify medicines or other substances that people may have on their person. Various at-a-distance AI sensing technologies are also being developed by researchers in the SENTRY program.

Most of these efforts are targeted at people going about their business in public places, mostly unaware of the AI technologies that are being trained upon them, raising significant privacy and constitutional issues. At their worst, efforts to secure soft targets may lead to the “airportization” of American life, as local authorities increase the use of security perimeters, searches, and surveillance at an ever-widening group of public gatherings and events. Such efforts could lead to the emergence of a checkpoint society, an enclosed world where people are scanned, vetted, and access-controlled at every turn.

Of course, security technology does not operate itself; people will be subject to the petty authority of martinet guards who are constantly stopping them based on some AI-generated flag of suspicion. And, inevitably, those guards will do so in discriminatory ways. AI-facilitated surveillance of public venues may lead to the harassment, investigation, and arrest of people who are already disproportionately singled out for scrutiny, such as protestors, communities of color, and immigrants. The use of AI machine vision to monitor people, even when done in an ostensibly anonymous manner, has the potential to significantly change the experience of being in public in the United States.

Such efforts may lock down American life in ways that impose not only direct costs, such as the price of equipment and personnel, but also inefficiencies, such as wait times and the effort members of the public must expend to avoid false alarms. These efforts also carry the intangible social and psychological costs that come with surveillance, submission to authority, and the erosion of an open society.

We recently filed comments with the Privacy and Civil Liberties Oversight Board (PCLOB), an agency created by Congress to serve as a check and balance on our security agencies, urging the board to keep a close eye on these activities, among others. Given the sensitivity around surveilling people in public places, those activities bear close watching — something we will be doing as well.
