It’s Simply Too Dangerous to Arm Robots
San Francisco was embroiled in controversy earlier this month over a proposal to allow police to deploy robots armed with deadly weapons. After initially greenlighting the technology, the Board of Supervisors reversed course due to widespread public outcry. For the time being, killer robots are banned in San Francisco, but the controversy there has put the issue in the national spotlight. People are increasingly aware that this technology exists and that some police departments want to deploy it.
Our overarching position is that the police should be prohibited from using robots to enact violence. Robots should not be used to kill, subdue, push, constrain, or otherwise control or harm people. Judging from the nearly unanimous protests in San Francisco and elsewhere, the public agrees.
It’s simply too dangerous to arm robots. While anybody can come up with hypothetical stories in which the use of police killer robots might sound reasonable, such “movie-plot scenarios” are just the tiny tip of a large pyramid of instances in which police use force — many of which are totally unjustified and take place in a legal context that protects almost all officers from accountability. Since 2013, the overwhelming majority of killings by police have not resulted in criminal charges against the officer. And in recent years the police have continued to kill roughly a thousand people a year — with Black people more likely to be killed by the police than white people.
As we have long argued with respect to both domestic drones and other robots, tools that allow force to be used remotely, risk-free for the operator, make it inevitable that force will be overused, and increase the chances that it will be used sloppily, hitting unintended targets. Signals between the remote operator and the robot may also be degraded by communications and control problems — or even hacked. Under any proper cost/benefit calculus, the negative effects of introducing armed robots into police departments would far outweigh any benefits.
The public is not alone in its opposition to robot violence. There is a general consensus in law enforcement that flying robots — aka drones — should never be armed in domestic contexts. A 2021 New York Police Department experiment with robot dogs that didn’t include any discussion of weaponization was still cut short by public opposition, and the nation’s foremost manufacturer of legged robots, Boston Dynamics, which makes the robot dog “Spot,” commendably prohibits the weaponization of its robots and says it will void the warranty of any customer that arms them.
Since the NYPD controversy, however, police departments have continued to use robot dogs, and Customs and Border Protection (CBP) announced that it was experimenting with the technology on the border, in both cases without weapons. But another robot dog manufacturer, Ghost Robotics, has already built a rifle-toting robot marketed to the military (the company’s slogan: “the warfighter’s best friend”). In 2016, Dallas police killed a gunman during a mass shooting using a wheeled bomb-squad robot jerry-rigged with explosives, in the nation’s first and only use of a robot to deliver deadly force. And now we’ve seen the police in San Francisco push back against a ban on weaponized robots.
No matter how many protestations some police departments and manufacturers may make about their lack of interest in arming robots, the possibility will continue to hang out there like forbidden fruit unless it is made illegal.
Other robot rules
It seems likely, however, that domestic law enforcement agencies will find uses for robots even if weaponization and use of force are banned. As the technology continues to improve, robots will be deemed too useful not to become widespread in society — and law enforcement will not be an island of abstention as the technology is routinely deployed for various uses. In fact, the use of ground robots by police bomb squads is already widespread, as is the use of drones.
Nevertheless, police use of the technology to interact with civilians is a unique matter — a different beast than the use of robots in industrial, employment, or household contexts, or even by police bomb squads. Police engage in fraught, power-laden interactions with civilians. Robots, no matter how they are used, represent an extension of officers’ power to act in the physical world.
Police robots may in the future introduce risks we can’t now imagine, but some civil liberties threats beyond weaponization are already apparent. Robots may enter private property and gather video and other data, which can create privacy risks. That’s already a major issue with flying robots, of course. Mission creep is also a concern. For example, the primary use case envisioned for police robot dogs is to assist in hostage and barricade situations by scouting, delivering food, and the like. (Drones are also being marketed for such uses.) But if the technology is sold to the public for that purpose, it’s likely to expand to all manner of other uses. In Honolulu, the police used their robot dog to take the temperatures of homeless people.
Fear and dehumanization are also a risk. Years of dystopian science fiction have primed people to find robot dogs spooky, but such fears are not to be dismissed as irrational or “merely atmospheric.” Some police departments have used militarized equipment to intimidate and to project power over communities, a tactic called a “show of force.” Such tactics chill dissent and, when used routinely, degrade people’s quality of life. A robot dog may be, strictly speaking, just a tool, but the fact that people find it frightening matters.
Given these potential problems, what should policymakers do when communities decide that they approve of police ground robots for various non-violent uses? Our recommendation would be that communities put the following principles into law:
No weaponizing or use of force.
Robots (including drones) should not be equipped with weapons of any kind designed or intended for use against people, or permitted to use force of any kind against people, including pushing, nudging, or pressing them. Bomb-disposal robots of the kind already widely deployed, which are not designed to interact with humans, would be permitted.
No entering private property without a warrant.
No ground robot should enter private property unless a police officer would have a legal right to be there, by judicial warrant or “exigent circumstances.” Courts will very likely compel this policy under the Fourth Amendment, but in the meantime, it should be enacted into law. Communities should also require a warrant to conduct surveillance in any situation in which a police officer would be required to obtain a warrant to conduct electronic surveillance.
No uses without community permission.
Too often we see police deploy controversial high-tech devices without telling, let alone asking, the communities they serve, using money from sources such as asset forfeiture funds or federal grants. Robots should not be deployed unless approved by a community’s city council or other elected oversight body in accordance with our recommended “Community Control Over Police Surveillance” principles. Elected officials should also consider limiting permission for the deployment of robots to a list of enumerated uses. If that later blocks some unanticipated use that people think makes sense, the police can easily come back to the city council and ask for permission, and an open discussion can ensue.
Transparency.
Communities can’t debate high-tech police tools if they don’t know about them. San Francisco residents only learned about the SFPD’s robots due to a California state law that required police departments to disclose the military technology and weaponry within their arsenals. Communities that decide to allow police ground robots should follow the recommendations for public notice, auditing, and effectiveness tracking that we have already called for where police want to deploy drones.