ACLU Statement on OMB Memo for AI Use by Federal Agencies
WASHINGTON — Today, the U.S. Office of Management and Budget (OMB) released a memorandum establishing baseline protections for civil rights and safety that federal agencies must meet in their use of artificial intelligence (AI). The memorandum makes important strides, including an appropriately broad scope of uses of AI that are presumed to affect people’s rights and safety. However, this broad scope is diluted by exceptions and waivers for national security and law enforcement that significantly undercut the memorandum’s protections.
“OMB has taken an important step, but only a step, in protecting us from abuses by AI. Federal uses of AI should not be permitted to undermine rights and safety, but harmful and discriminatory uses of AI by national security agencies, state governments, and more remain largely unchecked,” said Cody Venzke, senior policy counsel with the American Civil Liberties Union. “Policymakers must step up to fill in those gaps and create the protections we deserve.”
Under the adopted memorandum, covered agencies must implement minimum risk management practices for AI that impacts rights and safety. OMB defines “rights-impacting AI” as AI that has a “legal, material, binding or similarly significant effect” on individuals’ or communities’ civil rights and equal opportunities. This broad scope is necessary to meet existing and imminent challenges: AI is already affecting access to employment, housing, credit, and education, and it continues to develop at an incredible pace across governmental agencies and economic sectors. The final memorandum also makes important improvements in building out transparency for key agency determinations, such as the granting of waivers under the memorandum.
However, the memorandum entirely or largely exempts national security systems and intelligence agencies. U.S. intelligence agencies are racing to integrate AI into some of the government’s most profound decisions: who it surveils, who it adds to government watchlists, who it labels a “risk” to national security, and even who it targets using lethal weapons. Despite the increasing reliance on AI, these agencies lack specific rules and safeguards for their AI systems, as well as clear processes to implement and enforce those rules.
In addition, exceptions for “sensitive law enforcement” information may undermine the memorandum. Law enforcement uses of AI continue to have a pronounced impact on individuals’ rights and safety. Law enforcement agencies across the country have deployed algorithmic systems such as facial recognition technology and predictive policing systems, often with harmful results. Moreover, the memorandum does not protect against abuses of AI by state agencies, even if they receive federal funds.
OMB’s memorandum can be found online here: