Mark Zuckerberg will soon appear before Congress to address recent news that a company called Cambridge Analytica harvested the data of some 50 million Facebook users in the service of its influence and propaganda campaigns. Members should use this opportunity to press Zuckerberg on Facebook’s collection, use, and sharing of sensitive user data — including on why the company has not taken more steps to prevent discriminatory ads that may run afoul of our civil rights laws.
Why did Facebook ignore warnings to adopt procedures that would have prevented the Cambridge Analytica fiasco?
The Cambridge Analytica debacle was easily preventable. As early as 2009, the ACLU urged Facebook to close the privacy holes that ultimately led to the Cambridge Analytica incident, but the company ignored those suggestions. And when Facebook first learned about the issue, it kept it hidden until the press caught wind of it. Members of Congress should demand to know: How did that happen? Did internal processes fail, or were they simply not in place? Why can’t Facebook address its problems when they happen rather than when they are revealed to the public?
Why has Facebook still not taken adequate measures to address the Cambridge Analytica incident and prevent future fiascos?
Facebook has announced plans to close the holes that allowed Cambridge Analytica to commit a “breach of trust” and misuse information about millions of users. But the company needs to go further to address this incident and prevent future ones. And the best way to do that is to ensure that users are in control. Facebook should retire the concept of “publicly available information” and restore users’ ability to control access to all of their information on Facebook. It should also make privacy settings easily understandable, invest sufficient resources in auditing to identify privacy violations, and notify all users whose information was used improperly.
To its credit, Facebook has promised to take many of these steps — but it hasn’t always followed through on past promises to protect user privacy. What will ensure that doesn’t happen again?
Why does Facebook track and profile millions of people who have never even created a Facebook account?
Facebook collects data about millions of people who have never created a Facebook account, whether through information uploaded to Facebook by their "friends" or through trackers embedded across the open web. Facebook doesn’t give these individuals the opportunity to learn what data it has about them or to request deletion of that data, and only registered Facebook users can view and edit their “ad preferences.” Why does Facebook collect the data of these individuals, and why has it not provided them the ability to control how this data is treated?
Can Facebook guarantee that its advertising is not illegally excluding individuals from housing, employment, credit, and public accommodation ads based on race, gender, age, or other protected characteristics?
Facebook offers advertisers many thousands of targeting categories, some of which can serve as “proxies” for characteristics that are protected by civil rights laws — such as race, gender, familial status, sexual orientation, disability, and veteran status. Advertisers have used these categories on Facebook in ways that violate civil rights laws, placing ads for housing, credit, and employment that exclude members of those groups. And while Facebook has officially prohibited some of these practices, investigations have found that discriminatory ads are still being accepted. Why has the company yet again failed to respond effectively to a known problem?
You have said, “I’m not sure we shouldn’t be regulated,” yet in the past Facebook has opposed many common-sense regulations. What new regulations will Facebook support?
As Cambridge Analytica and other incidents have shown, there are insufficient regulations in place to protect users when companies fail to protect their data. To address this gap, there have been proposals to require meaningful notice and consent for users, place limits on the use and retention of data, require data portability, and increase government enforcement. The European Union has adopted a far more comprehensive privacy law that places restrictions on how companies treat data and allows meaningful penalties when companies fail to adhere to its standards. The ACLU has long pointed to the need for a comprehensive privacy law in the U.S. Which regulations, specifically, will Facebook support?
What is Facebook doing to prevent future incidents in which companies and governments improperly use Facebook to surveil its users?
In 2016, the ACLU revealed that a company marketing police surveillance software had obtained access to Facebook and Instagram user data via developer channels. We know that police used this software to monitor black activists protesting police violence. Last week, Facebook said it will notify people whose data was misused by developers. When will Facebook notify users impacted by this surveillance and clarify what safeguards — such as regular audits of developers — are in place or planned to ensure that this does not happen again? What steps is Facebook taking to protect user data in light of the Trump administration’s “extreme vetting” initiative, under which the government intends to use social media to assist in vetting visa applicants and generating targets for deportation?
Why can’t users easily move their data from Facebook to another social media site?
You have often said that users own the data they post to Facebook. But Facebook hasn’t made it clear whether and how users can exercise meaningful control over that data. When a company like Facebook disregards demands for greater privacy and functionality, users should be able to remove their data from Facebook in a usable format (including their network of connections) so they can join another service that offers stronger privacy protections. What steps are you taking to facilitate data portability?
Will Facebook commit to not providing government entities or third parties access to its facial recognition technology?
Facebook’s long-term plans for facial recognition are not clear, but the growing use of the technology on the platform raises serious questions about how the company may use it to target ads, whether it will be vulnerable to government demands, and the risk that this sensitive data could be misused. What internal controls are in place to limit the use of facial data? Will Facebook pledge not to target advertisements — either online or in the real world — based on detailed, intimate data extracted from users’ faces in images and videos? Will Facebook commit to not allowing the government or third parties to use this technology, and to informing the public if this policy changes?