
Spies Want to Make the FaceTime Eavesdropping Bug Into a Feature

Daniel Kahn Gillmor,
Senior Staff Technologist,
ACLU Speech, Privacy, and Technology Project
January 31, 2019

On Monday, we learned that Apple’s FaceTime video chat service suffers from a bug that permits other people to get audio and even video directly from your iPhone or Mac computer. This can happen without your permission and without the standard indication that the other person is listening and watching. Anyone with FaceTime could eavesdrop on any other FaceTime user simply by placing a call and performing one additional step; the victim’s device would start transmitting even if they never accepted the call.

This is a fairly catastrophic bug. Yet alarmingly, if major national spy agencies get their way, a comparable bug will become a standard feature in almost every popular communications product currently in use. As incredible as that might seem, we know this because they’ve told us so.

The FaceTime bug is a failure in the user interface — the parts of the software that make the user aware of and in control of what the device is doing. FaceTime’s user interface fails in at least two ways that are related, but distinct. First, it sends audio and video to the attacker without the victim’s permission — the transmission starts without the victim approving it. Second, it does so without the victim’s knowledge — the normal indication that an active call is underway is absent.
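To make that distinction concrete, here is a minimal sketch in Python. It is purely illustrative (it is not Apple's code) and models the two invariants a trustworthy calling interface must maintain: media flows only after explicit consent, and never without a visible indication.

```python
# Illustrative sketch, not Apple's code: the two user-interface
# invariants that the FaceTime bug violated.

class CallSession:
    def __init__(self):
        self.accepted = False           # invariant 1: explicit user consent
        self.indicator_visible = False  # invariant 2: visible "on air" state

    def accept(self):
        """Called only when the user explicitly answers the call."""
        self.accepted = True

    def start_media(self):
        """Begin transmitting audio/video, gated on both invariants."""
        if not self.accepted:
            raise PermissionError("user has not accepted the call")
        self.indicator_visible = True
        print("transmitting audio/video (indicator on)")

session = CallSession()
session.accept()       # consent first...
session.start_media()  # ...then transmission, with the indicator shown

# The bug behaved as if some code path started transmission without
# accept() ever being called and without the indicator: both invariants
# broken at once.
```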

The engineering community has long recognized that user interface failures are a frequent cause of security failures and that these failures are often worse than others. There are researchers, conferences, and professional organizations dedicated to working on trustworthy and secure user interfaces, and Apple itself has published guidelines that reinforce the same principles.

But officials from Britain’s Government Communications Headquarters (GCHQ) — a close surveillance partner of the U.S. National Security Agency — recently proposed that government agents be able to inject hidden participants into secure messaging services. This proposal has come to be known as the “Ghost proposal.”

Written by GCHQ’s Ian Levy and Crispin Robinson, it recommends institutionalizing an untrustworthy user interface when the government wants to spy on a conversation:

It’s relatively easy for a service provider to silently add a law enforcement participant to a group chat or call. The service provider usually controls the identity system and so really decides who’s who and which devices are involved — they’re usually involved in introducing the parties to a chat or call…. In a solution like this, we’re normally talking about suppressing a notification on a target’s device… and possibly those they communicate with.

In short, Apple — or any other company that allows people to privately chat — would be forced to allow the government to join those chats as a silent, invisible eavesdropper. Even the most secure apps like Signal (which we recommend) and WhatsApp, which use end-to-end encryption, would be rendered insecure if they were forced to implement this proposal.
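To see why this is an interface failure rather than an encryption failure, consider a toy model of a provider-run group chat. The names and structure here are invented for illustration; no real service works exactly this way.

```python
# Hypothetical group-chat roster controlled by the service provider.
members = {"alice", "bob"}

def add_member(user, notify=True):
    """The provider adds a participant, normally announcing the change."""
    members.add(user)
    if notify:
        print(f"[notification] {user} joined the chat")

def send(sender, plaintext):
    """End-to-end encryption is untouched: the message is encrypted to
    every current member. The roster, not the math, is the attack surface."""
    for member in members - {sender}:
        print(f"encrypt to {member}: {plaintext!r}")

add_member("carol")                       # visible, as users expect
add_member("ghost", notify=False)         # the proposal: a suppressed join
send("alice", "just between us, right?")  # silently encrypted to the ghost too
```

The encryption step is identical before and after the ghost joins; the only thing that changed is the membership list, which the provider controls.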

[Embedded video: https://www.youtube.com/watch?v=vSQQXS3q1k8]

The Ghost proposal institutionalizes a significantly worse user interface failure than Monday’s FaceTime flaw. With the FaceTime bug, the vulnerable user at least gets an alert about an incoming call to know that something is happening, even if the user interface is misrepresenting the situation and violating the user’s expectations. With the Ghost proposal, the user has no way of even knowing that something is happening that violates their expectations.

The GCHQ authors claim that Ghost provides law enforcement with wiretap-like capability, and “you don’t even have to touch the encryption.” This is true, but only in the most disingenuous sense.

When people want encryption in their communications tools, it’s not because they love the mathematics. People care because of what encryption does. Encryption and other cryptographic protocols are necessary to protect people through properties like confidentiality, integrity, and authenticity. The Ghost proposal essentially says, “Let us violate authenticity, and you can keep encryption.” But if you don’t know who you are talking to, what security guarantee is left?
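Here is a short sketch of that point using the PyNaCl library (the scenario is hypothetical): the encryption below is mathematically intact, yet it protects nothing once the identity system hands Alice the wrong public key.

```python
# Hypothetical key-substitution scenario, sketched with PyNaCl.
from nacl.public import PrivateKey, Box

alice = PrivateKey.generate()
bob = PrivateKey.generate()    # the intended recipient, who never sees this
ghost = PrivateKey.generate()  # an eavesdropper with its own keypair

# Alice asks the provider for "Bob's" public key. A provider implementing
# the Ghost proposal can answer with the eavesdropper's key instead.
claimed_bob_key = ghost.public_key

ciphertext = Box(alice, claimed_bob_key).encrypt(b"meet at noon")

# Nobody "touched the encryption," yet the ghost reads the message anyway.
print(Box(ghost, alice.public_key).decrypt(ciphertext))  # b'meet at noon'
```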

Cryptography is necessary to ensure these properties, but it is not sufficient on its own. The entire system, from the cryptographic mathematics to the software implementation to the network protocols to the user interface, is critical to providing secure communications in an increasingly hostile online environment.

And let’s not forget: If companies like Apple are compelled to enable governments to participate silently in private conversations, that tool won’t be available only to democratic governments — it will be employed by the world’s worst human rights abusers to target journalists, activists, and others.

We should be clear: All software has bugs, and Apple’s software, as good as it is, is no exception. Although it took too long for Apple to recognize the flaw, the company is now treating it with the gravity it deserves.

Since the vulnerability is accessed through Group FaceTime, Apple has taken those servers entirely offline until the FaceTime app itself can be fixed. But any connected FaceTime app is still currently vulnerable if Apple chooses to re-enable the Group FaceTime servers, so until an upgrade is shipped, people should probably disable FaceTime. (This is a good reminder of why it’s important to install new software updates as soon as they’re available.)

That such a serious flaw could be discovered in the software of a company known for prioritizing privacy should be a warning to anyone, including GCHQ and the NSA, who advocates for intentional security flaws to facilitate government surveillance. It’s very difficult to engineer software correctly in the first place, and it’s even more difficult to design it with intentional flaws, however limited. If a mechanism exists to deliberately make the user interface untrustworthy, it will be an attractive target for malicious hackers and other hostile actors. Who will be responsible for its inevitable abuse?

Any future discovery of a software flaw that enables eavesdropping, false identities, message tampering, or any other compromise of communications security should be treated the same way as this latest weakness: with serious emergency mitigations, followed as soon as possible by a software update that removes the flaw. And governments certainly shouldn’t consider adding such vulnerabilities on purpose.
