
Encrypted Email and Security Nihilism

A lock in front of an encryption screen
Daniel Kahn Gillmor,
Senior Staff Technologist,
ACLU Speech, Privacy, and Technology Project
May 18, 2018

Earlier this week, a group of German researchers published an alarming report about newly discovered problems with encrypted email, setting off a major controversy in the internet security community. This research, published under the snappy title "EFail," is valuable and important work highlighting the challenges of email security.

Unfortunately, many of the responses to this report have come close to security nihilism: throwing your hands in the air and saying that because certain important security measures aren't perfect, we should abandon them altogether. That attitude is harsh, potentially damaging to the best protections we currently have for email, and risks leading people astray when it comes to securing their communications. In fact, there are important things that people can do to protect their email. This post examines the controversy, what people should do to secure their email, and how we might do better in the future.

Email is a widespread communications tool, and people generally expect it to be private. But from a security standpoint, the baseline assumption is that email is "like a postcard": anything you write in an email can be read by your email provider (e.g., Google, if you use Gmail) and also by the email provider of the person you send mail to. If those providers (or any of their system administrators or lawyers) decide to look, or are hacked, bribed, or coerced by law enforcement into sharing access, the content of your email is easily accessible to them.

But the situation isn't hopeless. For many years, we have had mechanisms that allow email users to hide the content of their email messages from prying eyes. These systems are collectively called "end-to-end" email encryption schemes because only the endpoints (the users) can read the contents of the mail, and none of the providers in the middle can read it.
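
To make the idea concrete, here is a minimal sketch of end-to-end encryption of a message body, assuming the third-party python-gnupg package and a local GnuPG keyring that already contains the recipient's public key; the recipient address is a hypothetical placeholder, and a real mail program handles this (plus MIME formatting and delivery) for you.

```python
# Minimal sketch, not a complete mailer: encrypt a message body to a
# recipient's public key using GnuPG via the python-gnupg package.
# "friend@example.org" is a hypothetical recipient already on the keyring.
import gnupg

gpg = gnupg.GPG()  # uses the local GnuPG keyring (~/.gnupg by default)

body = "Meet at the usual place at 6pm."
encrypted = gpg.encrypt(body, "friend@example.org")

if encrypted.ok:
    # ASCII-armored ciphertext: this is what passes through the mail providers,
    # and only the recipient's private key can turn it back into the message.
    print(str(encrypted))
else:
    print("encryption failed:", encrypted.status)
```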

Protecting message confidentiality is important for activists, whistleblowers, journalists, and anyone else who wants to communicate privately. Snowden's revelations of NSA abuse would not have been possible without the end-to-end protections of PGP, but you don't have to be Snowden to want to keep the contents of your communications private from organizations whose interests may not align with your own.

Sadly, end-to-end email encryption is not widely used because of technical challenges and usability difficulties. Some of the same dynamics that have held confidential email back for decades are playing out again this week.

What is security nihilism?

Security nihilism is a common affliction in the information security, or infosec, world. A security nihilist, upon discovering a problem that we don't know how to solve, says, "Since we can't solve that problem, it's not worth solving other related problems." Or, more subtly, they make any gains contingent on systems so cumbersome and inconvenient that they drive people away from the security mechanism in the first place. In short, they let the perfect be the enemy of the good, in a world where security is never perfect anyway.

I've been guilty of security nihilism myself. In the past, I've discouraged people from using encrypted email if they hadn't verified the cryptographic identity of the person they're emailing ("checking fingerprints"), and I've discouraged people from storing their secret keys on devices that I thought weren't trustworthy enough (like mobile phones). I've also discouraged people from indexing their encrypted mail, which leaves it hard to search, because I was concerned about the security of the index.

While those concerns are all valid, I've realized that these objections were just driving people to fall back to sending unencrypted cleartext email. For example, I learned it was common practice for people to reply with, "I can't read your mail right now because I only have access through my phone, please re-send in the clear." Or I saw people give up on encrypted mail altogether because "checking fingerprints is too much of a pain." (And it is! Most humans are not comfortable comparing long strings of hexadecimal.) I even found myself declining to encrypt mail, or wishing that friends and contacts would mail me cleartext, because I knew it would be a pain to find the encrypted mail later.

The result of this kind of nihilism? Email remains, for most people (including most technically sophisticated geeks), entirely in cleartext and open to scrutiny by any service or network operator that handles the messages in transit.

What should a user do?

The answer to EFail should not be “disable email protection,” but rather “ensure your email program is updated and continue to keep it maintained.”

  • Use a good, well-maintained program to access your email.
  • Keep your software updated. This is true for all software, and it is especially important for software that handles sensitive information, like an email program.
  • Ensure that your emails are not leaking information when you read them by loading remote content. Many emails contain links to remote servers and pull images or other data from those servers when the email is loaded, but those remote servers can be hostile. Reasonable email programs block the loading of remote content by default. You can test your own email setup with Mike Cardwell's excellent Email Privacy Tester. If it indicates that your email program is leaking information when you read a message, it's a sign that something is wrong, and you should either contact the supplier of your program or switch email programs. People who need extreme caution can avoid all major risks of automatic data exfiltration by disabling HTML in their email entirely (e.g., in Thunderbird, choose View » Message Body As… » Plain Text). A short illustration of this kind of leak follows this list.
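
To make the remote-content leak concrete, here is a minimal sketch, using only the Python standard library, of the kind of remote references an HTML email carries; "message.eml" is a placeholder for any message saved from your mail program, and real clients have to consider more reference types than the few checked here.

```python
# Minimal sketch: list the remote resources an HTML email would fetch if
# "load remote content" were enabled. Each such fetch can tell a remote
# server when, and from what network, you opened the message.
from email import policy
from email.parser import BytesParser
from html.parser import HTMLParser

class RemoteRefFinder(HTMLParser):
    """Collect URLs a mail client would fetch automatically (images, styles, scripts)."""
    def __init__(self):
        super().__init__()
        self.remote = []

    def handle_starttag(self, tag, attrs):
        for name, value in attrs:
            if tag in ("img", "link", "script") and name in ("src", "href"):
                if value and value.startswith(("http://", "https://")):
                    self.remote.append(value)

with open("message.eml", "rb") as f:  # placeholder: any saved email
    msg = BytesParser(policy=policy.default).parse(f)

html_part = msg.get_body(preferencelist=("html",))
if html_part is not None:
    finder = RemoteRefFinder()
    finder.feed(html_part.get_content())
    for url in finder.remote:
        print("would fetch:", url)
```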

Finally, you should be cautious about clicking on links in emails you receive, because clicking a link might leak sensitive information. This is longstanding security advice. But what does "cautious" actually mean?

  • If you didn't expect to receive an email that has a link in it, or the email seems at all suspicious, just don't click.
  • You can usually view the URL a link points to by hovering over the link with your mouse, or by "long-pressing" it on a mobile device. Try that on a link in an email before you follow it.
  • If the text of the hyperlink shows one domain but the actual link points to a different one, don't click.
  • Also, if the link's URL does not start with https://, don’t click!
  • And if the URL's domain (the part between the https:// and the first /) doesn't make sense for the email, you should not click. Watch out for misspellings in the domain name, too. For example, microsorft.com is probably not the organization that provided your operating system. (A short sketch of these scheme and domain checks follows this list.)
  • If the URL is too long for you to view in full, or it looks unusually ugly, that's another warning sign that maybe you shouldn't follow it.
  • If there is information in the email about what the link points to, for example, if it refers you to a website or a news story, consider visiting the website by typing the address into your browser directly. You can also search for the news item in your favorite search engine instead of clicking the link directly.
  • Finally, if you were expecting the email to include a link and the email seems legitimate and the link points somewhere that you expect, then it might be appropriate to consider clicking it. But in general, you can just ignore most links and never click them.
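
For readers who want the link checks above spelled out, here is a minimal sketch of the same heuristics in Python; the domains are hypothetical examples and the length cutoff is an arbitrary assumption, and none of this replaces judgment about whether a link makes sense in context.

```python
# Minimal sketch of the "be cautious about links" checks, standard library only.
from urllib.parse import urlparse

def looks_reasonable(url: str, expected_domain: str) -> bool:
    parsed = urlparse(url)
    if parsed.scheme != "https":
        return False                                  # not https:// -- don't click
    host = parsed.hostname or ""
    if host != expected_domain and not host.endswith("." + expected_domain):
        return False                                  # unexpected or misspelled domain
    if len(url) > 200:
        return False                                  # suspiciously long, "ugly" URL
    return True

print(looks_reasonable("https://www.example.org/newsletter/2018-05", "example.org"))  # True
print(looks_reasonable("http://example.org/login", "example.org"))                    # False: not https
print(looks_reasonable("https://examp1e.org/login", "example.org"))                   # False: wrong domain
```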

All of the above is guidance that the email security community has been giving for years, and it is only further reinforced by the EFail research.

Bad responses to the EFail research

Several responses to the recent research, from the EFF among others, take extreme positions, basically advocating disabling email encryption capabilities entirely. The rationale has something to do with "an abundance of caution" or "avoiding a false sense of security," both classic lines from the security nihilist's playbook.

Advocates for giving up on email encryption propose two alternatives to make up for disabling this capability: switch to Signal (or some other end-to-end-encrypted messaging service), or manually decrypt incoming messages "from the command line." Neither is adequate.

Switch to Signal?

Signal is a great tool for private communications. Everyone who has access to it should use it wherever they can to replace SMS messages and cleartext over-the-air phone calls. To the extent that someone can use it instead of email, that's great.

But not everyone can use Signal for all communications, for many reasons, including:

  • There are many similar instant messaging systems, each person can only handle so many of them, and they don't talk to each other. So if you're in touch with two people who use Signal and another three who use a different messenger, how do you chat with all five of them? Email has the advantage of universal interoperability.
  • The centralized architecture of these services means that some network operators can, and do, block Signal and other centralized messaging services like it. People stuck behind those networks simply can't use these tools at all.
  • Some people can only be contacted by email and have no public Signal number. For example, the EFF's contact page lists email addresses (with PGP fingerprints) and office phone numbers, but no Signal numbers. If I've switched off end-to-end email security in favor of Signal, how am I supposed to communicate with the EFF securely?
  • Signal requires registration with a phone number. Not everyone has a phone number, knows the phone number of the person they want to contact, or is comfortable sharing their phone number with other people.
  • Some versions of the Signal app have had serious security flaws of their own. The answer to those problems, of course, is not "disable Signal." Rather, it's to ensure that you are running a patched, well-maintained version of an important security tool.

Manually decrypt incoming messages?

The other suggestion is to manually decrypt the raw text of emails from the command line rather than integrating decryption with an email program. This is classic security nihilism, and it's a terrible idea.

For one thing, for emails of any complexity (attachments, non-English text, messages using HTML), it just doesn't work. Most people can't read raw HTML, can't interpret base64 encoding, can't separate MIME parts, and so on. To the extent that they can pick their way through the detritus of the "message source," they're likely to make at least as many mistakes in making sense of it as a poorly implemented email program would. Those mistakes can have bad consequences for privacy and confidentiality.

For another, this advice discourages people from using easy, fit-for-purpose, well-maintained tools. The result? Again, we normalize, “I can't read this thing, please re-send it to me unencrypted.”

Ecosystem concerns

A valid point raised in some of the responses to the EFail research is that even if your email setup is updated and safe, sending encrypted email to someone else who is running vulnerable tools risks leaking the contents of that email.

This is an important reminder that privacy is relational and interdependent. My data is only fully private when the people I communicate with keep their data private and vice versa.

If we go down the security nihilism route, then we would also have to discourage everyone from using Signal and other messenger services because you don't know whether your contacts are running a safe, updated version. And in a reductio ad absurdum maybe we want to discourage people from using computers for confidential communication at all, since the person on the receiving end might not have updated their operating system or might have a weak or missing local password.

But the right solution to this conundrum is, of course, to encourage everyone to maintain their phones and computers, not to discourage the use of secure communications.

EFail shows flaws in email, not just PGP

One final point of concern about how the discussion of EFail has been handled:

There are two major techniques for encrypted email, roughly known as "PGP" and "S/MIME." Some of the pushback mischaracterizes the problems as being PGP-specific, even though the most significant findings in the research apply equally to both PGP and S/MIME. This misframing does a disservice to everyone involved: the researchers, the community of PGP-based email developers, and the users of all encrypted email, S/MIME and PGP alike.

The problems are related to the handling of email decryption generally, and they are compounded by email programs not being as cautious and sensitive about user privacy as they should be. These are bugs that email programs need to fix so that people can rely on their email not invading their privacy or violating their confidentiality.

Should we give up on email?

Given these concerns, one more option presents itself: Can we just give up on email entirely?

I think that many of the nihilistic responses to the EFail research are motivated by this hope — maybe we can just phase out this clunky old system. However, I don't think this is realistic.

Email is too widespread and too useful to give up on trying to protect it. We need to focus on fixing email's problems, rather than trying to demote this fundamental internet technology. There are serious flaws in the web's security as well, and we don't discourage people from using secure web browsing.

And even if we could somehow phase out email, the centralized, siloed architectures of the most plausible current replacements (like Signal, Wire, and Telegram) introduce a new set of problems.

Usable, secure communication

Confidential, private internet communication is critically important to anyone who cares about free speech, free association, and privacy. When excellent research like the EFail work identifies infrastructural problems, our response needs to be to improve the infrastructure and to help people stay safe. Nihilism is tempting, both from an absolutist perspective and out of despair at the ongoing litany of technical failures. But it's not a realistic option, and we need to keep up the good fight.
