Efail or OpenPGP is safer than S/MIME

Andrew Gallagher andrewg at andrewg.com
Mon May 14 22:43:56 CEST 2018

> On 14 May 2018, at 18:32, Werner Koch <wk at gnupg.org> wrote:
> On Mon, 14 May 2018 15:44, andrewg at andrewg.com said:
>> This all exposes one of the difficulties with trying to manage security
>> software in a decentralised ecosystem. We end up in arguments over whose
> That is actually easy compared to a system which is also designed to
> protect data at rest.  Some users may want to restore their 2 year old
> backup to fix a problem with garbled tapes; some may want to read the
> real documents about WMD from 2003; some may even want to be able to
> decrypt their old love letters at the time of their silver wedding.

Indeed. This is why data must be treated as a living object. If tape drive technology moves on, the data must move with it. The same goes for file formats, encryption systems and dying RAID arrays. Librarians and archaeologists understand the process of care and feeding for physical artefacts; digital artefacts are no exception.

>> 1. change the default behaviour of GPG so that any integrity failure is
>> fatal by default, even for old ciphersuites (we could have a flag to
> I am all in favor of this and even considered doing that some time ago.
> However, not too long ago we removed support for PGP-2 keys, which
> unfortunately resulted in lots of angry mails from people who now think
> they need to use gnupg 1.4 every day because they seem to read mails
> from the last century on a regular basis.  Well, they think so, and they
> were quite vocal.  Now telling them they need to enable an option to
> read certain not-that-old mail (e.g. created by other OpenPGP
> implementations) will a) lead to even more angry mails and b) they will
> keep on using that option for all mails.  Thus my tentative plan was to
> make the next major version hard fail on messages without MDC and slowly
> start using our forthcoming AEAD encryption mode.
> Well okay, with the publication of the Efail paper we could now
> point to that paper and always hard error out if no MDC is used, even
> for old algorithms.  Shall we consider this?

Yes, absolutely. I think this is the easiest and most effective technical mitigation available. If people have a problem with data archival, they should be pointed to a guide on how to re-encrypt their sensitive data in a modern format. Their data is probably horrendously insecure anyway, so we’re doing them a favour. :-)
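For what such a guide might look like: a minimal sketch of re-encrypting an archived file with a modern cipher and the modification detection code (MDC) enforced. The file names and key ID here are placeholders, and the exact invocation will vary; `--cipher-algo AES256` and `--force-mdc` are real gpg options, but treat this as an illustration rather than a recommended script.

```python
import subprocess

def reencrypt_commands(old_path, new_path, recipient):
    """Build the two command lines: decrypt the legacy file, then
    re-encrypt with AES-256 and the MDC enforced. Paths and the
    recipient key are caller-supplied placeholders."""
    decrypt = ["gpg", "--decrypt", old_path]
    encrypt = ["gpg", "--encrypt", "--recipient", recipient,
               "--cipher-algo", "AES256", "--force-mdc",
               "--output", new_path]
    return decrypt, encrypt

def reencrypt(old_path, new_path, recipient):
    """Pipe the decrypted plaintext straight into the re-encryption step,
    so it never touches the filesystem."""
    decrypt, encrypt = reencrypt_commands(old_path, new_path, recipient)
    plaintext = subprocess.run(decrypt, capture_output=True,
                               check=True).stdout
    subprocess.run(encrypt, input=plaintext, check=True)
```

The same pipeline works as a one-liner in the shell, of course; the point is only that the upgrade pathway is mechanical.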

If we believe that there will be more encrypted messages in the future than there have been in the past, then protecting those future messages takes priority, especially if an upgrade pathway exists. 

>> the obsolete ciphersuites by default (again, we can provide an
> They are not used by default.  3DES is a MUST algorithm and will only be
> deprecated with RFC-4880bis, and thus GnuPG 2.3.


As an aside, I think we have to be careful about the meaning of “use”. These algorithms are not used by default for encryption, but they are for decryption. I’ve had multiple conversations today over this ambiguity.

>> 2. AND the MUAs need to make sure they fail hard on integrity warnings,
>> because old versions of GPG may hang around for a while. Also ensure
> Fortunately the majority of them do.

Yes; nevertheless, I don’t think it is good practice to rely on a single layer for security protection, because then we have a single point of failure. With two interacting systems, neither should assume that the other is behaving correctly. Trust but verify; belt and braces; measure twice, cut once.

That means security policy should be enforced by both applications, so that a single failure doesn’t blow open the entire system. This is especially important when there are potentially unlimited kinds of systems, of varying compliance, that could be involved in any interaction. 
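Concretely, a mail client can verify gpg's machine-readable `--status-fd` output itself rather than trust whatever layer invoked it. `DECRYPTION_OKAY` and `DECRYPTION_FAILED` are genuine GnuPG status tokens; the policy function below is a hypothetical illustration of the belt-and-braces idea, not any MUA's actual code.

```python
def safe_to_render(status_output: str) -> bool:
    """Return True only if GnuPG's status lines positively confirm a
    successful, integrity-checked decryption. Absence of an error is
    NOT treated as success: we require DECRYPTION_OKAY explicitly."""
    lines = status_output.splitlines()
    ok = any(l.startswith("[GNUPG:] DECRYPTION_OKAY") for l in lines)
    failed = any(l.startswith("[GNUPG:] DECRYPTION_FAILED") for l in lines)
    return ok and not failed
```

The design choice worth noting is the default-deny: an empty or truncated status stream renders nothing, so a failure in the other layer cannot silently open the door.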

I think also that we should be mindful that “be strict about what you send but liberal about what you receive” is great advice for interoperability, but absolutely disastrous advice for security. 

>> that links aren't followed by default, that the capabilities of
>> encrypted HTML mail are constrained, etc.
> Yes please, I consider this the minimum requirement for HTML-based
> mails.  Why send email when you need to go online to read it?
> And also disallow Javascript.  Now you only need to convince the mail
> content designers that they can't simply take the web page and send it
> as mail.  That will be the hard part.

Another thing we need to learn from this is that HTML elements may be a privacy concern in plaintext mail, but they are a *security* concern in encrypted mail. The context changes the risk profile. So mail clients (tbird) that disable risky HTML (such as loading images) by default but provide user overrides are doing so justifiably from a privacy standpoint, where a warning about the privacy implications may be sufficient. 

But encryption has to change this risk analysis - in an encrypted mail there can’t be an easy override because the stakes are much higher and people are easily tempted. When we have a system like tbird+enigmail+gpg where there are *three* interacting components, this coordination becomes really difficult. At the very least, enigmail must be able to enforce a stricter content hygiene policy on encrypted HTML mail than tbird applies to plaintext HTML.
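To make "stricter content hygiene" concrete, here is an illustrative sketch (emphatically not Enigmail's actual code, and a real sanitizer needs a full allow-list approach): a pass over decrypted HTML that drops tags capable of triggering network requests or script, before the content ever reaches the renderer.

```python
from html.parser import HTMLParser

# Tags that can fetch remote resources or execute code; an encrypted-mail
# policy drops them outright rather than offering a user override.
RISKY_TAGS = {"script", "img", "iframe", "object", "embed",
              "form", "link", "style", "meta", "base"}

class Sanitizer(HTMLParser):
    """Rebuild the HTML, omitting risky tags, event-handler attributes
    and javascript: URLs."""
    def __init__(self):
        super().__init__()
        self.out = []

    def handle_starttag(self, tag, attrs):
        if tag in RISKY_TAGS:
            return
        kept = []
        for name, value in attrs:
            if name.startswith("on"):   # onclick, onload, ...
                continue
            if value and value.lstrip().lower().startswith("javascript:"):
                continue
            kept.append(' %s="%s"' % (name, value or ""))
        self.out.append("<%s%s>" % (tag, "".join(kept)))

    def handle_endtag(self, tag):
        if tag not in RISKY_TAGS:
            self.out.append("</%s>" % tag)

    def handle_data(self, data):
        self.out.append(data)

def sanitize(html):
    s = Sanitizer()
    s.feed(html)
    s.close()
    return "".join(s.out)
```

The point of the sketch is the layering: this policy lives in the decrypting component, so it applies however permissive the surrounding client's plaintext-HTML settings happen to be.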

How *feasible* this might be is a question aimed at the enigmail devs in the list. :-)

>> The PGP ecosystem will survive this, because the tech is in place. The
> I am not so sure about S/MIME - but that is wishful thinking ;-)



More information about the Gnupg-users mailing list