Re: Break backwards compatibility already: it’s time. Ignore the haters. I trust you.

Fiedler Roman Roman.Fiedler at ait.ac.at
Tue May 22 11:44:22 CEST 2018


Hello list,

I could not decide which message would be best to reply to, so I picked one with a title that rational humanists could be proud of. Ignoring the title, many of the messages made valid arguments for both sides. From my point of view, the main difference seems to be what each side believes to be the valid use cases, and hence the requirements, for GnuPG. As I do not know of any requirements-engineering documentation for GnuPG (though I have not searched for it yet), I simply skimmed the various use cases that would be affected by dropping legacy support from GnuPG in the hardest way (both encrypt and decrypt).

Foreword: The arguments below come ONLY from the mostly fully automated, non-mail GnuPG use cases I currently have, and from their implications for backward compatibility. Those use cases might not be representative of automated use cases in general, or might not be worth considering in GnuPG's future. If they are not regarded as important for GnuPG, that is also OK for me. This is all just about deciding whether GnuPG will be the preferred encryption tool in our setups for the next 5-10 years.


To keep the reasoning orderly, here are my assumptions:

* A:LegacyBad: Legacy support is more of a security risk than a usability benefit, so it should be removed (or at least disabled) in the current version.

* A:LateAdoptersPay: The burden of migrating legacy systems should fall mostly on those who own them: their lifecycle strategy was chosen to minimize their costs, and that choice should have included a prediction of where software development, features, and availability would move. If they got it wrong, it is their fault.

* A:MigrationPaths: GnuPG, for its part, has to provide simple and clear data/function migration paths, so that long-term users can trust GnuPG to remain a solution for a long time.

* A:NonStdBenefits: Supporting non-standard use cases (e.g. non-mail) benefits both parties: the GnuPG software gets non-standard testing (so security-relevant bugs might be discovered that would not show up in the standard use case), while the other party gets software that is 80% fit for their purpose, so system integration can be done much faster.

* A:MachineTurnaround: The turn-around time of server software is about 5 years, and not all machines are migrated at once. So there will be a transition phase in which legacy and non-legacy systems have to work together.

* A:NoArchiveReencrypt: Fully reading and reencrypting old tape archives (some of which have to be stored/copied for 20+ years) is not an option, regarding both the effort and the auditing support.

* A:LateAdopterIsolate: As legacy software might not be able to tackle modern threats, that part of threat mitigation has to be handled by the operator. Meaning: while gpg 1.4 might have been suitable for decryption in online (networked) setups back then, a backward-compatibility setup might do that today only inside a state-of-the-art 2018 64-bit OS release, with GnuPG running in an old i386, unprivileged, minimal, fully hardened LXC container (see the sketch after this list).

* A:AttackSurface: While in desktop setups a more complex GnuPG might not be the largest part of the attack surface, the size of the GnuPG attack surface may well be relevant in hardened, automated setups. If GnuPG cannot be run in a simple, auditable, automatable, minimal configuration, this will also affect trust in its future usability.
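
To make [A:LateAdopterIsolate] a bit more concrete, here is a minimal sketch of such an isolated legacy decryptor. It assumes an unprivileged LXC container named "gpg-legacy" that already holds an old i386 userland with gpg 1.4 installed; the container and file names are purely illustrative:

    # start the throwaway legacy container (configured without network)
    lxc-start -n gpg-legacy
    # run the legacy decryption inside; the host gpg stays untouched and
    # the ciphertext is visible in the container under /data
    lxc-attach -n gpg-legacy -- gpg --batch --decrypt /data/old-msg.gpg > plaintext.bin
    lxc-stop -n gpg-legacy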


Considering all those assumptions, I would hope that the following strategy lies somewhere near the optimal point for splitting costs between legacy operators and development (hopefully for both mail and automation):

* Have 3 categories of features to ease long-term planning, both for devs and users (mail and automation):

"recent": those just work

"deprecated": not insecure, but something considered to be removed over the next 5-10 years. They can be enabled by "--deprecated" without any known, current security risks.

"legacy": In an ideal world, they would not even be in the main code line.

Having those levels would ease the coordination of migration paths between devs and users within the timeline required by [A:MachineTurnaround]. As soon as one of your tools requires "--deprecated", you should start prioritizing/handling that with your devops team.
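
To illustrate (note that "--deprecated" is only the gate proposed above, not an existing GnuPG option): with such categories, a trivial devops check over a directory of encrypted test fixtures could already tell you where you stand. The fixture paths are placeholders:

    # hypothetical check against a future gpg honoring the proposed
    # "--deprecated" gate
    for f in fixtures/*.gpg; do
        if gpg --batch --decrypt -o /dev/null "$f" 2>/dev/null; then
            continue  # "recent": just works
        elif gpg --batch --deprecated --decrypt -o /dev/null "$f" 2>/dev/null; then
            echo "WARN: $f needs --deprecated, schedule migration" >&2
        else
            echo "FAIL: $f needs legacy features (or is broken)" >&2
        fi
    done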


* While running a mixed setup [A:MachineTurnaround], [A:MigrationPaths] should be available to reduce the amount of data produced with "deprecated" (or even "legacy") features while their obsolescence is already dawning.

As the producing systems might not be modifiable without breaking warranty while [A:MachineTurnaround] is not over yet, and operators may already bear increased costs according to [A:LateAdoptersPay], GnuPG tool support for migrating data in use would be nice. It should be quite easy to use for tackling "deprecated" features (also to motivate users to migrate in steps). For "legacy" features the integration effort might be much higher, which is OK. This could even go so far that GnuPG only writes a protocol of what was done during decryption/signature checking, and the caller has to decide whether that result is acceptable.

In a generic solution, such a tool could be something like "gpg-archive": it just wraps the old message, the old decryption report, and the plaintext, under a new key, into new encrypted/signed OpenPGP elements (wish list: it would be nice if such a thing could some day be defined as a backward-compatibility RFC for OpenPGP). Such a feature might be needed anyway, because from some point in time on, the old crypto (RSA-1024 or other non-quantum-safe algorithms) might not be good enough for data at rest.

Such a tool might then be used, e.g., on a MitM message-reencryption gateway: the old machines still send messages with old (deprecated/legacy) options, and "gpg-archive" transforms them. The full data (old message, old decrypt report, reencrypted plaintext) goes to the auditing storage, while the reencrypted plaintext goes on to the standard (pre-MitM) receiver, who from then on no longer needs to support legacy/deprecated features.
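
"gpg-archive" does not exist yet, so here is only a rough approximation of that gateway step using plain gpg calls; RECV_KEY and AUDIT_KEY are placeholder key IDs, and the "decrypt report" is simply gpg's machine-readable status output:

    # decrypt the incoming legacy message, capturing a decrypt report
    gpg --batch --status-file decrypt-report.txt --output plain.bin \
        --decrypt old-msg.gpg
    # reencrypt the plaintext with current defaults for the real receiver
    gpg --batch --recipient RECV_KEY --output new-msg.gpg --encrypt plain.bin
    # bundle old message, report and plaintext for the auditing storage
    tar cf - old-msg.gpg decrypt-report.txt plain.bin \
        | gpg --batch --recipient AUDIT_KEY --output audit-bundle.gpg --encrypt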


* For long-term-archive use cases (which usually means the data is REALLY at rest according to [A:NoArchiveReencrypt]), access will happen so rarely that tooling as described in [A:LateAdopterIsolate] would be acceptable for me. (Apart from that: the session-key extraction features do you a real favor on large data streaming here, boosting performance by multiple orders of magnitude. Thanks for that!) If access happens more often, the data is not really at rest but sometimes in use (and often at risk, as the old archives are accessed frequently while their crypto might already be weak), so on-the-fly reencryption might be cheaper for me as an operator (reencryption costs vs. security-risk costs).
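
For reference, the session-key features meant above are GnuPG's existing "--show-session-key" and "--override-session-key" options; a sketch of how they help with a huge archived stream (the key string shown is a placeholder):

    # recover the session key once; only this step needs the (possibly
    # offline or smartcard-held) private key
    gpg --batch --show-session-key --status-file sk.txt \
        --output /dev/null --decrypt big-archive.gpg
    # later, possibly on another machine: bulk-decrypt the stream using
    # only the extracted symmetric session key, no private key required
    gpg --batch --override-session-key '9:1A2B3C...' \
        --decrypt big-archive.gpg > big-archive.tar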

* Those automation/long-term-use features need to be available somehow without GnuPG becoming desktop-optimized bloatware, which would increase the costs of hardening, testing, and auditing [A:AttackSurface].



I hope I have made clear what I am arguing for. So let's drop legacy, but with style.

BTW: If there is a wiki/git structure/... for use-case documentation and requirements engineering, I would volunteer to participate. Helping to develop a good strategy is orders of magnitude cheaper than replacing all our GnuPG stuff.

Best regards,
Roman

