encryption algorithm

Daniel Kahn Gillmor dkg at fifthhorseman.net
Wed Dec 18 08:18:05 CET 2013

On 12/18/2013 12:29 AM, Robert J. Hansen wrote:
> A flawed standard is just that, a flawed standard.  It's not a cause for
> a crisis of trust in an outfit that has enjoyed the community's trust
> for many decades.

Sorry, but NIST does face a crisis of trust, particularly in the area of
cryptography, whether either of us wants that to happen or not.  I don't
think we should pretend otherwise.  NIST certainly isn't pretending
otherwise; it is taking the situation seriously.

> I don't understand.  If your argument against switching to ECC is "we
> won't get rid of RSA-2048 for a long time anyway," then how can you use
> the same logic to argue for switching to RSA-3072 right now, since
> presumably we still won't get rid of RSA-2048 for a long time anyway?

Because it takes a long time for these fixes to trickle into downstream
distributions and reach wider adoption, and longer still for the users
of those systems to adopt new stronger keys and integrate them into
their workflow and their certification networks.

You don't plant a tree now because you want fruit tomorrow; you plant a
tree now because you want fruit in 3 years.

> Doesn't address the issues of forcing people to rewrite tutorials and
> manuals, rewrite standard operating procedures for their businesses,
> rewrite scripts to accommodate the new defaults... etc.  I think you are
> vastly underestimating the infrastructure here.

People are already rewriting tutorials and manuals and standard
operating procedures.  They're doing it because gpg isn't updating the
defaults.  That work is under way right now, in exactly the ways that
you've expressed frustration about on this thread, because the authors
of manuals and tutorials want to provide their users with a strong
safety margin, particularly in light of recent disclosures about
powerful attackers who have access to large amounts of traffic and
outlandish compute power.

The best manual would say "Use a reasonably up-to-date version of GnuPG,
and use the defaults.  They are strong, and functional, and provide a
security margin that everyone can rely on."  That wouldn't require much
updating if the defaults track the current cryptanalysis and
publicly-acknowledged threats.

> Further, there's nothing preventing you from packaging your own GnuPG
> build that has 3072-bit RSA as a default.  Speaking just for myself, I'd
> welcome that -- I wouldn't use it, but I'm completely in favor of there
> being a competitive marketplace of ideas and letting the users sort it out.

sigh.  Of course i could do this, but i don't want to, because i would
rather that gnupg have a set of defaults with a forward-looking safety
margin built in to begin with.

> The point of the metaphor was to show that moving from "adequate for 99%
> of the population" to "adequate for 100% of the population" has some
> extreme costs involved.

Except that i'm not asking for anything that has extreme costs.  There
are larger costs involved with just scanning and parsing a keyring with
a few dozen keys than in the difference between a 2048-bit RSA operation
and a 3072-bit RSA operation.
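
That cost difference is easy to measure directly.  Here is a minimal
sketch (my own illustration, not from this thread) timing the modular
exponentiation at the core of an RSA private-key operation at both
sizes, using Python's built-in pow(); the operands are random numbers
of the appropriate length rather than real RSA keys, since only
operand size matters for the timing:

```python
import random
import timeit

def modexp_time(bits, reps=20):
    """Average seconds for one bits-sized modular exponentiation."""
    base = random.getrandbits(bits) | 1
    exp = random.getrandbits(bits) | 1
    # Force the top bit so the modulus really is `bits` wide, and make it odd.
    mod = random.getrandbits(bits) | (1 << (bits - 1)) | 1
    return timeit.timeit(lambda: pow(base, exp, mod), number=reps) / reps

t2048 = modexp_time(2048)
t3072 = modexp_time(3072)
print(f"2048-bit modexp: {t2048 * 1000:.2f} ms")
print(f"3072-bit modexp: {t3072 * 1000:.2f} ms")
print(f"slowdown: {t3072 / t2048:.1f}x")
```

Big-integer modular exponentiation scales roughly cubically in the
operand size, so the 3072-bit operation lands in the neighborhood of
3x the 2048-bit one -- a handful of milliseconds either way, which is
negligible next to scanning and parsing a keyring.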

>> I'm asking for a strong baseline set of defaults with a reasonable 
>> security margin based on current knowledge.  This isn't perfection, 
>> it's (fallible, human) engineering.
> And your belief is that 112 bits of keyspace is not a strong set of
> defaults with a reasonable security margin based on current knowledge?
> For crypto that's currently projected to be secure out until 2030?

A 2048-bit RSA key offers *at most* 112 bits of keyspace, depending on
whose analysis you rely on.
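
Where such equivalence figures come from can be sketched numerically.
The best public factoring algorithm, the General Number Field Sieve,
runs in roughly exp(c * (ln N)^(1/3) * (ln ln N)^(2/3)) with
c = (64/9)^(1/3); converting that exponent to a power of two gives a
symmetric-equivalent "bits of work" figure.  This back-of-envelope
version is my own -- it drops the o(1) term, so it lands a few bits
above the numbers NIST publishes, which is precisely why the
equivalence depends on whose analysis you rely on:

```python
import math

def gnfs_security_bits(modulus_bits):
    """Rough symmetric-equivalent work factor for factoring an RSA
    modulus of the given size with GNFS (o(1) term ignored)."""
    ln_n = modulus_bits * math.log(2)   # ln N for N around 2**modulus_bits
    c = (64 / 9) ** (1 / 3)
    work_exponent = c * ln_n ** (1 / 3) * math.log(ln_n) ** (2 / 3)
    return work_exponent / math.log(2)  # convert e**x into 2**bits

for bits in (1024, 2048, 3072):
    print(f"RSA-{bits}: roughly {gnfs_security_bits(bits):.0f} bits of work")
```

Compare NIST's published figures of 80, 112 and 128 bits for the same
three modulus sizes.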

ENISA and ECRYPT both explicitly recommend 128-bit equivalence (but not
112-bit) as a "Good, generic application-indep. recommendation" (ECRYPT)
and "secure for future use" (ENISA).  We know that modern machinery can
do this level of work without serious drawbacks, and we know that it's
possible to change the defaults for gpg.

Looking past RSA key size, the default certification mechanism uses
SHA-1, which offers only 80 bits of protection against collision
attacks (and known cryptanalytic results bring that figure down to the
mid-60s).  Fortunately, preimage attacks on SHA-1 still appear to be
out of reach, and a collision attack is harder to exploit against the
cryptosystem as a whole than a preimage attack would be.  But an
effective collision attack against OpenPGP certifications could still
be devastating, since users sign data provided almost entirely by a
potential attacker.  Switching the default digest for OpenPGP
certifications to a widely-supported stronger algorithm like SHA-256
would shore up this even weaker link.
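
In the meantime, individual users can already override the SHA-1
certification default themselves.  These are real GnuPG options (the
particular preference list shown is just one reasonable choice), set
in ~/.gnupg/gpg.conf:

```
# Use SHA-256 instead of SHA-1 when making certifications
# (signatures over keys, including self-signatures).
cert-digest-algo SHA256

# Prefer SHA-2 family digests for ordinary data signatures too.
personal-digest-preferences SHA256 SHA384 SHA512 SHA224
```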

Look, i'm *not* a bitsize fetishist.  I'm not advocating for 512-bit
equivalent security everywhere, or 30Kbit RSA keys, or anything like
that.  I'm reading the same standards you are, aware of the same news
you are, and trying to make plans for what we both know is a long
upgrade cycle so that we don't have a bunch of users whose security is
compromised in the mid-term future.  If you want to continue to cite the
weaker of the public standards, and ignore the advice in the stronger
ones, and argue that we don't need to upgrade now because no one has
demonstrated any attacks publicly, i guess that's your call.

I want gpg to take the lead on this, to make it clear that we continue
to take our users' information security seriously, and provide a healthy
safety margin given our long-term stability and commitment to maintenance.

We can't fix any user's operational security if it's terrible, but we
can make sure that anyone who uses a reasonably modern version of gpg
won't be burned by the crypto itself, even if they have a large
organization as an adversary.

