encryption algorithm

Daniel Kahn Gillmor dkg at fifthhorseman.net
Wed Dec 18 01:07:42 CET 2013


On 12/17/2013 05:04 PM, Robert J. Hansen wrote:
> I don't understand the reasoning by which you have concluded that I am
> advocating RSA-1024.  I'm not.  I think the default of RSA-2048 is a
> good one.  I'm only saying that for most users and most purposes,
> RSA-1024 is sufficient; to reach "virtually all users" and "virtually
> all purposes" we have to move to RSA-2048.

I never attributed RSA-1024 to you: i'm merely pointing out that "good
enough for virtually all users and virtually all purposes" is the wrong
criterion for choosing defaults if we want them to cover the most
vulnerable targets.

>> I'm not sure how you get this claim from these reports...
> 
> Simple: I'm human and I misremembered NIST's "secure until 2030" as
> "secure for 30 years".  :)

Thanks for the clarification.  I get this sort of thing screwed up
myself sometimes too, so i'm glad to be the one setting the record
straight for once :)

>> what it looks like to me.  For example, ECRYPT 2012's report sees
>> 2432-bit RSA as equivalent of 112 bit symmetric cipher, which it claims
>> is acceptable for ≈20 years.  Please see section 7.2:
> 
> NIST's guidance says 2048-bit RSA is equivalent to 112 bits of symmetric
> cipher, as does ENISA and RSADSI.  ECRYPT is certainly free to come up
> with their own metric; they're a competent outfit.  But let's
> acknowledge that ECRYPT's opinion is a minority one, rather than
> cherry-pick an outlier opinion and declare it to be authoritative.

ECRYPT has some pretty decent conceptual frameworks, engineering, and
mathematics to explain how they arrived at their strength equivalences.
Chapter 6 has details.  None of us can predict where the mathematical
advances will happen next, of course, but these are hardly arbitrary
"opinions" pulled from thin air.

It's probably also worth being more skeptical about guidance
coming from NIST specifically, since they are known to have taken advice
from the NSA, and the NSA is now known to have deliberately misused
NIST's position of trust.  This is a bad situation, but Werner's earlier
line today about "a direct line to Maryland" seems apposite.

If you're into figuring out which is the "outlier opinion" and assessing
these things by consensus of authorities, i invite you to look at table
3.1 on page 22 of the ENISA algorithm report. NIST and SECG equate 112
to 2048-bit RSA.  Lenstra and the IETF (RFC 3766) and ECRYPT all suggest
that 2048-bit RSA is weaker than 112-bit symmetric ciphers.  NIST and
SECG (in which NIST played a prominent role) hold down the low end of
the scale.
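The spread between those tables comes largely from how each group
handles the constants and the o(1) term in the General Number Field
Sieve running-time heuristic.  As a rough illustration, here is a
Python sketch of the bare asymptotic estimate -- this is not any of the
cited authorities' actual methodology, and it runs a few bits high of
most published tables, which is exactly the kind of judgment call that
separates them:

```python
import math

def naive_gnfs_strength(modulus_bits):
    """Rough symmetric-equivalent strength (in bits) of an RSA modulus,
    using the GNFS asymptotic L_n[1/3, (64/9)^(1/3)] with the o(1) term
    dropped.  Published tables (NIST, ECRYPT, Lenstra, RFC 3766) differ
    precisely in how they treat that term and related constants."""
    ln_n = modulus_bits * math.log(2)  # natural log of an n-bit modulus
    exponent = ((64 / 9) ** (1 / 3)
                * ln_n ** (1 / 3)
                * math.log(ln_n) ** (2 / 3))
    return exponent / math.log(2)      # convert work factor to bits

for bits in (1024, 2048, 2432, 3072):
    print("%4d-bit RSA ~ %3d-bit symmetric" %
          (bits, round(naive_gnfs_strength(bits))))
```

Whatever correction terms you prefer, the relative ordering is the
same: 2048-bit RSA sits well below the 128-bit symmetric level.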

Regardless of which group is "right", none of these authorities
believes that 2048-bit RSA is in the range of a 128-bit symmetric
cipher, just 112 bits at most.  Do we care about the idea that a
cryptosystem is only as secure as its weakest link?

I note that we don't generally support any symmetric ciphers with less
than a 128-bit key (3DES with keying option 2 would use 112 bits -- but
GnuPG uses keying option 1: 168 bits, derived from 192 bits).  If we
want to "even out" the crypto so that no one part is clearly weaker to
attack than the others, we ought to increase our RSA keylengths by
default.  RSA keys are currently the weakest link according to any of
the authorities anyone has cited in this discussion thus far.

Additionally, since breaking a long-term asymmetric key can effectively
decrypt all messages encrypted to that key, breaking the RSA key has
more value to an attacker than breaking any single symmetric cipher.  So
if there is going to be a strength difference, i'd expect it the other
way around.  A reasonable hybrid cryptosystem like OpenPGP should make
the asymmetric part *stronger* than the symmetric part, since it
presumably is a more valuable target anyway.

Finally, in the face of adversaries who possess incremental (not
catastrophic) mathematical or computational advances beyond what we know
about, increasing keylength beyond what we think is strictly needed is a
reasonable defense.  In

https://www.schneier.com/blog/archives/2013/09/the_nsas_crypto_1.html

Schneier writes: "it's pretty easy to stay a few steps ahead of the NSA
by using ever-longer keys. We're already trying to phase out 1024-bit
RSA keys in favor of 2048-bit keys. Perhaps we need to jump even further
ahead and consider 3072-bit keys."  The next day, Schneier announced his
new 4096-bit RSA key:

https://www.schneier.com/blog/archives/2013/09/my_new_gpgpgp_a.html

Do we want the asymmetric key length to be the weakest link for users of
GPG's default choices?
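In the meantime, anyone who wants to follow Schneier's lead without
waiting for the defaults to change can use GnuPG's unattended key
generation (documented in doc/DETAILS), which reads a parameter file.
A sketch -- the name, e-mail, and expiry here are placeholders, and the
exact set of directives varies between GnuPG versions:

```
%echo Generating a 3072-bit RSA keypair
Key-Type: RSA
Key-Length: 3072
Subkey-Type: RSA
Subkey-Length: 3072
Name-Real: Example User
Name-Email: user@example.org
Expire-Date: 2y
%commit
```

Fed to gpg with "gpg --batch --gen-key params.txt", this produces a
primary key and encryption subkey at the requested length.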

> Can it be broken?  Sure.  Easily?  No.  If you're worried about Google
> being able to mine your message for targeted ads, that's plenty enough. 
> If you're worried about your local sysadmin reading your personal mail,
> that's plenty enough.  If you're sending Vladimir Putin slashfic to a
> Russian publisher, maybe you should rethink using such a short key.

I think i've made it abundantly clear that i don't think it's trivial
for anyone to break a 2048-bit key.  My arguments here are not about
protecting my e-mail contents from a hobbyist attack.  I'm interested in
trying to build cryptographic defenses against powerful adversaries.

While we the systems programmers are choosing default key sizes for the
overwhelming majority of users, we could put the onus on those who need
*less* security (due to constrained devices or terribly-old
interoperability concerns) to explicitly weaken their own tools, rather
than requiring people who need stronger security to become cryptographic
experts and figure out what needs to be done.  If someone is choosing to
use OpenPGP to secure their messaging or data, whether for
confidentiality, integrity, or authenticity, the tools should offer
strong security choices against powerful attackers, by default.

The other two objections to stronger defaults that have been raised on
this list today are:

 0) the cryptosystem isn't the weakest link for most people, and
 1) we don't want huge keys because they're inefficient

(0) is clearly true, and it *should remain true*.  That's the whole
point of using cryptography: to close off the possibility that someone
who happens to have more compute power or better math can decrypt your
messages or impersonate you without bothering to do any of the other
stuff.  That is, the security of my data should depend on my
operational security, *not* on my cryptography.  The cryptography
should be standard, background stuff.

So (0) is not an effective argument for why we should have default
public key lengths that are widely acknowledged to be weaker than the
symmetric keys we routinely use.  The goal is to avoid having the math
be the weakest link, for any potential attacker.

As for (1), i'd find it easier to accept claims about efficiency and
performance concerns if gpg was plausibly efficient or high-performance
in other realms.  Two obvious areas come to mind where efficiency and
performance have not historically been a priority for gpg: key selection
from even medium-sized keyrings, and the programmatic use of gpg as a
backend to other tools.  GPG's subprocess model, coupled with its need
to do linear scans of a keyring each time any asymmetric mechanism is
triggered, leads to seriously low performance in many contexts.  The
performance difference between, say, a single 2048-bit RSA operation and
a 3072-bit RSA operation (NIST's 128-bit-equivalent) is small compared
to the cost of pulling the keyring off the disk, parsing all the keys,
selecting the right(?) key, evaluating the trust model, etc.  If it's
worth arguing that 3072-bit is too expensive to be the default, then we
probably need major work in giving gpg a plausible library (not
subprocess) interface or proper indexed key storage (which i hear is
coming for 2.1, and i'm quite happy about that, though i have yet to get
to play with it).  GnuPG and gcrypt's underlying crypto primitive code
has also never been the fastest code, even among the free software
variants available, and Werner has (rightly, i think) typically declined
to prioritize speed over other development goals (like information
security).  I'm excited to see the recent speed improvements in gcrypt,
though.  As gcrypt's speed improves, maybe we can take advantage of the
faster speeds to switch to stronger asymmetric keys and message digests
by default as well?
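The claim about relative costs is easy to sanity-check: schoolbook
modular arithmetic scales roughly cubically in the modulus size, so
moving from 2048 to 3072 bits costs on the order of (3/2)^3 ~ 3.4x per
private-key operation.  A crude sketch (plain modular exponentiation
with Python's built-in pow, not real RSA with CRT, so only the ratio is
meaningful):

```python
import random
import timeit

def modexp_time(bits, trials=3, number=5):
    """Minimum wall time for `number` modular exponentiations with a
    random full-size odd modulus -- a stand-in for the private-key part
    of an RSA operation, not a real RSA implementation."""
    random.seed(bits)  # deterministic inputs for repeatability
    n = random.getrandbits(bits) | (1 << (bits - 1)) | 1  # odd, bits-bit
    base = random.getrandbits(bits) % n
    exp = random.getrandbits(bits)
    return min(timeit.repeat(lambda: pow(base, exp, n),
                             repeat=trials, number=number))

t2048 = modexp_time(2048)
t3072 = modexp_time(3072)
print("3072-bit / 2048-bit cost ratio: %.1fx" % (t3072 / t2048))
```

A few-fold increase in a single operation that takes milliseconds is
hard to notice next to disk I/O, keyring parsing, and subprocess
startup.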

Regards,

	--dkg





