slow (& huge) password digest algorithm for GPG

Bernhard Kuemel bernhard at
Sun Dec 12 05:00:55 CET 2004

Hi gnupg-devel!

gpg needs a slow algorithm to digest the passphrase before it 
decrypts the secret key. If the digest takes about 1 s, dictionary 
and brute force attacks are slowed down substantially, and much 
shorter passphrases will suffice: going from, say, a microsecond 
of raw hashing per guess to a full second costs the attacker a 
factor of about 10^6, roughly 2^20, i.e. it buys about 20 bits of 
passphrase entropy. I forgot my passphrase and now I need a new 
key. And a new passphrase. And a way to memorize or store it. 
Hmm, frequent use of gpg would help, but I don't have gpg 
integrated in mozilla yet ...

So ... I have some ideas about the passphrase digest:

The passphrase (together with a salt) is hashed with e.g. SHA1. 
The result is hashed again, and we keep computing hashes of the 
previous hash until a user-selected time, say approximately 1 s, 
has passed. The last hash (and the salt) is stored in the secret 
key, and the secret key is encrypted with the second-to-last hash.
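
A minimal sketch of that setup step, using Python's hashlib as a 
stand-in for whatever GPG would use internally; the function name, 
the 8 byte salt and the 1 s default are illustrative assumptions, 
not GPG code:

import hashlib
import os
import time

def stretch_passphrase(passphrase: bytes, target_seconds: float = 1.0):
    """Hash salt+passphrase repeatedly for about target_seconds.

    Returns (salt, last_hash, key): salt and last_hash are stored
    with the secret key; key, the second-to-last hash, encrypts it.
    """
    salt = os.urandom(8)
    cur = hashlib.sha1(salt + passphrase).digest()
    prev = None
    deadline = time.monotonic() + target_seconds
    while True:                      # always do at least one step
        prev, cur = cur, hashlib.sha1(cur).digest()
        if time.monotonic() >= deadline:
            break
    return salt, cur, prev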

When using the secret key, the passphrase+salt is hashed again 
and again until a hash is produced that compares equal to the 
stored hash. The second-to-last hash then decrypts the secret key.
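
The companion step, again just a hedged sketch: stored_hash and 
salt would come from the secret key (as produced by the 
stretch_passphrase sketch above). The max_steps cutoff matters, 
because with a wrong passphrase the match never appears:

import hashlib

def recover_key(passphrase: bytes, salt: bytes, stored_hash: bytes,
                max_steps: int = 100_000_000):
    prev = None
    cur = hashlib.sha1(salt + passphrase).digest()
    for _ in range(max_steps):
        if cur == stored_hash:
            return prev       # second-to-last hash decrypts the key
        prev, cur = cur, hashlib.sha1(cur).digest()
    return None               # wrong passphrase, or cutoff too low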

A dictionary or brute force attacker would test a passphrase by 
computing hashes until he finds a matching hash. But with a wrong 
passphrase he will never find one, so at some point he has to 
stop and try another passphrase. If he stops too early, he will 
not reach the matching hash even when trying the correct 
passphrase. So he has to pick an iteration cutoff which he 
assumes is larger than the count the user ended up with, and that 
may well cost him twice the user's work per guess.

SHA1 (IMO) is a good choice here, as
1.) it is not easy to speed up, because many people want a fast 
SHA1, so it is already well optimized. If a significantly faster 
version comes up, it is easy to upgrade GPG (unless patents get 
in the way).
2.) it has not shown cryptographic weaknesses for some time. 
Should it become possible to find a plain text for a given SHA1 
hash, then the second-to-last hash (which decrypts the secret 
key) could be found via the last hash stored in the secret key. 
I'm not sure if this is of concern, as hashing typically loses 
information about the plain text (it must, if the plain text is 
larger than the hash), so a recovered preimage need not be the 
actual chain value. A solution should be to compute each new hash 
over the last 2 hashes, as sketched below.
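
Here is what that two-hash variant could look like, with the same 
hashlib stand-in and made-up names as before. Each step hashes 
the concatenation of the previous two values, so the stored hash 
digests 40 bytes instead of 20 and a single recovered preimage no 
longer pins down the chain:

import hashlib

def stretch_two_hashes(passphrase: bytes, salt: bytes, steps: int):
    older = hashlib.sha1(salt + passphrase).digest()
    newer = hashlib.sha1(older).digest()
    for _ in range(steps):
        # hash the last two hashes together
        older, newer = newer, hashlib.sha1(newer + older).digest()
    return newer, older   # (stored hash, key for the secret key)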

Another way to harden the scheme is to make it consume a lot of 
memory, so the guys who build custom chips to crack keys will 
need lots of silicon. I'm not sure which construction really 
requires an (ideally configurable) amount of memory, preferably 
for the whole runtime. We could scatter the computed hashes over 
a buffer of configurable size and at the end pick some (non-zero) 
samples, which are hashed and used to en/decrypt the secret key. 
But if we make the buffer too big, it will contain mostly empty 
space, and smart memory management may get by with less real 
memory. We could fill the buffer from a pseudo random number 
generator first, but smart memory management might regenerate 
that fill data on demand where it previously only declared the 
managed empty space to be zero filled. Maybe the internal state 
variables of the SHA1 algorithm can deliver more high quality 
data quickly enough to fill memory.
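
A loose sketch of the scatter idea, with the same hashlib 
stand-in; buffer size, step count and sample count are made-up 
parameters, and a real scheme would have to deal with the 
still-empty regions it may sample:

import hashlib
import struct

def memory_hard_digest(passphrase: bytes, salt: bytes,
                       buf_size: int = 1 << 24,  # 16 MiB, configurable
                       steps: int = 1_000_000,
                       samples: int = 64) -> bytes:
    buf = bytearray(buf_size)
    h = hashlib.sha1(salt + passphrase).digest()
    for _ in range(steps):
        h = hashlib.sha1(h).digest()
        # scatter each hash at a position derived from the hash
        # itself, so the write pattern can't be predicted without
        # doing the work
        pos = struct.unpack(">I", h[:4])[0] % (buf_size - 20)
        buf[pos:pos + 20] = h
    # fold evenly spaced samples of the buffer into the final key
    out = hashlib.sha1()
    stride = buf_size // samples
    for i in range(samples):
        out.update(bytes(buf[i * stride : i * stride + 20]))
    return out.digest()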

Hey, if we could find something useful to do with these cycles 
instead of this trash, like SETI@home or this cancer research 
project, that would be great. But I'm afraid that's incompatible 
with our goal.

I'm sorta tired now, so any ideas I missed may come in a future mail.

