Key generation: is it possible to fail fast?

Daniel Kahn Gillmor dkg at
Fri Feb 17 21:51:46 CET 2017

On Fri 2017-02-17 14:45:19 -0500, Werner Koch wrote:
> Libgcrypt and gpg are pretty conservative in these things which might be
> a good or bad thing.  Right now we require /dev/random for long term key
> generation.  Given that on Linux the kernel RNG should be good enough,
> we could imagine an option to never use /dev/random (so basically doing
> the above trick).

The other approach, which has been advocated by folks like djb and
Yevgeniy Dodis, is to take a healthy fixed-size chunk from the system's
"hard" entropy source -- say, 256 bits, or 512 if you want to be
conservative -- and use that to seed a cryptographically-strong
pseudorandom number generator (CSPRNG) like yarrow or fortuna.  Then
your key generation pulls from the CSPRNG instead of /dev/random.
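A minimal sketch of this seed-once pattern, assuming a toy SHA-256
counter-mode construction stands in for a real yarrow/fortuna-style
CSPRNG (the class and function names here are hypothetical, for
illustration only -- not libgcrypt's API):

```python
import hashlib


class HashDRBG:
    """Toy SHA-256 counter-mode CSPRNG, seeded once from a hard source.

    Illustrative only: a production design would use a vetted DRBG
    such as fortuna or a NIST SP 800-90A construction.
    """

    def __init__(self, seed: bytes):
        if len(seed) < 32:
            raise ValueError("need at least 256 bits of seed material")
        # Compress the seed into a fixed-size internal key.
        self._key = hashlib.sha256(seed).digest()
        self._counter = 0

    def read(self, n: int) -> bytes:
        """Generate n pseudorandom bytes by hashing key || counter."""
        out = bytearray()
        while len(out) < n:
            block = hashlib.sha256(
                self._key + self._counter.to_bytes(8, "big")
            ).digest()
            out.extend(block)
            self._counter += 1
        return bytes(out[:n])


def seed_from_hard_source(nbytes: int = 32) -> bytes:
    # /dev/random is the "hard" source the post refers to; this
    # single read may block, but only once, at seeding time.
    with open("/dev/random", "rb") as f:
        return f.read(nbytes)


# One blocking read at startup; every later draw is non-blocking:
#   drbg = HashDRBG(seed_from_hard_source())
#   key_material = drbg.read(64)
```

The point of the pattern is that only the initial seed read can
block; all subsequent key-generation entropy comes from the DRBG.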

There's still a risk of blocking in this framework, but it's reduced to
a single draw of 32 or 64 octets from /dev/random at seeding time,
instead of blocking again each time the entropy pool runs dry.

For a long-running process like gpg-agent, this could be done once at
agent initialization, and then all subsequent needs for entropy could be
pulled from the internal CSPRNG.

