Why does gpg use so much entropy from /dev/random?

Werner Koch wk at gnupg.org
Tue Apr 2 12:52:35 CEST 2013

On Sun, 31 Mar 2013 11:45, philip.g.potter at gmail.com said:

> Can anyone shed any light on this? Why does GPG use more entropy than
> /dev/random says it should?

Which /dev/random?  There are hundreds of variants of that device, each
with its own glitches.  Thus GnuPG has always used /dev/random only as
a source of entropy to seed its own RNG:

  This random number generator is loosely modelled after the one
  described in Peter Gutmann's paper "Software Generation of
  Practically Strong Random Numbers" (also described in chapter 6
  of his book "Cryptographic Security Architecture", New York, 2004,
  ISBN 0-387-95387-6).
  A pool of 600 bytes is used and mixed using the core RIPE-MD160 hash
  transform function.  Several extra features are used to make it
  robust against a wide variety of attacks and to protect against
  failures of subsystems.  The state of the generator may be saved to
  a file and initially seeded from a file.
  Depending on how Libgcrypt was built, the generator is able to select
  the best working entropy gathering module.  It makes use of the slow
  and fast collection methods and requires the pool to be initially
  seeded from the slow gatherer or a seed file.  An entropy estimation
  is used to mix in enough data from the gathering modules before
  returning the actual random output.  Process fork detection and
  protection is implemented.
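The pool design described above can be sketched as follows.  This is a
toy illustration, not Libgcrypt's code: the class and method names are
made up, SHA-256 stands in for the RIPE-MD160 transform, and the mixing
and fork-protection logic is simplified to show the shape of the idea
(seed entropy goes into a fixed pool; output is derived by hashing the
pool, never handed out raw):

```python
import hashlib
import os

POOL_SIZE = 600  # same pool size as in the manual text above


class PoolRNG:
    """Toy pool-based RNG sketch (hypothetical; not Libgcrypt's design).
    SHA-256 is used here as a stand-in for the RIPE-MD160 transform."""

    def __init__(self):
        self.pool = bytearray(POOL_SIZE)
        self.pid = os.getpid()  # remembered for fork detection

    def add_entropy(self, data: bytes) -> None:
        # Mix new entropy into the pool block by block: each 32-byte
        # block is replaced by a hash of (whole pool, new data, index).
        for i in range(0, POOL_SIZE, 32):
            h = hashlib.sha256()
            h.update(bytes(self.pool))
            h.update(data)
            h.update(i.to_bytes(4, "big"))
            n = min(32, POOL_SIZE - i)
            self.pool[i:i + n] = h.digest()[:n]

    def read(self, nbytes: int) -> bytes:
        # Fork protection: if the PID changed, a fork happened, so mix
        # in fresh entropy to make parent and child streams diverge.
        if os.getpid() != self.pid:
            self.pid = os.getpid()
            self.add_entropy(os.urandom(32))
        out = b""
        counter = 0
        while len(out) < nbytes:
            out += hashlib.sha256(
                bytes(self.pool) + counter.to_bytes(8, "big")
            ).digest()
            counter += 1
        # Stir the pool after every read so output cannot be replayed.
        self.add_entropy(b"stir")
        return out[:nbytes]


rng = PoolRNG()
rng.add_entropy(os.urandom(32))  # in GnuPG the seed comes from /dev/random
print(rng.read(16).hex())
```

Note that /dev/random is only touched at the seeding step; subsequent
output is stretched from the pool, which is why the kernel's entropy
accounting and GnuPG's own estimate need not agree.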

GPG uses ~/.gnupg/random_seed, but it needs to create it first.  For
generating keys it also makes sure to put in a lot of new entropy, just
to be safe.  Better safe than sorry (cf. the recent NetBSD problem).
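The seed-file handling can be sketched like this.  The helper names and
file format are hypothetical (GnuPG's actual random_seed layout is
different); the sketch only shows the two properties that matter: the
file is created with restrictive permissions, and a loaded seed is
always combined with fresh entropy rather than trusted on its own:

```python
import hashlib
import os
import tempfile


def save_seed(path: str, pool: bytes) -> None:
    """Persist RNG state: write a digest of the pool (never the raw
    pool) to a mode-0600 file, atomically via rename."""
    tmp = path + ".tmp"
    with open(tmp, "wb") as f:
        f.write(hashlib.sha256(pool).digest())
    os.chmod(tmp, 0o600)  # seed files must not be readable by others
    os.replace(tmp, path)


def load_seed(path: str) -> bytes:
    """Return the stored seed, or b'' if the file does not exist yet --
    in that case the caller falls back to the slow entropy gatherer."""
    try:
        with open(path, "rb") as f:
            return f.read()
    except FileNotFoundError:
        return b""


# Round trip in a temporary directory (stand-in for ~/.gnupg).
path = os.path.join(tempfile.mkdtemp(), "random_seed")
assert load_seed(path) == b""        # first run: no seed file yet
save_seed(path, b"pool state")
seed = load_seed(path) + os.urandom(32)  # mix with fresh entropy
print(len(seed))
```

Mixing the stored seed with fresh entropy is what "put in a lot of new
entropy just to be safe" buys: even if the seed file leaks or is stale,
it never determines the key material by itself.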



Thoughts are free.  Exceptions are regulated by a federal law.
