[gnupg-devel] Re: True RNG and GnuPG / libgcrypt

Matthias-Christian Ott ott at mirix.org
Thu Oct 3 20:27:15 CEST 2013

On 10/03/13 01:30, Nelson H. F. Beebe wrote:
> Matthias-Christian Ott <ott at mirix.org> responds on Wed, 02 Oct 2013
> 21:54:12 +0200 to an earlier question about possible direct-access
> to random-number-producing devices:
>>> ...
>>> If you want to use the output of the random number generator directly
>>> you should somehow know how much true randomness per bit of output it
>>> yields. Feeding the output of the random number generator into
>>> /dev/random has the advantage that it gets mixed with other sources of
>>> randomness and that there is a single implementation that can be
>>> peer-reviewed.
>>> ...
> That is definitely good advice.  
> However, it is important to remember a distinction between the
> /dev/urandom and /dev/random device interfaces available on some
> flavors of Unix:
> 	Both produce random streams of bits, but /dev/urandom
> 	generally continues to produce output for every read, whereas
> 	/dev/random may block until the interface believes that enough
> 	additional entropy has been collected to produce more random
> 	bits.

FreeBSD, for example, doesn't distinguish between them. I think the
intended semantics of /dev/random are the following:

Cryptography often needs “true randomness” for key generation and other
operations. /dev/random uses an entropy estimator so that it only
outputs “true randomness”, gathered from different sources, to a
requesting application. The application can thus be sure that if it
asks for n bits of “true randomness”, it will get n such bits.
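A minimal sketch of those semantics from the application side (the
function name and sizes are mine, not from any particular library): a
read from /dev/random may block or return fewer bytes than requested,
so a careful consumer loops until it has the n bytes it needs. The demo
call reads from /dev/urandom only so that it never blocks.

```python
def read_random(n, path="/dev/urandom"):
    """Read exactly n random bytes, looping over short reads.

    With path="/dev/random" each read() may block until the kernel's
    entropy estimator believes enough entropy has been collected.
    """
    buf = bytearray()
    with open(path, "rb", buffering=0) as f:
        while len(buf) < n:
            chunk = f.read(n - len(buf))  # may block / return a short read
            if not chunk:
                raise IOError("unexpected EOF from %s" % path)
            buf.extend(chunk)
    return bytes(buf)

key = read_random(16)  # /dev/urandom here so the demo never blocks
print(len(key))
```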

This leaves a few questions: What should software that needs n bits of
randomness for cryptography do when /dev/random is merely an alias for
/dev/urandom? Does entropy estimation actually work? And what if some
proprietary device produces less entropy than the estimator assumes?

Especially for an application developer who just needs n bits of “true
randomness”, these are hard questions. For example, do you really need
128 bits, or are 64 bits that are “stretched” to 128 bits enough? Are
realistic attacks possible if you get less than n bits of “true
randomness”?
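To make the “stretching” question concrete, here is a toy illustration
(the hash choice and sizes are mine): a 64-bit seed expanded to a
128-bit value with SHA-256 looks like 128 bits of randomness, but a
brute-force attacker still only has to search 2**64 seeds.

```python
import hashlib
import os

seed = os.urandom(8)  # 64 bits of (assumed) true entropy
# "Stretch" it to a 128-bit value; output length grows, entropy does not.
stretched = hashlib.sha256(seed).digest()[:16]
print(len(stretched) * 8)  # 128 bits of output, but only 64 bits of entropy
```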

For NeuG this also leaves the question: how do you estimate how much
“true randomness” is in NeuG's output? (On Linux you have to do this
when you feed data into /dev/random via the RNDADDENTROPY ioctl, or you
weaken the security of software that depends on /dev/random's entropy
estimator.)
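The feeding step can be sketched as follows, assuming Linux's
RNDADDENTROPY interface from <linux/random.h> (the ioctl number and the
rand_pool_info layout below are my transcription of that header, and
the 4-bits-per-byte credit is a made-up, deliberately conservative
figure). The ioctl requires CAP_SYS_ADMIN, so an unprivileged run just
reports the error.

```python
import fcntl
import os
import struct

RNDADDENTROPY = 0x40085203  # _IOW('R', 0x03, int[2]) from <linux/random.h>


def add_entropy(data, entropy_bits):
    """Mix `data` into /dev/random, crediting `entropy_bits` of entropy."""
    # struct rand_pool_info { int entropy_count; int buf_size; __u32 buf[]; }
    req = struct.pack("ii", entropy_bits, len(data)) + data
    fd = os.open("/dev/random", os.O_WRONLY)
    try:
        fcntl.ioctl(fd, RNDADDENTROPY, req)
    finally:
        os.close(fd)


sample = os.urandom(32)  # stand-in for bytes read from the device
try:
    # Credit only 4 bits per byte -- a placeholder estimate; a real
    # driver must justify its figure for the actual device.
    add_entropy(sample, 4 * len(sample))
    print("credited", 4 * len(sample), "bits")
except OSError as e:
    print("not permitted:", e)  # typically EPERM without CAP_SYS_ADMIN
```

Writing to /dev/random without the ioctl mixes the data in but credits
no entropy, which is the safe default when no estimate is available.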

> That means that reading /dev/random makes you susceptible to
> denial-of-service attacks that could be as simple as this:
> 	dd ibs=10240 count=1 < /dev/urandom > /dev/null

I think what you meant here is:

dd ibs=10240 count=1 < /dev/random > /dev/null

> I just tried that on one of my GNU/Linux servers and had the process
> hang for tens of seconds; changing it to read from /dev/urandom
> instead produced 10KB of data in 2 msec.  I then increased the count
> from 1 to 10240, and pulled from /dev/urandom 100MB of random data in
> 6 to 20 seconds on a half-dozen different servers.

> If you have a single-user single-process machine, you might be immune
> to such attacks, but change either, or both, qualifiers "single-" to
> "multi-", and you could be a victim, even if the other consumer(s) of
> random bits from /dev/random are just trying to get data for their own
> use, rather than intentionally trying to deny YOU access to the
> device.

That sounds like a tough problem; solving it probably involves resource
limits and process priorities.
