[gnutls-dev] redesign of Linux/Unix /dev/random
Travis H.
solinym at gmail.com
Wed Feb 15 15:23:01 CET 2006
Quoting:
"We've made some progress on the RNG issue on the GnuTLS discussion
list. I think the Linux /dev/urandom implementation is sub-optimal.
When used heavily, it depletes the /dev/random pool, causing
applications that access /dev/random to stall. I believe /dev/urandom
should be changed, to be a PRNG (re-)seeded with strong randomness
from the /dev/random pool, rather than simply taking entropy from the
/dev/random pool directly. The source code for the /dev/random
interface in Linux is rather unreadable IMHO. The code for it should
not be as complicated as it is today."
I concur, and brought this point up on the cryptography mailing list a
while ago. You can read my gripes at the end of this email if so
inclined.
In particular, I'd like to have fortuna (an evolution of yarrow
described only in "Practical Cryptography" to my knowledge) as the
/dev/urandom generator, stealing only enough entropy to reseed itself,
and maybe doing even that only under certain constraints (nobody
waiting on /dev/random). It merits further discussion and exploration.
BTW, FreeBSD has yarrow as its /dev/random! That's right, there are no
"real random bits" in FreeBSD, except potentially as seeds to yarrow
internally.
Perhaps it could return "real random" bits iff there are no processes
reading /dev/random. Sounds like a good thing to control via sysctl.
See also:
http://www.aculei.net/~shardy/bcard/talks/bhe04-paper.pdf
In this paper Seth describes extractors, which can transform "weakly
random" bits into "arbitrarily close to uniform" bits via a few
"uniformly random" bits.
I had an idea involving a userland process that hashes "possibly
unpredictable" bits into the pool without updating the entropy count.
For example, it could get them from one of the various "free random
number" web sites, and if they are not observed by the attacker, they
can help recover from a seed compromise. If they are observed, or
worse yet controlled, there's no discernible drawback, given standard
assumptions about cryptographic one-way functions between the source
and eventual use.
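As a sketch, that mixer can be tiny (untested); the point is that on
Linux a plain write(2) to /dev/random stirs the pool without crediting
the entropy count, which is exactly the right behaviour for bits we
don't fully trust:

  /* Untested sketch: mix whatever "possibly unpredictable" bytes
   * arrive on stdin (a web source, a sound card, ...) into the pool.
   * A plain write(2) to /dev/random stirs the pool but does NOT
   * update the entropy count. */
  #include <fcntl.h>
  #include <stdio.h>
  #include <unistd.h>

  int main(void)
  {
      unsigned char buf[512];
      ssize_t n;
      int fd = open("/dev/random", O_WRONLY);

      if (fd < 0) {
          perror("/dev/random");
          return 1;
      }
      while ((n = read(STDIN_FILENO, buf, sizeof(buf))) > 0)
          if (write(fd, buf, (size_t)n) != n)
              perror("write");
      close(fd);
      return 0;
  }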
One subtle attack that Schneier mentions in Practical Cryptography is
the state compromise
followed by tracking; that is, if the attacker gets the PRNG state
somehow and has access to its outputs, then if we only update it
incrementally, he can guess the added bits and compare them with his
model, thereby "tracking" the state changes. To deal with this, we
must update the seed with a large amount of unpredictable bits at
once, not incrementally like the current Linux /dev/urandom.
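What I mean by a large update at once, rather than incremental
stirring, is roughly this (untested sketch; reseed_generator() is a
placeholder and XOR stands in for a real mixing function):

  /* Untested sketch of "catastrophic" reseeding: stage incoming
   * entropy and only fold it into the generator once at least
   * RESEED_BITS have accumulated, so an attacker who knows the state
   * cannot brute-force the additions a few bits at a time. */
  #include <stddef.h>
  #include <string.h>

  #define RESEED_BITS 256

  static unsigned char staging[64];
  static size_t        staged_bits;

  /* placeholder for the real generator's reseed entry point */
  static void reseed_generator(const unsigned char *s, size_t n)
  {
      (void)s; (void)n;
  }

  void credit_entropy(const unsigned char *buf, size_t len, size_t est_bits)
  {
      for (size_t i = 0; i < len; i++)
          staging[i % sizeof(staging)] ^= buf[i];   /* stand-in mix */
      staged_bits += est_bits;

      if (staged_bits >= RESEED_BITS) {
          reseed_generator(staging, sizeof(staging));
          memset(staging, 0, sizeof(staging));
          staged_bits = 0;
      }
  }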
Also, there's a speed issue; /dev/urandom does one SHA-1 computation
over the whole pool for each 160 bits of output. On one of my old
boxes, that's under 1MB/s, which is too slow for some applications
(e.g. wiping a 100GB disk partition with pseudorandom data, or filling
a gigabit ethernet link with random bits between actual transmissions
to prevent traffic analysis).
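For the bulk jobs, the obvious workaround is to key a fast generator
once from /dev/random and let it produce the stream. An untested
sketch, using OpenSSL's AES-128-OFB purely as an example of "some fast
keyed stream" (link with -lcrypto):

  /* Untested sketch: seed once from /dev/random, then emit bulk
   * pseudorandom bytes from a fast keyed stream instead of hashing
   * the whole pool per 160 bits.  Pipe stdout to dd, a partition, or
   * a socket. */
  #include <fcntl.h>
  #include <openssl/evp.h>
  #include <string.h>
  #include <unistd.h>

  int main(void)
  {
      unsigned char key[16], iv[16], zero[4096], out[4096];
      int fd, outlen;
      EVP_CIPHER_CTX *ctx;

      fd = open("/dev/random", O_RDONLY);
      if (fd < 0 || read(fd, key, 16) != 16 || read(fd, iv, 16) != 16)
          return 1;
      close(fd);
      memset(zero, 0, sizeof(zero));

      ctx = EVP_CIPHER_CTX_new();
      EVP_EncryptInit_ex(ctx, EVP_aes_128_ofb(), NULL, key, iv);
      for (;;) {
          /* encrypting zeros in OFB mode just emits the keystream */
          EVP_EncryptUpdate(ctx, out, &outlen, zero, sizeof(zero));
          if (write(STDOUT_FILENO, out, (size_t)outlen) != (ssize_t)outlen)
              break;
      }
      EVP_CIPHER_CTX_free(ctx);
      return 0;
  }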
Another thing I wish to explore is securely combining random bits from
different sources and the relation between how you combine them,
efficiency, and how much you trust the sources. That is, can we
make use of a HWRNG that we don't trust entirely by combining it with
a slower source we trust more?
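The simplest combiner I know of is XOR, which already answers the
question in one direction (untested sketch; /dev/hwrng is just an
example node for the untrusted fast source):

  /* Untested sketch: XOR equal-sized blocks from two sources. */
  #include <fcntl.h>
  #include <stdio.h>
  #include <unistd.h>

  /* read exactly n bytes, looping over short reads */
  static int read_full(int fd, unsigned char *buf, size_t n)
  {
      size_t have = 0;
      while (have < n) {
          ssize_t r = read(fd, buf + have, n - have);
          if (r <= 0)
              return -1;
          have += (size_t)r;
      }
      return 0;
  }

  int main(void)
  {
      unsigned char a[32], b[32];
      int fast = open("/dev/hwrng", O_RDONLY);   /* untrusted, fast */
      int slow = open("/dev/random", O_RDONLY);  /* trusted, slow   */

      if (fast < 0 || slow < 0)
          return 1;
      if (read_full(fast, a, sizeof(a)) || read_full(slow, b, sizeof(b)))
          return 1;
      for (size_t i = 0; i < sizeof(a); i++)
          a[i] ^= b[i];                          /* combine */
      fwrite(a, 1, sizeof(a), stdout);
      return 0;
  }

If the sources are independent and at least one block is uniform, the
XOR is uniform, so the untrusted HWRNG can only help, never hurt; the
price is that throughput is capped by the slower, trusted source, which
is exactly the efficiency/trust trade-off I want to quantify.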
I'd also like some kind of statistical sanity checks on the various
entropy sources, so that I can be protected (or at least warned) if
they fail. I will obviously have to define some kinds of failure
modes and test for them, as unpredictability cannot be tested for.
Nevertheless, some people want to know if their HWRNG produces 1kB of
nulls all of a sudden.
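Even something as dumb as the following (untested, loosely modeled on
the FIPS 140-2 continuous test) would catch the kilobyte-of-nulls
failure, while of course proving nothing about unpredictability:

  /* Untested sketch: warn about a repeated block or a gross bit-count
   * bias in data read from stdin.  Catches stuck or dead sources,
   * nothing more. */
  #include <stdio.h>
  #include <string.h>

  #define BLOCK 16

  int main(void)
  {
      unsigned char prev[BLOCK], cur[BLOCK];
      unsigned long ones = 0, bits = 0;

      if (fread(prev, 1, BLOCK, stdin) != BLOCK)
          return 1;
      while (fread(cur, 1, BLOCK, stdin) == BLOCK) {
          if (memcmp(cur, prev, BLOCK) == 0)
              fprintf(stderr, "warning: repeated %d-byte block\n", BLOCK);
          for (int i = 0; i < BLOCK; i++)
              for (int b = 0; b < 8; b++)
                  ones += (cur[i] >> b) & 1;
          bits += BLOCK * 8;
          memcpy(prev, cur, BLOCK);
      }
      if (bits && (ones < bits * 45 / 100 || ones > bits * 55 / 100))
          fprintf(stderr, "warning: bit bias %lu ones in %lu bits\n",
                  ones, bits);
      return 0;
  }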
I would also like *writing* to /dev/random to block when the pool is
full, under the assumption that getting random bits from a HWRNG may
be expensive in terms of some resource, and so you don't necessarily
want to be pulling them all the time if nobody is using them.
One interesting suggestion Terry Ritter had was to provide the
available random bits in a buffer, and tell the application that you
could only fulfill part of its request immediately, giving the
application the opportunity to decide what to do. I'm not sure how
this would fit into the Unix I/O model, but I suspect something could
be done (poll/select perhaps?).
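For example (untested), poll(2) plus a non-blocking read already seems
to give most of it: the read hands back whatever bytes are available,
possibly fewer than requested, and the application can decide whether
to keep waiting:

  /* Untested sketch: poll /dev/random, accept short reads, and report
   * progress so the application can bail out early if it wants. */
  #include <fcntl.h>
  #include <poll.h>
  #include <stdio.h>
  #include <unistd.h>

  int main(void)
  {
      unsigned char buf[64];
      size_t have = 0;
      int fd = open("/dev/random", O_RDONLY | O_NONBLOCK);
      struct pollfd pfd = { .fd = fd, .events = POLLIN };

      if (fd < 0)
          return 1;
      while (have < sizeof(buf)) {
          if (poll(&pfd, 1, -1) < 0)
              return 1;
          ssize_t n = read(fd, buf + have, sizeof(buf) - have);
          if (n > 0) {
              have += (size_t)n;
              fprintf(stderr, "got %zd bytes (%zu/%zu so far)\n",
                      n, have, sizeof(buf));
          }
      }
      close(fd);
      return 0;
  }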
Due to the number of changes, I'd like to pull some or all of this out
of the kernel for experimentation, and then maybe integrate it back
into the kernel if it proves worthwhile. Userland also makes analysis
easier: a .png chart of a distribution of values can convey much more
information than summary statistics, and besides, you can't do floating
point in the kernel, which makes doing Shannon entropy or chi-square
calculations there, uh, challenging.
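For instance, a byte-frequency Shannon entropy estimate is a few lines
in userland (untested; it measures the sample's statistics, not true
unpredictability; link with -lm):

  /* Untested sketch: estimate Shannon entropy per byte of a sample on
   * stdin from its byte frequencies. */
  #include <math.h>
  #include <stdio.h>

  int main(void)
  {
      unsigned long count[256] = {0}, total = 0;
      double h = 0.0;
      int c;

      while ((c = getchar()) != EOF) {
          count[c]++;
          total++;
      }
      if (total == 0)
          return 1;
      for (int i = 0; i < 256; i++) {
          if (count[i]) {
              double p = (double)count[i] / (double)total;
              h -= p * log2(p);
          }
      }
      printf("%.4f bits/byte of Shannon entropy over %lu bytes\n",
             h, total);
      return 0;
  }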
===prior /dev/random email===
So far I haven't seen any userland tools for updating the entropy
count (NetBSD, OpenBSD, Linux).
This is unfortunate, because sometimes I generate entropy on one machine
and want to pipe it into the /dev/random pool (currently writing to it
stirs the pool but doesn't update the estimated entropy count).
However, I cannot update entropy counts without writing programs that
can issue ioctls (meaning C or C++). This is no good. Can't we make writes to
/dev/random take some kind of structured form that's easy to do from a
shell script so that we don't have to use ioctls? Failing that, could
we have a userland tool that can make the requisite ioctls?
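A minimal version of that tool might look like this (untested sketch;
it uses the existing Linux RNDADDENTROPY ioctl, needs root, and trusts
whatever bits-per-byte estimate the caller passes as the first
argument):

  /* Untested sketch: read bytes on stdin and credit them to the Linux
   * pool via RNDADDENTROPY, which both mixes them in and bumps the
   * entropy count.  Requires root.  The bits-of-entropy-per-byte
   * estimate is supplied by the caller. */
  #include <fcntl.h>
  #include <linux/random.h>
  #include <stdio.h>
  #include <stdlib.h>
  #include <string.h>
  #include <sys/ioctl.h>
  #include <unistd.h>

  #define CHUNK 512

  int main(int argc, char **argv)
  {
      int bits_per_byte = (argc > 1) ? atoi(argv[1]) : 4;  /* conservative default */
      unsigned char buf[CHUNK];
      struct rand_pool_info *req;
      ssize_t n;
      int fd;

      req = malloc(sizeof(*req) + CHUNK);
      fd = open("/dev/random", O_WRONLY);
      if (!req || fd < 0) {
          perror("setup");
          return 1;
      }
      while ((n = read(STDIN_FILENO, buf, sizeof(buf))) > 0) {
          memcpy(req->buf, buf, (size_t)n);
          req->buf_size = (int)n;                      /* bytes of data    */
          req->entropy_count = bits_per_byte * (int)n; /* credited in bits */
          if (ioctl(fd, RNDADDENTROPY, req) < 0)
              perror("RNDADDENTROPY");
      }
      free(req);
      close(fd);
      return 0;
  }

With that, any shell pipeline can both stir the pool and credit the
entropy count, which is all I'm really asking for.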
The entropy harvesting and estimation code is bound too tightly to the
entropy pool (Linux).
It is in kernelspace so cannot do floating point, like measuring
chi-square or Shannon entropy to estimate the amount of randomness.
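Chi-square is just as easy outside the kernel (untested sketch over
byte frequencies; for uniform data the statistic should hover around
its 255 degrees of freedom):

  /* Untested sketch: chi-square statistic over byte frequencies of a
   * sample read from stdin. */
  #include <stdio.h>

  int main(void)
  {
      unsigned long count[256] = {0}, total = 0;
      double chi = 0.0, expected;
      int c;

      while ((c = getchar()) != EOF) {
          count[c]++;
          total++;
      }
      if (total == 0)
          return 1;
      expected = (double)total / 256.0;
      for (int i = 0; i < 256; i++) {
          double d = (double)count[i] - expected;
          chi += d * d / expected;
      }
      printf("chi-square %.2f over %lu bytes (255 degrees of freedom)\n",
             chi, total);
      return 0;
  }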
Reading from /dev/urandom empties the entropy pool immediately; this is
unfriendly to people who need real random numbers.
In Linux, writing to /dev/random and /dev/urandom is absolutely
identical; the data gets mixed in, but the entropy count isn't updated.
The random_write_wakeup_thresh is almost worthless as any woken processes
will probably not be able to update the entropy count, unless they are
specially coded for this purpose.
The write interface isn't exploited thoroughly enough. If writes to
/dev/random were to block when the entropy pool is full, and writes to
/dev/urandom never blocked, then it would greatly simplify the design of
userland programs that harvest entropy from sources of non-zero cost:
they merely write(2) to /dev/random, and if the kernel doesn't need any
more entropy, the write simply blocks until more is needed. This way the
program doesn't have to poll the entropy pool using ioctl(2).
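Under those proposed semantics (and to be clear, they are proposed, not
what any current kernel does), the whole harvester collapses to an
untested sketch like this:

  /* Untested sketch of a harvester relying on the PROPOSED
   * blocking-write semantics: today the write returns immediately
   * whether or not the pool wants more.  /dev/hwrng stands in for
   * whatever costly source is being harvested. */
  #include <fcntl.h>
  #include <unistd.h>

  int main(void)
  {
      unsigned char buf[64];
      int src = open("/dev/hwrng", O_RDONLY);
      int dst = open("/dev/random", O_WRONLY);

      if (src < 0 || dst < 0)
          return 1;
      for (;;) {
          if (read(src, buf, sizeof(buf)) != (ssize_t)sizeof(buf))
              break;
          /* Under the proposal this write blocks while the pool is
           * full, so the costly source is only drawn from when the
           * kernel actually wants entropy. */
          if (write(dst, buf, sizeof(buf)) != (ssize_t)sizeof(buf))
              break;
      }
      return 0;
  }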
If we change the semantics, they should be queryable in some way,
because currently the source code or experimentation is the only way of
discerning them. Getting good randomness shouldn't be platform-specific,
and shouldn't fail in silent or unsafe ways at runtime.
--
"Cryptography is nothing more than a mathematical framework for discussing
various paranoid delusions." -- Don Alvarez
http://www.lightconsulting.com/~travis/ -><-
GPG fingerprint: 50A1 15C5 A9DE 23B9 ED98 C93E 38E9 204A 94C2 641B