Speeding up key generation
Rick van Rein
rick at openfortress.nl
Sat May 7 12:20:28 CEST 2016
Hi Dashamir,
Thanks for pointing to HAVEG(E); it was a new approach to me.
> One of the suggestions is to use haveged[1].
I don't think you meant to suggest this HAVEGE implementation [1] --
which is a PRNG based on the entropy of HAVEG -- but wanted to point out
a HAVEG implementation instead, right?
[1] http://www.issihosts.com/haveged/
> I haven't seen any criticism
> about it yet. Is it really safe?
Then let me be the first ;-)
There is no really solid basis for the entropy estimate; it is only
suggestively demonstrated (as the authors themselves say in their ACM
paper [2]) through statistical tests, which are known not to provide
certainty about the randomness of a sequence.
[2] http://www.irisa.fr/caps/projects/hipsor/publi.php
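To make that concern concrete, here is a small sketch (Python; my own
toy example, not from the paper) of the classic monobit frequency test.
A completely deterministic, zero-entropy sequence passes it with the
best possible score, which is why passing such tests says nothing about
unpredictability:

  import math

  def monobit_pvalue(bits):
      # NIST SP 800-22 style monobit frequency test. Returns a p-value;
      # >= 0.01 is conventionally a "pass". Passing says nothing about
      # how unpredictable the sequence actually is.
      n = len(bits)
      s = sum(1 if b else -1 for b in bits)
      return math.erfc(abs(s) / math.sqrt(2.0 * n))

  # A fully deterministic alternating sequence (0,1,0,1,...) contains
  # no entropy at all, yet gets the best possible p-value.
  alternating = [i & 1 for i in range(100000)]
  print(monobit_pvalue(alternating))  # prints 1.0 -- a perfect "pass"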
The estimated entropy per interrupt is rather high (8-64 kB), but this
would depend heavily on the CPU architecture. This means that there is
no reliable way to make a portable implementation of HAVEG.
I also have my doubts whether they actually collect this much
information -- they are only looking at the CPU's cycle counter as a
summary of the complex internal CPU state. Different states will
inevitably merge into similar measurements, and there is probably a
probabilistic distribution over the measurements -- which the
statistical tests might even have revealed if they had been tailored to
the chunk size of these measurements, and if the measurements had not
been shuffled in a PRNG-ish style. The paper could (and therefore
should) have taken away this concern, IMHO.
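For concreteness, a toy sketch of the harvesting principle (Python; the
structure and names are my own, not taken from the HAVEG code): run
branch- and cache-sensitive code between two readings of a
high-resolution counter, and keep the noisy low bits of the deltas. How
much true entropy those bits carry is exactly the open question above:

  import time, hashlib

  def harvest_timing_bits(samples=4096):
      # Toy HAVEG-style harvester: time a branchy, memory-touching loop
      # and keep the low byte of each timing delta.
      table = list(range(1024))
      acc = 0
      raw = bytearray()
      for i in range(samples):
          t0 = time.perf_counter_ns()
          # Data-dependent branches and table walks perturb the branch
          # predictor and cache state between counter readings.
          j = (acc + i * 2654435761) % 1024
          if table[j] & 1:
              acc = (acc + table[j]) & 0xFFFFFFFF
          else:
              acc ^= table[j] << 1
          table[j] = acc & 0x3FF
          raw.append((time.perf_counter_ns() - t0) & 0xFF)
      # Whitening hides structure in the deltas but cannot add entropy.
      return hashlib.sha256(bytes(raw)).digest()

Note that the final hash makes the output look random to statistical
tests no matter how predictable the raw deltas were -- which is
precisely why such tests prove so little here.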
Finally, I'm afraid that this algorithm might have solid fixpoints,
namely when an interrupt flushes all the state of the CPU.
I do like the idea of harvesting CPU-internal data, but I am not free
of worries about this implementation style. There should be a more
accurate model of its entropy before I'd trust it. But once it has
that, the principle might be expanded to include many more probable
state switches, such as those caused by multiple processes or threads.
> If yes, why is it not used by default in
> gpg?
As far as I'm concerned: (1) because key generation is a rare event and
good entropy is worth waiting for on these special occasions, and
(2) because this may not work reliably on a system that boots in a
highly predictable manner, especially one with a relatively simple CPU
(perhaps MIPS or ARM).
> Because it indeed improves the time of key generation greatly.
>
IMHO, this time is much less important than properly generating the keys
that you are (often) going to rely on for years.
If you want a speedup, you might look into different key
generation/signing algorithms, such as ECDSA. They need less randomness
and should (in theory) be faster at generating keys.
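As a rough illustration (my own sketch, using the Python
'cryptography' package rather than GnuPG itself): EC key generation is
essentially picking one random scalar on a fixed curve, while RSA has
to search for large random primes:

  import time
  from cryptography.hazmat.primitives.asymmetric import ec, rsa

  def timed(label, fn):
      # Crude wall-clock timing; enough to show the rough gap.
      t0 = time.perf_counter()
      fn()
      print("%-12s %.3f s" % (label, time.perf_counter() - t0))

  # ECDSA: one random scalar, no searching.
  timed("ECDSA P-256", lambda: ec.generate_private_key(ec.SECP256R1()))

  # RSA: test random candidates for primality -- slower, high variance.
  timed("RSA-3072", lambda: rsa.generate_private_key(
      public_exponent=65537, key_size=3072))

On typical hardware the EC key is ready in a few milliseconds, while
the RSA key takes noticeably longer, with a highly variable duration
due to the prime search.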
I'm curious if anyone has different opinions on this!
Cheers,
-Rick