Prime Generation

Pete Chown Pete.Chown at
Mon Nov 15 18:19:51 CET 1999

I was having a look at gnupg's prime generation code over the
weekend.  As I understand it, it works something like this:

1.  Choose a random number.
2.  Make the number odd and ensure that it has the correct number of
    bits.
3.  Check for divisibility by a list of known small primes.
4.  Iterate a probabilistic primality test enough times to have
    confidence that the number is really prime.
5.  If any of the tests fail, go back to step 1.
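The loop above could be sketched like this (a rough illustration of my reading of the code, not GnuPG's actual implementation; the function names and the round count are mine, and the probabilistic test here is a textbook Miller-Rabin):

```python
import random

# Small primes for the trial-division step (step 3).
SMALL_PRIMES = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47]

def is_probable_prime(n, rounds=25):
    """Steps 3-4: trial division, then a Miller-Rabin probabilistic test."""
    if n < 2:
        return False
    for p in SMALL_PRIMES:
        if n % p == 0:
            return n == p
    # Write n - 1 as d * 2^r with d odd.
    d, r = n - 1, 0
    while d % 2 == 0:
        d //= 2
        r += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(r - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False  # definitely composite
    return True  # probably prime

def gen_prime_restart(bits):
    """Steps 1-5 as described: draw fresh random bits after every failure."""
    while True:
        n = random.getrandbits(bits)   # step 1: a new random number each time
        n |= (1 << (bits - 1)) | 1     # step 2: force the top bit, make it odd
        if is_probable_prime(n):       # steps 3-4
            return n
        # step 5: any failure falls through and loops back to step 1
```

The point of my question is that the `random.getrandbits` call in the loop body runs once per candidate, so every composite candidate costs a full draw of random data.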

My question is, doesn't this use up more random data than is
necessary?  Suppose instead of step 5, you just tested the next odd
number for primality rather than starting again.
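That variant would look something like this (again a sketch with names of my choosing; `miller_rabin` stands in for whatever probabilistic test is actually used):

```python
import random

def miller_rabin(n, rounds=25):
    """Standard probabilistic primality test (helper for the sketch)."""
    small = (2, 3, 5, 7, 11, 13)
    if n < 2:
        return False
    if n in small:
        return True
    if any(n % p == 0 for p in small):
        return False
    d, r = n - 1, 0
    while d % 2 == 0:
        d, r = d // 2, r + 1
    for _ in range(rounds):
        x = pow(random.randrange(2, n - 1), d, n)
        if x in (1, n - 1):
            continue
        for _ in range(r - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False
    return True

def gen_prime_incremental(bits):
    """One random draw; on failure, try the next odd number (+2)
    instead of drawing fresh random bits."""
    n = random.getrandbits(bits) | (1 << (bits - 1)) | 1
    while not miller_rabin(n):
        n += 2
    return n
```

Only the initial candidate consumes random data; the search itself is deterministic from there.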

Of course this would bias the algorithm towards primes that are
preceded by a long run of composite numbers.  I don't know how
significant the bias would be, since we know that prime numbers are
fairly evenly distributed.

One approach to avoiding the bias would be to pick two random numbers
rather than one -- a starting point and a "step".  Then each time a
number fails the primality test, you add on the step rather than two.
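A sketch of that idea (my names again; note two details the paragraph leaves implicit: the step must be even so the candidates stay odd, and the step must be coprime to the starting point, or every candidate would share the common factor and the search would never terminate -- with coprimality, Dirichlet's theorem guarantees the progression contains primes):

```python
import math
import random

def miller_rabin(n, rounds=25):
    """Standard probabilistic primality test (helper for the sketch)."""
    small = (2, 3, 5, 7, 11, 13)
    if n < 2:
        return False
    if n in small:
        return True
    if any(n % p == 0 for p in small):
        return False
    d, r = n - 1, 0
    while d % 2 == 0:
        d, r = d // 2, r + 1
    for _ in range(rounds):
        x = pow(random.randrange(2, n - 1), d, n)
        if x in (1, n - 1):
            continue
        for _ in range(r - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False
    return True

def gen_prime_random_step(bits):
    """Two random draws: a starting point and a step.  On each failure,
    add the step rather than two."""
    start = random.getrandbits(bits) | (1 << (bits - 1)) | 1
    step = (random.getrandbits(bits // 2) | 1) << 1  # even and nonzero
    while math.gcd(start, step) != 1:                # keep the progression
        step += 2                                    # free of shared factors
    n = start
    while not miller_rabin(n):
        n += step
    return n
```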

Sorry if this has already been discussed on the list or if I have
misunderstood the code.  It is my first foray into the gnupg source,
so I may have got something wrong.

      phone +44 (0) 20 8542 7856, fax +44 (0) 20 8543 0176, post:
  Skygate Technology Ltd, 8 Lombard Road, Wimbledon, London, SW19 3TZ
