Schneier on Bernstein's work

Douglas F. Calvert douglist@anize.org
Sat Mar 16 09:42:01 2002



Hello.
I recently posted a question about Bernstein's factoring work.  Schneier
discusses the paper in the latest Crypto-Gram.  It is a little off topic
for the list, but still interesting...

Go to counterpane.com for the latest Crypto-Gram (I am too lazy to link).

--BEGIN CRYPTO-GRAM
Bernstein's Factoring Breakthrough?



Last fall, mathematician Dan Bernstein circulated a paper discussing
improvements in integer factorization, using specialized parallel
hardware.  The paper didn't get much attention until recently, when
discussions sprang up on SlashDot and other Internet forums about the
results.  A naive read of the paper implies that factoring is now
significantly easier using the machine described in the paper, and that
keys as long as 2048 bits can now be broken.

This is not the case.  The improvements described in Bernstein's paper
are unlikely to produce the claimed speed improvements for practically
useful numbers.

Currently the fastest factoring algorithm is the Number Field Sieve
(NFS), which supplanted the Quadratic Sieve several years ago.
Basically, the NFS has two phases.  The first is a search for equations
that satisfy certain mathematical properties.  This step is highly
parallelizable, and today is routinely done with thousands of
computers.  The second step is a large matrix calculation, which
eventually produces the prime factors of the target number.
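
As a rough illustration of this two-phase "collect relations, then do linear
algebra" structure, here is a toy Dixon-style factoring sketch in Python.
Dixon's method is far simpler (and far slower) than the NFS, but the shape is
the same: phase one gathers smooth relations, phase two combines them into a
congruence of squares.

# Toy Dixon-style factoring: illustrates the relation-collection /
# matrix-step structure shared by sieve methods.  Not efficient; demo only.
from math import gcd, isqrt
from itertools import combinations

def factor_base(bound):
    # Primes up to the smoothness bound, by a simple sieve.
    sieve = [True] * (bound + 1)
    primes = []
    for p in range(2, bound + 1):
        if sieve[p]:
            primes.append(p)
            for q in range(p * p, bound + 1, p):
                sieve[q] = False
    return primes

def smooth_exponents(value, primes):
    # Exponent vector of value over the factor base, or None if not smooth.
    exps = [0] * len(primes)
    for i, p in enumerate(primes):
        while value % p == 0:
            value //= p
            exps[i] += 1
    return exps if value == 1 else None

def dixon(n, bound=30):
    primes = factor_base(bound)
    # Phase one: collect x such that x^2 mod n is smooth over the factor base.
    relations = []
    x = isqrt(n) + 1
    while len(relations) < len(primes) + 3:
        exps = smooth_exponents(x * x % n, primes)
        if exps is not None:
            relations.append((x, exps))
        x += 1
    # Phase two (the "matrix step"): find a subset whose exponent vectors sum
    # to all-even entries, giving x^2 = y^2 (mod n).  Brute force here; real
    # implementations do sparse linear algebra over GF(2), which is the step
    # Bernstein's sorting-based machine targets.
    for r in range(1, len(relations) + 1):
        for subset in combinations(relations, r):
            total = [sum(col) for col in zip(*(e for _, e in subset))]
            if all(t % 2 == 0 for t in total):
                xprod, y = 1, 1
                for xi, _ in subset:
                    xprod = xprod * xi % n
                for p, t in zip(primes, total):
                    y = y * pow(p, t // 2, n) % n
                g = gcd(xprod - y, n)
                if 1 < g < n:
                    return g
    return None   # unlucky combination of relations; retry with a larger bound

print(dixon(84923))   # 84923 = 163 * 521; prints a nontrivial factor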

Bernstein attempts to improve the efficiency of both steps.  There are
some good observations here that will result in some minor speedups in
factoring, but the enormous improvements claimed are more a result of
redefining efficiency than anything else.  Bernstein positions his
results as an effect of massive parallelization.  To me, this is
misleading.  You can always simulate a parallel machine on a single
computer by using a time-sliced architecture.  In his model, the "cost"
of factoring is a product of time and space, and he claims that he can
reduce the cost of parallel sorting from a factor of m^4 to m^3.
Bernstein justifies his assumptions by claiming that a single processor
needs m^2 memory, whereas an array of m^2 processors only needs constant
memory.  This may be true, but neglects to factor in the cost associated
with connecting those processors: tying a million simple processors
together is much more expensive than using a single processor of the
same design with a million bits of memory.  Again, it is not clear that
this technique will buy you anything for practical-sized numbers.
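
A back-of-the-envelope sketch of that time-times-space accounting (an
illustration of the argument as stated above, not a calculation from the
paper):

# Cost = time * space, for sorting m^2 items (log factors ignored).
def serial_cost(m):
    # One processor holding all m^2 items: time ~ m^2, space ~ m^2 -> ~ m^4.
    return (m ** 2) * (m ** 2)

def mesh_cost(m):
    # An m x m mesh of tiny processors, constant memory each, can sort its
    # m^2 cells in ~m steps (e.g. Schnorr-Shamir mesh sorting):
    # time ~ m, space ~ m^2 -> ~ m^3.
    return m * (m ** 2)

for m in (10, 100, 1000):
    print(m, serial_cost(m), mesh_cost(m), serial_cost(m) // mesh_cost(m))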

To be sure, Bernstein does not say anything different.  (In fact, I
commend him for not being part of the hyperbole.)  His result is
asymptotic.  This means that it is eventually true, as the size of the
number factored approaches infinity.  This says nothing about how much
more efficient Bernstein's algorithm is, or even whether or not it is
more efficient than current techniques.  Bernstein himself says this in
one of his posts:  "Protecting against [these techniques] means
switching from n-bit keys to f(n)-bit keys.  I'd like to emphasize that,
at this point, very little is known about the function f.  It's clear
that f(n) is approximately (3.009...)n for *very* large sizes n, but I
don't know whether f(n) is larger than n for *useful* sizes n."  What he
means is: at some bit length these techniques will be useful, but we
have no idea what that bit length is.
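
To put the quoted figure on a scale (purely illustrative; as the quote
stresses, nobody knows whether the asymptotic ratio applies at useful sizes):

# IF f(n) ~ 3.009*n held at practical sizes -- which is exactly what is not
# known -- key lengths would need to grow roughly threefold:
for n in (1024, 2048, 4096):
    print(n, "->", round(3.009 * n), "bits")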

I don't believe in the n-to-3n key-length improvement.  Any
practical implementation of these techniques depends heavily on
complicated technological assumptions and tradeoffs.  Parallel computing
is much easier to say than it is to do, and there are always hidden
complexities.  I think when all the math is said and done, these other
complexities will even out his enhancements.

This is not to belittle Bernstein's work.  This is good research.  I
like his novel way of using sorting techniques to carry out the linear
algebra part.  This might be useful in a variety of other contexts, and
is likely to open up new research directions in the design of more
efficient sorting networks and sparse matrix algorithms.  There are
other speed improvements to the NFS in this paper, and they will most
definitely be researched further.
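
For readers who have not seen a sorting network: the defining property is
that the compare-exchange pattern is fixed in advance, so every comparison in
a pass is independent and can run in parallel.  A minimal example is odd-even
transposition sort (only the simplest such network, not the construction in
Bernstein's paper):

# Odd-even transposition sort: n fixed passes sort n items; all the
# compare-exchanges within a pass can be done simultaneously, which is why
# network sorting maps well onto meshes of simple processors.
def odd_even_transposition_sort(items):
    a = list(items)
    n = len(a)
    for step in range(n):
        start = step % 2            # alternate between even and odd pairs
        for i in range(start, n - 1, 2):
            if a[i] > a[i + 1]:     # compare-exchange, order fixed in advance
                a[i], a[i + 1] = a[i + 1], a[i]
    return a

print(odd_even_transposition_sort([5, 2, 9, 1, 7, 3]))   # [1, 2, 3, 5, 7, 9]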

Over the past several decades factoring has steadily gotten easier, and
it's gotten easier faster than anyone would have believed.  Speed
improvements have come from four sources.  One, processors have gotten
faster.  Two, processors have gotten cheaper and easier to network in
parallel computations.  Three, there have been steady flows of minor
improvements to the factoring algorithms.  And four, there have been
fundamental advances in the mathematics of factoring.

I believe that Bernstein's work falls under the third category, and
takes advantage of ancillary improvements in the second category.  And
if history is any guide, it will be years before anyone knows exactly
whether, and how, this work will affect the actual factoring of
practical numbers.


Bernstein Paper:
<http://cr.yp.to/papers/nfscircuit.ps>

-- 
+---------------+-----------------------------------+
|Douglas Calvert|       http://anize.org/dfc        |
| dfc@anize.org |       http://imissjerry.org       |
+---------------+-----------------------------------+
|   If you use envelopes, why not use encryption?   |
|         http://anize.org/dfc/dfc-keys.asc         |
| 0817 30D4 82B6 BB8D 5E66 06F6 B796 073D C954 1FB2 |
+-------------| http://www.gnupg.org |--------------+

