
The most serious Lenstra-Shamir-Tomlinson-Tromer error

The June 2002 Lenstra-Shamir-Tomlinson-Tromer paper claims to ``show that 1024-bit RSA keys are as secure as many believed them to be.'' Let's look at the logic behind this claim:
  1. Why are 1024-bit RSA keys secure? Because, they claim, it's difficult to carry out a 1024-bit NFS calculation.

    Presumably ``secure'' is supposed to mean ``secure against known factorization methods.'' All the other known methods appear to be much slower than NFS.

  2. Why is it difficult to carry out a 1024-bit NFS calculation? Because, they claim, 1024-bit NFS sieving is difficult.

    There are other stages of NFS, notably the NFS linear-algebra stage, but the authors estimate that a particular NFS linear-algebra circuit (which they falsely claim to be new) will finish ``within a day'' and cost ``a few thousand dollars'' for 1024-bit inputs. The authors explicitly claim that sieving is the bottleneck. (For what the linear-algebra stage actually computes, see the first sketch after this list.)

  3. Why is it difficult to carry out 1024-bit NFS sieving? Because, they claim, conventional NFS sieving is difficult.

    The Lenstra-Shamir-Tomlinson-Tromer paper spends one paragraph, on page 14, estimating (on the basis of o(1) abuse) that conventional NFS sieving ``would require about a year on about 30 million 1GHz PCs with large memories,'' and then leaps from that estimate to the conclusion that 1024-bit keys provide a ``reasonable level of security.'' (The second sketch after this list shows how sensitive such estimates are to the o(1) term.)
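
For context on what the linear-algebra stage actually computes: it repeatedly multiplies one enormous sparse matrix over GF(2) by vectors, block-Lanczos or block-Wiedemann style, until dependencies appear; the mesh circuit in question is a hardware layout for that multiplication. Here is a minimal Python sketch of the core operation, with toy sizes (real NFS matrices are vastly larger, and extremely sparse):

    def matvec_gf2(rows, v):
        # Multiply a sparse GF(2) matrix by a bit vector.
        # rows[i] lists the column indices of the 1s in row i;
        # v is a Python int used as a bit vector (bit j = coordinate j).
        out = 0
        for i, cols in enumerate(rows):
            bit = 0
            for j in cols:
                bit ^= (v >> j) & 1      # XOR is addition in GF(2)
            out |= bit << i
        return out

    # Tiny example: each output bit is one row-times-vector product mod 2.
    rows = [[0, 2], [1], [0, 1, 2]]      # 3x3 matrix, positions of the 1s
    print(bin(matvec_gf2(rows, 0b011)))  # prints 0b11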
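
As for ``o(1) abuse'': the standard NFS heuristic puts the cost at exp(((64/9)^(1/3) + o(1)) (ln n)^(1/3) (ln ln n)^(2/3)), where the o(1) term converges to 0 only as n grows without bound. Calibrating that term from experiments at one size and silently reusing it at another is exactly the abuse. Here is a quick Python sketch of how sensitive the prediction is; the constant delta below is an illustrative stand-in for the unknown o(1), not a measured quantity:

    from math import exp, log

    def nfs_cost(bits, delta=0.0):
        # Heuristic NFS cost with the o(1) term frozen at the constant
        # delta -- precisely the step that is unjustified.
        ln_n = bits * log(2)
        return exp(((64 / 9) ** (1 / 3) + delta)
                   * ln_n ** (1 / 3) * log(ln_n) ** (2 / 3))

    # Wobbling the "o(1)" by +-0.1 swings the 1024-bit prediction by a
    # factor of roughly 500 end to end; no 512-bit experiment pins the
    # term down.
    for delta in (-0.1, 0.0, 0.1):
        print(f"delta={delta:+.1f}  cost ~ {nfs_cost(1024, delta):.2e}")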

But what about circuits for NFS sieving?

The authors claim to understand (and falsely claim to have understood for years) that circuit NFS sieving is much more cost-effective than conventional NFS sieving for large input sizes. However, when faced with 1024-bit keys, they don't even consider the possibility that 1024 bits is already large, and that conventional sieving is therefore the wrong approach.
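
For reference, ``conventional sieving'' is the RAM-bound computation sketched below: walk a factor base across a huge array, adding log p at every position hit by a prime p, and flag positions whose totals pass a threshold as candidate smooth values. The cost is dominated by scattered memory accesses, which is exactly the regime where mesh circuits can beat the standard RAM model. Everything in this toy version (polynomial, factor base, threshold) is a placeholder:

    from math import log

    def sieve_interval(start, length, factor_base, threshold):
        # Flag positions a in [start, start+length) whose polynomial
        # values look smooth. factor_base is a list of (p, roots) with
        # f(r) = 0 mod p for each root r.
        scores = [0.0] * length
        for p, roots in factor_base:
            for r in roots:
                first = (r - start) % p           # first hit in the interval
                for i in range(first, length, p):
                    scores[i] += log(p)           # one scattered memory update
        return [start + i for i in range(length) if scores[i] >= threshold]

    # Tiny demo: sieve f(a) = a^2 - 7 with the primes 2, 3, 29.
    fb = [(2, [1]), (3, [1, 2]), (29, [6, 23])]   # roots of a^2 = 7 mod p
    print(sieve_interval(2, 40, fb, threshold=2.0))   # prints [6, 23, 35]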

This is a gaping hole in the Lenstra-Shamir-Tomlinson-Tromer analysis.

The authors are implicitly assuming, without justification, that conventional sieving is the most cost-effective approach to sieving for 1024-bit keys. In particular, they are implicitly assuming, without justification, that circuits (mesh sorting, mesh routing, early-abort rho+ECM, etc.; see my October 2001 grant proposal) would not be much less expensive. The authors conclude, without justification, that sieving by itself is a bottleneck for 1024-bit keys.
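
To make ``early-abort rho+ECM'' concrete: instead of one big shared sieving array, each candidate value gets its own cheap smoothness tester, which spends a little arithmetic on that one value and gives up the moment the remaining cofactor is too large to plausibly be smooth. Small per-tester state is what makes this circuit-friendly. Here is a toy Python version with a single rho stage and a single abort cutoff; real designs use ECM stages and tuned abort schedules, and all numbers below are placeholders:

    from math import gcd

    SMALL_PRIMES = [2, 3, 5, 7, 11, 13]

    def rho_factor(n, c=1):
        # One Pollard-rho attempt: returns a nontrivial factor of n, or n.
        x = y = 2
        d = 1
        while d == 1:
            x = (x * x + c) % n
            y = (y * y + c) % n
            y = (y * y + c) % n
            d = gcd(abs(x - y), n)
        return d

    def early_abort_smooth(v, bound):
        # Decide whether v is bound-smooth, giving up early when it can't be.
        for p in SMALL_PRIMES:
            while v % p == 0:
                v //= p
        if v > bound ** 3:        # early abort: cofactor too big to bother with
            return False
        stack = [v]
        while stack:
            m = stack.pop()
            if m <= bound:        # every prime factor of m is then <= bound
                continue
            d = rho_factor(m)
            if d in (1, m):       # rho failed: m is (almost certainly) a big prime
                return False
            stack += [d, m // d]
        return True

    print(early_abort_smooth(2**4 * 17 * 19, bound=100))   # True: 100-smooth
    print(early_abort_smooth(3 * 10007, bound=100))        # False: 10007 is prime
    print(early_abort_smooth(8 * 1000003, bound=100))      # False, via early abort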

Maybe special-purpose circuits will have a dramatic impact on the difficulty of 1024-bit and 1536-bit and 2048-bit factorizations. Maybe they'll have no impact at all. I don't know yet. I do know that ignoring the question doesn't help us find the answer.