Simple operations can vary widely in cost. A tiny circuit is (when bought in bulk) much less expensive than a large chunk of memory. A fast operation that fits inside a tiny circuit is, correspondingly, much more cost-effective than random access to a large chunk of memory.
Minimizing operation count is not the same as minimizing cost. For example, for large NFS factorizations, my circuits involve substantially more operations than conventional computers, but they are much more cost-effective: they are less expensive to build, and they complete the entire calculation more quickly. This is a familiar phenomenon from the VLSI literature.
The 2001 Lenstra-Verheul ``Selecting cryptographic key sizes'' paper, for example, says that N operations take at least N/450000000 seconds on a $100 450MHz PC. (More generally, at least (N/M)/450000000 seconds on a collection of M such computers.) This alleged lower bound on cost is simply wrong. It ignores the possibility of replacing a large chunk of RAM with a similarly priced collection of active processing cells.
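To make the Lenstra-Verheul conversion concrete, here is a minimal sketch of their operations-to-time formula. The function name and the operation and machine counts plugged in below are hypothetical, chosen only to illustrate the arithmetic.

```python
# Sketch of the Lenstra-Verheul conversion from operation count to time:
# N operations allegedly take at least (N/M)/450000000 seconds on a
# collection of M $100 450MHz PCs.  The values of N and M are hypothetical.

def lenstra_verheul_seconds(operations, machines=1):
    """Alleged lower bound on wall-clock seconds for the given operation count."""
    return (operations / machines) / 450_000_000

ops = 2**80          # hypothetical operation count
machines = 1000      # hypothetical number of $100 450MHz PCs

seconds = lenstra_verheul_seconds(ops, machines)
years = seconds / (365.25 * 24 * 3600)
print(f"Alleged minimum time: {seconds:.3e} seconds ({years:.3e} years)")
```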
Similarly, the June 2002 Lenstra-Shamir-Tomlinson-Tromer paper estimates that 1024-bit NFS sieving takes at least a year on a collection of 30 million 1GHz computers, and therefore is currently out of reach. This estimate implicitly assumes that the most cost-effective way to carry out N billion operations is to run a fairly expensive 1GHz computer for N seconds. But the VLSI literature already has many examples of operations being performed at vastly lower cost.
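For orientation, the operation count implicit in that estimate is easy to reconstruct: 30 million machines running at 10^9 cycles per second for a year perform roughly 10^24 cycles. The back-of-the-envelope sketch below assumes, purely for illustration, one operation per cycle.

```python
# Back-of-the-envelope reconstruction of the operation count implicit in the
# Lenstra-Shamir-Tomlinson-Tromer sieving estimate, assuming (hypothetically)
# one operation per clock cycle per machine.

machines = 30_000_000          # 30 million computers
clock_hz = 1e9                 # 1GHz
seconds_per_year = 365.25 * 24 * 3600

total_operations = machines * clock_hz * seconds_per_year
print(f"Implied operation count: about {total_operations:.2e}")  # roughly 10^24
```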
The Lenstra-Shamir-Tomlinson-Tromer paper says that these ``traditional'' uses of operation counts ``underestimate the difficulty of factoring.'' That is false. The authors are overestimating the cost of each operation.
A truly conservative estimate for cost, starting from operation count, would have to assume a much smaller cost per operation, shielding itself from improvements in actual circuit designs. The conclusions would provide no comfort for users of 1024-bit RSA.
| Method | Computation cost |
|---|---|
| Conventional NFS, literature | M^(1.5+o(1)) |
| Conventional NFS, optimized | M^(1.45...+o(1)) |
| Circuit NFS | M^(1.038...+o(1)) |
Define B as the number of key bits needed to protect against a machine costing only M^(1+o(1)). Conventional NFS and circuit NFS both cost more than that, so fewer key bits are needed to protect against them:
| Method | Key bits needed |
|---|---|
| Conventional NFS, literature | B/(3.375+o(1)) |
| Conventional NFS, optimized | B/(3.05...+o(1)) |
| Circuit NFS | B/(1.12...+o(1)) |
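These key-size factors are, up to the o(1) terms, the cubes of the cost exponents: NFS cost grows roughly as exp(c b^(1/3) (log b)^(2/3)) in the number of key bits b, so holding the cost fixed while multiplying the exponent by e shrinks the required key size by roughly a factor e^3, ignoring the logarithmic correction. A quick check of that arithmetic:

```python
# Minimal check that each key-size factor is (approximately) the cube of the
# corresponding cost exponent, ignoring o(1) terms and logarithmic corrections.

cost_exponents = {
    "Conventional NFS, literature": 1.5,
    "Conventional NFS, optimized": 1.45,
    "Circuit NFS": 1.038,
}

for method, e in cost_exponents.items():
    print(f"{method}: exponent {e} -> key-size factor about {e**3:.3f}")
# Expected: about 3.375, 3.05, and 1.12 respectively.
```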
The Lenstra-Shamir-Tomlinson-Tromer paper says that these results provide ``compelling new evidence'' to support the use of operation counts in evaluating security, and that one should not ``rely too much on supposedly more accurate cost-based estimates for the NFS.''
That assessment ignores the crucial difference between o(1) and 0. My circuits close most of the gap for very large input sizes, but this does not mean that they will have any effect upon 1024-bit or 1536-bit or 2048-bit factorizations.
It might turn out that 1024-bit keys are fairly difficult to break, even though conservative lower bounds derived from operation counts would demand substantially larger keys. Or it might turn out that 1024-bit keys are easy to break! The only way to find out is to do a careful cost analysis.