Integer factorization

Circuits for integer factorization

For example, in a traditional model of computation, a standard bubble sort takes n^(2+o(1)) seconds to sort a typical list of n small numbers. This means that

- there is some input size (maybe 1000? maybe 1000000? I haven't told you) such that the time is between n^(1.9) and n^(2.1) for all n above that size; and
- there is some input size (again, I haven't told you what it is) such that the time is between n^(1.99) and n^(2.01) for all n above that size;
- and so on: for every exponent gap around 2, no matter how narrow, the time eventually fits inside that gap.
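For concreteness, here is a minimal bubble-sort sketch in Python (my own illustration; the function name and the comparison counter are not from the source). This implementation always performs exactly n(n-1)/2 comparisons on a list of length n, which is where the quadratic growth comes from:

```python
def bubble_sort(a):
    """Sort the list a in place.

    Returns the number of comparisons performed, which for this
    implementation is always n*(n-1)/2 for a list of length n:
    quadratic growth, matching the n^(2+o(1)) running time.
    """
    comparisons = 0
    n = len(a)
    for i in range(n - 1):
        # After pass i, the largest i+1 elements are in final position.
        for j in range(n - 1 - i):
            comparisons += 1
            if a[j] > a[j + 1]:
                a[j], a[j + 1] = a[j + 1], a[j]
    return comparisons

a = [5, 3, 1, 4, 2]
print(bubble_sort(a), a)  # 10 comparisons; a is now [1, 2, 3, 4, 5]
```

Note that n(n-1)/2 is n^2 only up to a constant factor; the o(1) in the exponent is what absorbs that factor, and this is exactly the information the notation refuses to pin down.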

**WARNING:**
If you start from only the o(1) information,
you *cannot* make predictions about any specific input size.
In particular, replacing o(1) with 0 almost never produces correct results.
For example:

- Quick sort takes only n^(1+o(1)) seconds
to sort a typical list of n small numbers,
so it is faster than bubble sort by a factor of n^(1+o(1)).
This does *not* mean that it is faster by a factor of n,
or that it is faster for, say, n=8;
the notation n^(1+o(1)) could mean n/(5 log_2(4n)) just as easily as n.
The simplest formula is rarely the most accurate formula.
- The quadratic sieve (QS) takes time
exp((1+o(1)) (log n)^(1/2) (log log n)^(1/2))
to (try to) factor n.
The number field sieve (NFS) takes time
exp((1.901...+o(1)) (log n)^(1/3) (log log n)^(2/3)).
Some people, after mindlessly replacing o(1) with 0
and comparing exp((log n)^(1/2) (log log n)^(1/2))
with exp((1.901...) (log n)^(1/3) (log log n)^(2/3)),
publicly predicted that NFS would be slower than QS
for n up to 380 bits.
In fact, NFS appears to already be several times faster than QS
for n around 380 bits.
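The naive QS/NFS comparison can be reproduced numerically. The sketch below (my own illustration, not from the source; log is the natural logarithm, and 1.901 truncates the 1.901... constant in the NFS exponent) evaluates both exponents with o(1) replaced by 0 for a 380-bit n. They nearly coincide there, which is exactly how the mistaken crossover prediction arises:

```python
from math import log

def qs_exponent(bits):
    # (log n)^(1/2) * (log log n)^(1/2), with o(1) replaced by 0
    ln_n = bits * log(2)  # log n for an n of the given bit length
    return ln_n ** 0.5 * log(ln_n) ** 0.5

def nfs_exponent(bits):
    # 1.901... * (log n)^(1/3) * (log log n)^(2/3), with o(1) replaced by 0
    ln_n = bits * log(2)
    return 1.901 * ln_n ** (1 / 3) * log(ln_n) ** (2 / 3)

# Near 380 bits the two o(1)-free formulas are almost equal, so the
# naive prediction puts the QS/NFS crossover there; real implementations
# put NFS ahead well before that point.
print(round(qs_exponent(380), 2), round(nfs_exponent(380), 2))
```

The o(1) terms hidden in both exponents are large enough at these sizes to move the apparent crossover by hundreds of bits, which is why this kind of plug-in comparison cannot be trusted.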

On the bright side, statements using o(1) are typically easier, often much easier, to prove than statements that provide more information.