
Notes on performance

Speed

On a Pentium II-350 under OpenBSD, according to various HTTP benchmarking tools, publicfile handles 200-300 small single-fetch connections per second. That's roughly 20 million connections per day. (I used the default installation and configuration, except that I compiled publicfile statically: add -static to conf-ld.)
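For concreteness, static linking is a one-line change to conf-ld. Assuming conf-ld starts with its usual default, something like

     gcc -s

the statically linked build uses

     gcc -s -static

(The linker line on your system may differ; only the added -static matters here.)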

publicfile achieves similar results under other operating systems, except Solaris. Solaris adds an incredible amount of bloat to every process invocation, and publicfile runs one process per connection, so that overhead shows up directly in the connection rate.

Memory use

An active HTTP connection consumes server memory in several ways: kernel memory for TCP buffers, and memory for the process handling the connection. The outgoing TCP buffer space is occupied for at least one round-trip time: typically 100 milliseconds or more for a connection from a home computer. (HTTP ``simultaneous user'' benchmarks usually ignore this crucial fact.) In contrast, the process lasts for only a few milliseconds for an HTTP/1.0 connection (or an HTTP/1.1 connection where the client sends FIN immediately), provided the result fits into the outgoing TCP buffer.
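To put numbers on this: at 250 connections per second with a 100-millisecond round trip, roughly 25 connections are occupying outgoing TCP buffer space at any instant, while the corresponding processes, each alive for only a few milliseconds, average out to one or two at a time.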

Misguided clients that hold HTTP/1.1 connections open with nothing to do are pointlessly chewing up memory for thousands of milliseconds. If this practice spreads, servers will have to impose extremely low timeouts on HTTP/1.1 connections.

The publicfile configure program normally allows 100 simultaneous HTTP processes and 40 simultaneous FTP processes. It imposes a soft data-segment limit of 50K on each process.
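These limits come from the daemontools run scripts that configure writes: tcpserver's -c option caps the number of simultaneous connections (and thus processes), and softlimit's -d option caps each process's data segment. Here is a sketch of such an HTTP run script, with an assumed ftp account and illustrative paths rather than exactly what configure emits on your system:

     #!/bin/sh
     exec envuidgid ftp softlimit -d50000 \
       tcpserver -c100 0 80 \
       /usr/local/publicfile/bin/httpd /public/file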