D. J. Bernstein
Notes on writing papers

# The devil's guide to choosing new mathematical terminology

## Terminology that conflicts with widespread terminology

For more than fifty years the word pseudorandom has been widely used to mean ``passes some statistical tests.'' Consider, for example, the 1949 quote ``a pseudo-random number lying between 0 and 1 is computed arithmetically'' in the Oxford English Dictionary; or the 1968 paper by Good and Churchhouse on ``pseudorandom features of the Moebius sequence''; or the 1989 paper by Niederreiter on ``congruential pseudorandom numbers generated by inversions''; or the 1989 paper by Boyar predicting ``sequences produced by certain pseudo-random number generators''; or the Handbook of Applied Cryptography, Chapter 5, which discusses ``pseudorandom bit generators'' ranging from linear congruential generators through the ``Blum-Blum-Shub pseudorandom bit generator''; or hundreds of thousands of web pages easily located through Google.

Of course, pseudorandomness is pointless as a yes-no question: the answer is always ``Yes, this generator passes some statistical tests.'' But as a quantitative question---``Which statistical tests does this generator pass?''---it is an extremely useful concept.
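The quantitative view can be made concrete with a small sketch (my own illustration, not from the original text): a classic linear congruential generator passes a trivial balance test on its top output bit, and the useful question is which stronger tests it also passes. The generator parameters below are the standard Numerical Recipes choices, and the test is the simplest imaginable.

```python
def lcg(seed, a=1664525, c=1013904223, m=2**32):
    """A linear congruential generator (Numerical Recipes parameters)."""
    x = seed
    while True:
        x = (a * x + c) % m
        yield x

def monobit_balance(bits):
    """Fraction of 1 bits: the crudest statistical test of balance."""
    return sum(bits) / len(bits)

gen = lcg(12345)
# Test the top bit of each output; the low bits of an LCG are notoriously weak.
bits = [next(gen) >> 31 for _ in range(10000)]
assert abs(monobit_balance(bits) - 0.5) < 0.02  # passes this simple test
# The same generator fails stronger tests (e.g., spectral tests), which is
# exactly why "which tests does it pass?" is the useful question.
```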

A 1988 paper by Luby and Rackoff stole the word ``pseudorandom'' for a much narrower concept. This paper says that a generator is ``pseudorandom'' only if it passes every statistical test that can be computed within a specified time limit. According to that definition, linear congruential generators are not ``pseudorandom,'' and the Blum-Blum-Shub generator is merely conjectured to be ``pseudorandom.''
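Why linear congruential generators fail the narrower definition can be seen in a toy sketch (my own illustration; it assumes the modulus is known and one difference of outputs is invertible mod m, which is far easier than what Boyar's actual inference attacks handle): three consecutive outputs suffice to recover the parameters and predict everything that follows.

```python
def recover_lcg(x0, x1, x2, m):
    """Given three consecutive LCG outputs and the modulus m, recover (a, c)."""
    # x1 = a*x0 + c and x2 = a*x1 + c (mod m), so x2 - x1 = a*(x1 - x0) (mod m).
    a = ((x2 - x1) * pow(x1 - x0, -1, m)) % m   # modular inverse: Python 3.8+
    c = (x1 - a * x0) % m
    return a, c

# Set up a generator with "secret" parameters and emit four outputs.
m, a, c = 2**31 - 1, 1103515245, 12345
xs = [42]
for _ in range(3):
    xs.append((a * xs[-1] + c) % m)

# An observer who saw only xs[0..2] predicts xs[3] exactly.
a_rec, c_rec = recover_lcg(xs[0], xs[1], xs[2], m)
assert (a_rec, c_rec) == (a, c)
assert (a_rec * xs[2] + c_rec) % m == xs[3]
```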

The narrower concept is useful in cryptography, and it deserves a name (such as ``unpredictable''). But the broader concept is also useful, and the use of ``pseudorandom'' for that concept isn't going away. Reusing the word ``pseudorandom'' for the narrower concept creates unnecessary confusion: what happens when a reader feeds a ``linear congruential pseudorandom number generator'' to a theorem about ``pseudorandom generators''?

By creating new terminological conflicts, you can encourage serious errors, annoy knowledgeable readers, and isolate your research area from the rest of the community. If you're challenged, claim that your new concept is more important than the old concept, that anyone who wants to use the old concept can say something else, and that staying compatible with ancient terminology will interfere with progress.

Every positive integer is an integer. Every continuous function is a function. Every genus-2 hyperelliptic curve is a hyperelliptic curve. Every bounded-grimace jabberwocky over a finite ring is a jabberwocky over a finite ring.

When the reader sees an adjective (or adjectival phrase) such as ``positive'' or ``continuous'' or ``genus-2'' or ``bounded-grimace,'' he knows that it is restrictive: every adj x is an x. Even if the reader has never seen the phrase ``genus-2'' before, he knows that the class of genus-2 hyperelliptic curves is contained in the class of hyperelliptic curves.

There are some exceptions to this rule---some non-restrictive adjectives. Here are a few examples:

• Can you conclude that a ``partial function from A to B'' is a function from A to B? No. It is, by definition, a function from a subset of A to B.
• Can you conclude that an ``algebraic integer'' is an integer? No. It is, by definition, a root in \C of a monic integer polynomial.
• Can you conclude that a ``random element of X'' is an element of X? No. It is, by definition, a measurable function to X from the probability space \Pr of possible universes.

Every exception imposes a cost on the mathematical community: everyone has to learn all of the exceptions or risk drawing incorrect conclusions. For example, I once bumped into a cryptographer who had completely misunderstood various theorems because---having never seen the definition of the word ``random''---he incorrectly assumed that a ``random function from S to T'' was a function from S to T.
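The cryptographer's mistake can be made concrete with a sketch (the framing is mine, not the essay's, with a seed standing in for a point of the probability space of possible universes): a ``random function from S to T'' is a function from universes to functions from S to T; only after fixing a universe do you hold an actual function from S to T.

```python
import random

S = ['a', 'b', 'c']
T = [0, 1]

def random_function(universe):
    """One point of the probability space in, one function S -> T out."""
    rng = random.Random(universe)       # the universe determines all choices
    table = {s: rng.choice(T) for s in S}
    return lambda s: table[s]

f = random_function(universe=2024)  # universe fixed: f IS a function S -> T
g = random_function(universe=2025)  # a different universe, a different function
# The mistake is treating random_function itself as a function from S to T.
assert all(f(s) in T for s in S)
```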

What does it mean for an algorithm to be ``probabilistic''?

Is it true that deterministic algorithms are probabilistic algorithms? Or is an algorithm required to flip at least one coin to qualify as a probabilistic algorithm?

The first possibility---deterministic algorithms are probabilistic algorithms by definition---sounds weird. It isn't how people talk. People often write things like ``this algorithm is probabilistic'' to emphasize that there are coin flips.

Other people define ``probabilistic algorithm'' without this requirement, and then state theorems about ``probabilistic algorithms'' to cover algorithms that might flip coins and algorithms that don't. This definition changes the meaning of the statement ``this algorithm is probabilistic''; it removes all content from the word ``probabilistic.''

Of course, this compatibility failure is entirely unnecessary. The broad class can be (and often is) simply called ``algorithms.'' Theorems refer to ``algorithms.'' An algorithm is ``deterministic'' if it has no coin flips. End of problem.
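The clean convention can be sketched in code (names and signatures are mine, purely for illustration): every algorithm takes an input and a supply of coins, a theorem about ``algorithms'' quantifies over all of them, and an algorithm is ``deterministic'' exactly when it never reads its coins.

```python
def randomized_guess(x, coins):
    """An algorithm that flips a coin: perturbs x by one random bit."""
    return x + next(coins)

def doubler(x, coins):
    """A deterministic algorithm: same signature, but the coins are never read."""
    return 2 * x

# One theorem statement ("for every algorithm A(x, coins) ...") covers both.
coins = iter([1, 0, 1])
assert doubler(21, coins) == 42             # ignores its coins
assert randomized_guess(41, coins) in (41, 42)
```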

As another example, some papers define a broad class of ``existential forgeries'' (rather than simply ``forgeries'') and a subclass of ``selective forgeries,'' breaking the common use of the phrase ``existential forgeries'' to refer to forgeries that are not selective.

You can create further problems of this type by inserting content-free adjectives into your own definitions. Here's a model for you to follow: rather than defining a set of ``integers'' of which some are ``nonnegative integers,'' you can define a set of ``signed integers'' of which some are ``nonnegative integers.''