There are many different variants of RSA and RW signatures. Some variants are easy to break. Some variants have proofs saying that attacks working for all choices of hash function ("generic-hash attacks", aka "ROM attacks") imply, with a bounded loss of efficiency, attacks against RSA inversion or against factorization. Some proofs are "tight" proofs where the loss of efficiency is, e.g., 2; some proofs are "loose" proofs where the loss of efficiency is, e.g., the number of calls to the hash function.
The chart on the front page of my rwtight paper summarizes what has been proven. The distinctions that matter for proofs are in the following three dimensions:

* the number B of bits of randomness that the signer inserts before hashing: large, 1, or 0;
* whether the signer is "variable", making fresh random choices each time, or "fixed", making the same choices again if the same message appears again;
* whether the system is RSA, principal RW (square roots that are themselves squares), or unstructured RW (uniform random square roots).
Overall there are 18 systems in the chart.

* The following 10 have tight proofs: large B, whether fixed or variable, whether RSA or principal RW or unstructured RW; B=1, fixed, whether RSA or principal RW or unstructured RW; B=0, fixed, unstructured RW.
* The following 6 have loose proofs, with tight proofs as an open question: B=1, variable, RSA or principal RW; B=0, fixed or variable, RSA or principal RW.
* The following 2 are easily broken: B=0 or B=1, variable, unstructured RW.
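To make the three dimensions concrete, here is a toy Python sketch of a Rabin-Williams signer parametrized by all three choices. This is my own illustration, not code from the paper: the primes are far too small for real use, SHA-256 stands in for the hash and the PRF, and all names are hypothetical. (The RSA variants differ only in replacing the tweaked square roots below with e-th roots.)

```python
# Toy sketch (hypothetical, insecure parameters) of the three dimensions
# of Rabin-Williams variants.
import hashlib, secrets

p, q = 10067, 10007          # toy primes: p = 3 (mod 8), q = 7 (mod 8); insecure!
n = p * q
prf_key = b'secret PRF key'  # separate key, used only by the "fixed" variants

def H(data):                 # generic hash straight into Z/n (stand-in)
    return int.from_bytes(hashlib.sha256(data).digest(), 'big') % n

def prf(data):               # deterministic stand-in for the signer's PRF
    return hashlib.sha256(prf_key + data).digest()

def sign(m, B=0, fixed=True, unstructured=False):
    # Dimension 1: B bits (here B/8 bytes) of randomness inserted before hashing.
    # Dimension 2: "fixed" derives the randomness from m; "variable" draws it fresh.
    r = (prf(m) if fixed else secrets.token_bytes(32))[:B // 8]
    h = H(r + m)
    # Williams tweak: exactly one f in {1,-1,2,-2} makes f*h a square mod n.
    for f in (1, -1, 2, -2):
        fh = f * h % n
        sp = pow(fh % p, (p + 1) // 4, p)   # principal square root mod p
        sq = pow(fh % q, (q + 1) // 4, q)   # principal square root mod q
        if sp * sp % p == fh % p and sq * sq % q == fh % q:
            break
    # Dimension 3: principal RW keeps (sp, sq) as is; unstructured RW flips
    # random signs, i.e. picks a uniform random one of the four square roots.
    if unstructured:
        bits = prf(b'root' + m) if fixed else secrets.token_bytes(2)
        if bits[0] & 1: sp = p - sp
        if bits[1] & 1: sq = q - sq
    s = (sp * q * pow(q, -1, p) + sq * p * pow(p, -1, q)) % n   # CRT combine
    return f, r, s

f, r, s = sign(b'hello', B=8, fixed=True, unstructured=True)
assert s * s % n == f * H(r + b'hello') % n    # verification equation
```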
In 1986, Oded Goldreich suggested modifications to the GMR signature scheme, including the following: "the names of the nodes in the REF tree should be generated using a pseudorandom function (applied to the location of the node), rather than randomly. This way, we guarantee that the same names are always used for the same nodes, without having to store these names. (The idea to use pseudorandom functions to eliminate this part of the storage requirement in the GMR scheme was suggested by Leonid Levin.) Assuming the existence of one-way permutations, pseudorandom functions can be efficiently constructed [GGM]. ... the identity of the (k-level) leaf to be used can be computed by applying a pseudorandom function to the message to be signed. Thus, no coin tosses are required during the signing process. One can easily show that the security feature is preserved." This is an example of fixed signatures, with GMR as the signature scheme and GGM as the underlying function.
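A minimal sketch of Goldreich's suggestion, under my own assumptions (HMAC-SHA-256 as a stand-in for a GGM-style PRF; the names and label formats are hypothetical):

```python
# Hypothetical sketch: node names in the authentication tree come from a PRF
# of the node's location, and the leaf used for a message comes from a PRF of
# the message, so the signer stores only the PRF key and tosses no coins
# while signing.
import hashlib, hmac

PRF_KEY = b'signer PRF key'

def prf(label: bytes) -> bytes:
    return hmac.new(PRF_KEY, label, hashlib.sha256).digest()

def node_name(location: str) -> bytes:        # e.g. location = "0/1/1/0"
    return prf(b'node:' + location.encode())

def leaf_index(message: bytes, k: int) -> int:  # choose among the 2^k leaves
    return int.from_bytes(prf(b'leaf:' + message), 'big') % (2 ** k)
```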
In 1993, Mihir Bellare and Phillip Rogaway gave loose proofs for RSA for B=0.
In 1996, Bellare and Rogaway gave tight proofs for large B, both for RSA and for unstructured RW. They specifically disclaimed applicability to other RW variants: "We stress that a random root is chosen; a fixed one won't do."
In 1997, George Barwood and independently John Wigley gave a general statement of fixed signatures, replacing signature randomness in "DSA / NR / most other algo's" with a deterministic function of the message being signed, such as a hash of the private key and the message. Followups by Bryan Olson outlined how to relate security to the PRF security of the hash function if one uses "a different key" for the hashing.
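In code, the Barwood-Wigley idea amounts to something like the following sketch (hypothetical; HMAC-SHA-256 stands in for "a hash of the private key and the message", with a separate nonce key as in Olson's followup so that the analysis can lean on PRF security of the hash):

```python
# Hypothetical sketch of deterministic nonce derivation for a DSA-style
# scheme with group order q_ord.
import hashlib, hmac

def nonce(nonce_key: bytes, message: bytes, q_ord: int) -> int:
    # Same message, same nonce; the nonce key is separate from the signing key.
    k = int.from_bytes(hmac.new(nonce_key, message, hashlib.sha256).digest(), 'big')
    return 1 + k % (q_ord - 1)   # in {1, ..., q_ord - 1}; slight modulo bias, fine for a sketch
```

Repeating a message repeats the nonce, which is harmless; what deterministic derivation rules out is reusing one nonce for two different messages, the classic DSA failure mode.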
In 1999, Kaoru Kurosawa and Wakaha Ogata claimed a tight proof for principal RW with B=0. This "proof" is fatally flawed (see full details below), and there is no reason to think that the "theorem" is provable.
The paper has more problems than that. The first time I looked at this paper, I added it to my online bibliography with the following note: "Reinvention of what the rest of us call the Rabin-Williams signature system: specifically, the idea of allowing a multiplier in {-2,-1,1,2}, with primes in 3+8Z and 7+8Z. You can find this idea in an ISO standard from 1991, for example, and in some code I published in 1997."
In 2001, Jean-Sébastien Coron identified obstacles to tight black-box proofs for RSA with B=0 ("FDH").
In 2003, Jonathan Katz and Nan Wang gave a tight proof for fixed RSA with B=1. The original version of this paper claimed novelty for the entire idea of fixed signatures (for example: "Our proposed modification easily extends to eliminate the need for a random salt in, e.g., [7, 18] as well"). I pointed the authors to the 1997 postings by Barwood and Wigley. The authors added a footnote saying "Others have previously suggested replacing the random salt used in some other signature schemes by a deterministic (but unpredictable) function of the message" and citing those postings. The paper by Katz and Wang was then merged with a paper by Eu-Jin Goh and Stanislaw Jarecki; the merged paper goes back to giving zero credit to the 1997 postings.
In 2003, I posted a tight proof for fixed |principal| RW for B=1, following the approach from Katz and Wang.
In November 2006, I posted slides giving a tight proof for fixed unstructured RW for B=0, following a new approach, along with an outline of how the 2003 approach works for principal RW with B=1 and for |principal| RW with B=1.
I updated my paper in February 2007 to cover everything. I explained in the introduction of the paper why the new approach for B=0 was only for unstructured RW: the proof exploits a loophole, namely that an unstructured signer makes a random choice among the square roots, and a simulator can produce exactly the same distribution of roots without knowing the factors.
Principal RW (and |principal| RW) don't make any random choices for B=0, so this loophole doesn't apply to them. How, then, can the 1999 paper have claimed a tight proof for principal RW with B=0? The answer is that the "proof" is wrong (see full details below).
Also in February 2007, Ogata and Naoya Matsumoto published a followup to the 1999 paper from Kurosawa and Ogata.
I submitted my paper to Crypto 2007. The paper was rejected, on the basis of the theorem supposedly having already been proven by Kurosawa and Ogata—"with a much shorter and cleaner proof", one reviewer wrote; "check the relevant scientific literature before submitting a paper", another reviewer wrote.
Whatever re-checking the reviewers did of the supposedly short and clean "proof" that they were relying upon was, evidently, not careful enough to catch the error in that "proof".
Somehow the reviewers also blinded themselves to the tension between the 1999 "theorem" and Coron's results. One possible explanation for this blindness comes from a review claiming that the 1999 "theorem" was "identical" to my theorem. If the 1999 "theorem" had been for fixed unstructured RW with B=0 then, yes, this would have enabled the loophole explained in my paper; but the 1999 paper is for variable principal RW with B=0.
I objected (see full quote below). After discussion with the reviewers, the program chair wrote back: "You presented your case very clearly, and no one had objections to any of the technical claims in your email. It was indeed unfortunate that the Kurosawa-Ogata paper was not referenced in your paper, and that we did not notice that the Kurosawa-Ogata security proof was fallacious. ... You should definitely cite the Kurosawa-Ogata paper in your revision of the paper, and explain why the proof of Theorem 1 is false and likely cannot be fixed. We think that your explanation will strengthen your paper and will highlight the subtleties with proving the security of different variants of Rabin-Williams. It will also be very useful to the referees of your next submission." Regarding my question "Does everyone agree that the Kurosawa-Ogata 'proof' is wrong?", the program chair answered "Yes".
Of course, for people who think that "science" means "refereed papers", this admission from the Crypto 2007 program committee doesn't matter. The status was still that the wrong "proof" from Kurosawa and Ogata (for variable principal RW with B=0) was a refereed paper, while my explanation of the flaws in that "proof" wasn't a refereed paper, and my correct proof (for fixed unstructured RW for B=0) wasn't a refereed paper.
The program chair also forwarded a side comment from a reviewer. As context, the most important flaw in the 1999 "proof" was an unjustified claim about probabilities, namely that the success chance of any attack against a particular simulator ("Pr[F successes]") is the same as the success chance against the legitimate signer ("ε(k)"). I had given an example (see full quote below) of an attack for which this claim is not just unjustified but simply wrong: the success probability is approximately 0 for the simulator, and 1 for the legitimate signer. The reviewer responded with weaseling.
In October 2007, I posted an update to my paper, including a detailed exposition of what was wrong with the "proof" from Kurosawa and Ogata.
In November 2007, Craig Gentry, Chris Peikert, and Vinod Vaikuntanathan posted a paper that, without credit, uses my 2006 proof technique to prove a more general theorem. I wrote to them in January 2008 to say "my impression---correct me if I'm wrong---is that your Proposition 5.1 is a generalization of a result that I posted a year earlier" and to give links. Peikert wrote back saying "I agree that our abstraction and proof are a generalization. We will certainly cite your paper in our next revision and give some perspective regarding the similarities". That revision didn't appear online until 2010, and didn't give credit for the proof technique. This is plagiarism; as the AMS Policy Statement on Ethical Guidelines puts it, a "claim of independence may not be based on ignorance of widely disseminated results".
In January 2008, I received comments on the revised paper from reviewers for Eurocrypt 2008.
The program chair said that my paper would be accepted if it was edited to the program chair's satisfaction. The February 2008 version of my paper has just one sentence summarizing the flaw in the 1999 "proof". The paper appeared at Eurocrypt 2008, and adds 1 to the citation count for the 1999 paper.
This is the email that I sent on 8 May 2007 at 04:04:55 -0000 to the Crypto 2007 program chair.
I am writing to formally request additional feedback from the program committee regarding my paper "Proving tight security for Rabin-Williams signatures." I am planning to submit this paper to next year's Crypto, and I think that a small amount of extra effort by people who reviewed the paper this year will save quite a bit of time for next year's PC.

My paper claims several contributions but highlights one of them, namely a tight security proof for "fixed unstructured B=0" signatures. Here I'm using the terminology in my paper: Rabin-Williams signers insert B bits of randomness before hashing; "principal" signers choose square roots that are squares; "unstructured" signers choose uniform random square roots; "fixed" signers choose the same square root again if the same message appears again.

The edited reports from the program committee clearly indicate that my paper was rejected on the basis of prior art, specifically a paper "Efficient Rabin-type digital signature scheme" by Kurosawa and Ogata in 1999. I am writing this message after carefully reviewing the paper by Kurosawa and Ogata, and in particular after carefully reviewing the security argument presented in Section 6 of that paper.

The signature system considered by Kurosawa and Ogata is what I call "principal B=0" Rabin-Williams signatures. Specifically, Kurosawa and Ogata discuss

* a "basic scheme" that changes Rabin's x^2 to any of x^2, alpha_1 x^2, alpha_2 x^2, alpha_1 alpha_2 x^2---the well-known signature version of Williams' modification of Rabin's idea;
* an "improved scheme" that specifies alpha_1 = -1 and alpha_2 = 2---a well-known simplification, no difference in security analysis;
* a "further improvement" (Section 7) that uses two bits to transmit the choice between x^2, -x^2, 2x^2, -2x^2---a standard improvement, no difference in security analysis.

Kurosawa and Ogata hash messages directly, without randomization; see "Step 1" in the signing algorithms. They choose principal square roots; see, for example, equation (21) et seq. They emphasize in the abstract that all of these systems are "deterministic". (They also incorrectly claim novelty for these systems, but this is a side issue.)

After introducing principal B=0 Rabin-Williams signatures, Kurosawa and Ogata claim tight provable security for those signatures. See "Theorem 1" in Section 6. I'm using "provable" in its usual sense here: generic forgery algorithms can be converted into factorization algorithms at comparable speed.

Readers will have noticed a critical feature of the Kurosawa-Ogata security "theorem" at this point: Kurosawa and Ogata claim tight security for _principal_ B=0 signatures. Question for the program committee: Does everyone agree that the Kurosawa-Ogata security "theorem" is for deterministic signatures?

For comparison, my paper proves tight security for fixed _unstructured_ B=0 signatures, but it does _not_ prove tight security for _principal_ B=0 signatures, and it explains a huge obstacle to handling deterministic cases such as the principal case. I don't know how to prove the Kurosawa-Ogata security "theorem," and I'm quite sure that nobody else knows how to prove it either.

Anyone reading this message can predict that, after this windup, I am going to say that I don't believe the Kurosawa-Ogata security "proof." Readers probably want me to back this up by pointing out a gaping flaw in the "proof." Readers might even want me to provide an example showing that the Kurosawa-Ogata "proof" simply doesn't work. Okay, here we go!
To answer a hash query ("give me a hash value" at the top of page 62), the Kurosawa-Ogata simulator picks r in {0,...,pq-1} and i in {0,1,2,3} "randomly" and returns r^2/alpha_i as the hash value. To answer a signing query with that hash value, the simulator returns r. Kurosawa and Ogata claim that the success chance of an attack using this simulator ("Pr[F successes]") is the same as the success chance against the legitimate signer ("epsilon(k)"), and that the resulting forgery r^2/alpha_i = s^2/alpha_i produces a factor of pq when r != +-s.

The big problem with this "proof" (I'll ignore all of the minor problems with the occasional numbers having factors in common with pq) is that the distribution of signatures r from the simulator is _not_ the same as the distribution of signatures from the legitimate signer. It isn't even close. The simulator produces _unstructured_ square roots, whereas the legitimate signer produces _principal_ square roots.

Because the simulator produces the wrong signature distribution, there is no justification for the claim that the simulator doesn't change the attack's success probability. Maybe the attack learns something useful about pq from the legitimate signer's principal square roots. There is no reason to believe that the same attack will work with unstructured square roots.

To see that the claim is not merely unproven but in fact unprovable, consider an artificial attacker that

* collects many signatures;
* for simplicity, focuses on the signatures that use alpha_0 = 1;
* checks the Jacobi symbols of those signatures;
* gives up if any of the symbols are not 1;
* factors pq with the number-field sieve; and
* uses the factorization to forge signatures.

This attack works against the real signer at a quite impressive speed, namely the speed of the number-field sieve. It doesn't work at all against the simulator. The simulator has dramatically changed the attack's success probability, disproving the Kurosawa-Ogata claim.

Questions to the program committee: Does everyone agree that the Kurosawa-Ogata "proof" is wrong? Does anyone see a way to prove the "theorem" claimed by Kurosawa and Ogata?

If everyone is in agreement, I would appreciate a formal retraction of the following statements from the program committee:

* "Another reviewer found an earlier paper with the same result";
* "They don't make a big deal about it, but they do prove it.";
* "An identical result was already known, with a much shorter and cleaner proof".

I would also appreciate any advice from the program committee regarding what I should say about the Kurosawa-Ogata paper in my next submission of this paper. Finally, I would appreciate hearing any useful comments from the committee discussion, if the authors are willing to release those comments to me. In particular, several of the reports seem to have been edited under the incorrect assumption of prior art, and the pre-edited reports could be quite helpful for my resubmission.
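To make the distribution gap in the email concrete, here is a toy numerical experiment. This is my own hypothetical code, with insecure small primes, focusing as in the email on the alpha_0 = 1 case: the legitimate signer's principal square roots always have Jacobi symbol 1, while the simulator's uniform square roots have Jacobi symbol -1 about half the time, so the email's attacker almost always gives up against the simulator while succeeding against the legitimate signer.

```python
# Toy demonstration (hypothetical code, small insecure primes) of the
# principal-vs-unstructured distribution gap, visible via Jacobi symbols.
import secrets
from math import gcd

p, q = 10067, 10007        # toy primes: p = 3 (mod 8), q = 7 (mod 8); insecure!
n = p * q

def jacobi(a, m):          # standard Jacobi-symbol algorithm, m odd positive
    a %= m
    result = 1
    while a:
        while a % 2 == 0:
            a //= 2
            if m % 8 in (3, 5):
                result = -result
        a, m = m, a
        if a % 4 == 3 and m % 4 == 3:
            result = -result
        a %= m
    return result if m == 1 else 0

def unit():                # uniform random element coprime to n
    while True:
        x = secrets.randbelow(n - 1) + 1
        if gcd(x, n) == 1:
            return x

def principal_root(h):     # the square root of h that is itself a square mod n
    sp = pow(h % p, (p + 1) // 4, p)
    sq = pow(h % q, (q + 1) // 4, q)
    return (sp * q * pow(q, -1, p) + sq * p * pow(p, -1, q)) % n

# Legitimate signer, alpha_0 = 1 case: hash values that happen to be squares
# are answered with principal square roots, whose Jacobi symbol is always 1.
assert all(jacobi(principal_root(pow(unit(), 2, n)), n) == 1 for _ in range(100))

# Kurosawa-Ogata simulator: picks r uniformly, declares r^2 to be the hash
# value, and answers the signing query with r itself.  About half of these
# "signatures" have Jacobi symbol -1.
count = sum(jacobi(unit(), n) == -1 for _ in range(100))
print(f"simulator signatures with Jacobi symbol -1: {count}/100")  # roughly 50
```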