Facts: You can trivially set an alternative SOA contact address, or any other SOA information you want. The general Z-line syntax is explained in the tinydns-data documentation, and an example is covered in detail in the upgrade documentation under Administration.
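For concreteness, here is a sketch of a Z line; the domain, server name, and contact name are placeholders, and the remaining SOA fields are omitted so they take tinydns-data's defaults:

    Zexample.com:a.ns.example.com:dnsadmin.example.com

The first label of the third field becomes the mailbox, so this line publishes dnsadmin@example.com as the SOA contact address for example.com.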
The contact address ``hard-coded within tinydns'' is simply a convenient default contact address: hostmaster at the domain. What kind of idiot would treat this helpful abbreviation as if it were a limitation?
(There is an unauthorized third-party patch that lets the user specify, in one line, a different default contact address for all subsequent zones. This is another feature that isn't available in BIND. If you're curious why I have rejected the patch: It's clear that the same feature should instead be provided by higher-level data-editing tools. Putting the feature into the data syntax would make those higher-level tools unnecessarily difficult to write.)
Tip: When Knowles says something is impossible to do, check the official documentation. Chances are that you'll find out exactly how to do it.
Brad Knowles, 2002.11.09: ``[djbdns] natively supports very limited set of record types: SOA, NS, A, MX, PTR, TXT, CNAME.''
Facts: tinydns allows all record types, including SRV, through a generic record syntax. The syntax is explained in the tinydns-data documentation.
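As a sketch of what a generic-syntax line looks like, here is an SRV record (type 33); the names, priority 10, weight 5, and port 5060 are placeholders, and the fourth field is the record's raw data with non-printable bytes written in octal:

    :_sip._udp.example.com:33:\000\012\000\005\023\304\003sip\007example\003com\000

The data is simply the SRV wire format: two bytes each of priority, weight, and port, followed by the target name sip.example.com in DNS label form.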
(There is an unauthorized third-party patch that adds a special syntax for SRV records. I have rejected this patch for the reasons mentioned above.)
Facts: It is trivial to run both services on separate IP addresses on the same machine. There are several examples of this in the documentation.
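For example, assuming daemontools is scanning /service, the dnscache, tinydns, and dnslog accounts already exist, and the machine is configured with the two placeholder addresses shown, the standard setup is a few commands:

    dnscache-conf dnscache dnslog /etc/dnscachex 192.0.2.1
    touch /etc/dnscachex/root/ip/192.0.2      # allow clients in 192.0.2.* to use the cache
    tinydns-conf tinydns dnslog /etc/tinydns 192.0.2.2
    ln -s /etc/dnscachex /etc/tinydns /service

dnscache then answers queries from clients on 192.0.2.1 while tinydns publishes your data on 192.0.2.2, on the same machine.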
Facts: tinydns fully supports client differentiation. You can, for example, provide records only to internal clients, making them invisible to external clients. This feature is more powerful than BIND's ``views'': tinydns's client differentiation has per-line granularity, while ``views'' have per-zone granularity.
Of course, separate servers are supported too.
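A sketch of per-line differentiation, with placeholder names and addresses: internal clients (here, anything in 192.168.*) see one address for www.example.com, and everyone else sees another.

    %in:192.168
    %ex:
    +www.example.com:192.168.1.5:::in
    +www.example.com:203.0.113.5:::ex

The % lines define client locations by address prefix; the last field on each + line says which clients that line is visible to. (The two empty fields before the location are the TTL and timestamp, left at their defaults.)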
Brad Knowles, 2002.11.09: ``Without third-party patch, [tinydns cannot] listen to more than one IP address.''
Facts: You can trivially tell tinydns to listen to another address on the same machine. Addresses can share configurations or have separate configurations; both setups are fully supported.
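For example (placeholder address), a second instance that shares the first instance's data might be set up like this:

    tinydns-conf tinydns dnslog /etc/tinydns2 192.0.2.3
    echo /etc/tinydns/root > /etc/tinydns2/env/ROOT   # serve the same data.cdb
    ln -s /etc/tinydns2 /service

Skip the ROOT change to give the second address its own data directory, and thus a separate configuration.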
Brad Knowles, 2002.11.09: ``Without third-party patch, [dnscache cannot] listen to more than one IP address.''
Facts: As with tinydns, you can trivially tell dnscache to listen to another address on the same machine.
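The sketch is the same as for tinydns, with a placeholder address:

    dnscache-conf dnscache dnslog /etc/dnscache2 192.0.2.4
    touch /etc/dnscache2/root/ip/192.0.2      # allow clients in 192.0.2.*
    ln -s /etc/dnscache2 /service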
Facts: When djbdns truncates a response, it sets the TC bit, exactly as required by the protocol.
Facts: The dynamic update protocol specifically prohibits modification of the zone structure; in particular, creation of new zones. The BIND implementation of dynamic update has the same prohibition.
Facts: dnscache doesn't ``suffer'' at all. The internal cache data structure in dnscache handles ``garbage collection and memory flushing'' for free. BIND's problems in this area are caused by BIND's naive choice of data structures, not by any inherent difficulty in the problem.
Brad Knowles, 2002.03.25: ``... local disk typically has a latency measured in terms of single digit milliseconds. Contrariwise, DNS queries that have to go out to the Internet and come back are frequently measured in terms of tens, hundreds, or even thousands of milliseconds. Therefore, you will have traded a known serious problem with local swap thrashing for an unknown and quite probably much, much more serious remote data thrashing.''
Facts: Knowles is making a really stupid mistake here. The big problem with physical disk thrashing is throughput, not latency.
Yes, a 10-millisecond disk access has an order of magnitude lower latency than a 100-millisecond DNS query resolution. However, a typical machine can do only 100 disk accesses in a second, while it can easily finish 1000 query resolutions in that same second! (Never mind the fact that, on a thrashing BIND system, the number of disk accesses exceeds the number of queries.)
The point is that disk access occupies a precious resource: the number of simultaneous disk accesses is at most the number of disks in the system. In contrast, a query resolution occupies a relatively small portion of RAM, so many queries can be resolved simultaneously.
Facts: dnscache does not hide this information. It puts this information into statistical reports in the log. The meaning of the statistics is explained in the documentation.
Brad Knowles, 2002.11.09: ``Peak 500 qps, according to TinyDNS FAQ ... Personal testing: Real-world Internet demonstrated tinydns to ~250 qps. Private servers demonstrated tinydns to ~340 qps.''
Facts: In February 2002, Matt Simerson publicly reported an upgrade from BIND to tinydns. Let's look at the numbers:
We recently converted from BIND 8 over to DJBDNS due to some major issues we were having with scaling our DNS system. BIND started having all sorts of problems when we exceeded 125,000 zones. ... On the night of the cut over we updated the Server Irons, shifted all the traffic over to the new tinydns farm (4 servers) and sat back and watched in awe as each of the tinydns instances rocketed up to between 4,000 and 6,000 connections per second. Our Network Operations team was able to report back how many connections came into the system and also how many were being returned. The fastest machine (Dual 1GHz) was pumping out the 6,000/sec and each Dual 600 was cranking out a little over 4,000. The dual 600's showed the tinydns process eating about 80% of one CPU and the dual 1GHz system showed tinydns using about 40% of one CPU. For the first time in months, we were answering every valid request that came in.
For comparison: In September 2001, Knowles publicly reported ``Nameserver performance'' results. He pointed out a tinydns server responding in hundreds of milliseconds, and a BIND server responding much more quickly. He failed to notice that the tinydns server was on a much slower network link. Clueless.
It's amusing to note that Knowles's ``personal testing,'' as summarized in his own November 2002 graphs, showed tinydns serving the 20MB .tv zone at 300 queries per second, BIND 8 at 50 queries per second, and BIND 9 at 20 queries per second. The real lesson here is not that tinydns is faster than BIND, but that Knowles has no clue how to test performance.
Facts: Matt Simerson has publicly reported a production dnscache machine handling 1.95 billion queries in 5 days (on average, 4500 queries per second) using under half of a Xeon-550. Note that caches such as dnscache have to do much more work per query than servers such as tinydns.
The real lesson here, again, is that Knowles has no clue how to test performance.
Facts: This continues to be a problem with BIND 9. When BIND starts (for example, after a reboot), it has to read and parse all your zone files. BIND can't answer queries from a zone until it has parsed the relevant zone file.
In contrast, with djbdns, the preparsed data is saved on disk, so all your data is accessible as soon as tinydns starts.
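The usual editing cycle, assuming the stock /service/tinydns layout, makes this concrete:

    cd /service/tinydns/root
    # edit data, or use ./add-host, ./add-alias, ./add-mx, ./add-ns
    make          # runs tinydns-data, atomically replacing data.cdb

tinydns answers every query straight from data.cdb, so after a reboot it only has to open that file; there is nothing to parse.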
Facts: The djbdns security blurb explains many of the reasons for djbdns's perfect security record. It includes specific features such as ``tinydns and walldns never cache information'' and metafeatures such as ``Bug-prone coding practices and libraries have been systematically identified and rejected.''
Knowles's assertions here are false, but they are not patently false. How can I prove that I never said something? The best way to see that Knowles is lying is to demand that he back up his paraphrases with something verifiable: a complete quote and a reasonably precise reference.
As another example, Knowles responded to this page, 2002.03.27: ``Given the lies we've seen from Dan in the past, I won't bother to waste my time with this page.'' Notice the pattern here: Knowles says that I'm a liar, and doesn't have a shred of evidence; I say that Knowles is a liar, and I provide extensive evidence.
Knowles has presented his own ease-of-use comparisons, showing various procedures in djbdns and procedures in BIND. He's implicitly claiming that the procedures achieve the same results. That claim is simply not true.
Many people are under the mistaken impression that ``RFC'' means ``required protocol specification.'' In fact, it would be impossible to make an Internet computer comply with every RFC. Some RFCs contradict each other! As RFC 1796 explains, most RFCs do not specify standard protocols: they specify experiments, or proposed standards, or sometimes draft standards, or they can be purely informational. Furthermore, most standard RFCs are not required protocols, and not even recommended protocols: they are usually entirely optional, and there are often good reasons that they should not be used.
In short, if Knowles suggests that failure to comply with an RFC is a problem per se, he's trying to fool you. Tip: When Knowles starts talking about protocols, ask him ``How exactly does this matter for my users?''
Brad Knowles, 2001.06.11: ``There are a number of problems with TinyDNS. For one, it does not hand out [root] referrals to questions that are asked of zones it does not control. ... I believe that this is a violation of the RFCs, at least in spirit if not in the letter.''
Brad Knowles, 2002.03.25: ``By default, tinydns does not hand out [root] referrals to questions it is asked about zones it does not control. I believe that this violates the spirt of the RFCs, if not the letter.''
Facts: tinydns provides data as instructed by the system administrator. If the system administrator gives tinydns the root addresses, tinydns will provide root referrals. Otherwise it won't.
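For example, a line like this in the data file (one per root server; the address shown is the long-standing address of a.root-servers.net) gives tinydns the root addresses, after which it answers queries for names outside its own zones with a referral to the roots:

    &.:198.41.0.4:a.root-servers.net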
Knowles is making a fool of himself when he suggests that this has anything to do with DNS interoperability. DNS clients and caches, including BIND, throw away root referrals. Why? Because servers simply aren't authorized to say where the roots are. Even if a cache has been incorrectly told to ask tinydns about somebody else's domain, the cache won't be fooled into accepting a root referral.
Knowles's hero Paul Vixie wrote on 2002.11.22 that authoritative servers should not provide root referrals: ``It's not reasonable to ask an authoritative server to have to fetch anything from anybody. It should not need a root.cache file...''
Facts: tinydns provides referrals if it is configured to do so, just like BIND. It is used on some TLD nameservers, and provides referrals accordingly, just like BIND.
Knowles's comment is as idiotic as saying ``By default, BIND does not publish any addresses, while normal nameservers spend most of their time publishing addresses.'' That's because, by default, BIND doesn't have any data to publish; you have to give it the data.
Facts: djbdns supports zone transfers, for the sites that need them. Zone transfers are disabled by default. This default is fully compliant with the RFCs.
RFC 1034 (part of the DNS standard), section 4.3.5, explicitly states that DNS data can be replicated through FTP or other protocols. It does not require zone transfers.
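For example, many djbdns installations replicate by copying the compiled data.cdb to each secondary instead of running zone transfers; a sketch, with a placeholder host name:

    rsync -az -e ssh /service/tinydns/root/data.cdb \
        secondary.example.com:/service/tinydns/root/data.cdb

rsync writes to a temporary file and renames it into place, so the secondary's tinydns never sees a half-copied database.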
Brad Knowles, 2002.11.09: ``By default, [tinydns] does not support TCP,'' and therefore ``violates RFCs.''
Facts: djbdns supports TCP, for the sites that need it. TCP is disabled by default. This default is fully compliant with the RFCs.
Saying that tinydns doesn't support TCP is missing the point. There are two cooperating programs, tinydns and axfrdns, using the same database. UDP service is handled by tinydns. TCP service is handled by axfrdns, at the sites that need it.
RFC 1123 (the Host Requirements standard), section 6.1.3.2, requires UDP service. It does not require TCP service. The situations where you need TCP service are listed in the djbdns documentation.
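A sketch of the axfrdns setup, with placeholder addresses: 192.0.2.53 is the address where tinydns already serves UDP, and 192.0.2.54 is a secondary allowed to transfer example.com.

    axfrdns-conf axfrdns dnslog /etc/axfrdns /etc/tinydns 192.0.2.53
    cd /etc/axfrdns
    echo '192.0.2.54:allow,AXFR="example.com"' > tcp
    echo ':deny' >> tcp
    make                      # compiles the tcp rules into tcp.cdb
    ln -s /etc/axfrdns /service

axfrdns then answers TCP queries, including zone-transfer requests from the clients the tcp rules allow, using the same address and the same data.cdb that tinydns serves over UDP.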
Facts: BIND pauses before sending NOTIFY. You obtain quicker, more reliable updates by setting a fast schedule for the zone-transfer client.
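A sketch of a fast pull schedule, using axfr-get (part of djbdns) and tcpclient (part of ucspi-tcp); the master address, zone, and file names are placeholders, and static-data stands for whatever locally maintained records you already have:

    #!/bin/sh
    # Pull example.com from the master once a minute and rebuild data.cdb.
    cd /service/tinydns/root || exit 1
    while sleep 60
    do
      tcpclient 192.0.2.1 53 \
        axfr-get example.com example.com.zone example.com.zone.tmp || continue
      cat static-data example.com.zone > data && make
    done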
Brad Knowles, 2002.11.09: ``[djbdns] provides strange responses to query types it does not support,'' and thus ``violates the `be liberal in what you accept, conservative in what you generate' principle.''
Facts: Clients do not send IQUERY. IQUERY is obsolete. Even the BIND company admits this. The primary use of IQUERY is by attackers trying to break into pre-8.1.2-t3b versions of BIND.
As for the be-liberal-in-what-you-accept principle: See RFC 1123 (the Host Requirements standard), section 1.2.2. The principle says that programs shouldn't crash when something unusual happens:
Software should be written to deal with every conceivable error, no matter how unlikely; sooner or later a packet will come in with that particular combination of errors and attributes, and unless the software is prepared, chaos can ensue. In general, it is best to assume that the network is filled with malevolent entities that will send in packets designed to have the worst possible effect.

The principle does not say that programs should be polite to these malevolent entities.
Facts: Under the DNS protocol, queries from clients to caches set the RD bit, and queries from caches to servers clear the RD bit. The picture is quite clearly laid out in RFC 1035 (part of the DNS standard), page 6. A query to a cache without the RD bit means that the cache is being incorrectly used as a server. Queries of this type are bogus and have no relevance to DNS interoperability. BIND answers them, which opens the door to cache snooping; dnscache discards them to help protect user privacy.
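You can see the difference with any query tool that lets you clear the RD bit; for example, with dig (the cache address is a placeholder):

    dig @192.0.2.2 www.example.com A              # RD set: a normal client query
    dig +norecurse @192.0.2.2 www.example.com A   # RD clear: cache snooping

A BIND cache answers the second query from whatever it happens to have cached; dnscache discards it.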
Brad Knowles, 2002.11.09: ``[djbdns] does not, and author's code will not, support new DNS features: DNSSEC, TSIG, IXFR, NOTIFY, EDNS0, IPv6, etc...''
Facts: IPSEC provides better security than TSIG. IPSEC is inherently easier to set up than TSIG: it has the big advantage of applying to all protocols, rather than being glued into the guts of one protocol. There are, similarly, superior alternatives to the DNS update protocol, IXFR, and NOTIFY.
EDNS0 currently doesn't accomplish anything. I'm not strongly opposed to it; there simply isn't any benefit for the users.
DNSSEC currently doesn't accomplish anything, even though it is falsely advertised as preventing forgeries. I'm not strongly opposed to it; there simply isn't any benefit for the users.
djbdns supports IPv6 records, just like records of any other type. However, making servers reachable through IPv6 currently doesn't accomplish anything. I'm not strongly opposed to it; there simply isn't any benefit for the users.