Date:      Wed, 10 Apr 2002 17:01:59 -0700
From:      Terry Lambert <tlambert2@mindspring.com>
To:        Mate Wierdl <mw@thales.memphis.edu>
Cc:        freebsd-chat@freebsd.org
Subject:   Re: qmail (Was: Maintaining Access Control Lists )
Message-ID:  <3CB4D277.51F744B8@mindspring.com>
References:  <20020403144539.A11798@thales.memphis.edu> <3CAB7860.EB8DF505@mindspring.com> <20020410163728.A25502@thales.memphis.edu>

Mate Wierdl wrote:
> It certainly can happen that the advice given in an rfc turns out to
> be ill in the wild after a time.  DNS over TCP, for example, turns out
> to be prone to DOS attacks, and is much slower.

UDP and TCP themselves are prone to DOS attacks, so you aren't saying
anything here.

The 512-byte payload limitation on UDP makes it impossible to
comply with recent DNS RFCs, RFC 1123 notwithstanding, without
explicitly supporting TCP.
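To make the UDP/TCP interplay concrete, here is a minimal sketch (not a
full resolver) of how a client decides to retry over TCP: when an answer
does not fit in the UDP payload, the server sets the TC (truncated) bit
in the 12-byte DNS header defined by RFC 1035, and the client is expected
to repeat the query over TCP.

```python
import struct

def needs_tcp_retry(response: bytes) -> bool:
    """Return True if the DNS response has the TC (truncated) bit set,
    meaning the answer did not fit in the UDP payload and the client
    should retry the same query over TCP."""
    if len(response) < 12:
        raise ValueError("short DNS header")
    flags = struct.unpack("!H", response[2:4])[0]
    return bool(flags & 0x0200)  # TC is bit 9 of the flags word

# Fabricated 12-byte headers for illustration: ID=0x1234, QR=1 (response),
# one question, no answer records; the first has TC set, the second does not.
truncated = struct.pack("!HHHHHH", 0x1234, 0x8200, 1, 0, 0, 0)
complete  = struct.pack("!HHHHHH", 0x1234, 0x8000, 1, 0, 0, 0)
print(needs_tcp_retry(truncated))  # True
print(needs_tcp_retry(complete))   # False
```

A server that never answers TCP leaves such a client with a truncated
answer and no recourse, which is the compliance problem above.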


> Now djbdns certainly implements DNS over TCP---it just leaves it up to
> the admin to, in fact, enable it as the need arises.  The argument
> boils down to what is enabled by default.  Security, performance,
> requirements registry, and knowing the size of my records dictates no
> or limited TCP service availability.

Practically, security, in the form of firewall response handling
for requests sent out, means that responses transiting the firewall
need to be verifiably the result of requests sent from local
machines, which is not possible with UDP.
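The point can be illustrated with a sketch of the "pseudo-state" a
stateful firewall has to invent for UDP DNS (hypothetical function names,
not any real firewall's API): record each outbound query, and admit an
inbound packet only if it matches a recorded one.  Every field matched
here travels in the clear and can be forged, so the match is a heuristic,
not a verification.

```python
# Pseudo-state table: (remote addr, remote port, local addr, local port, ID)
outstanding = set()

def note_outbound(local_addr, local_port, remote_addr, remote_port, query_id):
    """Record a DNS query leaving the protected network."""
    outstanding.add((remote_addr, remote_port, local_addr, local_port, query_id))

def admit_inbound(src_addr, src_port, dst_addr, dst_port, query_id):
    """Admit a packet only if it looks like a response to a recorded query."""
    key = (src_addr, src_port, dst_addr, dst_port, query_id)
    if key in outstanding:
        outstanding.discard(key)  # allow at most one response per request
        return True
    return False

note_outbound("10.0.0.5", 40000, "192.0.2.1", 53, 0x1234)
print(admit_inbound("192.0.2.1", 53, "10.0.0.5", 40000, 0x1234))  # True
print(admit_inbound("192.0.2.1", 53, "10.0.0.5", 40000, 0x1234))  # False (consumed)
```

With TCP the kernel's connection state does this matching (and sequence
numbers raise the forgery bar); with UDP the firewall can only guess.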


> > > Realizing the disadvantages of "axfr", the djbdns package allows the
> > > sysadmin to use other, more secure, reliable and readily available
> > > tools _in addition to_ "axfr".  What is wrong with this flexibility?
> >
> > Firewalls.
> 
> I do not follow: so just keep axfr, and get rid of the additional
> possibilities?

The other tools use ports other than the DNS port.  A hole in the
firewall that permits DNS traffic will not permit these out-of-band
mechanisms.  And the out-of-band mechanisms can't share the same
hole without a different IP/port pairing, since a server cannot
bind to a port that another server already occupies without a MUX
of some sort.
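The bind restriction is easy to demonstrate; this quick sketch shows why
an out-of-band transfer mechanism cannot simply reuse the DNS server's
port on the same address:

```python
import socket

# First "server" binds an address/port pair (port 0 lets the OS pick a free one).
a = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
a.bind(("127.0.0.1", 0))
port = a.getsockname()[1]

# A second "server" trying to bind the same pair is refused by the kernel.
b = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
try:
    b.bind(("127.0.0.1", port))
    print("second bind succeeded (unexpected)")
except OSError:
    print("second bind refused: address already in use")
finally:
    b.close()
    a.close()
```

Hence either a different IP/port pairing, or a MUX process that owns the
port and dispatches by protocol, is required.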


> > > Which rfc describing the DNS standards requires NOTIFY?
> >
> > RFC 1996.
> 
> I am not clear on this (probably I did not ask the question clearly):
> does rfc 1996 mandate the implementation of NOTIFY for servers?

It's standards track.  Compliance with RFC 1996 mandates implementation
of NOTIFY for servers.


> > The issue isn't about the amount of data that has to be
> > transferred, it's about the stall barrier, when any data
> > has to be reloaded.
> 
> In case of tinydns, there is no data to be reloaded: data is stored in
> a file.  Is "reloading data" defined to be the same as "looking up a
> record from a file"?  And pushing the new data to the slave happens
> immediately after the update on the master.

What about *during* the push?  You add latency to the time it
takes for a change to take effect.  For a practical application,
e.g. an "ETRN" from a dial-on-demand, transiently connected mail
server, this implies an intentional latency between the "demand"
(the contacting of the remote mail server for the purposes of
"ETRN", which is what causes the link to be brought up) and the
execution of the remainder of the act that initiated the "demand".
In other words, you have to insert an arbitrary delay of 2*N + 1,
where N is the latency in making the transient DNS record for the
dial-on-demand mail server's dynamically assigned IP address
visible to the DNS server serving the remote mail server.

"Immediately after" is insufficient.  It *must* happen "before the
client's request for the operation is acknowledged to the client".
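The required ordering can be sketched in a few lines (hypothetical names,
not tinydns or BIND code): the master applies a change and pushes it to
every slave *before* the client that requested the change sees an
acknowledgement.

```python
def update_record(master, slaves, name, value, push):
    """Apply a record change, replicate it synchronously, then ack."""
    master[name] = value
    for slave in slaves:
        push(slave, name, value)  # synchronous: raises on failure, so no ack is sent
    return "OK"                   # the client sees "OK" only after replication

# Toy in-memory "servers" and a push function for illustration:
master, slave1, slave2 = {}, {}, {}
ack = update_record(master, [slave1, slave2],
                    "mail.example.com", "192.0.2.7",
                    push=lambda s, n, v: s.__setitem__(n, v))
print(ack, slave1["mail.example.com"])  # OK 192.0.2.7
```

Pushing "immediately after" acking inverts the last two steps, and that
inversion is exactly the window the ETRN example falls into.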

And, no, if one of your arguments is "performance", as it was above,
then "reloading data" is *not* defined as "looking up a record from
a file".

> > Here's my argument:
> >
> >       "All DNS data transfers should take place over the
> >        DNS protocol."
> 
> Well, this requirement results in complexity, and lots of reinventing
> the wheel.

Oh well.  Here's my favorite reformulation of Occam's Razor:

	"Anything that works is better than anything that doesn't"

> In case of tinydns, transferring data is equivalent to transferring
> a file.  Perhaps you suggest that the transfer should take place over
> the DNS protocol because of firewall considerations.

Yes.

> But exactly the
> added complexity will necessarily result in security problems.

I have yet to see a proof of this assumption that complexity is
a sufficient (or even necessary) condition for insecurity.


> Your requirement seems to be potentially an enormous burden.

I guess that means that not just any high school kid can write a
DNS server, then.  I'm willing to accept that restriction.


> I suppose you agree that the security tools associated with DNS
> data transfer should also be implemented inside the DNS protocol.

Yes, or in the underlying transport mechanism.

> But how about requirements that are needed for these security tools
> to work?  For example, for TSIG to work, you need to synchronize
> time between the master and slave.

That's because it's badly designed.  TSIG has an exploit window
in the timeout on the signature.

NFS has similar timing vulnerabilities, which normally manifest
as problems between clients and servers, rather than as attempts
to exploit them: they are user-visible without a cracker.  The NFS
problems could be instantly resolved by sending the local idea
of the current time with every request that takes time information,
allowing the remote system to calculate the time as a delta
relative to its local clock, without losing the ability to set
specific times (with a client local time of 0, the delta becomes
an absolute on the server).  No more needing to synchronize clocks
for file locking or "make" to work.
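The delta-time scheme described above can be sketched as follows
(hypothetical function names, not the NFS wire protocol): the client
sends its own clock reading alongside any timestamp, and the server
rebases the timestamp against its local clock, so the two machines never
need synchronized clocks.

```python
def rebase_timestamp(client_clock, client_stamp, server_clock):
    """Convert a client-supplied timestamp into server time by treating
    it as a delta from the client's own clock.  A client clock of 0
    marks the timestamp as absolute, per the convention in the text."""
    if client_clock == 0:
        return client_stamp                      # absolute time, passed through
    return server_clock + (client_stamp - client_clock)

# Client clock reads 100, server clock reads 500 (400s of skew):
print(rebase_timestamp(100, 100, 500))  # 500  -> "now" maps to the server's "now"
print(rebase_timestamp(100,  40, 500))  # 440  -> "60s ago" stays "60s ago"
print(rebase_timestamp(0,  1234, 500))  # 1234 -> absolute time, unchanged
```

Nothing in the scheme depends on either clock being correct, only on
each machine reading its own clock consistently, which is why it removes
the synchronization requirement.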


> Should this time synchronization be done over the DNS protocol?
> After all, the slave can be behind a firewall...

You appear to be objecting to an implementation detail of something
which I agree is badly designed, and then trying to conclude from
it that there is no possible good design that would address the
problem without having to use your approach.


> And then there is DNSSEC.  It seems to be so complex that it may
> defeat its own purpose: improve security.  For example, at
> 
> http://www.oreilly.com/catalog/dns4/chapter/ch11.html
> 
> I read:
> 
>   We realize that DNSSEC is a bit, er, daunting. (We nearly fainted
>   the first time we saw it.)
> 
> Indeed, even without DNSSEC, apparently 24% of .com servers have
> misconfigured delegations.

That's mostly because servers with misconfigured delegations
aren't automatically considered non-authoritative (effectively
diking them out of the internet).  Having your servers diked off
the internet is a wondrous incentive toward correctness.  It
even works for SPAM.


> > > In case of a trusting slave, though, rsync will push the changes over
> > > to the slave as soon as they happen on the master.
> >
> > Plus the notice latency on the master, as the changes are polled,
> > rather than event-triggered, plus the notice latency on the slave,
> > for the same reasons.
> >
> 
> Does not happen between two tinydns servers, though.  The problem I
> see is that of authentication: the recommended scheme for pushing new
> data from a tinydns master to a tinydns slave assumes that the slave
> trusts the master.  What if the master is taken over by an attacker?

Then you are screwed.

Let's drop the master/slave relationship, and ask the same question:
what if one of your DNS servers is taken over by an attacker?

Then you are screwed.


> > > Which currently in use client sends IQUERY?  What does the 01/2002 draft
> > >
> > > http://www.ietf.org/internet-drafts/draft-ietf-dnsext-obsolete-iquery-03.txt
> > >
> > > say about IQUERY?
> >
> > Whatever it says is irrelevant, until it is Standards Track.
> 
> It is not irrelevant because it does hint at the problems with IQUERY,
> and at the fact that clients do not send IQUERY anymore.  Hence it is
> unlikely that users will suffer from this lack of compliance.

It is better to comply with a bad standard than to be in limbo
between standards.

FreeBSD pthreads were incredibly screwed up for a while, when they
were being moved from Draft 4 compliance to final standard compliance;
as Draft 4, they were usable; as standard, they would also be usable.
But in between... it was nearly impossible to make them work.

In the realm of the internet, the moral equivalent is compliance
with standards documents: without compliance, you lose all features
which derive from interoperability, and if that interoperability
failure is between clients and servers, rather than between
servers, you lose everything.

-- Terry




