Date:      Sun, 8 Jul 2001 22:50:14 -0700
From:      Dragos Ruiu <dr@kyx.net>
To:        Mike Silbersack <silby@silby.com>
Cc:        <cjclark@alum.mit.edu>, Darren Reed <avalon@coombs.anu.edu.au>, Yonatan Bokovza <Yonatan@xpert.com>, "'freebsd-security@freebsd.org'" <freebsd-security@FreeBSD.ORG>
Subject:   Re: FW: Small TCP packets == very large overhead == DoS?
Message-ID:  <0107082333531I.08020@smp.kyx.net>
In-Reply-To: <20010708213736.C26132-100000@achilles.silby.com>
References:  <20010708213736.C26132-100000@achilles.silby.com>

On Sun, 08 Jul 2001, Mike Silbersack wrote:
> There's nothing wrong with questioning the correctness of RFCs.  They
> were, after all, written by ordinary mortals like everyone in this
> discussion.

Certainly agreed.  But I think the right way to do this is to change the
RFC through the IETF so that all the implementations stay consistent, rather
than building yet another non-standard implementation of the standard.
(Insert Tanenbaum's standards quote: "The nice thing about standards is that
there are so many to choose from." :-)
 
> Maybe 256 is too high, perhaps 128 would be more reasonable.  64 seems way
> too small in any case.

I'm of a differing opinion.  Using standard 20-byte headers instead of the more
fanciful maximal IP option headers, this gives 44 bytes of payload.  I recall
one standards group that argued for months about packet-size issues,
where for some applications the representatives argued that 64 bytes was
too large a packet size (that particular debate was over 32- versus 64-byte
cells, and oddly enough they settled on 48 for no particular reason other
than to stop arguing :-).
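The overhead argument behind the proposed MSS floor is simple arithmetic; a
rough sketch (my own illustration, not from the thread) of how much of each
packet is eaten by plain 20-byte IP and TCP headers at the payload sizes
mentioned above:

```python
# Header overhead for small TCP segments (illustrative arithmetic only).
# Assumes plain 20-byte IP and 20-byte TCP headers, no options.
IP_HDR = 20
TCP_HDR = 20

def header_overhead(mss):
    """Fraction of the on-wire packet consumed by headers for a given payload size."""
    total = mss + IP_HDR + TCP_HDR
    return (IP_HDR + TCP_HDR) / total

for mss in (44, 64, 128, 256):
    print(f"MSS {mss:4d}: {header_overhead(mss):.0%} header overhead")
# MSS   44: 48% header overhead
# MSS   64: 38% header overhead
# MSS  128: 24% header overhead
# MSS  256: 14% header overhead
```

At 44-byte payloads nearly half of every packet is header, which is the
per-packet cost the proposed minimum is trying to bound.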

> Your argument about latency isn't relevant here.  If you were writing a
> latency-sensitive app, you wouldn't be running tcp.  Also, as I understand
> it, we're setting a minimum on the maximum, not a minimum on the minimum.

Why wouldn't I want to use an error-corrected channel for a latency-sensitive
app?  I have written an application that needs this.  It is exactly
"optimizations" like these that drive streaming implementers to reinvent
the wheel and develop their own error-correction protocols on top of UDP.
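Latency-sensitive apps over TCP are a real pattern: the usual first step is
disabling Nagle so small writes go out immediately, which is exactly the kind
of small-segment traffic an MSS floor penalizes.  A minimal sketch (my own
example, not from the thread):

```python
# Sketch: a latency-sensitive sender that still wants TCP's reliable,
# ordered delivery.  TCP_NODELAY disables Nagle's algorithm, so small
# writes are sent immediately instead of being coalesced into larger
# segments -- trading some per-packet overhead for lower latency.
import socket

def make_low_latency_socket():
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
    return s
```

Each small application write then becomes its own small segment on the wire,
which is precisely the traffic pattern at issue in this thread.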

IMHO it's actually a minimum.  If I understand the application of MSS (which
I feel I do), it determines the packet sizes the OS will segment data into to
convey it efficiently across the channel, which works out to being pretty much
the minimum packet size the OS will segment streams into.  Please correct me
if I have made an error, as I too am merely human.
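For reference, the socket-level knob involved here is TCP_MAXSEG, which caps
the segment size a connection will use; a quick way to inspect the stack's
value (my own sketch; the default returned on an unconnected socket varies by
OS):

```python
# Query the stack's TCP maximum segment size for a socket.  TCP_MAXSEG is
# a ceiling on segment size, not a floor; on an unconnected socket most
# stacks report their default (e.g. 536 on Linux).
import socket

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
mss = s.getsockopt(socket.IPPROTO_TCP, socket.TCP_MAXSEG)
print(f"default MSS reported by the stack: {mss}")
s.close()
```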

But that's just my two cents; you gentlemen are free to implement what you
choose, and I'm free to move my encrypted streaming app to whatever
OSes I choose, or to force my users to patch kernels accordingly.
It's just less messy for me if this isn't changed.  In practice it may not
even be a big deal, but again, I question the logic of changing the
implementations to forestall a hypothetical attack of marginal effect on
another OS.

cheers,
--dr
