From owner-freebsd-hackers Sun Jan 11 18:15:04 1998
Return-Path:
Received: (from majordom@localhost)
        by hub.freebsd.org (8.8.7/8.8.7) id SAA07202
        for hackers-outgoing; Sun, 11 Jan 1998 18:15:04 -0800 (PST)
        (envelope-from owner-freebsd-hackers@FreeBSD.ORG)
Received: from scanner.worldgate.com (scanner.worldgate.com [198.161.84.3])
        by hub.freebsd.org (8.8.7/8.8.7) with ESMTP id SAA07186
        for ; Sun, 11 Jan 1998 18:14:54 -0800 (PST)
        (envelope-from marcs@znep.com)
Received: from znep.com (uucp@localhost)
        by scanner.worldgate.com (8.8.7/8.8.7) with UUCP id TAA21807;
        Sun, 11 Jan 1998 19:14:23 -0700 (MST)
Received: from localhost (marcs@localhost)
        by alive.znep.com (8.7.5/8.7.3) with SMTP id TAA14037;
        Sun, 11 Jan 1998 19:12:45 -0700 (MST)
Date: Sun, 11 Jan 1998 19:12:45 -0700 (MST)
From: Marc Slemko
To: David Greenman
cc: hackers@FreeBSD.ORG
Subject: Re: why 100 byte TCP segments?
In-Reply-To: <199801120035.QAA04285@implode.root.com>
Message-ID:
MIME-Version: 1.0
Content-Type: TEXT/PLAIN; charset=US-ASCII
Sender: owner-freebsd-hackers@FreeBSD.ORG
Precedence: bulk

On Sun, 11 Jan 1998, David Greenman wrote:

> >Yup, this seems to happen only with data between one and two mbuffs
> >in size, ie. 101-207 bytes or so.  Smaller and it fits in one
> >packet, between 208 and the MTU it is properly sent in one packet.
>
>    It's a known problem with the socket code that has existed in BSD
> forever.  I think Garrett put a work-around in by fudging some
> thresholds, but I may be mistaken.

There is:

wollman     96/01/05 13:41:56

  Modified:    sys/kern  uipc_socket2.c
  Log:
  Eliminate the dramatic TCP performance decrease observed for writes in
  the range [210:260] by sweeping the problem under the rug.  This change
  has the following effects:

  1) A new MIB variable in the kern branch is defined to allow modification
     of the socket buffer layer's ``wastage factor'' (which determines how
     much unused-but-allocated space in mbufs and mbuf clusters is allowed
     in a socket buffer).

  2) The default value of the wastage factor is changed from 2 to 8.  The
     original value was chosen when MINCLSIZE was 7*MLEN (!), and is not
     appropriate for an environment where MINCLSIZE is much less.

  The real solution to this problem is to scrap both mbufs and sockbufs
  and completely redesign the buffering mechanism used at both levels.

  Revision  Changes    Path
  1.8       +6 -2      src/sys/kern/uipc_socket2.c

but that doesn't appear to do anything related to the problem with this
particular range of sizes.

Good thing most HTTP requests are too bloated to matter.  Well, and many
clients probably disable Nagle, which doesn't help slow start but does
help persistent connections.
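
For reference, ``disabling Nagle'' on the client side is just the standard
TCP_NODELAY socket option set on the connected socket.  A minimal sketch,
using nothing beyond the plain sockets API (error handling left to the
caller):

#include <sys/socket.h>
#include <netinet/in.h>
#include <netinet/tcp.h>

/*
 * Turn off the Nagle algorithm on an already-connected TCP socket,
 * which is what clients that "disable Nagle" are doing.
 */
int
set_nodelay(int sock)
{
        int on = 1;

        return setsockopt(sock, IPPROTO_TCP, TCP_NODELAY, &on, sizeof(on));
}

With that set, small writes go out immediately instead of waiting for the
previous segment to be acknowledged, which is why it helps request latency
on persistent connections but does nothing for slow start.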
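
As for the 101-207 byte range at the top of the thread: that window is
exactly ``too big for one packet-header mbuf, too small for a cluster''.
Below is a rough sketch of the size classes, assuming the traditional
MSIZE=128 layout (MHLEN around 100, MLEN around 108, MINCLSIZE = MHLEN +
MLEN = 208); it illustrates the thresholds only and is not the actual
sosend() code.

#include <stdio.h>

/* Assumed historical constants; check sys/sys/mbuf.h for the real values. */
#define MHLEN     100                 /* data bytes in a packet-header mbuf */
#define MLEN      108                 /* data bytes in an ordinary mbuf */
#define MINCLSIZE (MHLEN + MLEN)      /* 208: smallest write that gets a cluster */

/* Report which mbuf layout a write of the given size would land in. */
static void
classify(int len)
{
        if (len <= MHLEN)
                printf("%3d bytes: one packet-header mbuf\n", len);
        else if (len < MINCLSIZE)
                printf("%3d bytes: packet-header mbuf + ordinary mbuf "
                    "(the problem range)\n", len);
        else
                printf("%3d bytes: copied into an mbuf cluster\n", len);
}

int
main(void)
{
        int sizes[] = { 80, 100, 101, 150, 207, 208, 512 };
        int i;

        for (i = 0; i < (int)(sizeof(sizes) / sizeof(sizes[0])); i++)
                classify(sizes[i]);
        return 0;
}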
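
And for what the quoted commit's ``wastage factor'' actually bounds: the
socket buffer charges allocated mbuf storage (sb_mbcnt, limited by
sb_mbmax) separately from data bytes (sb_cc, limited by sb_hiwat), with
sb_mbmax roughly the high-water mark times the wastage factor.  The
back-of-the-envelope sketch below uses assumed example numbers (a 16k send
buffer, 2048-byte clusters, 128-byte mbufs) and is not the real
sbspace()/sbreserve() code; it only shows why a write size barely over
MINCLSIZE, which pins a mostly empty cluster per write, starved the buffer
with a factor of 2 and largely stops doing so at 8.

#include <stdio.h>

#define MSIZE    128            /* storage of one small mbuf (assumed) */
#define MCLBYTES 2048           /* storage of one mbuf cluster (assumed) */
#define SB_HIWAT 16384          /* example send buffer high-water mark */

int
main(void)
{
        /*
         * A 230-byte write is just over MINCLSIZE (208), so it is copied
         * into a whole cluster: 2048 + 128 bytes of storage pinned for
         * only 230 bytes of data.
         */
        const long data_per_write = 230;
        const long store_per_write = MCLBYTES + MSIZE;
        int factors[] = { 2, 8 };
        int i;

        for (i = 0; i < 2; i++) {
                long mbmax = (long)SB_HIWAT * factors[i];   /* ~ sb_mbmax */
                long by_data  = SB_HIWAT / data_per_write;  /* writes until sb_hiwat */
                long by_store = mbmax / store_per_write;    /* writes until sb_mbmax */
                long writes = by_data < by_store ? by_data : by_store;

                printf("wastage factor %d: sender blocks after %ld writes "
                    "(%ld of %d data bytes buffered)\n",
                    factors[i], writes, writes * data_per_write, SB_HIWAT);
        }
        return 0;
}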