From: Andrew Gallatin <gallatin@cs.duke.edu>
Date: Tue, 20 Jan 2004 16:25:56 -0500 (EST)
To: David Borman
cc: freebsd-net@freebsd.org
cc: Andre Oppermann
Subject: Re: tcp mss MCLBYTES restriction
Message-ID: <16397.40164.341384.651639@grasshopper.cs.duke.edu>
List-Id: Networking and TCP/IP with FreeBSD

David Borman writes:
 > On the sending side, you'll tend to get your best performance when the
 > socket buffer is a multiple of the amount of TCP data per packet, and
 > the user's writes are a multiple of the socket buffer.  This keeps
 > everything neatly aligned, minimizing the number of data copies that
 > need to be done, and improving the chance of doing page flips.

Yes, this was very handy when doing the zero-copy receives.

 > Rounding down a 1500 byte ethernet packet to a 1K boundary loses too
 > much data, but for larger MTUs, the win of keeping everything neatly
 > aligned can exceed the cost of not packing each packet with the maximum
 > amount of data.  Since applications that are writing large amounts of
 > data to a socket will tend to be using buffers aligned on a K boundary,
 > using a K-aligned amount of TCP data increases the chances that
 > everything stays aligned.

Good point.  But how would you feel about making it optional, with it
defaulting as it is now?  There are special cases.  For example, I
think it's killing me on an experimental network interface which
stripes data across 2 links.

Drew