From owner-freebsd-hackers@FreeBSD.ORG Thu Apr 21 22:06:33 2005
Return-Path:
Delivered-To: freebsd-hackers@freebsd.org
Received: from mx1.FreeBSD.org (mx1.freebsd.org [216.136.204.125]) by hub.freebsd.org (Postfix) with ESMTP id 1A7BB16A4CE; Thu, 21 Apr 2005 22:06:33 +0000 (GMT)
Received: from xor.cs.umd.edu (xor.cs.umd.edu [128.8.128.118]) by mx1.FreeBSD.org (Postfix) with ESMTP id 88B9543D39; Thu, 21 Apr 2005 22:06:32 +0000 (GMT) (envelope-from capveg@cs.umd.edu)
Received: (from capveg@localhost) by xor.cs.umd.edu (8.12.10/8.12.5) id j3LM6Vwh005980; Thu, 21 Apr 2005 18:06:31 -0400 (EDT)
Date: Thu, 21 Apr 2005 18:06:31 -0400
From: Rob
To: Andre Oppermann
Message-ID: <20050421220631.GP14341@xor.cs.umd.edu>
References: <20050421204726.GK14341@xor.cs.umd.edu> <42681699.8040904@freebsd.org>
Mime-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <42681699.8040904@freebsd.org>
User-Agent: Mutt/1.4.1i
cc: freebsd-hackers@freebsd.org
cc: Rob
Subject: Re: FreeBSD Network Implementation Question
X-BeenThere: freebsd-hackers@freebsd.org
X-Mailman-Version: 2.1.1
Precedence: list
List-Id: Technical Discussions relating to FreeBSD
X-List-Received-Date: Thu, 21 Apr 2005 22:06:33 -0000

> You figured it out correctly. However at that moment TCP flow control
> would kick in and save you from local packet loss so to say.

Hi,

Thanks for the response, but you have actually confused me more. It is my understanding that TCP doesn't have flow control local to the node; it has congestion control, which operates end-to-end across the network. So it is entirely possible to drop packets locally in this way on a high-bandwidth, high-latency (so-called "long fat") connection.
For example, on a gigabit-per-second link with a latency of 100 milliseconds RTT and window scaling set to 14 (the maximum), TCP could in theory open its congestion window up to 2^16 * 2^14, or 2^30 bytes, which could be ACK'd more quickly than the net.inet.ip.intr_queue_max queue would allow for, causing packets to be dropped locally. Basically, the bandwidth-delay product dictates how large the buffer/queue should be, and in the above (extreme) example that is 0.1 s * 1 Gb/s = 12.5 MB, which is far larger than the 50 packets of 1500 bytes each (75 KB) that you get with net.inet.ip.intr_queue_max=50.

In other words, this is the reason for the net.inet.ip.intr_queue_drops counter, right?

I'm surprised that more of the tuning guides don't suggest increasing net.inet.ip.intr_queue_max to a higher value - am I missing something? The equivalent setting in Linux is 1000, and in Windows 2000 it appears to be 1500 (not that this should necessarily be taken as any sort of endorsement).

If my understanding is incorrect, please let me know. In any case, thanks for the help (and thanks to those who have replied off list).

- Rob