From owner-freebsd-net Thu Oct 10 11: 4:55 2002
Delivered-To: freebsd-net@freebsd.org
Received: from mx1.FreeBSD.org (mx1.freebsd.org [216.136.204.125])
	by hub.freebsd.org (Postfix) with ESMTP id 7463037B401
	for ; Thu, 10 Oct 2002 11:04:54 -0700 (PDT)
Received: from kali.avantgo.com (shadow.avantgo.com [64.157.226.66])
	by mx1.FreeBSD.org (Postfix) with ESMTP id 33DF443EAC
	for ; Thu, 10 Oct 2002 11:04:54 -0700 (PDT)
	(envelope-from scott@avantgo.com)
Received: from river.avantgo.com ([10.11.30.114]) by kali.avantgo.com
	with Microsoft SMTPSVC(5.0.2195.3779); Thu, 10 Oct 2002 11:04:54 -0700
Date: Thu, 10 Oct 2002 11:04:49 -0700 (PDT)
From: Scott Hess <scott@avantgo.com>
To: freebsd-net@freebsd.org
Subject: net.inet.tcp.inflight_enable and dynamic mbuf caps.
Message-ID:
MIME-Version: 1.0
Content-Type: TEXT/PLAIN; charset=US-ASCII
X-OriginalArrivalTime: 10 Oct 2002 18:04:54.0131 (UTC) FILETIME=[89022430:01C27087]
Sender: owner-freebsd-net@FreeBSD.ORG
Precedence: bulk
List-ID:
List-Archive: (Web Archive)
List-Help: (List Instructions)
List-Subscribe:
List-Unsubscribe:
X-Loop: FreeBSD.org

The 4.7 release notes have:

    "The tcp(4) protocol now has the ability to dynamically limit the
    send-side window to maximize bandwidth and minimize round trip times.
    The feature can be enabled via the net.inet.tcp.inflight_enable
    sysctl."

I recall being interested in this back when it was being discussed on the
lists.  We have a set of intelligent proxies which mediate between
connections from the wild and the server cluster.  One issue we had was
that we wanted big send buffers, because the kernel should be able to
manage the buffering much better than userland can.  Unfortunately, we
also have to limit the send buffer size to reduce DoS issues.

What would be nice is if the send buffer size could be dynamically tuned
so that the overall mbuf usage asymptotically approaches some configured
value.  For instance, if 75% of the mbuf space is used, then the maximum
send buffer could be scaled to 25% of the normal size.  [I'm not certain
how to derive better numbers, so that's a guess.]

Basically, what I'm suggesting is that until, say, 25% of the mbufs are
used, each connection can get 100% of the configured send buffer; then
from 25%-50% used it can only get 75%, from 50%-75% used only 50%, and so
on.  You never _quite_ get to the point where you can't make new
connections due to mbuf starvation, but you might end up with a very
constricted pipe.  [It would be slicker to do this on a per-host or
per-network basis, but that would probably be significantly more complex.]

Thanks,
scott

To Unsubscribe: send mail to majordomo@FreeBSD.org
with "unsubscribe freebsd-net" in the body of the message
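
[A minimal sketch of the tiered scaling Scott describes above, written as a
small userland C program.  The names (scaled_sndbuf, mbuf_usage_pct, sb_max)
and the 25% steps are illustrative assumptions taken from the message, not an
existing kernel interface.]

    /*
     * Illustrative sketch only: each additional 25% of mbuf space in use
     * knocks another quarter off the per-connection send buffer cap, so
     * the cap shrinks as the pool fills but never reaches zero.
     */
    #include <stdio.h>

    static unsigned long
    scaled_sndbuf(unsigned long sb_max, unsigned int mbuf_usage_pct)
    {
            /* 0-24% used -> 100%, 25-49% -> 75%, 50-74% -> 50%, 75%+ -> 25% */
            unsigned int tier = mbuf_usage_pct / 25;

            if (tier > 3)
                    tier = 3;
            return (sb_max * (4 - tier) / 4);
    }

    int
    main(void)
    {
            unsigned long sb_max = 256 * 1024;  /* configured per-connection cap */
            unsigned int pct;

            for (pct = 0; pct <= 100; pct += 10)
                    printf("%3u%% mbufs used -> sndbuf cap %lu\n",
                        pct, scaled_sndbuf(sb_max, pct));
            return (0);
    }

[Under these assumptions a connection opened with the mbuf pool 60% full
would be capped at half the configured send buffer; a smoother scaling
function could replace the step function without changing the idea.]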