Date: Thu, 10 Oct 2002 11:04:49 -0700 (PDT)
From: Scott Hess <scott@avantgo.com>
To: freebsd-net@freebsd.org
Subject: net.inet.tcp.inflight_enable and dynamic mbuf caps.
Message-ID: <Pine.LNX.4.44.0210101053410.6270-100000@river.avantgo.com>
The 4.7 release notes have: "The tcp(4) protocol now has the ability
to dynamically limit the send-side window to maximize bandwidth and
minimize round trip times. The feature can be enabled via the
net.inet.tcp.inflight_enable sysctl." I recall being interested in
this back when it was being discussed on the lists.

We have a set of intelligent proxies which mediate between connections
from the wild and the server cluster. One issue we had was that we
wanted big send buffers, because the kernel should be able to manage
the buffering much better than userland can. Unfortunately, we also
have to limit the send buffer size to reduce DoS exposure.

What would be nice is if the send buffer size could be dynamically
tuned so that overall mbuf usage asymptotically approaches some
configured value. For instance, if 75% of the mbuf space is in use,
the maximum send buffer could be scaled to 25% of its normal size.
[I'm not certain how to derive better numbers, so that's a guess.]

Basically, what I'm suggesting is that until, say, 25% of the mbufs
are used, each connection can get 100% of the configured send buffer;
from 25%-50% used it can only get 75%; from 50%-75% used only 50%; and
so on. You never _quite_ get to the point where you can't make new
connections due to mbuf starvation, but you might end up with a very
constricted pipe. [It would be slicker to do this in a per-host or
per-network way, but that would probably be significantly more
complex.]

Thanks,
scott