Date: Mon, 11 Sep 2006 01:21:14 -0500 (CDT)
From: Mike Silbersack <silby@silby.com>
To: Ruslan Ermilov
Cc: cvs-src@FreeBSD.org, Gleb Smirnoff, cvs-all@FreeBSD.org, src-committers@FreeBSD.org
Subject: Re: cvs commit: src/sys/netinet in_pcb.c tcp_subr.c tcp_timer.c tcp_var.h
Message-ID: <20060911005435.A23530@odysseus.silby.com>
In-Reply-To: <20060906150506.GA7069@rambler-co.ru>
References: <200609061356.k86DuZ0w016069@repoman.freebsd.org> <20060906091204.B6691@odysseus.silby.com> <20060906143204.GQ40020@FreeBSD.org> <20060906093553.L6691@odysseus.silby.com> <20060906150506.GA7069@rambler-co.ru>

Ok, I started looking through the mess that is in_pcb.c, and I came up with
a simpler idea than trying to improve upon my old heuristic.

What if we just build on what Gleb did in revision 1.256 and change the
size of the tcptw zone?  Instead of scaling it to maxsockets / 5, let's
scale it to max((ipport_lastauto - ipport_firstauto) / 2, 500).  We'll have
to rescale it whenever the port ranges are changed, but those sysctls are
already handled by a function, so it'll be easy.

This means that we'll be keeping around fewer time_wait sockets than we do
at present, but I don't think that's a big problem for anyone.  On the
positive side, it means that time_wait sockets can't starve out ephemeral
ports unless active connections are already using more than 50% of the
port range.

One slightly more complex solution would be to use one tcptw bucket for
connections with local ports >= 1024 and a separate bucket for connections
with local ports < 1024.  Assuming that our front-end web proxy answers on
ports < 1024, that would ensure that we keep one pool of time_wait sockets
for our connections from clients and another pool for our connections to
the backend web servers.  I guess that would be slightly more "correct".

What do you guys think?

Mike "Silby" Silbersack
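
P.S.  For concreteness, here's a rough, untested sketch of the rescale I'm
describing.  tcptw_zone_rescale() is a made-up name, and the call would
have to be wired into the existing net.inet.ip.portrange sysctl handler in
in_pcb.c (modulo making the zone visible outside tcp_subr.c):

    #include <sys/param.h>
    #include <vm/uma.h>

    extern uma_zone_t tcptw_zone;	/* the time_wait zone, tcp_subr.c */
    extern int ipport_firstauto;	/* ephemeral range start, in_pcb.c */
    extern int ipport_lastauto;		/* ephemeral range end, in_pcb.c */

    /*
     * Hypothetical helper: clamp the tcptw zone to half the ephemeral
     * port range, with a floor of 500 entries, so time_wait sockets
     * can never hold more than 50% of the auto-assignable ports.
     * Called from the portrange sysctl handler after a range change.
     */
    static void
    tcptw_zone_rescale(void)
    {
    	int limit;

    	limit = (ipport_lastauto - ipport_firstauto) / 2;
    	if (limit < 500)
    		limit = 500;
    	uma_zone_set_max(tcptw_zone, limit);
    }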
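
P.P.S.  The two-bucket variant would amount to two zones plus a selector
keyed on the local port, something like the sketch below (again, both zone
names and the helper are invented):

    #include <netinet/in.h>
    #include <netinet/in_pcb.h>
    #include <vm/uma.h>

    static uma_zone_t tcptw_zone_resv;	/* local port < 1024 */
    static uma_zone_t tcptw_zone_eph;	/* local port >= 1024 */

    /*
     * Pick the time_wait zone for a connection by its local port, so
     * time_wait sockets from client-facing (reserved-port) connections
     * and from backend (ephemeral-port) connections draw from separate
     * pools and can't starve each other.  inp_lport is stored in
     * network byte order, hence the ntohs().
     */
    static uma_zone_t
    tcptw_zone_for(struct inpcb *inp)
    {
    	return (ntohs(inp->inp_lport) < 1024 ?
    	    tcptw_zone_resv : tcptw_zone_eph);
    }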