From owner-freebsd-ipfw@FreeBSD.ORG Sun Nov 7 10:36:38 2004
Return-Path:
Delivered-To: freebsd-ipfw@freebsd.org
Received: from mx1.FreeBSD.org (mx1.freebsd.org [216.136.204.125]) by hub.freebsd.org (Postfix) with ESMTP id C25E216A4CE for ; Sun, 7 Nov 2004 10:36:38 +0000 (GMT)
Received: from shellma.zin.lublin.pl (shellma.zin.lublin.pl [212.182.126.68]) by mx1.FreeBSD.org (Postfix) with ESMTP id 7CA2743D2D for ; Sun, 7 Nov 2004 10:36:38 +0000 (GMT) (envelope-from pawmal-posting@freebsd.lublin.pl)
Received: by shellma.zin.lublin.pl (Postfix, from userid 1018) id 4C1EE347BA8; Sun, 7 Nov 2004 11:37:42 +0100 (CET)
Date: Sun, 7 Nov 2004 11:37:42 +0100
From: Pawel Malachowski
To: freebsd-ipfw@freebsd.org
Message-ID: <20041107103742.GA74864@shellma.zin.lublin.pl>
References: <1099819314.652.13.camel@Mobile1.276NET> <20041107094433.GA56141@shellma.zin.lublin.pl> <1099822179.652.18.camel@Mobile1.276NET>
Mime-Version: 1.0
Content-Type: text/plain; charset=iso-8859-2
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <1099822179.652.18.camel@Mobile1.276NET>
User-Agent: Mutt/1.4.2i
Subject: Re: Dummynet dynamically assigned bandwidth
X-BeenThere: freebsd-ipfw@freebsd.org
X-Mailman-Version: 2.1.1
Precedence: list
Reply-To: freebsd-ipfw@freebsd.org
List-Id: IPFW Technical Discussions
X-List-Received-Date: Sun, 07 Nov 2004 10:36:38 -0000

On Sun, Nov 07, 2004 at 01:09:39PM +0300, Martes Wigglesworth wrote:
> Thanks for the reply; however, I have a subnet with eight clients, and
> whenever I have the queue rule enabled, there is a significant latency
> increase, and the queues do not get full access to the pipe. I have run
> tests online, and it is fine for the first few minutes; however, as the
> other clients use the net, the tests drop from 39 KByte/s to around
> 20 KByte/s and lower.
> The only thing left is that the queues are assigning static bandwidth
> that is not changing in the upward direction. Any more input is welcome.

I'm not sure whether this is the source of your problem, but I am sure
you are shooting yourself in the foot with the default size of the pipe
and queues, which is 50 slots. At 128 kbit/s (as shown in your previous
mail; BTW, that is 16 KByte/s), transmitting 50 full-size packets
(MTU 1500 bytes) takes:

50 * 1.5 KB / 16 KB/s = ~4.69 seconds

That much queueing delay will kill TCP throughput. Try adding something
like a `queue 5KBytes' parameter to both the pipe and the queue
definitions.

-- 
Paweł Małachowski
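For illustration, a minimal sketch of the suggested change; the pipe/queue numbers, weight, and subnet below are hypothetical, not from the original thread:

```shell
# Sketch only: numbers and addresses are made-up examples.
# Configure a 128 kbit/s pipe, shrinking its queue from the default
# 50 slots (up to ~75 KB of 1500-byte packets) down to 5 KBytes.
ipfw pipe 1 config bw 128Kbit/s queue 5KBytes

# A dummynet queue feeding that pipe, with the same small queue size.
ipfw queue 1 config pipe 1 weight 10 queue 5KBytes

# Send outbound client traffic through the queue.
ipfw add queue 1 ip from 192.168.0.0/24 to any out
```

With a 5 KByte queue, the worst-case queueing delay at 16 KByte/s drops to roughly 0.3 seconds instead of ~4.69, which is far friendlier to TCP.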