From: Luigi Rizzo <luigi@onelab2.iet.unipi.it>
To: Patrick Mahan
Cc: freebsd-net@freebsd.org
Date: Mon, 2 Aug 2010 11:22:07 +0200
Subject: Re: AltQ throughput issues (long message)

On Sat, Jul 31, 2010 at 09:10:04AM -0700, Patrick Mahan wrote:
...
> > part of it can be explained because AltQ counts the whole packet
> > (e.g. 1514 bytes for a full frame) whereas iperf only considers the
> > UDP payload (e.g. 1470 bytes in your case).
>
> Okay, but that only accounts for 3% and I am seeing around 11%. Any
> idea what might be accounting for the remaining 8%?

No. Sometimes the extra ICMP traffic plays a role; sometimes it is
just the shaper that is not precise and has systematic errors (due
to rounding in computing intervals and delays). I cannot comment
precisely on AltQ because I don't know enough about its internals.

> > The other thing you should check is whether there is any extra
> > traffic going through the interface that competes for the
> > bottleneck bandwidth. You have such huge drop rates in your tests
> > that I would not be surprised if you had ICMP packets going around
> > trying to slow down the sender.
...
> Where do you see the drop? If you are looking at the end of the pfctl

The iperf reports show that it is dropping tons of packets (see
below, Lost/Total):

> >> npx8# iperf -c 172.16.13.10 -p 7788 -u -b 25M -u -i 30 -t 200
> >> ------------------------------------------------------------
> >> Client connecting to 172.16.13.60, UDP port 7788
> >> Sending 1470 byte datagrams
> >> UDP buffer size: 9.00 KByte (default)
> >> ------------------------------------------------------------
> >> [  3] local 172.16.13.60 port 7788 connected with 172.16.38.80 port 41064
> >> [ ID] Interval       Transfer     Bandwidth       Jitter   Lost/Total Datagrams
> >> [  3]  0.0-30.0 sec  5.96 MBytes  1.67 Mbits/sec  0.710 ms  59453/63706 (93%)
> >> [ ID] Interval       Transfer     Bandwidth       Jitter   Lost/Total Datagrams
> >> [  3] 30.0-60.0 sec  5.95 MBytes  1.66 Mbits/sec  0.736 ms  59616/63859 (93%)

> > BTW, have you tried dummynet in your config?
>
> How would you suggest using dummynet? Is it workable for a QoS
> solution?

Very workable. See http://info.iet.unipi.it/~luigi/dummynet/ ,
especially the video/slides at the top of the page.

cheers
luigi
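
P.S. To put numbers on the ~3% above (a back-of-the-envelope sketch,
assuming plain IPv4/UDP with no options, and not counting the Ethernet
preamble, inter-frame gap or CRC):

    1470 (UDP payload) + 8 (UDP) + 20 (IP) + 14 (Ethernet) = 1512 bytes
    1512 / 1470 = 1.029

i.e. roughly a 3% gap between what the shaper counts and what iperf
reports, which matches the figure quoted above.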
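On the rounding errors, here is one hypothetical way a tick-based
shaper can lose bandwidth systematically (an illustration only, not a
claim about AltQ's actual algorithm, whose internals I don't know).
With HZ=1000, a 25 Mbit/s pipe earns 25,000,000 / 8 / 1000 = 3125
bytes of credit per tick, which is only about 2.06 full-size 1514-byte
frames. If an implementation truncated the leftover credit instead of
carrying it over to the next tick, it would send 2 frames per tick:

    2 * 1514 bytes * 8 bits * 1000 ticks/s = 24.2 Mbit/s

a systematic shortfall of about 3%, before any drops or competing
traffic enter the picture.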
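And since you asked about dummynet: a minimal sketch of the equivalent
of your 25 Mbit/s test. Untested as written, and the pipe number, rule
number and port are just examples, so adjust them to your ruleset:

    # load dummynet if it is not compiled into the kernel
    kldload dummynet

    # create a 25 Mbit/s pipe and push the iperf traffic through it
    ipfw pipe 1 config bw 25Mbit/s
    ipfw add 100 pipe 1 udp from any to any dst-port 7788

Note that with the default net.inet.ip.fw.one_pass=1, packets leaving
the pipe are accepted and not matched against later rules.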