From owner-freebsd-net@FreeBSD.ORG  Sun Mar  2 06:31:44 2008
Date: Sun, 2 Mar 2008 17:16:47 +1100 (EST)
From: Ian Smith <smithi@nimnet.asn.au>
To: Peter Jeremy
Cc: freebsd-net@freebsd.org, Juri Mianovich
Subject: Re: simple, adaptive bandwidth throttling with ipfw/dummynet ?
In-Reply-To: <20080301224847.GU67687@server.vk2pj.dyndns.org>
List-Id: Networking and TCP/IP with FreeBSD

On Sun, 2 Mar 2008, Peter Jeremy wrote:

 > On Fri, Feb 29, 2008 at 02:28:04PM -0800, Juri Mianovich wrote:
 > >"after 30 minutes of maxed dummynet rule, add X mbps
 > >to the rule for every active TCP session, with a max
 > >ceiling of Y mbps"
 > >
 > >and:
 > >
 > >"after 30 minutes of less than max usage, subtract X
 > >mbps from the rule every Y minutes, with a minimum
 > >floor of Z"
 > >
 > >Make sense ?
 >
 > It doesn't really make sense to me but it's your firewall and you are
 > free to implement whatever rules you like.
:)

 > >If I wanted to do this myself with a shell script, is
 > >there any way to test a particular dummynet rule for
 > >its current "fill rate" - OR - a simple way to test if
 > >a particular dummynet rule is currently in enforcement
 > >?
 >
 > The system doesn't maintain stats on the instantaneous "fill rate"
 > of pipes/queues.  All it will report is total counts of traffic
 > through and in the pipe/queue.  Since the format wasn't clear to
 > me from a quick read of the man page, the following is a breakdown
 > of the output, with added notes:
 >
 > fwall# ipfw pipe list
 > 00001:   6.400 Mbit/s    0 ms   50 sl. 1 queues (1 buckets) droptail
 >     mask: 0x00 0x00000000/0x0000 -> 0x00000000/0x0000
 > BKT Prot ___Source IP/port____ ____Dest. IP/port____ Tot_pkt/bytes Pkt/Byte Drp
 >   0 tcp   192.168.123.200/56599   150.101.135.3/61455 122097 6353558  0    0 397
 >     |----- dummynet accumulation bucket details -----|---- Totals ---|Queued |
 >
 > 'dummynet accumulation bucket details' is the details of the most recent
 > (I think) packet matching the specific bucket mask

Yes, but I'm not sure if it's the last packet into or out of the queue.

 > 'Totals' is total bytes and packets through that particular bucket
 > 'Queued' refers to bytes and packets for that bucket currently queued
 > 'Drp' is the number of packets dropped.
 >
 > You would need to calculate a rate by periodically sampling the
 > counts.  You can get a rough idea of whether a particular dummynet
 > rule is restricting traffic flow by looking for non-zero queued counts
 > (though keep in mind that it is normal for a packet to occasionally be
 > queued).

Also, if there's any burstiness in the flow (i.e. letting the queue fully
or partially empty), you could easily misinterpret the overall flow.

 > Assuming you have the TCP sessions spread across distinct buckets
 > (either with multiple pipes/queues or with masks to split them up), my

I think this would be the way to go.
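The periodic sampling Peter describes might look something like this in
sh.  A sketch only: the pipe number and the awk column are assumptions,
so check the byte-count field position against your own 'ipfw pipe list'
output before trusting it.

```sh
#!/bin/sh
# Sketch: sample a pipe's total byte counter twice and derive an
# average rate.  PIPE and the awk field ($6 = total bytes on the
# output quoted above) are assumptions -- verify against your ipfw.

PIPE=1          # hypothetical pipe number
INTERVAL=10     # seconds between samples

# total bytes through the first bucket line of the pipe
pipebytes() {
    ipfw pipe "$PIPE" list | \
        awk '$2 ~ /^(tcp|udp|ip)$/ {print $6; exit}'
}

# average throughput in bytes/sec given two counts and a period
rate() {
    echo $(( ($2 - $1) / $3 ))
}

# only meaningful on a box that actually has ipfw
if command -v ipfw >/dev/null 2>&1; then
    b1=`pipebytes`
    sleep "$INTERVAL"
    b2=`pipebytes`
    echo "pipe $PIPE: `rate $b1 $b2 $INTERVAL` bytes/sec"
fi
```

Watching that figure alongside the Queued column should give a better
picture than either alone, given the burstiness caveat above.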
Juri said he only has one pipe defined, and managing multiple sessions
through that has to be handled by some tricky out-of-band means.

Personally I've found it easier to monitor recv/sent throughput per host
over a period by parsing the output of 'ipfw show' on rules numbered by
IP address than by trying to parse 'ipfw pipe show' output, using sh
rather than perl, but everyone's mileage varies.  An extract:

  subnet="192.168.0"
  base='27000'                          # ctc 'preweb' skipto rules
  if [ $ip -eq 1 ]; then
        ip="*"; recvrule='26890'; sentrule='26900'
  else
        recvrule=$(($base + $ip * 10)); sentrule=$(($recvrule + 5))
  fi
  getbytes() {
        echo -n `ipfw show $1 2>/dev/null | awk '{print $3}'`
  }
  oldrx=`getbytes $recvrule` ; oldtx=`getbytes $sentrule`
  [..]

 > suggestion would be a perl script that regularly does 'ipfw pipe list'
 > or 'ipfw queue list' and use change_in_total_bytes/time to calculate
 > average throughput per session.  Then use a leaky bucket on the
 > average throughput to trigger pipe/queue re-configurations as desired.

Please explain 'leaky bucket'?

Someone on questions@ recently mentioned using one pipe with masks to
limit traffic per-host, then fed through another pipe limiting overall
bandwidth for the lot or for distinct subgroups, but due to a crash I'm
several days behind and haven't yet caught up with how that's done, or
indeed if that can be done on a filtering bridge using ipfw1 and old
bridge(4) on 4.8-RELEASE, which I'm stuck with using for a while yet.

cheers, Ian
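P.S. For anyone reading this in the archives: the arithmetic the rest of
that script does with the sampled counters is of roughly this shape.
This is a sketch with made-up counter values, not the actual elided
continuation:

```sh
#!/bin/sh
# Sketch with dummy numbers: how two getbytes-style counter samples
# become a bytes/sec figure.  The real script reads the counters
# from ipfw as shown in the extract above.

# bytes/sec from (old_count, new_count, seconds)
persec() {
    echo $(( ($2 - $1) / $3 ))
}

period=60                      # seconds between samples
oldrx=1000000 ; newrx=4000000  # made-up counter samples
echo "recv `persec $oldrx $newrx $period` bytes/sec"
```

which prints "recv 50000 bytes/sec" for those numbers.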