From owner-freebsd-ipfw@FreeBSD.ORG Fri Nov 28 14:44:34 2003
Date: Fri, 28 Nov 2003 17:44:36 -0500
From: Haesu
To: Vector, freebsd-ipfw@freebsd.org
Subject: Re: multiple pipes cause slowdown

try doing src-port 0xFFFF ?

-hc

--
Haesu C.
TowardEX Technologies, Inc.
Consulting, colocation, web hosting, network design and implementation
http://www.towardex.com | haesu@towardex.com
Cell: (978)394-2867 | Office: (978)263-3399 Ext. 170
Fax: (978)263-0033 | POC: HAESU-ARIN

On Wed, Nov 26, 2003 at 01:42:31PM -0700, Vector wrote:
> I've got a FreeBSD system set up, and I'm using dummynet to manage
> bandwidth. Here is what I am seeing:
>
> We are communicating between a server on a 100Mbit Ethernet segment (fxp0
> on the FreeBSD box) and an 11Mbit wireless client that is throttled with
> ipfw pipes. If I add two pipes limiting my two clients A and B to 1Mbit
> each, here is what happens:
>
> Client A does a transfer to/from the server and gets 1Mbps up and 1Mbps down.
> Client B does a transfer to/from the server and gets 1Mbps up and 1Mbps down.
> Clients A and B do simultaneous transfers to the server and each gets
> between 670 and 850Kbps.
>
> If I delete the pipes and the firewall rules, they behave like regular
> unthrottled 11Mbit clients sharing the available wireless bandwidth
> (although not necessarily equally).
>
> It gets worse when I go to 3 or 4 clients at 1Mbit each. I've also tried
> setting up 4 clients at 512Kbps, and the performance does the same thing:
> throughput gets cut significantly the more pipes we have. Here are the
> rules I'm using:
>
> ipfw add 100 pipe 100 all from any to 192.168.1.50 xmit wi0
> ipfw add 100 pipe 5100 all from 192.168.1.50 to any recv wi0
> ipfw pipe 100 config bw 1024Kbits/s
> ipfw pipe 5100 config bw 1024Kbits/s
>
> ipfw add 101 pipe 101 all from any to 192.168.1.51 xmit wi0
> ipfw add 101 pipe 5101 all from 192.168.1.51 to any recv wi0
> ipfw pipe 101 config bw 1024Kbits/s
> ipfw pipe 5101 config bw 1024Kbits/s
>
> I've played with using in/out instead of recv/xmit, and even with not
> specifying a direction at all (which cuts traffic to the client in half,
> while traffic from the client stays as high as when I specify which
> interface to throttle on). "ipfw pipe list" shows no dropped packets and
> looks like it's behaving normally, other than the slowdown with multiple
> clients.
> I am using 5.0-RELEASE, and I have HZ=1000 compiled into the kernel.
> Here are my sysctl vars:
>
> net.inet.ip.fw.enable: 1
> net.inet.ip.fw.autoinc_step: 100
> net.inet.ip.fw.one_pass: 0
> net.inet.ip.fw.debug: 0
> net.inet.ip.fw.verbose: 0
> net.inet.ip.fw.verbose_limit: 1
> net.inet.ip.fw.dyn_buckets: 256
> net.inet.ip.fw.curr_dyn_buckets: 256
> net.inet.ip.fw.dyn_count: 2
> net.inet.ip.fw.dyn_max: 4096
> net.inet.ip.fw.static_count: 72
> net.inet.ip.fw.dyn_ack_lifetime: 300
> net.inet.ip.fw.dyn_syn_lifetime: 20
> net.inet.ip.fw.dyn_fin_lifetime: 1
> net.inet.ip.fw.dyn_rst_lifetime: 1
> net.inet.ip.fw.dyn_udp_lifetime: 10
> net.inet.ip.fw.dyn_short_lifetime: 5
> net.inet.ip.fw.dyn_keepalive: 1
> net.link.ether.bridge_ipfw: 0
> net.link.ether.bridge_ipfw_drop: 0
> net.link.ether.bridge_ipfw_collisions: 0
> net.link.ether.bdg_fw_avg: 0
> net.link.ether.bdg_fw_ticks: 0
> net.link.ether.bdg_fw_count: 0
> net.link.ether.ipfw: 0
> net.inet6.ip6.fw.enable: 0
> net.inet6.ip6.fw.debug: 0
> net.inet6.ip6.fw.verbose: 0
> net.inet6.ip6.fw.verbose_limit: 1
>
> net.inet.ip.dummynet.hash_size: 64
> net.inet.ip.dummynet.curr_time: 99067502
> net.inet.ip.dummynet.ready_heap: 16
> net.inet.ip.dummynet.extract_heap: 16
> net.inet.ip.dummynet.searches: 0
> net.inet.ip.dummynet.search_steps: 0
> net.inet.ip.dummynet.expire: 1
> net.inet.ip.dummynet.max_chain_len: 16
> net.inet.ip.dummynet.red_lookup_depth: 256
> net.inet.ip.dummynet.red_avg_pkt_size: 512
> net.inet.ip.dummynet.red_max_pkt_size: 1500
>
> Am I just doing something stupid, or does the dummynet/QoS implementation
> in FreeBSD need some work? If so, I may be able to help and contribute.
>
> Thanks,
>
> vec
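
A minimal sketch of what Haesu's suggestion might look like, assuming
"src-port 0xFFFF" refers to dummynet's pipe mask option (the pipe number
is reused from the quoted config, applied to the recv pipe where the
client's source port varies; this is untested):

# With a mask, dummynet splits the pipe into dynamic queues, one per
# distinct value of the masked field, and each queue gets the pipe's
# configured bandwidth. Here every source port gets its own
# 1024Kbit/s queue:
ipfw pipe 5100 config bw 1024Kbit/s mask src-port 0xffff

A related variant, assuming the clients all live in 192.168.1.0/24, is to
collapse the per-client rule/pipe pairs into one pipe per direction and
let a host mask create one 1024Kbit/s queue per client address:

# One rule/pipe pair covers every wireless client; the dst-ip/src-ip
# masks give each client address its own dynamic 1024Kbit/s queue.
ipfw add 100 pipe 100 all from any to 192.168.1.0/24 xmit wi0
ipfw add 200 pipe 200 all from 192.168.1.0/24 to any recv wi0
ipfw pipe 100 config bw 1024Kbit/s mask dst-ip 0x000000ff
ipfw pipe 200 config bw 1024Kbit/s mask src-ip 0x000000ff

The masked form also scales better as clients are added, since a new
client is just another address in the subnet rather than another
rule/pipe pair; it does not change how each individual queue is
scheduled, though, so it may or may not affect the slowdown itself.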