From: Andre Oppermann <andre@freebsd.org>
Date: Sun, 24 Oct 2004 16:37:32 +0200
To: luigi@freebsd.org
Cc: Vincent Poy, freebsd-current@freebsd.org
Subject: Re: Traffic Shaping not working correctly after ipfw converted to use pfil_hooks API

[bouncing over to Luigi]

Luigi, do you have any idea what might be going wrong here?

--
Andre

Vincent Poy wrote:
>
> On Fri, 22 Oct 2004 00:18:47 +0200, Andre Oppermann wrote:
> > Vincent Poy wrote:
> > > On Thu, 21 Oct 2004 15:24:41 +0200, Andre Oppermann wrote:
> > >
> > >> Vincent Poy wrote:
> > >>
> > >>> However, after the latest -CURRENT upgrade, it will do 200KB/sec down
> > >>> and 52KB/sec up. If I only download, then it does show 650KB/sec.
> > >>> Normally, when I set the pipe's bandwidth to a number lower than
> > >>> 480Kbps, the download speed goes up while downloading. This time,
> > >>> however, I tried stepping it down in 10Kbps increments to 350Kbps,
> > >>> and the download speed still did not top 200KB/sec.
> > >>
> > >> Interesting. I have just looked through the ipfw to pfil_hooks changes
> > >> as they relate to dummynet. The only change to dummynet is the removal
> > >> of a stored pointer to the rtentry, which doesn't influence dummynet's
> > >> shaping and limiting in any way. Other than that, the way ipfw gets
> > >> called has changed, and thus how dummynet is invoked, too.
> > >>
> > >> Can you verify that all dummynet queues and pipes are in use? The only
> > >> thing I can imagine is that somehow the dummynet info gets mangled and
> > >> everything ends up in the same queue/pipe, although that is unlikely.
> > >
> > > Yeah, it's weird. I was trying to fine-tune the bandwidth of the
> > > upstream pipe and noticed that the download side was now delivering
> > > only a third of the speed it used to, no matter what I set the
> > > upstream side to. I'm only using ipfw/dummynet on the upstream side,
> > > since the downstream packets go directly from my ISP to the other
> > > machines on the /29. How do I verify that all dummynet queues and
> > > pipes are in use, though? This is the output from ipfw show:
> >
> > ipfw pipe show
> > ipfw queue show
> >
> > will do the trick.
>
> Here's the output...
>
> root@bigbang [3:35pm][/home/vince] >> ipfw pipe show
> 00001: 480.000 Kbit/s    0 ms   50 sl. 0 queues (1 buckets) droptail
>     mask: 0x00 0x00000000/0x0000 -> 0x00000000/0x0000
> q00001: weight 100 pipe 1   50 sl. 1 queues (1 buckets) droptail
>     mask: 0x00 0x00000000/0x0000 -> 0x00000000/0x0000
> BKT Prot ___Source IP/port____ ____Dest. IP/port____ Tot_pkt/bytes Pkt/Byte Drp
>   0 tcp  208.201.244.225/3254   64.12.185.119/80     2298723 1664167302  0  0 6116
> q00002: weight 66 pipe 1   50 sl. 1 queues (1 buckets) droptail
>     mask: 0x00 0x00000000/0x0000 -> 0x00000000/0x0000
> BKT Prot ___Source IP/port____ ____Dest. IP/port____ Tot_pkt/bytes Pkt/Byte Drp
>   0 udp  208.201.244.225/2979   217.12.4.104/53       346608   32488287  0  0    0
> q00003: weight 33 pipe 1   50 sl. 1 queues (1 buckets) droptail
>     mask: 0x00 0x00000000/0x0000 -> 0x00000000/0x0000
> BKT Prot ___Source IP/port____ ____Dest. IP/port____ Tot_pkt/bytes Pkt/Byte Drp
>   0 tcp  208.201.244.225/3254   64.12.185.119/80       36965   11308730  0  0   60
> q00004: weight 1 pipe 1   50 sl. 1 queues (1 buckets) droptail
>     mask: 0x00 0x00000000/0x0000 -> 0x00000000/0x0000
> BKT Prot ___Source IP/port____ ____Dest. IP/port____ Tot_pkt/bytes Pkt/Byte Drp
>   0 tcp  208.201.244.226/3746   216.155.193.173/5050   10058    3530197  0  0    0
> root@bigbang [3:37pm][/home/vince] >> ipfw queue show
> q00001: weight 100 pipe 1   50 sl. 1 queues (1 buckets) droptail
>     mask: 0x00 0x00000000/0x0000 -> 0x00000000/0x0000
> BKT Prot ___Source IP/port____ ____Dest. IP/port____ Tot_pkt/bytes Pkt/Byte Drp
>   0 tcp  208.201.244.225/3254   64.12.185.119/80     2298737 1664167862  0  0 6116
> q00002: weight 66 pipe 1   50 sl. 1 queues (1 buckets) droptail
>     mask: 0x00 0x00000000/0x0000 -> 0x00000000/0x0000
> BKT Prot ___Source IP/port____ ____Dest. IP/port____ Tot_pkt/bytes Pkt/Byte Drp
>   0 udp  208.201.244.225/2979   217.12.4.104/53       346608   32488287  0  0    0
> q00003: weight 33 pipe 1   50 sl. 1 queues (1 buckets) droptail
>     mask: 0x00 0x00000000/0x0000 -> 0x00000000/0x0000
> BKT Prot ___Source IP/port____ ____Dest. IP/port____ Tot_pkt/bytes Pkt/Byte Drp
>   0 tcp  208.201.244.225/3254   64.12.185.119/80       36965   11308730  0  0   60
> q00004: weight 1 pipe 1   50 sl. 1 queues (1 buckets) droptail
>     mask: 0x00 0x00000000/0x0000 -> 0x00000000/0x0000
> BKT Prot ___Source IP/port____ ____Dest. IP/port____ Tot_pkt/bytes Pkt/Byte Drp
>   0 tcp  208.201.244.226/3746   216.155.193.173/5050   10058    3530197  0  0    0
> root@bigbang [3:37pm][/home/vince] >>
>
> Cheers,
> Vince
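
For reference, a dummynet setup along the following lines would produce the pipe and
queue layout shown above. The 480 Kbit/s bandwidth and the queue weights are taken
from the output; the rule numbers, interface name (fxp0), and match criteria are only
illustrative guesses, since the actual ruleset was not posted.

    # one 480 Kbit/s pipe, shared by four weighted (WF2Q+) queues
    ipfw pipe 1 config bw 480Kbit/s
    ipfw queue 1 config pipe 1 weight 100
    ipfw queue 2 config pipe 1 weight 66
    ipfw queue 3 config pipe 1 weight 33
    ipfw queue 4 config pipe 1 weight 1

    # hypothetical classifier rules for outbound (upstream) traffic
    ipfw add 1000 queue 1 tcp from 208.201.244.225 to any 80  out xmit fxp0
    ipfw add 1100 queue 2 udp from 208.201.244.225 to any 53  out xmit fxp0
    ipfw add 1200 queue 3 tcp from 208.201.244.225 to any     out xmit fxp0
    ipfw add 1300 queue 4 ip  from 208.201.244.226 to any     out xmit fxp0

With a layout like this, re-running 'ipfw queue show' while generating traffic of each
class shows whether the flows are still being split across the four queues, or whether
everything is collapsing into a single queue/pipe as suspected above.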