From owner-freebsd-net@FreeBSD.ORG Mon Oct  5 11:29:06 2009
Message-ID: <4AC9D87E.7000005@mail.ru>
Date: Mon, 05 Oct 2009 16:29:02 +0500
From: rihad
To: Luigi Rizzo
Cc: freebsd-net@freebsd.org
Subject: Re: dummynet dropping too many packets
In-Reply-To: <20091005110726.GA62598@onelab2.iet.unipi.it>
List-Id: Networking and TCP/IP with FreeBSD

Luigi Rizzo wrote:
> On Mon, Oct 05, 2009 at 03:52:39PM +0500, rihad wrote:
>> Eugene Grosbein wrote:
>>> On Mon, Oct 05, 2009 at 02:28:58PM +0500, rihad wrote:
>>>
>>>> Still not sure why increasing the queue size as high as I want doesn't
>>>> completely eliminate drops.
>>> The goal is to make the sources of traffic slow down; this is the only
>>> way to decrease drops - any finite queue may be overwhelmed with traffic.
>>> Taildrop does not really help with this. GRED does much better.
>>>
>> Alright, so I changed to gred by adding to each config command:
>> ipfw ... gred 0.002/900/1000/0.1 queue 1000
>> and reconfigured. There are still around 300-400 drops per second, which
>> was typical at this load level before with taildrop anyway. Around
>> 3-5 Mbit/s are being wasted according to systat -ifstat.
>>
>> Should I now increase slots to 5-10-20k?
>> Very strange.
>>
>> "ipfw pipe show" correctly shows that gred is at work. For example:
>>
>> 00512: 512.000 Kbit/s    0 ms 1000 sl. 79 queues (64 buckets)
>>     GRED w_q 0.001999 min_th 900 max_th 1000 max_p 0.099991
>>     mask: 0x00 0x00000000/0x0000 -> 0xffffffff/0x0000
>> ...
>
> You keep omitting the important info, i.e. whether individual
> pipes have drops, significant queue lengths and so on.
>
Sorry. Almost every entry has 0 in the last Drp column, but some are above
zero. I'm just not sure how this can be helpful to anyone.

05120: 5.120 Mbit/s    0 ms 5000 sl. 66 queues (64 buckets)
    GRED w_q 0.001999 min_th 4500 max_th 5000 max_p 0.099991
    mask: 0x00 0x00000000/0x0000 -> 0xffffffff/0x0000
BKT Prot ___Source IP/port____ ____Dest. IP/port____ Tot_pkt/bytes Pkt/Byte Drp
  0 ip 0.0.0.0/0        1        131          0     0     0
  1 ip 0.0.0.0/0       39      53360          0     0     0
  2 ip 0.0.0.0/0   382206  418022848          0     0     0
  3 ip 0.0.0.0/0       34       2008          0     0     0
  4 ip 0.0.0.0/0  4868510 6277077787         15 20452     9
  5 ip 0.0.0.0/0       14      16675          0     0     0
  5 ip 0.0.0.0/0        3       4158          0     0     0
  6 ip 0.0.0.0/0       38      43576          0     0     0
  7 ip 0.0.0.0/0  1265954 1475400663          0     0     0
  8 ip 0.0.0.0/0  1081461 1247681879          0     0   749
  9 ip 0.0.0.0/0  6186589 8737048919          0     0 19243
 10 ip 0.0.0.0/0    21607    5636447          0     0     5
 11 ip 0.0.0.0/0      437      94576          0     0     0
 12 ip 0.0.0.0/0    22915   18634779          0     0     0
 13 ip 0.0.0.0/0   557988  688051579          0     0     0
 14 ip 0.0.0.0/0    50339   65685647          0     0     0
 15 ip 0.0.0.0/0   554835  546223485          0     0   140
 16 ip 0.0.0.0/0       32      13104          0     0     0
 17 ip 0.0.0.0/0  2034099 2719966792          0     0     0
 18 ip 0.0.0.0/0      282      36551          0     0     0
 19 ip 0.0.0.0/0  8351766 8947643162          0     0     0
 20 ip 0.0.0.0/0        4        624          0     0     0
 21 ip 0.0.0.0/0    22391   29922375          0     0     0
 22 ip 0.0.0.0/0        9        424          0     0     0
 23 ip 0.0.0.0/0   750322  935365326          0     0     0
 24 ip 0.0.0.0/0        1         40          0     0     0
 25 ip 0.0.0.0/0  3617690 3501375619          0     0   602
 26 ip 0.0.0.0/0    12116   12039435          0     0     0
 27 ip 0.0.0.0/0   524311  653399507          0     0     8
 28 ip 0.0.0.0/0        3        417          0     0     0
 29 ip 0.0.0.0/0       16       2034          0     0     0
 30 ip 0.0.0.0/0       64      82661          3  4432     0
 31 ip 0.0.0.0/0   946389 1175221367          0     0    66
 32 ip 0.0.0.0/0        1        168          0     0     0
 32 ip 0.0.0.0/0       28      41776          0     0     0
 33 ip 0.0.0.0/0        6       6433          0     0     0
 34 ip 0.0.0.0/0        1        536          0     0     0
 35 ip 0.0.0.0/0     2021    2641048          0     0     0
 36 ip 0.0.0.0/0      350     264039          0     0     0
 37 ip 0.0.0.0/0   167578  137763107          0     0     0
 38 ip 0.0.0.0/0   250404  128905757          0     0     0
 39 ip 0.0.0.0/0   385139  287006012          0     0     0
 40 ip 0.0.0.0/0       49      68696          0     0     0
 41 ip 0.0.0.0/0       23       1813          0     0     0
 42 ip 0.0.0.0/0      129     135256          0     0     0
 43 ip 0.0.0.0/0     3232    2191027          0     0     0
 44 ip 0.0.0.0/0 27935157 24307287646         0     0 18802
 45 ip 0.0.0.0/0     2166     212635          0     0     0
 46 ip 0.0.0.0/0  1127307 1392467620          0     0     3
 47 ip 0.0.0.0/0  1216900 1258200836          0     0     0
 48 ip 0.0.0.0/0        2       2984          1  1492     0
 49 ip 0.0.0.0/0        1        112          0     0     0
 50 ip 0.0.0.0/0     1409     326389          0     0     0
 51 ip 0.0.0.0/0    46674   47291021         10 14920     0
 52 ip 0.0.0.0/0    86667   66834983          0     0     0
 53 ip 0.0.0.0/0   434998  302827189          0     0     0
 54 ip 0.0.0.0/0      542     277669          0     0     0
 55 ip 0.0.0.0/0  1088072  919495021          0     0     0
 56 ip 0.0.0.0/0       64      81240          0     0     0
 57 ip 0.0.0.0/0    41028   59193278          0     0     0
 58 ip 0.0.0.0/0        1        210          0     0     0
 59 ip 0.0.0.0/0        4        310          0     0     0
 60 ip 0.0.0.0/0        2       2984          0     0     0
 61 ip 0.0.0.0/0    42874   36616688          0     0     0
 62 ip 0.0.0.0/0        4        498          0     0     0
 63 ip 0.0.0.0/0   530137  717027403          0     0     0
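
For what it's worth, the gred parameters discussed above (0.002/900/1000/0.1)
map to the classic RED knobs w_q, min_th, max_th and max_p, and with RED-style
queue management some drops between min_th and max_th are intentional - they
are how the senders get told to slow down. A minimal sketch of that drop
decision, assuming the standard RED formulas (illustrative only, not
dummynet's actual code):

```python
# Toy model of the (G)RED early-drop decision, using the parameters
# from the config above: w_q=0.002, min_th=900, max_th=1000, max_p=0.1.

def ewma(avg, qlen, w_q=0.002):
    """Update the exponentially weighted moving average of the queue length."""
    return (1 - w_q) * avg + w_q * qlen

def drop_probability(avg, min_th=900, max_th=1000, max_p=0.1):
    """RED early-drop probability as a function of the averaged queue length."""
    if avg < min_th:
        return 0.0          # below min_th: never drop early
    if avg >= max_th:
        return 1.0          # at or above max_th: drop every arriving packet
    # linear ramp from 0 up to max_p between min_th and max_th
    return max_p * (avg - min_th) / (max_th - min_th)
```

With these numbers, a pipe whose averaged backlog sits at 950 slots already
drops about 5% of arriving packets by design, so nonzero Drp counters on the
busiest buckets are expected under sustained overload, whatever the queue size.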