Date:      Sat, 20 Feb 2010 15:10:44 +0400
From:      rihad <rihad@mail.ru>
To:        Luigi Rizzo <rizzo@iet.unipi.it>
Cc:        freebsd-net@freebsd.org
Subject:   Re: Slow speeds experienced with Dummynet
Message-ID:  <4B7FC334.7000108@mail.ru>
In-Reply-To: <20100220095941.GA82976@onelab2.iet.unipi.it>
References:  <4B7EDD00.5080308@mail.ru> <20100220095941.GA82976@onelab2.iet.unipi.it>

Luigi Rizzo wrote:
> On Fri, Feb 19, 2010 at 10:48:32PM +0400, rihad wrote:
>> Hi, all,
>>
>> Recalling my old posting "dummynet dropping too many packets" dated 
>> October 4, 2009, the problem isn't over just yet. This time, there are 
>> no interface i/o drops (just a reminder: we have 2 bce(4) GigE cards 
>> connected to a Cisco router, one for input, and one for output. The box 
>> itself does some traffic accounting and enforces speed limits w/ 
>> ipfw/dummynet. There are normally around 5-6k users online).
> 
> If I remember correctly, the previous discussion ended when you
> raised intr_queue_maxlen (and perhaps increased HZ) to keep the
> bursts produced by the periodic invocation of dummynet_io() from
> overflowing that queue.
> 
I've never seen intr_queue_drops rise. Back then it was HZ=2000, and 
changing if_bce.c to force ifp->if_snd.ifq_drv_maxlen = 4096 (8192 
now) stopped the output drops. Lately we've been experiencing 
interface-level input drops as well (as shown by the Ierrs column of 
netstat -i); setting HZ=4000 solved the issue.

> From the rest of your post it is not completely clear if you have
> not found any working configuration, or there is some setting (e.g.
> with "queue 1000" or larger) which does produce a smooth experience
> for your customers.

It's pretty hard to express a slot-based queue in seconds, as the 
packets occupying the slots vary in size. I'll try setting queue 
sizes in Kbytes, enough for 1-2 seconds of data as you suggested, and 
use tail-drop queueing with it.
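
Something like this, I suppose (numbers just for illustration: the 
500 KBytes is 2 Mbit/s x 2 s per your rule of thumb, and tail drop is 
dummynet's default, so no extra keyword is needed):

  # one dynamic queue per destination /32, sized for ~2 s of data:
  # 2 Mbit/s * 2 s = 4 Mbit = 500 KBytes
  ipfw pipe 1 config bw 2Mbit/s queue 500Kbytes mask dst-ip 0xffffffff
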
> 
> Another thing I'd like to understand is whether all of your pipes
> have a /32 mask, or whether some cover multiple hosts.
> A typical TCP connection has around 50 packets in flight when the
> connection is fully open (which in turn is unlikely to happen on a
> 512k pipe), so a queue of 100-200 slots is unlikely to overflow.
> 
Yes, all masks are currently /32. 512 Kbit/s isn't used that much; 
speeds of several Mbit/s are.

> In fact, long queues are very detrimental to customers because
> they increase the delay of the congestion control loop -- as a rule
> of thumb, you should try to limit the queue size to at most 1-2s
> of data.
> 
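
Putting rough numbers on that last point (assuming full-size 
1500-byte packets):

  100 slots * 1500 bytes * 8 = 1.2 Mbit
  1.2 Mbit / 0.512 Mbit/s  ~= 2.3 s of queueing delay on a 512k pipe

so anything much beyond the default slot count mostly adds latency.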



