Date: Sat, 21 Sep 1996 10:28:08 +0200 (MET DST)
From: Luigi Rizzo <luigi@labinfo.iet.unipi.it>
To: freebsd-isp@freebsd.org, freebsd-hackers@freebsd.org
Cc: Luigi Rizzo <luigi@labinfo.iet.unipi.it>
Subject: Re: IP queue size limitations. (fwd)
Message-ID: <Pine.BSF.3.91.960921102724.25183A-100000@labinfo.iet.unipi.it>
---------- Forwarded message ----------
Date: Sat, 21 Sep 1996 10:14:49 +0200
From: Luigi Rizzo <luigi@labinfo.iet.unipi.it>
Newsgroups: comp.protocols.tcp-ip
Subject: Re: IP queue size limitations.
On 20 Sep 1996, Vernon Schryver wrote:
> In article <51ub5h$8ul@noao.edu>, W. Richard Stevens <rstevens@noao.edu> wrote:
> >> Now my question(s):
> >>
> >> + is it possible, in IP implementations (e.g. BSD), to set the
> >> size of output queues differently on each interface ?
> >
> >It's certainly "possible" since each interface has its own ifq_maxlen
> >value (in the ifqueue{} in the ifnet{}), but I've never seen an
> >implementation that does this. You could always patch this by hand
> At least one UNIX vendor's SLIP and PPP implementations bring
> control of ifq_maxlen out to the user interface. I also think the
(very clear and detailed explanation omitted -- thanks for including
that, it is important that people become aware of this problem)
I have dug through the FreeBSD sources (they should not be much
different from 4.4Lite2). ifq_maxlen is, for most interfaces, set
to the global variable ifqmaxlen, which in turn is set to IFQ_MAXLEN,
which defaults to 50 (this kind of chain is very common in the
BSD code). IFQ_MAXLEN can be defined in the configuration file,
but it is the same for all interfaces. The exceptions (from memory)
are slip and ppp (ifq_maxlen hardwired to explicit constants in
the code, 32 and 50 I believe). At first sight, iijppp (user-space
ppp via the tun device) uses a queue length of 20.
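Roughly, the chain looks like this (a sketch from memory, not verbatim
from the tree; file locations and the driver fragment are only
indicative):

    /* net/if.h: compile-time default, overridable in the kernel config */
    #ifndef IFQ_MAXLEN
    #define IFQ_MAXLEN      50
    #endif

    /* net/if.c: global default */
    int ifqmaxlen = IFQ_MAXLEN;

    /* a typical driver attach routine copies the global into its send queue */
    ifp->if_snd.ifq_maxlen = ifqmaxlen;

    /* slip/ppp instead hardwire their own constants, e.g. */
    sc->sc_if.if_snd.ifq_maxlen = 50;   /* exact value varies by driver */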
The approximate time needed to flush a queue is as follows:
Interface       MTU      b/s     qlen   T (1 MSS)   T (qlen MSS)
====================================================================
Ethernet        1500      10M     50      1.25ms       62.5ms
FDDI            ~4KB    ~100M     50       330us       16.5ms
Fast Ethernet   1500(?)   100M    50       125us       6.25ms
T1              1500(?)   1.5M    50         8ms        400ms
ISDN            1500       64K    50       188ms         9.4s
ISDN             576       64K    50        72ms         3.6s
PPP             1500     28.8K    20       415ms         8.3s
PPP              576     28.8K    20       160ms         3.2s
PPP              576     14.4K    20       320ms         6.4s
PPP             1500     14.4K    20       830ms        16.6s
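The numbers above simply follow from T(1 MSS) = 8*MTU/speed and
T(qlen MSS) = qlen * T(1 MSS). A small stand-alone program to redo the
arithmetic (my own illustration, not kernel code):

    #include <stdio.h>

    /* seconds needed to drain qlen maximum-sized packets at bps bit/s */
    static double
    drain_time(int mtu, double bps, int qlen)
    {
            return qlen * (8.0 * mtu) / bps;
    }

    int
    main(void)
    {
            printf("ISDN: %.1fs\n", drain_time(1500, 64e3, 50));   /* ~9.4s */
            printf("PPP:  %.1fs\n", drain_time(1500, 28.8e3, 20)); /* ~8.3s */
            return 0;
    }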
Considering the default timer granularity, I believe that a queue
taking up to 1s to drain is acceptable for the slowest networks,
but not much more than that. On the other hand, a busy router
connecting networks of different speeds could benefit from larger
queues on the faster interfaces.
For the slowest networks, I think it is fundamental to compute queue
lengths in bytes (after compression, if available), not in
maximum-sized packets.
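As a rough illustration of what a byte-based limit could look like
(the ifq_curbytes and ifq_maxbytes fields are hypothetical, they do
not exist in the current sources):

    /* hypothetical byte-based variant of the IF_QFULL() check;
     * ifq_curbytes/ifq_maxbytes are invented for illustration only */
    #define IF_QFULL_BYTES(ifq, m) \
            ((ifq)->ifq_curbytes + (m)->m_pkthdr.len > (ifq)->ifq_maxbytes)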
Note that while this is a router issue, if the bottleneck is at the
source more efficient solutions are available, such as a modified
quench() call which shrinks the congestion window to a small (but
larger than one) number of segments.
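For reference, the stock tcp_quench() in 4.4BSD collapses the
congestion window to a single segment; the modification would look
roughly like this (stock behaviour recalled from memory, the factor
of 2 is just an example of "small but larger than one"):

    void
    tcp_quench(struct inpcb *inp, int errno)
    {
            struct tcpcb *tp = intotcpcb(inp);

            /* stock code does: tp->snd_cwnd = tp->t_maxseg; */
            if (tp)
                    tp->snd_cwnd = 2 * tp->t_maxseg;  /* small, but > 1 segment */
    }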
> fills and the delays increase. When the queue finally overflows,
> it generally does more than fast-retransmission can recover,
> forcing a timeout and a slow-start, which causes the measured
> latency on the link to drop to only the transmission delay.
Of course, all traffic is delayed by such long queues, the loss
is detected only after a *long* timeout, and the process repeats
forever.
Luigi
====================================================================
Luigi Rizzo Dip. di Ingegneria dell'Informazione
email: luigi@iet.unipi.it Universita' di Pisa
tel: +39-50-568533 via Diotisalvi 2, 56126 PISA (Italy)
fax: +39-50-568522 http://www.iet.unipi.it/~luigi/
====================================================================
