Date: Tue, 28 Nov 2000 10:21:54 -0800
From: Lars Eggert <larse@ISI.EDU>
To: Alfred Perlstein <bright@wintelcom.net>
Cc: questions@FreeBSD.ORG
Subject: Re: send(2) and resuming after ENOBUFS
Message-ID: <3A23F7C2.EC30C6E9@isi.edu>
References: <3A2326E6.AD835284@isi.edu> <20001127220700.W8051@fw.wintelcom.net> <3A23EE64.D50FD149@isi.edu> <20001128094820.I8051@fw.wintelcom.net>
Alfred Perlstein wrote:
> 
> * Lars Eggert <larse@ISI.EDU> [001128 09:41] wrote:
> > Alfred Perlstein wrote:
> > 
> > I am running this with maxusers=128, and as far as I can tell, mbufs are
> > not the problem:
> > 
> > [root@hbo: /nfs/ruby/larse/projects/thesis/bin] netstat -m
> > 139/2192/10240 mbufs in use (current/peak/max):
> >         132 mbufs allocated to data
> >         7 mbufs allocated to packet headers
> > 128/1166/2560 mbuf clusters in use (current/peak/max)
> > 2880 Kbytes allocated to network (37% of mb_map in use)
> > 0 requests for memory denied
> > 0 requests for memory delayed
> > 0 calls to protocol drain routines
> > 
> > Looking at the send(2) man page, I agree that ENOBUFS doesn't mean that the
> > send buffer is full:
> > 
> >      [ENOBUFS]     The system was unable to allocate an internal buffer.
> >                    The operation may succeed when buffers become avail-
> >                    able.
> > 
> >      [ENOBUFS]     The output queue for a network interface was full.
> >                    This generally indicates that the interface has
> >                    stopped sending, but may be caused by transient con-
> >                    gestion.
> > 
> > However, since a too low maxusers is not the problem, I'm still looking for
> > a way of detecting when to resume sending after ENOBUFS, without
> > spinning...
> 
> You'll have to spin, or at least call nanosleep or something to back
> off for a bit.  Are you getting full network utilization even when
> getting ENOBUFS?  Like ~11MB/sec on 100mbit full duplex?  Or are
> you falling short of full throughput?  What network card are you
> using?

Some sender host/card info:

CPU: Pentium III/Pentium III Xeon/Celeron (731.47-MHz 686-class CPU)
xl0: <3Com 3c905C-TX Fast Etherlink XL>

I'm seeing about 96Mb/s on a 100Mb/s full-duplex link, so I'm at link
capacity. The problem with using nanosleep() is that I have no idea how
long to sleep for in the general case. (Sure, I can pick a good value
based on the link speed/message size, but not if those vary.)

Lars
-- 
Lars Eggert <larse@isi.edu>               Information Sciences Institute
http://www.isi.edu/larse/             University of Southern California
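For reference, a minimal sketch of the kind of nanosleep() backoff loop
Alfred suggests might look like the following. It is not code from this
thread; the 1 ms starting delay, the 64 ms cap and the 16-try limit are
illustrative guesses, not recommended values.

    #include <sys/types.h>
    #include <sys/socket.h>
    #include <errno.h>
    #include <time.h>

    /*
     * Sketch only: retry send(2) after ENOBUFS, doubling a nanosleep()
     * delay on each failure.  The 1 ms starting delay, 64 ms cap and
     * 16-try limit are assumptions for illustration.
     */
    ssize_t
    send_with_backoff(int s, const void *buf, size_t len, int flags)
    {
            struct timespec delay = { 0, 1000000 };         /* 1 ms */
            int try;

            for (try = 0; try < 16; try++) {
                    ssize_t n = send(s, buf, len, flags);

                    if (n >= 0 || errno != ENOBUFS)
                            return (n);     /* sent, or a non-ENOBUFS error */

                    /* Interface queue is full: back off, then retry. */
                    nanosleep(&delay, NULL);
                    if (delay.tv_nsec < 64000000)           /* cap at 64 ms */
                            delay.tv_nsec *= 2;
            }
            errno = ENOBUFS;
            return (-1);
    }

A doubling backoff at least avoids having to pick one fixed interval for
every combination of link speed and message size, though the constants
themselves remain guesses.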
