Date: Mon, 18 Sep 2000 18:11:53 -0700
From: Lars Eggert <larse@ISI.EDU>
To: hackers@freebsd.org
Subject: implementing idle-time networking
Message-ID: <39C6BD59.4C1AFD02@isi.edu>
Hi,

as part of my thesis research, I'm implementing something similar to the POSIX idle-time CPU scheduler for other resource types, one being network I/O. The basic idea is to substitute two-level queues for the standard ones. I'm seeing some unexpected things (explained below), but let me first outline exactly what I'm doing:

1. I extend the ifnet structure to contain a second ifqueue for idle-time traffic, and declare a new mbuf flag to indicate whether network idle-time processing should be done.

2. In sosend(), I check whether the sending process is running at a POSIX idle-time priority. If so, I set the idle-time flag in the mbuf.

3. In ether_output_frame(), I check whether the idle-time flag is set on an mbuf and, if so, enqueue it in the interface's idle-time queue (the default queue otherwise).

4. In xl_start() (my onboard chip happens to use the xl driver), I first check the default queue for any mbufs ready to send. If there are none, I try the idle-time queue. If an mbuf could be dequeued from either queue, I continue with normal outbound processing (have the mbuf be picked up by the NIC).

Unfortunately, this scheme does not work. First experiments have shown that idle-time network performance is practically identical to regular-priority performance. I measured it going from a slower (10 Mb/s) host to a faster (100 Mb/s) host through a private switch, so the NIC should be the bottleneck (both processors are 800 MHz PIIIs). The new code is in fact executed; I have traced it heavily. Closer inspection revealed that both the ifnet ifqueues and the driver transmission chain are always empty upon enqueue/dequeue. Thus, even though my fancy queuing code is executed, it has no effect, since the queues never build up. Can someone shed some light on whether this is expected behavior?
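The queueing discipline in steps 3 and 4 can be modeled in userland roughly as follows. This is only a toy sketch: the real kernel code manipulates mbufs and struct ifqueue under the kernel's locking rules, and the structure and function names here (pkt, queue, output, next_to_send) are purely illustrative, not actual kernel symbols.

```c
#include <stddef.h>

/* Toy model of a packet: the "idle" flag stands in for the new
 * mbuf flag set in sosend() when the sender runs at idle priority. */
struct pkt {
	int		 id;
	int		 idle;	/* sender was at POSIX idle-time priority */
	struct pkt	*next;
};

/* Minimal FIFO, standing in for struct ifqueue. */
struct queue {
	struct pkt *head, *tail;
};

static void
enqueue(struct queue *q, struct pkt *p)
{
	p->next = NULL;
	if (q->tail != NULL)
		q->tail->next = p;
	else
		q->head = p;
	q->tail = p;
}

static struct pkt *
dequeue(struct queue *q)
{
	struct pkt *p = q->head;

	if (p != NULL) {
		q->head = p->next;
		if (q->head == NULL)
			q->tail = NULL;
	}
	return (p);
}

/* Step 3: classify on output based on the idle-time flag. */
static void
output(struct queue *defq, struct queue *idleq, struct pkt *p)
{
	enqueue(p->idle ? idleq : defq, p);
}

/* Step 4: the driver start routine drains the default queue first
 * and only touches the idle-time queue when the default is empty. */
static struct pkt *
next_to_send(struct queue *defq, struct queue *idleq)
{
	struct pkt *p = dequeue(defq);

	if (p == NULL)
		p = dequeue(idleq);
	return (p);
}
```

Note that this discipline only has an effect if packets actually accumulate in the queues; if the start routine is invoked for every enqueued packet while the queues are otherwise empty, default and idle-time traffic are serviced identically.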
Wouldn't that mean that as packets are generated by the socket layer, they are handed down through the kernel to the driver one by one, incurring an interrupt for each packet? Or am I missing the obvious?

Thanks,
Lars

--
Lars Eggert <larse@isi.edu>           Information Sciences Institute
http://www.isi.edu/larse/          University of Southern California
