Date:      Tue, 19 Sep 2000 08:29:47 +0200 (CEST)
From:      Luigi Rizzo <luigi@info.iet.unipi.it>
To:        Lars Eggert <larse@ISI.EDU>
Cc:        hackers@FreeBSD.ORG
Subject:   Re: implementing idle-time networking
Message-ID:  <200009190629.IAA21747@info.iet.unipi.it>
In-Reply-To: <39C6BD59.4C1AFD02@isi.edu> from Lars Eggert at "Sep 18, 2000 06:11:53 pm"

Hi,

I believe there are two things here that you need to consider before
you can see any queue build up in ipq:

 1. you should generate packets (way) faster than the card is able
    to handle them;
 2. the network card itself might be able to queue multiple packets in
    the "transmit ring";

To check whether #2 is true, you should either look at the driver, or
trace how fast ipq is drained (e.g. take timestamps) and see whether it
is drained faster than the packet transmission time would allow.
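A quick way to get the timestamps is a printf in the driver's start
routine, e.g. (a sketch for the xl driver; any driver's start routine
works the same way):

    /* pci/if_xl.c, xl_start(): timestamp each dequeue */
    struct timeval tv;

    IF_DEQUEUE(&ifp->if_snd, m_head);
    if (m_head != NULL) {
        microtime(&tv);
        printf("xl_start: dequeue at %ld.%06ld\n",
            (long)tv.tv_sec, tv.tv_usec);
    }

If successive dequeues are spaced closer than one packet transmission
time, the card is buffering packets in its transmit ring.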

Re. #1, remember that on a 100Mbit net a full-sized packet goes out
in some 100us, which is fast. Maybe you have already done this, but
just in case: to see queues build up in ipq, you should preferably run
your tests with reasonably long bursts of full-sized UDP packets (that
might mean some 50-100 packets if there is queueing in the card), and
on a 10Mbit/s link.
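(To check the arithmetic: a maximum ethernet frame occupies about 1538
byte-times on the wire, counting preamble and inter-frame gap, so
1538 * 8 / 100e6 = ~123us per packet at 100Mbit/s, and ten times that,
roughly 1.2ms, at 10Mbit/s.)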

	cheers
	luigi

> 
> as part of my thesis research, I'm implementing something similar to the
> POSIX idle-time CPU scheduler for other resource types, one being network
> I/O. The basic idea is to substitute two-level queues for the standard
> ones. I'm seeing some unexpected things (explained below), but let me first
> outline what I'm doing exactly:
> 
> 1. I extend the ifnet structure to contain a second ifqueue for idle-time
> traffic, and declare a new flag for mbufs to indicate whether network
> idle-time processing should be done.
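> 
> In sketch form (simplified; the field name and flag value here are
> illustrative, not necessarily the exact ones in my tree):
> 
>     /* net/if_var.h: second send queue in struct ifnet */
>     struct ifnet {
>         ...
>         struct ifqueue if_snd;      /* regular transmit queue */
>         struct ifqueue if_idlesnd;  /* new: idle-time transmit queue */
>         ...
>     };
> 
>     /* sys/mbuf.h: new m_flags bit (pick an unused value) */
>     #define M_IDLE  0x8000          /* new: idle-time traffic */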
> 
> 2. In sosend(), I check if the sending process is running at a POSIX
> idle-time priority. If so, I set the idle-time flag in the mbuf.
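> 
> Roughly (a sketch; the exact test depends on the kernel version and on
> where the proc pointer is available in sosend()):
> 
>     /* kern/uipc_socket.c, sosend(): mark idle-time senders' packets */
>     if (p != NULL && p->p_rtprio.type == RTP_PRIO_IDLE)
>         top->m_flags |= M_IDLE;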
> 
> 3. In ether_output_frame(), I check if the idle-time flag is set on an
> mbuf, and if so, enqueue it in the interface's idle-time queue (the
> default queue otherwise).
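> 
> I.e., something like this (a sketch; error handling as in the stock code):
> 
>     /* net/if_ethersubr.c, ether_output_frame(): pick a queue */
>     struct ifqueue *ifq;
>     int s;
> 
>     ifq = (m->m_flags & M_IDLE) ? &ifp->if_idlesnd : &ifp->if_snd;
>     s = splimp();
>     if (IF_QFULL(ifq)) {
>         IF_DROP(ifq);
>         splx(s);
>         m_freem(m);
>         return (ENOBUFS);
>     }
>     IF_ENQUEUE(ifq, m);
>     if ((ifp->if_flags & IFF_OACTIVE) == 0)
>         (*ifp->if_start)(ifp);
>     splx(s);
>     return (0);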
> 
> 4. In xl_start() (my onboard chip happens to use the xl driver), I first
> check the default queue for any mbufs ready to send. If there are none, I
> try the idle-time queue. If an mbuf could be dequeued from either queue, I
> continue with normal outbound processing (the mbuf is picked up by the NIC).
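> 
> The dequeue logic in xl_start() then becomes (sketch):
> 
>     /* pci/if_xl.c, xl_start(): regular queue first, idle queue second */
>     IF_DEQUEUE(&ifp->if_snd, m_head);
>     if (m_head == NULL)
>         IF_DEQUEUE(&ifp->if_idlesnd, m_head);
>     if (m_head == NULL)
>         break;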
> 
> Unfortunately, this scheme does not work. First experiments have shown
> that idle-time network performance is practically identical to
> regular-priority performance. I measured it going from a slower (10Mb/s)
> to a faster (100Mb/s) host through a private switch, so the NIC should
> be the bottleneck (the processors are both 800MHz PIIIs). The new code
> is in fact executed; I have traced it heavily.
> 
> Closer inspection revealed that both the ifnet ifqueues and the driver's
> transmit chain are always empty upon enqueue/dequeue. Thus, even though
> my fancy queueing code is executed, it has no effect, since there never
> are any queues.
> 
> Can someone shed some light on whether this is expected behavior? Wouldn't
> that mean that as packets are generated by the socket layer, they are
> handed down through the kernel to the driver one-by-one, incurring an
> interrupt for each packet? Or am I missing the obvious?
> 
> Thanks,
> Lars
> -- 
> Lars Eggert <larse@isi.edu>                 Information Sciences Institute
> http://www.isi.edu/larse/                University of Southern California