Date: Sun, 07 Oct 2001 00:56:44 -0700
From: Terry Lambert <tlambert2@mindspring.com>
To: "Kenneth D. Merry" <ken@kdm.org>
Cc: current@FreeBSD.ORG
Subject: Re: Why do soft interrupt coelescing?
Message-ID: <3BC00ABC.20ECAAD8@mindspring.com>
References: <3BBF5E49.65AF9D8E@mindspring.com> <20011006144418.A6779@panzer.kdm.org>
"Kenneth D. Merry" wrote: > [ I don't particularly want to get involved in this thread...but... ] > > Can you explain why the ti(4) driver needs a coalescing patch? It already > has in-firmware coalescing paramters that are tuneable by the user. It > also already processes all outstanding BDs in ti_rxeof() and ti_txeof(). The answer to your question is that the card will continue to DMA into the ring buffer, even though you are in the middle of the interrupt service routine, and that the amount of time taken in ether input is long enough that you can have more packets come in while you are processing (this is actually a good thing). This is even *more* likely with hardware interrupt coelescing, since the default setting is to coelesce 32 packets into a single interrupt, meaning that you have up to 32 iterations of ether input to call, and thus the amount of time spent processing them actually affords *more* time for additional packets to come in. In my own personal situation, I have also implemented Lazy Receiver Processing (per the research done by Rice University and in the "Click Router" project; no relation to "ClickArray"), which does all stack processing at the hardware interrupt, rather then queueing between the hardware interrupt and NETISR, so my processing path is actually longer; I get more benefit from the change than you would, but on a heavily loaded system, you would also get some benefit, if you were able to load the wire heavily enough. The LRP implementation should be considered by FreeBSD as well, since it takes the connection rate from ~7,000/second up to ~23,000/second, by avoiding the NetISR. Rice University did an implementation in 2.2.x, and then another one (using resource containers -- I recommend against this one, not only because of license issues with the second implementation) for 4.2; both sets of research were done in FreeBSD. Unfortunately, neither implementation was production quality (among other things, they broke RFC 1323, and they have to run a complete duplicate stack as a different protocol family because some of their assumptions make it non-interoperable with other protocol stacks). > It isn't terribly clear what you're doing in the patch, since it isn't a > context diff. It's a "cvs diff" output. You could always check out a sys tree, apply it, and then cvs diff -c (or -u or whatever your favorite option is) to get a diff more to your tastes. > You also never gave any details behind your statement last week: > "Because at the time the Tigon II was released, the jumbogram > wire format had not solidified. Therefore cards built during > that time used different wire data for the jumbogram framing." > > I asked, in response: > > "Can you give more details? Did someone decide on a different ethertype > than 0x8870 or something? > > That's really the only thing that's different between a standard ethernet > frame and a jumbo frame. (other than the size)" I believe it was the implementation of the length field. I would have to get more information from the person who did the interoperability testing for the autonegotiation (which failed between the Tigon II and the Intel Gigabit cards). I can assure you anecdotally, however, that autonegotiation _did_ fail. -- Terry To Unsubscribe: send mail to majordomo@FreeBSD.org with "unsubscribe freebsd-current" in the body of the message