Date: Tue, 09 Oct 2001 12:28:02 -0700
From: Terry Lambert <tlambert2@mindspring.com>
To: "Kenneth D. Merry"
Cc: current@FreeBSD.ORG
Subject: Re: Why do soft interrupt coalescing?

"Kenneth D. Merry" wrote:

[ ... soft interrupt coalescing ... ]

> As you say above, this is actually a good thing.  I don't see how this
> ties into the patch to introduce some sort of interrupt coalescing into
> the ti(4) driver.  IMO, you should be able to tweak the coalescing
> parameters on the board to do what you want.

I have tweaked all tunables on the board, and I have not gotten anywhere
near the increased performance.  The limit on how far you can push this
is based on how much RAM you can have on the card, and on the limits of
coalescing.

Here's the reason: when the board receives packets, they get DMA'ed into
the ring.  No matter how large the ring is, it won't help if the ring is
not being emptied asynchronously relative to being filled.

In the case of a full-on receiver livelock situation, the ring contents
will be continuously overwritten.  This is actually what happens when
you put a ti card into a machine with a slower processor and hit it
hard.

In the case of interrupt processing, where you jam the data up through
ether_input() at interrupt time, the ring can potentially overrun as
well.  Admittedly, you can spend a huge percentage of your CPU time in
interrupt processing and, if your CPU is fast enough, unload the queue
very quickly.  But if you then look at doing this for multiple gigabit
cards at the same time, you quickly reach the limits... and you spend so
much of your time in interrupt processing that you spend none running
NETISR.  So you have moved your livelock up one layer.

In any case, doing the coalescing on the board delays packet processing
until that number of packets has been received, or a timer expires.  The
timer latency must be increased in proportion to the maximum number of
packets you coalesce into a single interrupt.  In other words, you do
not interleave your I/O when you do this, and the bursty conditions that
leave your coalescing window full or close to full are exactly the
conditions under which you should be attempting the maximum concurrency
you can possibly attain.

Basically, in any case where the load is high enough to trigger the
hardware coalescing, the ring would need to be the next power of two
larger to ensure that the end does not overwrite the beginning of the
ring.
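To make the overwrite argument concrete, here is a minimal,
self-contained C sketch of a receive ring being drained at interrupt
time.  This is not the ti(4) driver or the patch under discussion, and
every name in it (sketch_softc, stack_input(), RX_BURST_LIMIT, and so
on) is invented for illustration; the point is only that the consumer
index has to keep up with the producer index, and that once the card
gets a full ring ahead, the oldest unprocessed descriptors have already
been overwritten.

    #include <stddef.h>

    #define RX_RING_SIZE   512     /* hardware ring entries (power of two) */
    #define RX_BURST_LIMIT  64     /* bound the work done per interrupt    */

    struct rx_desc { void *buf; size_t len; };

    struct sketch_softc {                  /* hypothetical driver state    */
            struct rx_desc rx_ring[RX_RING_SIZE];
            unsigned       rx_prod;        /* advanced by the card's DMA   */
            unsigned       rx_cons;        /* advanced by the driver       */
            unsigned long  overwritten;    /* descriptors lost to overrun  */
    };

    /* Stand-ins for handing a frame to the stack and unmasking the IRQ. */
    static void stack_input(struct rx_desc *d)          { (void)d;  }
    static void hw_enable_intr(struct sketch_softc *sc) { (void)sc; }

    /*
     * One hardware interrupt drains everything that has accumulated in
     * the ring (up to a burst limit) instead of one packet per interrupt.
     * If the card ever gets a full ring ahead of the driver, the oldest
     * unprocessed entries have already been overwritten -- the livelock
     * overwrite case described above.
     */
    static void
    sketch_rx_intr(struct sketch_softc *sc)
    {
            unsigned done = 0;

            if (sc->rx_prod - sc->rx_cons > RX_RING_SIZE) {
                    sc->overwritten += (sc->rx_prod - sc->rx_cons) - RX_RING_SIZE;
                    sc->rx_cons = sc->rx_prod - RX_RING_SIZE;  /* drop what was lost */
            }

            while (sc->rx_cons != sc->rx_prod && done < RX_BURST_LIMIT) {
                    stack_input(&sc->rx_ring[sc->rx_cons % RX_RING_SIZE]);
                    sc->rx_cons++;
                    done++;
            }

            /* Re-enable the card's interrupt only after the burst is drained. */
            hw_enable_intr(sc);
    }

The burst limit is the same trade-off described above: make it too small
and you are back to one interrupt per packet; make it unbounded and the
interrupt handler starves NETISR and everything below it.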
In practice, the firmware on the card does not support growing the ring
that way, so what you do instead is push through a couple of packets
that may have been corrupted by DMA occurring after the fact -- in
other words, you drop packets.  This is arguably "correct", in that it
permits you to shed load, _but_ the DMAs still occur into your rings;
it would be much better if the load were shed by the card firmware,
based on some knowledge of ring depth instead (RED queueing), since
this would leave the bus clear for other traffic (e.g. communication
with main memory to provide network content for the cards for, e.g., an
NFS server, etc.).

Without hacking firmware, the best you can do is to ensure that you
process as much of all the traffic as you possibly can, and that means
avoiding livelock.

[ ... LRP ... ]

> That sounds cool, but I still don't see how this ties into the patch
> you sent out.

OK.  LRP removes NETISR entirely.  This is the approach Van Jacobson
stated he used in his mythical TCP/IP stack, which we may never see.

What this does is push the stack processing down to interrupt time for
the hardware interrupt.  This is a good idea, in that it avoids the
livelock where NETISR never runs because you are too busy taking
hardware interrupts to be able to do any stack processing.

The way this ties into the patch is that doing the stack processing at
interrupt time increases the per-packet ether_input() processing
overhead.  What this means is that you get more benefit from the soft
interrupt coalescing than you otherwise would when you are doing LRP.

But you do get *some* benefit from doing it anyway, even if your
ether_input() processing is light: so long as it is non-zero, you get
benefit.

Note that LRP itself is not a panacea for livelock, since it just moves
the scheduling problem from IRQ<->NETISR scheduling to NETISR<->process
scheduling.  You end up needing to implement weighted fair share or
other code to ensure that the user space process is permitted to run,
so you end up monitoring queue depth or something else, and deciding
not to reenable interrupts on the card until you hit a low water mark,
indicating processing has taken place (see the papers by Druschel et
al. and Floyd et al.).
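The low-water-mark idea in the last paragraph is easy to sketch.
Again, this is illustrative C rather than LRP itself or any shipping
driver, and the names and thresholds (QUEUE_HIWAT, QUEUE_LOWAT,
hw_mask_rx_intr(), and so on) are invented: receive interrupts stay
masked while the protocol queue is above a high-water mark, and are
only unmasked again once the consumer has drained it back below a
low-water mark, so the card cannot immediately push the host back into
livelock.

    #include <stdbool.h>

    #define QUEUE_HIWAT 256    /* mask receive interrupts above this depth    */
    #define QUEUE_LOWAT  64    /* unmask once drained back down to this depth */

    struct sketch_queue {
            unsigned depth;          /* frames awaiting upper-layer processing */
            bool     intr_masked;    /* is the card's receive interrupt off?   */
    };

    /* Stand-ins for touching the card's interrupt mask register. */
    static void hw_mask_rx_intr(struct sketch_queue *q)   { q->intr_masked = true;  }
    static void hw_unmask_rx_intr(struct sketch_queue *q) { q->intr_masked = false; }

    /* Producer side: the driver queues a received frame toward the stack. */
    static void
    queue_enqueue(struct sketch_queue *q)
    {
            q->depth++;
            if (!q->intr_masked && q->depth >= QUEUE_HIWAT)
                    hw_mask_rx_intr(q);   /* shed load at the card, not the host */
    }

    /* Consumer side: protocol or user-level processing removes a frame. */
    static void
    queue_dequeue(struct sketch_queue *q)
    {
            if (q->depth > 0)
                    q->depth--;
            if (q->intr_masked && q->depth <= QUEUE_LOWAT)
                    hw_unmask_rx_intr(q); /* processing has caught up; resume RX */
    }

The gap between the two marks is the hysteresis that guarantees some
amount of processing has actually taken place before interrupts resume,
which is the point of the low-water mark above.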
> > > It isn't terribly clear what you're doing in the patch, since it
> > > isn't a context diff.
> >
> > It's a "cvs diff" output.  You could always check out a sys tree,
> > apply it, and then cvs diff -c (or -u or whatever your favorite
> > option is) to get a diff more to your tastes.
>
> As Peter Wemm pointed out, we can't use non-context diffs safely
> without the exact time, date and branch of the source files.  This
> introduces an additional burden for no real reason other than you
> neglected to use -c or -u with cvs diff.

I was chewed on before for context diffs.  As I said before, I can
provide them, if that's the current coin of the realm; it doesn't
matter to me.

[ ... jumbogram autonegotiation ... ]

> > I believe it was the implementation of the length field.  I would
> > have to get more information from the person who did the
> > interoperability testing for the autonegotiation (which failed
> > between the Tigon II and the Intel Gigabit cards).  I can assure
> > you anecdotally, however, that autonegotiation _did_ fail.
>
> I would believe that autonegotiation (i.e. 10/100/1000) might fail,
> especially if you're using 1000BaseT Tigon II boards.  However, I
> would like more details on the failure.  It's entirely possible that
> it could be fixed in the firmware, probably without too much trouble.

Possibly.  The problem I have is that you simply can't use jumbograms
in a commercial product if they can't be autonegotiated, or you will
burn all your profit in technical support calls very quickly.

> I find it somewhat hard to believe that Intel would ship a gigabit
> board that didn't interoperate with the board that up until recently
> was probably the predominant gigabit board out there.

Intel can autonegotiate with several manufacturers, and with the Tigon
III.  It can interoperate with the Tigon II, if you statically
configure it for jumbograms.

A big problem with jumbograms is that a number of cards have an 8k
limit, above which they can't offload checksum processing.

Another interesting thing is that it is often a much better idea to
negotiate an 8k MTU for jumbograms.  The reason for this is that it
fits evenly into 4 mbuf clusters (4 x 2048 bytes = 8192 bytes).

There are actually some good arguments in there for having
non-fixed-sized mbufs...

-- Terry