Date: Tue, 07 Aug 2001 11:15:05 -0700
From: Terry Lambert <tlambert2@mindspring.com>
To: Matt Dillon <dillon@earth.backplane.com>
Cc: Mike Smith <msmith@freebsd.org>, Zhihui Zhang <zzhang@cs.binghamton.edu>,
    freebsd-hackers@freebsd.org
Subject: Re: Allocate a page at interrupt time
Message-ID: <3B703029.2BB6D25A@mindspring.com>
References: <200108070739.f777dmi08218@mass.dis.org> <3B6FB0AE.8D40EF5D@mindspring.com> <200108071655.f77Gt9M32808@earth.backplane.com>
Matt Dillon wrote:
> :What "this", exactly?
> :
> :That "virtual wire" mode is actually a bad idea for some
> :applications -- specifically, high speed networking with
> :multiple gigabit ethernet cards?
>
>     All the CPUs don't get the interrupt, only one does.

I think that you will end up taking an IPI (Inter Processor
Interrupt) to shoot down the cache line during an invalidate cycle
when moving an interrupt processing thread from one CPU to another.
For multiple high speed interfaces (disk or network; it doesn't
matter which), you will end up burning a *lot* of time without a
lockdown.

You might be able to avoid this by doing some of the tricks I've
discussed with Alfred to ensure that there is no lock contention in
the non-migratory case for KSEs (or kernel interrupt threads) to
handle per-CPU scheduling, but I think that the interrupt masking
will end up being very hard to manage, and you will get the same
effect as locking the interrupt to a particular CPU... if you are
lucky.

Any case which _did_ invoke a lock and resulted in contention would
require at least a barrier instruction.  I guess you could do it in a
non-cacheable page to avoid the TLB interaction, plus another IPI for
an update or invalidate cycle for the lock, but then you are limited
to memory speed, which these days is getting down to around a factor
of 10 (133MHz) slower than CPU speed, and that's one heck of a stall
hit to take.

> :That Microsoft demonstrated that wiring down interrupts
> :to a particular CPU was a good idea, and kicked both Linux'
> :and FreeBSD's butt in the test at ZD Labs?
>
>     Well, if you happen to have four NICs and four CPUs, and
>     you are running them all full bore, I would say that
>     wiring the NICs to the CPUs would be a good idea.  That
>     seems like a rather specialized situation, though.

I don't think so.  These days, interrupt overhead can come from many
places, including intentional denial of service attacks.

If you have an extra box around, I'd suggest that you install QLinux
and benchmark it side by side against FreeBSD under an extreme load,
and watch the FreeBSD system's performance fall off when interrupt
overhead becomes so high that NETISR effectively never gets a chance
to run.

I also suggest using 100Base-T cards, since the interrupt coalescing
on gigabit cards could prevent you from observing the livelock from
interrupt overload, unless you could load your machine to full wire
speed (~950 Mbit/s) so that your PCI bus transfer rate becomes the
bottleneck.

I know you were involved in some of the performance tuning that was
attempted immediately after the ZD Labs tests, so I know you know
this was a real issue; I think it still is.

-- Terry
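(For illustration only: a minimal userland sketch, assuming the
cpuset(2) interface that appeared in later FreeBSD releases, long
after this thread, of what pinning a per-NIC processing thread to a
single CPU could look like.  The one-thread-per-NIC structure and the
choice of CPU 0 are assumptions made for the example, not anything
from the discussion above.)

#include <sys/param.h>
#include <sys/cpuset.h>

#include <err.h>

static void
pin_current_thread(int cpu)
{
	cpuset_t mask;

	CPU_ZERO(&mask);
	CPU_SET(cpu, &mask);

	/* An id of -1 with CPU_WHICH_TID means "the calling thread". */
	if (cpuset_setaffinity(CPU_LEVEL_WHICH, CPU_WHICH_TID, -1,
	    sizeof(mask), &mask) != 0)
		err(1, "cpuset_setaffinity");
}

int
main(void)
{
	/* Hypothetical: dedicate CPU 0 to one NIC's packet processing. */
	pin_current_thread(0);

	/* ... the per-NIC receive/processing loop would run here ... */
	return (0);
}

Binding the interrupt handler itself would happen inside the kernel
rather than through this userland call, but the cache-locality
argument (keep one interface's state on one CPU so its lines are
never shot down by migration) is the same.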