Date:        Fri, 10 Aug 2001 03:13:51 -0700
From:        Terry Lambert <tlambert2@mindspring.com>
To:          Mike Smith <msmith@freebsd.org>
Cc:          Greg Lehey <grog@freebsd.org>, void <float@firedrake.org>, freebsd-hackers@freebsd.org
Subject:     Re: Allocate a page at interrupt time
Message-ID:  <3B73B3DF.EC098517@mindspring.com>
References:  <200108091708.f79H8Xq01162@mass.dis.org>
Mike Smith wrote:
> The basic problem here is that you have decided what "interrupt
> threads" are, and aren't interested in the fact that what FreeBSD
> calls "interrupt threads" are not the same thing, despite being told
> this countless times, and despite it being embodied in the code
> that's right under your nose.
>
> You believe that an interrupt results in a make-runnable event, and
> at some future time, the interrupt thread services the interrupt
> request.
>
> This is not the case, and never was.  The entire point of having
> interrupt threads is to allow interrupt handling routines to block in
> the case where the handler/driver design does not allow for
> nonblocking synchronisation between the top and bottom halves.

So enlighten me, since the code right under my nose often does not run
on my dual-CPU system, and I like prose anyway, preferably backed by
data and repeatable research results.

What do interrupt threads buy you that isn't there in 4.x, besides
being one hammer among dozens that can hit the SMP nail?  Why don't I
want to run my interrupt to completion, instead of using an interrupt
thread to do the work?  On what context do they block?  Why is it not
better to change the handler/driver design to allow for nonblocking
synchronization?

Personally, when I get an ACK from a SYN/ACK I sent in response to a
SYN, and the connection completes, I think that running the stack at
interrupt all the way up to the point of putting the completed new
socket connection on the associated listening socket's accept list is
the correct thing to do; likewise anything else that would result in a
need for upper-level processing, _at all_.

This lets me process everything I can, and drop everything I can't, as
early as possible, before I've invested a lot of futile effort in
processing that will come to naught.

This is what LRP does.  This is what Van Jacobson's stack
(van@packetdesign.com) does.
Why are you right, and Mohit Aron, Jeff Mogul, Peter Druschel, and Van
Jacobson wrong?

> Most of the issues you raise regarding livelock can be mitigated with
> thoughtful driver design.  Eventually, however, the machine hits the
> wall, and something has to break.  You can't avoid this, no matter
> how you try; the goal is to put it off as long as possible.
>
> So.  Now you've been told again.

Tell me why it has to break, instead of me disabling receipt of the
packets by the card in order to shed load before it becomes an issue
for the host machine's bus, interrupt processing system, etc.

Are you claiming that dropping packets that are physically impossible
to handle, as early as possible, while handling _all_ packets that are
physically possible to handle, is "broken", or is somehow "unpossible"?

Thanks for any light you can shed on the subject,
-- Terry

PS: If you want to visit me at work, I'll show you code running in a
significantly modified FreeBSD 4.3 kernel.