Date: Sun, 16 Oct 2005 00:21:45 -0400
From: Garrett Wollman <wollman@csail.mit.edu>
To: Bruce Evans <bde@zeta.org.au>
Cc: net@freebsd.org
Subject: Re: Call for performance evaluation: net.isr.direct (fwd)
Message-ID: <17233.54617.89368.645866@khavrinen.csail.mit.edu>
In-Reply-To: <20051016135234.T86712@delplex.bde.org>
References: <17231.43525.446450.161986@grasshopper.cs.duke.edu> <13600.1129298731@critter.freebsd.dk> <17231.50841.442047.622878@grasshopper.cs.duke.edu> <20051015092141.F1403@epsplex.bde.org> <20051015194738.C66245@fledge.watson.org> <20051016135234.T86712@delplex.bde.org>
<<On Sun, 16 Oct 2005 14:06:32 +1000 (EST), Bruce Evans <bde@zeta.org.au> said:

> Probably the problem is largest for latency, especially in benchmarks.
> Latency benchmarks probably have to start cold, so they have no chance
> of queue lengths > 1, so there must be a context switch per packet and
> may be 2.

It has frequently been proposed that one of the deficiencies of the
sockets model is that too much work must take place in interrupt
context.  Several alternatives have been suggested, including
user-mode protocol processing (e.g., Exokernel) and event-driven
receive (can't think of an example here from the literature).

In an alternate universe with truly pervasive threading, one might
require an application to "donate" a thread to protocol processing --
effectively combining the two approaches I mentioned -- which would be
the way to "win" such latency benchmarks.  (The application donating
the processing power is then also able to donate its memory.)

-GAWollman