Date: Sat, 27 Nov 1999 19:17:42 -0800 (PST)
From: Matthew Dillon <dillon@apollo.backplane.com>
To: Tony Finch <dot@dotat.at>
Cc: hackers@FreeBSD.ORG
Subject: Re: mbuf wait code (revisited) -- review?
Message-ID: <199911280317.TAA40584@apollo.backplane.com>
References: <Pine.LNX.3.96.991118114107.30813W-100000@devserv.devel.redhat.com> <E11r4Cg-00050F-00@fanf.eng.demon.net>
:of other connections. My solution was the same as Matt's :-)
:(I'm not happy about the extra context switching that it requires but
:I was more interested in working code than performance; I haven't
:benchmarked it.)
:
:Tony.
Yah, neither was I, but I figured that the overhead was (A) deterministic,
and (B) absorbed under heavy loads because the subprocess in question was
probably already in a run state under those conditions. So the method
scales to load quite well and gives us loads of other features. For
example, I could do realtime reverse DNS lookups with a single cache
(in the main acceptor process) and then a pool of DNS lookup subprocesses
which I communicated with over pipes. Thus the main load-bearing threads
had very small core loops, which was good for the L1/L2 CPU caches.
It's kinda funny how something you might expect to generate more overhead
can actually generate less.
-Matt
