From owner-freebsd-hackers  Sat Nov 27 19:18:02 1999
Delivered-To: freebsd-hackers@freebsd.org
Received: from apollo.backplane.com (apollo.backplane.com [216.240.41.2])
	by hub.freebsd.org (Postfix) with ESMTP id ECBE314EF6
	for ; Sat, 27 Nov 1999 19:17:55 -0800 (PST)
	(envelope-from dillon@apollo.backplane.com)
Received: (from dillon@localhost)
	by apollo.backplane.com (8.9.3/8.9.1) id TAA40584;
	Sat, 27 Nov 1999 19:17:42 -0800 (PST)
	(envelope-from dillon)
Date: Sat, 27 Nov 1999 19:17:42 -0800 (PST)
From: Matthew Dillon <dillon@apollo.backplane.com>
Message-Id: <199911280317.TAA40584@apollo.backplane.com>
To: Tony Finch
Cc: hackers@FreeBSD.ORG
Subject: Re: mbuf wait code (revisited) -- review?
References:
Sender: owner-freebsd-hackers@FreeBSD.ORG
Precedence: bulk
X-Loop: FreeBSD.ORG

:of other connections. My solution was the same as Matt's :-)
:(I'm not happy about the extra context switching that it requires but
:I was more interested in working code than performance; I haven't
:benchmarked it.)
:
:Tony.

    Yah, neither was I, but I figured that the overhead was (A) deterministic,
    and (B) absorbed under heavy loads, because the subprocess in question was
    probably already in a run state under those conditions.  So the method
    scales to load quite well and gives us loads of other features.

    For example, I could do realtime reverse DNS lookups with a single cache
    (in the main acceptor process) and a pool of DNS lookup subprocesses
    that I communicated with over pipes.  Thus the main load-bearing threads
    had very small core loops, which was good for the L1/L2 cpu caches.
    It's kinda funny how something you might expect to generate more
    overhead can actually generate less.

						-Matt

To Unsubscribe: send mail to majordomo@FreeBSD.org
with "unsubscribe freebsd-hackers" in the body of the message
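
A minimal sketch of the pattern Matt describes above, not his actual code:
a parent "acceptor" process forks a small pool of worker subprocesses and
hands each blocking reverse-DNS lookup to a worker over a pipe (a socketpair
here, so the same descriptor carries the reply).  NWORKERS, the
one-struct-in_addr-per-request wire format, and the worker() helper are all
illustrative assumptions.

/*
 * Sketch: pool of DNS-lookup subprocesses fed over pipes, so the
 * parent's core loop never blocks in the resolver.  Wire format and
 * names are illustrative, not from the original message.
 */
#include <sys/types.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <netdb.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

#define NWORKERS 4

/* Worker loop: read one IPv4 address, write back a hostname line. */
static void
worker(int fd)
{
	struct in_addr addr;
	struct hostent *hp;
	char buf[256];

	while (read(fd, &addr, sizeof(addr)) == sizeof(addr)) {
		hp = gethostbyaddr(&addr, sizeof(addr), AF_INET);
		snprintf(buf, sizeof(buf), "%s\n",
		    hp != NULL ? hp->h_name : "unknown");
		(void)write(fd, buf, strlen(buf));
	}
	_exit(0);
}

int
main(void)
{
	int fds[NWORKERS];
	int sv[2];
	int i;

	/* Fork the pool; each worker shares a socketpair with the parent. */
	for (i = 0; i < NWORKERS; i++) {
		if (socketpair(AF_UNIX, SOCK_STREAM, 0, sv) < 0)
			return (1);
		switch (fork()) {
		case -1:
			return (1);
		case 0:
			close(sv[0]);
			worker(sv[1]);
			/* NOTREACHED */
		default:
			close(sv[1]);
			fds[i] = sv[0];
		}
	}

	/* Demo: dispatch one lookup to the first worker, print the reply. */
	{
		struct in_addr addr;
		char buf[256];
		ssize_t n;

		inet_aton("127.0.0.1", &addr);
		(void)write(fds[0], &addr, sizeof(addr));
		n = read(fds[0], buf, sizeof(buf) - 1);
		if (n > 0) {
			buf[n] = '\0';
			printf("resolved: %s", buf);
		}
	}
	return (0);
}

The design point is the one in the message: the blocking resolver calls are
isolated in workers that, under load, are probably already runnable, so the
extra context switches are largely absorbed, while the acceptor's core loop
stays small and cache-friendly.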