Message-Id: <199902152004.PAA01666@y.dyson.net>
Subject: Re: Processor affinity?
In-Reply-To: <864sonmqvm.fsf@not.oeno.com> from Ville-Pertti Keinonen at "Feb 15, 99 09:03:09 pm"
To: will@iki.fi (Ville-Pertti Keinonen)
Date: Mon, 15 Feb 1999 15:04:14 -0500 (EST)
Cc: dyson@iquest.net, hackers@FreeBSD.ORG
From: "John S. Dyson"
Reply-To: dyson@iquest.net

Ville-Pertti Keinonen said:
>
> dyson@iquest.net (John S. Dyson) writes:
>
> > # of active CPU cycles.  I do have affinity code for SMP, and it makes
> > a positive difference in performance, even with the big-lock FreeBSD kernel.
>
> What does it do?
>
> In order to be more useful than a last-run-on hint (which is pretty
> much useless for time-sharing), it needs to be able to select a
> thread that doesn't have the highest priority to run when the best
> processor for the highest-priority runnable (but not active) thread
> is (momentarily) running an even higher-priority thread.
>
Yes.

>
> Doing something like this in a BSD scheduler is a bit difficult
> because of the priority buckets.  It seems to me that you either
> give up O(1) thread selection (the Linux folks seem to be happy
> with O(n), but I don't like the idea) or have to do something
> moderately complex (such as per-processor run queues with load
> balancing, like DEC did with OSF/1).
>
> Or did you find a more elegant solution?
>
Nope, but the key is to know when to give up.  The code, as it is now,
bounces processes between CPUs with totally wild abandon.  On a small
number of processors, a little bit of extra work isn't bad.  Scanning
all of the processor queues is not an option, but diddling the
effective priorities a little bit is okay (IMO; a rough sketch of what
I mean is below.)  For realtime, this is of course wrong.

>
> And with affinity, particularly if it is too strong, you'll
> occasionally have far more latency associated with getting a thread
> to run again when the right cpu wasn't available when the thread
> would "naturally" have run.
>
Yes.

>
> I suspect that DEC's scheme does this too easily to interactive
> threads, but haven't done any real testing.
>
IMO, my FreeBSD scheme does appear to improve things and make
performance more consistently high.  It isn't ready for prime time,
because my current kernel work is SMP-only (I cannot build a working
UP kernel!!!)  lat_ctx from lmbench does show a significant (but not
earth-shattering) 20-30% improvement.  lat_ctx is a worst-case
example, so real-world processes will see less.
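To give a rough idea of the kind of priority diddling I mean, here is a
much-simplified sketch.  It is illustrative only: the names, the peek
depth, and the data structures are made up for this example, not taken
from the real kernel.  The point is that selection still starts from the
highest-priority bucket (found with ffs()), and the affinity check only
peeks a bounded number of entries into that one bucket, so we never scan
every queue.

#include <stddef.h>
#include <strings.h>            /* ffs() */

#define NQS        32           /* number of priority buckets */
#define AFFIN_PEEK  3           /* how far into a bucket we will look */

/*
 * Illustrative only -- made-up names, not the real FreeBSD scheduler.
 */
struct proc_l {
        struct proc_l  *p_next;
        int             p_lastcpu;      /* CPU this process last ran on */
};

static struct proc_l   *runq[NQS];      /* one FIFO per priority bucket */
static unsigned int     runq_bits;      /* bit q set => runq[q] nonempty */

/*
 * Pick the next process to run on 'cpu'.  Selecting the bucket is the
 * usual O(1) ffs() trick; the affinity "diddle" is the bounded peek
 * into that single bucket for a process that last ran here.
 */
struct proc_l *
choose_next(int cpu)
{
        struct proc_l *p, *best, **pp;
        int q, i;

        if (runq_bits == 0)
                return (NULL);                  /* nothing runnable */

        q = ffs(runq_bits) - 1;                 /* best nonempty bucket */

        best = runq[q];                         /* default: plain FIFO head */
        for (p = runq[q], i = 0; p != NULL && i < AFFIN_PEEK;
            p = p->p_next, i++) {
                if (p->p_lastcpu == cpu) {      /* prefer a warm-cache process */
                        best = p;
                        break;
                }
        }

        /* Unlink 'best' from its bucket. */
        for (pp = &runq[q]; *pp != best; pp = &(*pp)->p_next)
                ;
        *pp = best->p_next;
        if (runq[q] == NULL)
                runq_bits &= ~(1U << q);

        best->p_lastcpu = cpu;
        return (best);
}

The details can be dressed up in different ways, but the bounded peek is
what keeps the extra work small on a handful of CPUs.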
More work needs to be done in the pipe context-switch area (and in
fact, I had been working on the pipe code, until about 2-3 months ago,
for exactly this reason.)  My time in the last 3 weeks has been tied up
doing what I am paid to do, but now I can look more towards the really
fun stuff.

--
John             | Never try to teach a pig to sing,
dyson@iquest.net | it makes one look stupid
jdyson@nc.com    | and it irritates the pig.