From owner-freebsd-smp Sat Oct 5 14:47:44 1996
Return-Path: owner-smp
Received: (from root@localhost) by freefall.freebsd.org (8.7.5/8.7.3) id OAA10411 for smp-outgoing; Sat, 5 Oct 1996 14:47:44 -0700 (PDT)
Received: from critter.tfs.com ([140.145.230.177]) by freefall.freebsd.org (8.7.5/8.7.3) with ESMTP id OAA10400; Sat, 5 Oct 1996 14:47:32 -0700 (PDT)
Received: from critter.tfs.com (localhost.tfs.com [127.0.0.1]) by critter.tfs.com (8.7.5/8.7.3) with ESMTP id XAA04190; Sat, 5 Oct 1996 23:47:02 +0200 (MET DST)
To: Peter Wemm
cc: Chris Csanady , freebsd-smp@freebsd.org
Subject: Re: Second processor does nothin?!
In-reply-to: Your message of "Sun, 06 Oct 1996 04:43:20 +0800." <199610052043.EAA01848@spinner.DIALix.COM>
Date: Sat, 05 Oct 1996 23:47:02 +0200
Message-ID: <4188.844552022@critter.tfs.com>
From: Poul-Henning Kamp
Sender: owner-smp@freebsd.org
X-Loop: FreeBSD.org
Precedence: bulk

>I suspect that the reason things tend to run on cpu#1 first is because
>cpu1 is never interrupted, except for traps generated by the process it
>is currently executing. This probably biases things somewhat, since when
>a user-mode process starts up, if it begins on cpu0 it won't be long before
>its quantum expires on #0, and cpu#1 grabs it. And it'll stay there as
>long as it pleases. This is probably enough to explain the bias.

Actually, I talked with a Very Old Man some time ago, and he said that we might not really want to change that habit. His argument was derived from computers I've never had to work with, but he sure knew where all his towels were.

Basically, what he told me was that with some number of CPUs you will want to dedicate some of them to "batch" kinds of applications and some to interactive ones. Letting the batch CPUs have larger, potentially infinite quantums will improve the benefit we get from caches and so on, not just because they concentrate on those jobs, but also because your heavy-duty I/O interrupts end up in cache on the CPU they hit.
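A minimal sketch of the idea, in C, assuming a hypothetical per-CPU configuration table (the names `cpu_conf` and `quantum_for_cpu` are illustrative, not actual FreeBSD code):

```c
/* Sketch only: hypothetical per-CPU quantum table, as described above.
 * CPUs marked "batch" get a much longer quantum so a job's working set
 * survives in cache; "interactive" CPUs keep the usual short quantum. */

enum cpu_role { CPU_INTERACTIVE, CPU_BATCH };

struct cpu_conf {
    enum cpu_role role;
    long quantum_us;    /* tweakable per CPU, e.g. driven by the APIC timer */
};

/* Illustrative two-CPU setup: cpu0 takes interrupts and interactive work,
 * cpu1 runs batch jobs with a 10x longer quantum. */
struct cpu_conf cpu_conf[2] = {
    { CPU_INTERACTIVE,  10000 },   /* 10 ms */
    { CPU_BATCH,       100000 },   /* 100 ms */
};

long quantum_for_cpu(int cpu)
{
    return cpu_conf[cpu].quantum_us;
}
```

The per-CPU table is the point: if each CPU's quantum timer is independently tweakable, the "infinite quantum for batch CPUs" policy is just a configuration choice, not a scheduler rewrite.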
The point is that interactive jobs that end up on a "batch" CPU don't really suffer; they will do something that deschedules them anyway. The one case where you suffer is when a low-priority batch process gets on the CPU and a high-priority interactive job cannot get it, but according to him that would be a rare thing indeed, since even "batch" jobs do a lot of I/O and generally deschedule at least several times per second because of it.

He suggested keeping track of each process's "mean time between voluntary deschedules" and assigning it to a CPU based on that. It's certainly not an uninteresting idea.

He said that if he were involved (something I'm not very lucky at making happen), "he would make sure that he could tie each IRQ to a particular (group of) CPU(s) and that the quantum timers for all CPUs would be tweakable." Which I think is common sense :-)

Even though the APIC timer is quite junky from various points of view, it could be used as a quantum counter, and thus be per-CPU.

Maybe we need to start measuring the rate of voluntary vs. involuntary deschedules in FreeBSD.

--
Poul-Henning Kamp           | phk@FreeBSD.ORG       FreeBSD Core-team.
http://www.freebsd.org/~phk | phk@login.dknet.dk    Private mailbox.
whois: [PHK]                | phk@ref.tfs.com       TRW Financial Systems, Inc.
Future will arrive by its own means, progress not so.
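P.S. The per-process bookkeeping suggested above could be sketched like this, in C. Everything here is hypothetical (the struct, function names, EMA weighting, and the 50 ms threshold are all illustrative assumptions, not FreeBSD code):

```c
/* Sketch only: track an exponential moving average of how long a process
 * runs before it voluntarily deschedules.  A long average suggests a
 * "batch" process that should go to a long-quantum CPU. */

struct sched_hist {
    long mean_run_us;   /* EMA of run time between voluntary yields */
};

/* Called when a process voluntarily deschedules after run_us microseconds
 * on the CPU.  EMA weighting: 1/8 new sample, 7/8 history. */
void note_voluntary_yield(struct sched_hist *h, long run_us)
{
    h->mean_run_us = (7 * h->mean_run_us + run_us) / 8;
}

/* Above the (arbitrary) 50 ms threshold, prefer a "batch" CPU with a long
 * quantum; below it, prefer an "interactive" CPU. */
int prefers_batch_cpu(const struct sched_hist *h)
{
    return h->mean_run_us > 50000;
}
```

The same counters would give you the voluntary vs. involuntary deschedule rate for free: bump one counter in `note_voluntary_yield()` and another wherever the quantum timer preempts.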