From owner-freebsd-smp Mon Jul 2 9:50:48 2001
Delivered-To: freebsd-smp@freebsd.org
Received: from peorth.iteration.net (peorth.iteration.net [208.190.180.178])
	by hub.freebsd.org (Postfix) with ESMTP id D196E37B403
	for ; Mon, 2 Jul 2001 09:50:44 -0700 (PDT)
	(envelope-from keichii@iteration.net)
Received: by peorth.iteration.net (Postfix, from userid 1001)
	id 58F6A59229; Mon, 2 Jul 2001 11:50:44 -0500 (CDT)
Date: Mon, 2 Jul 2001 11:50:44 -0500
From: "Michael C . Wu"
To: "E.B. Dreger"
Cc: Alfred Perlstein , smp@FreeBSD.ORG
Subject: Re: per cpu runqueues, cpu affinity and cpu binding.
Message-ID: <20010702115044.C99436@peorth.iteration.net>
Reply-To: "Michael C . Wu"
References: <20010702093638.B96996@peorth.iteration.net>
Mime-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
User-Agent: Mutt/1.2.5i
In-Reply-To: ; from eddy+public+spam@noc.everquick.net on Mon, Jul 02, 2001 at 04:42:51PM +0000
X-PGP-Fingerprint: 5025 F691 F943 8128 48A8 5025 77CE 29C5 8FA1 2E20
X-PGP-Key-ID: 0x8FA12E20
Sender: owner-freebsd-smp@FreeBSD.ORG
Precedence: bulk
List-ID:
List-Archive: (Web Archive)
List-Help: (List Instructions)
List-Subscribe:
List-Unsubscribe:
X-Loop: FreeBSD.org

On Mon, Jul 02, 2001 at 04:42:51PM +0000, E.B. Dreger scribbled:
| (thoughts from the sidelines)
|
| > Date: Mon, 2 Jul 2001 09:36:38 -0500
| > From: Michael C . Wu
|
| > First of all, we have two different types of processor affinity:
| > 1. user-specified CPU attachment, as you have implemented;
| > 2. system-wide processor affinity, transparent to all users,
| >    on which I see some work below.
| >
| > In SMPng, IMHO, if we can do (2) well, a lot of the
| > performance problems can be solved.
|
| Not just keeping a given process on the same CPU... but what about a
| "process type"?  I.e., if different processes have the same ELF header,
| run them _all_ on the same CPU _unless_ that leaves another CPU
| excessively idle.
| Why waste [code] cache on multiple processors when you can keep things
| on one?

Because it is very difficult to track these things, and the performance
gain would probably be less than the overhead of comparing the headers.

| > Another problem is the wide variety of applications that we have.
| > For example, on a system with many PCI devices, (2)'s implementation
| > will be very different from a system that is intended to run
| > an Oracle database or an HTTP server.
|
| Could you please elaborate?

Different situations require completely different things. Sometimes a
router will take many interrupts for Ethernet device management, and
sometimes we have single-purpose servers that only do one thing.

| > I don't think doing per-thread affinity is a good idea, because
| > we want to keep threads lightweight.
|
| !!!

Please elaborate; I don't understand what three exclamation marks are
supposed to mean.

| > You may want to take a look at this url about processor affinity: :)
| > http://www.isi.edu/lsam/tools/autosearch/load_balancing/19970804.html
|
| So many of those links are 404. :-(
|
| > An actual empirical measurement is required in this case.
| > When can we justify the cache performance loss of switching to
| > another CPU? In addition, once this process is switched to another
| > CPU, we want to keep it there.
|
| Unless two processes are running on CPU #1, and CPU #2 becomes idle.
| Then switching a process to CPU #2 makes sense... unless the process
| getting switched is "close" to completion.

Please read my post again; I think I explained the idea that the L1
cache will be busted very quickly.

-- 
+-----------------------------------------------------------+
| keichii@iteration.net         | keichii@freebsd.org       |
| http://iteration.net/~keichii | Yes, BSD is a conspiracy. |
+-----------------------------------------------------------+

To Unsubscribe: send mail to majordomo@FreeBSD.org
with "unsubscribe freebsd-smp" in the body of the message
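[Editor's note: the thread's "type (1)" affinity, user-specified CPU attachment, later became a standard OS interface. Below is a minimal sketch of the idea using Linux's sched_setaffinity(2) as exposed by Python's os module; this is an illustration only, not the SMPng code discussed above, and FreeBSD's analogous interface is cpuset_setaffinity(2).]

```python
import os

# pid 0 means "the calling process"; bind it to CPU 0 only.
# This is the "user specified CPU attachment" case: the scheduler
# may no longer migrate the process to another CPU.
os.sched_setaffinity(0, {0})

# The kernel reports the mask back; the process is now pinned, so its
# cache working set stays warm on one CPU -- the upside debated above.
assert os.sched_getaffinity(0) == {0}

# Undo the binding: allow every CPU again. Deciding when migration is
# worth the cache loss is exactly what a transparent "type (2)" policy
# has to do automatically.
os.sched_setaffinity(0, set(range(os.cpu_count())))
```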