Date:      Mon, 2 Jul 2001 11:50:44 -0500
From:      "Michael C . Wu" <keichii@iteration.net>
To:        "E.B. Dreger" <eddy+public+spam@noc.everquick.net>
Cc:        Alfred Perlstein <bright@sneakerz.org>, smp@FreeBSD.ORG
Subject:   Re: per cpu runqueues, cpu affinity and cpu binding.
Message-ID:  <20010702115044.C99436@peorth.iteration.net>
In-Reply-To: <Pine.LNX.4.20.0107021628210.14203-100000@www.everquick.net>; from eddy+public+spam@noc.everquick.net on Mon, Jul 02, 2001 at 04:42:51PM +0000
References:  <20010702093638.B96996@peorth.iteration.net> <Pine.LNX.4.20.0107021628210.14203-100000@www.everquick.net>

On Mon, Jul 02, 2001 at 04:42:51PM +0000, E.B. Dreger scribbled:
| (thoughts from the sidelines)
| 
| > Date: Mon, 2 Jul 2001 09:36:38 -0500
| > From: Michael C . Wu <keichii@iteration.net>
| 
| > First of all, we have two different types of processor affinity.
| > 1. user specified CPU attachment, as you have implemented.
| > 2. system-wide transparent processor affinity, transparent
| >    to all users, which I see some work below.
| > 
| > In SMPng, IMHO, if we can do (2) well, a lot of the problems
| > in performance can be solved. 
| 
| Not just keeping a given process on the same CPU... but what about a
| "process type"?  i.e., if different processes have the same ELF header,
| run them _all_ on the CPU _unless_ it leaves another CPU excessively idle.
| 
| Why waste [code] cache on multiple processors when you can keep things on
| one?

Because it is very difficult to get heuristics like that right.  And the
performance gain would probably be less than the overhead of comparing the
headers on every scheduling decision.
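To make the overhead concrete, here is a rough sketch of what such a
header-matching policy would have to do each time it places a process.
All names here (struct proc_lite, pick_cpu, last_exec) are invented for
illustration; this is not FreeBSD scheduler code.

```c
/* Illustrative sketch, not FreeBSD code: prefer the CPU that most
 * recently ran another process with the same executable image, hoping
 * its instruction cache is still warm with that image's text. */
#include <assert.h>

#define NCPU 4

struct proc_lite {
    unsigned long exec_id;      /* e.g. vnode/inode of the ELF image */
};

/* last_exec[c] = image most recently scheduled on CPU c */
static unsigned long last_exec[NCPU];

/* Return a CPU for p: reuse a CPU whose I-cache is likely warm with
 * p's text, otherwise fall back to a round-robin hint.  Note the
 * O(NCPU) scan on every placement -- overhead paid whether or not the
 * warm-cache guess ever pays off. */
static int
pick_cpu(const struct proc_lite *p, int rr_hint)
{
    int c;

    for (c = 0; c < NCPU; c++)
        if (last_exec[c] == p->exec_id)
            return (c);
    return (rr_hint % NCPU);
}
```

Even this toy version adds a scan to the hot path of the scheduler; a
real version would also have to decide when warm-cache placement loses
to leaving another CPU idle.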

| > Another problem is the widely varied application that we have.
| > For example, on a system with many many PCI devices, (2)'s implementation
| > will be very different from a system that is intended to run
| > an Oracle database or a HTTP server.
| 
| Could you please elaborate?

Different situations require completely different things.
A router may take many interrupts for Ethernet device management,
while a single-purpose server may do only one kind of work.
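To put it another way: a transparent affinity policy would effectively
need a per-workload table like the sketch below, and there is no generic
way to fill it in.  The workload types and policy names are invented
purely for illustration.

```c
/* Illustration only: one transparent affinity policy cannot fit all of
 * these workloads; each wants a different placement strategy. */
#include <assert.h>

enum workload { WL_ROUTER, WL_DATABASE, WL_HTTPD };
enum policy   { AFF_INTERRUPT_BOUND, AFF_STICKY, AFF_SPREAD };

static enum policy
pick_policy(enum workload w)
{
    switch (w) {
    case WL_ROUTER:
        return (AFF_INTERRUPT_BOUND);   /* keep each NIC's work near its IRQs */
    case WL_DATABASE:
        return (AFF_STICKY);            /* keep processes on warm caches */
    default:
        return (AFF_SPREAD);            /* balance many short requests */
    }
}
```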

| > I don't think doing per-thread affinity is a good idea.  Because
| > we want to keep threads lightweight.
| 
| !!!

Please elaborate.  I don't understand what three exclamation marks
are supposed to mean.

| > You may want to take a look at this url about processor affinity: :)
| > http://www.isi.edu/lsam/tools/autosearch/load_balancing/19970804.html
| 
| So many of those links are 404. :-(
| 
| > An actual empirical measurement is required in this case.
| > When can we justify the cache performance loss to switch to another
| > CPU?  In addition, once this process is switched to another CPU,
| > we want to keep it there.
| 
| Unless two processes are running on CPU #1, and CPU #2 becomes idle.
| Then switching a process to CPU #2 makes sense... unless the process
| getting switched is "close" to completion.

Please read my post again; I think I explained there that the L1 cache
will be busted very quickly after a migration.
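As a back-of-envelope: refilling a busted L1 after a migration costs on
the order of (cache size / line size) * per-line miss penalty cycles.
The numbers in the sketch below are made-up assumptions for
illustration, not measurements of any particular CPU.

```c
/* Back-of-envelope estimate of the cache-refill cost a migrated
 * process pays on its new CPU.  Parameters are illustrative only. */
#include <assert.h>

/* Cycles to refill an entire cache, given its geometry and a per-line
 * miss penalty: one miss per line, miss_cycles each. */
static long
refill_cycles(long cache_bytes, long line_bytes, long miss_cycles)
{
    return ((cache_bytes / line_bytes) * miss_cycles);
}
```

For example, a hypothetical 32 KB L1 with 64-byte lines and a 50-cycle
miss penalty is 512 lines, or roughly 25600 cycles to re-warm; that is
the cost a migration has to buy back before it is worth doing.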
-- 
+-----------------------------------------------------------+
| keichii@iteration.net         | keichii@freebsd.org       |
| http://iteration.net/~keichii | Yes, BSD is a conspiracy. |
+-----------------------------------------------------------+

To Unsubscribe: send mail to majordomo@FreeBSD.org
with "unsubscribe freebsd-smp" in the body of the message



