Date:      Thu, 27 Jun 2002 15:26:16 -0700
From:      Brooks Davis <brooks@one-eyed-alien.net>
To:        Terry Lambert <tlambert2@mindspring.com>
Cc:        Jonathan Lemon <jlemon@flugsvamp.com>, Julian Elischer <julian@elischer.org>, "Greg 'groggy' Lehey" <grog@FreeBSD.ORG>, arch@FreeBSD.ORG
Subject:   Re: Larry McVoy's slides on cache coherent clusters
Message-ID:  <20020627152616.A3450@Odin.AC.HMC.Edu>
In-Reply-To: <3D1B834E.70573706@mindspring.com>; from tlambert2@mindspring.com on Thu, Jun 27, 2002 at 02:27:42PM -0700
References:  <Pine.BSF.4.21.0206271044050.69706-100000@InterJet.elischer.org> <3D1B7391.38F10284@mindspring.com> <20020627152602.A1020@prism.flugsvamp.com> <3D1B834E.70573706@mindspring.com>

On Thu, Jun 27, 2002 at 02:27:42PM -0700, Terry Lambert wrote:
> I think Larry persuasively demonstrates that there is a hierarchy
> in communications channels vs. CPU speed that is not accounted for
> in most OS design.  My scale ("Lambert's Interconnection Scale"?  8-))
> would be:
>
> ----	----	----------	-----------------------------------
> CPUS	DIES	SEPARATION	NAME
> ----	----	----------	-----------------------------------
> 1	1	0		Processing (8-))
> N	1	0		SMT
> N	M	1		SMP
> N	M	2		NUMA
> N	M	3		Distributed (full information)
> N	M	4		Distributed (partial information)
> N	M	5		Distributed (partial functionality)
> ----	----	----------	-----------------------------------

Where would you place single-die, multiple-core devices like the MIPS
R9000 or the dual-core devices from IBM?  Today I suspect they are
pretty similar to SMP, but as SMP systems get faster clocks, distance
on the motherboard scale might add enough latency to matter.
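
To put rough numbers on it (these are figures I'm assuming purely for
illustration: ~15cm/ns signal propagation, which is roughly c/2 on a
PCB trace, and a 30cm cross-board path), a quick sketch:

#include <stdio.h>

int
main(void)
{
        /* All of these numbers are assumptions for illustration only. */
        double prop_cm_per_ns = 15.0;   /* ~c/2, rough PCB trace speed */
        double trace_cm = 30.0;         /* assumed cross-board distance */
        double clocks_ghz[] = { 0.5, 1.0, 3.0, 10.0 };
        int i;

        for (i = 0; i < 4; i++) {
                double ns = trace_cm / prop_cm_per_ns;
                double cycles = ns * clocks_ghz[i];
                printf("%4.1f GHz clock: %.1f ns one way = %4.1f cycles\n",
                    clocks_ghz[i], ns, cycles);
        }
        return (0);
}

With those (made up) numbers a cross-board hop costs about one cycle
at 500MHz but around 20 cycles at 10GHz, before you even count
arbitration or the memory controller.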

> The 65,536 processor machine that Goodyear built for modelling
> laminar airflow on the full shuttle airframe was purpose-built
> hardware with a separation of 2.  So were most of the Connection
> Machine series from Thinking Machines, Inc.

For things you can actually buy, anything over 2 CPUs from SGI falls
into this category (and many of the dual-CPU systems are actually
unconnected dual nodes from larger systems).

IIRC ASCI Red (the first teraflop supercomputer) actually runs on
something like the CC model.  It's made of dual-CPU PII systems
(actually, it started with PPros and was upgraded with those weird
PPro form-factor PII Xeons) but acts something like a single system
image.  It's a bit more complicated than that since the service
portion runs an OSF/1 derivative in a sort of single-system-image
mode, but most nodes run a lightweight dedicated OS.

The system is connected in a sort of 2.5D mesh (the .5 is from
dual-node boards) with a custom interconnect running at something
like 400MB/sec.  The funny thing is that Intel meant the hardware to
run MS Wolfpack NT clusters.  Apparently that project died when
housekeeping messaging saturated the bus with only 13 nodes active
(each rack had room for 32 boards).  In many ways I think this system
is a predecessor to the blade server concept everyone is trying to
convince us is so revolutionary.
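
The blow-up isn't surprising if the housekeeping traffic was anything
like all-to-all heartbeats over a shared medium: the message count
grows as n*(n-1).  A toy sketch (message size, interval, and bus
bandwidth are numbers I made up purely to show the shape of the
curve, not what Wolfpack actually did):

#include <stdio.h>

int
main(void)
{
        /* Assumed figures, purely to illustrate the n*(n-1) growth. */
        double msg_bytes = 4096.0;      /* per heartbeat/state message */
        double interval_s = 0.1;        /* how often each pair talks */
        double bus_bytes_s = 10e6;      /* shared medium capacity */
        int n;

        for (n = 2; n <= 32; n += 2) {
                double load = n * (n - 1) * msg_bytes / interval_s;
                printf("%2d nodes: %6.1f%% of the bus\n",
                    n, 100.0 * load / bus_bytes_s);
        }
        return (0);
}

On a switched mesh that load is spread across many links instead of
one shared bus, which I assume is part of why the custom interconnect
holds up where the NT cluster plan didn't.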

-- Brooks

-- 
Any statement of the form "X is the one, true Y" is FALSE.
PGP fingerprint 655D 519C 26A7 82E7 2529  9BF0 5D8E 8BE9 F238 1AD4





