From owner-freebsd-cluster@FreeBSD.ORG Tue Nov 23 22:45:41 2004
Delivered-To: freebsd-cluster@freebsd.org
Received: from mx1.FreeBSD.org (mx1.freebsd.org [216.136.204.125])
	by hub.freebsd.org (Postfix) with ESMTP id 5557D16A4CE;
	Tue, 23 Nov 2004 22:45:41 +0000 (GMT)
Received: from bsdhosting.net (bsdhosting.net [65.39.221.113])
	by mx1.FreeBSD.org (Postfix) with SMTP id 8801A43D5D;
	Tue, 23 Nov 2004 22:45:40 +0000 (GMT)
	(envelope-from jhopper@bsdhosting.net)
Received: (qmail 64682 invoked from network); 23 Nov 2004 22:43:48 -0000
From: Justin Hopper <jhopper@bsdhosting.net>
To: freebsd-cluster@freebsd.org
In-Reply-To: <200411231316.53793.alex@taskforce-1.com>
References: <1101168686.3370.210.camel@work.gusalmighty.com>
	<20041122163244.M31380@knight.ixsystems.net>
	<1101172829.15634.5.camel@work.gusalmighty.com>
	<200411231316.53793.alex@taskforce-1.com>
Content-Type: text/plain
Message-Id: <1101249938.15634.87.camel@work.gusalmighty.com>
Mime-Version: 1.0
X-Mailer: Ximian Evolution 1.4.6
Date: Tue, 23 Nov 2004 14:45:39 -0800
Content-Transfer-Encoding: 7bit
Subject: Re: Clustering options
X-BeenThere: freebsd-cluster@freebsd.org
X-Mailman-Version: 2.1.1
Precedence: list
List-Id: Clustering FreeBSD
X-List-Received-Date: Tue, 23 Nov 2004 22:45:41 -0000

On Tue, 2004-11-23 at 13:16, Alex Pavlovic wrote:
> On November 22, 2004 05:20 pm, Justin Hopper wrote:
> > Is there no appliance that allows for the details of the hardware to
> > be hidden from the OS and instead presents to the OS a unified
> > architecture, like it's just one machine, but with the ability to add
> > more nodes to expand CPU, RAM, and disk?  I guess this was my
> > misunderstanding, as this is what I assumed the blade systems did.
> > I assume it would be incredibly tricky to manage dynamically
> > configurable hardware in the operating system, but I also assumed
> > that somebody had pulled it off, but maybe not?
>
> There is.  It's called SSI, or single system image.  Basically it
> provides you with a single root, init, and process space.  Currently I
> know of two open source products that implement this (OpenSSI and
> openMosix).  Unfortunately they are both targeted toward Linux.
> openMosix seems to be geared toward computational aspects (HPC), while
> the OpenSSI project is trying to unify various cluster factions and
> provide a "one size fits all" solution.
>
> There are some other papers on FreeBSD clusters that people have
> designed; my favourite is the one on a very nice general computing
> cluster published by Brooks Davis (Aerospace Corporation), look here:
> http://people.freebsd.org/~brooks/papers/bsdcon2003/fbsdcluster.pdf
> There is also some information on grid computing available here:
> http://people.freebsd.org/~brooks/pubs/usebsd2004/fbsdgrids.pdf
>
> Just something on the side: Manex Visual Effects actually used a
> 32-node FreeBSD cluster as the core rendering farm to make some of the
> special effects for the "Matrix" movie.  You can read the story, if you
> haven't already, here:
> http://www.freebsd.org/news/press-rel-1.html

Thank you, this was much more of what I was looking for.  Very
interesting articles, especially the one on the Aerospace Corporation.
At least I have the answer I was looking for: the blade systems are not
at all what I thought they were.

I wonder what the Aerospace Corporation is doing now with its cluster?
The paper mentioned that they might look into the Opteron processor and
amd64 / FreeBSD 5.x.  It would be interesting to know what they were
able to do with that technology and how their efforts at committing
some of their research back to FreeBSD are going.
-- 
Justin Hopper
UNIX Systems Engineer
BSDHosting.net
Hosting Division of Digital Oasys Inc.
http://www.bsdhosting.net