From: Justin Hopper <jhopper@bsdhosting.net>
To: Matt Olander
Cc: freebsd-cluster@freebsd.org
In-Reply-To: <20041122163244.M31380@knight.ixsystems.net>
Date: Mon, 22 Nov 2004 17:20:30 -0800
Subject: Re: Clustering options

On Mon, 2004-11-22 at 16:32, Matt Olander wrote:
> On Mon, Nov 22, 2004 at 04:42:39PM -0800, Justin Hopper wrote:
> > My term of "hardware clustering" might have been incorrect. I'm looking
> > more for high availability, but a large pool of resources would be
> > beneficial as well. It would be ideal to have a system where you can
> > add new blades as more resources become necessary, instead of adding
> > individual servers which each run their own OS and have their own pool
> > of resources.
>
> hey Justin,
>
> There's a ton of stuff to help you. A few that come to mind for
> failover are pound, plb, and freevrrpd. They all might do something of
> what you are looking for.
>
> For most blade setups, they typically come with some software that
> allows you to set up an image server with various server images and
> auto-deploy them to various blades. For instance, the Intel blades come
> with Veritas OpForce to control image deployment.
>
> It's mostly for restoring from a bare-metal state, though. As in, a blade
> goes down and you have a rule set up that slot3 is a webserver. So, you
> call the data-center tech and tell him to swap out the blade in slot3
> (in which case you turn on a red indicator light on the front so he knows
> which one) with a fresh one, preferably from the few extra that you had
> the foresight to purchase ;)
>
> The system detects that a new blade is in slot3 and deploys the
> webserver image, as per your rule. If anything goes wrong, you remote
> console in via the built-in management module. That's all in a perfect
> world, of course. In reality, Veritas doesn't officially support FreeBSD
> as an operating system for their OpForce software, but I'm talking to
> them and we'll see if it goes anywhere :P

Interesting. So most blade servers allow each node in the cluster to run as its own system, for example as a webserver, right?
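
Just so I follow the failover side: a minimal pound.cfg along the lines you mention would be something like the untested sketch below, right? The addresses are made-up placeholders; the idea is just that pound answers on the front address and quits sending requests to a back end that stops responding.

    ListenHTTP
            Address 192.0.2.10          # public address the proxy answers on (placeholder)
            Port    80

            Service
                    BackEnd
                            Address 10.0.0.11   # first web server (placeholder)
                            Port    80
                    End

                    BackEnd
                            Address 10.0.0.12   # second web server (placeholder)
                            Port    80
                    End
            End
    End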
Is there no appliance that hides the details of the hardware from the OS and instead presents a unified architecture, as if it were just one machine, but with the ability to add more nodes to expand CPU, RAM, and disk? I guess this was my misunderstanding, as this is what I assumed the blade systems did. I imagine it would be incredibly tricky for the operating system to manage dynamically configurable hardware like that, but I also assumed somebody had pulled it off. Maybe not?

-- 
Justin Hopper
UNIX Systems Engineer
BSDHosting.net
Hosting Division of Digital Oasys Inc.
http://www.bsdhosting.net