Date: Mon, 22 Nov 2004 17:20:30 -0800
From: Justin Hopper <jhopper@bsdhosting.net>
To: Matt Olander <matt@offmyserver.com>
Cc: freebsd-cluster@freebsd.org
Subject: Re: Clustering options
Message-ID: <1101172829.15634.5.camel@work.gusalmighty.com>
In-Reply-To: <20041122163244.M31380@knight.ixsystems.net>
References: <1101168686.3370.210.camel@work.gusalmighty.com> <20041122160912.L31380@knight.ixsystems.net> <1101170559.3370.223.camel@work.gusalmighty.com> <20041122163244.M31380@knight.ixsystems.net>
On Mon, 2004-11-22 at 16:32, Matt Olander wrote:
> On Mon, Nov 22, 2004 at 04:42:39PM -0800, Justin Hopper wrote:
> > My term of "hardware clustering" might have been incorrect. I'm
> > looking more for high availability, but a large pool of resources
> > would be beneficial as well. It would be ideal to have a system where
> > you can add new blades as more resources become necessary, instead of
> > adding individual servers which each run their own OS and have their
> > own pool of resources.
>
> hey Justin,
>
> There's a ton of stuff to help you. A few that come to mind for
> failover are pound, plb, and freevrrpd. They all might do something of
> what you are looking for.
>
> For most blade setups, they typically come with some software that
> allows you to set up an image server with various server images and
> auto-deploy them to various blades. For instance, the Intel blades come
> with Veritas OpForce to control image deployment.
>
> It's mostly for restoring from a bare-metal state, though. As in, a
> blade goes down and you have a rule set up that slot3 is a webserver.
> So, you call the data-center tech and tell him to swap out the blade in
> slot3 (you turn on a red indicator light on the front so he knows which
> one) with a fresh one, preferably from the few extras that you had the
> foresight to purchase ;)
>
> The system detects that a new blade is in slot3 and deploys the
> webserver image, as per your rule. If anything goes wrong, you remote
> console in via the built-in management module. That's all in a perfect
> world, of course. In reality, Veritas doesn't officially support
> FreeBSD as an operating system for their OpForce software, but I'm
> talking to them and we'll see if it goes anywhere :P

Interesting. So most blade servers allow each node in the cluster to run
as its own system, for example as a webserver, right?
Is there no appliance that hides the details of the hardware from the
OS and instead presents a unified architecture, as if it were just one
machine, with the ability to add more nodes to expand CPU, RAM, and
disk? I guess this was my misunderstanding, as this is what I assumed
blade systems did. I assume it would be incredibly tricky for an
operating system to manage dynamically reconfigurable hardware, but I
also assumed that somebody had pulled it off. Maybe not?

--
Justin Hopper <jhopper@bsdhosting.net>
UNIX Systems Engineer
BSDHosting.net
Hosting Division of Digital Oasys Inc.
http://www.bsdhosting.net
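[Of the failover tools mentioned in the thread, freevrrpd implements VRRP: two hosts share a virtual IP address, the higher-priority box answers for it, and when its advertisements stop the backup takes the address over. A minimal sketch of what a freevrrpd.conf might look like on the master follows; the interface name, addresses, password, and script paths are invented for illustration, and exact key names may vary by version, so check the sample config shipped with the port.]

```
# /usr/local/etc/freevrrpd.conf (master side, illustrative values only)
[VRID]
serverid = 1                 # VRRP virtual router ID, must match on both hosts
interface = fxp0             # NIC carrying the shared address
priority = 255               # highest priority = preferred master
addr = 192.168.1.10/32       # the virtual IP that fails over
password = examplepass       # shared auth string, same on master and backup
masterscript = /usr/local/bin/become_master.sh  # hypothetical hook script
backupscript = /usr/local/bin/become_backup.sh  # hypothetical hook script
```

[The backup host would carry the same serverid, addr, and password but a lower priority (e.g. 100), so it only claims the address when the master goes silent.]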