Date: Wed, 24 Jun 2009 17:16:23 -0600
From: Elliot Finley <efinleywork@efinley.com>
To: Freddie Cash
Cc: freebsd-fs@freebsd.org, freebsd-cluster@freebsd.org
Subject: Re: Fail-over SAN setup: ZFS, NFS, and ...?
Message-ID: <4A42B3C7.9000500@efinley.com>

Why not take a look at gluster?

Freddie Cash wrote:
> [Not exactly sure which ML this belongs on, as it's related to both
> clustering and filesystems. If there's a better spot, let me know and
> I'll update the CC:/reply-to.]
>
> We're in the planning stages of building a multi-site, fail-over SAN
> that will provide redundant storage for a virtual machine setup. The
> layout will be like so:
>
>  [Server Room 1]      .       [Server Room 2]
> -----------------     .      ------------------
>                       .
>  [storage server]     .       [storage server]
>         |             .              |
>         |             .              |
>  [storage switch]     .       [storage switch]
>         \-----------fibre-----------/|
>                       .              |
>                       .              |
>                       .    [storage aggregator]
>                       .              |
>                       .              |
>                       .      /---[switch]---\
>                       .      |       |      |
>                       .      |   [VM box]   |
>                       .      |       |      |
>                       .  [VM box]    |      |
>                       .      |       |  [VM box]
>                       .      |       |      |
>                       .     [network switch]
>                       .              |
>                       .              |
>                       .         [internet]
>
> Server Room 1 and Server Room 2 are on opposite ends of town (about
> 3 km apart), joined by a dedicated, direct fibre link. There will be
> a set of VM boxes at each site that use the shared storage and act as
> fail-over for each other. In theory, only one server room would ever
> be active at a time, although we may end up migrating VMs between the
> two sites for maintenance purposes.
>
> We've got the storage server side of things figured out: 5U rackmounts
> with 24 drive bays, running FreeBSD 7.x and ZFS. We've got the storage
> switches picked out (HP ProCurve 2800 or 2900, depending on whether we
> go with 1 GbE or 10 GbE fibre links between them). We're stuck on the
> storage aggregator.
>
> For a single-aggregator setup, we'd use FreeBSD 7.x with ZFS. The
> storage servers would each export a single zvol using iSCSI. The
> storage aggregator would use ZFS to create a pool from those exports
> using a mirrored vdev. To expand the pool, we put in two more storage
> servers and add another mirrored vdev to the pool. No biggie. The
> storage aggregator then uses NFS and/or iSCSI to make storage
> available to the VM boxes. This is the easy part.
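To make that layout concrete, here is a minimal sketch of the commands
involved. The pool and device names (tank, aggr, da2-da5) and the 2T
size are illustrative only, and the iSCSI target daemon on the storage
servers (e.g. net/istgt or net/iscsi-target from ports) is configured
separately:

    # On each storage server: carve a zvol out of the local pool for
    # export over iSCSI (target daemon configured separately)
    zfs create -V 2T tank/export0

    # On the aggregator: the imported iSCSI LUNs appear as da(4)
    # disks; mirror one disk from each server room
    zpool create aggr mirror da2 da3

    # Expansion: two more storage servers, one more mirrored vdev
    zpool add aggr mirror da4 da5

    # Re-export to the VM hosts over NFS
    zfs create aggr/vmstore
    zfs set sharenfs=on aggr/vmstore
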
> However, we'd like to remove the single point of failure that the
> storage aggregator represents and run a duplicate of it at Server
> Room 1. Right now we can do this with cold spares that rsync from the
> live box every X hours/days, but we'd like a live, fail-over spare.
> And this is where we're stuck.
>
> What can we use to do this? CARP? Heartbeat? ggate? Should we look at
> Linux with DRBD, Linux-HA, cluster NFS, or similar? Perhaps Red Hat
> Cluster Suite? (We'd prefer not to, as storage management then becomes
> a nightmare again, requiring mdadm, LVM, and more.) Would a cluster
> filesystem be needed? AFS or similar?
>
> We have next to no experience with high-availability and fail-over
> clustering. Any pointers to things to read online, or tips, or even
> "don't do that, you're insane" comments are greatly appreciated. :)
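
On the CARP question: carp(4) on FreeBSD 7.x can float a shared service
address between the two aggregators, but it only moves the IP; something
else (ggate, rsync, or application-level replication) still has to keep
the backup's pool current, and the backup must import the pool before it
can serve anything. A minimal rc.conf sketch, assuming 10.0.0.10/24 is a
hypothetical shared address on the storage network (carp(4) attaches to
whichever physical interface carries that subnet, and on 7.x requires
"device carp" in the kernel):

    # Master aggregator: lowest advskew wins the CARP election
    cloned_interfaces="carp0"
    ifconfig_carp0="vhid 1 advskew 0 pass s3cret 10.0.0.10/24"

    # Backup aggregator: same vhid and password, higher advskew
    cloned_interfaces="carp0"
    ifconfig_carp0="vhid 1 advskew 100 pass s3cret 10.0.0.10/24"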