Date: Wed, 1 Oct 2014 12:03:22 +0100
From: krad <kraduk@gmail.com>
To: mexas@bristol.ac.uk
Cc: freebsd-hackers@freebsd.org
Subject: Re: cluster FS?
Message-ID: <CALfReydSkufU84UftsQoJd9RrkTj0FEzSDOQVcQJ_c7HBg9Jbw@mail.gmail.com>
In-Reply-To: <201E3A2E-B33D-4C63-AD81-8FFD5C2E0ED7@mail.turbofuzz.com>
References: <201410010902.s9192Lhb084232@mech-as221.men.bris.ac.uk> <201E3A2E-B33D-4C63-AD81-8FFD5C2E0ED7@mail.turbofuzz.com>
These are my definitions; hopefully they make some stuff a little clearer.

Cluster file system: a file system that resides on a block device to which multiple machines have rw access, but where consistency is guaranteed. A good real-world example of this is a VMware ESX datastore, i.e. a LUN is presented to all the ESXi hosts in the cluster, all of which can access it simultaneously. The key thing here is the guarantee of consistency.

Distributed file system: a network file system that is created out of multiple nodes working together to provide a fault-tolerant service. Examples of this are Lustre, GlusterFS, MooseFS, pNFS and OpenAFS. One of the key things to understand here is that these file systems generally sit on top of the normal OS file systems, and each node has its own discrete storage. All replication is done via the network.

Looking at your setup, if you want to provide a fault-tolerant setup with your existing SAN, there are two main paths I can think of. I am making the assumption that the SAN is fault tolerant to your requirements.

Option one:

1. Create a set of LUNs and present them to your file server nodes.
2. On one node, create the file systems of your choice (probably ZFS).
3. Set up CARP in a master/slave configuration with a VIP, and import/export functions for the file systems.
4. Export your file systems via NFS/CIFS.

If you are dead set on using FreeBSD for this, it will be more tricky, as a lot of the work will have to be done by yourself. The main thing is making sure you don't have the file systems mounted on both nodes at once in a split-brain scenario. If you can use other OSes, something like Sun Cluster/Veritas Cluster/Red Hat Cluster can do all of this for you. The advantage of this architecture is that if you go for one of the commercial solutions you will have support, and there are plenty of people out there with experience in this.

Option two: use a distributed file system.

Basically, here you would create two sets of LUNs and present one set to each node, and only that node.
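For the CARP piece of option one, here is a minimal sketch of what the two FreeBSD nodes might carry. The interface name (em0), VHID, password, VIP address and pool name (tank) are all placeholder assumptions on my part, not values from your setup:

```shell
# /etc/rc.conf fragment (sketch; adjust interface, vhid, pass and address)
# On the master:
#   ifconfig_em0_alias0="inet vhid 1 advskew 0 pass s3cret alias 192.0.2.10/32"
# On the slave (higher advskew = lower priority):
#   ifconfig_em0_alias0="inet vhid 1 advskew 100 pass s3cret alias 192.0.2.10/32"

# Failover hook, e.g. invoked from devd on a CARP state transition.
# It imports or exports the ZFS pool so that only the active node ever has
# the file systems mounted -- the split-brain guard mentioned above --
# and starts or stops the NFS service accordingly.
case "$1" in
MASTER)
    zpool import -f tank && service nfsd onestart
    ;;
BACKUP)
    zpool export tank && service nfsd onestop
    ;;
esac
```

A devd.conf entry matching CARP events would call this script with the new state; the commercial cluster suites mentioned above bundle exactly this kind of fencing logic for you, which is why the DIY FreeBSD route is the tricky one.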
Format and mount the LUNs to your preferences on each system. Install and configure the distributed file system of your choice and use the newly mounted file systems on each node as your datastores. You should probably look at MooseFS and GlusterFS first, and then maybe OpenAFS, if you are going to use FreeBSD as the host system; if you went for Linux you would have a bigger choice at present.

On the two nodes of the distributed file system you would then want a relatively simple CARP setup to float a VIP between the boxes. All clients would use this VIP for their connection point. Also make sure the distributed FS is mounted back onto each node as a normal mount point; this allows you to re-export it via CIFS and NFS.

Finally, the clients. They have three basic ways of connecting to the VIP, which should cover most eventualities:

1. Native distributed FS client.
2. NFS.
3. CIFS.

The advantage of this over option one is that it scales very well, depending on your distributed FS of choice. It also means you can easily break away from your SAN over time if you want to: all you need to do is add more nodes not on the SAN, replicate the storage to them, then drop out the SAN nodes.

I hope this helps a little.

On 1 October 2014 10:38, Jordan Hubbard <jkh@mail.turbofuzz.com> wrote:
>
>> On Oct 1, 2014, at 12:02 PM, Anton Shterenlikht <mexas@bris.ac.uk> wrote:
>>
>> So are you saying that the SAN model
>> is not good for active/active failover
>> with multiple nodes?
>
> Correct.  SAN is active/passive.
>
> For more information on high availability solutions, I suggest you check
> out the big file server vendors - there's far more pertinent information in
> their various whitepapers than you'll ever get on freebsd-hackers.
> :)
>
> - Jordan
>
> _______________________________________________
> freebsd-hackers@freebsd.org mailing list
> http://lists.freebsd.org/mailman/listinfo/freebsd-hackers
> To unsubscribe, send any mail to "freebsd-hackers-unsubscribe@freebsd.org"