Date:      Wed, 24 Mar 2010 23:45:25 +0000
From:      Michal <michal@ionic.co.uk>
To:        freebsd-stable@freebsd.org
Subject:   Re: Multi node storage, ZFS
Message-ID:  <4BAAA415.1000804@ionic.co.uk>
In-Reply-To: <hoe355$tuk$1@dough.gmane.org>
References:  <4BAA3409.6080406@ionic.co.uk>	<b269bc571003240920r3c06a67ci1057921899c36637@mail.gmail.com> <hoe355$tuk$1@dough.gmane.org>

On 24/03/2010 22:19, Ivan Voras wrote:

> 
> For what it's worth - I think this is a good idea! iSCSI and ZFS make it
> extraordinarily flexible to do this. You can have a RAIS - redundant
> array of inexpensive servers :)
> 
> For example: each server box hosts 8-12 drives - use a hardware
> controller with RAID6 and a BBU to create a single volume (if FreeBSD
> booting issues allow, but that can be worked around). Export this volume
> via iSCSI. Repeat for the rest of the servers. Then, on the client,
> create a RAIDZ, or, if you trust your setup that much, a straight
> striped ZFS volume. If you do it the RAIDZ way, one of your storage
> servers can fail completely.
> 
> As you need more space, add more servers in batches of three (if you did
> RAIDZ, else the number doesn't matter), add them to the client as usual.
> 
> The "client" in this case can be a file server, and you can achieve
> failover between several of those by using e.g. carp, heartbeat, etc. -
> if the master node fails, some other one can reconstitute the ZFS pool
> and make it available.
> 
> But, you need very fast links between the nodes, and I wouldn't use
> something like this without extensively testing the failure modes.
> 

I do as well :D The thing is, I see it two ways: I worked for a huge
online betting company, where we had the money for HP MSAs and big,
expensive SANs; then there are a lot of SMBs with nowhere near that
budget, but with the same problem of lots of data and the need for
backend storage for databases. It's all well and good having one ZFS
server, but it's fragile in the sense of having no redundancy. You
could run one ZFS server and a second with DRBD, but that's a waste of
money: think 12 TB, and you need to pay for another 12 TB box just for
redundancy, and you are still looking at a single server. I am
thinking of a cheap solution, but one that has I/O throughput and
redundancy, and is easy to manage and expand across multiple nodes.
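As a concrete sketch of the multi-node idea (all device names hypothetical; on FreeBSD of this era each remote iSCSI volume attached via the initiator would show up as an ordinary da device on the head node):

```shell
# Hypothetical sketch: three storage servers each export one big iSCSI
# volume, which appear on the head node as da1, da2 and da3 (made-up names).
# Build a RAIDZ across the three remote volumes; any one whole server
# can then fail without losing the pool:
zpool create tank raidz da1 da2 da3

# Growing later means attaching three more exported volumes and adding
# another RAIDZ vdev to the same pool:
zpool add tank raidz da4 da5 da6
```

Each RAIDZ vdev here spans servers rather than disks, which is what turns a pile of cheap boxes into the "RAIS" Ivan describes.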

A "NAS"-based solution, i.e. a single NAS device exposing individual
targets (//nas1, //nas2, etc.), is OK but has many problems. A
"SAN"-based solution can overcome these; it does add cost, but the
amount can be minimised. I'll work on it over the next few days and
get some notes typed up, as well as run some performance numbers. I'll
try to do it modularly: adding more RAM, sorting out the ZIL and
cache, and comparing how they affect performance.
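On the tuning side, the ZIL and read-cache comparison could be done by moving them onto dedicated devices and re-running the same benchmark each time. A minimal sketch, assuming da7 and da8 are spare local (ideally SSD) devices on the head node:

```shell
# Hypothetical device names: da7/da8 are local disks set aside for tuning.
zpool add tank log da7     # dedicated ZIL (slog) for synchronous writes
zpool add tank cache da8   # L2ARC read cache to back up the in-RAM ARC
zpool status tank          # confirm the log and cache vdevs are attached
```

Benchmarking with and without each of these, and with different RAM sizes, would give the modular numbers mentioned above.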


