Date: Wed, 24 Mar 2010 23:19:16 +0100
From: Ivan Voras <ivoras@freebsd.org>
To: freebsd-stable@freebsd.org
Subject: Re: Multi node storage, ZFS
Message-ID: <hoe355$tuk$1@dough.gmane.org>
In-Reply-To: <b269bc571003240920r3c06a67ci1057921899c36637@mail.gmail.com>
References: <4BAA3409.6080406@ionic.co.uk> <b269bc571003240920r3c06a67ci1057921899c36637@mail.gmail.com>
Freddie Cash wrote:
> On Wed, Mar 24, 2010 at 8:47 AM, Michal <michal@ionic.co.uk> wrote:
>
>> I wrote a really long e-mail but realised I could ask this question far
>> more easily; if it doesn't make sense, the original e-mail is below.
>>
>> Can I use ZFS to create a multi-node storage area: multiple HDDs in
>> multiple servers combined into one target, for example //officestorage,
>> allowing me to expand the storage space when needed while clients can
>> still retrieve data (like RAID0, but across devices rather than HDDs)?
>>
>> Here is an example I found which is where I'm getting some ideas from:
>> http://www.howtoforge.com/how-to-build-a-low-cost-san-p3
>
> Horribly, horribly, horribly complex. But, then, that's the Linux world. :)
>
> Server 1: bunch of disks exported via iSCSI
> Server 2: bunch of disks exported via iSCSI
> Server 3: bunch of disks exported via iSCSI
>
> "SAN" box: uses all those iSCSI exports to create a ZFS pool

For what it's worth - I think this is a good idea! iSCSI and ZFS make it
extraordinarily flexible to do this. You can have a RAIS - a redundant
array of inexpensive servers :)

For example: each server box hosts 8-12 drives. Use a hardware controller
with RAID6 and a BBU to create a single volume (if FreeBSD booting issues
allow, but that can be worked around). Export this volume via iSCSI, and
repeat for the rest of the servers. Then, on the client, create a RAIDZ -
or, if you trust your setup that much, a straight striped ZFS volume. If
you do it the RAIDZ way, one of your storage servers can fail completely
and the pool stays available.

As you need more space, add more servers in batches of three (if you did
RAIDZ; otherwise the number doesn't matter) and add them to the client as
usual.

The "client" in this case can be a file server, and you can achieve
failover between several of those by using e.g. CARP, heartbeat, etc. - if
the master node fails, some other one can reconstitute the ZFS pool and
make it available. But you need very fast links between the nodes, and I
wouldn't use something like this without extensively testing the failure
modes.
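To make the client side concrete, here is a minimal sketch of the ZFS
commands for the setup described above. It assumes each storage server's
RAID6 volume shows up on the client as an ordinary da(4) disk once the
iSCSI initiator has logged in; the device names da1-da6 and the pool name
"officestorage" are made up for illustration:

    # The three iSCSI-backed volumes appear on the client as da1-da3.
    # Create a raidz pool across them - the pool survives the complete
    # loss of any one storage server.
    zpool create officestorage raidz da1 da2 da3

    # Later, grow the pool by adding another batch of three servers as a
    # second raidz vdev (with plain striping, any number would do).
    zpool add officestorage raidz da4 da5 da6

    # Verify that all vdevs are online.
    zpool status officestorage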
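And a rough sketch of the failover step mentioned at the end: when CARP (or
whatever heartbeat mechanism you use) promotes a standby file server, that
node logs in to the same iSCSI targets and forcibly imports the pool - the
-f is needed because the failed master never got to export it cleanly.
Fencing the old master and the initiator configuration itself are left out
of this illustration:

    # On the standby node, after it has taken over the shared address and
    # logged in to the same iSCSI targets:
    zpool import -f officestorage

    # Datasets are now mounted here and can be re-exported via NFS/Samba.
    zfs list -r officestorage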