Date:      Wed, 24 Mar 2010 17:12:50 +0000
From:      Michal <michal@ionic.co.uk>
To:        freebsd-stable@freebsd.org
Subject:   Re: Multi node storage, ZFS
Message-ID:  <4BAA4812.8070307@ionic.co.uk>
In-Reply-To: <b269bc571003240920r3c06a67ci1057921899c36637@mail.gmail.com>
References:  <4BAA3409.6080406@ionic.co.uk> <b269bc571003240920r3c06a67ci1057921899c36637@mail.gmail.com>

On 24/03/2010 16:20, Freddie Cash wrote:
> Horribly, horribly, horribly complex.  But, then, that's the Linux world.
> :)

Yes, I know it's not very clean, but I was trying to gather ideas and
that is what I found.

> 
> Server 1:  bunch of disks exported via iSCSI
> Server 2:  bunch of disks exported via iSCSI
> Server 3:  bunch of disks exported via iSCSI
> 
> "SAN" box:  uses all those iSCSI exports to create a ZFS pool
> 
> Use 1 iSCSI export from each server to create a raidz vdev.  Or multiple
> mirror vdevs.  When you need more storage, just add another server full of
> disks, export them via iSCSI to the "SAN" box, and expand the ZFS pool.
> 
> And, if you need fail-over, on your "SAN" box, you can use HAST at the lower
> layers (currently only available in 9-CURRENT) to mirror the storage across
> two systems, and use CARP to provide a single IP for the two boxes.
> 
> ---------------------------------------------------------------------

This is pretty much what I have been looking for. I don't mind using a
SAN controller server to deal with all of this; in fact I expected
that. I wanted to present the disks from a server full of HDDs (which
in effect is just a storage device) and then join them up. I've briefly
looked over raidz and will give it a proper read later. I'm thinking
six disks in each server, with two raidz vdevs created from three disks
in each server; I can then serve them to the network. I've never used
iSCSI on FreeBSD, however. I've played with AoE on other *nixes, so I
will give iSCSI a good looking over.
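As a rough sketch of the layout Freddie describes (the device names are
hypothetical, and the exact commands depend on which iSCSI initiator
attaches the exports), the pool on the head node might be built like
this:

```sh
# Hypothetical: da0-da2 exported from server 1, da3-da5 from server 2,
# da6-da8 from server 3, already attached over iSCSI on the head node.
# Build each raidz vdev from one disk per server, so losing a whole
# server costs only one disk per vdev:
zpool create tank \
    raidz da0 da3 da6 \
    raidz da1 da4 da7 \
    raidz da2 da5 da8

# Verify the vdev layout:
zpool status tank
```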

> Yes, you save space, but your throughput will be horribly horribly horribly
> low.  RAID arrays should be narrow (1-9 disks), not wide (30+ disks), and
> then combined into a larger array (multiple small RAID6 arrays joined into a
> RAID0 stripe).

Oh yes, I agree. I did some very crude calculations and the difference
in space was quite large, but no, I would never do that in reality.
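To put rough numbers on that trade-off (purely illustrative, assuming
18 equal 1 TB disks and ignoring ZFS overhead): one wide 18-disk raidz2
gives more usable space than three narrow 6-disk raidz2 vdevs, and that
extra space is exactly what you give up for the better throughput:

```sh
# raidz2 spends 2 disks per vdev on parity.
wide=$((18 - 2))              # one wide 18-disk raidz2
narrow=$((3 * (6 - 2)))       # three narrow 6-disk raidz2 vdevs
echo "wide: ${wide} TB usable, narrow: ${narrow} TB usable"
```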

> If you were to do something like this, I'd make sure to have a fast
> local ZIL (log) device on the head node.  That would reduce latency
> for writes, you might also do the same for reads.  Then your bulk
> storage comes from the iSCSI boxes.
>
> Just a thought.

I've not come across the ZIL before, so I think I will have to do my research.
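For what it's worth, adding a separate log (ZIL) device, plus an L2ARC
cache device for the read side, looks roughly like this (device names
are hypothetical; both would be fast local disks or SSDs on the head
node):

```sh
# Hypothetical fast local devices on the head node:
zpool add tank log ada1      # separate intent log: cuts sync-write latency
zpool add tank cache ada2    # L2ARC read cache
```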


> At least in theory you could use geom_gate and zfs I suppose, never
> tried it though.
> ggatec(8), ggated(8) are your friends for that.
>
> Vince

I've just had a look at ggatec; I'd not seen or heard of it before, so
I will keep reading up on it.
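From the man pages, the geom_gate route would look roughly like this
(the addresses are made up and this is an untested sketch, but the
exports file and command syntax follow ggated(8)/ggatec(8)):

```sh
# On the storage server: allow the head node (192.168.0.10) to use
# da0 read-write, then start the daemon.
echo "192.168.0.10 RW /dev/da0" >> /etc/gg.exports
ggated

# On the head node: attach the remote disk. It shows up as /dev/ggate0,
# which can then be used as an ordinary pool member.
ggatec create -o rw 192.168.0.11 /dev/da0
```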


Many thanks to all. If I get something solid working, I will be sure to
update the list with what will hopefully be a very cheap (other than
the HDDs) SAN.
