Date:      Mon, 16 May 2016 16:52:24 +0200
From:      Rainer Duffner <rainer@ultra-secure.de>
To:        Palle Girgensohn <girgen@FreeBSD.org>
Cc:        freebsd-fs@freebsd.org
Subject:   Re: Best practice for high availability ZFS pool
Message-ID:  <284D58D1-1C62-4519-A46B-7D0E8326B86B@ultra-secure.de>
In-Reply-To: <5E69742D-D2E0-437F-B4A9-A71508C370F9@FreeBSD.org>
References:  <5E69742D-D2E0-437F-B4A9-A71508C370F9@FreeBSD.org>



> Am 16.05.2016 um 12:08 schrieb Palle Girgensohn <girgen@FreeBSD.org>:
> 
> Hi,
> 
> We need to set up a ZFS pool with redundancy. The main goal is high availability - uptime.
> 
> I can see a few paths to follow.
> 
> 1. HAST + ZFS
> 
> 2. Some sort of shared storage, two machines sharing a JBOD box.
> 
> 3. ZFS replication (zfs snapshot + zfs send | ssh | zfs receive)
> 
> 4. using something other than ZFS, even a different OS if required.
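
For reference, option 3 above boils down to roughly the following; the pool, dataset and host names are just placeholders:

  # take a snapshot on the primary and send the full stream to the standby
  zfs snapshot tank/data@2016-05-16
  zfs send tank/data@2016-05-16 | ssh standby zfs receive -F backup/data

  # later runs send only the delta between two snapshots (incremental)
  zfs snapshot tank/data@2016-05-17
  zfs send -i tank/data@2016-05-16 tank/data@2016-05-17 | \
      ssh standby zfs receive -F backup/data

Note that this is asynchronous: on failover you lose whatever changed since the last snapshot that made it across.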



There’s always GlusterFS.
Recently ported to FreeBSD and available as net/glusterfs (FreeBSD 10.3 recommended, AFAIK).

At work, we use it on Ubuntu, but not with that much data.
On Linux, I’d use it on top of XFS.

For our Cloud-Storage, we went with ScaleIO (which is Linux only).

You need more than two nodes with Gluster for production use, though; I think my co-worker said at least four.
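
In case it helps, a minimal replica-3 setup looks roughly like this; the hostnames, brick paths and the FreeBSD package name are just assumptions on my part:

  # on every node: install the port/package (name assumed from net/glusterfs)
  pkg install glusterfs

  # from one node, add the others to the trusted pool
  gluster peer probe node2
  gluster peer probe node3

  # one brick per node, replicated three ways
  gluster volume create gv0 replica 3 \
      node1:/data/brick1 node2:/data/brick1 node3:/data/brick1
  gluster volume start gv0

  # Linux clients mount it via FUSE
  mount -t glusterfs node1:/gv0 /mnt/gv0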

If you have the money and don’t mind Linux, ScaleIO is probably the best you can buy at the moment.
While it is licensed per GByte (yeah, EMC…), it can be used free of charge, unsupported.





