Date:      Tue, 17 May 2016 13:00:43 -0500
From:      Linda Kateley <lkateley@kateley.com>
To:        freebsd-fs@freebsd.org
Subject:   Re: Best practice for high availability ZFS pool
Message-ID:  <573B5C4B.80406@kateley.com>
In-Reply-To: <5DA13472-F575-4D3D-80B7-1BE371237CE5@getsomewhere.net>
References:  <5E69742D-D2E0-437F-B4A9-A71508C370F9@FreeBSD.org> <5DA13472-F575-4D3D-80B7-1BE371237CE5@getsomewhere.net>



On 5/17/16 11:13 AM, Joe Love wrote:
>> On May 16, 2016, at 5:08 AM, Palle Girgensohn <girgen@FreeBSD.org> wrote:
>>
>> Hi,
>>
>> We need to set up a ZFS pool with redundancy. The main goal is high availability - uptime.
>>
>> I can see a few paths to follow.
>>
>> 1. HAST + ZFS
>>
>> 2. Some sort of shared storage, two machines sharing a JBOD box.
>>
>> 3. ZFS replication (zfs snapshot + zfs send | ssh | zfs receive)
>>
>> 4. using something other than ZFS, even a different OS if required.
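For context, option 3 in its simplest form is just a snapshot piped over ssh. A rough sketch (pool, dataset and host names are invented for illustration):

    # take a point-in-time snapshot on the primary
    zfs snapshot tank/data@2016-05-17
    # ship it to the standby and roll the target dataset forward to it
    zfs send tank/data@2016-05-17 | ssh standby zfs receive -F tank/data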
>>
>> My main concern with HAST+ZFS is performance. Google offers some insights here, but I find mainly unsolved problems. Please share any success stories or other experiences.
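On the HAST side, the usual arrangement is to have hastd mirror a local disk to the peer over the network and build the pool on the /dev/hast device, so the performance question largely comes down to how quickly writes can be acknowledged by the secondary. A minimal sketch, assuming two nodes and a single resource (host names, addresses and disk names are invented):

    # /etc/hast.conf, identical on both nodes
    resource disk0 {
            on nodeA {
                    local /dev/da1
                    remote 10.0.0.2
            }
            on nodeB {
                    local /dev/da1
                    remote 10.0.0.1
            }
    }

    # on both nodes
    hastctl create disk0
    service hastd onestart
    # on whichever node should be active
    hastctl role primary disk0
    zpool create tank /dev/hast/disk0

On failover the surviving node switches the resource to primary (hastctl role primary) and imports the pool.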
>>
>> Shared storage still has a single point of failure, the JBOD box. Apart from that, is there even support for storage PCI cards that allow dual-head access to a storage box? I cannot find any.
>>
>> We are running with ZFS replication today, but it is just too slow for the amount of data.
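If the bottleneck is shipping full streams, incremental sends on top of the pipeline sketched above may be worth a look, since only blocks changed since the previous snapshot cross the wire (snapshot names are invented):

    # @monday already exists on both sides from the previous cycle
    zfs snapshot tank/data@tuesday
    zfs send -i tank/data@monday tank/data@tuesday | ssh standby zfs receive tank/data

Even then it is asynchronous replication, so the standby is only as current as the last snapshot that made it across.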
>>
>> We prefer to keep ZFS, as we already have a rather big (~30 TB) pool and our tools, scripts, and backups all use ZFS, but if there is no solution using ZFS, we're open to alternatives. Nexenta springs to mind, but I believe it uses shared storage for redundancy, so it does have single points of failure?
>>
>> Any other suggestions? Please share your experience. :)
For true high availability there is an application, RSF-1, that provides full HA. I am not sure of the exact failover times, but the last time I 
talked to them it was very low. They also run higher up in the ZFS stack.
>>
>> Palle
>>
> I don’t know if this falls into the realm of what you want, but BSDMag just released an issue with an article entitled “Adding ZFS to the FreeBSD dual-controller storage concept.”
> https://bsdmag.org/download/reusing_openbsd/
>
> My understanding of this setup is that the only single point of failure for this model is the backplanes that the drives connect to.
Most of the JBODs you can buy can also be fitted with dual backplanes.
>   Depending on your controller cards, this could be alleviated by simply using multiple drive shelves, and only using one drive/shelf as part of a vdev (then stripe or whatnot over your vdevs).
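To make that concrete, a hypothetical layout with three shelves, where each raidz vdev takes one disk from each shelf, so losing a whole shelf degrades every vdev but takes none of them down (device names are invented):

    # da0-da7 in shelf 1, da8-da15 in shelf 2, da16-da23 in shelf 3
    zpool create tank \
        raidz da0 da8 da16 \
        raidz da1 da9 da17 \
        raidz da2 da10 da18

ZFS then stripes across the raidz vdevs automatically.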
>
> It might not be what you’re after, as it’s basically two systems with their own controllers, with a shared set of drives.  Expanding the concept from the virtual world to real physical systems will probably need some additional variations.
> I think the TrueNAS system (with HA) is set up similarly to this, only without the split between the drives being primarily handled by separate controllers, but someone with more in-depth knowledge would need to confirm/deny this.
>
> -Joe
>
> _______________________________________________
> freebsd-fs@freebsd.org mailing list
> https://lists.freebsd.org/mailman/listinfo/freebsd-fs
> To unsubscribe, send any mail to "freebsd-fs-unsubscribe@freebsd.org"



