Date:      Fri, 11 Nov 2016 16:16:52 +0100
From:      Palle Girgensohn <girgen@FreeBSD.org>
To:        freebsd-fs@freebsd.org
Cc:        Sean Chittenden <sean@chittenden.org>, Julian Akehurst <julian@pingpong.net>
Subject:   Re: Best practice for high availability ZFS pool
Message-ID:  <A7EF341C-C698-47E2-9EDE-04840A86CD4F@FreeBSD.org>
In-Reply-To: <5127A334-0805-46B8-9CD9-FD8585CB84F3@chittenden.org>
References:  <5E69742D-D2E0-437F-B4A9-A71508C370F9@FreeBSD.org> <5DA13472-F575-4D3D-80B7-1BE371237CE5@getsomewhere.net> <8E674522-17F0-46AC-B494-F0053D87D2B0@pingpong.net> <5127A334-0805-46B8-9CD9-FD8585CB84F3@chittenden.org>

Hi,

Pinging this old thread.

We have revisited this question:

A simple, stable solution for redundant storage with little or no downtime when a machine breaks. Storage is served using NFS only.


It seems true HA is always complicated. I'd rather go for a simple, understandable solution and accept sub-minute downtime than a complicated one. For our needs, the elegant solution laid out in the FreeBSD Magazine seems a bit overly complicated.

So here's what we are pondering:

- one dual-port SAS disk box

- connect a master host machine to one port and a slave host machine to the other port

- one host is MASTER; it serves all requests

- one host is SLAVE, doing nothing but waiting for the MASTER to fail

- failover would be handled with zpool export / zpool import, or just zpool import -f if the master dies (see the failover sketch further down)

- MASTER/SLAVE election and split-brain avoidance using, for example, CARP (a configuration sketch follows)
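
To make the CARP part concrete, here is roughly the configuration I have in mind (interface name, vhid, password and addresses are only placeholders, and I have not tested this yet):

  # /etc/rc.conf on both hosts; the slave uses a higher advskew so it
  # only wins the election when the master disappears
  ifconfig_igb0="inet 10.0.0.11/24"
  ifconfig_igb0_alias0="inet vhid 1 pass s3cret advskew 100 alias 10.0.0.10/32"

  # /etc/devd.conf: have devd(8) run a failover script when the CARP
  # state for vhid 1 on igb0 changes
  notify 30 {
      match "system"    "CARP";
      match "subsystem" "1@igb0";
      match "type"      "MASTER";
      action "/usr/local/sbin/zfs-failover become-master";
  };
  notify 30 {
      match "system"    "CARP";
      match "subsystem" "1@igb0";
      match "type"      "BACKUP";
      action "/usr/local/sbin/zfs-failover become-backup";
  };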

This is not a real HA solution, since a zpool import takes about a minute. Does that hold even for a large array?
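
The failover script itself would then be little more than a forced import plus restarting the NFS exports, something along these lines (pool name and paths are placeholders; fencing and most error handling are left out):

  #!/bin/sh
  # /usr/local/sbin/zfs-failover -- called by devd(8) on CARP transitions
  POOL=tank

  case "$1" in
  become-master)
      # the old master is presumed dead, so force the import
      zpool import -f "${POOL}" || exit 1
      service mountd onerestart
      service nfsd onerestart
      ;;
  become-backup)
      # best effort; only works if the pool is still importable here
      zpool export "${POOL}" 2>/dev/null
      ;;
  *)
      echo "usage: $0 become-master | become-backup" >&2
      exit 64
      ;;
  esac

The obviously dangerous part is the forced import while the old master might still be alive, so the CARP election (or whatever does the fencing) has to be trustworthy.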

Would this suggestion work?

Are there better ideas out there?

Cheers,
Palle






> On 18 May 2016, at 09:58, Sean Chittenden <sean@chittenden.org> wrote:
> 
> https://www.freebsdfoundation.org/wp-content/uploads/2015/12/vol2_no4_groupon.pdf
> 
> mps(4) was good to us.  What's your workload?  -sc
> 
> --
> Sean Chittenden
> sean@chittenden.org
> 
> 
>> On May 18, 2016, at 03:53, Palle Girgensohn <girgen@pingpong.net> wrote:
>> 
>> 
>> 
>>> On 17 May 2016, at 18:13, Joe Love <joe@getsomewhere.net> wrote:
>>> 
>>> 
>>>> On May 16, 2016, at 5:08 AM, Palle Girgensohn <girgen@FreeBSD.org> wrote:
>>>> 
>>>> Hi,
>>>> 
>>>> We need to set up a ZFS pool with redundancy. The main goal is high availability - uptime.
>>>> 
>>>> I can see a few paths to follow.
>>>> 
>>>> 1. HAST + ZFS
>>>> 
>>>> 2. Some sort of shared storage, two machines sharing a JBOD box.
>>>> 
>>>> 3. ZFS replication (zfs snapshot + zfs send | ssh | zfs receive)
>>>> 
>>>> 4. Using something other than ZFS, even a different OS if required.
>>>> 
>>>> My main concern with HAST+ZFS is performance. Google offers some insights here, but I find mainly unsolved problems. Please share any success stories or other experiences.
>>>> 
>>>> Shared storage still has a single point of failure, the JBOD box. Apart from that, is there even any support for the kind of storage PCI cards that allow dual heads on a storage box? I cannot find any.
>>>> 
>>>> We are running with ZFS replication today, but it is just too slow for the amount of data.
>>>> 
>>>> We prefer to keep ZFS, as we already have a rather big (~30 TB) pool and our tools, scripts and backups all use ZFS, but if there is no solution using ZFS, we're open to alternatives. Nexenta springs to mind, but I believe it uses shared storage for redundancy, so it does have single points of failure?
>>>> 
>>>> Any other suggestions? Please share your experience. :)
>>>> 
>>>> Palle
>>> 
>>> I don't know if this falls into the realm of what you want, but BSDMag just released an issue with an article entitled "Adding ZFS to the FreeBSD dual-controller storage concept."
>>> https://bsdmag.org/download/reusing_openbsd/
>>> 
>>> My understanding is that in this setup the only single point of failure is the backplane that the drives connect to.  Depending on your controller cards, this could be alleviated by simply using multiple drive shelves and only using one drive per shelf in each vdev (then stripe or whatnot over your vdevs).
>>> 
>>> It might not be what you're after, as it's basically two systems with their own controllers and a shared set of drives.  Moving this from the virtual world to real physical systems will probably need some additional variations.
>>> I think the TrueNAS system (with HA) is set up similarly to this, only without the split between the drives being primarily handled by separate controllers, but someone with more in-depth knowledge would need to confirm or deny this.
>>> 
>>> -Jo
>> 
>> Hi,
>> 
>> Do you know of any specific controllers that work with dual heads?
>> 
>> Thanks,
>> Palle
>> 
>> 
>> _______________________________________________
>> freebsd-fs@freebsd.org mailing list
>> https://lists.freebsd.org/mailman/listinfo/freebsd-fs
>> To unsubscribe, send any mail to "freebsd-fs-unsubscribe@freebsd.org"
> 



