From owner-freebsd-fs@freebsd.org Fri Nov 11 15:17:00 2016
Subject: Re: Best practice for high availability ZFS pool
From: Palle Girgensohn <girgen@FreeBSD.org>
Date: Fri, 11 Nov 2016 16:16:52 +0100
To: freebsd-fs@freebsd.org
Cc: Sean Chittenden, Julian Akehurst

Hi,

Pinging this old thread. We have revisited this question: a simple,
stable solution for redundant storage with little or no downtime when
a machine breaks. Storage is served using NFS only.

It seems true HA is always complicated. I'd rather go for a simple,
understandable solution and accept sub-minute downtime than a
complicated setup. For our needs, the elegant solution laid out in the
FreeBSD Magazine seems a bit overly complicated.

So here is what we are pondering:

- one SAS dual-port disk box
- connect a master host machine to one port and a slave host machine
  to the other port
- one host is MASTER, it serves all requests
- one host is SLAVE, doing nothing but waiting for the MASTER to fail
- failover would be handled with zpool export / zpool import, or just
  zpool import -F if the master dies (see the sketch below)
- MASTER/SLAVE election, and avoiding split brain, using for example
  CARP

This is not a real HA solution, since zpool import takes about a
minute. Is this true for a large array?

Would this suggestion work? Are there better ideas out there?
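Roughly, I imagine the failover on the node that just became CARP
MASTER could be driven by a devd(8) hook along these lines. This is
only a rough, untested sketch; the pool name "tank", the script path
and the NFS restart steps are assumptions for illustration:

    # /etc/devd/carp-failover.conf -- run a script when this node
    # becomes CARP master for any vhid on any interface
    notify 30 {
        match "system" "CARP";
        match "subsystem" "[0-9]+@[0-9a-z]+";
        match "type" "MASTER";
        action "/usr/local/sbin/zfs-failover.sh";
    };

    #!/bin/sh
    # /usr/local/sbin/zfs-failover.sh -- invoked by devd on promotion
    POOL=tank
    # -f is needed because a dead master never got to "zpool export";
    # -F additionally attempts recovery if the import is refused.
    zpool import -f "$POOL" || exit 1
    # re-export the imported filesystems over NFS on the new master
    service mountd restart
    service nfsd restart

For a planned switchover the old master would run "zpool export tank"
first, so the import on the new master is clean.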
Cheers,
Palle

> On 18 May 2016, at 09:58, Sean Chittenden wrote:
>
> https://www.freebsdfoundation.org/wp-content/uploads/2015/12/vol2_no4_groupon.pdf
>
> mps(4) was good to us. What's your workload? -sc
>
> --
> Sean Chittenden
> sean@chittenden.org
>
>
>> On May 18, 2016, at 03:53, Palle Girgensohn wrote:
>>
>>
>>> On 17 May 2016, at 18:13, Joe Love wrote:
>>>
>>>
>>>> On May 16, 2016, at 5:08 AM, Palle Girgensohn wrote:
>>>>
>>>> Hi,
>>>>
>>>> We need to set up a ZFS pool with redundancy. The main goal is
>>>> high availability - uptime.
>>>>
>>>> I can see a few paths to follow.
>>>>
>>>> 1. HAST + ZFS
>>>>
>>>> 2. Some sort of shared storage, two machines sharing a JBOD box.
>>>>
>>>> 3. ZFS replication (zfs snapshot + zfs send | ssh | zfs receive)
>>>>
>>>> 4. Using something other than ZFS, even a different OS if
>>>> required.
>>>>
>>>> My main concern with HAST+ZFS is performance. Google offers some
>>>> insights here; I find mainly unsolved problems. Please share any
>>>> success stories or other experiences.
>>>>
>>>> Shared storage still has a single point of failure, the JBOD box.
>>>> Apart from that, is there even any support for the kind of storage
>>>> PCI cards that support dual head for a storage box? I cannot find
>>>> any.
>>>>
>>>> We are running with ZFS replication today, but it is just too slow
>>>> for the amount of data.
>>>>
>>>> We would prefer to keep ZFS, as we already have a rather big
>>>> (~30 TB) pool and our tools, scripts and backups all use ZFS, but
>>>> if there is no solution using ZFS, we're open to alternatives.
>>>> Nexenta springs to mind, but I believe it uses shared storage for
>>>> redundancy, so it does have single points of failure?
>>>>
>>>> Any other suggestions? Please share your experience. :)
>>>>
>>>> Palle
>>>
>>> I don't know if this falls into the realm of what you want, but
>>> BSDMag just released an issue with an article entitled "Adding ZFS
>>> to the FreeBSD dual-controller storage concept."
>>> https://bsdmag.org/download/reusing_openbsd/
>>>
>>> My understanding is that the only single point of failure in this
>>> model is the backplanes that the drives connect to. Depending on
>>> your controller cards, this could be alleviated by simply using
>>> multiple drive shelves and only using one drive per shelf in each
>>> vdev (then stripe or whatnot over your vdevs).
>>>
>>> It might not be what you're after, as it's basically two systems
>>> with their own controllers and a shared set of drives. Expanding
>>> from the virtual world to real physical systems will probably need
>>> additional variations.
>>> I think the TrueNAS system (with HA) is set up similarly to this,
>>> only without the split between the drives being primarily handled
>>> by separate controllers, but someone with more in-depth knowledge
>>> would need to confirm/deny this.
>>>
>>> -Jo
>>
>> Hi,
>>
>> Do you know of any specific controllers that work with dual head?
>>
>> Thanks,
>> Palle
>>
>>
>> _______________________________________________
>> freebsd-fs@freebsd.org mailing list
>> https://lists.freebsd.org/mailman/listinfo/freebsd-fs
>> To unsubscribe, send any mail to "freebsd-fs-unsubscribe@freebsd.org"
>