Subject: Re: Best practice for high availability ZFS pool
From: Joe Love
To: Palle Girgensohn
Cc: freebsd-fs@freebsd.org
Date: Tue, 17 May 2016 11:13:18 -0500
Message-Id: <5DA13472-F575-4D3D-80B7-1BE371237CE5@getsomewhere.net>
In-Reply-To: <5E69742D-D2E0-437F-B4A9-A71508C370F9@FreeBSD.org>
References: <5E69742D-D2E0-437F-B4A9-A71508C370F9@FreeBSD.org>

> On May 16, 2016, at 5:08 AM, Palle Girgensohn wrote:
>
> Hi,
>
> We need to set up a ZFS pool with redundancy. The main goal is high availability - uptime.
>
> I can see a few paths to follow.
>
> 1. HAST + ZFS
>
> 2. Some sort of shared storage, two machines sharing a JBOD box.
>
> 3. ZFS replication (zfs snapshot + zfs send | ssh | zfs receive)
>
> 4. Using something other than ZFS, even a different OS if required.
>
> My main concern with HAST+ZFS is performance. Google offers some insights here, but I find mainly unsolved problems. Please share any success stories or other experiences.
>
> Shared storage still has a single point of failure, the JBOD box. Apart from that, is there even any support for the kind of storage PCI cards that support dual head for a storage box? I cannot find any.
>
> We are running with ZFS replication today, but it is just too slow for the amount of data.
>
> We would prefer to keep ZFS, as we already have a rather big (~30 TB) pool, and our tools, scripts, and backups all use ZFS. But if there is no solution using ZFS, we're open to alternatives. Nexenta springs to mind, but I believe it uses shared storage for redundancy, so it still has single points of failure?
>
> Any other suggestions? Please share your experience. :)
>
> Palle
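For reference, the replication in option 3 above usually boils down to a periodic snapshot-and-send pipeline along these lines (just a rough sketch; pool, dataset, and host names are placeholders):

  # initial full copy to the standby machine
  zfs snapshot -r tank/data@base
  zfs send -R tank/data@base | ssh standby zfs receive -F backup/data

  # thereafter, periodic incremental updates from the last common snapshot
  zfs snapshot -r tank/data@2016-05-17
  zfs send -R -i @base tank/data@2016-05-17 | ssh standby zfs receive -F backup/data

The catch, as you're seeing, is that the incremental sends have to keep up with the pool's rate of change.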
I don't know if this falls into the realm of what you want, but BSDMag just released an issue with an article entitled "Adding ZFS to the FreeBSD dual-controller storage concept."
https://bsdmag.org/download/reusing_openbsd/

My understanding is that the only single point of failure in that setup is the backplanes that the drives connect to. Depending on your controller cards, this could be alleviated by simply using multiple drive shelves and using only one drive per shelf in any given vdev, then striping (or whatnot) over your vdevs; there is a rough sketch of such a layout at the bottom of this mail.

It might not be what you're after, as it's basically two systems, each with its own controllers, sharing a set of drives. Expanding it from the virtual world to real physical systems will probably require some additional variations.

I think the TrueNAS HA systems are set up similarly to this, only without the drives being split so that each set is primarily handled by a separate controller, but someone with more in-depth knowledge would need to confirm or deny that.

-Joe
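P.S. For illustration only, the one-drive-per-shelf-per-vdev layout mentioned above might look roughly like this, assuming two four-drive shelves that show up as da0-da3 and da4-da7 (all device names are made up):

  # each mirror pairs one disk from shelf A (da0-da3) with one from shelf B (da4-da7),
  # so losing a whole shelf or backplane still leaves every vdev with a working member
  zpool create tank \
      mirror da0 da4 \
      mirror da1 da5 \
      mirror da2 da6 \
      mirror da3 da7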