From owner-freebsd-fs@freebsd.org Wed May 18 08:02:13 2016
Subject: Re: Best practice for high availability ZFS pool
From: InterNetX - Juergen Gotteswinter <jg@internetx.com>
Reply-To: jg@internetx.com
To: Joe Love
Cc: freebsd-fs@freebsd.org
Date: Wed, 18 May 2016 10:02:00 +0200
Message-ID: <361f80cb-c7e2-18f6-ad62-f6f91aa7c293@internetx.com>
In-Reply-To: <8E674522-17F0-46AC-B494-F0053D87D2B0@pingpong.net>
References: <5E69742D-D2E0-437F-B4A9-A71508C370F9@FreeBSD.org>
 <5DA13472-F575-4D3D-80B7-1BE371237CE5@getsomewhere.net>
 <8E674522-17F0-46AC-B494-F0053D87D2B0@pingpong.net>

On 5/18/2016 at 9:53 AM, Palle Girgensohn wrote:
>
>
>> On 17 May 2016, at 18:13, Joe Love wrote:
>>
>>
>>> On May 16, 2016, at 5:08 AM, Palle Girgensohn wrote:
>>>
>>> Hi,
>>>
>>> We need to set up a ZFS pool with redundancy. The main goal is high
>>> availability - uptime.
>>>
>>> I can see a few paths to follow.
>>>
>>> 1. HAST + ZFS
>>>
>>> 2. Some sort of shared storage, two machines sharing a JBOD box.
>>>
>>> 3. ZFS replication (zfs snapshot + zfs send | ssh | zfs receive)
>>>
>>> 4. Using something other than ZFS, even a different OS if required.
>>>
>>> My main concern with HAST+ZFS is performance. Googling offers some
>>> insights here, but I find mainly unsolved problems. Please share any
>>> success stories or other experiences.
>>>
>>> Shared storage still has a single point of failure, the JBOD box.
>>> Apart from that, is there even any support for the kind of storage
>>> PCI cards that support dual head for a storage box? I cannot find
>>> any.
>>>
>>> We are running with ZFS replication today, but it is just too slow
>>> for the amount of data.
>>>
>>> We would prefer to keep ZFS, as we already have a rather big (~30 TB)
>>> pool, and our tools, scripts and backups all use ZFS; but if there is
>>> no solution using ZFS, we're open to alternatives.
>>> Nexenta springs to mind, but I believe it uses shared storage for
>>> redundancy, so it still has single points of failure?
>>>
>>> Any other suggestions? Please share your experience. :)
>>>
>>> Palle
>>
>> I don't know if this falls into the realm of what you want, but BSDMag
>> just released an issue with an article entitled "Adding ZFS to the
>> FreeBSD dual-controller storage concept."
>> https://bsdmag.org/download/reusing_openbsd/
>>
>> My understanding of this setup is that the only single point of
>> failure in this model is the backplanes that the drives connect to.
>> Depending on your controller cards, this could be alleviated by simply
>> using multiple drive shelves and putting only one drive from each
>> shelf into any given vdev (then stripe or whatnot over your vdevs).
>>
>> It might not be what you're after, as it's basically two systems with
>> their own controllers and a shared set of drives. Moving from the
>> virtual world to real physical systems will probably require some
>> additional variations.
>> I think the TrueNAS system (with HA) is set up similarly to this, only
>> without the drives being split up and handled primarily by separate
>> controllers, but someone with more in-depth knowledge would need to
>> confirm or deny this.
>>
>> -Jo
>
> Hi,
>
> Do you know of any specific controllers that work with dual head?
>
> Thanks,
> Palle

Go for an LSI SAS2008-based HBA. Rough sketches of that setup and of the
HAST and zfs send/receive options follow below.
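
To make the shared-SAS option concrete, here is a minimal sketch of what
the two heads look like and how a failover runs. The pool name "tank" and
all device names are placeholders, and any real deployment needs some form
of fencing (CARP, a heartbeat, or strict manual discipline), because
importing the pool on both heads at once will destroy it:

    # On either head: confirm the HBA is SAS2008-based and attached to
    # the mps(4) driver (exact device strings vary by vendor/firmware).
    pciconf -lv | grep -B 3 SAS2008
    dmesg | grep mps

    # Both heads should see the same JBOD disks.
    camcontrol devlist

    # Normal operation: the pool is imported on head A only.
    # Planned failover:
    #   on head A:
    zpool export tank
    #   on head B:
    zpool import tank

    # Unplanned failover (head A is dead, pool was never exported):
    #   on head B:
    zpool import -f tank

Dual-ported SAS disks plus one such HBA per head is usually how both heads
get a path to the same drives.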
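
For comparison, option 1 (HAST + ZFS) roughly follows the FreeBSD handbook
pattern below. Node names, the backing disk and the resource name are
placeholders, and only a single resource is shown; a real pool would be
built from several /dev/hast/* devices (mirror or raidz):

    # /etc/hast.conf, identical on both nodes ("nodea"/"nodeb" must match
    # each machine's hostname):
    resource shared0 {
            on nodea {
                    local /dev/da1
                    remote nodeb
            }
            on nodeb {
                    local /dev/da1
                    remote nodea
            }
    }

    # On both nodes:
    hastctl create shared0
    sysrc hastd_enable=YES
    service hastd start

    # On the active node:
    hastctl role primary shared0
    zpool create tank /dev/hast/shared0
    # On the standby node:
    hastctl role secondary shared0

    # Failover (after the old primary is down or demoted to secondary):
    hastctl role primary shared0
    zpool import -f tank

The performance concern mentioned above comes largely from the fact that,
in the default memsync replication mode, every write involves a round trip
to the secondary, so the replication link tends to dictate write latency.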
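
And for completeness, the shape of option 3, which is what you already run
today: periodic incremental zfs send piped through ssh. Dataset, host and
snapshot names are placeholders, and snapshot rotation and error handling
are left out:

    #!/bin/sh
    # Minimal incremental replication sketch (placeholders throughout).
    DATASET=tank/data
    REMOTE=standby.example.com
    PREV=repl-prev          # snapshot already present on the receiver
    CUR=repl-cur            # snapshot taken for this run

    # Take a new recursive snapshot of the dataset tree.
    zfs snapshot -r ${DATASET}@${CUR}

    # Send only the changes since ${PREV}; -R carries child datasets and
    # properties, and receive -F rolls the receiver back to its last
    # common snapshot before applying the stream.
    zfs send -R -i ${DATASET}@${PREV} ${DATASET}@${CUR} | \
        ssh ${REMOTE} zfs receive -F ${DATASET}

How far behind the standby lags is set purely by how often this runs; the
transfer itself only moves the blocks that changed between the two
snapshots.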