Date: Mon, 16 May 2016 20:43:49 -0500 (CDT)
From: Bob Friesenhahn <bfriesen@simple.dallas.tx.us>
To: Palle Girgensohn
Cc: freebsd-fs@freebsd.org
Subject: Re: Best practice for high availability ZFS pool

On Mon, 16 May 2016, Palle Girgensohn wrote:

> Shared storage still has a single point of failure, the JBOD box.
> Apart from that, is there even any support for the kind of storage
> PCI cards that support dual head for a storage box? I cannot find
> any.
Use two (or three) JBOD boxes and do simple zfs mirroring across them so you can unplug a JBOD and the pool still works. Or use a larger number of JBOD boxes and use zfs raidz2 (or raidz3) across them, with careful LUN selection so that no vdev depends on more than one disk per enclosure; then there is total storage redundancy and you can unplug a JBOD and the pool still works. Fibre Channel (or FCoE) or iSCSI allows putting the hardware at some distance.

Without completely isolated systems there is always the risk of total failure. Even with zfs send there is the risk of total failure, if the sent data results in corruption on the receiving side.

Decide whether you really want to optimize for maximum availability, or whether you want to minimize the duration of the outage if something goes wrong. There is a difference.

Bob
-- 
Bob Friesenhahn
bfriesen@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer,    http://www.GraphicsMagick.org/
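[A sketch of the mirrored-JBOD layout described above, for the archive. Device names (da0-da7) and the pool name "tank" are hypothetical; substitute whatever your enclosures actually expose. Each mirror vdev pairs one disk from each JBOD, so pulling an entire enclosure leaves every vdev with one working side.]

```shell
# Hypothetical layout: da0-da3 live in JBOD A, da4-da7 in JBOD B.
# Each mirror vdev takes one disk from each enclosure, so the pool
# survives the loss of a whole JBOD.
zpool create tank \
    mirror da0 da4 \
    mirror da1 da5 \
    mirror da2 da6 \
    mirror da3 da7

# Verify the vdev layout and redundancy state.
zpool status tank
```

The raidz2 variant is the same idea spread wider: with a raidz2 vdev built from one LUN per enclosure across six or more JBODs, any two enclosures can go away and the vdev still has enough members to reconstruct.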