From: Daniel Kalchev <daniel@digsys.bg>
Date: Sat, 19 May 2012 13:13:28 +0300
To: freebsd-fs@freebsd.org
Subject: Re: Mirror of Raidz for data reliability
Message-ID: <4FB77248.50709@digsys.bg>

On 18.05.12 19:55, Trent Nelson wrote:
> So, my thinking is… because both machines can see all disks, the
> master could import the zpool as normal, and the slave could import it
> read-only. (Or not import it at all...)

The proper way of doing it is "not import it at all". ZFS is not a
shared filesystem.

If you have the second host mount the zpool, even read-only, you only
guarantee that the data on the pool will not be corrupted; you cannot
prevent the second, "read-only" host from panicking or otherwise
crashing when it tries to access data that is no longer where it thinks
it is, because the second host has no access to the primary host's
in-memory metadata about ZFS. Since ZFS is a copy-on-write filesystem,
chances are you will be accessing data that is no longer valid.
Synchronizing the internal ZFS state between two or more hosts is
non-trivial (if it were easy, Sun would have done it, as it suits their
usage), and in any case performance would suffer at least as much as a
true networked filesystem does compared to "native" ZFS.

Daniel
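
For illustration, a minimal sketch of the safe handover sequence using
the stock zpool commands (the pool name "tank" is hypothetical):

    # on the master, during a clean handover:
    zpool export tank

    # on the slave, only after the master has exported the pool:
    zpool import tank

    # -f forces the import when the pool still looks "in use" by a
    # master that died without exporting; never use it while the
    # master might still be alive:
    zpool import -f tank

Without -f, zpool import refuses a pool that appears active on another
system, which is exactly the protection you lose if both hosts import
the pool at once.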