From: Adam Nowacki <nowakpl@platinum.linux.pl>
Date: Sun, 13 May 2012 19:46:52 +0200
To: freebsd-fs@freebsd.org
Message-ID: <4FAFF38C.3060002@platinum.linux.pl>
Subject: Re: Mirror of Raidz for data reliability
List-Id: Filesystems <freebsd-fs@freebsd.org>

Wouldn't this accomplish the same?

zpool create tank raidz da0 da1 da2 raidz da3 da4 da5
zfs set copies=2 tank

On 2012-05-13 19:35, Marcelo Araujo wrote:
> Hi All,
>
> A co-worker and I are working on a new feature for ZFS. We have two
> machines and two JBODs; each machine is connected to both JBODs via
> SAS, and we are trying to build a fail-over server.
> Currently each machine has two SAS cables, one connected to each
> JBOD.
>
> We spent last week figuring out how to keep the data alive if one
> JBOD dies. Let me show you my console output ;):
>
> controllerA# zpool status -v araujo
>   pool: araujo
>  state: ONLINE
>   scan: resilvered 57K in 0h0m with 0 errors on Sat May 12 14:32:29 2012
> config:
>
>         NAME          STATE     READ WRITE CKSUM
>         araujo        ONLINE       0     0     0
>           raidz1-0    ONLINE       0     0     0
>             mirror-0  ONLINE       0     0     0
>               da0     ONLINE       0     0     0
>               da3     ONLINE       0     0     0
>             mirror-1  ONLINE       0     0     0
>               da1     ONLINE       0     0     0
>               da4     ONLINE       0     0     0
>             mirror-2  ONLINE       0     0     0
>               da2     ONLINE       0     0     0
>               da5     ONLINE       0     0     0
>
> What I have is a pool called "araujo" created as a raidz of 3 disks;
> I can then attach a new disk to each disk of the raidz to form a
> mirror. With this layout, if one of my JBODs fails, my raidz stays
> alive, and I can scale out with more JBODs to make sure my data is
> always available.
>
> The solution above is already possible by commenting out a few lines
> of code, but our plan is to bring something new, like:
>
> root# zpool create tank raidzm da0 da1 da2 da3 da4 da5
>
> where da0 da1 da2 would be a raidz and da3 da4 da5 would be mirrors
> of da0 da1 da2. In this case, if da0, da1 or da2 fails, we still have
> the mirror and the raidz keeps working.
>
> I'm wondering if there is a more elegant solution for this case. HAST
> could be an option, but I don't want to use Ethernet to sync the
> JBODs, and in the setup above it is faster to sync the drives more or
> less locally.
>
> Best Regards,
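For what it's worth, the two layouts under discussion come out identical on paper for usable space; the difference is in failure semantics. A back-of-the-envelope sketch (plain Python arithmetic; the 2 TB disk size is an assumption for illustration, and ZFS metadata overhead is ignored):

```python
# Back-of-the-envelope capacity comparison of the two six-disk layouts
# discussed above. DISK_TB is an assumed disk size; ZFS metadata and
# padding overhead are ignored.

DISK_TB = 2.0  # assumed size of each of da0..da5

# Layout 1: two 3-disk raidz1 vdevs plus "zfs set copies=2".
# Each raidz1 vdev yields 2 data disks of space; copies=2 then stores
# every block twice, halving usable capacity.
copies2_usable_tb = 2 * (3 - 1) * DISK_TB / 2

# Layout 2: Marcelo's raidz1 built over three 2-way mirrors.
# Each mirror yields one disk of space; raidz1 over the three mirrors
# keeps 2 of those 3.
mirror_usable_tb = (3 - 1) * DISK_TB

print(f"raidz + copies=2 : {copies2_usable_tb:.1f} TB usable")
print(f"raidz of mirrors : {mirror_usable_tb:.1f} TB usable")

# Capacity is identical, but losing a whole JBOD removes exactly one
# disk from every mirror, so the raidz-of-mirrors pool survives it by
# construction.  copies=2 only asks ZFS to place the duplicate blocks
# apart when possible; it is not a guarantee against losing an entire
# vdev.
```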