From: Marcelo Araujo <araujobsdport@gmail.com>
To: freebsd-fs@freebsd.org
Date: Mon, 14 May 2012 01:35:51 +0800
Reply-To: araujo@FreeBSD.org
Subject: Mirror of Raidz for data reliability

Hi All,

A co-worker and I are working on a new feature for ZFS. We have two machines and two JBODs; each machine is connected to both JBODs via SAS, and we are trying to build a fail-over server. Currently each machine has two SAS cables, one going to each JBOD.

Last week we worked out how to keep the data alive if one JBOD dies. Let me show you my console output ;):

controllerA# zpool status -v araujo
  pool: araujo
 state: ONLINE
  scan: resilvered 57K in 0h0m with 0 errors on Sat May 12 14:32:29 2012
config:

        NAME          STATE     READ WRITE CKSUM
        araujo        ONLINE       0     0     0
          raidz1-0    ONLINE       0     0     0
            mirror-0  ONLINE       0     0     0
              da0     ONLINE       0     0     0
              da3     ONLINE       0     0     0
            mirror-1  ONLINE       0     0     0
              da1     ONLINE       0     0     0
              da4     ONLINE       0     0     0
            mirror-2  ONLINE       0     0     0
              da2     ONLINE       0     0     0
              da5     ONLINE       0     0     0

What I have is a pool called "araujo" created as a raidz of 3 disks; I can then attach a new disk to every disk that is part of the raidz to turn it into a mirror. With this setup, if one of my JBODs fails, my raidz stays alive, and I can also scale out with more JBODs to make sure the data is always available.

The layout above is already possible today if you comment out a few lines of code, but our plan is to bring something new, like:

root# zpool create tank raidzm da0 da1 da2 da3 da4 da5

where da0 da1 da2 would form the raidz and da3 da4 da5 would be mirrors of da0 da1 da2. In this case, if da0, da1 or da2 fails, we still have the mirror and the raidz keeps working.
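For reference, a rough sketch of the manual way to build the layout shown in the zpool status output above (device names only as an example; note that a stock zpool normally refuses to attach a disk to a child of a raidz vdev, which is exactly the check we comment out):

controllerA# zpool create araujo raidz1 da0 da1 da2
controllerA# zpool attach araujo da0 da3    # mirror da0 with da3 (needs the patched check)
controllerA# zpool attach araujo da1 da4    # mirror da1 with da4
controllerA# zpool attach araujo da2 da5    # mirror da2 with da5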
I'm wondering if there is any other elegant solution for this case. HAST could be an option, but I don't want to use Ethernet to sync the JBODs, and in the case above it is faster to resync any hard drive more or less locally.

Best Regards,

--
Marcelo Araujo
araujo@FreeBSD.org