From: Dan Naumov <dan.naumov@gmail.com>
To: "Patrick M. Hausen"
Cc: FreeBSD Stable Mailing List <freebsd-stable@freebsd.org>
Date: Mon, 29 Jun 2009 13:15:12 +0300
Subject: Re: Zpool on raw disk and weird GEOM complaint
On Mon, Jun 29, 2009 at 12:43 PM, Patrick M. Hausen wrote:
> Hi, all,
>
> I have a system with 12 S-ATA disks attached that I set up
> as a raidz2:
>
> %zpool status zfs
>   pool: zfs
>  state: ONLINE
>  scrub: scrub in progress for 0h5m, 7.56% done, 1h3m to go
> config:
>
>         NAME        STATE     READ WRITE CKSUM
>         zfs         ONLINE       0     0     0
>           raidz2    ONLINE       0     0     0
>             da0     ONLINE       0     0     0
>             da1     ONLINE       0     0     0
>             da2     ONLINE       0     0     0
>             da3     ONLINE       0     0     0
>             da4     ONLINE       0     0     0
>             da5     ONLINE       0     0     0
>             da6     ONLINE       0     0     0
>             da7     ONLINE       0     0     0
>             da8     ONLINE       0     0     0
>             da9     ONLINE       0     0     0
>             da10    ONLINE       0     0     0
>             da11    ONLINE       0     0     0
>
> errors: No known data errors

I can't address your issue at hand, but I would point out that having
a raidz/raidz2 group spanning more than 9 disks is a BAD IDEA (tm).
Sun's documentation consistently recommends keeping each raidz group
between 3 and 9 devices. There are known cases where exceeding that
size causes performance degradation and, more importantly, parity
computation problems that can lead to crashes and potential data
loss. In your case, I would have built the pool as 2 x 6-disk raidz
groups instead (see the sketch below my signature).

Sincerely,

- Dan Naumov